Sample records for method validation results

  1. Task Validation for the AN/TPQ-36 Radar System

    DTIC Science & Technology

    1978-09-01

    This report presents the method and results of a study to validate personnel task descriptions for the new AN/TPQ-36 radar system. The body of the report covers the method, results, conclusions, and recommendations of the validation study. The appendixes contain the following: 1. Appendix A contains

  2. Method validation for chemical composition determination by electron microprobe with wavelength dispersive spectrometer

    NASA Astrophysics Data System (ADS)

    Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.

    2016-07-01

    The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One of the advantages of analytical method validation is that it provides a stated level of confidence in the measurement results reported for a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis is used across an extremely wide range of areas, mainly in the field of materials science and in impurity determinations in geological, biological and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house method validation for elemental composition determination by WDS are shown. SRM 482, a Cu-Au binary alloy of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently required for accreditation under the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed, including systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.
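
    The abstract's uncertainty model combines systematic and random errors. A minimal GUM-style sketch of such a combination (Python; the component names and values are hypothetical, not taken from the paper):

    ```python
    import math

    # Hypothetical uncertainty budget for a WDS mass-fraction result (GUM-style).
    # Component values are illustrative, not from the paper.
    u_repeatability = 0.15   # random: std. dev. of replicate measurements (wt%)
    u_reference     = 0.10   # systematic: SRM 482 certified-value uncertainty (wt%)
    u_calibration   = 0.08   # systematic: calibration-curve contribution (wt%)

    # Combined standard uncertainty: root-sum-of-squares of independent components
    u_c = math.sqrt(u_repeatability**2 + u_reference**2 + u_calibration**2)

    # Expanded uncertainty with coverage factor k = 2 (~95 % coverage)
    U = 2 * u_c
    print(f"u_c = {u_c:.3f} wt%, U (k=2) = {U:.3f} wt%")
    ```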

  3. Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…
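
    A minimal sketch of the double cross-validation idea (Python, synthetic data): the sample is split into two halves, a regression is fitted on each half, each equation predicts the opposite half, and the cross-validity correlations estimate how much the equation shrinks on new data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 200, 4
    X = rng.normal(size=(n, p))
    y = X @ np.array([0.5, -0.3, 0.2, 0.0]) + rng.normal(scale=1.0, size=n)

    perm = rng.permutation(n)
    A, B = perm[: n // 2], perm[n // 2:]

    def fit_ols(X, y):
        # least-squares coefficients with intercept
        Xd = np.column_stack([np.ones(len(X)), X])
        beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        return beta

    def predict(beta, X):
        return np.column_stack([np.ones(len(X)), X]) @ beta

    # Fit on each half, predict the *other* half, and correlate
    b_A, b_B = fit_ols(X[A], y[A]), fit_ols(X[B], y[B])
    r_cross_B = np.corrcoef(predict(b_A, X[B]), y[B])[0, 1]
    r_cross_A = np.corrcoef(predict(b_B, X[A]), y[A])[0, 1]
    print(f"cross-validity correlations: {r_cross_A:.3f}, {r_cross_B:.3f}")
    ```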

  4. Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.

    PubMed

    Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy

    2010-02-01

    This paper illustrates the advantages that a fuzzy-based aggregation method can bring to the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for the validation of chemical, biochemical, pharmaceutical and genetic methods have been developed, and ad hoc validation statistics are available and routinely used for in-house and inter-laboratory testing and decision-making. Fuzzy logic allows the information obtained from independent validation statistics to be summarised into one synthetic indicator of overall method performance. The microarray technology, introduced for simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper and applied to validation data for different genetically modified elements, and conclusions are drawn from the analytical results. The fuzzy-logic based rules were shown to improve the interpretation of results and to facilitate the overall evaluation of the multiplex method.
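
    A minimal sketch of the fuzzy aggregation idea (Python; the statistics, membership breakpoints and weights are hypothetical): each validation statistic is mapped to a [0, 1] "favorable" membership and the memberships are combined into one synthetic indicator:

    ```python
    def membership(value, good, bad):
        """Piecewise-linear 'favorable' membership: 1 at/beyond `good`,
        0 at/beyond `bad`, linear in between (works in either direction)."""
        if good == bad:
            return 1.0 if value == good else 0.0
        t = (value - bad) / (good - bad)
        return max(0.0, min(1.0, t))

    # Hypothetical per-statistic results for one GM element/concentration level:
    # (observed value, fully favorable value, fully unfavorable value)
    stats = {
        "sensitivity": (0.97, 1.00, 0.90),
        "specificity": (0.92, 1.00, 0.80),
        "false_pos":   (0.05, 0.00, 0.20),
    }
    weights = {"sensitivity": 0.4, "specificity": 0.4, "false_pos": 0.2}

    # Aggregate independent statistics into one synthetic performance indicator
    indicator = sum(weights[k] * membership(v, g, b)
                    for k, (v, g, b) in stats.items())
    print(f"overall performance indicator: {indicator:.2f}  (1 = fully favorable)")
    ```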

  5. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

    The topic of validating structural optimization methods by use of experimental results is addressed. The need to validate the methods, as a way of effecting greater and accelerated acceptance of formal optimization methods by practicing engineering designers, is described. The range of validation strategies is defined; it includes comparison of optimization results with more traditional design approaches, establishing the accuracy of the analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described, including experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low-vibration helicopter rotor.

  6. How to test validity in orthodontic research: a mixed dentition analysis example.

    PubMed

    Donatelli, Richard E; Lee, Shin-Jae

    2015-02-01

    The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
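
    A minimal sketch of the leave-one-out cross-validation favored in this study, for a regression-based prediction (Python; the data are a synthetic stand-in for real mixed dentition measurements):

    ```python
    import numpy as np

    def loocv_rmse(X, y):
        """Leave-one-out cross-validation error for an OLS prediction model."""
        n = len(y)
        errors = np.empty(n)
        for i in range(n):
            keep = np.arange(n) != i
            Xd = np.column_stack([np.ones(keep.sum()), X[keep]])
            beta, *_ = np.linalg.lstsq(Xd, y[keep], rcond=None)
            pred = np.concatenate([[1.0], X[i]]) @ beta
            errors[i] = y[i] - pred
        return np.sqrt(np.mean(errors**2))

    # Hypothetical stand-in for mixed dentition data: predict unerupted
    # canine/premolar widths (y) from erupted incisor widths (X).
    rng = np.random.default_rng(1)
    X = rng.normal(22.0, 1.5, size=(60, 1))            # incisor sum (mm)
    y = 0.5 * X[:, 0] + 10.0 + rng.normal(0, 0.8, 60)  # unerupted tooth sum (mm)
    print(f"LOOCV RMSE: {loocv_rmse(X, y):.2f} mm")
    ```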

  7. An individual and dynamic Body Segment Inertial Parameter validation method using ground reaction forces.

    PubMed

    Hansen, Clint; Venture, Gentiane; Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice

    2014-05-07

    Over the last decades, a variety of research has been conducted with the goal of improving Body Segment Inertial Parameter (BSIP) estimations, but to our knowledge a real validation has never been completely successful because no ground truth is available. The aim of this paper is to propose a validation method for a BSIP identification method (IM) and to confirm the results by comparing contact forces recalculated via inverse dynamics with those obtained from a force plate. Furthermore, the results are compared with the recently proposed estimation method of Dumas et al. (2007). Additionally, the results are cross-validated with a high-velocity overarm throwing movement. Across all conditions, higher correlations, smaller error metrics and smaller RMSE are found for the proposed BSIP identification method (IM), which shows its advantage over recently proposed methods such as that of Dumas et al. (2007). The purpose of the paper is to validate an already proposed method and to show that this method can be of significant advantage compared with conventional methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Do placebo based validation standards mimic real batch products behaviour? Case studies.

    PubMed

    Bouabidi, A; Talbi, M; Bouklouze, A; El Karbane, M; Bourichi, H; El Guezzar, M; Ziemons, E; Hubert, Ph; Rozet, E

    2011-06-01

    Analytical method validation is a mandatory step to evaluate the ability of developed methods to provide accurate results for their routine application. Validation usually involves validation standards or quality control samples that are prepared in placebo or reconstituted matrix made of a mixture of all the ingredients composing the drug product except the active substance or the analyte under investigation. However, one of the main concerns with this approach is that it may miss an important source of variability that comes from the manufacturing process. The question that remains at the end of the validation step is about the transferability of the quantitative performance from validation standards to real, authentic drug product samples. In this work, this topic is investigated through three case studies. Three analytical methods were validated using the commonly spiked placebo validation standards at several concentration levels, as well as using samples coming from authentic batches (tablets and syrups). The results showed that, depending on the type of response function used as the calibration curve, there were various degrees of difference in the accuracy of the results obtained with the two types of samples. Nonetheless, the spiked placebo validation standards were shown to mimic relatively well the quantitative behaviour of the analytical methods with authentic batch samples. Adding these authentic batch samples into the validation design may help the analyst to select and confirm the most fit-for-purpose calibration curve and thus increase the accuracy and reliability of the results generated by the method in routine application. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Software validation applied to spreadsheets used in laboratories working under ISO/IEC 17025

    NASA Astrophysics Data System (ADS)

    Banegas, J. M.; Orué, M. W.

    2016-07-01

    Several documents deal with software validation. Nevertheless, most are too complex to apply to the validation of spreadsheets, surely the most widely used software in laboratories working under ISO/IEC 17025. The method proposed in this work is intended to be directly applicable to the validation of spreadsheets. It includes a systematic way to document requirements, operational aspects of validation, and a simple method to keep records of validation results and modification history. This method is currently used in an accredited calibration laboratory, where it has shown itself to be practical and efficient.
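
    One way to realise part of such a spreadsheet validation is to recompute each spreadsheet result with an independent implementation and compare within a tolerance. A sketch using openpyxl (Python; the file name, sheet layout and formula are hypothetical):

    ```python
    from openpyxl import load_workbook

    # Hypothetical layout: column A holds inputs, column B the spreadsheet's
    # computed result; file and sheet names are illustrative.
    wb = load_workbook("calibration.xlsx", data_only=True)  # cached cell values
    ws = wb["results"]

    def reference_calc(x):
        # Independent reimplementation of the spreadsheet formula under test
        return 0.998 * x + 0.02   # hypothetical correction model

    tolerance = 1e-9
    for x, spreadsheet_result in ws.iter_rows(min_row=2, max_col=2,
                                              values_only=True):
        expected = reference_calc(x)
        assert abs(spreadsheet_result - expected) <= tolerance, (
            f"mismatch for input {x}: {spreadsheet_result} vs {expected}")
    print("all spreadsheet results reproduced within tolerance")
    ```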

  10. VDA, a Method of Choosing a Better Algorithm with Fewer Validations

    PubMed Central

    Kluger, Yuval

    2011-01-01

    The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256
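
    A minimal sketch of the core VDA idea, selecting validation items that maximize the minimum pairwise Hamming distance between algorithms' predictions, here via a simple greedy heuristic rather than the authors' implementation (Python, synthetic predictions):

    ```python
    import numpy as np
    from itertools import combinations

    def min_pairwise_hamming(P):
        """Minimum Hamming distance over all algorithm pairs; P is (n_algs, n_items)."""
        return min(np.sum(P[i] != P[j]) for i, j in combinations(range(len(P)), 2))

    def greedy_validation_set(preds, budget):
        """Greedily pick items whose inclusion most increases the minimum
        pairwise Hamming distance between algorithms' prediction vectors."""
        chosen, remaining = [], list(range(preds.shape[1]))
        for _ in range(budget):
            best = max(remaining,
                       key=lambda j: min_pairwise_hamming(preds[:, chosen + [j]]))
            chosen.append(best)
            remaining.remove(best)
        return chosen

    # Hypothetical binary predictions from 3 algorithms on 30 candidate items
    rng = np.random.default_rng(2)
    preds = rng.integers(0, 2, size=(3, 30))
    print("validation items:", greedy_validation_set(preds, budget=5))
    ```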

  11. Real-Time PCR Method for Detection of Salmonella spp. in Environmental Samples.

    PubMed

    Kasturi, Kuppuswamy N; Drgon, Tomas

    2017-07-15

    The methods currently used for detecting Salmonella in environmental samples require 2 days to produce results and have limited sensitivity. Here, we describe the development and validation of a real-time PCR Salmonella screening method that produces results in 18 to 24 h. Primers and probes specific to the gene invA, group D, and Salmonella enterica serovar Enteritidis organisms were designed and evaluated for inclusivity and exclusivity using a panel of 329 Salmonella isolates representing 126 serovars and 22 non-Salmonella organisms. The invA- and group D-specific sets identified all the isolates accurately. The PCR method had 100% inclusivity and detected 1 to 2 copies of Salmonella DNA per reaction. Primers specific for Salmonella-differentiating fragment 1 (Sdf-1) in conjunction with the group D set had 100% inclusivity for 32 S. Enteritidis isolates and 100% exclusivity for the 297 non-Enteritidis Salmonella isolates. Single-laboratory validation performed on 1,741 environmental samples demonstrated that the PCR method detected 55% more positives than the Vitek immunodiagnostic assay system (VIDAS) method. The PCR results correlated well with the culture results, and the method did not report any false-negative results. The receiver operating characteristic (ROC) analysis documented excellent agreement between the results from the culture and PCR methods (area under the curve, 0.90; 95% confidence interval of 0.76 to 1.0), confirming the validity of the PCR method. IMPORTANCE This validated PCR method detects 55% more positives for Salmonella in half the time required for the reference method, VIDAS. The validated PCR method will help to strengthen public health efforts through rapid screening of Salmonella spp. in environmental samples.
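
    A minimal sketch of the kind of ROC analysis described, assuming a continuous PCR signal (e.g., an inverted Ct value) scored against culture as ground truth (Python with scikit-learn; all data synthetic):

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical data: culture result as ground truth (1 = Salmonella
    # recovered) and an inverted Ct value as the PCR score (lower Ct means
    # a stronger signal, so we invert it).
    rng = np.random.default_rng(3)
    truth = rng.integers(0, 2, size=200)
    ct = np.where(truth == 1, rng.normal(24, 3, 200), rng.normal(35, 3, 200))
    score = 45.0 - ct  # higher score = more positive

    auc = roc_auc_score(truth, score)
    print(f"AUC = {auc:.2f}")  # values near 0.90 indicate excellent agreement
    ```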

  12. Real-Time PCR Method for Detection of Salmonella spp. in Environmental Samples

    PubMed Central

    Drgon, Tomas

    2017-01-01

    ABSTRACT The methods currently used for detecting Salmonella in environmental samples require 2 days to produce results and have limited sensitivity. Here, we describe the development and validation of a real-time PCR Salmonella screening method that produces results in 18 to 24 h. Primers and probes specific to the gene invA, group D, and Salmonella enterica serovar Enteritidis organisms were designed and evaluated for inclusivity and exclusivity using a panel of 329 Salmonella isolates representing 126 serovars and 22 non-Salmonella organisms. The invA- and group D-specific sets identified all the isolates accurately. The PCR method had 100% inclusivity and detected 1 to 2 copies of Salmonella DNA per reaction. Primers specific for Salmonella-differentiating fragment 1 (Sdf-1) in conjunction with the group D set had 100% inclusivity for 32 S. Enteritidis isolates and 100% exclusivity for the 297 non-Enteritidis Salmonella isolates. Single-laboratory validation performed on 1,741 environmental samples demonstrated that the PCR method detected 55% more positives than the Vitek immunodiagnostic assay system (VIDAS) method. The PCR results correlated well with the culture results, and the method did not report any false-negative results. The receiver operating characteristic (ROC) analysis documented excellent agreement between the results from the culture and PCR methods (area under the curve, 0.90; 95% confidence interval of 0.76 to 1.0) confirming the validity of the PCR method. IMPORTANCE This validated PCR method detects 55% more positives for Salmonella in half the time required for the reference method, VIDAS. The validated PCR method will help to strengthen public health efforts through rapid screening of Salmonella spp. in environmental samples. PMID:28500041

  13. Considerations regarding the validation of chromatographic mass spectrometric methods for the quantification of endogenous substances in forensics.

    PubMed

    Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra

    2018-02-01

    The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances are discussed, which may serve as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), matrix effects, and recovery have to be approached differently. The highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Testing Standard Reliability Criteria

    ERIC Educational Resources Information Center

    Sherry, David

    2017-01-01

    Maul's paper, "Rethinking Traditional Methods of Survey Validation" (Andrew Maul), contains two stages. First he presents empirical results that cast doubt on traditional methods for validating psychological measurement instruments. These results motivate the second stage, a critique of current conceptions of psychological measurement…

  15. The Importance of Method Selection in Determining Product Integrity for Nutrition Research

    PubMed Central

    Mudge, Elizabeth M; Brown, Paula N

    2016-01-01

    The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. PMID:26980823

  16. The Importance of Method Selection in Determining Product Integrity for Nutrition Research.

    PubMed

    Mudge, Elizabeth M; Betz, Joseph M; Brown, Paula N

    2016-03-01

    The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. © 2016 American Society for Nutrition.

  17. An integrated bioanalytical method development and validation approach: case studies.

    PubMed

    Xue, Y-J; Melo, Brian; Vallejo, Martha; Zhao, Yuwen; Tang, Lina; Chen, Yuan-Shek; Keller, Karin M

    2012-10-01

    We proposed an integrated bioanalytical method development and validation approach: (1) method screening based on the analyte's physicochemical properties and metabolism information to determine the most appropriate extraction/analysis conditions; (2) preliminary stability evaluation using both quality control and incurred samples to establish sample collection, storage and processing conditions; (3) mock validation to examine method accuracy and precision and incurred sample reproducibility; and (4) method validation to confirm the results obtained during method development. This integrated approach was applied to the determination of compound I in rat plasma and compound II in rat and dog plasma. The effectiveness of the approach was demonstrated by the superior quality of three method validations: (1) a zero run failure rate; (2) >93% of quality control results within 10% of nominal values; and (3) 99% of incurred samples within 9.2% of the original values. In addition, the rat and dog plasma methods for compound II were successfully applied to analyze more than 900 plasma samples obtained from Investigational New Drug (IND) toxicology studies in rats and dogs with near-perfect results: (1) a zero run failure rate; (2) excellent accuracy and precision for standards and quality controls; and (3) 98% of incurred samples within 15% of the original values. Copyright © 2011 John Wiley & Sons, Ltd.

  18. DBS-LC-MS/MS assay for caffeine: validation and neonatal application.

    PubMed

    Bruschettini, Matteo; Barco, Sebastiano; Romantsik, Olga; Risso, Francesco; Gennai, Iulian; Chinea, Benito; Ramenghi, Luca A; Tripodi, Gino; Cangemi, Giuliana

    2016-09-01

    DBS might be an appropriate microsampling technique for therapeutic drug monitoring of caffeine in infants. Nevertheless, its application presents several issues that still limit its use. This paper describes a validated DBS-LC-MS/MS method for caffeine. The results of the method validation showed a hematocrit dependence. In the analysis of 96 paired plasma and DBS clinical samples, caffeine levels measured in DBS were statistically significantly lower than in plasma, but the observed differences were independent of hematocrit. These results clearly show the need for extensive validation with real-life samples for DBS-based methods. DBS-LC-MS/MS can be considered a good alternative to traditional methods for therapeutic drug monitoring or PK studies in preterm infants.

  19. Validation sampling can reduce bias in healthcare database studies: an illustration using influenza vaccination effectiveness

    PubMed Central

    Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael

    2014-01-01

    Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144
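
    A minimal sketch of the reweighting idea (Python with scikit-learn; the data and model are synthetic): validation-sample members are weighted by the inverse of their estimated probability of inclusion, so that the weighted validation sample resembles the full sample on the administrative covariates:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical setup: X_full holds administrative covariates for everyone;
    # in_val flags membership in the validation cohort, which alone carries
    # the extra confounder data.
    rng = np.random.default_rng(4)
    n = 5000
    X_full = rng.normal(size=(n, 3))                 # age, sex, comorbidity score
    p_val = 1 / (1 + np.exp(-(X_full[:, 0] - 0.5)))  # older people oversampled
    in_val = rng.random(n) < 0.2 * p_val

    # Model P(validation membership | covariates), then weight by its inverse
    member_model = LogisticRegression().fit(X_full, in_val)
    p_hat = member_model.predict_proba(X_full[in_val])[:, 1]
    weights = 1.0 / p_hat  # each validation subject stands in for similar subjects

    print(f"validation n = {in_val.sum()}, "
          f"weight range = {weights.min():.1f}-{weights.max():.1f}")
    # `weights` would then enter the effectiveness model (e.g., weighted regression).
    ```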

  20. Dynamic Time Warping compared to established methods for validation of musculoskeletal models.

    PubMed

    Gaspar, Martin; Welke, Bastian; Seehaus, Frank; Hurschler, Christof; Schwarze, Michael

    2017-04-11

    By means of multi-body musculoskeletal simulation, important variables such as internal joint forces and moments can be estimated that cannot be measured directly. Validation can be performed by qualitative or by quantitative methods. Especially when comparing time-dependent signals, many methods do not perform well, and validation is often limited to qualitative approaches. The aim of the present study was to investigate the capabilities of the Dynamic Time Warping (DTW) algorithm for comparing time series, which can quantify phase as well as amplitude errors. We contrast the sensitivity of DTW with other established metrics: the Pearson correlation coefficient, cross-correlation, the metric according to Geers, RMSE and normalized RMSE. This study is based on two data sets, where one data set represents direct validation and the other represents indirect validation. Direct validation was performed in the context of clinical gait analysis on trans-femoral amputees fitted with a 6-component force-moment sensor; measured forces and moments from the amputees' socket prostheses are compared to simulated forces and moments. Indirect validation was performed in the context of surface EMG measurements on a cohort of healthy subjects, with measurements taken of seven muscles of the leg, which were compared to simulated muscle activations. Regarding direct validation, a positive linear relation between the results of RMSE and nRMSE and those of DTW can be seen. For indirect validation, a negative linear relation exists between Pearson correlation and cross-correlation. We propose the DTW algorithm for use in both direct and indirect quantitative validation, as it correlates well with the methods that are most suitable for each task. However, in direct validation it should be used together with methods that yield a dimensional error value, in order to make the results easier to interpret. Copyright © 2017 Elsevier Ltd. All rights reserved.
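
    A minimal sketch of the DTW comparison (Python): a textbook dynamic-programming DTW distance, contrasted with pointwise RMSE on two signals that differ only in phase:

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Classic dynamic-programming DTW between two 1-D signals.
        The accumulated cost penalises amplitude *and* phase mismatch,
        unlike pointwise RMSE."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j],      # insertion
                                     D[i, j - 1],      # deletion
                                     D[i - 1, j - 1])  # match
        return D[n, m]

    # Two signals identical in shape but shifted in phase: RMSE is large,
    # DTW stays small, illustrating why DTW suits time-dependent validation.
    t = np.linspace(0, 2 * np.pi, 200)
    measured  = np.sin(t)
    simulated = np.sin(t - 0.4)
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    print(f"RMSE = {rmse:.3f}, DTW = {dtw_distance(measured, simulated):.3f}")
    ```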

  1. Validation of microbiological testing in cardiovascular tissue banks: results of a quality round trial.

    PubMed

    de By, Theo M M H; McDonald, Carl; Süßner, Susanne; Davies, Jill; Heng, Wee Ling; Jashari, Ramadan; Bogers, Ad J J C; Petit, Pieter

    2017-11-01

    Surgeons needing human cardiovascular tissue for implantation in their patients are confronted with cardiovascular tissue banks that use different methods to identify and decontaminate micro-organisms. To elucidate these differences, we compared the quality of the processing methods used in 20 tissue banks and 1 reference laboratory, in order to validate the results on which tissue is accepted or rejected. We considered the decontamination methods used and the influence of antibiotic cocktails and residues on results and controls. The minor details of the processes were not included. To compare the outcomes of microbiological testing and decontamination methods for heart valve allografts in cardiovascular tissue banks, an international quality round was organized, in which 20 cardiovascular tissue banks participated. The quality round method was validated first and consisted of sending purposely contaminated human heart valve tissue samples with known micro-organisms to the participants, who identified the micro-organisms using their local methods. Seventeen of the 20 participants correctly identified the micro-organisms; if these samples had been heart valves to be released for implantation, 3 of the 20 participants would have accepted their result for release. Decontamination was shown not to be effective in 13 tissue banks, because the organisms grew after decontamination. Articles in the literature revealed that antibiotics are effective at 36°C and not, or less so, at 2-8°C. The decontamination procedure, if it is validated, will ensure that the tissue contains no known micro-organisms. This study demonstrates that the quality round method of sending contaminated tissues and assessing the results of the microbiological cultures is an effective way of validating the processes of tissue banks. Only when harmonization, based on validated methods, has been achieved will surgeons be able to fully rely on the methods used and have confidence in the consistent sterility of the tissue grafts. Tissue banks should validate their methods so that all stakeholders can trust the outcomes. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  2. Determination of serum levels of imatinib mesylate in patients with chronic myeloid leukemia: validation and application of a new analytical method to monitor treatment compliance

    PubMed Central

    Rezende, Vinícius Marcondes; Rivellis, Ariane Julio; Gomes, Melissa Medrano; Dörr, Felipe Augusto; Novaes, Mafalda Megumi Yoshinaga; Nardinelli, Luciana; Costa, Ariel Lais de Lima; Chamone, Dalton de Alencar Fisher; Bendit, Israel

    2013-01-01

    Objective The goal of this study was to monitor imatinib mesylate therapeutically in the Tumor Biology Laboratory, Department of Hematology and Hemotherapy, Hospital das Clínicas, Faculdade de Medicina, Universidade de São Paulo (USP). A simple and sensitive method to quantify imatinib and its metabolite (CGP74588) in human serum was developed and fully validated in order to monitor treatment compliance. Methods The method used to quantify these compounds in serum included protein precipitation extraction followed by instrumental analysis using high performance liquid chromatography coupled with mass spectrometry. The method was validated for several parameters, including selectivity, precision, accuracy, recovery and linearity. Results The parameters evaluated during the validation stage exhibited satisfactory results based on the Food and Drug Administration and the Brazilian Health Surveillance Agency (ANVISA) guidelines for validating bioanalytical methods. These parameters also showed a linear correlation greater than 0.99 for the concentration range between 0.500 µg/mL and 10.0 µg/mL and a total analysis time of 13 minutes per sample. This study includes results (imatinib serum concentrations) for 308 samples from patients being treated with imatinib mesylate. Conclusion The method developed in this study was successfully validated and is being efficiently used to measure imatinib concentrations in samples from chronic myeloid leukemia patients to check treatment compliance. The imatinib serum levels of patients achieving a major molecular response were significantly higher than those of patients who did not achieve this result. These results are thus consistent with published reports concerning other populations. PMID:23741187

  3. A method for validation of finite element forming simulation on basis of a pointwise comparison of distance and curvature

    NASA Astrophysics Data System (ADS)

    Dörr, Dominik; Joppich, Tobias; Schirmaier, Fabian; Mosthaf, Tobias; Kärger, Luise; Henning, Frank

    2016-10-01

    Thermoforming of continuously fiber reinforced thermoplastics (CFRTP) is ideally suited to thin-walled and complex-shaped products. By means of forming simulation, an initial validation of the producibility of a specific geometry, an optimization of the forming process, and the prediction of fiber reorientation due to forming are possible. Nevertheless, the applied methods need to be validated. Therefore, a method is presented that enables the calculation of error measures for the mismatch between simulation results and experimental tests, based on measurements with a conventional coordinate measuring device. As a quantitative measure describing the curvature is provided, the presented method is also suitable for numerical or experimental sensitivity studies on wrinkling behavior. The applied methods for forming simulation, implemented in Abaqus explicit, are presented and applied to a generic geometry. The same geometry is tested experimentally, and simulation and test results are compared by means of the proposed validation method.

  4. External validation of a Cox prognostic model: principles and methods

    PubMed Central

    2013-01-01

    Background A prognostic model should not enter clinical practice unless it has been demonstrated that it performs a useful role. External validation denotes evaluation of model performance in a sample independent of that used to develop the model. Unlike for logistic regression models, external validation of Cox models is sparsely treated in the literature. Successful validation of a model means achieving satisfactory discrimination and calibration (prediction accuracy) in the validation sample. Validating Cox models is not straightforward because event probabilities are estimated relative to an unspecified baseline function. Methods We describe statistical approaches to external validation of a published Cox model according to the level of published information, specifically (1) the prognostic index only, (2) the prognostic index together with Kaplan-Meier curves for risk groups, and (3) the first two plus the baseline survival curve (the estimated survival function at the mean prognostic index across the sample). The most challenging task, requiring level 3 information, is assessing calibration, for which we suggest a method of approximating the baseline survival function. Results We apply the methods to two comparable datasets in primary breast cancer, treating one as derivation and the other as validation sample. Results are presented for discrimination and calibration. We demonstrate plots of survival probabilities that can assist model evaluation. Conclusions Our validation methods are applicable to a wide range of prognostic studies and provide researchers with a toolkit for external validation of a published Cox model. PMID:23496923
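
    A minimal sketch of assessing discrimination in the validation sample via Harrell's concordance index, computed on the published prognostic index (Python; the data are synthetic and censoring is handled in the simplest usable-pair form):

    ```python
    import numpy as np

    def harrell_c(time, event, risk):
        """Harrell's concordance index for a Cox prognostic index.
        A pair is usable if the subject with the shorter time had an event;
        it is concordant if that subject also has the higher predicted risk."""
        concordant = usable = 0.0
        n = len(time)
        for i in range(n):
            for j in range(n):
                if time[i] < time[j] and event[i] == 1:
                    usable += 1
                    if risk[i] > risk[j]:
                        concordant += 1
                    elif risk[i] == risk[j]:
                        concordant += 0.5
        return concordant / usable

    # Hypothetical validation sample scored with a published linear predictor
    rng = np.random.default_rng(5)
    pi = rng.normal(size=300)                  # prognostic index
    time = rng.exponential(scale=np.exp(-pi))  # higher risk -> shorter time
    event = (rng.random(300) < 0.7).astype(int)  # ~30 % censoring
    print(f"C-index in validation sample: {harrell_c(time, event, pi):.2f}")
    ```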

  5. Field validation of the dnph method for aldehydes and ketones. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Workman, G.S.; Steger, J.L.

    1996-04-01

    A stationary source emission test method for selected aldehydes and ketones has been validated. The method employs a sampling train with impingers containing 2,4-dinitrophenylhydrazine (DNPH) to derivatize the analytes. The resulting hydrazones are recovered and analyzed by high performance liquid chromatography. Nine analytes were studied; the method was validated for formaldehyde, acetaldehyde, propionaldehyde, acetophenone and isophorone. Acrolein, methyl ethyl ketone, methyl isobutyl ketone, and quinone did not meet the validation criteria. The study employed the validation techniques described in EPA Method 301, which uses train spiking to determine bias and collocated sampling trains to determine precision. The studies were carried out at a plywood veneer dryer and a polyester manufacturing plant.
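
    A simplified sketch of the Method 301-style statistics named above, bias from spiked trains and precision from collocated trains (Python with SciPy; the measurements are hypothetical and the real method prescribes additional correction-factor rules):

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical quadruplicate train data for one analyte (ug):
    spiked   = np.array([104.0, 98.5, 101.2, 99.8])  # spiked trains
    unspiked = np.array([20.1, 19.4, 21.0, 20.3])    # collocated unspiked trains
    spike_amount = 80.0                               # known amount added

    # Bias: mean recovered spike vs amount spiked, via a one-sample t-test
    recovered = spiked - unspiked.mean()
    t_stat, p_val = stats.ttest_1samp(recovered, spike_amount)
    bias_pct = 100.0 * (recovered.mean() - spike_amount) / spike_amount

    # Precision: relative standard deviation of the collocated trains
    rsd_pct = 100.0 * unspiked.std(ddof=1) / unspiked.mean()

    print(f"bias = {bias_pct:+.1f}% (t = {t_stat:.2f}, p = {p_val:.2f}), "
          f"RSD = {rsd_pct:.1f}%")
    ```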

  6. A statistical approach to selecting and confirming validation targets in -omics experiments

    PubMed Central

    2012-01-01

    Background Genomic technologies are, by their very nature, designed for hypothesis generation. In some cases, the hypotheses that are generated require that genome scientists confirm findings about specific genes or proteins. But one major advantage of high-throughput technology is that global genetic, genomic, transcriptomic, and proteomic behaviors can be observed. Manual confirmation of every statistically significant genomic result is prohibitively expensive. This has led researchers in genomics to adopt the strategy of confirming only a handful of the most statistically significant results, a small subset chosen for biological interest, or a small random subset. But there is no standard approach for selecting and quantitatively evaluating validation targets. Results Here we present a new statistical method and approach for statistically validating lists of significant results based on confirming only a small random sample. We apply our statistical method to show that the usual practice of confirming only the most statistically significant results does not statistically validate result lists. We analyze an extensively validated RNA-sequencing experiment to show that confirming a random subset can statistically validate entire lists of significant results. Finally, we analyze multiple publicly available microarray experiments to show that statistically validating random samples can both (i) provide evidence to confirm long gene lists and (ii) save thousands of dollars and hundreds of hours of labor over manual validation of each significant result. Conclusions For high-throughput -omics studies, statistical validation is a cost-effective and statistically valid approach to confirming lists of significant results. PMID:22738145
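
    A minimal sketch of the underlying calculation (Python with SciPy): confirming a random subset yields a binomial lower confidence bound on the true-positive proportion of the whole list (the counts are hypothetical; the paper's method is more elaborate):

    ```python
    from scipy.stats import beta

    def validated_fraction_lower_bound(n_confirmed, n_sampled, alpha=0.05):
        """One-sided lower Clopper-Pearson confidence bound on the proportion
        of true positives in the full result list, from a random validation
        sample of size n_sampled with n_confirmed successes."""
        if n_confirmed == 0:
            return 0.0
        return beta.ppf(alpha, n_confirmed, n_sampled - n_confirmed + 1)

    # Hypothetical: 18 of 20 randomly sampled significant genes confirmed
    lb = validated_fraction_lower_bound(18, 20)
    print(f"with 95% confidence, at least {lb:.0%} of the full list is correct")
    ```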

  7. Semi-automating the manual literature search for systematic reviews increases efficiency.

    PubMed

    Chapman, Andrea L; Morgan, Laura C; Gartlehner, Gerald

    2010-03-01

    To minimise retrieval bias, manual literature searches are a key part of the search process of any systematic review. Considering the need to have accurate information, valid results of the manual literature search are essential to ensure scientific standards; likewise efficient approaches that minimise the amount of personnel time required to conduct a manual literature search are of great interest. The objective of this project was to determine the validity and efficiency of a new manual search method that utilises the scopus database. We used the traditional manual search approach as the gold standard to determine the validity and efficiency of the proposed scopus method. Outcome measures included completeness of article detection and personnel time involved. Using both methods independently, we compared the results based on accuracy of the results, validity and time spent conducting the search, efficiency. Regarding accuracy, the scopus method identified the same studies as the traditional approach indicating its validity. In terms of efficiency, using scopus led to a time saving of 62.5% compared with the traditional approach (3 h versus 8 h). The scopus method can significantly improve the efficiency of manual searches and thus of systematic reviews.

  8. Increased efficacy for in-house validation of real-time PCR GMO detection methods.

    PubMed

    Scholtens, I M J; Kok, E J; Hougs, L; Molenaar, B; Thissen, J T N M; van der Voet, H

    2010-03-01

    To improve the efficacy of the in-house validation of GMO detection methods (DNA isolation and real-time PCR, polymerase chain reaction), a study was performed to gain insight into the contribution of the different steps of the GMO detection method to the repeatability and in-house reproducibility. In the present study, 19 methods for (GM) soy, maize, canola and potato were validated in-house, of which 14 on the basis of an 8-day validation scheme using eight different samples and five on the basis of a more concise validation protocol. In this way, data were obtained with respect to the detection limit, accuracy and precision. Also, decision limits were calculated for declaring non-conformance (>0.9%) with 95% reliability. In order to estimate the contribution of the different steps in the GMO analysis to the total variation, variance components were estimated using REML (residual maximum likelihood method). From these components, relative standard deviations for repeatability and reproducibility (RSD(r) and RSD(R)) were calculated. The results showed that not only the PCR reaction but also the factors 'DNA isolation' and 'PCR day' contribute substantially to the total variance and should therefore be included in the in-house validation. It is proposed to use a statistical model to estimate these factors from a large dataset of initial validations, so that for similar GMO methods in the future only the PCR step needs to be validated. The resulting data are discussed in the light of agreed European criteria for qualified GMO detection methods.
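
    A simplified sketch of turning variance components into RSD(r) and RSD(R) (Python): classical balanced one-way ANOVA estimators with 'day' as the only extra factor, whereas the study uses REML over several factors; all measurements are hypothetical:

    ```python
    import numpy as np

    def rsd_r_and_R(data):
        """Repeatability and reproducibility RSDs from a balanced one-way
        design (rows = days, columns = replicates) via ANOVA estimators."""
        k, n = data.shape                  # k days, n replicates per day
        grand = data.mean()
        ms_within = data.var(axis=1, ddof=1).mean()
        ms_between = n * data.mean(axis=1).var(ddof=1)
        var_day = max(0.0, (ms_between - ms_within) / n)
        rsd_r = 100 * np.sqrt(ms_within) / grand            # repeatability
        rsd_R = 100 * np.sqrt(ms_within + var_day) / grand  # reproducibility
        return rsd_r, rsd_R

    # Hypothetical GMO content measurements (%), 4 days x 3 replicates
    data = np.array([[0.92, 0.95, 0.90],
                     [1.01, 0.98, 1.03],
                     [0.88, 0.91, 0.90],
                     [0.97, 0.99, 0.95]])
    r, R = rsd_r_and_R(data)
    print(f"RSD_r = {r:.1f}%, RSD_R = {R:.1f}%")
    ```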

  9. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors (aligned with the EURO-CORDEX experiment) and 3) pseudo-reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a European-wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contributions to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including both data (downscaled values) and metadata (characterizing different aspects of the downscaling methods). This constitutes the largest and most comprehensive intercomparison of statistical downscaling methods to date. Here, we present an overall validation, analyzing marginal and temporal aspects to assess the intrinsic performance and added value of statistical downscaling methods at both annual and seasonal levels. This validation takes into account the different properties/limitations of different approaches and techniques (as reported in the provided metadata) in order to perform a fair comparison. It is pointed out that this experiment alone is not sufficient to evaluate the limitations of (MOS) bias correction techniques. Moreover, it also does not fully validate PP, since we do not learn whether we have the right predictors and whether the PP assumption is valid. These problems will be analyzed in the subsequent community-open VALUE experiments 2) and 3), which will be open for participation throughout the present year.

  10. Brazilian Center for the Validation of Alternative Methods (BraCVAM) and the process of validation in Brazil.

    PubMed

    Presgrave, Octavio; Moura, Wlamir; Caldeira, Cristiane; Pereira, Elisabete; Bôas, Maria H Villas; Eskes, Chantra

    2016-03-01

    The need for the creation of a Brazilian centre for the validation of alternative methods was recognised in 2008, and members of academia, industry and existing international validation centres immediately engaged with the idea. In 2012, co-operation between the Oswaldo Cruz Foundation (FIOCRUZ) and the Brazilian Health Surveillance Agency (ANVISA) instigated the establishment of the Brazilian Center for the Validation of Alternative Methods (BraCVAM), which was officially launched in 2013. The Brazilian validation process follows OECD Guidance Document No. 34, where BraCVAM functions as the focal point to identify and/or receive requests from parties interested in submitting tests for validation. BraCVAM then informs the Brazilian National Network on Alternative Methods (RENaMA) of promising assays, which helps with prioritisation and contributes to the validation studies of selected assays. A Validation Management Group supervises the validation study, and the results obtained are peer-reviewed by an ad hoc Scientific Review Committee, organised under the auspices of BraCVAM. Based on the peer-review outcome, BraCVAM will prepare recommendations on the validated test method, which will be sent to the National Council for the Control of Animal Experimentation (CONCEA). CONCEA is in charge of the regulatory adoption of all validated test methods in Brazil, following an open public consultation. 2016 FRAME.

  11. The use of National Weather Service Data to Compute the Dose to the MEOI.

    PubMed

    Vickers, Linda

    2018-05-01

    The Turner method is the "benchmark method" for computing the stability class that is used to compute the X/Q (s/m³). The Turner method should be used to ascertain the validity of X/Q results determined by other methods. This paper used site-specific meteorological data obtained from the National Weather Service. The Turner method described herein is simple, quick, accurate, and transparent, because all of the data, calculations, and results are visible for verification and validation against the published literature.
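
    For context, the ground-level centerline Gaussian-plume form of X/Q, into which the Turner stability class enters through the dispersion coefficients (Python; the coefficient values are illustrative stand-ins for Turner's tabulated ones):

    ```python
    import math

    def chi_over_q(sigma_y, sigma_z, u):
        """Ground-level centerline X/Q (s/m^3) for a ground-level release,
        standard Gaussian-plume form: X/Q = 1 / (pi * sigma_y * sigma_z * u)."""
        return 1.0 / (math.pi * sigma_y * sigma_z * u)

    # Hypothetical dispersion coefficients for a neutral stability class at
    # roughly 1 km downwind (illustrative; Turner's workbook tabulates them).
    sigma_y, sigma_z = 68.0, 32.0  # metres
    u = 4.0                        # 10-m wind speed, m/s
    print(f"X/Q = {chi_over_q(sigma_y, sigma_z, u):.2e} s/m^3")
    ```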

  12. [Validation and verification of microbiology methods].

    PubMed

    Camaró-Sala, María Luisa; Martínez-García, Rosana; Olmos-Martínez, Piedad; Catalá-Cuenca, Vicente; Ocete-Mochón, María Dolores; Gimeno-Cardona, Concepción

    2015-01-01

    Clinical microbiologists should ensure, to the maximum level allowed by scientific and technical development, the reliability of their results. This implies that, in addition to meeting the technical criteria that ensure their validity, tests must be performed under conditions that allow comparable results to be obtained regardless of the laboratory performing them. In this sense, the use of recognized and accepted reference methods is the most effective tool to provide these guarantees. Activities related to the verification and validation of analytical methods have become very important, given the continuous development and updating of techniques, increasingly complex analytical equipment, and the interest of professionals in ensuring quality processes and results. The definitions of validation and verification are described, along with the different types of validation/verification, the types of methods, and the level of validation necessary depending on the degree of standardization. The situations in which validation/verification is mandatory and/or recommended are discussed, including those particularly related to validation in Microbiology. The article stresses the importance of promoting the use of reference strains as controls in Microbiology and the use of standard controls, as well as the importance of participation in External Quality Assessment programs to demonstrate technical competence. The emphasis is on how to calculate some of the parameters required for validation/verification, such as accuracy and precision. The development of these concepts can be found in SEIMC microbiological procedure number 48, «Validation and verification of microbiological methods» (www.seimc.org/protocols/microbiology). Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  13. Validity and reliability of bioelectrical impedance analysis and skinfold thickness in predicting body fat in military personnel.

    PubMed

    Aandstad, Anders; Holtberget, Kristian; Hageberg, Rune; Holme, Ingar; Anderssen, Sigmund A

    2014-02-01

    Previous studies show that body composition is related to injury risk and physical performance in soldiers. Thus, valid methods for measuring body composition in military personnel are needed. The frequently used body mass index method is not a valid measure of body composition in soldiers, but the reliability and validity of alternative field methods are less investigated in military personnel. We therefore carried out test and retest measurements of skinfold (SKF), single-frequency bioelectrical impedance analysis (SF-BIA), and multifrequency bioelectrical impedance analysis in 65 male and female soldiers. Several validated equations were used to predict percent body fat from these methods. Dual-energy X-ray absorptiometry was also measured and acted as the criterion method. Results showed that SF-BIA was the most reliable method in both genders. In women, SF-BIA was also the most valid method, whereas SKF or a combination of SKF and SF-BIA produced the highest validity in men. Reliability and validity varied substantially among the equations examined. The best methods and equations produced test-retest 95% limits of agreement below ±1%-point, whereas the corresponding validity figures were ±3.5%-points. Each investigator and practitioner must consider whether such measurement errors are acceptable for their specific use. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
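
    A minimal sketch of the 95% limits-of-agreement calculation used to express such measurement errors (Python; the body-fat values are synthetic):

    ```python
    import numpy as np

    def limits_of_agreement(a, b):
        """Bland-Altman 95% limits of agreement between two methods
        (e.g., test vs retest, or field method vs DXA), in the units of a/b."""
        diff = a - b
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, bias - half_width, bias + half_width

    # Hypothetical percent-body-fat estimates on the same 30 soldiers
    rng = np.random.default_rng(6)
    dxa = rng.normal(18.0, 5.0, 30)       # criterion method
    bia = dxa + rng.normal(0.5, 1.7, 30)  # field method with bias + noise
    bias, lo, hi = limits_of_agreement(bia, dxa)
    print(f"bias = {bias:+.1f} %-points, 95% LoA = [{lo:+.1f}, {hi:+.1f}]")
    ```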

  14. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  15. Validation of spatial variability in downscaling results from the VALUE perfect predictor experiment

    NASA Astrophysics Data System (ADS)

    Widmann, Martin; Bedia, Joaquin; Gutiérrez, Jose Manuel; Maraun, Douglas; Huth, Radan; Fischer, Andreas; Keller, Denise; Hertig, Elke; Vrac, Mathieu; Wibig, Joanna; Pagé, Christian; Cardoso, Rita M.; Soares, Pedro MM; Bosshard, Thomas; Casado, Maria Jesus; Ramos, Petra

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. Within VALUE, a systematic validation framework has been developed to enable the assessment and comparison of both dynamical and statistical downscaling methods. In the first validation experiment, the downscaling methods are validated in a setup with perfect predictors taken from the ERA-Interim reanalysis for the period 1979-2008. This allows the isolated skill of downscaling methods to be investigated without further error contributions from the large-scale predictors. One aspect of the validation is the representation of spatial variability. As part of the VALUE validation, we have compared various properties of the spatial variability of downscaled daily temperature and precipitation with the corresponding properties in observations. We have used two validation datasets: one European-wide set of 86 stations, and one higher-density network of 50 stations in Germany. Here we present results based on three approaches, namely the analysis of (i) correlation matrices, (ii) pairwise joint threshold exceedances, and (iii) regions of similar variability. We summarise the information contained in correlation matrices by calculating the dependence of the correlations on distance and deriving decorrelation lengths, as well as by determining the independent degrees of freedom. Probabilities for joint threshold exceedances and (where appropriate) non-exceedances are calculated for various user-relevant thresholds related, for instance, to extreme precipitation or frost and heat days. The dependence of these probabilities on distance is again characterised by calculating typical length scales that separate dependent from independent exceedances. Regionalisation is based on rotated Principal Component Analysis. The results indicate which downscaling methods are preferable when the dependency of variability at different locations is relevant for the user.
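
    A minimal sketch of deriving a decorrelation length from station series (Python with SciPy; the station network and series are synthetic): pairwise correlations are computed and an exponential decay r(d) = exp(-d/L) is fitted, L being the distance at which correlation drops to 1/e:

    ```python
    import numpy as np
    from itertools import combinations
    from scipy.optimize import curve_fit

    def decorrelation_length(series, coords):
        """Fit r(d) = exp(-d / L) to pairwise correlations of station series."""
        dists, corrs = [], []
        for i, j in combinations(range(series.shape[1]), 2):
            dists.append(np.linalg.norm(coords[i] - coords[j]))
            corrs.append(np.corrcoef(series[:, i], series[:, j])[0, 1])
        (L,), _ = curve_fit(lambda d, L: np.exp(-d / L),
                            np.array(dists), np.array(corrs), p0=[300.0])
        return L

    # Synthetic daily series at 10 stations: nearby stations share noise,
    # so correlation decays with distance (km coordinates).
    rng = np.random.default_rng(7)
    coords = rng.uniform(0, 1000, size=(10, 2))
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    series = rng.normal(size=(365, 10)) @ np.exp(-d / 300.0)
    print(f"decorrelation length ~ {decorrelation_length(series, coords):.0f} km")
    ```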

  16. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement

    PubMed Central

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-01-01

    Objective The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Methods Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken of all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via the AutoCAD software and flexible ruler methods. The study was conducted in 2 parts: intratester and intertester evaluations of reliability, and evaluation of the validity of the flexible ruler and software methods. Results Based on the intraclass correlation coefficient, the reliability and validity of AutoCAD in measuring lumbar lordosis were 0.984 and 0.962, respectively. Conclusions AutoCAD was shown to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis. PMID:22654681
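
    A minimal sketch of the intraclass correlation coefficient on which such reliability figures rest, here the two-way random, absolute-agreement, single-measure form ICC(2,1) (Python; the lordosis data are synthetic):

    ```python
    import numpy as np

    def icc_2_1(ratings):
        """Two-way random-effects, absolute-agreement, single-measure ICC(2,1);
        `ratings` is an (n subjects x k raters/sessions) array."""
        n, k = ratings.shape
        grand = ratings.mean()
        ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
        ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
        ss_tot  = ((ratings - grand) ** 2).sum()
        ms_r = ss_rows / (n - 1)   # between-subject mean square
        ms_c = ss_cols / (k - 1)   # between-rater mean square
        ms_e = (ss_tot - ss_rows - ss_cols) / ((n - 1) * (k - 1))
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    # Hypothetical lordosis angles (deg): 10 subjects measured in 2 sessions
    rng = np.random.default_rng(8)
    true = rng.normal(45, 8, size=(10, 1))
    ratings = true + rng.normal(0, 1.5, size=(10, 2))  # small measurement error
    print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")
    ```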

  17. Valid methods: the quality assurance of test method development, validation, approval, and transfer for veterinary testing laboratories.

    PubMed

    Wiegers, Ann L

    2003-07-01

    Third-party accreditation is a valuable tool to demonstrate a laboratory's competence to conduct testing. Accreditation, internationally and in the United States, has been discussed previously. However, accreditation is only one part of establishing data credibility. A validated test method is the first component of a valid measurement system. Validation is defined as confirmation, by examination and the provision of objective evidence, that the particular requirements for a specific intended use are fulfilled. The international and national standard ISO/IEC 17025 recognizes the importance of validated methods and requires that laboratory-developed methods or methods adopted by the laboratory be appropriate for the intended use. Validated methods are therefore required, and their use must be agreed to by the client (i.e., end users of the test results such as veterinarians, animal health programs, and owners). ISO/IEC 17025 also requires that the introduction of methods developed by the laboratory for its own use be a planned activity conducted by qualified personnel with adequate resources. This article discusses considerations and recommendations for the conduct of veterinary diagnostic test method development, validation, evaluation, approval, and transfer to the user laboratory in the ISO/IEC 17025 environment. These recommendations are based on those of nationally and internationally accepted standards and guidelines, as well as those of reputable and experienced technical bodies. They are also based on the author's experience in the evaluation of method development and transfer projects, validation data, and the implementation of quality management systems in the area of method development.

  18. Alternative methods to determine headwater benefits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Y.S.; Perlack, R.D.; Sale, M.J.

    1997-11-10

    In 1992, the Federal Energy Regulatory Commission (FERC) began using a Flow Duration Analysis (FDA) methodology to assess headwater benefits in river basins where use of the Headwater Benefits Energy Gains (HWBEG) model may not result in significant improvements in modeling accuracy. The purpose of this study is to validate the accuracy and appropriateness of the FDA method for determining energy gains in less complex basins. This report presents the results of Oak Ridge National Laboratory's (ORNL's) validation of the FDA method. The validation is based on a comparison of energy gains using the FDA method with energy gains calculated using the HWBEG model. Comparisons of energy gains are made on a daily and monthly basis for a complex river basin (the Alabama River Basin) and a basin that is considered relatively simple hydrologically (the Stanislaus River Basin). In addition to validating the FDA method, ORNL was asked to suggest refinements and improvements to the FDA method, which were carried out using the James River Basin as a test case.
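
    For readers unfamiliar with flow duration analysis, the sketch below computes an empirical flow duration curve from daily flows. The flows, the Weibull plotting position, and the Q90 statistic are illustrative assumptions, not FERC's procedure.

```python
import numpy as np

def flow_duration_curve(daily_flows):
    """Empirical flow duration curve: flow vs. exceedance probability."""
    q = np.sort(np.asarray(daily_flows))[::-1]          # descending
    n = q.size
    exceedance = np.arange(1, n + 1) / (n + 1.0)        # Weibull plotting position
    return exceedance, q

# Hypothetical daily flows (m^3/s) for one water year.
rng = np.random.default_rng(1)
flows = rng.lognormal(mean=3.0, sigma=0.8, size=365)

p, q = flow_duration_curve(flows)
# Flow equalled or exceeded 90% of the time (a common low-flow statistic):
q90 = np.interp(0.90, p, q)
print(f"Q90 = {q90:.1f} m^3/s")
```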

  19. Validation of laboratory-scale recycling test method of paper PSA label products

    Treesearch

    Carl Houtman; Karen Scallon; Richard Oldack

    2008-01-01

    Starting with test methods and a specification developed by the U.S. Postal Service (USPS) Environmentally Benign Pressure Sensitive Adhesive Postage Stamp Program, a laboratory-scale test method and a specification were developed and validated for pressure-sensitive adhesive labels. By comparing results from this new test method and pilot-scale tests, which have been...

  20. Evaluation of the confusion matrix method in the validation of an automated system for measuring feeding behaviour of cattle.

    PubMed

    Ruuska, Salla; Hämäläinen, Wilhelmiina; Kajava, Sari; Mughal, Mikaela; Matilainen, Pekka; Mononen, Jaakko

    2018-03-01

    The aim of the present study was to empirically evaluate the confusion matrix method in device validation. We compared the confusion matrix method to linear regression and error indices in the validation of a device measuring the feeding behaviour of dairy cattle. In addition, we studied how to extract additional information on classification errors with confusion probabilities. The data consisted of 12 h of behaviour measurements from five dairy cows; feeding and other behaviour were detected simultaneously with a device and from video recordings. The resulting 216,000 pairs of classifications were used to construct confusion matrices and calculate performance measures. In addition, hourly durations of each behaviour were calculated, and the accuracy of the measurements was evaluated with linear regression and error indices. All three validation methods agreed when the behaviour was detected very accurately or inaccurately. In the intermediate cases, the confusion matrix method and error indices produced relatively concordant results, but the linear regression method often disagreed with them. Our study supports the use of confusion matrix analysis in validation since it is robust to any data distribution and type of relationship, it makes a stringent evaluation of validity, and it offers extra information on the type and sources of errors. Copyright © 2018 Elsevier B.V. All rights reserved.
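
    A minimal sketch of the core computation, assuming hypothetical second-by-second labels rather than the study's data:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical second-by-second labels: 1 = feeding, 0 = other behaviour.
rng = np.random.default_rng(2)
video = rng.integers(0, 2, size=3600)              # reference (video recordings)
device = np.where(rng.random(3600) < 0.9, video,   # device agrees 90% of the time
                  1 - video)

cm = confusion_matrix(video, device)               # rows: video, cols: device
tn, fp, fn, tp = cm.ravel()
sensitivity = tp / (tp + fn)   # feeding seconds correctly detected
specificity = tn / (tn + fp)   # other behaviour correctly detected
accuracy = (tp + tn) / cm.sum()
print(cm, f"sens={sensitivity:.3f} spec={specificity:.3f} acc={accuracy:.3f}")
```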

  1. Evaluating the dynamic response of in-flight thrust calculation techniques during throttle transients

    NASA Technical Reports Server (NTRS)

    Ray, Ronald J.

    1994-01-01

    New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency domain analysis method, were also developed and evaluated. They provide quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed the test methods as valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.

  2. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    PubMed

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in the clinical diagnosis of cardiovascular diseases. However, it relies on assumptions that simplify the complexities of real cardiovascular flow. Given the high stakes in the clinical setting, it is critical to quantify the effect of these assumptions on CFD simulation results. Existing CFD validation approaches, however, do not quantify the error in simulation results due to the CFD solver's modeling assumptions; instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
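
    The error-parsing idea can be sketched in the spirit of standard validation-uncertainty bookkeeping (e.g., ASME V&V 20); the numbers and uncertainty components below are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

# Hypothetical velocities along the validation line (m/s).
cfd = np.array([0.42, 0.55, 0.61, 0.58, 0.47])     # simulation S
piv = np.array([0.45, 0.52, 0.63, 0.55, 0.50])     # experiment D

E = cfd - piv                       # comparison error E = S - D
u_num = 0.01                        # numerical (discretization) uncertainty
u_input = 0.015                     # uncertainty from input parameters
u_exp = 0.02                        # PIV measurement uncertainty
u_val = np.sqrt(u_num**2 + u_input**2 + u_exp**2)  # validation uncertainty

# The model error at each point lies in E +/- k*u_val (k ~ 2 for ~95%).
k = 2.0
lo, hi = E - k * u_val, E + k * u_val
print("model-error interval per point:", list(zip(lo.round(3), hi.round(3))))
```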

  3. When Educational Material Is Delivered: A Mixed Methods Content Validation Study of the Information Assessment Method

    PubMed Central

    2017-01-01

    Background The Information Assessment Method (IAM) allows clinicians to report the cognitive impact, clinical relevance, intention to use, and expected patient health benefits associated with clinical information received by email. More than 15,000 Canadian physicians and pharmacists use the IAM in continuing education programs. In addition, information providers can use IAM ratings and feedback comments from clinicians to improve their products. Objective Our general objective was to validate the IAM questionnaire for the delivery of educational material (ecological and logical content validity). Our specific objectives were to measure the relevance and evaluate the representativeness of IAM items for assessing information received by email. Methods A 3-part mixed methods study was conducted (convergent design). In part 1 (quantitative longitudinal study), the relevance of IAM items was measured. Participants were 5596 physician members of the Canadian Medical Association who used the IAM. A total of 234,196 ratings were collected in 2012. The relevance of IAM items with respect to their main construct was calculated using descriptive statistics (relevance ratio R). In part 2 (qualitative descriptive study), the representativeness of IAM items was evaluated. A total of 15 family physicians completed semistructured face-to-face interviews. For each construct, we evaluated the representativeness of IAM items using a deductive-inductive thematic qualitative data analysis. In part 3 (mixing quantitative and qualitative parts), results from quantitative and qualitative analyses were reviewed, juxtaposed in a table, discussed with experts, and integrated. Thus, our final results are derived from the views of users (ecological content validation) and experts (logical content validation). Results Of the 23 IAM items, 21 were validated for content, while 2 were removed. In part 1 (quantitative results), 21 items were deemed relevant, while 2 items were deemed not relevant (R=4.86% [N=234,196] and R=3.04% [n=45,394], respectively). In part 2 (qualitative results), 22 items were deemed representative, while 1 item was not representative. In part 3 (mixing quantitative and qualitative results), the content validity of 21 items was confirmed, and the 2 nonrelevant items were excluded. A fully validated version was generated (IAM-v2014). Conclusions This study produced a content validated IAM questionnaire that is used by clinicians and information providers to assess the clinical information delivered in continuing education programs. PMID:28292738

  4. Method development and validation of potent pyrimidine derivative by UV-VIS spectrophotometer.

    PubMed

    Chaudhary, Anshu; Singh, Anoop; Verma, Prabhakar Kumar

    2014-12-01

    A rapid and sensitive ultraviolet-visible (UV-VIS) spectroscopic method was developed for the estimation of the pyrimidine derivative 6-Bromo-3-(6-(2,6-dichlorophenyl)-2-(morpholinomethylamino)pyrimidin-4-yl)-2H-chromen-2-one (BT10M) in bulk form. The pyrimidine derivative was monitored at 275 nm, with no interference from diluents at this wavelength. The method was found to be linear in the range of 50 to 150 μg/ml. Accuracy and precision were determined and validated statistically, and the method was validated in accordance with standard guidelines. The results showed that the proposed method is suitable for the accurate, precise, and rapid determination of the pyrimidine derivative. Graphical Abstract: Method development and validation of a potent pyrimidine derivative by UV spectroscopy.
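
    A linearity check of this kind typically reduces to a least-squares calibration fit; the concentrations, absorbances, and the r² acceptance threshold below are assumed for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data over the reported 50-150 ug/ml range.
conc = np.array([50, 75, 100, 125, 150], dtype=float)   # ug/ml
absorbance = np.array([0.21, 0.31, 0.42, 0.52, 0.63])   # at 275 nm

fit = stats.linregress(conc, absorbance)
print(f"slope={fit.slope:.5f}  intercept={fit.intercept:.4f}  r^2={fit.rvalue**2:.4f}")

# A common linearity acceptance check (an assumption, not from the paper):
assert fit.rvalue**2 >= 0.999, "calibration not sufficiently linear"
```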

  5. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    PubMed

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.
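
    The pooling idea behind "combined" cross-validation can be illustrated generically: instead of computing a statistic per fold and averaging, test-fold predictions are pooled across the cross-validation loops and the statistic is computed once. The sketch below uses a plain regression stand-in, not the authors' survival setting.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

# Generic illustration with a continuous response standing in for risk.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=200)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
per_fold, pooled_pred = [], np.empty_like(y)
for train, test in kf.split(X):
    model = LinearRegression().fit(X[train], y[train])
    pred = model.predict(X[test])
    pooled_pred[test] = pred                      # pool test-fold predictions
    per_fold.append(np.corrcoef(pred, y[test])[0, 1])

print("mean of per-fold statistics:", np.mean(per_fold).round(3))
print("one statistic on pooled folds:", np.corrcoef(pooled_pred, y)[0, 1].round(3))
```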

  6. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called “Patient Recursive Survival Peeling” is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called “combined” cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication. PMID:26997922

  7. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for the detection of genotoxic carcinogens: I. Summary of pre-validation study results.

    PubMed

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Burlinson, Brian; Escobar, Patricia A; Kraynak, Andrew R; Nakagawa, Yuzuki; Nakajima, Madoka; Pant, Kamala; Asano, Norihide; Lovell, David; Morita, Takeshi; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this validation effort was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The purpose of the pre-validation studies (i.e., Phase 1 through 3), conducted in four or five laboratories with extensive comet assay experience, was to optimize the protocol to be used during the definitive validation study. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Remote sensing imagery classification using multi-objective gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie

    2016-10-01

    Simultaneous optimization of different validity measures can capture different data characteristics of remote sensing imagery (RSI) and thereby achieve high-quality classification results. In this paper, two conflicting cluster validity indices, the Xie-Beni (XB) index and the fuzzy C-means (FCM) objective (Jm), are integrated with a diversity-enhanced and memory-based multi-objective gravitational search algorithm (DMMOGSA) to present a novel multi-objective optimization based RSI classification method. In this method, the Gabor filter method is first applied to extract texture features of the RSI. The texture features are then combined with the spectral features to construct the spatial-spectral feature set of the RSI. Afterwards, clustering of the spectral-spatial feature set is carried out on the basis of the proposed method: cluster centers are randomly generated initially, and are then updated and optimized adaptively by employing the DMMOGSA. Accordingly, a set of non-dominated cluster centers is obtained, a number of classification results are produced, and users can pick the most promising one according to their problem requirements. To quantitatively and qualitatively validate the effectiveness of the proposed method, it was applied to classify two aerial high-resolution remote sensing images. The obtained classification results were compared with those produced by two single-validity-index-based methods and by two state-of-the-art multi-objective optimization based classification methods. The comparison shows that the proposed method can achieve more accurate RSI classification.
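
    For reference, the Xie-Beni index used as one of the two objectives can be computed as cluster compactness divided by the minimum center separation; the toy data and memberships below are assumptions for illustration.

```python
import numpy as np

def xie_beni(X, centers, U, m=2.0):
    """Xie-Beni index for a fuzzy partition: lower is better.

    X: (n, d) data, centers: (c, d), U: (c, n) fuzzy memberships."""
    d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(-1)  # (c, n) sq. dists
    compactness = (U ** m * d2).sum()
    sep = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(sep, np.inf)                              # ignore i == j
    return compactness / (X.shape[0] * sep.min())

# Tiny hypothetical example: 2 clusters in 2-D.
X = np.array([[0., 0.], [0.1, 0.], [5., 5.], [5.1, 4.9]])
centers = np.array([[0.05, 0.], [5.05, 4.95]])
U = np.array([[0.95, 0.95, 0.05, 0.05],
              [0.05, 0.05, 0.95, 0.95]])
print(f"XB = {xie_beni(X, centers, U):.4f}")
```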

  9. Validation of Milliflex® Quantum for Bioburden Testing of Pharmaceutical Products.

    PubMed

    Gordon, Oliver; Goverde, Marcel; Staerk, Alexandra; Roesti, David

    2017-01-01

    This article reports the validation strategy used to demonstrate that the Milliflex® Quantum yielded non-inferior results to the traditional bioburden method. The method was validated according to USP <1223>, European Pharmacopoeia 5.1.6, and Parenteral Drug Association Technical Report No. 33, and the validation comprised the parameters robustness, ruggedness, repeatability, specificity, limit of detection and quantification, accuracy, precision, linearity, range, and equivalence in routine operation. For the validation, a combination of pharmacopeial ATCC strains and a broad selection of in-house isolates was used; the in-house isolates were used in a stressed state. Results were statistically evaluated against the pharmacopeial acceptance criterion of ≥70% recovery compared with the traditional method. Post-hoc power calculations verified the appropriateness of the sample size used to detect such a difference. Furthermore, equivalence tests verified the non-inferiority of the rapid method compared with the traditional method. In conclusion, the rapid bioburden test based on the Milliflex® Quantum was successfully validated as an alternative to the traditional bioburden test. LAY ABSTRACT: Pharmaceutical drug products must fulfill specified quality criteria regarding their microbial content in order to ensure patient safety. Drugs that are delivered into the body via injection, infusion, or implantation must be sterile (i.e., devoid of living microorganisms). Bioburden testing measures the levels of microbes present in the bulk solution of a drug before sterilization, and thus it provides important information for manufacturing a safe product. In general, bioburden testing has to be performed using the methods described in the pharmacopoeias (membrane filtration or plate count). These methods are well established and validated regarding their effectiveness; however, the incubation time required to visually identify microbial colonies is long. Thus, alternative methods that detect microbial contamination faster will improve control over the manufacturing process and speed up product release. Before alternative methods may be used, they must undergo a side-by-side comparison with pharmacopeial methods. In this comparison, referred to as validation, it must be shown in a statistically verified manner that the effectiveness of the alternative method is at least equivalent to that of the pharmacopeial methods. Here we describe the successful validation of an alternative bioburden testing method based on fluorescent staining of growing microorganisms using the Milliflex® Quantum system by MilliporeSigma. © PDA, Inc. 2017.

  10. Validation of a Tool to Assess and Track Undergraduate Attitudes toward Those Living in Poverty

    ERIC Educational Resources Information Center

    Blair, Kevin D.; Brown, Marlo; Schoepflin, Todd; Taylor, David B.

    2014-01-01

    Purpose: This article describes the development and validation of the Undergraduate Perceptions of Poverty Tracking Survey (UPPTS). Method: Data were collected from 301 undergraduates at a small university in the Northeast and analyzed using exploratory factor analysis augmented by random qualitative validation. Results: The resulting survey…

  11. Validation sampling can reduce bias in health care database studies: an illustration using influenza vaccination effectiveness.

    PubMed

    Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L

    2013-08-01

    Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    PubMed

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    Our aim was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We conducted a comprehensive survey of purity methods and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results, using UBCI as a concurrent assessment of measurement uncertainty. UBCI can therefore mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.

  13. Discriminant Validity Assessment: Use of Fornell & Larcker criterion versus HTMT Criterion

    NASA Astrophysics Data System (ADS)

    Hamid, M. R. Ab; Sami, W.; Mohmad Sidek, M. H.

    2017-09-01

    Assessment of discriminant validity is a must in any research that involves latent variables, for the prevention of multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method has emerged for establishing discriminant validity: the heterotrait-monotrait (HTMT) ratio of correlations. This article therefore presents the results of discriminant validity assessment using both methods. Data from a previous study, involving 429 respondents, were used for the empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia. From the analysis, convergent, divergent, and discriminant validity were established and admissible under the Fornell and Larcker criterion. However, discriminant validity was an issue when the HTMT criterion was employed. This shows that the latent variables under study face multicollinearity and should be examined in further detail. It also implies that the HTMT criterion is a stringent measure that can detect a possible lack of discrimination among latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in terms of discriminant validity and should be explored further.
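
    A minimal sketch of the HTMT computation from an item correlation matrix; the matrix, the item-to-construct assignment, and the 0.85/0.90 cutoffs below are illustrative assumptions, not the study's data.

```python
import numpy as np

def htmt(R, idx_a, idx_b):
    """Heterotrait-monotrait ratio for two constructs.

    R: item correlation matrix; idx_a/idx_b: item indices per construct."""
    R = np.abs(np.asarray(R))
    hetero = R[np.ix_(idx_a, idx_b)].mean()        # between-construct item corrs
    def mono(idx):                                  # mean within-construct corr
        iu = np.triu_indices(len(idx), k=1)
        return R[np.ix_(idx, idx)][iu].mean()
    return hetero / np.sqrt(mono(idx_a) * mono(idx_b))

# Hypothetical 6-item correlation matrix: items 0-2 load on A, 3-5 on B.
R = np.array([
    [1.00, 0.70, 0.65, 0.55, 0.50, 0.52],
    [0.70, 1.00, 0.68, 0.53, 0.49, 0.51],
    [0.65, 0.68, 1.00, 0.50, 0.48, 0.47],
    [0.55, 0.53, 0.50, 1.00, 0.72, 0.69],
    [0.50, 0.49, 0.48, 0.72, 1.00, 0.71],
    [0.52, 0.51, 0.47, 0.69, 0.71, 1.00]])
val = htmt(R, [0, 1, 2], [3, 4, 5])
print(f"HTMT = {val:.3f}  (common cutoffs: 0.85 or 0.90)")
```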

  14. MR Imaging Anatomy in Neurodegeneration: A Robust Volumetric Parcellation Method of the Frontal Lobe Gyri with Quantitative Validation in Patients with Dementia

    PubMed Central

    Iordanova, B.; Rosenbaum, D.; Norman, D.; Weiner, M.; Studholme, C.

    2007-01-01

    BACKGROUND AND PURPOSE Brain volumetry is widely used for evaluating tissue degeneration; however, the parcellation methods are rarely validated and use arbitrary planes to mark boundaries of brain regions. The goal of this study was to develop, validate, and apply an MR imaging tracing method for the parcellation of 3 major gyri of the frontal lobe, which uses only local landmarks intrinsic to the structures of interest, without the need for global reorientation or the use of dividing planes or lines. METHODS Studies were performed on 25 subjects—healthy controls and subjects diagnosed with Lewy body dementia and Alzheimer disease—with significant variation in the underlying gyral anatomy and state of atrophy. The protocol was evaluated by using multiple observers tracing scans of subjects diagnosed with neurodegenerative disease and those aging normally, and the results were compared by spatial overlap agreement. To confirm the results, observers marked the same locations in different brains. We illustrated the variabilities of the key boundaries that pose the greatest challenge to defining consistent parcellations across subjects. RESULTS The resulting gyral volumes were evaluated, and their consistency across raters was used as an additional assessment of the validity of our marking method. The agreement on a scale of 0–1 was found to be 0.83 spatial and 0.90 volumetric for the same rater and 0.85 spatial and 0.90 volumetric for 2 different raters. The results revealed that the protocol remained consistent across different neurodegenerative conditions. CONCLUSION Our method provides a simple and reliable way for the volumetric evaluation of frontal lobe neurodegeneration and can be used as a resource for larger comparative studies as well as a validation procedure of automated algorithms. PMID:16971629
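
    Spatial agreement figures of this kind are typically Dice-style overlap coefficients; a minimal sketch, assuming hypothetical voxel masks from two raters rather than the study's tracings:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice spatial overlap between two binary parcellation masks (0-1 scale)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Hypothetical voxel masks of one gyrus traced by two raters.
rater1 = np.zeros((10, 10, 10), bool); rater1[2:8, 2:8, 2:8] = True
rater2 = np.zeros((10, 10, 10), bool); rater2[3:8, 2:8, 2:8] = True
print(f"spatial agreement (Dice) = {dice(rater1, rater2):.3f}")
```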

  15. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority are not designed to protect against the risk of accepting unsuitable methods, and thus have the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or the β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure the safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
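
    A β-content tolerance interval of the kind compared here can be sketched with Howe's approximation to the two-sided tolerance factor; the data and acceptance limits below are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def howe_k2(n, content=0.90, confidence=0.90):
    """Howe's approximation to the two-sided beta-content tolerance factor."""
    z = stats.norm.ppf((1 + content) / 2)
    df = n - 1
    chi2_low = stats.chi2.ppf(1 - confidence, df)   # lower chi-square quantile
    return z * np.sqrt(df * (1 + 1 / n) / chi2_low)

# Hypothetical potency results (% of label claim) from a validation study.
x = np.array([99.1, 100.4, 98.7, 101.2, 99.8, 100.9, 99.5, 100.1])
k2 = howe_k2(len(x))
lo, hi = x.mean() - k2 * x.std(ddof=1), x.mean() + k2 * x.std(ddof=1)
print(f"90%-content/90%-confidence tolerance interval: [{lo:.1f}, {hi:.1f}]")
# The method passes if this interval lies within prespecified acceptance
# limits, e.g. 95-105% (assumed limits, for illustration only).
```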

  16. AOAC Official Method℠ Matrix Extension Validation Study of Assurance GDS™ for the Detection of Salmonella in Selected Spices.

    PubMed

    Feldsine, Philip; Kaur, Mandeep; Shah, Khyati; Immerman, Amy; Jucker, Markus; Lienau, Andrew

    2015-01-01

    Assurance GDS™ for Salmonella Tq has been validated according to the AOAC INTERNATIONAL Methods Committee Guidelines for Validation of Microbiological Methods for Food and Environmental Surfaces for the detection of Salmonella in selected foods and on environmental surfaces (Official Method of Analysis℠ 2009.03, Performance Tested Method℠ No. 050602). The method also completed AFNOR validation (following the ISO 16140 standard) against the reference method EN ISO 6579. For AFNOR, GDS was given a scope covering all human food, animal feedstuffs, and environmental surfaces (Certificate No. TRA02/12-01/09). Results showed that Assurance GDS for Salmonella (GDS) has high sensitivity and is equivalent to the reference culture methods for the detection of motile and non-motile Salmonella. As part of the aforementioned validations, inclusivity, exclusivity, stability, and ruggedness studies were also conducted; Assurance GDS showed 100% inclusivity and exclusivity among the 100 Salmonella serovars and 35 non-Salmonella organisms analyzed. To extend the scope of the Assurance GDS for Salmonella method, a matrix extension study was conducted, following the AOAC guidelines, to validate the application of the method to selected spices, specifically curry powder, cumin powder, and chili powder, for the detection of Salmonella.

  17. A method for sensitivity analysis to assess the effects of measurement error in multiple exposure variables using external validation data.

    PubMed

    Agogo, George O; van der Voet, Hilko; van 't Veer, Pieter; Ferrari, Pietro; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek C

    2016-10-13

    Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest such as risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for the bias in the association when there is no internal validation data. We proposed a method to adjust for the bias in the diet-disease association (hereafter, association), due to measurement error in dietary intake and a mismeasured confounder, when there is no internal validation data. The method combines prior information on the validity of the self-report instrument with the observed data to adjust for the bias in the association. We compared the proposed method with the method that ignores the confounder effect, and with the method that ignores measurement errors completely. We assessed the sensitivity of the estimates to various magnitudes of measurement error, error correlations and uncertainty in the literature-reported validation data. We applied the methods to fruits and vegetables (FV) intakes, cigarette smoking (confounder) and all-cause mortality data from the European Prospective Investigation into Cancer and Nutrition study. Using the proposed method resulted in about four times increase in the strength of association between FV intake and mortality. For weakly correlated errors, measurement error in the confounder minimally affected the hazard ratio estimate for FV intake. The effect was more pronounced for strong error correlations. The proposed method permits sensitivity analysis on measurement error structures and accounts for uncertainties in the reported validity coefficients. The method is useful in assessing the direction and quantifying the magnitude of bias in the association due to measurement errors in the confounders.
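
    In its simplest univariate form, this kind of adjustment is a regression-calibration correction by an externally reported validity (attenuation) coefficient; the numbers below are assumptions chosen to echo the reported fourfold strengthening, not the study's estimates.

```python
import numpy as np

# Hypothetical values: observed log hazard ratio per unit of self-reported
# FV intake, and a literature-reported validity (attenuation) coefficient
# for the self-report instrument.
log_hr_observed = np.log(0.96)   # weak observed association
lam = 0.25                       # attenuation factor from external validation

# Univariate regression-calibration correction (ignores confounder error):
log_hr_corrected = log_hr_observed / lam
print(f"HR observed:  {np.exp(log_hr_observed):.3f}")
print(f"HR corrected: {np.exp(log_hr_corrected):.3f}")
# A ~4x stronger association on the log scale, consistent in spirit with
# the roughly fourfold strengthening the authors report.
```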

  18. A Comparison of Three Methods for the Analysis of Skin Flap Viability: Reliability and Validity.

    PubMed

    Tim, Carla Roberta; Martignago, Cintia Cristina Santi; da Silva, Viviane Ribeiro; Dos Santos, Estefany Camila Bonfim; Vieira, Fabiana Nascimento; Parizotto, Nivaldo Antonio; Liebano, Richard Eloin

    2018-05-01

    Objective: Technological advances have provided new alternatives for the analysis of skin flap viability in animal models; however, the interrater validity and reliability of these techniques had yet to be analyzed. The present study aimed to evaluate the interrater validity and reliability of three different methods: weight of paper template (WPT), paper template area (PTA), and photographic analysis. Approach: Sixteen male Wistar rats had their cranially based dorsal skin flap elevated. On the seventh postoperative day, the viable and necrotic areas of the skin flap were recorded using the paper template method and photographic images. The percentage of viable tissue was evaluated using the three methods, simultaneously and independently, by two raters. Interrater reliability was analyzed using the intraclass correlation coefficient, and Bland-Altman plot analysis was used to check for systematic bias in the assessments of data validity. Results: Interrater reliability for WPT, measurement of PTA, and photographic analysis was 0.995, 0.990, and 0.982, respectively. For data validity, a correlation >0.90 was observed for all comparisons made between the three methods. In addition, Bland-Altman plot analysis showed agreement between the methods, with no evidence of systematic bias. Innovation: Digital methods are an excellent choice for assessing skin flap viability; moreover, they make data use and storage easier. Conclusion: Independently of the method used, the interrater reliability and validity proved to be excellent for the analysis of skin flap viability.

  19. International ring trial for the validation of an event-specific Golden Rice 2 quantitative real-time polymerase chain reaction method.

    PubMed

    Jacchia, Sara; Nardini, Elena; Bassani, Niccolò; Savini, Christian; Shim, Jung-Hyun; Trijatmiko, Kurniawan; Kreysa, Joachim; Mazzara, Marco

    2015-05-27

    This article describes the international validation of a quantitative real-time polymerase chain reaction (PCR) detection method for Golden Rice 2. The method consists of a taxon-specific assay amplifying a fragment of the rice Phospholipase D α2 gene, and an event-specific assay designed on the 3' junction between the transgenic insert and plant DNA. We validated the two assays independently, with absolute quantification, and in combination, with relative quantification, on DNA samples prepared in haploid genome equivalents. We assessed the trueness, precision, efficiency, and linearity of the two assays; the results demonstrate that both the individually assessed assays and the entire method fulfill European and international requirements for genetically modified organism (GMO) testing methods, within the dynamic range tested. The homogeneity of the results of the collaborative trial between Europe and Asia is a good indicator of the robustness of the method.
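
    Relative quantification with two assays of this kind typically proceeds through standard curves; the sketch below assumes hypothetical curve parameters and Ct values, not the ring-trial data.

```python
def copies_from_ct(ct, slope, intercept):
    """Copy number from a qPCR standard curve Ct = slope*log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical standard-curve parameters (slope ~ -3.32 means ~100% efficiency)
# and Ct values measured on the same DNA sample.
taxon = copies_from_ct(ct=24.1, slope=-3.32, intercept=40.0)   # rice PLD alpha2
event = copies_from_ct(ct=30.8, slope=-3.35, intercept=40.2)   # GR2 3' junction

gmo_pct = 100.0 * event / taxon   # relative quantification, copy/copy
print(f"Golden Rice 2 content: {gmo_pct:.2f} % (copy/copy)")
```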

  20. Patterns of Cognitive Strengths and Weaknesses: Identification Rates, Agreement, and Validity for Learning Disabilities Identification

    PubMed Central

    Miciak, Jeremy; Fletcher, Jack M.; Stuebing, Karla; Vaughn, Sharon; Tolar, Tammy D.

    2014-01-01

    Purpose: Few empirical investigations have evaluated learning disability (LD) identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability and validity of two proposed PSW methods: the concordance/discordance method (C/DM) and the cross-battery assessment (XBA) method. Methods: Cognitive assessment data for 139 adolescents demonstrating inadequate response to intervention were used to empirically classify participants as meeting or not meeting PSW LD identification criteria under the two approaches, permitting an analysis of (1) LD identification rates, (2) agreement between methods, and (3) external validity. Results: LD identification rates varied between the two methods depending on the cut point for low achievement, with low agreement on LD identification decisions. Comparisons of groups that met and did not meet LD identification criteria on external academic variables were largely null, raising questions of external validity. Conclusions: This study found low agreement and little evidence of validity for LD identification decisions based on PSW methods. An alternative may be to use multiple measures of academic achievement to guide intervention. PMID:24274155

  1. Comprehensive validation scheme for in situ fiber optics dissolution method for pharmaceutical drug product testing.

    PubMed

    Mirza, Tahseen; Liu, Qian Julie; Vivilecchia, Richard; Joshi, Yatindra

    2009-03-01

    There has been a growing interest during the past decade in the use of fiber optics dissolution testing. Use of this novel technology is mainly confined to research and development laboratories; it has not yet emerged as a tool for end-product release testing, despite its ability to generate in situ results and improve efficiency. One potential reason may be the lack of clear validation guidelines that can be applied to assess the suitability of fiber optics. This article describes a comprehensive validation scheme and the development of a reliable, robust, reproducible, and cost-effective dissolution test using fiber optics technology. The test was successfully applied to characterize the dissolution behavior of a 40-mg immediate-release tablet dosage form under development at Novartis Pharmaceuticals, East Hanover, New Jersey. The method was validated for the following parameters: linearity, precision, accuracy, specificity, and robustness. In particular, robustness was evaluated in terms of probe sampling depth and probe orientation. The in situ fiber optic method was found to be comparable to the existing manual sampling dissolution method. Finally, the fiber optic dissolution test was successfully performed by different operators on different days, further strengthening the validity of the method. The results demonstrate that fiber optics technology can be successfully validated for end-product dissolution/release testing. © 2008 Wiley-Liss, Inc. and the American Pharmacists Association

  2. Hopes and Cautions for Instrument-Based Evaluation of Consent Capacity: Results of a Construct Validity Study of Three Instruments

    PubMed Central

    Moye, Jennifer; Azar, Annin R.; Karel, Michele J.; Gurrera, Ronald J.

    2016-01-01

    Does instrument-based evaluation of consent capacity increase the precision and validity of competency assessment, or does ostensible precision provide a false sense of confidence without in fact improving validity? In this paper we critically examine the evidence for the construct validity of three instruments for measuring four functional abilities important in consent capacity: understanding, appreciation, reasoning, and expressing a choice. Instrument-based assessment of these abilities is compared through investigation of a multitrait-multimethod matrix in 88 older adults with mild to moderate dementia. Results provide variable support for validity: strong evidence of good hetero-method validity for the measurement of understanding, mixed evidence for the measurement of reasoning, and strong evidence of poor hetero-method validity for the concepts of appreciation and expressing a choice, although the latter is likely due to extreme range restrictions. The development of empirically based tools for use in capacity evaluation should ultimately enhance the reliability and validity of assessment, yet clearly more research is needed to define and measure the constructs of decisional capacity. We would also emphasize that instrument-based assessment of capacity is only one part of a comprehensive evaluation of competency, which includes consideration of diagnosis, psychiatric and/or cognitive symptomatology, the risk involved in the situation, and individual and cultural differences. PMID:27330455

  3. Comparative study between EDXRF and ASTM E572 methods using two-way ANOVA

    NASA Astrophysics Data System (ADS)

    Krummenauer, A.; Veit, H. M.; Zoppas-Ferreira, J.

    2018-03-01

    Comparison with a reference method is one of the necessary requirements for the validation of non-standard methods. This comparison was made using a designed-experiment approach with two-way ANOVA. In the ANOVA, results obtained using the EDXRF method under validation were compared with results obtained using the ASTM E572-13 standard test method. Fisher's tests (F-tests) were used for the comparative study of the elements molybdenum, niobium, copper, nickel, manganese, chromium, and vanadium. For all elements, the F-tests indicate that the null hypothesis (H0) is not rejected; as a result, there is no significant difference between the compared methods. Therefore, according to this study, the EDXRF method passes this method-comparison requirement for validation.
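
    A two-way ANOVA of this design is straightforward to reproduce; the concentrations below are hypothetical, and the statsmodels formula interface is one reasonable implementation choice.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical measured concentrations (wt%) by the two methods.
df = pd.DataFrame({
    "method":  ["EDXRF"] * 4 + ["ASTM_E572"] * 4,
    "element": ["Mo", "Nb", "Cu", "Ni"] * 2,
    "conc":    [0.21, 0.45, 0.12, 0.33,      # EDXRF results
                0.20, 0.47, 0.13, 0.32],     # reference-method results
})

# Two-way ANOVA without replication: method and element as factors.
model = ols("conc ~ C(method) + C(element)", data=df).fit()
print(anova_lm(model, typ=2))
# If the F-test p-value for C(method) exceeds 0.05, H0 (no difference
# between methods) is not rejected, as in the study's conclusion.
```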

  4. Use of multiple cluster analysis methods to explore the validity of a community outcomes concept map.

    PubMed

    Orsi, Rebecca

    2017-02-01

    Concept mapping is now a commonly-used technique for articulating and evaluating programmatic outcomes. However, research regarding validity of knowledge and outcomes produced with concept mapping is sparse. The current study describes quantitative validity analyses using a concept mapping dataset. We sought to increase the validity of concept mapping evaluation results by running multiple cluster analysis methods and then using several metrics to choose from among solutions. We present four different clustering methods based on analyses using the R statistical software package: partitioning around medoids (PAM), fuzzy analysis (FANNY), agglomerative nesting (AGNES) and divisive analysis (DIANA). We then used the Dunn and Davies-Bouldin indices to assist in choosing a valid cluster solution for a concept mapping outcomes evaluation. We conclude that the validity of the outcomes map is high, based on the analyses described. Finally, we discuss areas for further concept mapping methods research. Copyright © 2016 Elsevier Ltd. All rights reserved.
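
    A sketch of the selection step, assuming hypothetical concept-mapping coordinates; scikit-learn supplies the Davies-Bouldin score and an AGNES-like agglomerative clustering, while the Dunn index is implemented directly (the original analysis used R packages).

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import davies_bouldin_score

def dunn_index(X, labels):
    """Dunn index: min between-cluster distance / max cluster diameter."""
    clusters = [X[labels == k] for k in np.unique(labels)]
    diam = max(cdist(c, c).max() for c in clusters)
    sep = min(cdist(a, b).min() for i, a in enumerate(clusters)
              for b in clusters[i + 1:])
    return sep / diam

# Hypothetical 2-D point coordinates from a concept-mapping analysis.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 0.4, (20, 2)) for m in ([0, 0], [3, 0], [0, 3])])

for k in (2, 3, 4, 5):
    labels = AgglomerativeClustering(n_clusters=k).fit_predict(X)  # AGNES-like
    print(k, f"Dunn={dunn_index(X, labels):.3f}",
          f"DB={davies_bouldin_score(X, labels):.3f}")
# Prefer the k with a high Dunn value and a low Davies-Bouldin value.
```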

  5. Development and Validation of New Discriminative Dissolution Method for Carvedilol Tablets

    PubMed Central

    Raju, V.; Murthy, K. V. R.

    2011-01-01

    The objective of the present study was to develop and validate a discriminative dissolution method for the evaluation of carvedilol tablets. Different conditions, such as the type and volume of dissolution medium and the paddle rotation speed, were evaluated. The best in vitro dissolution profile was obtained using Apparatus II (paddle) at 50 rpm with 900 ml of pH 6.8 phosphate buffer as the dissolution medium. Drug release was evaluated by a high-performance liquid chromatographic method. The dissolution method was validated according to current ICH and FDA guidelines; parameters including specificity, accuracy, precision, and stability were evaluated, and the results obtained were within the acceptable ranges. The dissolution profiles of three different products were compared using ANOVA-based, model-dependent, and model-independent methods; the results showed that there is a significant difference between the products. The dissolution test developed and validated was adequate, given its high discriminative capacity in differentiating the release characteristics of the products tested, and could be applied for the development and quality control of carvedilol tablets. PMID:22923865
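
    A common model-independent comparison is the f2 similarity factor; the dissolution profiles below are hypothetical and chosen so that the test discriminates, mirroring the study's conclusion.

```python
import numpy as np

def f2_similarity(ref, test):
    """FDA/EMA similarity factor f2; profiles are % dissolved at common times."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)                  # mean squared difference
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# Hypothetical % carvedilol dissolved at 5, 10, 15, 30, 45 min.
reference = [28, 51, 71, 88, 95]
product_b = [13, 36, 55, 70, 82]
f2 = f2_similarity(reference, product_b)
print(f"f2 = {f2:.1f}  (f2 >= 50 is conventionally taken as similar)")
```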

  6. Dynamic Rod Worth Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Y.A.; Chapman, D.M.; Hill, D.J.

    2000-12-15

    The dynamic rod worth measurement (DRWM) technique is a method of quickly validating the predicted bank worth of control rods and shutdown rods. The DRWM analytic method is based on three-dimensional, space-time kinetic simulations of the rapid rod movements. Its measurement data are processed with an advanced digital reactivity computer. DRWM has been used as the method of bank worth validation at numerous plant startups with excellent results. The process and methodology of DRWM are described, and measurement results obtained using DRWM are presented.

  7. Reliability and validity of non-radiographic methods of thoracic kyphosis measurement: a systematic review.

    PubMed

    Barrett, Eva; McCreesh, Karen; Lewis, Jeremy

    2014-02-01

    A wide array of instruments is available for non-invasive thoracic kyphosis measurement. Guidelines for selecting outcome measures for use in clinical and research practice recommend that properties such as validity and reliability be considered. This systematic review reports on the reliability and validity of non-invasive methods for measuring thoracic kyphosis. A systematic search of 11 electronic databases located studies assessing the reliability and/or validity of non-invasive thoracic kyphosis measurement techniques. Two independent reviewers used a critical appraisal tool to assess the quality of the retrieved studies, and data were extracted by the primary reviewer. The results were synthesized qualitatively using a level-of-evidence approach. Twenty-seven studies satisfied the eligibility criteria and were included in the review: sixteen investigated reliability, two investigated validity, and nine investigated both. Seventeen of the 27 studies were deemed to be of high quality. In total, 15 methods of thoracic kyphosis measurement were evaluated in the retrieved studies. All investigated methods showed high (ICC ≥ .7) to very high (ICC ≥ .9) levels of reliability, while the validity of the methods ranged from low to very high. The strongest levels of evidence for reliability exist in support of the Debrunner kyphometer, Spinal Mouse, and Flexicurve index; for validity, the strongest evidence supports the arcometer and Flexicurve index. Further reliability and validity studies are required to strengthen the level of evidence for the remaining methods of measurement, and this should be addressed by future research. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Genomic prediction in animals and plants: simulation of data, validation, reporting, and benchmarking.

    PubMed

    Daetwyler, Hans D; Calus, Mario P L; Pong-Wong, Ricardo; de Los Campos, Gustavo; Hickey, John M

    2013-02-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals.
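
    The recommended reporting of accuracy and bias can be sketched with ridge regression as a GBLUP-like predictor; the simulated genotypes, QTL count, and ridge penalty below are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

# Hypothetical marker data: 500 individuals, 2000 SNPs, 50 QTL.
rng = np.random.default_rng(5)
M = rng.integers(0, 3, size=(500, 2000)).astype(float)   # 0/1/2 genotypes
beta = np.zeros(2000)
qtl = rng.choice(2000, 50, replace=False)
beta[qtl] = rng.normal(size=50)
y = M @ beta + rng.normal(scale=np.std(M @ beta), size=500)  # h2 ~ 0.5

# Ridge regression as a GBLUP-like predictor, evaluated by cross-validation.
pred = np.empty_like(y)
for train, test in KFold(5, shuffle=True, random_state=0).split(M):
    pred[test] = Ridge(alpha=500.0).fit(M[train], y[train]).predict(M[test])

accuracy = np.corrcoef(pred, y)[0, 1]                 # r(predicted, observed)
bias = np.polyfit(pred, y, 1)[0]                      # slope; ~1 means unbiased
print(f"accuracy = {accuracy:.3f}, bias (slope) = {bias:.3f}")
```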

  9. Genomic Prediction in Animals and Plants: Simulation of Data, Validation, Reporting, and Benchmarking

    PubMed Central

    Daetwyler, Hans D.; Calus, Mario P. L.; Pong-Wong, Ricardo; de los Campos, Gustavo; Hickey, John M.

    2013-01-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals. PMID:23222650

  10. Computer simulation of Cerebral Arteriovenous Malformation-validation analysis of hemodynamics parameters.

    PubMed

    Kumar, Y Kiran; Mehta, Shashi Bhushan; Ramachandra, Manjunath

    2017-01-01

    The purpose of this work is to provide validation methods for evaluating the hemodynamic assessment of Cerebral Arteriovenous Malformation (CAVM). This article emphasizes the importance of validating noninvasive measurements for CAVM patients, which are designed using lumped models for the complex vessel structure. The validation of the hemodynamic assessment is based on invasive clinical measurements and on cross-validation against Philips' proprietary validated software packages Qflow and 2D Perfusion. The modeling results were validated for 30 CAVM patients at 150 vessel locations. Mean flow, diameter, and pressure were compared between the modeling results and the clinical/cross-validation measurements using an independent two-tailed Student t test. Exponential regression analysis was used to assess the relationships among blood flow, vessel diameter, and pressure. Univariate analyses of the relationships among vessel diameter, vessel cross-sectional area, AVM volume, AVM pressure, and AVM flow were performed with linear or exponential regression. Modeling results were compared with clinical measurements from vessel locations across cerebral regions, and the model was also cross-validated against Qflow and 2D Perfusion. Our results show that the modeling results closely match the clinical measurements, with only small deviations. In this article, we have validated our modeling results against clinical measurements, and a new approach to cross-validation is proposed by demonstrating the accuracy of our results against a validated product in a clinical environment.
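
    The paired, two-tailed comparison used here is a one-liner with scipy; the flows below are hypothetical stand-ins for the modeled and measured values.

```python
import numpy as np
from scipy import stats

# Hypothetical mean flows (ml/s) at the same vessel locations.
modeled  = np.array([3.1, 2.4, 5.6, 1.8, 4.2, 2.9, 3.7, 4.8])
measured = np.array([3.0, 2.6, 5.4, 1.9, 4.4, 2.8, 3.9, 4.7])

# Two-tailed t test of the per-location differences (paired design).
t, p = stats.ttest_rel(modeled, measured)
print(f"t = {t:.3f}, p = {p:.3f}")
# p > 0.05 supports the conclusion that modeled and clinical values
# do not differ significantly, i.e. only small deviations remain.
```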

  11. Verification and Validation Studies for the LAVA CFD Solver

    NASA Technical Reports Server (NTRS)

    Moini-Yekta, Shayan; Barad, Michael F; Sozer, Emre; Brehm, Christoph; Housman, Jeffrey A.; Kiris, Cetin C.

    2013-01-01

    The verification and validation of the Launch Ascent and Vehicle Aerodynamics (LAVA) computational fluid dynamics (CFD) solver is presented. A modern strategy for verification and validation is described incorporating verification tests, validation benchmarks, continuous integration and version control methods for automated testing in a collaborative development environment. The purpose of the approach is to integrate the verification and validation process into the development of the solver and improve productivity. This paper uses the Method of Manufactured Solutions (MMS) for the verification of 2D Euler equations, 3D Navier-Stokes equations as well as turbulence models. A method for systematic refinement of unstructured grids is also presented. Verification using inviscid vortex propagation and flow over a flat plate is highlighted. Simulation results using laminar and turbulent flow past a NACA 0012 airfoil and ONERA M6 wing are validated against experimental and numerical data.
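
    A minimal MMS sketch for a 1-D advection-diffusion model problem, assuming a manufactured solution of our own choosing (the LAVA verification itself targets the Euler and Navier-Stokes equations):

```python
import sympy as sp

# Manufactured solution for the 1-D advection-diffusion equation
#   u_t + a*u_x - nu*u_xx = s(x, t)
# Choose a smooth u, derive the source s symbolically, then feed s to the
# solver and measure the error against u on successively refined grids.
x, t = sp.symbols("x t")
a, nu = 1.0, 0.01
u = sp.sin(2 * sp.pi * (x - 0.3 * t)) * sp.exp(-t)      # manufactured solution

s = sp.diff(u, t) + a * sp.diff(u, x) - nu * sp.diff(u, x, 2)
s_func = sp.lambdify((x, t), sp.simplify(s), "numpy")    # source term for the code
u_func = sp.lambdify((x, t), u, "numpy")                 # exact solution for errors
print(sp.simplify(s))
# The observed order of accuracy is then the slope of log(error) vs log(h).
```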

  12. Center of pressure based segment inertial parameters validation

    PubMed Central

    Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice; Venture, Gentiane

    2017-01-01

    Proposing efficient methods for estimating Body Segment Inertial Parameters (BSIPs) and validating them with a force plate makes it possible to improve the inverse dynamics computations that are necessary in multiple research areas. To date, a variety of studies have been conducted to improve BSIP estimation, but to our knowledge a real validation has never been completely successful. In this paper, we propose a validation method using both kinematic parameters and kinetic parameters (contact forces) gathered from an optical motion capture system and a force plate, respectively. To compare BSIPs, we used the measured contact forces (force plate) as the ground truth and reconstructed the displacements of the Center of Pressure (COP) using inverse dynamics from two different estimation techniques. Only minor differences were seen when comparing the estimated segment masses. Their influence on the COP computation, however, is large, and the results show very distinguishable patterns of COP movement. Improving BSIP techniques is crucial, as deviations in the estimates can result in large errors. This method could be used as a tool to validate BSIP estimation techniques. An advantage of this approach is that it facilitates the comparison between BSIP estimation methods and, more specifically, it shows the accuracy of those parameters. PMID:28662090
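
    For reference, the center of pressure used as ground truth here is obtained from the force plate's measured forces and moments; a minimal sketch of the standard relations is given below (sign conventions and the surface offset z0 vary between plate manufacturers, so treat this as illustrative rather than the authors' exact pipeline):

      import numpy as np

      def cop_from_forceplate(F, M, z0=0.0):
          """Centre of pressure from plate force F = (Fx, Fy, Fz) and moment
          M = (Mx, My, Mz) about the plate origin; z0 is the surface offset.
          Common relations: COPx = (-My - Fx*z0)/Fz, COPy = (Mx - Fy*z0)/Fz."""
          Fx, Fy, Fz = F
          Mx, My, Mz = M
          return np.array([(-My - Fx * z0) / Fz, (Mx - Fy * z0) / Fz])

      # Hypothetical instantaneous reading: mostly vertical load, small moments.
      print(cop_from_forceplate(F=(12.0, -8.0, 750.0), M=(15.0, -22.0, 1.0)))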

  13. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    The land surface evapotranspiration plays an important role in the surface energy balance and the water cycle, and there have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted the widespread attention of researchers and managers. However, remote sensing technology still has many uncertainties, arising from model mechanisms, model inputs, parameterization schemes, and scaling issues in regional estimation. Achieving remotely sensed evapotranspiration (RS_ET) estimates with known confidence is required but difficult. It is therefore indispensable to develop validation methods to quantitatively assess the accuracy and error sources of regional RS_ET estimates. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach for evaluating the accuracy and analyzing the spatio-temporal properties of RS_ET at both basin and local scales, and it is appropriate for validating RS_ET at diverse resolutions and different time-scales. An independent RS_ET validation using this method over the Hai River Basin, China, in 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and validation data such as water-balance evapotranspiration, MODIS evapotranspiration products, precipitation, and land use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared to multi-scale evapotranspiration measurements from eddy covariance (EC) systems and large aperture scintillometers (LAS), respectively, using a footprint model over three typical landscapes. Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years; thus, we also pay attention to the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC systems and 4 LAS are used to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including an empirical statistical model, one-source and two-source models, a Penman-Monteith-based model, a Priestley-Taylor-based model, and a complementary-relationship-based model, are used to perform an intercomparison. The results from both RS_ET validation cases show that the proposed validation methods are reasonable and feasible.

  14. Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems

    NASA Astrophysics Data System (ADS)

    Nieciąg, Halina

    2015-10-01

    Software is used to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses a method for conducting validation studies of a fragment of software that calculates the values of measurands. Due to the number and nature of the variables affecting coordinate measurement results and the complex, multi-dimensional character of measurands, the study used the Monte Carlo method of numerical simulation. The article presents an attempt to improve on the results obtained by classic Monte Carlo tools. The LHS (Latin Hypercube Sampling) algorithm was implemented as an alternative to the simple random sampling scheme of the classic algorithm.
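
    To illustrate the sampling strategy, the snippet below draws a Latin hypercube sample with SciPy's quasi-Monte Carlo module (scipy.stats.qmc, available in SciPy 1.7+); it is a generic illustration, not the paper's implementation, and the bounds are arbitrary placeholders:

      import numpy as np
      from scipy.stats import qmc

      # Latin Hypercube Sampling: each of the n strata of every dimension is
      # sampled exactly once, giving better space coverage than simple random
      # sampling for the same n.
      sampler = qmc.LatinHypercube(d=3, seed=42)
      unit_sample = sampler.random(n=100)              # points in [0, 1)^3

      # Rescale to the physical ranges of the simulated input quantities.
      sample = qmc.scale(unit_sample, l_bounds=[-1.0, 0.0, 10.0],
                         u_bounds=[1.0, 5.0, 20.0])
      print(sample.shape, qmc.discrepancy(unit_sample))  # lower discrepancy than plain MC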

  15. [Traceability of Wine Varieties Using Near Infrared Spectroscopy Combined with Cyclic Voltammetry].

    PubMed

    Li, Meng-hua; Li, Jing-ming; Li, Jun-hui; Zhang, Lu-da; Zhao, Long-lian

    2015-06-01

    To achieve the traceability of wine varieties, a method was proposed to fuse near-infrared (NIR) spectra and cyclic voltammograms (CV), which contain different information, using D-S evidence theory. NIR spectra and CV curves of three different varieties of wines (cabernet sauvignon, merlot, cabernet gernischt) from seven different geographical origins were collected separately. Discriminant models were built using the PLS-DA method. On this basis, D-S evidence theory was applied to fuse the two kinds of discrimination results. After fusion by D-S evidence theory, the accuracy rate for wine variety identification was 95.69% in cross-validation and 94.12% on the validation set. When only wines from Yantai were considered, the accuracy rate was 99.46% in cross-validation and 100% on the validation set. All the fused traceability models achieved better classification results than either individual method. These results suggest that the proposed method, which combines electrochemical information with spectral information using the D-S evidence combination formula, improves model discrimination and is a promising tool for discriminating different kinds of wines.
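
    The fusion step rests on Dempster's rule of combination; a minimal sketch is shown below, with made-up mass assignments for the NIR and CV classifiers over the three varieties (the paper does not publish its mass values):

      def dempster_combine(m1, m2):
          """Dempster's rule: combine two mass functions over frozenset focal elements."""
          combined, conflict = {}, 0.0
          for A, a in m1.items():
              for B, b in m2.items():
                  C = A & B
                  if C:
                      combined[C] = combined.get(C, 0.0) + a * b
                  else:
                      conflict += a * b            # mass falling on the empty set
          if conflict >= 1.0:
              raise ValueError("Total conflict; sources cannot be combined")
          return {A: v / (1.0 - conflict) for A, v in combined.items()}

      # Hypothetical masses from the NIR and CV discriminant models.
      theta = frozenset({"cabernet sauvignon", "merlot", "cabernet gernischt"})
      m_nir = {frozenset({"cabernet sauvignon"}): 0.70, theta: 0.30}
      m_cv = {frozenset({"cabernet sauvignon"}): 0.60,
              frozenset({"merlot"}): 0.20, theta: 0.20}
      print(dempster_combine(m_nir, m_cv))  # fused belief concentrates on cabernet sauvignon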

  16. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for detection of genotoxic carcinogens: II. Summary of definitive validation study results.

    PubMed

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Beevers, Carol; De Boeck, Marlies; Burlinson, Brian; Hobbs, Cheryl A; Kitamoto, Sachiko; Kraynak, Andrew R; McNamee, James; Nakagawa, Yuzuki; Pant, Kamala; Plappert-Helbig, Ulla; Priestley, Catherine; Takasawa, Hironao; Wada, Kunio; Wirnitzer, Uta; Asano, Norihide; Escobar, Patricia A; Lovell, David; Morita, Takeshi; Nakajima, Madoka; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this exercise was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The study protocol was optimized in the pre-validation studies, and then the definitive (4th phase) validation study was conducted in two steps. In the 1st step, assay reproducibility was confirmed among laboratories using four coded reference chemicals and the positive control ethyl methanesulfonate. In the 2nd step, the predictive capability was investigated using 40 coded chemicals with known genotoxic and carcinogenic activity (i.e., genotoxic carcinogens, genotoxic non-carcinogens, non-genotoxic carcinogens, and non-genotoxic non-carcinogens). Based on the results obtained, the in vivo comet assay is concluded to be highly capable of identifying genotoxic chemicals and therefore can serve as a reliable predictor of rodent carcinogenicity.

  17. Semiquantitative determination of mesophilic, aerobic microorganisms in cocoa products using the Soleris NF-TVC method.

    PubMed

    Montei, Carolyn; McDougal, Susan; Mozola, Mark; Rice, Jennifer

    2014-01-01

    The Soleris Non-fermenting Total Viable Count method was previously validated for a wide variety of food products, including cocoa powder. A matrix extension study was conducted to validate the method for use with cocoa butter and cocoa liquor. Test samples included naturally contaminated cocoa liquor and cocoa butter inoculated with natural microbial flora derived from cocoa liquor. A probability of detection statistical model was used to compare Soleris results at multiple test thresholds (dilutions) with aerobic plate counts determined using the AOAC Official Method 966.23 dilution plating method. Results of the two methods were not statistically different at any dilution level in any of the three trials conducted. The Soleris method offers the advantage of results within 24 h, compared to the 48 h required by standard dilution plating methods.

  18. Consensus methods: review of original methods and their main alternatives used in public health

    PubMed Central

    Bourrée, Fanny; Michel, Philippe; Salmi, Louis Rachid

    2008-01-01

    Background: Consensus-based studies are increasingly used as decision-making methods because they have lower production costs than other methods (observation, experimentation, modelling) and provide results more rapidly. The objective of this paper is to describe the principles and procedures of the four main consensus methods (Delphi, nominal group, consensus development conference, and RAND/UCLA), their use as reflected in peer-reviewed publications, and validation studies published in the healthcare literature. Methods: A bibliographic search was performed in PubMed/MEDLINE, Banque de Données Santé Publique (BDSP), The Cochrane Library, Pascal, and Francis. Keywords, headings, and qualifiers corresponding to a list of terms and expressions related to the consensus methods were searched in the thesauri and used in the literature search. A search with the same terms and expressions was performed on the Internet using Google Scholar. Results: All methods, precisely described in the literature, are based on common basic principles such as definition of the subject, selection of experts, and direct or remote interaction processes. They sometimes use quantitative assessment for ranking items. Numerous variants of these methods have been described. Few validation studies have been implemented. Failing to implement these basic principles and failing to describe the methods used to reach consensus were both frequent shortcomings that raise suspicion regarding the validity of consensus methods. Conclusion: When applied to a new domain with important consequences for decision making, a consensus method should first be validated. PMID:19013039

  19. The convergent and discriminant validity of burnout measures in sport: a multi-trait/multi-method analysis.

    PubMed

    Cresswell, Scott L; Eklund, Robert C

    2006-02-01

    Athlete burnout research has been hampered by the lack of an adequate measurement tool. The Athlete Burnout Questionnaire (ABQ) and the Maslach Burnout Inventory General Survey (MBI-GS) are two recently developed self-report instruments designed to assess burnout. The convergent and discriminant validity of the ABQ and MBI-GS were assessed through multi-trait/multi-method analysis with a sporting population. Overall, the ABQ and the MBI-GS displayed acceptable convergent validity with matching subscales highly correlated, and satisfactory internal discriminant validity with lower correlations between non-matching subscales. Both scales also indicated an adequate discrimination between the concepts of burnout and depression. These findings add support to previous findings in non-sporting populations that depression and burnout are separate constructs. Based on the psychometric results, construct validity analysis and practical considerations, the results support the use of the ABQ to assess athlete burnout.

  20. Development and validation of a UPLC method for the determination of duloxetine hydrochloride residues on pharmaceutical manufacturing equipment surfaces

    PubMed Central

    Kumar, Navneet; Sangeetha, D.; Balakrishna, P.

    2011-01-01

    Background: In pharmaceutical industries, it is very important to remove drug residues from the equipment and areas used. The cleaning procedure must be validated, so special attention must be devoted to the methods used for analysis of trace amounts of drugs. A rapid, sensitive, and specific reverse-phase ultra-performance liquid chromatographic (UPLC) method was developed for the quantitative determination of duloxetine in cleaning validation swab samples. Material and Methods: The method was validated using an Acquity UPLC™ HSS T3 (100 × 2.1 mm) 1.8 μm column with an isocratic mobile phase containing a mixture of 0.01 M potassium dihydrogen orthophosphate, pH adjusted to 3.0 with orthophosphoric acid, and acetonitrile (60:40 v/v). The flow rate of the mobile phase was 0.4 ml/min with a column temperature of 40°C and detection wavelength at 230 nm. Cotton swabs, moistened with extraction solution (90% methanol and 10% water), were used to remove any residue of drug from stainless steel, glass, and silica surfaces, and gave recoveries >80% at four concentration levels. Results: The precision of the results, reported as the relative standard deviation, was below 1.5%. The calibration curve was linear over a concentration range from 0.02 to 5.0 μg/ml with a correlation coefficient of 0.999. The detection limit and quantitation limit were 0.006 and 0.02 μg/ml, respectively. The method was validated over a concentration range of 0.05-5.0 μg/ml. Conclusion: The developed method was validated with respect to specificity, linearity, limit of detection and quantification, accuracy, precision, and robustness. PMID:23781449
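
    The linearity and detection-limit figures reported above follow standard ICH Q2-style calculations (LOD ≈ 3.3·σ/S and LOQ ≈ 10·σ/S, with σ the residual standard deviation of the calibration line and S its slope); the sketch below illustrates the arithmetic with hypothetical peak-area responses:

      import numpy as np

      # Hypothetical calibration data: concentration (ug/ml) vs. peak area.
      conc = np.array([0.02, 0.05, 0.5, 1.0, 2.5, 5.0])
      area = np.array([1.1, 2.6, 25.3, 50.9, 126.0, 252.8])

      slope, intercept = np.polyfit(conc, area, 1)
      resid = area - (slope * conc + intercept)
      sigma = np.sqrt(np.sum(resid**2) / (len(conc) - 2))  # residual std deviation

      lod = 3.3 * sigma / slope    # limit of detection
      loq = 10.0 * sigma / slope   # limit of quantitation
      r = np.corrcoef(conc, area)[0, 1]
      print(f"r = {r:.4f}, LOD = {lod:.4f} ug/ml, LOQ = {loq:.4f} ug/ml")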

  1. Development and validation of a UPLC-MS/MS method for simultaneous determination of fotagliptin and its two major metabolites in human plasma and urine.

    PubMed

    Wang, Zhenlei; Jiang, Ji; Hu, Pei; Zhao, Qian

    2017-02-01

    Fotagliptin is a novel dipeptidyl peptidase IV inhibitor under clinical development for the treatment of Type II diabetes mellitus. The objective of this study was to develop and validate a specific and sensitive ultra-performance liquid chromatography (UPLC)-MS/MS method for simultaneous determination of fotagliptin and its two major metabolites in human plasma and urine. Methodology & results: After being pretreated using an automated procedure, the plasma and urine samples were separated and detected using a UPLC-ESI-MS/MS method, which was validated following international guidelines. A selective and sensitive UPLC-MS/MS method was thus developed and validated for the first time for quantifying fotagliptin and its two major metabolites in human plasma and urine. The method was successfully applied to support a clinical study of fotagliptin in healthy Chinese subjects.

  2. Session-RPE Method for Training Load Monitoring: Validity, Ecological Usefulness, and Influencing Factors

    PubMed Central

    Haddad, Monoem; Stylianides, Georgios; Djaoui, Leo; Dellal, Alexandre; Chamari, Karim

    2017-01-01

    Purpose: The aim of this review is to (1) retrieve all data validating the session-rating of perceived exertion (RPE) method using various criteria, (2) highlight the rationale of this method and its ecological usefulness, and (3) describe factors that can alter RPE and that users of this method should take into consideration. Method: The SPORTDiscus, PubMed, and Google Scholar databases were searched for English-language studies published between 2001 and 2016 on the validity and usefulness of the session-RPE method. Studies were considered for further analysis when they used the session-RPE method proposed by Foster et al. in 2001. Participants were athletes of any gender, age, or level of competition. Studies in languages other than English were excluded from the analysis of the validity and reliability of the session-RPE method; other studies were examined to explain the rationale of the session-RPE method and the origin of RPE. Results: A total of 950 studies cited the Foster et al. study that proposed the session-RPE method; 36 of them examined the validity and reliability of the method using the modified CR-10 scale. Conclusion: These studies confirmed the validity, good reliability, and internal consistency of the session-RPE method in several sports and physical activities with men and women of different age categories (children, adolescents, and adults) and various expertise levels. The method could be used as a "stand-alone" method for training load (TL) monitoring, though some authors recommend combining it with other physiological parameters such as heart rate. PMID:29163016
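
    Foster's session-RPE quantification is simple enough to show directly: the training load for a session is the CR-10 rating multiplied by the session duration in minutes, and weekly monotony and strain follow from the daily loads. The numbers below are hypothetical:

      import statistics

      # Hypothetical week of training: CR-10 session ratings and durations (min).
      rpe = [6, 4, 7, 3, 5, 0, 2]
      minutes = [60, 45, 75, 30, 60, 0, 40]

      daily_load = [r * m for r, m in zip(rpe, minutes)]    # session-RPE load (AU)
      weekly_load = sum(daily_load)
      monotony = statistics.mean(daily_load) / statistics.pstdev(daily_load)
      strain = weekly_load * monotony

      print(f"weekly load = {weekly_load} AU, monotony = {monotony:.2f}, strain = {strain:.0f}")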

  3. Optimisation and validation of a rapid and efficient microemulsion liquid chromatographic (MELC) method for the determination of paracetamol (acetaminophen) content in a suppository formulation.

    PubMed

    McEvoy, Eamon; Donegan, Sheila; Power, Joe; Altria, Kevin

    2007-05-09

    A rapid and efficient oil-in-water microemulsion liquid chromatographic method has been optimised and validated for the analysis of paracetamol in a suppository formulation. Excellent linearity, accuracy, precision, and assay results were obtained. Lengthy sample pre-treatment/extraction procedures were eliminated due to the solubilising power of the microemulsion, and rapid analysis times were achieved. The method was optimised to achieve rapid analysis times and relatively high peak efficiencies. A standard microemulsion composition of 33 g SDS, 66 g butan-1-ol, and 8 g n-octane in 1 L of 0.05% TFA, modified with acetonitrile, has been shown to be suitable for the rapid analysis of paracetamol in highly hydrophobic preparations under isocratic conditions. Validated assay results and the overall analysis time of the optimised method were compared to British Pharmacopoeia reference methods. Sample preparation and analysis times for the MELC analysis of paracetamol in a suppository were extremely rapid compared to the reference method, and similar assay results were achieved. A gradient MELC method using the same microemulsion has been optimised for the resolution of paracetamol and five of its related substances in approximately 7 min.

  4. Use of the Method of Triads in the Validation of Sodium and Potassium Intake in the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil)

    PubMed Central

    Pereira, Taísa Sabrina Silva; Cade, Nágela Valadão; Mill, José Geraldo; Sichieri, Rosely; Molina, Maria del Carmen Bisi

    2016-01-01

    Introduction: Biomarkers are a good choice for use in the validation of food frequency questionnaires due to the independence of their random errors. Objective: To assess the validity of the potassium and sodium intake estimated using the ELSA-Brasil Food Frequency Questionnaire. Subjects/Methods: A subsample of participants in the ELSA-Brasil cohort was included in this study in 2009. Sodium and potassium intake were estimated using three methods: a semi-quantitative food frequency questionnaire, 12-hour nocturnal urinary excretion, and three 24-hour food records. Correlation coefficients were calculated between the methods, and validity coefficients were calculated using the method of triads. The 95% confidence intervals for the validity coefficients were estimated using bootstrap sampling. Exact and adjacent agreement and disagreement of the estimated sodium and potassium intake quintiles were compared among the three methods. Results: The sample consisted of 246 participants, aged 53 ± 8 years, 52% women. Validity coefficients for sodium were weak for the food frequency questionnaire (ρ = 0.37 against actual intake) and the biomarker (ρ = 0.21), and moderate for the food records (ρ = 0.56). The validity coefficients were higher for potassium (ρ = 0.60 for the food frequency questionnaire, ρ = 0.42 for the biomarker, and ρ = 0.79 for the food records). Conclusions: The ELSA-Brasil Food Frequency Questionnaire showed good validity for estimating potassium intake in epidemiological studies. For sodium, validity was weak, likely because salt added to prepared food was not quantified. PMID:28030625
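
    In the method of triads, the validity coefficient of each instrument against true intake is derived from the three pairwise correlations between instruments; a minimal sketch of the formula (not the paper's data pipeline) is below, with hypothetical correlations:

      import math

      def method_of_triads(r_qr, r_qb, r_rb):
          """Validity coefficients of questionnaire (Q), records (R) and biomarker (B)
          against true intake, from the pairwise correlations between instruments:
              rho_Q = sqrt(r_QR * r_QB / r_RB), and cyclically for R and B.
          Values above 1 (Heywood cases) are conventionally truncated to 1."""
          rho_q = math.sqrt(r_qr * r_qb / r_rb)
          rho_r = math.sqrt(r_qr * r_rb / r_qb)
          rho_b = math.sqrt(r_qb * r_rb / r_qr)
          return rho_q, rho_r, rho_b

      # Hypothetical pairwise correlations between the three instruments.
      print(method_of_triads(r_qr=0.45, r_qb=0.25, r_rb=0.35))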

  5. A qualitative and quantitative HPTLC densitometry method for the analysis of cannabinoids in Cannabis sativa L.

    PubMed

    Fischedick, Justin T; Glas, Ronald; Hazekamp, Arno; Verpoorte, Rob

    2009-01-01

    Cannabis and cannabinoid-based medicines are currently under serious investigation for legitimate development as medicinal agents, necessitating new low-cost, high-throughput analytical methods for quality control. The goal of this study was to develop and validate, according to ICH guidelines, a simple, rapid HPTLC method for the quantification of Δ9-tetrahydrocannabinol (Δ9-THC) and qualitative analysis of the other main neutral cannabinoids found in cannabis. The method was developed and validated with the use of pure cannabinoid reference standards and two medicinal cannabis cultivars. Accuracy was determined by comparing results obtained from the HPTLC method with those obtained from a validated HPLC method. Δ9-THC gives linear calibration curves in the range of 50-500 ng at 206 nm, with a linear regression of y = 11.858x + 125.99 and r² = 0.9968. The results show that the HPTLC method is reproducible and accurate for the quantification of Δ9-THC in cannabis. The method is also useful for the qualitative screening of the main neutral cannabinoids found in cannabis cultivars.

  6. A Possible Tool for Checking Errors in the INAA Results, Based on Neutron Data and Method Validation

    NASA Astrophysics Data System (ADS)

    Cincu, Em.; Grigore, Ioana Manea; Barbos, D.; Cazan, I. L.; Manu, V.

    2008-08-01

    This work presents preliminary results on a new type of possible application of INAA elemental analysis experiments, useful for checking errors that occur during the investigation of unknown samples; it relies on INAA method validation experiments and on the accuracy of neutron data from the literature. The paper comprises two sections. The first briefly presents the steps of the experimental tests carried out for INAA method validation and for establishing the 'ACTIVA-N' laboratory performance, which is, at the same time, an illustration of the laboratory's progress toward performance. Section 2 presents our recent INAA results on CRMs, whose interpretation opens a discussion about the usefulness of a tool for checking possible errors that differs from the usual statistical procedures. The questionable aspects and the requirements for developing a practical checking tool are discussed.

  7. Multiple Versus Single Set Validation of Multivariate Models to Avoid Mistakes.

    PubMed

    Harrington, Peter de Boves

    2018-01-02

    Validation of multivariate models is of current importance for a wide range of chemical applications. Although important, it is often neglected. The common practice is to use a single external validation set for evaluation. This approach is deficient and may mislead investigators with results that are specific to the single validation set of data. In addition, no statistics are available regarding the precision of a derived figure of merit (FOM). A statistical approach using bootstrapped Latin partitions is advocated. This validation method makes efficient use of the data because each object is used once for validation. The approach was reviewed a decade earlier, but primarily for the optimization of chemometric models; this review presents the reasons it should be used for generalized statistical validation. Average FOMs with confidence intervals are reported, and powerful matched-sample statistics may be applied for comparing models and methods. Examples demonstrate the problems with single validation sets.

  8. Experimental validation of the intrinsic spatial efficiency method over a wide range of sizes for cylindrical sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Larroquette, Philippe; Camilla, S.

    The intrinsic spatial efficiency method is a new absolute method to determine the efficiency of a gamma spectroscopy system for any extended source. In the original work the method was experimentally demonstrated and validated for homogeneous cylindrical sources containing 137Cs, whose sizes varied over a small range (29.5 mm radius and 15.0 to 25.9 mm height). In this work we present an extension of the validation over a wide range of sizes: the dimensions of the cylindrical sources vary between 10 and 40 mm in height and between 8 and 30 mm in radius. The cylindrical sources were prepared using the reference material IAEA-372, which had a specific activity of 11320 Bq/kg as of July 2006. The results were better for the sources with 29 mm radius, showing relative bias of less than 5%, and for the sources with 10 mm height, showing relative bias of less than 6%. Compared with the results of the original work presenting the method, the majority of these results show excellent agreement.

  9. Local Validation of Global Estimates of Biosphere Properties: Synthesis of Scaling Methods and Results Across Several Major Biomes

    NASA Technical Reports Server (NTRS)

    Cohen, Warren B.; Wessman, Carol A.; Aber, John D.; VanderCaslte, John R.; Running, Steven W.

    1998-01-01

    To assist in validating future MODIS land cover, LAI, IPAR, and NPP products, this project conducted a series of prototyping exercises that resulted in enhanced understanding of the issues regarding such validation. As a result, we have several papers to appear as a special issue of Remote Sensing of Environment in 1999. Also, we have been successful at obtaining a follow-on grant to pursue actual validation of these products over the next several years. This document consists of a delivery letter, including a listing of published papers.

  10. Dynamic measurement of speed of sound in n-Heptane by ultrasonics during fuel injections.

    PubMed

    Minnetti, Elisa; Pandarese, Giuseppe; Evangelisti, Piersavio; Verdugo, Francisco Rodriguez; Ungaro, Carmine; Bastari, Alessandro; Paone, Nicola

    2017-11-01

    The paper presents a technique to measure the speed of sound in fuels based on pulse-echo ultrasound. The method is applied inside the test chamber of a Zeuch-type instrument used for indirect measurement of the injection rate (Mexus). The paper outlines the pulse-echo method, considering probe installation, ultrasound beam propagation inside the test chamber, and the typical signals obtained, as well as different processing algorithms. The method is validated in static conditions by comparing the experimental results to the NIST database for both water and n-Heptane. The ultrasonic system is synchronized to the injector so that time-resolved samples of the speed of sound can be acquired during a series of injections. Results at different operating conditions in n-Heptane are shown. An uncertainty analysis supports the analysis of the results and allows the method to be validated. Experimental results show that the speed of sound varies by less than 1% during an injection event, so the Mexus model assumption that it is constant during the injection is valid.

  11. Validation of biological activity testing procedure of recombinant human interleukin-7.

    PubMed

    Lutsenko, T N; Kovalenko, M V; Galkin, O Yu

    2017-01-01

    A validation procedure for a method of monitoring the biological activity of recombinant human interleukin-7 has been developed and conducted according to the requirements of national and international recommendations. The method is based on the ability of recombinant human interleukin-7 to induce proliferation of T lymphocytes. It has been shown that peripheral blood mononuclear cells (PBMCs) derived from blood, or cell lines, can be used to control the biological activity of recombinant human interleukin-7. The validation characteristics that should be determined depend on the method, the type of product or test object/measurement, and the biological test systems used in the research. The validation of the method for controlling the biological activity of recombinant human interleukin-7 in peripheral blood mononuclear cells showed satisfactory results on all parameters tested, such as specificity, accuracy, precision, and linearity.

  12. A study in the founding of applied behavior analysis through its publications.

    PubMed

    Morris, Edward K; Altus, Deborah E; Smith, Nathaniel G

    2013-01-01

    This article reports a study of the founding of applied behavior analysis through its publications. Our methods included (a) hand searches of sources (e.g., journals, reference lists), (b) search terms (i.e., early, applied, behavioral, research, literature), (c) inclusion criteria (e.g., the field's applied dimension), and (d) challenges to their face and content validity. Our results were 36 articles published between 1959 and 1967 that we organized into 4 groups: 12 in 3 programs of research and 24 others. Our discussion addresses (a) limitations in our method (e.g., the completeness of our search), (b) challenges to the validity of our methods and results (e.g., convergent validity), and (c) priority claims about the field's founding. We conclude that the claims are irresolvable because identification of the founding publications depends significantly on methods and because the field's founding was an evolutionary process. We close with suggestions for future research.

  13. Forward ultrasonic model validation using wavefield imaging methods

    NASA Astrophysics Data System (ADS)

    Blackshire, James L.

    2018-04-01

    The validation of forward ultrasonic wave propagation models in a complex titanium polycrystalline material system is accomplished using wavefield imaging methods. An innovative measurement approach is described that permits the visualization and quantitative evaluation of bulk elastic wave propagation and scattering behaviors in the titanium material for a typical focused immersion ultrasound measurement process. Results are provided for the determination and direct comparison of the ultrasonic beam's focal properties, mode-converted shear wave position and angle, and scattering and reflection from millimeter-sized microtexture regions (MTRs) within the titanium material. The approach and results are important with respect to understanding the root-cause backscatter signal responses generated in aerospace engine materials, where model-assisted methods are being used to understand the probabilistic nature of the backscatter signal content. Wavefield imaging methods are shown to be an effective means for corroborating and validating important forward model predictions in a direct manner using time- and spatially-resolved displacement field amplitude measurements.

  15. Landscape scale estimation of soil carbon stock using 3D modelling.

    PubMed

    Veronesi, F; Corstanje, R; Mayr, T

    2014-07-15

    Soil C is the largest pool of carbon in the terrestrial biosphere, and yet the processes of C accumulation, transformation, and loss are poorly accounted for. This is partly because soil C is not uniformly distributed through the soil depth profile, and most current landscape-level predictions of C do not adequately account for its vertical distribution. In this study, we apply a method based on simple soil-specific depth functions to map the soil C stock in three dimensions at landscape scale. We used soil C and bulk density data from the Soil Survey for England and Wales to map an area of the West Midlands region of approximately 13,948 km². We applied a method that describes the variation through the soil profile and interpolates it across the landscape using well-established soil drivers such as relief, land cover, and geology. The results indicate that this mapping method can effectively reproduce the observed variation in the soil profile samples. The mapping results were validated using cross-validation and an independent validation. The cross-validation resulted in an R² of 36% for soil C and 44% for BULKD; these results are generally in line with previous validated studies. In addition, an independent validation was undertaken, comparing the predictions against the National Soil Inventory (NSI) dataset. The majority of the residuals of this validation are within ±5% of soil C, indicating a high level of accuracy in replicating topsoil values. The results were also compared to a previous study estimating the carbon stock of the UK. We discuss the implications of our results within the context of soil C loss factors such as erosion and the impact on regional C process models.

  16. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    PubMed

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method.

  17. Validated spectrophotometric method for the determination, spectroscopic characterization and thermal structural analysis of duloxetine with 1,2-naphthoquinone-4-sulphonate

    NASA Astrophysics Data System (ADS)

    Ulu, Sevgi Tatar; Elmali, Fikriye Tuncel

    2012-03-01

    A novel, selective, sensitive, and simple spectrophotometric method was developed and validated for the determination of the antidepressant duloxetine hydrochloride in a pharmaceutical preparation. The method is based on the reaction of duloxetine hydrochloride with 1,2-naphthoquinone-4-sulphonate (NQS) in alkaline media to yield an orange-colored product. The formation of this complex was also confirmed by UV-visible, FTIR, 1H NMR, and mass spectrometry techniques and by thermal analysis. The method was validated for various parameters according to ICH guidelines. Beer's law is obeyed in the range of 5.0-60 μg/mL at the maximum absorption wavelength of 480 nm. The detection limit is 0.99 μg/mL and the recovery rate is in the range of 98.10-99.57%. The proposed method was validated and applied to the determination of duloxetine hydrochloride in a pharmaceutical preparation. The results were statistically analyzed and compared to those of a reference UV spectrophotometric method.

  18. Vacuum decay container closure integrity leak test method development and validation for a lyophilized product-package system.

    PubMed

    Patel, Jayshree; Mulhall, Brian; Wolf, Heinz; Klohr, Steven; Guazzo, Dana Morton

    2011-01-01

    A leak test performed according to ASTM F2338-09 Standard Test Method for Nondestructive Detection of Leaks in Packages by Vacuum Decay Method was developed and validated for container-closure integrity verification of a lyophilized product in a parenteral vial package system. This nondestructive leak test method is intended for use in manufacturing as an in-process package integrity check, and for testing product stored on stability in lieu of sterility tests. Method development and optimization challenge studies incorporated artificially defective packages representing a range of glass vial wall and sealing surface defects, as well as various elastomeric stopper defects. Method validation required 3 days of random-order replicate testing of a test sample population of negative-control, no-defect packages and positive-control, with-defect packages. Positive-control packages were prepared using vials each with a single hole laser-drilled through the glass vial wall. Hole creation and hole size certification were performed by Lenox Laser. Validation study results successfully demonstrated the vacuum decay leak test method's ability to accurately and reliably detect those packages with laser-drilled holes greater than or equal to approximately 5 μm in nominal diameter. All development and validation studies were performed at Whitehouse Analytical Laboratories in Whitehouse, NJ, under the direction of consultant Dana Guazzo of RxPax, LLC, using a VeriPac 455 Micro Leak Test System by Packaging Technologies & Inspection (Tuckahoe, NY). Bristol Myers Squibb (New Brunswick, NJ) fully subsidized all work. A leak test performed according to ASTM F2338-09 Standard Test Method for Nondestructive Detection of Leaks in Packages by Vacuum Decay Method was developed and validated to detect defects in stoppered vial packages containing lyophilized product for injection. This nondestructive leak test method is intended for use in manufacturing as an in-process package integrity check, and for testing product stored on stability in lieu of sterility tests. Test method validation study results proved the method capable of detecting holes laser-drilled through the glass vial wall greater than or equal to 5 μm in nominal diameter. Total test time is less than 1 min per package. All method development and validation studies were performed at Whitehouse Analytical Laboratories in Whitehouse, NJ, under the direction of consultant Dana Guazzo of RxPax, LLC, using a VeriPac 455 Micro Leak Test System by Packaging Technologies & Inspection (Tuckahoe, NY). Bristol Myers Squibb (New Brunswick, NJ) fully subsidized all work.

  19. Designing the Nuclear Energy Attitude Scale.

    ERIC Educational Resources Information Center

    Calhoun, Lawrence; And Others

    1988-01-01

    Presents a refined method for designing a valid and reliable Likert-type scale to test attitudes toward the generation of electricity from nuclear energy. Discusses various tests of validity that were used on the nuclear energy scale. Reports results of administration and concludes that the test is both reliable and valid. (CW)

  20. Beware of external validation! - A Comparative Study of Several Validation Techniques used in QSAR Modelling.

    PubMed

    Majumdar, Subhabrata; Basak, Subhash C

    2018-04-26

    Proper validation is an important aspect of QSAR modelling. External validation is one of the most widely used validation methods in QSAR, where the model is built on a subset of the data and validated on the rest of the samples. However, its effectiveness for datasets with a small number of samples but a large number of predictors remains suspect. Calculating hundreds or thousands of molecular descriptors using currently available software has become the norm in QSAR research, owing to computational advances in the past few decades. Thus, for n chemical compounds and p descriptors calculated for each molecule, the typical chemometric dataset today has a high value of p but a small n (i.e., n < p). Motivated by recent evidence of the inadequacy of external validation in estimating the true predictive capability of a statistical model, this paper performs an extensive comparative study of this method against several other validation techniques. We compared four validation methods: leave-one-out, K-fold, external, and multi-split validation, using statistical models built with LASSO regression, which simultaneously performs variable selection and modelling. We used 300 simulated datasets and one real dataset of 95 congeneric amine mutagens for this evaluation. External validation metrics have high variation among different random splits of the data and hence are not recommended for predictive QSAR models. LOO has the overall best performance among all validation methods applied in our scenario, and results from external validation are too unstable for the datasets we analyzed. Based on our findings, we recommend using the LOO procedure for validating QSAR predictive models built on high-dimensional small-sample data.
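
    The instability described above is easy to reproduce; the sketch below (simulated data and a scikit-learn LASSO, not the paper's datasets) contrasts a leave-one-out Q² with external-split R² values that swing with the random seed:

      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.model_selection import LeaveOneOut, cross_val_predict, train_test_split

      rng = np.random.default_rng(0)
      n, p = 95, 300                                   # n < p, as in typical QSAR data
      X = rng.normal(size=(n, p))
      y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=n)

      model = Lasso(alpha=0.1, max_iter=50000)

      # Leave-one-out: each compound is predicted exactly once.
      y_hat = cross_val_predict(model, X, y, cv=LeaveOneOut())
      q2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
      print(f"LOO Q2 = {q2:.3f}")

      # External validation: the figure of merit depends heavily on the split.
      for seed in range(5):
          X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
          print(f"split {seed}: external R2 = {model.fit(X_tr, y_tr).score(X_te, y_te):.3f}")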

  1. Reliability and validity of a brief method to assess nociceptive flexion reflex (NFR) threshold.

    PubMed

    Rhudy, Jamie L; France, Christopher R

    2011-07-01

    The nociceptive flexion reflex (NFR) is a physiological tool to study spinal nociception. However, NFR assessment can take several minutes and expose participants to repeated suprathreshold stimulations. The 4 studies reported here assessed the reliability and validity of a brief method to assess NFR threshold that uses a single ascending series of stimulations (Peak 1 NFR), by comparing it to a well-validated method that uses 3 ascending/descending staircases of stimulations (Staircase NFR). Correlations between the NFR definitions were high, were on par with test-retest correlations of Staircase NFR, and were not affected by participant sex or chronic pain status. Results also indicated the test-retest reliabilities for the 2 definitions were similar. Using larger stimulus increments (4 mA) to assess Peak 1 NFR tended to result in higher NFR threshold estimates than the Staircase NFR definition, whereas smaller stimulus increments (2 mA) tended to result in lower NFR threshold estimates than the Staircase NFR definition. Neither NFR definition was correlated with anxiety, pain catastrophizing, or anxiety sensitivity. In sum, a single ascending series of electrical stimulations results in a reliable and valid estimate of NFR threshold. However, caution may be warranted when comparing NFR thresholds across studies that differ in the ascending stimulus increments. This brief method to assess NFR threshold is reliable and valid; therefore, it should be useful to clinical pain researchers interested in quickly assessing inter- and intra-individual differences in spinal nociceptive processes.

  2. Transcriptomic SNP discovery for custom genotyping arrays: impacts of sequence data, SNP calling method and genotyping technology on the probability of validation success.

    PubMed

    Humble, Emily; Thorne, Michael A S; Forcada, Jaume; Hoffman, Joseph I

    2016-08-26

    Single nucleotide polymorphism (SNP) discovery is an important goal of many studies. However, the number of 'putative' SNPs discovered from a sequence resource may not provide a reliable indication of the number that will successfully validate with a given genotyping technology. For this it may be necessary to account for factors such as the method used for SNP discovery and the type of sequence data from which it originates, suitability of the SNP flanking sequences for probe design, and genomic context. To explore the relative importance of these and other factors, we used Illumina sequencing to augment an existing Roche 454 transcriptome assembly for the Antarctic fur seal (Arctocephalus gazella). We then mapped the raw Illumina reads to the new hybrid transcriptome using BWA and BOWTIE2 before calling SNPs with GATK. The resulting markers were pooled with two existing sets of SNPs called from the original 454 assembly using NEWBLER and SWAP454. Finally, we explored the extent to which SNPs discovered using these four methods overlapped and predicted the corresponding validation outcomes for both Illumina Infinium iSelect HD and Affymetrix Axiom arrays. Collating markers across all discovery methods resulted in a global list of 34,718 SNPs. However, concordance between the methods was surprisingly poor, with only 51.0 % of SNPs being discovered by more than one method and 13.5 % being called from both the 454 and Illumina datasets. Using a predictive modeling approach, we could also show that SNPs called from the Illumina data were on average more likely to successfully validate, as were SNPs called by more than one method. Above and beyond this pattern, predicted validation outcomes were also consistently better for Affymetrix Axiom arrays. Our results suggest that focusing on SNPs called by more than one method could potentially improve validation outcomes. They also highlight possible differences between alternative genotyping technologies that could be explored in future studies of non-model organisms.

  3. Validation of an in-vivo proton beam range check method in an anthropomorphic pelvic phantom using dose measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bentefour, El H., E-mail: hassan.bentefour@iba-group.com; Prieels, Damien; Tang, Shikui

    Purpose: In-vivo dosimetry and beam range verification in proton therapy could play a significant role in proton treatment validation and improvement. In-vivo beam range verification, in particular, could enable new treatment techniques, one of which could be the use of anterior fields for prostate treatment instead of the opposed lateral fields of current practice. This paper reports a validation study of an in-vivo range verification method that can reduce the range uncertainty to submillimeter levels and potentially allow for in-vivo dosimetry. Methods: An anthropomorphic pelvic phantom is used to validate the clinical potential of the time-resolved dose method for range verification in the case of prostate treatment using range-modulated anterior proton beams. The method uses a 3 × 4 matrix of 1 mm diodes mounted in a water balloon and read by an ADC system at 100 kHz. The method is first validated against beam range measurements obtained by dose extinction. The validation is first completed in a water phantom and then in the pelvic phantom, for both open-field and treatment-field configurations. The beam range results are then compared with the water equivalent path length (WEPL) values computed by the treatment planning system XIO. Results: Beam range measurements from the time-resolved dose method and the dose extinction method agree with submillimeter precision in the water phantom. For the pelvic phantom, when discarding two of the diodes that show signs of significant range mixing, the two methods agree within ±1 mm. A dose of only 7 mGy is sufficient to achieve this result. The comparison with the WEPL computed by the treatment planning system (XIO) shows that XIO underestimates the proton beam range. Quantifying the exact XIO range underestimation depends on the strategy used to evaluate the WEPL results; to our best evaluation, XIO underestimates the treatment beam range by between 1.7% and 4.1%. Conclusions: The time-resolved dose measurement method satisfies the two basic requirements, WEPL accuracy and minimum dose, necessary for clinical use, and thus shows potential for in-vivo proton range verification. Further development is needed, namely devising a workflow that takes into account the limits imposed by proton range mixing and the susceptibility of the comparison of measured and expected WEPLs to errors in the detector positions. The method may also be used for in-vivo dosimetry and could benefit various proton therapy treatments.

  4. A uniformly valid approximation algorithm for nonlinear ordinary singular perturbation problems with boundary layer solutions.

    PubMed

    Cengizci, Süleyman; Atay, Mehmet Tarık; Eryılmaz, Aytekin

    2016-01-01

    This paper is concerned with two-point boundary value problems for singularly perturbed nonlinear ordinary differential equations. The case in which the solution has only one boundary layer is examined. An efficient method, the so-called Successive Complementary Expansion Method (SCEM), is used to obtain uniformly valid approximations to this kind of solution. Four test problems are considered to check the efficiency and accuracy of the proposed method. The numerical results are found to be in good agreement with exact and existing solutions in the literature. The results confirm that SCEM has an advantage over other existing methods in terms of ease of application and effectiveness.

  5. Simplified Model to Predict Deflection and Natural Frequency of Steel Pole Structures

    NASA Astrophysics Data System (ADS)

    Balagopal, R.; Prasad Rao, N.; Rokade, R. P.

    2018-04-01

    Steel pole structures are a suitable alternative to lattice transmission-line towers, given the difficulty of finding land for new rights of way for the installation of new lattice towers. Steel poles have a tapered cross section and are generally used for communication, power transmission, and lighting purposes. Determining the deflection of a steel pole is important for assessing its functional requirements: excessive deflection may cause signal attenuation and short-circuiting problems in communication and transmission poles. In this paper, a simplified method is proposed to determine both primary and secondary deflection, based on the dummy unit load/moment method. The deflection predicted by the proposed method is validated against full-scale experimental investigations conducted on 8 m and 30 m high lighting masts and on 132 kV and 400 kV transmission poles, and the two are found to be in close agreement. Determination of the natural frequency is an important criterion for examining dynamic sensitivity. A simplified semi-empirical method using the static deflection from the proposed method is formulated to determine the natural frequency. The natural frequency predicted by the proposed method is validated against FE analysis results, and the predictions are further validated against experimental results available in the literature.
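
    The link between static deflection and fundamental frequency that such semi-empirical methods exploit can be illustrated with the classic Rayleigh-type estimate below; the 0.20 coefficient holds for a uniform cantilever under self-weight and is an illustrative assumption, not the authors' exact formulation:

      import math

      def f1_from_tip_deflection(delta_tip, g=9.81, coeff=0.20):
          """Estimate the fundamental frequency (Hz) of a cantilever-like pole
          from its static tip deflection (m) under self-weight.
          For a uniform cantilever, f1 ~ 0.20*sqrt(g/delta); the single-degree-
          of-freedom analogue uses 1/(2*pi) ~ 0.159 instead. A tapered pole
          would need a coefficient calibrated to its mass distribution."""
          return coeff * math.sqrt(g / delta_tip)

      print(f"f1 ~ {f1_from_tip_deflection(0.05):.2f} Hz for a 50 mm tip deflection")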

  6. Cross-validation pitfalls when selecting and assessing regression and classification models.

    PubMed

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
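
    A compact sketch of repeated nested cross-validation in the spirit described above, using scikit-learn on simulated data (the grid, fold counts, and repetition count are arbitrary choices, not the authors'): an inner grid-search loop tunes the parameter while an outer loop, repeated over different shuffles, estimates the prediction error and its variation.

      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 50))
      y = X[:, :3] @ np.array([1.5, -2.0, 1.0]) + rng.normal(scale=0.5, size=120)

      inner = KFold(n_splits=5, shuffle=True, random_state=0)      # parameter tuning
      tuner = GridSearchCV(Lasso(max_iter=50000),
                           {"alpha": np.logspace(-3, 1, 20)}, cv=inner)

      # Repeating the outer loop exposes the variance a single V-fold split hides.
      scores = [cross_val_score(tuner, X, y,
                                cv=KFold(n_splits=5, shuffle=True, random_state=rep)).mean()
                for rep in range(10)]
      print(f"R2 = {np.mean(scores):.3f} +/- {np.std(scores):.3f} over 10 repetitions")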

  7. Imputation of missing data in time series for air pollutants

    NASA Astrophysics Data System (ADS)

    Junger, W. L.; Ponce de Leon, A.

    2015-02-01

    Missing data are a major concern in epidemiological studies of the health effects of environmental air pollutants. This article presents an imputation-based method suitable for multivariate time series data, which uses the EM algorithm under the assumption of a normal distribution. Different approaches are considered for filtering the temporal component. A simulation study was performed to assess the validity and performance of the proposed method in comparison with some frequently used methods. Simulations showed that when the amount of missing data was as low as 5%, the complete-data analysis yielded satisfactory results regardless of the generating mechanism of the missing data, whereas validity began to degenerate when the proportion of missing values exceeded 10%. The proposed imputation method exhibited good accuracy and precision in different settings with respect to the patterns of missing observations. Most of the imputations yielded valid results, even under missingness not at random. The methods proposed in this study are implemented as a package called mtsdi for the statistical software system R.
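
    The core EM step for normally distributed data, imputing each record's missing entries from its observed ones and re-estimating the moments with the conditional-covariance correction, can be sketched as follows (a generic multivariate-normal EM, without the temporal filtering that mtsdi adds; each row is assumed to have at least one observed value):

      import numpy as np

      def em_impute(X, n_iter=50):
          """EM imputation for rows assumed i.i.d. multivariate normal (NaN = missing)."""
          X = np.asarray(X, dtype=float).copy()
          n, p = X.shape
          miss = np.isnan(X)
          mu = np.nanmean(X, axis=0)                 # crude starting values
          Xf = np.where(miss, mu, X)
          Sigma = np.cov(Xf, rowvar=False)
          for _ in range(n_iter):
              C_sum = np.zeros((p, p))
              for i in range(n):
                  m, o = miss[i], ~miss[i]
                  if not m.any():
                      continue
                  B = Sigma[np.ix_(m, o)] @ np.linalg.pinv(Sigma[np.ix_(o, o)])
                  Xf[i, m] = mu[m] + B @ (Xf[i, o] - mu[o])    # E-step: conditional mean
                  C = Sigma[np.ix_(m, m)] - B @ Sigma[np.ix_(o, m)]
                  C_sum[np.ix_(m, m)] += C                     # conditional covariance
              mu = Xf.mean(axis=0)                             # M-step: update moments
              d = Xf - mu
              Sigma = (d.T @ d + C_sum) / n
          return Xf

      # Toy check: correlated Gaussian data with ~10% values knocked out at random.
      rng = np.random.default_rng(1)
      Z = rng.multivariate_normal([0, 0, 0],
                                  [[1, .6, .3], [.6, 1, .5], [.3, .5, 1]], size=200)
      Z[rng.random(Z.shape) < 0.1] = np.nan
      print(em_impute(Z)[:3])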

  8. Probability of identification: a statistical model for the validation of qualitative botanical identification methods.

    PubMed

    LaBudde, Robert A; Harnly, James M

    2012-01-01

    A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. The report describes the development of validation studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single-collaborator and multicollaborative study examples are given.
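
    A minimal sketch of the basic POI statistic with a binomial (Wilson) confidence interval, an assumed but standard way to summarize binary replicate data of this kind:

```python
from statsmodels.stats.proportion import proportion_confint

def poi_with_ci(identified: int, replicates: int, alpha: float = 0.05):
    """Probability of identification (POI) with a Wilson score interval."""
    poi = identified / replicates
    low, high = proportion_confint(identified, replicates,
                                   alpha=alpha, method="wilson")
    return poi, low, high

# Example: 27 of 30 replicates of the target material identified
poi, low, high = poi_with_ci(27, 30)
print(f"POI = {poi:.2f} (95% CI {low:.2f}-{high:.2f})")
```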

  9. Large scale study of multiple-molecule queries

    PubMed Central

    2009-01-01

    Background In ligand-based screening, as well as in other chemoinformatics applications, one seeks to effectively search large repositories of molecules in order to retrieve molecules that are similar, typically, to a single lead molecule. However, in some cases, multiple molecules from the same family are available to seed the query and search for other members of the same family. Multiple-molecule query methods have been less studied than single-molecule query methods, and previous studies have relied on proprietary data and sometimes have not used proper cross-validation methods to assess the results. In contrast, here we develop and compare multiple-molecule query methods using several large publicly available data sets and background sets. We also create a framework based on a strict cross-validation protocol to allow unbiased benchmarking for direct comparison in future studies across several performance metrics. Results Fourteen different multiple-molecule query methods were defined and benchmarked using: (1) 41 publicly available data sets of related molecules with similar biological activity; and (2) publicly available background data sets consisting of up to 175,000 molecules randomly extracted from the ChemDB database and other sources. Eight of the fourteen methods were parameter free, and six of them fit one or two free parameters to the data using a careful cross-validation protocol. All the methods were assessed and compared for their ability to retrieve members of the same family against the background data set using several performance metrics, including the Area Under the Accumulation Curve (AUAC), Area Under the Curve (AUC), F1-measure, and BEDROC metrics. Consistent with the previous literature, the best parameter-free methods are the MAX-SIM and MIN-RANK methods, which score a molecule against a family by the maximum similarity, or minimum ranking, obtained across the family. One new parameterized method introduced in this study and two previously defined methods, the Exponential Tanimoto Discriminant (ETD), the Tanimoto Power Discriminant (TPD), and the Binary Kernel Discriminant (BKD), outperform most other methods but are more complex, requiring one or two parameters to be fit to the data. Conclusion Fourteen methods for multiple-molecule querying of chemical databases, including the novel ETD and TPD methods, are validated using publicly available data sets, standard cross-validation protocols, and established metrics. The best results are obtained with ETD, TPD, BKD, MAX-SIM, and MIN-RANK. These results can be replicated and compared with the results of future studies using data freely downloadable from http://cdb.ics.uci.edu/. PMID:20298525
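
    A toy sketch of the parameter-free MAX-SIM method on random binary fingerprints (a real application would use chemical fingerprints such as those derived from ChemDB):

```python
import numpy as np

def tanimoto(a: np.ndarray, b: np.ndarray) -> float:
    """Tanimoto similarity between two binary fingerprint vectors."""
    inter = np.sum(a & b)
    union = np.sum(a | b)
    return inter / union if union else 0.0

def max_sim(query_family: list[np.ndarray], candidate: np.ndarray) -> float:
    """MAX-SIM: score a candidate by its maximum similarity to any family member."""
    return max(tanimoto(member, candidate) for member in query_family)

rng = np.random.default_rng(0)
family = [rng.integers(0, 2, 1024, dtype=np.int8) for _ in range(5)]
candidate = rng.integers(0, 2, 1024, dtype=np.int8)
print(f"MAX-SIM score: {max_sim(family, candidate):.3f}")
```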

  10. In-house validation of a liquid chromatography-tandem mass spectrometry method for the determination of selective androgen receptor modulators (SARMS) in bovine urine.

    PubMed

    Schmidt, Kathrin S; Mankertz, Joachim

    2018-06-01

    A sensitive and robust LC-MS/MS method allowing the rapid screening and confirmation of selective androgen receptor modulators in bovine urine was developed and successfully validated according to Commission Decision 2002/657/EC, chapter 3.1.3 'alternative validation', by applying a matrix-comprehensive in-house validation concept. The confirmation of the analytes in the validation samples was achieved both on the basis of the MRM ion ratios as laid down in Commission Decision 2002/657/EC and by comparison of their enhanced product ion (EPI) spectra with a reference mass spectral library, making use of the QTRAP technology. Here, in addition to the MRM survey scan, EPI spectra were generated in a data-dependent way according to an information-dependent acquisition criterion. Moreover, stability studies following an isochronous approach proved the stability of the analytes in solution and in matrix for at least the duration of the validation study. To identify factors that have a significant influence on the test method in routine analysis, a factorial effect analysis was performed: factors considered relevant for the method in routine use (e.g. operator, storage duration of the extracts before measurement, different cartridge lots and different hydrolysis conditions) were systematically varied on two levels. The analysis showed that none of the validation factors exerts a significant influence on the measurement results of the individual analytes.
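
    The factorial effect analysis can be illustrated with a two-level design: each candidate factor is coded -1/+1 and its main effect is the difference between the mean responses at the two levels. The data below are invented for illustration:

```python
import numpy as np

# Hypothetical 2-level factorial ruggedness data: each row is one validation run,
# factors coded -1/+1 (e.g. operator, extract storage time, cartridge lot).
design = np.array([[-1, -1, -1],
                   [+1, -1, -1],
                   [-1, +1, -1],
                   [+1, +1, -1],
                   [-1, -1, +1],
                   [+1, -1, +1],
                   [-1, +1, +1],
                   [+1, +1, +1]])
response = np.array([98.2, 99.1, 97.8, 98.9, 98.5, 99.3, 98.0, 99.0])  # recovery, %

# Main effect of each factor: mean response at the high level minus at the low level
for j, name in enumerate(["operator", "storage", "cartridge lot"]):
    effect = response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
    print(f"{name:>14s}: {effect:+.2f}")
```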

  11. Effective data validation of high-frequency data: time-point-, time-interval-, and trend-based methods.

    PubMed

    Horn, W; Miksch, S; Egghart, G; Popow, C; Paky, F

    1997-09-01

    Real-time systems for monitoring and therapy planning, which receive their data from on-line monitoring equipment and computer-based patient records, require reliable data. Data validation has to utilize and combine a set of fast methods to detect, eliminate, and repair faulty data, which may lead to life-threatening conclusions. The strength of data validation results from the combination of numerical and knowledge-based methods applied to both continuously-assessed high-frequency data and discontinuously-assessed data. Dealing with high-frequency data, examining single measurements is not sufficient. It is essential to take into account the behavior of parameters over time. We present time-point-, time-interval-, and trend-based methods for validation and repair. These are complemented by time-independent methods for determining an overall reliability of measurements. The data validation benefits from the temporal data-abstraction process, which provides automatically derived qualitative values and patterns. The temporal abstraction is oriented on a context-sensitive and expectation-guided principle. Additional knowledge derived from domain experts forms an essential part for all of these methods. The methods are applied in the field of artificial ventilation of newborn infants. Examples from the real-time monitoring and therapy-planning system VIE-VENT illustrate the usefulness and effectiveness of the methods.
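
    A toy illustration of combining a time-point (range) check with a trend (rate-of-change) check; the thresholds are invented for the example and are not those of VIE-VENT:

```python
import numpy as np

def validate_sample(x_now: float, history: np.ndarray,
                    valid_range=(85.0, 100.0), max_step=2.0) -> bool:
    """Toy combination of validation checks:
    - time-point: the single measurement must lie in a plausible range;
    - trend: the change since the previous sample must stay below a
      physiologically plausible limit.
    Thresholds here are invented, not from VIE-VENT."""
    in_range = valid_range[0] <= x_now <= valid_range[1]
    step = abs(x_now - history[-1]) if history.size else 0.0
    plausible_trend = step <= max_step
    return in_range and plausible_trend

spo2 = np.array([96.0, 95.5, 95.8])
print(validate_sample(95.2, spo2))   # True: in range, smooth trend
print(validate_sample(89.0, spo2))   # False: implausible jump from 95.8
```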

  12. Ecological content validation of the Information Assessment Method for parents (IAM-parent): A mixed methods study.

    PubMed

    Bujold, M; El Sherif, R; Bush, P L; Johnson-Lafleur, J; Doray, G; Pluye, P

    2018-02-01

    This mixed methods study content validated the Information Assessment Method for parents (IAM-parent), which allows users to systematically rate and comment on online parenting information. Quantitative data and results: 22,407 IAM ratings were collected; of the initial 32 items, descriptive statistics showed that 10 had low relevance. Qualitative data and results: IAM-based comments were collected, and 20 IAM users were interviewed (maximum variation sample); the qualitative data analysis assessed the representativeness of IAM items and identified items with problematic wording. Researchers, the program director, and Web editors integrated the quantitative and qualitative results, which led to a shorter and clearer IAM-parent. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. Best Practices in Stability Indicating Method Development and Validation for Non-clinical Dose Formulations.

    PubMed

    Henry, Teresa R; Penn, Lara D; Conerty, Jason R; Wright, Francesca E; Gorman, Gregory; Pack, Brian W

    2016-11-01

    Non-clinical dose formulations (also known as pre-clinical or GLP formulations) play a key role in early drug development. These formulations are used to introduce active pharmaceutical ingredients (APIs) into test organisms for both pharmacokinetic and toxicological studies. Since these studies are ultimately used to support dose and safety ranges in human studies, it is important not only to understand the concentration and PK/PD of the active ingredient but also to generate safety data for likely process impurities and degradation products of the active ingredient. As such, many in the industry have chosen to develop and validate methods which can accurately detect and quantify the active ingredient along with its impurities and degradation products. Such methods often provide trendable results that are predictive of stability, hence the name: stability-indicating methods. This document provides an overview of best practices for those choosing to include the development and validation of such methods in their non-clinical drug development program. It is intended to support teams who are either new to stability-indicating method development and validation or less familiar with the requirements of validation due to their position within the product development life cycle.

  14. Evaluation of coarse scale land surface remote sensing albedo product over rugged terrain

    NASA Astrophysics Data System (ADS)

    Wen, J.; Xinwen, L.; You, D.; Dou, B.

    2017-12-01

    Satellite-derived land surface albedo is an essential climate variable that controls the Earth's energy budget and is used in applications such as climate change, hydrology, and numerical weather prediction. The accuracy and uncertainty of surface albedo products should be evaluated against reliable reference data prior to such applications. Most published validation work has addressed albedo over flat or homogeneous surfaces; the performance of albedo products over rugged terrain remains unknown because suitable validation methods have been lacking. A multi-step validation strategy is implemented here to provide a comprehensive assessment, involving high-resolution albedo processing, validation of the high-resolution albedo against in situ measurements, and upscaling of the high-resolution albedo to the coarse scale. Among these, high-resolution albedo generation and the upscaling method are the core steps for coarse-scale albedo validation. In this paper, the high-resolution albedo is generated by the Angular Bin algorithm, and an albedo upscaling method over rugged terrain is developed to obtain the coarse-scale reference albedo. In situ albedo from 40 mountain sites selected globally is used to validate the high-resolution albedo, which is then upscaled to the coarse scale. Taking the MODIS and GLASS albedo products as examples, preliminary results show RMSEs over rugged terrain of 0.047 and 0.057, respectively, compared with an RMSE of 0.036 for the high-resolution albedo.

  15. Validation of asthma recording in electronic health records: a systematic review

    PubMed Central

    Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J

    2017-01-01

    Objective To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion Attaining high PPVs (>80%) is possible using each of the discussed validation methods. Identifying asthma cases in electronic health records is possible with high sensitivity, specificity or PPV, by combining multiple data sources, or by focusing on specific test measures. Studies testing a range of case definitions show wide variation in the validity of each definition, suggesting this may be important for obtaining asthma definitions with optimal validity. PMID:29238227
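
    The validation statistics referred to above follow from a standard 2x2 comparison of a case definition against the reference standard; a minimal sketch with hypothetical chart-review counts:

```python
def diagnostic_validity(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 validation statistics for a case definition
    compared against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Hypothetical review of 200 records for one asthma case definition
print(diagnostic_validity(tp=85, fp=15, fn=10, tn=90))
```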

  16. Simultaneous determination of ten Aconitum alkaloids in rat tissues by UHPLC-MS/MS and its application to a tissue distribution study on the compatibility of Heishunpian and Fritillariae thunbergii Bulbus.

    PubMed

    Yang, Bin; Xu, Yanyan; Wu, Yuanyuan; Wu, Huanyu; Wang, Yuan; Yuan, Lei; Xie, Jiabin; Li, Yubo; Zhang, Yanjun

    2016-10-15

    A rapid, sensitive and selective ultra-high performance liquid chromatography with tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for the simultaneous determination of ten Aconitum alkaloids in rat tissues. The tissue samples were prepared by a simple protein-precipitation procedure with acetonitrile containing 0.1% acetic acid and separated on an Agilent XDB C18 column (4.6 mm × 50 mm, 1.8 μm) using gradient elution with a mobile phase consisting of water and acetonitrile (both containing 0.1% formic acid) at a flow rate of 0.3 mL/min. The quantitative determination was performed on an electrospray ionization (ESI) triple quadrupole tandem mass spectrometer using selective reaction monitoring (SRM) in positive ionization mode. The established method was fully validated according to the US Food and Drug Administration (FDA) bioanalytical method validation guidance, and the results demonstrated that the method was sensitive and selective, with a lower limit of quantification (LLOQ) of 0.025 ng/mL in rat tissue homogenates. Meanwhile, the linearity, precision, accuracy, extraction recovery, matrix effect and stability were all within the required limits for biological sample analysis. The validated method was successfully applied to a tissue distribution study on the compatibility of Heishunpian (HSP, the processed product of Aconitum carmichaelii Debx) and Fritillariae thunbergii Bulbus (Zhebeimu, ZBM). The results indicated that the distribution features of monoester diterpenoid aconitines (MDAs), diester diterpenoid aconitines (DDAs) and non-ester alkaloids (NEAs) were inconsistent, and that the compatibility of HSP and ZBM increased the amount of DDAs distributed in tissues. Moreover, the results provide a reliable basis for systematic research on the material basis of the compatibility of this herbal pair. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Combined 3D-QSAR, molecular docking and molecular dynamics study on thyroid hormone activity of hydroxylated polybrominated diphenyl ethers to thyroid receptors β

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xiaolin; Ye, Li; Wang, Xiaoxiang

    2012-12-15

    Several recent reports suggested that hydroxylated polybrominated diphenyl ethers (HO-PBDEs) may disturb thyroid hormone homeostasis. To illuminate the structural features underlying the thyroid hormone activity of HO-PBDEs and the binding mode between HO-PBDEs and the thyroid hormone receptor (TR), the hormone activity of a series of HO-PBDEs toward thyroid receptor β was studied using a combination of 3D-QSAR, molecular docking, and molecular dynamics (MD) methods. Ligand- and receptor-based 3D-QSAR models were obtained using the Comparative Molecular Similarity Index Analysis (CoMSIA) method. The optimum CoMSIA model with region focusing yielded satisfactory statistics: the leave-one-out cross-validation correlation coefficient (q2) was 0.571 and the non-cross-validated correlation coefficient (r2) was 0.951. Furthermore, internal validation (bootstrapping, leave-many-out cross-validation, and progressive scrambling) as well as external validation indicated the rationality and good predictive ability of the best model. In addition, molecular docking elucidated the conformations of the compounds and the key amino acid residues in the docking pocket, and MD simulation further characterized the binding process and validated the docking results. Highlights: the thyroid hormone activities of HO-PBDEs were studied by 3D-QSAR; the binding modes between HO-PBDEs and TRβ were explored; 3D-QSAR, molecular docking, and molecular dynamics (MD) methods were performed.
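
    A minimal sketch of the leave-one-out cross-validated q2 versus the fitted r2 on synthetic data (plain linear regression stands in for the CoMSIA/PLS machinery):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

X, y = make_regression(n_samples=30, n_features=5, noise=5.0, random_state=0)

# Leave-one-out cross-validated q2: 1 - PRESS / total sum of squares
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
press = np.sum((y - y_loo) ** 2)
q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)

r2 = LinearRegression().fit(X, y).score(X, y)  # non-cross-validated r2
print(f"q2 = {q2:.3f}, r2 = {r2:.3f}")         # q2 is the stricter statistic
```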

  18. Gamifying Self-Management of Chronic Illnesses: A Mixed-Methods Study

    PubMed Central

    Wills, Gary; Ranchhod, Ashok

    2016-01-01

    Background Self-management of chronic illnesses is an ongoing issue in health care research. Gamification is a concept that arose in the field of computer science and has been borrowed by many other disciplines. It is perceived by many that gamification can improve the self-management experience of people with chronic illnesses. This paper discusses the validation of a framework (called The Wheel of Sukr) that was introduced to achieve this goal. Objective This research aims to (1) discuss a gamification framework targeting the self-management of chronic illnesses and (2) validate the framework by diabetic patients, medical professionals, and game experts. Methods A mixed-method approach was used to validate the framework. Expert interviews (N=8) were conducted in order to validate the themes of the framework. Additionally, diabetic participants completed a questionnaire (N=42) in order to measure their attitudes toward the themes of the framework. Results The results provide a validation of the framework. This indicates that gamification might improve the self-management of chronic illnesses, such as diabetes. Namely, the eight themes in the Wheel of Sukr (fun, esteem, socializing, self-management, self-representation, motivation, growth, sustainability) were perceived positively by 71% (30/42) of the participants with P value <.001. Conclusions In this research, both the interviews and the questionnaire yielded positive results that validate the framework (The Wheel of Sukr). Generally, this study indicates an overall acceptance of the notion of gamification in the self-management of diabetes. PMID:27612632

  19. Validation and Clinical Evaluation of a Novel Method To Measure Miltefosine in Leishmaniasis Patients Using Dried Blood Spot Sample Collection

    PubMed Central

    Rosing, H.; Hillebrand, M. J. X.; Blesson, S.; Mengesha, B.; Diro, E.; Hailu, A.; Schellens, J. H. M.; Beijnen, J. H.

    2016-01-01

    To facilitate future pharmacokinetic studies of combination treatments against leishmaniasis in remote regions in which the disease is endemic, a simple cheap sampling method is required for miltefosine quantification. The aims of this study were to validate a liquid chromatography-tandem mass spectrometry method to quantify miltefosine in dried blood spot (DBS) samples and to validate its use with Ethiopian patients with visceral leishmaniasis (VL). Since hematocrit (Ht) levels are typically severely decreased in VL patients, returning to normal during treatment, the method was evaluated over a range of clinically relevant Ht values. Miltefosine was extracted from DBS samples using a simple method of pretreatment with methanol, resulting in >97% recovery. The method was validated over a calibration range of 10 to 2,000 ng/ml, and accuracy and precision were within ±11.2% and ≤7.0% (≤19.1% at the lower limit of quantification), respectively. The method was accurate and precise for blood spot volumes between 10 and 30 μl and for Ht levels of 20 to 35%, although a linear effect of Ht levels on miltefosine quantification was observed in the bioanalytical validation. DBS samples were stable for at least 162 days at 37°C. Clinical validation of the method using paired DBS and plasma samples from 16 VL patients showed a median observed DBS/plasma miltefosine concentration ratio of 0.99, with good correlation (Pearson's r = 0.946). Correcting for patient-specific Ht levels did not further improve the concordance between the sampling methods. This successfully validated method to quantify miltefosine in DBS samples was demonstrated to be a valid and practical alternative to venous blood sampling that can be applied in future miltefosine pharmacokinetic studies with leishmaniasis patients, without Ht correction. PMID:26787691

  20. Evaluation of methods for calculating maximum allowable standing height in amputees competing in Paralympic athletics.

    PubMed

    Connick, M J; Beckman, E; Ibusuki, T; Malone, L; Tweedy, S M

    2016-11-01

    The International Paralympic Committee has a maximum allowable standing height (MASH) rule that limits stature to a pre-trauma estimation. The MASH rule reduces the probability that bilateral lower limb amputees use disproportionately long prostheses in competition. Although there are several methods for estimating stature, the validity of these methods has not been compared. To identify the most appropriate method for the MASH rule, this study aimed to compare the criterion validity of estimations resulting from the current method, the Contini method, and four Canda methods (Canda-1, Canda-2, Canda-3, and Canda-4). Stature, ulna length, demispan, sitting height, thigh length, upper arm length, and forearm length measurements in 31 males and 30 females were used to calculate the respective estimation for each method. Results showed that Canda-1 (based on four anthropometric variables) produced the smallest error and best fitted the data in males and females. The current method was associated with the largest error of the methods tested because it increasingly overestimated height in people with smaller stature. The results suggest that the set of Canda equations provides a more valid MASH estimation in people with a range of upper limb and bilateral lower limb amputations compared with the current method. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Quantitative bioanalysis of strontium in human serum by inductively coupled plasma-mass spectrometry

    PubMed Central

    Somarouthu, Srikanth; Ohh, Jayoung; Shaked, Jonathan; Cunico, Robert L; Yakatan, Gerald; Corritori, Suzana; Tami, Joe; Foehr, Erik D

    2015-01-01

    Aim: A bioanalytical method using inductively coupled plasma-mass spectrometry (ICP-MS) to measure endogenous levels of strontium in human serum was developed and validated. Results & methodology: This article details the experimental procedures used for method development and validation, demonstrating the application of the ICP-MS method for quantification of strontium in human serum samples. The assay was validated for specificity, linearity, accuracy, precision, recovery and stability. Significant endogenous levels of strontium are present in human serum samples, ranging from 19 to 96 ng/mL with a mean of 34.6 ± 15.2 ng/mL (SD). Discussion & conclusion: Calibration procedures and sample pretreatment were simplified for high-throughput analysis. The validation demonstrates that the method is sensitive, selective for quantification of strontium (88Sr) and suitable for routine clinical testing of strontium in human serum samples. PMID:28031925

  2. Pinaverium Bromide: Development and Validation of Spectrophotometric Methods for Assay and Dissolution Studies.

    PubMed

    Martins, Danielly da Fonte Carvalho; Florindo, Lorena Coimbra; Machado, Anna Karolina Mouzer da Silva; Todeschini, Vítor; Sangoi, Maximiliano da Silva

    2017-11-01

    This study presents the development and validation of UV spectrophotometric methods for the determination of pinaverium bromide (PB) in tablet assay and dissolution studies. The methods were satisfactorily validated according to International Conference on Harmonization guidelines. The response was linear (r2 > 0.99) in the concentration ranges of 2-14 μg/mL at 213 nm and 10-70 μg/mL at 243 nm. The LOD and LOQ were 0.39 and 1.31 μg/mL, respectively, at 213 nm. For the 243 nm method, the LOD and LOQ were 2.93 and 9.77 μg/mL, respectively. Precision was evaluated by RSD, and the obtained results were lower than 2%. Adequate accuracy was also obtained. The methods proved to be robust using a full factorial design evaluation. For PB dissolution studies, the best conditions were achieved using a United States Pharmacopeia Dissolution Apparatus 2 (paddle) at 50 rpm and with 900 mL 0.1 M hydrochloric acid as the dissolution medium, presenting satisfactory results during the validation tests. In addition, the kinetic parameters of drug release were investigated using model-dependent methods, and the dissolution profiles were best described by the first-order model. Therefore, the proposed methods were successfully applied for the assay and dissolution analysis of PB in commercial tablets.
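
    Fitting a first-order release model to a dissolution profile can be sketched as follows; the model form Q(t) = Q_inf(1 - e^(-kt)) is standard, while the data points are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, q_inf, k):
    """First-order release model: Q(t) = Q_inf * (1 - exp(-k*t))."""
    return q_inf * (1.0 - np.exp(-k * t))

# Hypothetical dissolution profile: % released vs minutes
t = np.array([5, 10, 15, 20, 30, 45, 60], dtype=float)
q = np.array([35.0, 58.0, 72.0, 81.0, 91.0, 96.0, 98.0])

(q_inf, k), _ = curve_fit(first_order, t, q, p0=(100.0, 0.05))
ss_res = np.sum((q - first_order(t, q_inf, k)) ** 2)
r2 = 1.0 - ss_res / np.sum((q - q.mean()) ** 2)
print(f"Q_inf = {q_inf:.1f}%, k = {k:.3f} 1/min, r2 = {r2:.4f}")
```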

  3. Reliability and Validity of the Alcohol Consequences Expectations Scale

    ERIC Educational Resources Information Center

    Arriola, Kimberly R. Jacob; Usdan, Stuart; Mays, Darren; Weitzel, Jessica Aungst; Cremeens, Jennifer; Martin, Ryan J.; Borba, Christina; Bernhardt, Jay M.

    2009-01-01

    Objectives: To examine the reliability and validity of a new measure of alcohol outcome expectations for college students, the Alcohol Consequences Expectations Scale (ACES). Methods: College students (N = 169) completed the ACES and several other measures. Results: Results support the existence of 5 internally consistent subscales. Additionally,…

  4. Reliability and Validity of Isometric Knee Extensor Strength Test With Hand-Held Dynamometer Depending on Its Fixation: A Pilot Study

    PubMed Central

    Kim, Won Kuel; Seo, Kyung Mook; Kang, Si Hyun

    2014-01-01

    Objective To determine the reliability and validity of hand-held dynamometer (HHD) depending on its fixation in measuring isometric knee extensor strength by comparing the results with an isokinetic dynamometer. Methods Twenty-seven healthy female volunteers participated in this study. The subjects were tested in seated and supine position using three measurement methods: isometric knee extension by isokinetic dynamometer, non-fixed HHD, and fixed HHD. During the measurement, the knee joints of subjects were fixed at a 35° angle from the extended position. The fixed HHD measurement was conducted with the HHD fixed to distal tibia with a Velcro strap; non-fixed HHD was performed with a hand-held method without Velcro fixation. All the measurements were repeated three times and among them, the maximum values of peak torque were used for the analysis. Results The data from the fixed HHD method showed higher validity than the non-fixed method compared with the results of the isokinetic dynamometer. Pearson correlation coefficients (r) between fixed HHD and isokinetic dynamometer method were statistically significant (supine-right: r=0.806, p<0.05; seating-right: r=0.473, p<0.05; supine-left: r=0.524, p<0.05), whereas Pearson correlation coefficients between non-fixed dynamometer and isokinetic dynamometer methods were not statistically significant, except for the result of the supine position of the left leg (r=0.384, p<0.05). Both fixed and non-fixed HHD methods showed excellent inter-rater reliability. However, the fixed HHD method showed a higher reliability than the non-fixed HHD method by considering the intraclass correlation coefficient (fixed HHD, 0.952-0.984; non-fixed HHD, 0.940-0.963). Conclusion Fixation of HHD during measurement in the supine position increases the reliability and validity in measuring the quadriceps strength. PMID:24639931
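
    A minimal sketch of an ICC computation of the kind reported above, using the pingouin package on hypothetical torque data (pingouin reports the ICC1-ICC3 variants, and the appropriate one must be chosen for the study design):

```python
import pandas as pd
import pingouin as pg

# Hypothetical repeated strength measurements (Nm): 2 raters x 5 subjects
df = pd.DataFrame({
    "subject": [1, 2, 3, 4, 5] * 2,
    "rater":   ["A"] * 5 + ["B"] * 5,
    "torque":  [102, 87, 95, 110, 78, 100, 90, 93, 112, 80],
})

icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="rater", ratings="torque")
print(icc[["Type", "ICC", "CI95%"]])
```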

  5. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing

    PubMed Central

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-01-01

    Aims A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
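
    A minimal sketch of the Bland-Altman statistics described above, on hypothetical paired assay values:

```python
import numpy as np

def bland_altman(reference: np.ndarray, test: np.ndarray):
    """Bland-Altman statistics: mean bias (constant error) and
    95% limits of agreement between two quantitative assays."""
    diff = test - reference
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Hypothetical variant allele fractions: validated assay vs NGS assay
ref = np.array([0.05, 0.12, 0.25, 0.33, 0.48, 0.51, 0.70])
new = np.array([0.06, 0.11, 0.27, 0.35, 0.47, 0.54, 0.73])
bias, lower, upper = bland_altman(ref, new)
print(f"bias = {bias:+.3f}, 95% LoA = [{lower:+.3f}, {upper:+.3f}]")
```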

  6. Risk-Based Prioritization Method for the Classification of Groundwater Pollution from Hazardous Waste Landfills.

    PubMed

    Yang, Yu; Jiang, Yong-Hai; Lian, Xin-Ying; Xi, Bei-Dou; Ma, Zhi-Fei; Xu, Xiang-Jian; An, Da

    2016-12-01

    Hazardous waste landfill sites are a significant source of groundwater pollution. To ensure that these landfills with a significantly high risk of groundwater contamination are properly managed, a risk-based ranking method related to groundwater contamination is needed. In this research, a risk-based prioritization method for the classification of groundwater pollution from hazardous waste landfills was established. The method encompasses five phases, including risk pre-screening, indicator selection, characterization, classification and, lastly, validation. In the risk ranking index system employed here, 14 indicators involving hazardous waste landfills and migration in the vadose zone as well as aquifer were selected. The boundary of each indicator was determined by K-means cluster analysis and the weight of each indicator was calculated by principal component analysis. These methods were applied to 37 hazardous waste landfills in China. The result showed that the risk for groundwater contamination from hazardous waste landfills could be ranked into three classes from low to high risk. In all, 62.2 % of the hazardous waste landfill sites were classified in the low and medium risk classes. The process simulation method and standardized anomalies were used to validate the result of risk ranking; the results were consistent with the simulated results related to the characteristics of contamination. The risk ranking method was feasible, valid and can provide reference data related to risk management for groundwater contamination at hazardous waste landfill sites.
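
    A schematic sketch of the two statistical ingredients named above, K-means for class boundaries and PCA-derived indicator weights, on random stand-in data (the weighting recipe shown is one common choice, not necessarily the authors'):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(37, 5))          # stand-in for 37 sites x 5 of the 14 indicators
X_std = StandardScaler().fit_transform(X)

# K-means with k=3 to place low/medium/high class boundaries
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)

# PCA-based indicator weights: variance-weighted loadings, normalised to sum to 1
pca = PCA().fit(X_std)
weights = np.abs(pca.components_.T @ pca.explained_variance_ratio_)
weights /= weights.sum()
print("class labels:", labels[:10], "...")
print("indicator weights:", np.round(weights, 3))
```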

  7. Further assessment of a method to estimate reliability and validity of qualitative research findings.

    PubMed

    Hinds, P S; Scandrett-Hibden, S; McAulay, L S

    1990-04-01

    The reliability and validity of qualitative research findings are viewed with scepticism by some scientists. This scepticism is derived from the belief that qualitative researchers give insufficient attention to estimating reliability and validity of data, and the differences between quantitative and qualitative methods in assessing data. The danger of this scepticism is that relevant and applicable research findings will not be used. Our purpose is to describe an evaluative strategy for use with qualitative data, a strategy that is a synthesis of quantitative and qualitative assessment methods. Results of the strategy and factors that influence its use are also described.

  8. Development and Validation of the Sorokin Psychosocial Love Inventory for Divorced Individuals

    ERIC Educational Resources Information Center

    D'Ambrosio, Joseph G.; Faul, Anna C.

    2013-01-01

    Objective: This study describes the development and validation of the Sorokin Psychosocial Love Inventory (SPSLI) measuring love actions toward a former spouse. Method: Classical measurement theory and confirmatory factor analysis (CFA) were utilized with an a priori theory and factor model to validate the SPSLI. Results: A 15-item scale…

  9. Investigation of Content and Face Validity and Reliability of Sociocultural Attitude towards Appearance Questionnaire-3 (SATAQ-3) among Female Adolescents

    PubMed Central

    Mousazadeh, Somayeh; Rakhshan, Mahnaz; Mohammadi, Fateme

    2017-01-01

    Objective: This study aimed to determine the psychometric properties of the sociocultural attitude towards appearance questionnaire in female adolescents. Method: This was a methodological study. The English version of the questionnaire was translated into Persian using the forward-backward method. Then the face validity, content validity and reliability were checked. To ensure face validity, the questionnaire was given to 25 female adolescents, a psychologist and three nurses, who were asked to evaluate the items with respect to problems, ambiguity, relevance, proper terms and grammar, and understandability. For content validity, 15 experts in psychology and nursing who met the inclusion criteria were recruited and asked to assess the qualitative content validity. To determine the quantitative content validity, the content validity index and content validity ratio were calculated. Finally, the internal consistency of the items was assessed using Cronbach's alpha. Results: According to the expert judgments, the content validity ratio was 0.81 and the content validity index was 0.91. The reliability of the questionnaire was confirmed with Cronbach's alpha = 0.91, with the physical and developmental areas showing the highest reliability indices. Conclusion: The questionnaire can be used in research to assess female adolescents' self-concept, a stepping-stone towards identifying problems and improving adolescents' body image. PMID:28496497
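
    The CVR and CVI used above have standard closed forms (Lawshe's CVR and the item-level CVI); a minimal sketch with hypothetical expert ratings:

```python
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Lawshe's CVR = (ne - N/2) / (N/2), where ne experts rate
    the item 'essential' out of N experts."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def item_cvi(relevance_ratings: list[int]) -> float:
    """Item-level CVI: proportion of experts rating the item 3 or 4
    on a 4-point relevance scale."""
    return sum(r >= 3 for r in relevance_ratings) / len(relevance_ratings)

# Example: 13 of 15 experts call an item "essential"; 4-point relevance ratings
print(f"CVR = {content_validity_ratio(13, 15):.2f}")   # 0.73
print(f"I-CVI = {item_cvi([4, 4, 3, 4, 3, 4, 4, 2, 4, 3, 4, 4, 3, 4, 4]):.2f}")
```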

  10. A Psychometric Study of the Bayley Scales of Infant and Toddler Development in Persian Language Children

    PubMed Central

    AZARI, Nadia; SOLEIMANI, Farin; VAMEGHI, Roshanak; SAJEDI, Firoozeh; SHAHSHAHANI, Soheila; KARIMI, Hossein; KRASKIAN, Adis; SHAHROKHI, Amin; TEYMOURI, Robab; GHARIB, Masoud

    2017-01-01

    Objective Bayley Scales of Infant and Toddler Development is a well-known diagnostic developmental assessment tool for children aged 1-42 months. Our aim was to investigate the validity and reliability of this scale in Persian-speaking children. Materials & Methods The method was descriptive-analytic. Translation, back-translation and cultural adaptation were carried out. Content and face validity of the translated scale were determined by experts' opinions. Overall, 403 children aged 1 to 42 months were recruited from health centers in Tehran during 2013-2014 for developmental assessment in the cognitive, communicative (receptive & expressive) and motor (fine & gross) domains. Reliability of the scale was calculated through three methods: internal consistency using Cronbach's alpha coefficient, test-retest and inter-rater methods. Construct validity was assessed using factor analysis and comparison of mean scores. Results Cultural and linguistic changes were made to items in all domains, especially the communication subscale. Content and face validity of the test were approved by experts' opinions. Cronbach's alpha coefficient was above 0.74 in all domains. Pearson correlation coefficients in the various domains were ≥0.982 for the test-retest method and ≥0.993 for the inter-rater method. Construct validity of the test was approved by factor analysis. Moreover, the mean scores of the different age groups were compared and statistically significant differences were observed, which confirms the validity of the test. Conclusion The Bayley Scales of Infant and Toddler Development is a valid and reliable tool for child developmental assessment in Persian-language children. PMID:28277556
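
    Cronbach's alpha, the internal-consistency statistic used above, has a simple closed form; a minimal sketch on synthetic item scores:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(100, 1))
scores = ability + rng.normal(scale=0.8, size=(100, 6))  # 6 correlated items
print(f"alpha = {cronbach_alpha(scores):.2f}")           # ~0.9 for this setup
```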

  11. Process Skill Assessment Instrument: Innovation to measure student’s learning result holistically

    NASA Astrophysics Data System (ADS)

    Azizah, K. N.; Ibrahim, M.; Widodo, W.

    2018-01-01

    Science process skills (SPS) are very important skills for students. However, SPS are not yet a main concern in primary school learning. This research aimed to develop a valid, practical, and effective assessment instrument to measure students' SPS. The assessment instruments comprise a worksheet and a test. This development research used a one-group pre-test post-test design. Data were obtained through validation, observation, and testing to investigate the validity, practicality, and effectiveness of the instruments. Results showed that the assessment instruments are highly valid, their reliability is categorized as reliable, student SPS activities have a high percentage, and there is a significant improvement in students' SPS scores. It can be concluded that the SPS assessment instruments are valid, practical, and effective for measuring students' SPS.

  12. Implementation and Validation of an Impedance Eduction Technique

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Jones, Michael G.; Gerhold, Carl H.

    2011-01-01

    Implementation of a pressure gradient method of impedance eduction in two NASA Langley flow ducts is described. The Grazing Flow Impedance Tube only supports plane-wave sources, while the Curved Duct Test Rig supports sources that contain higher-order modes. Multiple exercises are used to validate this new impedance eduction method. First, synthesized data for a hard wall insert and a conventional liner mounted in the Grazing Flow Impedance Tube are used as input to the two impedance eduction methods, the pressure gradient method and a previously validated wall pressure method. Comparisons between the two results are excellent. Next, data measured in the Grazing Flow Impedance Tube are used as input to both methods. Results from the two methods compare quite favorably for sufficiently low Mach numbers but this comparison degrades at Mach 0.5, especially when the hard wall insert is used. Finally, data measured with a hard wall insert mounted in the Curved Duct Test Rig are used as input to the pressure gradient method. Significant deviation from the known solution is observed, which is believed to be largely due to 3-D effects in this flow duct. Potential solutions to this issue are currently being explored.

  13. Validation of four different spectrophotometric methods for simultaneous determination of Domperidone and Ranitidine in bulk and pharmaceutical formulation.

    PubMed

    Abdel-Ghany, Maha F; Abdel-Aziz, Omar; Mohammed, Yomna Y

    2015-01-01

    Four simple, specific, accurate and precise spectrophotometric methods were developed and validated for the simultaneous determination of Domperidone (DP) and Ranitidine Hydrochloride (RT) in bulk powder and pharmaceutical formulation. The first method was simultaneous ratio subtraction (SRS), the second was ratio subtraction (RS) coupled with zero-order spectrophotometry (D(0)), the third was first derivative of the ratio spectra ((1)DD) and the fourth was mean centering of ratio spectra (MCR). The calibration curves are linear over the concentration ranges of 0.5-5 and 1-45 μg mL(-1) for DP and RT, respectively. The proposed spectrophotometric methods can analyze both drugs without any prior separation steps. The selectivity of the adopted methods was tested by analyzing synthetic mixtures of the investigated drugs and their pharmaceutical formulation. The suggested methods were validated according to International Conference on Harmonization (ICH) guidelines and the results revealed that they were precise and reproducible. All the obtained results were statistically compared with those of the reported method, and there was no significant difference. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-09-20

    These are slides from a seminar given to the University of Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6 along with covariance files for the nuclear data to determine a baseline upper-subcritical-limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation, making full use of today's computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.

  15. Validated spectrofluorimetric method for the determination of clonazepam in pharmaceutical preparations.

    PubMed

    Ibrahim, Fawzia; El-Enany, Nahed; Shalan, Shereen; Elsharawy, Rasha

    2016-05-01

    A simple, highly sensitive and validated spectrofluorimetric method was applied to the determination of clonazepam (CLZ). The method is based on reduction of the nitro group of clonazepam with zinc/CaCl2; the product is then reacted with 2-cyanoacetamide (2-CNA) in the presence of ammonia (25%), yielding a highly fluorescent product. The produced fluorophore exhibits strong fluorescence intensity at λem = 383 nm after excitation at λex = 333 nm. The method was rectilinear over a concentration range of 0.1-0.5 ng/mL with a limit of detection (LOD) of 0.0057 ng/mL and a limit of quantification (LOQ) of 0.017 ng/mL. The method was fully validated according to ICH guidelines and successfully applied to the determination of CLZ in its tablets, with a mean percentage recovery of 100.10 ± 0.75%. Statistical analysis of the results obtained using the proposed method was compared with those obtained using a reference method, and there was no significant difference between the two methods in terms of accuracy and precision. Copyright © 2015 John Wiley & Sons, Ltd.

  16. Validation for Vegetation Green-up Date Extracted from GIMMS NDVI and NDVI3g Using Variety of Methods

    NASA Astrophysics Data System (ADS)

    Chang, Q.; Jiao, W.

    2017-12-01

    Phenology is a sensitive and critical feature of vegetation change that is regarded as a good indicator in climate change studies. A variety of remote sensing data sources and phenology extraction methods have been developed to study the spatio-temporal dynamics of vegetation phenology. However, the differences between phenology results produced by different satellite datasets and extraction methods are not clear, and the reliability of the different remote sensing phenology results has not been verified and compared against ground observations. Using the three most popular remote sensing phenology extraction methods, this research calculated the start of the growing season (SOS) for each pixel in the Northern Hemisphere for two long time series satellite datasets: GIMMS NDVIg (SOSg) and GIMMS NDVI3g (SOS3g). The three methods are the maximum increase method, the dynamic threshold method and the midpoint method. This study then used SOS derived from net ecosystem exchange (NEE) data (SOS_NEE) monitored at 48 eddy flux tower sites in the global flux network to validate the reliability of the six phenology results. Results showed that neither SOSg nor SOS3g extracted by the maximum increase method was correlated with the ground-observed phenology metrics, whereas SOSg and SOS3g extracted by the dynamic threshold and midpoint methods were both significantly correlated with SOS_NEE. SOSg extracted by the midpoint method had a stronger correlation with SOS_NEE than SOSg extracted by the dynamic threshold method, and the same held for SOS3g. Additionally, SOSg showed a stronger correlation with SOS_NEE than SOS3g extracted by the same method. SOS extracted by the midpoint method from the GIMMS NDVIg dataset thus appears to be the most reliable result when validated against SOS_NEE. These results can serve as a reference for data and method selection in future phenology studies.
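
    A minimal sketch of the dynamic threshold method on a synthetic NDVI curve (the 20% amplitude fraction is a commonly used value, assumed here rather than taken from the study):

```python
import numpy as np

def sos_dynamic_threshold(ndvi: np.ndarray, doy: np.ndarray, frac: float = 0.2) -> float:
    """Dynamic threshold method: SOS is the day of year when NDVI first rises
    above min + frac * (max - min) of the annual amplitude."""
    threshold = ndvi.min() + frac * (ndvi.max() - ndvi.min())
    first = np.where(ndvi >= threshold)[0][0]
    if first == 0:
        return float(doy[0])
    # linear interpolation between the two composites bracketing the crossing
    x0, x1 = doy[first - 1], doy[first]
    y0, y1 = ndvi[first - 1], ndvi[first]
    return float(x0 + (threshold - y0) * (x1 - x0) / (y1 - y0))

doy = np.arange(8, 366, 15)                            # ~15-day composites
ndvi = 0.3 + 0.4 * np.exp(-((doy - 200) / 60.0) ** 2)  # synthetic seasonal curve
print(f"SOS ~ day {sos_dynamic_threshold(ndvi, doy):.0f}")
```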

  17. Climate change vulnerability for species-Assessing the assessments.

    PubMed

    Wheatley, Christopher J; Beale, Colin M; Bradbury, Richard B; Pearce-Higgins, James W; Critchlow, Rob; Thomas, Chris D

    2017-09-01

    Climate change vulnerability assessments are commonly used to identify species at risk from global climate change, but the wide range of methodologies available makes it difficult for end users, such as conservation practitioners or policymakers, to decide which method to use as a basis for decision-making. In this study, we evaluate whether different assessments consistently assign species to the same risk categories and whether any of the existing methodologies perform well at identifying climate-threatened species. We compare the outputs of 12 climate change vulnerability assessment methodologies, using both real and simulated species, and validate the methods using historic data for British birds and butterflies (i.e. using historical data to assign risks and more recent data for validation). Our results show that the different vulnerability assessment methods are not consistent with one another; different risk categories are assigned for both the real and simulated sets of species. Validation of the different vulnerability assessments suggests that methods incorporating historic trend data into the assessment perform best at predicting distribution trends in subsequent time periods. This study demonstrates that climate change vulnerability assessments should not be used interchangeably due to the poor overall agreement between methods when considering the same species. The results of our validation provide more support for the use of trend-based rather than purely trait-based approaches, although further validation will be required as data become available. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.

  18. Content Validity of National Post Marriage Educational Program Using Mixed Methods

    PubMed Central

    MOHAJER RAHBARI, Masoumeh; SHARIATI, Mohammad; KERAMAT, Afsaneh; YUNESIAN, Masoud; ESLAMI, Mohammad; MOUSAVI, Seyed Abbas; MONTAZERI, Ali

    2015-01-01

    Background: Although content validation of programs is mostly conducted with qualitative methods, this study used both qualitative and quantitative methods to validate the content of a post-marriage training program provided for newly married couples. Content validation is a preliminary step towards obtaining the authorization required to install the program in the country's health care system. Methods: This mixed-methods content validation study was carried out in four steps, with three expert panels. Altogether, 24 expert panelists were involved in the three qualitative and quantitative panels: 6 in the first, item-development panel; 12 in the item-reduction panel (4 of whom were also in the first panel); and 10 executive experts in the last panel, organized to evaluate the CVR, CVI and face validity of 57 educational objectives. Results: The raw content of the post-marriage program had been written by professional experts at the Ministry of Health; through the qualitative expert panel, the content was further developed by generating three topics and refining one topic and its respective content. In the second panel, six further objectives were deleted: three for falling outside the agreement cut-off point and three by experts' consensus. The validity of all remaining items was above 0.8 and their content validity indices (0.8-1) were completely appropriate in the quantitative assessment. Conclusion: This study provided good evidence for the validation and accreditation of the national post-marriage program planned for newly married couples in health centers of the country in the near future. PMID:26056672

  19. Evaluation of passenger health risk assessment of sustainable indoor air quality monitoring in metro systems based on a non-Gaussian dynamic sensor validation method.

    PubMed

    Kim, MinJeong; Liu, Hongbin; Kim, Jeong Tai; Yoo, ChangKyoo

    2014-08-15

    Sensor faults in metro systems provide incorrect information to indoor air quality (IAQ) ventilation systems, resulting in the mis-operation of ventilation systems and adverse effects on passenger health. In this study, a new sensor validation method is proposed to (1) detect, identify and repair sensor faults and (2) evaluate the influence of sensor reliability on passenger health risk. To address the dynamic non-Gaussianity of IAQ data, dynamic independent component analysis (DICA) is used. To detect and identify sensor faults, the DICA-based squared prediction error and sensor validity index are used, respectively. To restore the faults to normal measurements, a DICA-based iterative reconstruction algorithm is proposed. The comprehensive indoor air-quality index (CIAI), which evaluates the influence of the current IAQ on passenger health, is then compared between the faulty and reconstructed IAQ data sets. Experimental results from a metro station showed that the DICA-based method can produce an improved IAQ level in the metro station and reduce passenger health risk, since it validates sensor faults more accurately than conventional methods. Copyright © 2014 Elsevier B.V. All rights reserved.
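
    A simplified sketch of ICA-based fault detection with a squared prediction error (SPE) control limit; static ICA stands in for DICA here, whose dynamic extension would augment each observation with time-lagged copies before fitting:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
train = rng.laplace(size=(500, 4))     # normal-operation data from 4 IAQ sensors
test = rng.laplace(size=(100, 4))
test[40:60, 2] += 6.0                  # injected bias fault on sensor 3

ica = FastICA(n_components=2, random_state=0).fit(train)

def spe(X: np.ndarray) -> np.ndarray:
    """Squared prediction error: residual after projection onto the ICA model."""
    X_hat = ica.inverse_transform(ica.transform(X))
    return np.sum((X - X_hat) ** 2, axis=1)

limit = np.percentile(spe(train), 99)  # empirical 99% control limit
alarms = np.where(spe(test) > limit)[0]
print("samples flagged as faulty:", alarms)
```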

  20. Enhancement of Chemical Entity Identification in Text Using Semantic Similarity Validation

    PubMed Central

    Grego, Tiago; Couto, Francisco M.

    2013-01-01

    With the amount of chemical data being produced and reported in the literature growing at a fast pace, it is increasingly important to efficiently retrieve this information. To tackle this issue text mining tools have been applied, but despite their good performance they still produce many errors that we believe can be filtered by using semantic similarity. Thus, this paper proposes a novel method that receives the results of chemical entity identification systems, such as Whatizit, and exploits the semantic relationships in ChEBI to measure the similarity between the entities found in the text. The method assigns a single validation score to each entity based on its similarities with the other entities also identified in the text. Then, by using a given threshold, the method selects a set of validated entities and a set of outlier entities. We evaluated our method using the results of two state-of-the-art chemical entity identification tools, three semantic similarity measures and two text window sizes. The method was able to increase precision without filtering a significant number of correctly identified entities. This means that the method can effectively discriminate the correctly identified chemical entities while discarding a significant number of identification errors. For example, selecting a validation set with 75% of all identified entities, we were able to increase the precision by 28% for one of the chemical entity identification tools (Whatizit), maintaining 97% of the correctly identified entities in that subset. Our method can be directly used as an add-on by any state-of-the-art entity identification tool that provides mappings to a database, in order to improve their results. The proposed method is included in a freely accessible web tool at www.lasige.di.fc.ul.pt/webtools/ice/. PMID:23658791
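
    A minimal sketch of the scoring-and-thresholding step, assuming a precomputed pairwise semantic similarity matrix for the entities identified in one text (the similarity values and threshold are invented):

```python
import numpy as np

def validation_scores(sim: np.ndarray) -> np.ndarray:
    """Score each identified entity by its mean similarity to the other
    entities found in the same text (diagonal excluded)."""
    n = sim.shape[0]
    off_diag = sim * (1 - np.eye(n))
    return off_diag.sum(axis=1) / (n - 1)

# Hypothetical pairwise ChEBI-style similarities for 4 entities in one text;
# entity 4 is an identification error, unrelated to the others.
sim = np.array([[1.0, 0.8, 0.7, 0.1],
                [0.8, 1.0, 0.6, 0.1],
                [0.7, 0.6, 1.0, 0.2],
                [0.1, 0.1, 0.2, 1.0]])
scores = validation_scores(sim)
threshold = 0.3
print("scores:", np.round(scores, 2))  # [0.53 0.5  0.5  0.13]
print("validated:", np.where(scores >= threshold)[0],
      "outliers:", np.where(scores < threshold)[0])
```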

  1. When Educational Material Is Delivered: A Mixed Methods Content Validation Study of the Information Assessment Method.

    PubMed

    Badran, Hani; Pluye, Pierre; Grad, Roland

    2017-03-14

    The Information Assessment Method (IAM) allows clinicians to report the cognitive impact, clinical relevance, intention to use, and expected patient health benefits associated with clinical information received by email. More than 15,000 Canadian physicians and pharmacists use the IAM in continuing education programs. In addition, information providers can use IAM ratings and feedback comments from clinicians to improve their products. Our general objective was to validate the IAM questionnaire for the delivery of educational material (ecological and logical content validity). Our specific objectives were to measure the relevance and evaluate the representativeness of IAM items for assessing information received by email. A 3-part mixed methods study was conducted (convergent design). In part 1 (quantitative longitudinal study), the relevance of IAM items was measured. Participants were 5596 physician members of the Canadian Medical Association who used the IAM. A total of 234,196 ratings were collected in 2012. The relevance of IAM items with respect to their main construct was calculated using descriptive statistics (relevance ratio R). In part 2 (qualitative descriptive study), the representativeness of IAM items was evaluated. A total of 15 family physicians completed semistructured face-to-face interviews. For each construct, we evaluated the representativeness of IAM items using a deductive-inductive thematic qualitative data analysis. In part 3 (mixing quantitative and qualitative parts), results from quantitative and qualitative analyses were reviewed, juxtaposed in a table, discussed with experts, and integrated. Thus, our final results are derived from the views of users (ecological content validation) and experts (logical content validation). Of the 23 IAM items, 21 were validated for content, while 2 were removed. In part 1 (quantitative results), 21 items were deemed relevant, while 2 items were deemed not relevant (R=4.86% [N=234,196] and R=3.04% [n=45,394], respectively). In part 2 (qualitative results), 22 items were deemed representative, while 1 item was not representative. In part 3 (mixing quantitative and qualitative results), the content validity of 21 items was confirmed, and the 2 nonrelevant items were excluded. A fully validated version was generated (IAM-v2014). This study produced a content validated IAM questionnaire that is used by clinicians and information providers to assess the clinical information delivered in continuing education programs. ©Hani Badran, Pierre Pluye, Roland Grad. Originally published in JMIR Medical Education (http://mededu.jmir.org), 14.03.2017.

  2. Construction of measurement uncertainty profiles for quantitative analysis of genetically modified organisms based on interlaboratory validation data.

    PubMed

    Macarthur, Roy; Feinberg, Max; Bertheau, Yves

    2010-01-01

    A method is presented for estimating the size of the uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods, collected through a number of interlaboratory studies, followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with the validation criteria of the European Network of GM Laboratories and the European Union (EU) Community Reference Laboratory, and it can be used as a fitness-for-purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be < 0.45% to demonstrate compliance, and > 1.8% to demonstrate noncompliance, with a labeling threshold of 0.9%.
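
    As a back-of-the-envelope check on the decision limits quoted above, assume the expected range of 50-200% of the true value corresponds to an expansion factor of 2 around the 0.9% labeling threshold (a reconstruction, not the paper's derivation):

        # Decision limits implied by a 50-200% expected range (factor of 2)
        # around the EU labeling threshold of 0.9% GMO.
        threshold = 0.9          # labeling threshold, % GMO
        expansion = 2.0          # measured result expected within [x/2, 2x]

        compliance_limit = threshold / expansion      # 0.45%: even if the true
        # value were twice the result, it would still be below the threshold
        noncompliance_limit = threshold * expansion   # 1.8%: even if the true
        # value were half the result, it would still be above the threshold

        print(compliance_limit, noncompliance_limit)  # 0.45 1.8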

  3. Development and Validation of a Safety Climate Scale for Manufacturing Industry

    PubMed Central

    Ghahramani, Abolfazl; Khalkhali, Hamid R.

    2015-01-01

    Background This paper describes the development of a scale for measuring safety climate. Methods This study was conducted in six manufacturing companies in Iran. The scale was developed by conducting a literature review on safety climate and constructing a question pool. The number of items was reduced to 71 after a screening process. Results The content validity analysis showed that 59 items had an excellent item content validity index (≥ 0.78) and content validity ratio (> 0.38). The exploratory factor analysis resulted in eight safety climate dimensions. The reliability value for the final 45-item scale was 0.96. The confirmatory factor analysis showed that the safety climate model was satisfactory. Conclusion This study produced a valid and reliable scale for measuring safety climate in manufacturing companies. PMID:26106508
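
    A sketch of the two content validity statistics named above, assuming the conventional Lawshe formula for the content validity ratio and a proportion-based item CVI; the expert counts and ratings below are made up:

        def cvr(n_essential, n_experts):
            """Lawshe's content validity ratio: (ne - N/2) / (N/2)."""
            half = n_experts / 2
            return (n_essential - half) / half

        def i_cvi(ratings, relevant=(3, 4)):
            """Item-level CVI: proportion of experts rating the item relevant
            (e.g., 3 or 4 on a 4-point relevance scale)."""
            return sum(r in relevant for r in ratings) / len(ratings)

        print(cvr(14, 20))             # 0.4 -> above the 0.38 cut-off
        print(i_cvi([4, 3, 4, 4, 2]))  # 0.8 -> above the 0.78 cut-off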

  4. Content validity and reliability of test of gross motor development in Chilean children

    PubMed Central

    Cano-Cappellacci, Marcelo; Leyton, Fernanda Aleitte; Carreño, Joshua Durán

    2016-01-01

    ABSTRACT OBJECTIVE To validate a Spanish version of the Test of Gross Motor Development (TGMD-2) for the Chilean population. METHODS Descriptive, cross-sectional, non-experimental validity and reliability study. Four translators, three experts, and 92 Chilean children aged five to 10 years, students at a primary school in Santiago, Chile, participated. The committee of experts carried out translation, back-translation, and revision processes to determine the cross-language equivalence and content validity of the test, using the content validity index, in 2013. In addition, a pilot study was carried out to determine the reliability of the Spanish version, using the intraclass correlation coefficient and the Bland-Altman method. We evaluated whether the results showed significant differences when the bat was replaced with a racket, using a t-test. RESULTS We obtained a content validity index higher than 0.80 for language clarity and relevance of the TGMD-2 for children. There were significant differences in the object control subtest when comparing the results with bat and racket. The intraclass correlation coefficients for inter-rater, intra-rater, and test-retest reliability were greater than 0.80 in all cases. CONCLUSIONS The TGMD-2 has appropriate content validity to be applied in the Chilean population. The reliability of this test is within the appropriate parameters and its use could be recommended in this population after the establishment of normative data, setting a further precedent for validation in other Latin American countries. PMID:26815160

  5. Psychometric and cognitive validation of a social capital measurement tool in Peru and Vietnam.

    PubMed

    De Silva, Mary J; Harpham, Trudy; Tuan, Tran; Bartolini, Rosario; Penny, Mary E; Huttly, Sharon R

    2006-02-01

    Social capital is a relatively new concept which has attracted significant attention in recent years. No consensus has yet been reached on how to measure social capital, resulting in a large number of different tools being available. While psychometric validation methods such as factor analysis have been used by a few studies to assess the internal validity of some tools, these techniques rely on data already collected by the tool and are therefore not capable of eliciting what the questions are actually measuring. The Young Lives (YL) study includes quantitative measures of caregivers' social capital in four countries (Vietnam, Peru, Ethiopia, and India) using a short version of the Adapted Social Capital Assessment Tool (SASCAT). A range of different psychometric methods including factor analysis were used to evaluate the construct validity of SASCAT in Peru and Vietnam. In addition, qualitative cognitive interviews with 20 respondents from Peru and 24 respondents from Vietnam were conducted to explore what each question is actually measuring. We argue that psychometric validation techniques alone are not sufficient to adequately validate multi-faceted social capital tools for use in different cultural settings. Psychometric techniques show SASCAT to be a valid tool reflecting known constructs and displaying postulated links with other variables. However, results from the cognitive interviews present a more mixed picture, with some questions being appropriately interpreted by respondents and others displaying significant differences between what the researchers intended them to measure and what they actually do. Using evidence from a range of methods of assessing validity has enabled the modification of an existing instrument into a valid and low-cost tool designed to measure social capital within larger surveys in Peru and Vietnam, with the potential for use in other developing countries following local piloting and cultural adaptation of the tool.

  6. Evaluating the reliability, validity, acceptability, and practicality of SMS text messaging as a tool to collect research data: results from the Feeding Your Baby project

    PubMed Central

    Donnan, Peter T; Symon, Andrew G; Kellett, Gillian; Monteith-Hodge, Ewa; Rauchhaus, Petra; Wyatt, Jeremy C

    2012-01-01

    Objective To test the reliability, validity, acceptability, and practicality of short message service (SMS) messaging for the collection of research data. Materials and methods The studies were carried out in a cohort of recently delivered women in Tayside, Scotland, UK, who were asked about their current infant feeding method and future feeding plans. Reliability was assessed by comparing their responses to two SMS messages sent 1 day apart. Validity was assessed by comparing their responses to text questions with the same question administered by phone 1 day later, by comparison with the same data collected from other sources, and by correlation with other related measures. Acceptability was evaluated using quantitative and qualitative questions, and practicality by analysis of a researcher log. Results The factual SMS question showed perfect agreement between repeated responses. Reliabilities for the numerical question were reasonable, with κ between 0.76 (95% CI 0.56 to 0.96) and 0.80 (95% CI 0.59 to 1.00). Validity for data compared with that collected by phone within 24 h (κ = 0.92 (95% CI 0.84 to 1.00)) and with health visitor data (κ = 0.85 (95% CI 0.73 to 0.97)) was excellent. Correlation validity between the text responses and other related demographic and clinical measures was as expected. Participants found the method a convenient and acceptable way of providing data. For researchers, SMS text messaging provided an easy and functional method of gathering a large volume of data. Conclusion In this sample and for these questions, SMS was a reliable and valid method for capturing research data. PMID:22539081
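
    The agreement statistic used above is Cohen's kappa; a minimal sketch with made-up feeding responses (the study's actual response pairing is not reproduced):

        # Cohen's kappa: chance-corrected agreement between two categorical
        # response sets, kappa = (po - pe) / (1 - pe).
        from collections import Counter

        def cohens_kappa(a, b):
            n = len(a)
            po = sum(x == y for x, y in zip(a, b)) / n    # observed agreement
            ca, cb = Counter(a), Counter(b)
            pe = sum(ca[k] * cb[k] for k in ca) / n**2    # chance agreement
            return (po - pe) / (1 - pe)

        day1 = ["breast", "formula", "mixed", "breast", "breast"]
        day2 = ["breast", "formula", "breast", "breast", "breast"]
        print(round(cohens_kappa(day1, day2), 2))         # 0.58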

  7. Validation of QuEChERS analytical technique for organochlorines and synthetic pyrethroids in fruits and vegetables using GC-ECD.

    PubMed

    Dubey, J K; Patyal, S K; Sharma, Ajay

    2018-03-19

    In the present-day scenario of increasing awareness of and concern about pesticides, it is very important to ensure the quality of the data generated in pesticide residue analysis. To impart confidence in the products, quality assurance and quality control are used as an integral part of quality management. To ensure better quality of results in pesticide residue analysis, validation of the analytical methods used is extremely important. In view of this, the QuEChERS (quick, easy, cheap, effective, rugged, and safe) multiresidue method for extraction of 13 organochlorines and seven synthetic pyrethroids in fruits and vegetables, followed by GC-ECD for quantification, was validated so that it could be used for analysis of samples received in the laboratory. The method was validated as per the guidelines issued by SANCO (from the French Santé for health and Consommateurs for consumers) in accordance with document SANCO/XXXX/2013. The parameters analyzed, viz., linearity, specificity, repeatability, reproducibility, and ruggedness, were found to have acceptable values, with a percent RSD of less than 10%. The limit of quantification (LOQ) was established at 0.01 mg kg(-1) for the organochlorines and 0.05 mg kg(-1) for the synthetic pyrethroids. The uncertainty of the measurement (MU) for all these compounds ranged between 1 and 10%. Matrix-matched calibration was used to compensate for the matrix effect on the quantification of the compounds. The overall recovery of the method ranged between 80 and 120%. These results demonstrate the applicability and acceptability of this method for routine estimation of residues of these 20 pesticides in fruits and vegetables by the laboratory.
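
    The two summary statistics behind recovery-based validations of this kind are percent recovery and percent RSD; a sketch with made-up replicate values:

        import statistics

        def percent_recovery(measured, spiked):
            """Recovery of a spiked amount, as a percentage."""
            return 100.0 * measured / spiked

        def percent_rsd(values):
            """Relative standard deviation, as a percentage of the mean."""
            return 100.0 * statistics.stdev(values) / statistics.mean(values)

        # Five replicate recoveries (%) at one spiking level -- made-up numbers.
        recoveries = [92.1, 95.4, 88.7, 90.3, 93.8]
        print(round(statistics.mean(recoveries), 1),
              round(percent_rsd(recoveries), 1))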

  8. A novel validation and calibration method for motion capture systems based on micro-triangulation.

    PubMed

    Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M

    2018-06-06

    Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of an engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, attributable to scaling error, was reduced to 0.77 mm, while the correlation of errors with their distance from the origin was reduced from 0.855 to 0.209. A simpler but less accurate absolute accuracy compensation method, using a tape measure over large distances, was also tested; it resulted in scaling compensation similar to that of the surveying method or of direct wand-size compensation by a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type that has not been, and cannot be, studied with previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
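
    The accuracy metric above is an RMSE over 3-D marker coordinates; a minimal sketch with made-up coordinates:

        # RMSE between marker coordinates from the camera system and the
        # surveying reference (N x 3 arrays of 3-D points).
        import numpy as np

        def rmse(measured, reference):
            d = np.linalg.norm(measured - reference, axis=1)
            return float(np.sqrt(np.mean(d**2)))

        measured  = np.array([[0.0, 0.0, 1.001], [1.002, 0.0, 0.0]])
        reference = np.array([[0.0, 0.0, 1.000], [1.000, 0.0, 0.0]])
        print(rmse(measured, reference))   # metres, here ~1.6e-3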

  9. Validation of a hydride generation atomic absorption spectrometry methodology for determination of mercury in fish designed for application in the Brazilian national residue control plan.

    PubMed

    Damin, Isabel C F; Santo, Maria A E; Hennigen, Rosmari; Vargas, Denise M

    2013-01-01

    In the present study, a method for the determination of mercury (Hg) in fish was validated according to ISO/IEC 17025, INMETRO (Brazil), and more recent European recommendations (Commission Decisions 2007/333/EC and 2002/657/EC) for implementation in the Brazilian National Residue Control Plan (NRCP) for routine applications. The parameters evaluated in the validation were investigated in detail. The limits of detection and quantification were 2.36 and 7.88 μg kg(-1) of Hg, respectively, while the recovery varied between 90 and 96% and the coefficient of variation for repeatability was 4.06-8.94%. Furthermore, a comparison using an external proficiency testing scheme was carried out. The validated method for the determination of mercury in fish by hydride generation atomic absorption spectrometry was considered suitable for implementation in routine analysis.
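
    Detection and quantification limits of this kind are conventionally derived from a calibration line; a sketch assuming the common ICH formulas LOD = 3.3σ/S and LOQ = 10σ/S (the σ and S values below are made up, not the paper's):

        def lod(sigma, slope):
            """Limit of detection from blank/intercept SD and slope."""
            return 3.3 * sigma / slope

        def loq(sigma, slope):
            """Limit of quantification from blank/intercept SD and slope."""
            return 10.0 * sigma / slope

        sigma, slope = 0.012, 0.0165     # made-up units
        print(lod(sigma, slope), loq(sigma, slope))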

  10. Development and validation of a discriminating in vitro dissolution method for a poorly soluble drug, olmesartan medoxomil: comparison between commercial tablets.

    PubMed

    Bajerski, Lisiane; Rossi, Rochele Cassanta; Dias, Carolina Lupi; Bergold, Ana Maria; Fröehlich, Pedro Eduardo

    2010-06-01

    A dissolution test for tablets containing 40 mg of olmesartan medoxomil (OLM) was developed and validated using both LC-UV and UV methods. After evaluation of the sink conditions, dissolution medium, and stability of the drug, the method was validated using USP apparatus 2, a 50 rpm rotation speed, and 900 ml of deaerated H2O + 0.5% sodium lauryl sulfate (w/v) at pH 6.8 (adjusted with 18% phosphoric acid) as the dissolution medium. The model-independent approach using the difference factor (f1) and similarity factor (f2), model-dependent methods, and dissolution efficiency were employed to compare dissolution profiles. The kinetic parameters of drug release were also investigated. The obtained results showed adequate dissolution profiles. The developed dissolution test was validated according to international guidelines. Since there is no monograph for this drug in tablets, the dissolution method presented here can be used as a quality control test for OLM in this dosage form, especially in batch-to-batch evaluation.
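
    The model-independent comparison above relies on the standard difference and similarity factors; a sketch with made-up dissolution profiles:

        # Difference (f1) and similarity (f2) factors for comparing dissolution
        # profiles R (reference) and T (test), per the usual definitions:
        # f1 = 100 * sum|R-T| / sum(R);  f2 = 50 * log10(100 / sqrt(1 + msd)).
        import math

        def f1(R, T):
            return 100.0 * sum(abs(r - t) for r, t in zip(R, T)) / sum(R)

        def f2(R, T):
            msd = sum((r - t) ** 2 for r, t in zip(R, T)) / len(R)
            return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

        R = [35, 58, 79, 92]   # % dissolved at successive time points
        T = [33, 55, 81, 90]
        print(round(f1(R, T), 1), round(f2(R, T), 1))  # f2 > 50 -> similar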

  11. The construct validity of session RPE during an intensive camp in young male Karate athletes

    PubMed Central

    Padulo, Johnny; Chaabène, Helmi; Tabben, Montassar; Haddad, Monoem; Gevat, Cecilia; Vando, Stefano; Maurino, Lucio; Chaouachi, Anis; Chamari, Karim

    2014-01-01

    Summary Background: The aim of this study was to assess the validity of the session rating of perceived exertion (RPE) method and two objective HR-based methods for quantifying training load (TL) in young karate athletes. Methods: Eleven athletes (age 12.50 ± 1.84 years) participated in this study. The training camp comprised 5 consecutive days with two training sessions per day. The construct validity of the session-RPE method in young karate athletes was studied by correlation analysis between the session-RPE training load and both Edwards' and Banister's training impulse scores. Results: Significant relationships were found between session-RPE across all sessions (11 athletes × 5 days × 2 sessions = 110) and both Edwards' (r values from 0.84 to 0.92, p < 0.001) and Banister's (r values from 0.84 to 0.97, p < 0.001) scores. Conclusion: This study showed that session-RPE can be considered a valid method for quantifying training load in young karate athletes. PMID:25332921
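
    A sketch of two of the training-load quantifications named above, session-RPE load and Edwards' heart-rate-zone load; the zone weights 1-5 follow the usual convention, while the inputs are made up:

        def session_rpe_load(rpe_cr10, duration_min):
            """Session-RPE load: CR-10 rating times session duration (a.u.)."""
            return rpe_cr10 * duration_min

        def edwards_load(minutes_in_zone):
            """Edwards' TL: minutes in five %HRmax zones (50-60 ... 90-100),
            weighted 1..5."""
            return sum(w * m for w, m in enumerate(minutes_in_zone, start=1))

        print(session_rpe_load(7, 90))             # 630 a.u.
        print(edwards_load([10, 20, 30, 20, 10]))  # 10+40+90+80+50 = 270 a.u.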

  12. Determination of Phosphorus and Potassium in Commercial Inorganic Fertilizers by Inductively Coupled Plasma-Optical Emission Spectrometry: Single-Laboratory Validation, First Action 2015.18.

    PubMed

    Thiex, Nancy J

    2016-07-01

    A previously validated method for the determination of both citrate-EDTA-soluble P and K and acid-soluble P and K in commercial inorganic fertilizers by inductively coupled plasma-optical emission spectrometry was submitted to the expert review panel (ERP) for fertilizers for consideration of First Action Official Method(SM) status. The ERP evaluated the single-laboratory validation results and recommended the method for First Action Official Method status and provided recommendations for achieving Final Action. Validation materials ranging from 4.4 to 52.4% P2O5 (1.7-22.7% P) and 3-62% K2O (2.5-51.1% K) were used for the validation. Recoveries from validation materials for citrate-soluble P and K ranged from 99.3 to 124.9% P and from 98.4 to 100.7% K. Recoveries from validation materials for acid-soluble "total" P and K ranged from 95.53 to 99.40% P and from 98.36 to 107.28% K. Values of r for citrate-soluble P and K, expressed as RSD, ranged from 0.28 to 1.30% for P and from 0.41 to 1.52% for K. Values of r for total P and K, expressed as RSD, ranged from 0.71 to 1.13% for P and from 0.39 to 1.18% for K. Based on the validation data, the ERP recommended the method (with alternatives for the citrate-soluble and the acid-soluble extractions) for First Action Official Method status and provided recommendations for achieving Final Action status.

  13. Validation of quantitative method for azoxystrobin residues in green beans and peas.

    PubMed

    Abdelraheem, Ehab M H; Hassan, Sayed M; Arief, Mohamed M H; Mohammad, Somaia G

    2015-09-01

    This study presents a method validation for extraction and quantitative analysis of azoxystrobin residues in green beans and peas using HPLC-UV, with the results confirmed by GC-MS. The method involved initial extraction with acetonitrile after the addition of salts (magnesium sulfate and sodium chloride), followed by a cleanup step with activated neutral carbon. The validation parameters linearity, matrix effect, LOQ, specificity, trueness, and repeatability precision were evaluated. The spiking levels for the trueness and precision experiments were 0.1, 0.5, and 3 mg/kg. For HPLC-UV analysis, mean recoveries ranged from 83.69% to 91.58% and from 81.99% to 107.85% for green beans and peas, respectively. For GC-MS analysis, mean recoveries ranged from 76.29% to 94.56% and from 80.77% to 100.91% for green beans and peas, respectively. According to these results, the method proved efficient for extraction and determination of azoxystrobin residues in green beans and peas. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Evaluating the efficiency of spectral resolution of univariate methods manipulating ratio spectra and comparing to multivariate methods: An application to ternary mixture in common cold preparation

    NASA Astrophysics Data System (ADS)

    Moustafa, Azza Aziz; Salem, Hesham; Hegazy, Maha; Ali, Omnia

    2015-02-01

    Simple, accurate, and selective methods have been developed and validated for simultaneous determination of a ternary mixture of chlorpheniramine maleate (CPM), pseudoephedrine HCl (PSE), and ibuprofen (IBF) in tablet dosage form. Four univariate methods manipulating ratio spectra were applied: method A is the double divisor-ratio difference spectrophotometric method (DD-RD); method B is the double divisor-derivative ratio spectrophotometric method (DD-DR); method C is the derivative ratio spectrum-zero crossing method (DRZC); and method D is mean centering of ratio spectra (MCR). Two multivariate methods, E and F, were also developed and validated: principal component regression (PCR) and partial least squares (PLS). The proposed methods have the advantage of simultaneous determination of the mentioned drugs without prior separation steps. They were successfully applied to laboratory-prepared mixtures and to a commercial pharmaceutical preparation without any interference from additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with those of the official methods, and no significant difference was observed regarding either accuracy or precision.
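
    A minimal multivariate-calibration sketch in the spirit of methods E and F, using PLS regression on synthetic three-component spectra (the real spectra and concentrations are not reproduced):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        n_samples, n_wavelengths = 30, 120
        pure = rng.random((3, n_wavelengths))       # pure-component "spectra"
        C = rng.uniform(0.1, 1.0, (n_samples, 3))   # CPM, PSE, IBF concentrations
        X = C @ pure + 0.01 * rng.standard_normal((n_samples, n_wavelengths))

        pls = PLSRegression(n_components=3).fit(X, C)
        print(np.round(pls.predict(X[:2]), 2))      # fitted concentrations
        print(np.round(C[:2], 2))                   # true concentrations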

  15. Validation of model-based deformation correction in image-guided liver surgery via tracked intraoperative ultrasound: preliminary method and results

    NASA Astrophysics Data System (ADS)

    Clements, Logan W.; Collins, Jarrod A.; Wu, Yifei; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.

    2015-03-01

    Soft tissue deformation represents a significant error source in current surgical navigation systems used for open hepatic procedures. While numerous algorithms have been proposed to rectify the tissue deformation that is encountered during open liver surgery, clinical validation of the proposed methods has been limited to surface-based metrics, and sub-surface validation has largely been performed via phantom experiments. Tracked intraoperative ultrasound (iUS) provides a means to digitize sub-surface anatomical landmarks during clinical procedures. The proposed method involves the validation of a deformation correction algorithm for open hepatic image-guided surgery systems via sub-surface targets digitized with tracked iUS. Intraoperative surface digitizations were acquired via a laser range scanner and an optically tracked stylus for the purposes of computing the physical-to-image space registration within the guidance system and for use in retrospective deformation correction. Upon completion of surface digitization, the organ was interrogated with a tracked iUS transducer, and the iUS images and corresponding tracked locations were recorded. After the procedure, the clinician reviewed the iUS images to delineate contours of anatomical target features for use in the validation procedure. Mean closest-point distances between the feature contours delineated in the iUS images and the corresponding 3-D anatomical model generated from the preoperative tomograms were computed to quantify the extent to which the deformation correction algorithm improved registration accuracy. The preliminary results for two patients indicate that the deformation correction method reduced target error by approximately 50%.
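
    A sketch of the mean closest-point distance used as the validation metric above, simplified to point-to-vertex distances with made-up points (real implementations typically use point-to-mesh distance):

        import numpy as np
        from scipy.spatial import cKDTree

        def mean_closest_point_distance(contour_pts, model_pts):
            """Mean distance from each contour point to its nearest model point."""
            tree = cKDTree(model_pts)
            d, _ = tree.query(contour_pts)
            return float(d.mean())

        model = np.random.default_rng(1).random((1000, 3))   # stand-in vertices
        contour = model[:50] + 0.005                          # offset "contour"
        print(mean_closest_point_distance(contour, model))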

  16. Comparison of on-site field measured inorganic arsenic in rice with laboratory measurements using a field deployable method: Method validation.

    PubMed

    Mlangeni, Angstone Thembachako; Vecchi, Valeria; Norton, Gareth J; Raab, Andrea; Krupp, Eva M; Feldmann, Joerg

    2018-10-15

    A commercial arsenic field kit designed to measure inorganic arsenic (iAs) in water was modified into a field deployable method (FDM) to measure iAs in rice. While the method has been validated to give precise and accurate results in the laboratory, its on-site field performance had not been evaluated. This study was designed to test the method on-site in Malawi in order to evaluate its accuracy and precision in the determination of iAs, by comparison with a validated reference method, and to provide original data on inorganic arsenic in Malawian rice and rice-based products. The method was validated against the established laboratory-based HPLC-ICP-MS. Statistical tests indicated no significant differences between on-site and laboratory iAs measurements determined using the FDM (p = 0.263, α = 0.05) or between on-site measurements and measurements determined using HPLC-ICP-MS (p = 0.299, α = 0.05). This method allows quick (within 1 h) and efficient on-site screening of iAs concentrations in rice. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. A New Green Method for the Quantitative Analysis of Enrofloxacin by Fourier-Transform Infrared Spectroscopy.

    PubMed

    Rebouças, Camila Tavares; Kogawa, Ana Carolina; Salgado, Hérida Regina Nunes

    2018-05-18

    Background: A green analytical chemistry method was developed for quantification of enrofloxacin in tablets. The drug, a second-generation fluoroquinolone, was first introduced in veterinary medicine for the treatment of infections caused by various bacterial species. Objective: This study proposed to develop, validate, and apply a reliable, low-cost, fast, and simple IR spectroscopy method for routine quantitative determination of enrofloxacin in tablets. Methods: The method was fully validated according to the International Conference on Harmonisation guidelines, showing accuracy, precision, selectivity, robustness, and linearity. Results: It was linear over the concentration range of 1.0-3.0 mg, with correlation coefficients >0.9999 and LOD and LOQ of 0.12 and 0.36 mg, respectively. Conclusions: Now that this IR method has met performance qualifications, it can be adopted and applied for the analysis of enrofloxacin tablets for production process control. The validated method can also be utilized to quantify enrofloxacin in tablets and thus is an environmentally friendly alternative for the routine analysis of enrofloxacin in quality control. Highlights: A new green method for the quantitative analysis of enrofloxacin by Fourier-transform infrared spectroscopy was validated. It is a fast, clean, and low-cost alternative for the evaluation of enrofloxacin tablets.

  18. Determination of Nitrogen, Phosphorus, and Potassium Release Rates of Slow- and Controlled-Release Fertilizers: Single-Laboratory Validation, First Action 2015.15.

    PubMed

    Thiex, Nancy

    2016-01-01

    A previously validated method for the determination of nitrogen release patterns of slow- and controlled-release fertilizers (SRFs and CRFs, respectively) was submitted to the Expert Review Panel (ERP) for Fertilizers for consideration of First Action Official Method(SM) status. The ERP evaluated the single-laboratory validation results, recommended the method for First Action Official Method status, and provided recommendations for achieving Final Action. The 180-day soil incubation-column leaching technique was demonstrated to be a robust and reliable method for characterizing N release patterns from SRFs and CRFs. The method was reproducible, and the results were only slightly affected by variations in environmental factors such as microbial activity, soil moisture, temperature, and texture. The release of P and K was also studied, but with fewer replications than for N. Optimization experiments on the accelerated 74 h extraction method indicated that temperature was the only factor found to substantially influence nutrient-release rates from the materials studied, and an optimized extraction profile was established as follows: 2 h at 25°C, 2 h at 50°C, 20 h at 55°C, and 50 h at 60°C.

  19. Solving time-dependent two-dimensional eddy current problems

    NASA Technical Reports Server (NTRS)

    Lee, Min Eig; Hariharan, S. I.; Ida, Nathan

    1988-01-01

    Results of transient eddy current calculations are reported. For simplicity, a two-dimensional transverse magnetic field incident on an infinitely long conductor is considered. The conductor is assumed to be a good but not perfect conductor. The resulting problem is an interface initial boundary value problem, with the boundary of the conductor being the interface. A finite difference method is used to march the solution explicitly in time, and the treatment of appropriate radiation conditions is given special consideration. Results are validated against approximate analytic solutions. Two stringent test cases, of high- and low-frequency incident waves, are considered to validate the results.
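
    A toy illustration of explicit time marching, reduced here to 1-D magnetic diffusion in a conductor rather than the paper's 2-D interface problem; all values are illustrative:

        # Explicit FTCS scheme for dB/dt = D * d2B/dx2, with D standing in
        # for 1/(mu*sigma) inside the conductor.
        import numpy as np

        nx, nt = 101, 500
        dx = 1e-3                       # m
        D = 0.02                        # m^2/s, stand-in diffusivity
        dt = 0.4 * dx**2 / D            # satisfies the FTCS stability bound
        B = np.zeros(nx)
        B[0] = 1.0                      # applied field at the surface

        for _ in range(nt):
            B[1:-1] += D * dt / dx**2 * (B[2:] - 2*B[1:-1] + B[:-2])
            B[0] = 1.0                  # boundary held by the incident field
        print(B[:5])                    # field diffusing into the conductor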

  20. History and development of the Schmidt-Hunter meta-analysis methods.

    PubMed

    Schmidt, Frank L

    2015-09-01

    In this article, I provide answers to the questions posed by Will Shadish about the history and development of the Schmidt-Hunter methods of meta-analysis. In the 1970s, I headed a research program on personnel selection at the US Office of Personnel Management (OPM). After our research showed that validity studies have low statistical power, OPM felt a need for a better way to demonstrate test validity, especially in light of court cases challenging selection methods. In response, we created our method of meta-analysis (initially called validity generalization). Results showed that most of the variability of validity estimates from study to study was because of sampling error and other research artifacts such as variations in range restriction and measurement error. Corrections for these artifacts in our research and in replications by others showed that the predictive validity of most tests was high and generalizable. This conclusion challenged long-standing beliefs and so provoked resistance, which over time was overcome. The 1982 book that we published extending these methods to research areas beyond personnel selection was positively received and was followed by expanded books in 1990, 2004, and 2014. Today, these methods are being applied in a wide variety of areas. Copyright © 2015 John Wiley & Sons, Ltd.
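
    The core "bare-bones" step of the method is subtracting the expected sampling-error variance from the observed variance of validity coefficients; a sketch assuming the standard Hunter-Schmidt formula, with made-up inputs:

        def sampling_error_variance(r_bar, n_bar):
            """Expected variance of r due to sampling error alone:
            (1 - r_bar^2)^2 / (n_bar - 1)."""
            return (1 - r_bar**2) ** 2 / (n_bar - 1)

        def residual_variance(var_obs, r_bar, n_bar):
            """Variance remaining after removing sampling error."""
            return var_obs - sampling_error_variance(r_bar, n_bar)

        # e.g., validity studies with mean r = .25, mean N = 70,
        # observed between-study variance .02:
        print(round(residual_variance(0.02, 0.25, 70), 4))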

  1. Evaluating the statistical performance of less applied algorithms in classification of worldview-3 imagery data in an urbanized landscape

    NASA Astrophysics Data System (ADS)

    Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa

    2018-03-01

    In the past decade, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, in which supervised image classification techniques play a central role. Hence, using a high-resolution WorldView-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, Bagged CART, the stochastic gradient boosting model, and a neural network with feature extraction, were tested and compared with two prevalent methods: random forest and a support vector machine with a linear kernel. To do so, each method was run ten times, and three validation techniques were used to estimate the accuracy statistics: cross-validation, independent validation, and validation with the full training set. Moreover, using ANOVA and the Tukey test, the statistical significance of differences between the classification methods was assessed. In general, the results showed that random forest, by a marginal difference over Bagged CART and the stochastic gradient boosting model, was the best-performing method, although based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that the neural network with feature extraction and the linear support vector machine had better processing speed than the others.
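
    A sketch of the comparison design described above: repeated cross-validation accuracies per classifier, compared with a one-way ANOVA (Tukey's HSD would follow, e.g., via statsmodels); synthetic data stands in for the imagery:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score
        from scipy.stats import f_oneway

        X, y = make_classification(n_samples=500, n_features=8, random_state=0)
        rf = RandomForestClassifier(random_state=0)
        svm = SVC(kernel="linear")

        acc_rf = cross_val_score(rf, X, y, cv=10)    # 10 accuracy estimates
        acc_svm = cross_val_score(svm, X, y, cv=10)
        print(acc_rf.mean(), acc_svm.mean(),
              f_oneway(acc_rf, acc_svm).pvalue)      # difference significant?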

  2. Global approach for the validation of an in-line Raman spectroscopic method to determine the API content in real-time during a hot-melt extrusion process.

    PubMed

    Netchacovitch, L; Thiry, J; De Bleye, C; Dumont, E; Cailletaud, J; Sacré, P-Y; Evrard, B; Hubert, Ph; Ziemons, E

    2017-08-15

    Since the Food and Drug Administration (FDA) published a guidance based on the Process Analytical Technology (PAT) approach, real-time analyses during manufacturing processes have been expanding rapidly. In this study, in-line Raman spectroscopic analyses were performed during a Hot-Melt Extrusion (HME) process to determine the Active Pharmaceutical Ingredient (API) content in real-time. The method was validated based on both a univariate and a multivariate approach, and the analytical performances of the resulting models were compared. Moreover, on the one hand, in-line data were correlated with the actual API concentration in the sample, quantified by a previously validated off-line confocal Raman microspectroscopic method. On the other hand, in-line data were also treated as a function of the nominal concentration based on the weighed amounts of the components in the prepared mixture. The importance of developing quantitative methods based on the use of a reference method was thus highlighted. The method was validated according to the total error approach, fixing the acceptance limits at ±15% and the α risk at 5%. The method meets the requirements of the European Pharmacopoeia for the uniformity of content of single-dose preparations. The validation ensures that future results will fall within the acceptance limits with a previously defined probability. Finally, the in-line validated method was compared with the off-line one to demonstrate its suitability for routine analyses. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. The ICR96 exon CNV validation series: a resource for orthogonal assessment of exon CNV calling in NGS data.

    PubMed

    Mahamdallie, Shazia; Ruark, Elise; Yost, Shawn; Ramsay, Emma; Uddin, Imran; Wylie, Harriett; Elliott, Anna; Strydom, Ann; Renwick, Anthony; Seal, Sheila; Rahman, Nazneen

    2017-01-01

    Detection of deletions and duplications of whole exons (exon CNVs) is a key requirement of genetic testing. Accurate detection of this variant type has proved very challenging in targeted next-generation sequencing (NGS) data, particularly if only a single exon is involved. Many different NGS exon CNV calling methods have been developed over the last five years. Such methods are usually evaluated using simulated and/or in-house data due to a lack of publicly available datasets with orthogonally generated results. This hinders tool comparisons, transparency, and reproducibility. To provide a community resource for assessment of exon CNV calling methods in targeted NGS data, we here present the ICR96 exon CNV validation series. The dataset includes high-quality sequencing data from a targeted NGS assay (the TruSight Cancer Panel) together with Multiplex Ligation-dependent Probe Amplification (MLPA) results for 96 independent samples. Of these, 66 samples contain at least one validated exon CNV and 30 samples have validated negative results for exon CNVs in 26 genes. The dataset includes 46 exon CNVs in BRCA1, BRCA2, TP53, MLH1, MSH2, MSH6, PMS2, EPCAM, or PTEN, giving excellent representation of the cancer predisposition genes most frequently tested in clinical practice. Moreover, the validated exon CNVs include 25 single-exon CNVs, the most difficult type of exon CNV to detect. The FASTQ files for the ICR96 exon CNV validation series can be accessed through the European Genome-phenome Archive (EGA) under the accession number EGAS00001002428.

  4. Determination of Chondroitin Sulfate Content in Raw Materials and Dietary Supplements by High-Performance Liquid Chromatography with UV Detection After Enzymatic Hydrolysis: Single-Laboratory Validation First Action 2015.11.

    PubMed

    Brunelle, Sharon L

    2016-01-01

    A previously validated method for determination of chondroitin sulfate in raw materials and dietary supplements was submitted to the AOAC Expert Review Panel (ERP) for Stakeholder Panel on Dietary Supplements Set 1 Ingredients (Anthocyanins, Chondroitin, and PDE5 Inhibitors) for consideration of First Action Official Methods(SM) status. The ERP evaluated the single-laboratory validation results against AOAC Standard Method Performance Requirements 2014.009. With recoveries of 100.8-101.6% in raw materials and 105.4-105.8% in finished products and precision of 0.25-1.8% RSDr within-day and 1.6-4.72% RSDr overall, the ERP adopted the method for First Action Official Methods status and provided recommendations for achieving Final Action status.

  5. Validation of Ion Chromatographic Method for Determination of Standard Inorganic Anions in Treated and Untreated Drinking Water

    NASA Astrophysics Data System (ADS)

    Ivanova, V.; Surleva, A.; Koleva, B.

    2018-06-01

    An ion chromatographic method for the determination of fluoride, chloride, nitrate, and sulphate in untreated and treated drinking waters is described. An automated Metrohm 850 IC Professional system equipped with a conductivity detector and a Metrosep A Supp 7-250 (250 x 4 mm) column was used. The method was validated for simultaneous determination of all studied analytes, and the results showed that the validated method meets the requirements of the current water legislation. The main analytical characteristics were estimated for each of the studied analytes: limits of detection, limits of quantification, working and linear ranges, repeatability and intermediate precision, and recovery. The trueness of the method was estimated by analysis of a certified reference material for soft drinking water, and a recovery test was performed on spiked drinking water samples. The measurement uncertainty was estimated. The method was applied to the analysis of drinking waters before and after chlorination.

  6. Contemporary Test Validity in Theory and Practice: A Primer for Discipline-Based Education Researchers

    PubMed Central

    Reeves, Todd D.; Marbach-Ad, Gili

    2016-01-01

    Most discipline-based education researchers (DBERs) were formally trained in the methods of scientific disciplines such as biology, chemistry, and physics, rather than social science disciplines such as psychology and education. As a result, DBERs may have never taken specific courses in the social science research methodology—either quantitative or qualitative—on which their scholarship often relies so heavily. One particular aspect of (quantitative) social science research that differs markedly from disciplines such as biology and chemistry is the instrumentation used to quantify phenomena. In response, this Research Methods essay offers a contemporary social science perspective on test validity and the validation process. The instructional piece explores the concepts of test validity, the validation process, validity evidence, and key threats to validity. The essay also includes an in-depth example of a validity argument and validation approach for a test of student argument analysis. In addition to DBERs, this essay should benefit practitioners (e.g., lab directors, faculty members) in the development, evaluation, and/or selection of instruments for their work assessing students or evaluating pedagogical innovations. PMID:26903498

  7. Dynamic leg length asymmetry during gait is not a valid method for estimating mild anatomic leg length discrepancy.

    PubMed

    Leporace, Gustavo; Batista, Luiz Alberto; Serra Cruz, Raphael; Zeitoune, Gabriel; Cavalin, Gabriel Armondi; Metsavaht, Leonardo

    2018-03-01

    The purpose of this study was to test the validity of dynamic leg length discrepancy (DLLD) during gait as a radiation-free screening method for measuring anatomic leg length discrepancy (ALLD). Thirty-three subjects with mild leg length discrepancy walked along a walkway, and DLLD was calculated using a motion analysis system. Pearson correlation and paired Student t-tests were applied to calculate the correlation and compare the differences between DLLD and ALLD (α = 0.05). The results showed that DLLD is not a valid method for predicting ALLD in subjects with mild limb discrepancy.

  8. A Case Study for Probabilistic Methods Validation (MSFC Center Director's Discretionary Fund, Project No. 94-26)

    NASA Technical Reports Server (NTRS)

    Price J. M.; Ortega, R.

    1998-01-01

    Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool for estimating structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure mode level.

  9. Worldwide Protein Data Bank validation information: usage and trends.

    PubMed

    Smart, Oliver S; Horský, Vladimír; Gore, Swanand; Svobodová Vařeková, Radka; Bendová, Veronika; Kleywegt, Gerard J; Velankar, Sameer

    2018-03-01

    Realising the importance of assessing the quality of the biomolecular structures deposited in the Protein Data Bank (PDB), the Worldwide Protein Data Bank (wwPDB) partners established Validation Task Forces to obtain advice on the methods and standards to be used to validate structures determined by X-ray crystallography, nuclear magnetic resonance spectroscopy and three-dimensional electron cryo-microscopy. The resulting wwPDB validation pipeline is an integral part of the wwPDB OneDep deposition, biocuration and validation system. The wwPDB Validation Service webserver (https://validate.wwpdb.org) can be used to perform checks prior to deposition. Here, it is shown how validation metrics can be combined to produce an overall score that allows the ranking of macromolecular structures and domains in search results. The ValTrendsDB database provides users with a convenient way to access and analyse validation information and other properties of X-ray crystal structures in the PDB, including investigating trends in and correlations between different structure properties and validation metrics.

  10. Worldwide Protein Data Bank validation information: usage and trends

    PubMed Central

    Horský, Vladimír; Gore, Swanand; Svobodová Vařeková, Radka; Bendová, Veronika

    2018-01-01

    Realising the importance of assessing the quality of the biomolecular structures deposited in the Protein Data Bank (PDB), the Worldwide Protein Data Bank (wwPDB) partners established Validation Task Forces to obtain advice on the methods and standards to be used to validate structures determined by X-ray crystallography, nuclear magnetic resonance spectroscopy and three-dimensional electron cryo-microscopy. The resulting wwPDB validation pipeline is an integral part of the wwPDB OneDep deposition, biocuration and validation system. The wwPDB Validation Service webserver (https://validate.wwpdb.org) can be used to perform checks prior to deposition. Here, it is shown how validation metrics can be combined to produce an overall score that allows the ranking of macromolecular structures and domains in search results. The ValTrendsDB database provides users with a convenient way to access and analyse validation information and other properties of X-ray crystal structures in the PDB, including investigating trends in and correlations between different structure properties and validation metrics. PMID:29533231

  11. A new analytical method for quantification of olive and palm oil in blends with other vegetable edible oils based on the chromatographic fingerprints from the methyl-transesterified fraction.

    PubMed

    Jiménez-Carvelo, Ana M; González-Casado, Antonio; Cuadros-Rodríguez, Luis

    2017-03-01

    A new analytical method for the quantification of olive oil and palm oil in blends with other edible vegetable oils (canola, safflower, corn, peanut, seeds, grapeseed, linseed, sesame, and soybean) using normal-phase liquid chromatography and chemometric tools was developed. The procedure for obtaining the chromatographic fingerprint of the methyl-transesterified fraction of each blend is described. The multivariate quantification methods used were Partial Least Squares Regression (PLS-R) and Support Vector Regression (SVR). The quantification results were evaluated by several parameters, such as the Root Mean Square Error of Validation (RMSEV), Mean Absolute Error of Validation (MAEV), and Median Absolute Error of Validation (MdAEV). It should be highlighted that with the proposed analytical method the chromatographic analysis takes only eight minutes; the results obtained showed the potential of this method and allowed quantification of mixtures of olive oil and palm oil with other vegetable oils. Copyright © 2016 Elsevier B.V. All rights reserved.
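
    The three validation error metrics named above have straightforward definitions; a sketch with made-up predictions:

        import numpy as np

        def rmsev(y_true, y_pred):
            """Root mean square error of validation."""
            return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

        def maev(y_true, y_pred):
            """Mean absolute error of validation."""
            return float(np.mean(np.abs(y_true - y_pred)))

        def mdaev(y_true, y_pred):
            """Median absolute error of validation."""
            return float(np.median(np.abs(y_true - y_pred)))

        y_true = np.array([10.0, 25.0, 40.0])   # % olive oil, made-up
        y_pred = np.array([11.2, 24.1, 41.5])
        print(rmsev(y_true, y_pred), maev(y_true, y_pred), mdaev(y_true, y_pred))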

  12. Development of an Itemwise Efficiency Scoring Method: Concurrent, Convergent, Discriminant, and Neuroimaging-Based Predictive Validity Assessed in a Large Community Sample

    PubMed Central

    Moore, Tyler M.; Reise, Steven P.; Roalf, David R.; Satterthwaite, Theodore D.; Davatzikos, Christos; Bilker, Warren B.; Port, Allison M.; Jackson, Chad T.; Ruparel, Kosha; Savitt, Adam P.; Baron, Robert B.; Gur, Raquel E.; Gur, Ruben C.

    2016-01-01

    Traditional “paper-and-pencil” testing is imprecise in measuring speed and hence limited in assessing performance efficiency, but computerized testing permits precision in measuring itemwise response time. We present a method of scoring performance efficiency (combining information from accuracy and speed) at the item level. Using a community sample of 9,498 youths aged 8-21, we calculated item-level efficiency scores on four neurocognitive tests and compared the concurrent, convergent, discriminant, and predictive validity of these scores to those obtained by simply averaging standardized speed and accuracy summed scores. Concurrent validity was measured by the scores' ability to distinguish men from women and their correlations with age; convergent and discriminant validity were measured by correlations with other scores inside and outside of their neurocognitive domains; predictive validity was measured by correlations with brain volume in regions associated with the specific neurocognitive abilities. Results support the ability of itemwise efficiency scoring to detect signals as strong as those detected by standard efficiency scoring methods. We find no evidence of superior validity of the itemwise scores over traditional scores, but point out several advantages of the former. The itemwise efficiency scoring method shows promise as an alternative to standard efficiency scoring methods, with overall moderate support from tests of four different types of validity. This method allows the use of existing item analysis methods and provides the convenient ability to adjust the overall emphasis of accuracy versus speed in the efficiency score, thus tailoring the scoring to the real-world demands the test is aiming to fulfill. PMID:26866796
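
    One common way to combine accuracy and speed into an efficiency score, shown here for illustration only (not necessarily the itemwise model developed in the paper):

        # Standardize accuracy and log response time, then take their
        # (signed) average: higher = more accurate and faster.
        import numpy as np

        def efficiency_scores(accuracy, rt_seconds):
            z_acc = (accuracy - accuracy.mean()) / accuracy.std()
            log_rt = np.log(rt_seconds)
            z_rt = (log_rt - log_rt.mean()) / log_rt.std()
            return (z_acc - z_rt) / 2

        acc = np.array([0.9, 0.8, 0.95, 0.7])   # made-up test takers
        rt = np.array([2.1, 1.5, 2.8, 1.2])
        print(efficiency_scores(acc, rt))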

  13. Spectroflurimetric estimation of the new antiviral agent ledipasvir in presence of sofosbuvir

    NASA Astrophysics Data System (ADS)

    Salama, Fathy M.; Attia, Khalid A.; Abouserie, Ahmed A.; El-Olemy, Ahmed; Abolmagd, Ebrahim

    2018-02-01

    A spectrofluorimetric method has been developed and validated for the selective quantitative determination of ledipasvir in the presence of sofosbuvir. In this method, the native fluorescence of ledipasvir in ethanol at 405 nm was measured after excitation at 340 nm. The proposed method was validated according to ICH guidelines and showed high sensitivity, accuracy, and precision. Furthermore, the method was successfully applied to the analysis of ledipasvir in a pharmaceutical dosage form without interference from sofosbuvir and other additives, and the results were statistically compared with those of a reported method, with no significant difference found.

  14. Improving the quality of discrete-choice experiments in health: how can we assess validity and reliability?

    PubMed

    Janssen, Ellen M; Marshall, Deborah A; Hauber, A Brett; Bridges, John F P

    2017-12-01

    The recent endorsement of discrete-choice experiments (DCEs) and other stated-preference methods by regulatory and health technology assessment (HTA) agencies has placed a greater focus on demonstrating the validity and reliability of preference results. Areas covered: We present a practical overview of tests of validity and reliability that have been applied in the health DCE literature and explore other study qualities of DCEs. From the published literature, we identify a variety of methods to assess the validity and reliability of DCEs and organize them into a conceptual model with four domains: measurement validity, measurement reliability, choice validity, and choice reliability. Each domain consists of three categories that can be assessed using one to four procedures (for a total of 24 tests). We describe how these tests have been applied in the literature and direct readers to applications of these tests in the health DCE literature. Based on a stakeholder engagement exercise, we consider the importance of study characteristics beyond traditional concepts of validity and reliability. Expert commentary: We discuss study design considerations to assess the validity and reliability of a DCE, consider limitations to the current application of tests, and discuss future work to consider the quality of DCEs in healthcare.

  15. Fecal electrolyte testing for evaluation of unexplained diarrhea: Validation of body fluid test accuracy in the absence of a reference method.

    PubMed

    Voskoboev, Nikolay V; Cambern, Sarah J; Hanley, Matthew M; Giesen, Callen D; Schilling, Jason J; Jannetto, Paul J; Lieske, John C; Block, Darci R

    2015-11-01

    Validation of tests performed on body fluids other than blood or urine can be challenging due to the lack of a reference method to confirm accuracy. The aim of this study was to evaluate alternative assessments of accuracy that laboratories can rely on to validate body fluid tests in the absence of a reference method, using the example of sodium (Na(+)), potassium (K(+)), and magnesium (Mg(2+)) testing in stool fluid. Validations of fecal Na(+), K(+), and Mg(2+) were performed on the Roche cobas 6000 c501 (Roche Diagnostics) using residual stool specimens submitted for clinical testing. Spiked recovery, mixing studies, and serial dilutions were performed, and the percent recovery of each analyte was calculated to assess accuracy. Results were confirmed by comparison to a reference method (ICP-OES, PerkinElmer). Mean recoveries for fecal electrolytes were: Na(+) upon spiking = 92%, mixing = 104%, and dilution = 105%; K(+) upon spiking = 94%, mixing = 96%, and dilution = 100%; and Mg(2+) upon spiking = 93%, mixing = 98%, and dilution = 100%. When autoanalyzer results were compared to reference ICP-OES results, Na(+) had a slope = 0.94, intercept = 4.1, and R(2) = 0.99; K(+) had a slope = 0.99, intercept = 0.7, and R(2) = 0.99; and Mg(2+) had a slope = 0.91, intercept = -4.6, and R(2) = 0.91. Calculated osmotic gaps from the two methods were highly correlated, with slope = 0.95, intercept = 4.5, and R(2) = 0.97. Acid pretreatment increased magnesium recovery from a subset of clinical specimens. A combination of mixing, spiking, and dilution recovery experiments is an acceptable surrogate for assessing accuracy in body fluid validations in the absence of a reference method. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  16. Measurement of Harm Outcomes in Older Adults after Hospital Discharge: Reliability and Validity

    PubMed Central

    Douglas, Alison; Letts, Lori; Eva, Kevin; Richardson, Julie

    2012-01-01

    Objectives. Defining and validating a measure of safety contributes to further validation of clinical measures. The objective was to define and examine the psychometric properties of the outcome “incidents of harm.” Methods. The Incident of Harm Caregiver Questionnaire was administered by telephone to caregivers of older adults discharged from hospital. Caregivers completed daily logs for one month, and medical charts were examined. Results. Test-retest reliability (n = 38) was high for the occurrence of an incident of harm (yes/no; kappa = 1.0) and the type of incident (agreement = 100%). Validation against daily logs found no disagreement regarding the occurrence or types of incidents. Validation against medical charts found no disagreement regarding incident occurrence but disagreement in half of the cases regarding incident type. Discussion. The data support the Incident of Harm Caregiver Questionnaire as a reliable and valid estimation of incidents for this sample and are important to researchers as a method to measure safety when validating clinical measures. PMID:22649728

  17. Theoretical relationship between vibration transmissibility and driving-point response functions of the human body.

    PubMed

    Dong, Ren G; Welcome, Daniel E; McDowell, Thomas W; Wu, John Z

    2013-11-25

    The relationship between the vibration transmissibility and driving-point response functions (DPRFs) of the human body is important for understanding vibration exposures of the system and for developing valid models. This study identified their theoretical relationship and demonstrated that the sum of the DPRFs can be expressed as a linear combination of the transmissibility functions of the individual mass elements distributed throughout the system. The relationship is verified using several human vibration models. This study also clarified the requirements for reliably quantifying transmissibility values used as references for calibrating the system models. As an example application, this study used the developed theory to perform a preliminary analysis of the method for calibrating models using both vibration transmissibility and DPRFs. The results of the analysis show that the combined method can theoretically result in a unique and valid solution of the model parameters, at least for linear systems. However, the validation of the method itself does not guarantee the validation of the calibrated model, because the validation of the calibration also depends on the model structure and the reliability and appropriate representation of the reference functions. The basic theory developed in this study is also applicable to the vibration analyses of other structures.

  18. Development and Validation of UV-Visible Spectrophotometric Method for Simultaneous Determination of Eperisone and Paracetamol in Solid Dosage Form.

    PubMed

    Khanage, Shantaram Gajanan; Mohite, Popat Baban; Jadhav, Sandeep

    2013-01-01

    Eperisone hydrochloride (EPE) is a potent new-generation antispasmodic drug used in the treatment of moderate to severe pain in combination with paracetamol (PAR). Both drugs are available in tablet dosage form in combination, with doses of 50 mg for EPE and 325 mg for PAR. The method is based upon the Q-absorption ratio approach for the simultaneous determination of EPE and PAR, which uses the ratio of absorbances at two selected wavelengths, one of which is the iso-absorptive point and the other the λmax of one of the two components. EPE and PAR show their iso-absorptive point at 260 nm in methanol; the second wavelength used is 249 nm, which is the λmax of PAR in methanol. Linearity was obtained in the concentration range of 5-25 μg/mL for EPE and 2-10 μg/mL for PAR. The proposed method was effectively applied to the tablet dosage form for estimation of both drugs. The accuracy and reproducibility results are close to 100%, with 2% RSD. Results of the analysis were validated statistically and found to be satisfactory, and the proposed method has been validated as per ICH guidelines. In summary, a simple, precise, and economical spectrophotometric method has been developed for the estimation of EPE and PAR in pharmaceutical formulations.
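
    A two-wavelength, two-component determination of this kind reduces to solving a 2 × 2 Beer-Lambert system; a generic sketch with made-up absorptivity coefficients (equal for both drugs at the iso-absorptive point, as the method requires):

        # Solve A = E @ c for the two concentrations, where E holds the
        # absorptivity coefficients and A the measured absorbances.
        import numpy as np

        # Rows: wavelengths (260 nm iso-absorptive, 249 nm); columns: EPE, PAR.
        E = np.array([[0.042, 0.042],    # equal absorptivities at the
                      [0.015, 0.058]])   # iso-absorptive point
        A = np.array([0.80, 0.70])       # measured absorbances (1 cm path)

        c = np.linalg.solve(E, A)        # concentrations, e.g. ug/mL
        print(c)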

  19. A user-targeted synthesis of the VALUE perfect predictor experiment

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Gutierrez, Jose; Kotlarski, Sven; Hertig, Elke; Wibig, Joanna; Rössler, Ole; Huth, Radan

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. We consider different aspects: (1) marginal aspects such as mean, variance, and extremes; (2) temporal aspects such as spell-length characteristics; (3) spatial aspects such as the de-correlation length of precipitation extremes; and (4) multi-variate aspects such as the interplay of temperature and precipitation or scale interactions. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur. Experiment 1 (perfect predictors): what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Experiment 2 (global climate model predictors): how good is the overall representation of regional climate, including errors inherited from global climate models? Experiment 3 (pseudo reality): do methods fail in representing regional climate change? Here, we present a user-targeted synthesis of the results of the first VALUE experiment. In this experiment, downscaling methods are driven with ERA-Interim reanalysis data over the period 1979-2008 to eliminate global climate model errors. As reference data we use, depending on the question addressed, (1) observations from 86 meteorological stations distributed across Europe; (2) gridded observations at the corresponding 86 locations; or (3) gridded, spatially extended observations for selected European regions. With more than 40 contributing methods, this study is the most comprehensive downscaling intercomparison project so far. The results clearly indicate that for several aspects the downscaling skill varies considerably between different methods; for specific purposes, some methods can therefore clearly be excluded.

  20. Implementation of the 3Rs (refinement, reduction, and replacement): validation and regulatory acceptance considerations for alternative toxicological test methods.

    PubMed

    Schechtman, Leonard M

    2002-01-01

    Toxicological testing in the current regulatory environment is steeped in a history of using animals to answer questions about the safety of products to which humans are exposed. That history forms the basis for the testing strategies that have evolved to satisfy the needs of the regulatory bodies that render decisions that affect, for the most part, virtually all phases of premarket product development and evaluation and, to a lesser extent, postmarketing surveillance. Only relatively recently have the levels of awareness of, and responsiveness to, animal welfare issues reached current proportions. That paradigm shift, although sluggish, has nevertheless been progressive. New and alternative toxicological methods for hazard evaluation and risk assessment have now been adopted and are being viewed as a means to address those issues in a manner that considers humane treatment of animals yet maintains scientific credibility and preserves the goal of ensuring human safety. To facilitate this transition, regulatory agencies and regulated industry must work together toward improved approaches. They will need assurance that the methods will be reliable and the results comparable with, or better than, those derived from the current classical methods. That confidence will be a function of the scientific validation and resultant acceptance of any given method. In the United States, to fulfill this need, the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) and its operational center, the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM), have been constituted as prescribed in federal law. Under this mandate, ICCVAM has developed a process and established criteria for the scientific validation and regulatory acceptance of new and alternative methods. The role of ICCVAM in the validation and acceptance process and the criteria instituted toward that end are described. Also discussed are the participation of the US Food and Drug Administration (FDA) in the ICCVAM process and that agency's approach to the application and implementation of ICCVAM-recommended methods.

  1. Methodology and issues of integral experiments selection for nuclear data validation

    NASA Astrophysics Data System (ADS)

    Tatiana, Ivanova; Ivanov, Evgeny; Hill, Ian

    2017-09-01

    Nuclear data validation involves a large suite of Integral Experiments (IEs) for criticality, reactor physics and dosimetry applications [1]. Benchmarks are often taken from international handbooks [2, 3]. Depending on the application, IEs have different degrees of usefulness in validation, and the use of a single benchmark is usually not advised; indeed, it may lead to erroneous interpretations and results [1]. This work aims at quantifying the importance of the benchmarks used in application-dependent cross-section validation. The approach is based on the well-known generalized linear least-squares method (GLLSM), extended to establish biases and uncertainties for given cross sections within a given energy interval. The statistical treatment results in a vector of weighting factors for the integral benchmarks; these factors characterize the value added by a benchmark to nuclear data validation for the given application. The methodology is illustrated by one example, selecting benchmarks for 239Pu cross-section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files) established at the Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).
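
    A toy numpy sketch of a GLLS-style adjustment under stated assumptions: the sensitivity matrix S, prior covariance C and benchmark covariance V below are invented two-benchmark/two-group placeholders, not Subgroup 39 data, and the leave-one-out variance reduction is only a crude stand-in for the paper's weighting factors:

    ```python
    import numpy as np

    x = np.array([1.00, 1.00])                 # two energy-group cross sections (relative)
    C = np.diag([0.04**2, 0.06**2])            # prior cross-section covariance (hypothetical)
    S = np.array([[0.8, 0.2],                  # benchmark sensitivities (hypothetical)
                  [0.1, 0.9]])
    V = np.diag([0.01**2, 0.02**2])            # benchmark experimental covariance
    m = np.array([0.99, 1.03])                 # measured benchmark values

    G = S @ C @ S.T + V
    K = C @ S.T @ np.linalg.inv(G)             # GLLS gain matrix
    x_adj = x + K @ (m - S @ x)                # adjusted cross sections
    C_adj = C - K @ S @ C                      # reduced (posterior) covariance

    # Crude benchmark importance: extra posterior variance left over when the
    # benchmark is removed from the adjustment.
    for i in range(len(m)):
        keep = [j for j in range(len(m)) if j != i]
        Gk = S[keep] @ C @ S[keep].T + V[np.ix_(keep, keep)]
        Ck = C - C @ S[keep].T @ np.linalg.inv(Gk) @ S[keep] @ C
        print(f"benchmark {i}: variance reduction it contributes = {np.trace(Ck - C_adj):.2e}")
    ```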

  2. Using digital photography in a clinical setting: a valid, accurate, and applicable method to assess food intake.

    PubMed

    Winzer, Eva; Luger, Maria; Schindler, Karin

    2018-06-01

    Regular monitoring of food intake is hardly integrated into clinical routine. The aim was therefore to examine the validity, accuracy, and applicability of an appropriate, quick and easy-to-use tool for recording food intake in a clinical setting. Two digital photography methods, the postMeal method (a picture after the meal) and the pre-postMeal method (a picture before and after the meal), and the visual estimation method (plate diagram; PD) were compared against the reference method (weighed food records; WFR). A total of 420 dishes from lunch (7 weeks) were estimated with both photography methods and the visual method. Validity, applicability, accuracy, and precision of the estimation methods, and additionally food waste, macronutrient composition, and energy content, were examined. Tests of validity revealed stronger correlations for the photography methods (postMeal: r = 0.971, p < 0.001; pre-postMeal: r = 0.995, p < 0.001) than for the visual estimation method (r = 0.810; p < 0.001). The pre-postMeal method showed smaller variability (bias < 1 g) and smaller overestimation and underestimation, and it estimated portion sizes accurately and precisely across all food items. Furthermore, the total food waste was 22% for lunch over the study period, with the highest food waste observed in salads and the lowest in desserts. The pre-postMeal digital photography method is thus valid, accurate, and applicable for monitoring food intake in a clinical setting, enabling quantitative and qualitative dietary assessment, so that nutritional care might be initiated earlier. The method might also be advantageous for the quantitative and qualitative evaluation of food waste, with a resulting reduction in costs.

  3. Multiple Imputation based Clustering Validation (MIV) for Big Longitudinal Trial Data with Missing Values in eHealth.

    PubMed

    Zhang, Zhaoyang; Fang, Hua; Wang, Honggang

    2016-06-01

    Web-delivered trials are an important component of eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal and high dimensional, with missing values. Unsupervised learning methods have been widely applied in this area; however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values, and more generally for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap- or cross-validation-based) and emerging (modularity-based) validation indices for general clustering methods, as well as the Xie and Beni index specific to fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate that the MI-based Xie and Beni index for fuzzy clustering is more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could be easily adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services.
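
    For reference, a minimal sketch of the Xie and Beni index mentioned above (the standard compactness-over-separation formula; the data, centers and membership matrix are random placeholders, not the MIV code):

    ```python
    import numpy as np

    def xie_beni(x, v, u, m=2.0):
        """Xie-Beni index: compactness / separation; lower is better.
        x: (n, p) data, v: (c, p) cluster centers, u: (c, n) fuzzy memberships."""
        d2 = ((x[None, :, :] - v[:, None, :]) ** 2).sum(axis=2)   # (c, n) squared distances
        compact = (u ** m * d2).sum()
        sep = min(((v[i] - v[j]) ** 2).sum()
                  for i in range(len(v)) for j in range(len(v)) if i != j)
        return compact / (len(x) * sep)

    rng = np.random.default_rng(1)
    x = rng.normal(size=(100, 2))
    v = np.array([[-1.0, 0.0], [1.0, 0.0]])
    u = rng.dirichlet(np.ones(2), size=100).T    # memberships sum to 1 over clusters
    print("XB:", xie_beni(x, v, u))
    ```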

  4. Multiple Imputation based Clustering Validation (MIV) for Big Longitudinal Trial Data with Missing Values in eHealth

    PubMed Central

    Zhang, Zhaoyang; Wang, Honggang

    2016-01-01

    Web-delivered trials are an important component of eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal and high dimensional, with missing values. Unsupervised learning methods have been widely applied in this area; however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values, and more generally for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap- or cross-validation-based) and emerging (modularity-based) validation indices for general clustering methods, as well as the Xie and Beni index specific to fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate that the MI-based Xie and Beni index for fuzzy clustering is more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could be easily adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services. PMID:27126063

  5. Measuring Adverse Events in Helicopter Emergency Medical Services: Establishing Content Validity

    PubMed Central

    Patterson, P. Daniel; Lave, Judith R.; Martin-Gill, Christian; Weaver, Matthew D.; Wadas, Richard J.; Arnold, Robert M.; Roth, Ronald N.; Mosesso, Vincent N.; Guyette, Francis X.; Rittenberger, Jon C.; Yealy, Donald M.

    2015-01-01

    Introduction We sought to create a valid framework for detecting Adverse Events (AEs) in the high-risk setting of Helicopter Emergency Medical Services (HEMS). Methods We assembled a panel of 10 expert clinicians (n=6 emergency medicine physicians and n=4 prehospital nurses and flight paramedics) affiliated with a large multi-state HEMS organization in the Northeast U.S. We used a modified Delphi technique to develop a framework for detecting AEs associated with the treatment of critically ill or injured patients. We used a widely applied measure, the Content Validity Index (CVI), to quantify the validity of the framework’s content. Results The expert panel of 10 clinicians reached consensus on a common AE definition and four-step protocol/process for AE detection in HEMS. The consensus-based framework is composed of three main components: 1) a trigger tool, 2) a method for rating proximal cause, and 3) a method for rating AE severity. The CVI findings isolate components of the framework considered content valid. Conclusions We demonstrate a standardized process for the development of a content valid framework for AE detection. The framework is a model for the development of a method for AE identification in other settings, including ground-based EMS. PMID:24003951

  6. Reference method for detection of Pgp mediated multidrug resistance in human hematological malignancies: a method validated by the laboratories of the French Drug Resistance Network.

    PubMed

    Huet, S; Marie, J P; Gualde, N; Robert, J

    1998-12-15

    Multidrug resistance (MDR) associated with overexpression of the MDR1 gene and of its product, P-glycoprotein (Pgp), plays an important role in limiting cancer treatment efficacy. Many studies have investigated Pgp expression in clinical samples of hematological malignancies but have failed to give a definitive conclusion on its usefulness. One convenient method for fluorescent detection of Pgp in malignant cells is flow cytometry, which, however, gives variable results from one laboratory to another, partly owing to the lack of a rigorously tested reference method. The purpose of this technical note is to describe each step of a reference flow cytometric method. Guidelines for sample handling, staining and analysis have been established both for Pgp detection with monoclonal antibodies directed against extracellular epitopes (MRK16, UIC2 and 4E3) and for Pgp functional activity measurement with Rhodamine 123 as a fluorescent probe. Both methods have been validated on cultured cell lines and clinical samples by 12 laboratories of the French Drug Resistance Network. This cross-validated multicentric study points out the steps crucial for the accuracy and reproducibility of the results, such as cell viability, data analysis and expression of results.

  7. Evaluation of biologic occupational risk control practices: quality indicators development and validation.

    PubMed

    Takahashi, Renata Ferreira; Gryschek, Anna Luíza F P L; Izumi Nichiata, Lúcia Yasuko; Lacerda, Rúbia Aparecida; Ciosak, Suely Itsuko; Gir, Elucir; Padoveze, Maria Clara

    2010-05-01

    There is growing demand for the adoption of qualification systems for health care practices. This study describes the development and validation of indicators for the evaluation of biologic occupational risk control programs. The study involved three stages: (1) setting up a research team, (2) development of indicators, and (3) validation of each attribute of the developed indicators by a team of recruited specialists. The content validation method was used, and a psychometric scale was developed for the specialists' assessment. A consensus technique was applied, and every attribute that obtained a Content Validity Index (CVI) of at least 0.75 was approved. Eight indicators were developed for the evaluation of the biologic occupational risk prevention program, with emphasis on accidents caused by sharp instruments and on occupational tuberculosis prevention. The indicators covered the evaluation of structure, process, and results at the prevention and biologic risk control levels. The majority of indicators achieved a favorable consensus on all validated attributes. The developed indicators were considered validated, and the method used for their construction and validation proved to be effective. Copyright (c) 2010 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
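
    A minimal sketch of the Content Validity Index computation used in the consensus step (the generic item-level CVI on a 4-point relevance scale; the 0.75 cut-off mirrors the abstract, but the ratings matrix is invented):

    ```python
    import numpy as np

    # rows = specialists, cols = indicator attributes; ratings on a 1-4 relevance scale
    ratings = np.array([
        [4, 3, 2, 4],
        [3, 4, 2, 4],
        [4, 4, 3, 3],
        [4, 3, 1, 4],
    ])

    # Item-level CVI: proportion of specialists rating the attribute 3 or 4.
    cvi = (ratings >= 3).mean(axis=0)
    for i, v in enumerate(cvi):
        print(f"attribute {i}: CVI = {v:.2f} -> {'approved' if v >= 0.75 else 'revise'}")
    ```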

  8. Evidence of Convergent and Discriminant Validity of Child, Teacher, and Peer Reports of Teacher-Student Support

    PubMed Central

    Li, Yan; Hughes, Jan N.; Kwok, Oi-man; Hsu, Hsien-Yuan

    2012-01-01

    This study investigated the construct validity of measures of teacher-student support in a sample of 709 ethnically diverse second and third grade academically at-risk students. Confirmatory factor analysis investigated the convergent and discriminant validities of teacher, child, and peer reports of teacher-student support and child conduct problems. Results supported the convergent and discriminant validity of scores on the measures. Peer reports accounted for the largest proportion of trait variance and non-significant method variance. Child reports accounted for the smallest proportion of trait variance and the largest method variance. A model with two latent factors provided a better fit to the data than a model with one factor, providing further evidence of the discriminant validity of measures of teacher-student support. Implications for research, policy, and practice are discussed. PMID:21767024

  9. Development and Validation of High Performance Liquid Chromatography Method for Determination Atorvastatin in Tablet

    NASA Astrophysics Data System (ADS)

    Yugatama, A.; Rohmani, S.; Dewangga, A.

    2018-03-01

    Atorvastatin is the primary choice for dyslipidemia treatment. Following the expiration of the atorvastatin patent, the pharmaceutical industry produces generic copies of the drug; methods for tablet quality testing that determine the atorvastatin content of tablets therefore need to be developed. The purpose of this research was to develop and validate a simple analytical method for atorvastatin tablets by HPLC. The HPLC system used in this experiment consisted of a Cosmosil C18 column (150 x 4.6 mm, 5 µm) as the reversed-phase stationary phase, a methanol-water mixture at pH 3 (80:20 v/v) as the mobile phase, a flow rate of 1 mL/min, and UV detection at 245 nm. The validation parameters included selectivity, linearity, accuracy, precision, limit of detection (LOD), and limit of quantitation (LOQ). The results of this study indicate that the developed method performed well on all of these parameters for the analysis of atorvastatin tablet content. The LOD and LOQ were 0.2 and 0.7 ng/mL, respectively, and the linearity range was 20-120 ng/mL.
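
    A minimal sketch of how LOD and LOQ are typically derived from a calibration line per ICH (LOD = 3.3 σ/S, LOQ = 10 σ/S, with σ the residual standard deviation and S the slope); the calibration data below are invented, not the study's:

    ```python
    import numpy as np

    conc = np.array([20, 40, 60, 80, 100, 120], dtype=float)    # ng/mL (hypothetical)
    area = np.array([410, 805, 1220, 1615, 2030, 2410], float)  # peak areas (hypothetical)

    slope, intercept = np.polyfit(conc, area, 1)
    resid = area - (slope * conc + intercept)
    sigma = resid.std(ddof=2)          # residual SD of the regression

    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    print(f"slope={slope:.2f}, LOD={lod:.2f} ng/mL, LOQ={loq:.2f} ng/mL")
    ```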

  10. The predictive validity of selection for entry into postgraduate training in general practice: evidence from three longitudinal studies

    PubMed Central

    Patterson, Fiona; Lievens, Filip; Kerrin, Máire; Munro, Neil; Irish, Bill

    2013-01-01

    Background The selection methodology for UK general practice is designed to accommodate several thousand applicants per year and targets six core attributes identified in a multi-method job-analysis study. Aim To evaluate the predictive validity of selection methods for entry into postgraduate training, comprising a clinical problem-solving test, a situational judgement test, and a selection centre. Design and setting A three-part longitudinal predictive validity study of selection into training for UK general practice. Method In sample 1, participants were junior doctors applying for training in general practice (n = 6824). In sample 2, participants were GP registrars 1 year into training (n = 196). In sample 3, participants were GP registrars sitting the licensing examination after 3 years, at the end of training (n = 2292). The outcome measures include: assessor ratings of performance in a selection centre comprising job simulation exercises (sample 1); supervisor ratings of trainee job performance 1 year into training (sample 2); and licensing examination results, including an applied knowledge examination and a 12-station clinical skills objective structured clinical examination (OSCE; sample 3). Results Performance ratings at selection predicted subsequent supervisor ratings of job performance 1 year later. Selection results also significantly predicted performance on both the clinical skills OSCE and applied knowledge examination for licensing at the end of training. Conclusion In combination, these longitudinal findings provide good evidence of the predictive validity of the selection methods, and are the first reported for entry into postgraduate training. Results show that the best predictor of work performance and training outcomes is a combination of a clinical problem-solving test, a situational judgement test, and a selection centre. Implications for selection methods for all postgraduate specialties are considered. PMID:24267856

  11. A validated method for rapid determination of dibenzo-p-dioxins/furans (PCDD/Fs), polybrominated diphenyl ethers (PBDEs) and polychlorinated biphenyls (PCBs) in human milk: focus on utility of tandem solid phase extraction (SPE) cleanup.

    PubMed

    Lin, Yuanjie; Feng, Chao; Xu, Qian; Lu, Dasheng; Qiu, Xinlei; Jin, Yu'e; Wang, Guoquan; Wang, Dongli; She, Jianwen; Zhou, Zhijun

    2016-07-01

    An improved method based on tandem solid phase extraction (SPE) cleanup and gas chromatography-high resolution mass spectrometry (GC-HRMS) has been validated for the rapid determination of dibenzo-p-dioxins/furans (PCDD/Fs), dioxin-like polychlorinated biphenyls (PCBs), marker polychlorinated biphenyls (M-PCBs), and polybrominated diphenyl ethers (PBDEs) using a large volume (50 mL) of human milk. The method was well validated for the measurement of these analytes in human milk from the general population, with low limits of detection (LODs, 0.004-0.12 ng/g lipid), satisfactory accuracy (recoveries of 75-120%), and good precision [relative standard deviations (RSDs) below 10%]. To evaluate the performance of this method comprehensively, a validated and routinely used method based on an automated sample clean-up system (ASCS; based on commercial acid multilayer silica, basic alumina, and carbon columns) was run in parallel for comparison. The new method showed specificity comparable to that of the ASCS method while greatly reducing solvent consumption (40 mL versus 500 mL), which results in a much lower procedural-blank background, shorter analysis time, and higher sample-pretreatment throughput. The method was also applied in a pilot study to measure a batch of human milk samples, with satisfactory results. Graphical abstract: characteristics of the application of tandem SPE cleanup for the determination of PCDD/Fs, DL-PCBs, M-PCBs and PBDEs in human milk.

  12. Automated finite element meshing of the lumbar spine: Verification and validation with 18 specimen-specific models.

    PubMed

    Campbell, J Q; Coombs, D J; Rao, M; Rullkoetter, P J; Petrella, A J

    2016-09-06

    The purpose of this study was to seek broad verification and validation of human lumbar spine finite element models created using a previously published automated algorithm. The automated algorithm takes segmented CT scans of lumbar vertebrae, automatically identifies important landmarks and contact surfaces, and creates a finite element model. Mesh convergence was evaluated by examining changes in key output variables in response to mesh density. Semi-direct validation was performed by comparing experimental results for a single specimen to the automated finite element model results for that specimen with calibrated material properties from a prior study. Indirect validation was based on a comparison of results from automated finite element models of 18 individual specimens, all using one set of generalized material properties, to a range of data from the literature. A total of 216 simulations were run and compared to 186 experimental data ranges in all six primary bending modes up to 7.8 Nm with follower loads up to 1000 N. Mesh convergence results showed less than a 5% difference in key variables when the original mesh density was doubled. The semi-direct validation results showed that the automated method produced results comparable to manual finite element modeling methods. The indirect validation results showed a wide range of outcomes due to variations in the geometry alone. The studies showed that the automated models can be used to reliably evaluate lumbar spine biomechanics, specifically within our intended context of use: in pure bending modes, under relatively low non-injurious simulated in vivo loads, to predict torque rotation response, disc pressures, and facet forces. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. A collaborative design method to support integrated care. An ICT development method containing continuous user validation improves the entire care process and the individual work situation

    PubMed Central

    Scandurra, Isabella; Hägglund, Maria

    2009-01-01

    Introduction Integrated care involves different professionals, belonging to different care provider organizations and requires immediate and ubiquitous access to patient-oriented information, supporting an integrated view on the care process [1]. Purpose To present a method for development of usable and work process-oriented information and communication technology (ICT) systems for integrated care. Theory and method Based on Human-computer Interaction Science and in particular Participatory Design [2], we present a new collaborative design method in the context of health information systems (HIS) development [3]. This method implies a thorough analysis of the entire interdisciplinary cooperative work and a transformation of the results into technical specifications, via user validated scenarios, prototypes and use cases, ultimately leading to the development of appropriate ICT for the variety of occurring work situations for different user groups, or professions, in integrated care. Results and conclusions Application of the method in homecare of the elderly resulted in an HIS that was well adapted to the intended user groups. Conducted in multi-disciplinary seminars, the method captured and validated user needs and system requirements for different professionals, work situations, and environments not only for current work; it also aimed to improve collaboration in future (ICT supported) work processes. A holistic view of the entire care process was obtained and supported through different views of the HIS for different user groups, resulting in improved work in the entire care process as well as for each collaborating profession [4].

  14. Overall uncertainty measurement for near infrared analysis of cryptotanshinone in tanshinone extract

    NASA Astrophysics Data System (ADS)

    Xue, Zhong; Xu, Bing; Shi, Xinyuan; Yang, Chan; Cui, Xianglong; Luo, Gan; Qiao, Yanjiang

    2017-01-01

    This study presented a new strategy for overall uncertainty measurement in near infrared (NIR) quantitative analysis of cryptotanshinone in tanshinone extract powders. The overall uncertainty of the NIR analysis was fully investigated and discussed using the validation data from the precision, trueness and robustness studies. Quality by design (QbD) elements, such as risk assessment and design of experiments (DOE), were utilized to organize the validation data. An I × J × K (number of series I, number of repetitions J and number of concentration levels K) full factorial design was used to calculate uncertainty from the precision and trueness data, and a 2^(7-4) Plackett-Burman matrix covering four influence factors identified by failure mode and effect analysis (FMEA) was adopted for the robustness study. The overall uncertainty profile was introduced as a graphical decision-making tool to evaluate the validity of the NIR method over the predefined concentration range. In comparison with T. Saffaj's method (Analyst, 2013, 138, 4677) for overall uncertainty assessment, the proposed approach gave almost the same results, demonstrating that it is reasonable and valid. Moreover, the proposed method can help identify critical factors that influence the NIR prediction performance, which could be used for further optimization of the NIR analytical procedure in routine use.

  15. Development and validation of an ionic chromatography method for the determination of nitrate, nitrite and chloride in meat.

    PubMed

    Lopez-Moreno, Cristina; Perez, Isabel Viera; Urbano, Ana M

    2016-03-01

    The purpose of this study was to develop and validate a method for the analysis of certain preservatives in meat and to obtain a suitable Certified Reference Material (CRM) for this task. The preservatives studied were NO3(-), NO2(-) and Cl(-), as they serve as important antimicrobial agents in meat, inhibiting the growth of spoilage bacteria. The meat samples were prepared using a treatment that allowed the production of a CRM of known concentration that is highly homogeneous and stable over time. Matrix effects were also studied to evaluate the influence on the analytical signal for the ions of interest, showing that the matrix does not affect the final result. An assessment of the signal variation over time was carried out for the ions: although the chloride and nitrate signals remained stable for the duration of the study, the nitrite signal decreased appreciably with time. A mathematical treatment of the data gave a stable nitrite signal, yielding a method suitable for the determination of these anions in meat. A statistical study was carried out for the validation of the method, in which precision, accuracy, uncertainty and other parameters were evaluated, with satisfactory results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Development of models for classification of action between heat-clearing herbs and blood-activating stasis-resolving herbs based on theory of traditional Chinese medicine.

    PubMed

    Chen, Zhao; Cao, Yanfeng; He, Shuaibing; Qiao, Yanjiang

    2018-01-01

    Action (" gongxiao " in Chinese) of traditional Chinese medicine (TCM) is the high recapitulation for therapeutic and health-preserving effects under the guidance of TCM theory. TCM-defined herbal properties (" yaoxing " in Chinese) had been used in this research. TCM herbal property (TCM-HP) is the high generalization and summary for actions, both of which come from long-term effective clinical practice in two thousands of years in China. However, the specific relationship between TCM-HP and action of TCM is complex and unclear from a scientific perspective. The research about this is conducive to expound the connotation of TCM-HP theory and is of important significance for the development of the TCM-HP theory. One hundred and thirty-three herbs including 88 heat-clearing herbs (HCHs) and 45 blood-activating stasis-resolving herbs (BAHRHs) were collected from reputable TCM literatures, and their corresponding TCM-HPs/actions information were collected from Chinese pharmacopoeia (2015 edition). The Kennard-Stone (K-S) algorithm was used to split 133 herbs into 100 calibration samples and 33 validation samples. Then, machine learning methods including supported vector machine (SVM), k-nearest neighbor (kNN) and deep learning methods including deep belief network (DBN), convolutional neutral network (CNN) were adopted to develop action classification models based on TCM-HP theory, respectively. In order to ensure robustness, these four classification methods were evaluated by using the method of tenfold cross validation and 20 external validation samples for prediction. As results, 72.7-100% of 33 validation samples including 17 HCHs and 16 BASRHs were correctly predicted by these four types of methods. Both of the DBN and CNN methods gave out the best results and their sensitivity, specificity, precision, accuracy were all 100.00%. Especially, the predicted results of external validation set showed that the performance of deep learning methods (DBN, CNN) were better than traditional machine learning methods (kNN, SVM) in terms of their sensitivity, specificity, precision, accuracy. Moreover, the distribution patterns of TCM-HPs of HCHs and BASRHs were also analyzed to detect the featured TCM-HPs of these two types of herbs. The result showed that the featured TCM-HPs of HCHs were cold, bitter, liver and stomach meridians entered, while those of BASRHs were warm, bitter and pungent, liver meridian entered. The performance on validation set and external validation set of deep learning methods (DBN, CNN) were better than machine learning models (kNN, SVM) in sensitivity, specificity, precision, accuracy when predicting the actions of heat-clearing and blood-activating stasis-resolving based on TCM-HP theory. The deep learning classification methods owned better generalization ability and accuracy when predicting the actions of heat-clearing and blood-activating stasis-resolving based on TCM-HP theory. Besides, the methods of deep learning would help us to improve our understanding about the relationship between herbal property and action, as well as to enrich and develop the theory of TCM-HP scientifically.

  17. High-order continuum kinetic method for modeling plasma dynamics in phase space

    DOE PAGES

    Vogman, G. V.; Colella, P.; Shumlak, U.

    2014-12-15

    Continuum methods offer a high-fidelity means of simulating plasma kinetics. While computationally intensive, these methods are advantageous because they can be cast in conservation-law form, are not susceptible to noise, and can be implemented using high-order numerical methods. Advances in continuum method capabilities for modeling kinetic phenomena in plasmas require the development of validation tools in higher-dimensional phase space and an ability to handle non-Cartesian geometries. To that end, a new benchmark for validating Vlasov-Poisson simulations in 3D (x, vx, vy) is presented. The benchmark is based on the Dory-Guest-Harris instability and is successfully used to validate a continuum finite-volume algorithm. To address challenges associated with non-Cartesian geometries, unique features of cylindrical phase-space coordinates are described. Preliminary results of continuum kinetic simulations in 4D (r, z, vr, vz) phase space are presented.

  18. Validation of the breast evaluation questionnaire for breast hypertrophy and breast reduction.

    PubMed

    Lewin, Richard; Elander, Anna; Lundberg, Jonas; Hansson, Emma; Thorarinsson, Andri; Claudelin, Malin; Bladh, Helena; Lidén, Mattias

    2018-06-13

    There is a lack of published, validated questionnaires for evaluating psychosocial morbidity in patients with breast hypertrophy undergoing breast reduction surgery. The aim was to validate the breast evaluation questionnaire (BEQ), originally developed for the assessment of breast augmentation patients, for the assessment of psychosocial morbidity in patients with breast hypertrophy undergoing breast reduction surgery. Design: validation study. Subjects: women with macromastia. Methods: the validation of the BEQ, adapted to breast reduction, was performed in several steps; content validity, reliability, construct validity and responsiveness were assessed. The original version was adjusted according to the results for content validity, which led to item reduction and a modified BEQ (mBEQ) that was then assessed for reliability, construct validity and responsiveness. Internal and external validation was performed for the modified BEQ. Convergent validity was tested against the Breast-Q (reduction), and discriminant validity was tested against the SF-36. Known-groups validation revealed significant differences between the normal population and patients undergoing breast reduction surgery. The BEQ showed good reliability on test-retest analysis and high responsiveness. The modified BEQ appears to be a reliable, valid and responsive instrument for assessing women who undergo breast reduction.

  19. Monitoring of platinum surface contamination in seven Dutch hospital pharmacies using inductively coupled plasma mass spectrometry

    PubMed Central

    Huitema, A. D. R.; Bakker, E. N.; Douma, J. W.; Schimmel, K. J. M.; van Weringh, G.; de Wolf, P. J.; Schellens, J. H. M.; Beijnen, J. H.

    2007-01-01

    Objective: To develop, validate, and apply a method for the determination of platinum contamination originating from cisplatinum, oxaliplatinum, and carboplatinum. Methods: Inductively coupled plasma mass spectrometry (ICP-MS) was used to determine platinum in wipe samples. The sampling procedure and the analytical conditions were optimised and the assay was validated. The method was applied to measure surface contamination in seven Dutch hospital pharmacies. Results: The developed method allowed reproducible quantification of 0.50 ng/L platinum (5 pg/wipe sample). Recoveries from stainless steel and linoleum surfaces ranged between 50.4 and 81.4% for the different platinum compounds tested. Platinum contamination was detected in 88% of the wipe samples. Although substantial variation in surface contamination between the pharmacies was noticed, in most pharmacies the laminar-airflow (LAF) hoods, the floor in front of the LAF hoods, door handles, and handles of service hatches showed positive results, demonstrating that contamination is spread throughout the preparation rooms. Conclusion: We developed and validated an ultrasensitive and reliable ICP-MS method for the determination of platinum in surface samples. Surface contamination with platinum was observed in all hospital pharmacies sampled; the interpretation of these results is, however, complicated. PMID:17377802

  20. An ultraviolet-spectrophotometric method for the determination of glimepiride in solid dosage forms.

    PubMed

    Afieroho, Ozadheoghene E; Okorie, Ogbonna; Okonkwo, Tochukwu J N

    2011-06-01

    Considering the cost of acquiring a liquid chromatographic instrument in underdeveloped economies, the rising incidence of diabetes mellitus, the need to evaluate the quality of glimepiride generics, and the need for less toxic processes, this research is imperative. Using 96% ethanol as the solvent, a less toxic and cost-effective spectrophotometric method for the determination of glimepiride in solid dosage forms was developed and validated. The method was validated for linearity, recovery accuracy, intra- and inter-day precision, specificity in the presence of excipients, and inter-day stability under laboratory conditions; Student's t test at the 95% confidence limit was used for the statistics. The validated parameters showed a λmax of 231 nm, a linearity range of 0.5-22 μg/mL, precision with relative SD of <1.0%, recovery accuracy of 100.8%, a regression equation of y = 45.741x + 0.0202 with R² = 0.999, a limit of detection of 0.35 μg/mL, and negligible interference from common excipients and colorants. The method was found to be accurate at the 95% confidence limit, with reproducibility comparable to the standard liquid chromatographic method, when used to assay the formulated products Amaryl® (sanofi-aventis, Paris, France) and Mepyril® (May & Baker Nigeria PLC, Ikeja, Nigeria). The results obtained for the validated parameters were within allowable limits, and the method is recommended for routine quality control analysis.
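
    A minimal sketch of how a tablet assay result is read off the reported calibration line y = 45.741x + 0.0202 via inverse prediction; the measured response and the dilution factor below are invented for illustration:

    ```python
    SLOPE, INTERCEPT = 45.741, 0.0202   # reported regression: y = 45.741x + 0.0202
    LINEAR_RANGE = (0.5, 22.0)          # validated linearity range, ug/mL

    def conc_from_response(y, dilution_factor=1.0):
        """Inverse prediction: concentration (ug/mL) in the measured solution,
        scaled back through any dilution applied during sample preparation."""
        x = (y - INTERCEPT) / SLOPE
        if not LINEAR_RANGE[0] <= x <= LINEAR_RANGE[1]:
            raise ValueError(f"{x:.2f} ug/mL is outside the validated linear range")
        return x * dilution_factor

    # Hypothetical measured response for a tablet extract diluted 100-fold:
    print(f"glimepiride: {conc_from_response(500.0, dilution_factor=100.0):.1f} ug/mL in the extract")
    ```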

  1. Validation and long-term evaluation of a modified on-line chiral analytical method for therapeutic drug monitoring of (R,S)-methadone in clinical samples.

    PubMed

    Ansermot, Nicolas; Rudaz, Serge; Brawand-Amey, Marlyse; Fleury-Souverain, Sandrine; Veuthey, Jean-Luc; Eap, Chin B

    2009-08-01

    Matrix effects, which represent an important issue in liquid chromatography coupled to mass spectrometry or tandem mass spectrometry detection, should be closely assessed during method development. For quantitative analysis, the use of a stable isotope-labelled internal standard with physico-chemical properties and ionization behaviour similar to those of the analyte is recommended. In this paper, an example of the choice of a co-eluting deuterated internal standard to compensate for short-term and long-term matrix effects in the chiral quantification of (R,S)-methadone in plasma is reported. The method was fully validated over a concentration range of 5-800 ng/mL for each methadone enantiomer, with satisfactory relative bias (-1.0 to 1.0%), repeatability (0.9-4.9%) and intermediate precision (1.4-12.0%). From the results obtained during validation, a control-chart process covering 52 series of routine analyses was established, using both the intermediate-precision standard deviation and the FDA acceptance criteria. The results of routine quality control samples generally fell within ±15% of the target value, and mainly within the two-standard-deviation interval, illustrating the long-term stability of the method; the intermediate-precision variability estimated during method validation was found to be consistent with the routine use of the method. During this period, 257 trough-concentration and 54 peak-concentration plasma samples from patients undergoing (R,S)-methadone treatment were successfully analysed for routine therapeutic drug monitoring.
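
    A minimal sketch of the kind of control-chart check described above: each routine QC result is tested against ±2 SD intermediate-precision limits and the ±15% FDA acceptance window. The target concentration, SD and QC values are invented:

    ```python
    target = 200.0      # ng/mL, nominal QC concentration (hypothetical)
    s_ip = 8.0          # intermediate-precision SD from validation (hypothetical)
    qc = [205.1, 196.4, 188.0, 214.9, 231.5]   # routine QC results (hypothetical)

    for i, value in enumerate(qc, 1):
        within_2sd = abs(value - target) <= 2 * s_ip
        within_fda = abs(value - target) / target <= 0.15   # FDA +/-15% criterion
        print(f"run {i}: {value:6.1f}  2SD: {'ok' if within_2sd else 'WARN'}  "
              f"FDA 15%: {'ok' if within_fda else 'FAIL'}")
    ```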

  2. Validation Thin Layer Chromatography for the Determination of Acetaminophen in Tablets and Comparison with a Pharmacopeial Method

    PubMed Central

    Pyka, Alina; Budzisz, Marika; Dołowy, Małgorzata

    2013-01-01

    Adsorption thin-layer chromatography (NP-TLC) with densitometry has been established for the identification and quantification of acetaminophen in three leading commercial pharmaceutical tablet products, coded as brands P1 (Product no. 1), P2 (Product no. 2), and P3 (Product no. 3). The applied chromatographic conditions separated acetaminophen from its related substances, namely 4-aminophenol and 4′-chloroacetanilide. UV densitometry was performed in absorbance mode at 248 nm. The presented method was validated for specificity, range, linearity, accuracy, precision, detection limit, quantitation limit, and robustness. The TLC-densitometric method was also compared with a pharmacopeial UV-spectrophotometric method for the assay of acetaminophen, and the results confirmed statistically that the NP-TLC-densitometric method can be used as a substitute method. The validated NP-TLC-densitometric method is thus suitable for the routine analysis of acetaminophen in quality control laboratories. PMID:24063006

  3. QUANTIFICATION OF GLYCYRRHIZIN BIOMARKER IN GLYCYRRHIZA GLABRA RHIZOME AND BABY HERBAL FORMULATIONS BY VALIDATED RP-HPTLC METHODS

    PubMed Central

    Alam, Prawez; Foudah, Ahmed I.; Zaatout, Hala H.; T, Kamal Y; Abdel-Kader, Maged S.

    2017-01-01

    Background: A simple and sensitive reversed-phase HPTLC method has been established and validated for the quantification of glycyrrhizin in Glycyrrhiza glabra rhizome and baby herbal formulations. Materials and Methods: The RP-HPTLC analysis was carried out on glass-backed RP-18 silica gel 60 F254S HPTLC plates with methanol-water (7:3 v/v) as the mobile phase. Results: The developed plate was scanned and quantified densitometrically at 256 nm. Glycyrrhizin peaks from Glycyrrhiza glabra rhizome and the baby herbal formulations were identified by their single spot at Rf = 0.63 ± 0.01. Linear regression analysis revealed a good linear relationship between peak area and amount of glycyrrhizin in the range of 2000-7000 ng/band. Conclusion: The method was validated in accordance with ICH guidelines for precision, accuracy, and robustness. The proposed method will be useful for determining the therapeutic dose of glycyrrhizin in herbal formulations as well as in the bulk drug. PMID:28573236

  4. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    PubMed Central

    CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira

    2013-01-01

    Validation studies of physical anthropology methods in different population groups are extremely important, especially where population variation may hinder the identification of an individual when norms developed for other communities are applied. Objective: This study aimed to estimate the gender of skeletons by applying the method of Oliveira et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods: The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method uses two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls, and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results: Application of the method of Oliveira et al. (1995) to this population achieved very different accuracies between genders, 100% for females but only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of the measurement data for the population analyzed, with the creation of a new discriminant formula, allowed accuracies of 76.47% for males and 78.13% for females. Conclusion: Methods based on physical anthropology offer high accuracy for human identification and are easy to apply, low-cost and simple; however, they must be validated for different populations, because differences in ethnic patterns relate directly to phenotypic characteristics. In this specific case, the method of Oliveira et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely the Northeast and Southeast; for other regions of the country (North, Central West and South), prior methodological adjustment is recommended, as demonstrated in this study. PMID:24037076
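
    A minimal sketch of deriving such a discriminant formula from the two mandibular measurements (logistic regression on bigonial distance and ramus height; the measurement distributions are random placeholders, not the study sample):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 33  # per gender (hypothetical split of 66 skulls)
    # bigonial distance (mm), mandibular ramus height (mm) -- invented distributions
    males = np.column_stack([rng.normal(100, 5, n), rng.normal(65, 4, n)])
    females = np.column_stack([rng.normal(93, 5, n), rng.normal(59, 4, n)])
    X = np.vstack([males, females])
    y = np.array([1] * n + [0] * n)   # 1 = male, 0 = female

    model = LogisticRegression().fit(X, y)
    b0, (b1, b2) = model.intercept_[0], model.coef_[0]
    print(f"discriminant: logit(male) = {b0:.2f} + {b1:.3f}*bigonial + {b2:.3f}*ramus")
    print("accuracy on the sample:", model.score(X, y))
    ```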

  5. Precision of dehydroascorbic acid quantitation with the use of the subtraction method--validation of HPLC-DAD method for determination of total vitamin C in food.

    PubMed

    Mazurek, Artur; Jamroz, Jerzy

    2015-04-15

    In food analysis, a method for the determination of vitamin C should enable measurement of the total content of ascorbic acid (AA) and dehydroascorbic acid (DHAA), because both chemical forms exhibit biological activity. The aim of this work was to confirm the applicability of an HPLC-DAD method for the analysis of the total vitamin C content (TC) and of ascorbic acid in various types of food by determining validation parameters such as selectivity, precision, accuracy, linearity and the limits of detection and quantitation. The results showed that the method applied for the determination of TC and AA was selective, linear and precise. The precision of DHAA determination by the subtraction method was also evaluated: the results of DHAA determination obtained by subtraction were not precise, which follows directly from the assumption of this method and the principles of uncertainty propagation. The proposed chromatographic method is recommended for routine determinations of total vitamin C in various foods. Copyright © 2014 Elsevier Ltd. All rights reserved.
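
    The imprecision of the subtraction method follows from standard propagation of uncertainty: DHAA is obtained as a difference, so its absolute uncertainty combines those of the two measured quantities while its value shrinks, inflating the relative uncertainty. In the generic form (illustrative numbers below, not the paper's data):

    ```latex
    \mathrm{DHAA} = \mathrm{TC} - \mathrm{AA}, \qquad
    u(\mathrm{DHAA}) = \sqrt{u(\mathrm{TC})^2 + u(\mathrm{AA})^2}
    ```

    For example, if TC = 100 ± 2 mg/100 g and AA = 90 ± 2 mg/100 g, then DHAA = 10 ± 2.8 mg/100 g: a relative uncertainty of about 28%, versus 2% for either direct measurement.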

  6. Validation of Bayesian analysis of compartmental kinetic models in medical imaging.

    PubMed

    Sitek, Arkadiusz; Li, Quanzheng; El Fakhri, Georges; Alpert, Nathaniel M

    2016-10-01

    Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. The closed form of the posterior distribution of the kinetic parameters is derived, with a hierarchical prior to model the standard deviation of the normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of the kinetic parameters and the covariance of those estimates are determined using the classical non-linear least-squares approach. Posteriors obtained using the methods proposed in this work are accurate, as no significant deviation from the expected shape of the posterior was found (one-sided P > 0.08). It is demonstrated that the results obtained by the standard non-linear least-squares methods fail to provide accurate estimation of uncertainty for the same data set (P < 0.0001). These computer simulations of FDG kinetics thus validate the new methods: in situations where the classical approach fails to estimate uncertainty accurately, Bayesian estimation provides accurate information about the uncertainties in the parameters. Although a particular example of FDG kinetics was used in the paper, the methods can be extended to different pharmaceuticals and imaging modalities. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
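
    A minimal Metropolis sketch of the general idea: Bayesian estimation of a single washout rate k from noisy time-activity data with a uniform prior. This is a toy one-parameter stand-in for the paper's hierarchical compartmental model, and every number is invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0.5, 60, 24)                  # minutes
    k_true, sigma = 0.08, 0.03
    y = np.exp(-k_true * t) + rng.normal(0, sigma, t.size)   # simulated tissue curve

    def log_post(k):
        if k <= 0 or k > 1.0:                     # uniform prior on (0, 1]
            return -np.inf
        resid = y - np.exp(-k * t)
        return -0.5 * np.sum(resid**2) / sigma**2  # Gaussian likelihood, sigma known

    # Random-walk Metropolis sampling of the posterior
    k, samples = 0.1, []
    lp = log_post(k)
    for _ in range(20000):
        prop = k + rng.normal(0, 0.01)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            k, lp = prop, lp_prop
        samples.append(k)
    post = np.array(samples[5000:])               # discard burn-in
    print(f"posterior mean k = {post.mean():.3f}, 95% CI = "
          f"({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")
    ```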

  7. [Possibilities and limits of validating spatially modelled nitrate inputs into groundwater with the N2/Ar method]

    NASA Astrophysics Data System (ADS)

    Eschenbach, Wolfram; Budziak, Dörte; Elbracht, Jörg; Höper, Heinrich; Krienen, Lisa; Kunkel, Ralf; Meyer, Knut; Well, Reinhard; Wendland, Frank

    2018-06-01

    Valid models for estimating nitrate emissions from agriculture to groundwater are an indispensable forecasting tool. A major challenge for model validation is the spatial and temporal inconsistency between data from groundwater monitoring points and modelled nitrate inputs into groundwater, and the fact that many existing groundwater monitoring wells cannot be used for validation. With the help of the N2/Ar method, groundwater monitoring wells in areas with chemically reduced groundwater can now be used for model validation. For this purpose, 484 groundwater monitoring wells were sampled in Lower Saxony. For the first time, modelled potential nitrate concentrations in groundwater recharge (from the DENUZ model) were compared with nitrate input concentrations calculated using the N2/Ar method. The results show good agreement between the two methods for glacial outwash plains and moraine deposits. Although the nitrate degradation processes in groundwater and soil merge seamlessly in areas with a shallow groundwater table, the DENUZ model only calculates denitrification in the soil zone; the DENUZ model thus predicts 27% higher nitrate emissions into the groundwater than the N2/Ar method in such areas. To account for the high temporal and spatial variability of nitrate emissions into groundwater, a large number of groundwater monitoring points must be investigated for model validation.

  8. Validation of a HLA-A2 tetramer flow cytometric method, IFNgamma real time RT-PCR, and IFNgamma ELISPOT for detection of immunologic response to gp100 and MelanA/MART-1 in melanoma patients

    PubMed Central

    Xu, Yuanxin; Theobald, Valerie; Sung, Crystal; DePalma, Kathleen; Atwater, Laura; Seiger, Keirsten; Perricone, Michael A; Richards, Susan M

    2008-01-01

    Background HLA-A2 tetramer flow cytometry, IFNγ real time RT-PCR and IFNγ ELISPOT assays are commonly used as surrogate immunological endpoints for cancer immunotherapy. While these are often used as research assays to assess a patient's immunologic response, assay validation is necessary to ensure reliable and reproducible results and enable more accurate data interpretation. Here we describe a rigorous validation approach for each of these assays prior to their use for clinical sample analysis. Methods Standard operating procedures for each assay were established. The HLA-A2 (A*0201) tetramer assays specific for gp100 209(210M) and MART-1 26–35(27L), IFNγ real time RT-PCR and ELISPOT methods were validated using tumor infiltrating lymphocyte cell lines (TIL) isolated from HLA-A2 melanoma patients. TIL cells, specific for gp100 (TIL 1520) or MART-1 (TIL 1143 and TIL 1235), were used alone or spiked into cryopreserved HLA-A2 PBMC from healthy subjects. TIL/PBMC were stimulated with peptides (gp100 209, gp100 pool, MART-1 27–35, or influenza-M1, and the negative control peptide HIV) to further assess assay performance characteristics for the real time RT-PCR and ELISPOT methods. Validation parameters included specificity, accuracy, precision, linearity of dilution, limit of detection (LOD) and limit of quantification (LOQ). In addition, distribution was established in normal HLA-A2 PBMC samples, and reference ranges for assay controls were established. Results The validation process demonstrated that the HLA-A2 tetramer, IFNγ real time RT-PCR, and IFNγ ELISPOT assays were highly specific for each antigen, with minimal cross-reactivity between gp100 and MelanA/MART-1. The assays were sensitive; detection could be achieved at as few as 1/4545–1/6667 cells by tetramer analysis, 1/50,000 cells by real time RT-PCR, and 1/10,000–1/20,000 by ELISPOT. The assays met criteria for precision with %CV < 20% (except ELISPOT using high PBMC numbers, with %CV < 25%), although flow cytometric assays and cell-based functional assays are known to have high assay variability. Most importantly, the assays were demonstrated to be effective for their intended use. A positive IFNγ response (by RT-PCR and ELISPOT) to gp100 was demonstrated in PBMC from 3 melanoma patients. Another patient showed a positive MART-1 response measured by all 3 validated methods. Conclusion Our results demonstrated that the tetramer flow cytometry assay, IFNγ real-time RT-PCR, and IFNγ ELISPOT met validation criteria. These validation approaches provide a guide for others in the field to validate these and other similar assays for assessment of patient T cell response. The methods can be applied not only to cancer vaccines but to other therapeutic proteins as part of immunogenicity and safety analyses. PMID:18945350

  9. [Validation Study for Analytical Method of Diarrhetic Shellfish Poisons in 9 Kinds of Shellfish].

    PubMed

    Yamaguchi, Mizuka; Yamaguchi, Takahiro; Kakimoto, Kensaku; Nagayoshi, Haruna; Okihashi, Masahiro; Kajimura, Keiji

    2016-01-01

    A method was developed for the simultaneous determination of okadaic acid, dinophysistoxin-1 and dinophysistoxin-2 in shellfish using ultra-performance liquid chromatography with tandem mass spectrometry. Shellfish poisons in spiked samples were extracted with methanol and 90% methanol, and were hydrolyzed with 2.5 mol/L sodium hydroxide solution. Purification was done on an HLB solid-phase extraction column. The method was validated in accordance with the notification of the Ministry of Health, Labour and Welfare of Japan. In the validation study on nine kinds of shellfish, the trueness was 79-101%, and the repeatability and within-laboratory reproducibility were less than 12% and 16%, respectively. The trueness and precision met the target values of the notification.

  10. Interlaboratory study of a liquid chromatography method for erythromycin: determination of uncertainty.

    PubMed

    Dehouck, P; Vander Heyden, Y; Smeyers-Verbeke, J; Massart, D L; Marini, R D; Chiap, P; Hubert, Ph; Crommen, J; Van de Wauw, W; De Beer, J; Cox, R; Mathieu, G; Reepmeyer, J C; Voigt, B; Estevenon, O; Nicolas, A; Van Schepdael, A; Adams, E; Hoogmartens, J

    2003-08-22

    Erythromycin is a mixture of macrolide antibiotics produced by Saccharopolyspora erythraea during fermentation. A new method for the analysis of erythromycin by liquid chromatography, using an Astec C18 polymeric column, had previously been developed and validated in a single laboratory; the method has now been validated in an interlaboratory study. Validation studies are commonly used to test the fitness of an analytical method prior to its use for routine quality testing, but the data derived in an interlaboratory study can be used to make an uncertainty statement as well. The relationship between validation and the uncertainty statement is not clear to many analysts, and there is a need to show how the existing data derived during validation can be used in practice. Eight laboratories participated in this interlaboratory study. The set-up allowed determination of the repeatability variance s_r² and the between-laboratory variance s_L²; their combination gives the reproducibility variance s_R². It has been shown how these data can be used in the future by a single laboratory that wants to make an uncertainty statement concerning the same analysis.
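
    The variance decomposition behind that uncertainty statement, written in the standard ISO 5725 form (the general relation, not figures from the study); a single laboratory running the method can take s_R as its standard uncertainty and expand it with a coverage factor, typically k = 2:

    ```latex
    s_R^2 = s_r^2 + s_L^2, \qquad u = s_R, \qquad U = k\,u \quad (k = 2)
    ```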

  11. HPLC-MS/MS method for dexmedetomidine quantification with Design of Experiments approach: application to pediatric pharmacokinetic study.

    PubMed

    Szerkus, Oliwia; Struck-Lewicka, Wiktoria; Kordalewska, Marta; Bartosińska, Ewa; Bujak, Renata; Borsuk, Agnieszka; Bienert, Agnieszka; Bartkowska-Śniatkowska, Alicja; Warzybok, Justyna; Wiczling, Paweł; Nasal, Antoni; Kaliszan, Roman; Markuszewski, Michał Jan; Siluk, Danuta

    2017-02-01

    The purpose of this work was to develop and validate a rapid and robust LC-MS/MS method for the determination of dexmedetomidine (DEX) in plasma, suitable for analysis of a large number of samples. A systematic Design of Experiments approach was applied to optimize the ESI source parameters and to evaluate method robustness, yielding a rapid, stable and cost-effective assay. The method was validated according to US FDA guidelines. The LLOQ was determined at 5 pg/mL, and the assay was linear over the examined concentration range (5-2500 pg/mL; R² > 0.98). The accuracies and intra- and inter-day precisions were within 15%, and the stability data confirmed reliable behavior of DEX under the tested conditions. The Design of Experiments approach allowed fast and efficient analytical method development and validation, as well as reduced usage of the chemicals normally needed for method optimization. The method was applied to the determination of DEX pharmacokinetics in pediatric patients undergoing long-term sedation in the intensive care unit.

  12. Validation on milk and sprouts of EN ISO 16654:2001 - Microbiology of food and animal feeding stuffs - Horizontal method for the detection of Escherichia coli O157.

    PubMed

    Tozzoli, Rosangela; Maugliani, Antonella; Michelacci, Valeria; Minelli, Fabio; Caprioli, Alfredo; Morabito, Stefano

    2018-05-08

    In 2006, the European Committee for Standardisation (CEN)/Technical Committee 275 - Food analysis - Horizontal methods/Working Group 6 - Microbiology of the food chain (TC275/WG6) launched the project of validating the method ISO 16654:2001 for the detection of Escherichia coli O157 in foodstuffs by evaluating its performance, in terms of sensitivity and specificity, through collaborative studies. Previously, a validation study had been conducted to assess the performance of Method No 164 developed by the Nordic Committee for Food Analysis (NMKL), which also aims at detecting E. coli O157 in food and is based on a procedure equivalent to that of the ISO 16654:2001 standard. Therefore, CEN established that the validation data obtained for the NMKL Method 164 could be exploited for the ISO 16654:2001 validation project, integrated with new data obtained through two additional interlaboratory studies on milk and sprouts, run in the framework of CEN mandate No. M381. The ISO 16654:2001 validation project was led by the European Union Reference Laboratory for Escherichia coli including VTEC (EURL-VTEC), which organized the collaborative validation study on milk in 2012, with 15 participating laboratories, and that on sprouts in 2014, with 14 participating laboratories. In both studies, a total of 24 samples were tested by each laboratory. Test materials were spiked with different concentrations of E. coli O157, and the 24 samples corresponded to eight replicates of three levels of contamination: zero, low and high spiking levels. The results submitted by the participating laboratories were analyzed to evaluate the sensitivity and specificity of the ISO 16654:2001 method when applied to milk and sprouts. The performance characteristics calculated on the data of the collaborative validation studies run under CEN mandate No. M381 returned a sensitivity of 100% and a specificity of 94.4% for the milk study. For the sprouts matrix, the sensitivity was 75.9% for samples with the low contamination level and 96.4% for samples spiked with the high level of E. coli O157, and the specificity was 99.1%. Copyright © 2018 Elsevier B.V. All rights reserved.
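
    For clarity, the sensitivity and specificity reported in such a collaborative study reduce to simple proportions over the spiked and blank samples; a minimal sketch with invented counts:

    ```python
    # Sensitivity/specificity of a qualitative detection method from
    # collaborative-study counts (illustrative numbers, not the study's data).
    def sensitivity(true_pos: int, false_neg: int) -> float:
        # fraction of contaminated samples correctly reported positive
        return true_pos / (true_pos + false_neg)

    def specificity(true_neg: int, false_pos: int) -> float:
        # fraction of blank samples correctly reported negative
        return true_neg / (true_neg + false_pos)

    print(f"sensitivity = {sensitivity(110, 2):.3f}")
    print(f"specificity = {specificity(118, 2):.3f}")
    ```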

  13. Interlaboratory Validation of the Leaching Environmental Assessment Framework (LEAF) Method 1313 and Method 1316

    EPA Science Inventory

    This document summarizes the results of an interlaboratory study conducted to generate precision estimates for two parallel batch leaching methods which are part of the Leaching Environmental Assessment Framework (LEAF). These methods are: (1) Method 1313: Liquid-Solid Partition...

  14. Measuring benefits and patients' satisfaction when glasses are not needed after cataract and presbyopia surgery: scoring and psychometric validation of the Freedom from Glasses Value Scale (FGVS©)

    PubMed Central

    2010-01-01

    Background The purpose of this study was to reduce the number of items, create a scoring method and assess the psychometric properties of the Freedom from Glasses Value Scale (FGVS), which measures benefits of freedom from glasses perceived by cataract and presbyopic patients after multifocal intraocular lens (IOL) surgery. Methods The 21-item FGVS, developed simultaneously in French and Spanish, was administered by phone during an observational study to 152 French and 152 Spanish patients who had undergone cataract or presbyopia surgery at least 1 year before the study. Reduction of items and creation of the scoring method employed statistical methods (principal component analysis, multitrait analysis) and content analysis. Psychometric properties (validation of the structure, internal consistency reliability, and known-group validity) of the resulting version were assessed in the pooled population and per country. Results One item was deleted and 3 were kept but not aggregated in a dimension. The other 17 items were grouped into 2 dimensions ('global evaluation', 9 items; 'advantages', 8 items) and divided into 5 sub-dimensions, with higher scores indicating higher benefit of surgery. The structure was validated (good item convergent and discriminant validity). Internal consistency reliability was good for all dimensions and sub-dimensions (Cronbach's alphas above 0.70). The FGVS was able to discriminate between patients wearing glasses or not after surgery (higher scores for patients not wearing glasses). FGVS scores were significantly higher in Spain than France; however, the measure had similar psychometric performances in both countries. Conclusions The FGVS is a valid and reliable instrument measuring benefits of freedom from glasses perceived by cataract and presbyopic patients after multifocal IOL surgery. PMID:20497555

  15. Validity and Interrater Reliability of the Visual Quarter-Waste Method for Assessing Food Waste in Middle School and High School Cafeteria Settings.

    PubMed

    Getts, Katherine M; Quinn, Emilee L; Johnson, Donna B; Otten, Jennifer J

    2017-11-01

    Measuring food waste (ie, plate waste) in school cafeterias is an important tool to evaluate the effectiveness of school nutrition policies and interventions aimed at increasing consumption of healthier meals. Visual assessment methods are frequently applied in plate waste studies because they are more convenient than weighing. The visual quarter-waste method has become a common tool in studies of school meal waste and consumption, but previous studies of its validity and reliability have used correlation coefficients, which measure association but not necessarily agreement. The aims of this study were to determine, using a statistic measuring interrater agreement, whether the visual quarter-waste method is valid and reliable for assessing food waste in a school cafeteria setting when compared with the gold standard of weighed plate waste. To evaluate validity, researchers used the visual quarter-waste method and weighed food waste from 748 trays at four middle schools and five high schools in one school district in Washington State during May 2014. To assess interrater reliability, researcher pairs independently assessed 59 of the same trays using the visual quarter-waste method. Both validity and reliability were assessed using a weighted κ coefficient. For validity, as compared with the measured weight, 45% of foods assessed using the visual quarter-waste method were in almost perfect agreement, 42% of foods were in substantial agreement, 10% were in moderate agreement, and 3% were in slight agreement. For interrater reliability between pairs of visual assessors, 46% of foods were in perfect agreement, 31% were in almost perfect agreement, 15% were in substantial agreement, and 8% were in moderate agreement. These results suggest that the visual quarter-waste method is a valid and reliable tool for measuring plate waste in school cafeteria settings. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
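
    A weighted kappa of the kind used in this study is available in scikit-learn; the sketch below uses made-up ratings on a five-level quarter-waste scale:

    ```python
    # Weighted kappa agreement between visual quarter-waste ratings and
    # weighed waste binned to the same scale (hypothetical ratings).
    from sklearn.metrics import cohen_kappa_score

    # Ratings: 0 = nothing wasted, 1 = quarter, 2 = half, 3 = three quarters,
    # 4 = all wasted
    visual  = [0, 1, 2, 4, 3, 0, 2, 1, 4, 2]   # visual assessments
    weighed = [0, 1, 2, 3, 3, 0, 2, 2, 4, 2]   # weighed waste, binned

    # Linear weights penalize disagreements by their distance on the scale,
    # so a one-category miss counts less than a three-category miss
    kappa = cohen_kappa_score(visual, weighed, weights="linear")
    print(f"weighted kappa = {kappa:.2f}")
    ```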

  16. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    PubMed Central

    Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.

    2017-01-01

    Objective Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I² statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I² statistics and prediction intervals for c-statistics. Conclusion This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237
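
    The pooling described here follows standard random-effects meta-analysis; a minimal DerSimonian-Laird sketch with invented hospital-level c-statistics and standard errors (not necessarily the study's exact estimator):

    ```python
    # Random-effects pooling of hospital-specific c-statistics with an
    # I² heterogeneity estimate (hypothetical inputs).
    import numpy as np

    c = np.array([0.74, 0.77, 0.72, 0.76, 0.75])    # c-statistic per hospital
    se = np.array([0.02, 0.03, 0.025, 0.02, 0.03])  # standard errors

    w = 1 / se**2                                   # fixed-effect weights
    c_fixed = np.sum(w * c) / np.sum(w)
    Q = np.sum(w * (c - c_fixed) ** 2)              # Cochran's Q
    k = len(c)

    # DerSimonian-Laird between-hospital variance and I² (% of variation
    # beyond chance)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    I2 = max(0.0, (Q - (k - 1)) / Q) * 100

    w_re = 1 / (se**2 + tau2)                       # random-effects weights
    c_pooled = np.sum(w_re * c) / np.sum(w_re)
    print(f"pooled c = {c_pooled:.3f}, tau2 = {tau2:.5f}, I2 = {I2:.1f}%")
    ```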

  17. Determination of calcium, magnesium, sodium, and potassium in foodstuffs by using a microsampling flame atomic absorption spectrometric method after closed-vessel microwave digestion: method validation.

    PubMed

    Chekri, Rachida; Noël, Laurent; Vastel, Christelle; Millour, Sandrine; Kadar, Ali; Guérin, Thierry

    2010-01-01

    This paper describes a validation process in compliance with the NF EN ISO/IEC 17025 standard for the determination of the macrominerals calcium, magnesium, sodium, and potassium in foodstuffs by microsampling flame atomic absorption spectrometry after closed-vessel microwave digestion. The French Standards Commission (Agence Française de Normalisation) standards NF V03-110, NF EN V03-115, and XP T-90-210 were used to evaluate this method. The method was validated in the context of an analysis of the 1322 food samples of the second French Total Diet Study (TDS). Several performance criteria (linearity, LOQ, specificity, trueness, precision under repeatability conditions, and intermediate precision (reproducibility)) were evaluated. Furthermore, the method was monitored by several internal quality controls. The LOQ values obtained (25, 5, 8.3, and 8.3 mg/kg for Ca, Mg, Na, and K, respectively) were in compliance with the needs of the TDS. The method provided accurate results, as demonstrated by a repeatability CV (CVr) of <7% and a reproducibility CV (CVR) of <12% for all the elements. Therefore, the results indicated that this method could be used in the laboratory for the routine determination of these four elements in foodstuffs with acceptable analytical performance.

  18. Ultrasensitive NIR-SERRS Probes with Multiplexed Ratiometric Quantification for In Vivo Antibody Leads Validation.

    PubMed

    Kang, Homan; Jeong, Sinyoung; Jo, Ahla; Chang, Hyejin; Yang, Jin-Kyoung; Jeong, Cheolhwan; Kyeong, San; Lee, Youn Woo; Samanta, Animesh; Maiti, Kaustabh Kumar; Cha, Myeong Geun; Kim, Taek-Keun; Lee, Sukmook; Jun, Bong-Hyun; Chang, Young-Tae; Chung, Junho; Lee, Ho-Young; Jeong, Dae Hong; Lee, Yoon-Sik

    2018-02-01

    The immunotargeting ability of antibodies may differ significantly between in vitro and in vivo settings. To select antibody leads with high affinity and specificity, it is necessary to perform in vivo validation of antibody candidates following in vitro antibody screening. Herein, a robust in vivo validation of anti-tetraspanin-8 antibody candidates against human colon cancer using a ratiometric quantification method is reported. The validation is performed on a single mouse and analyzed by multiplexed surface-enhanced Raman scattering using ultrasensitive and near-infrared (NIR)-active surface-enhanced resonance Raman scattering nanoprobes (NIR-SERRS dots). The NIR-SERRS dots are composed of NIR-active labels and Au/Ag hollow-shell assembled silica nanospheres. Ninety-three percent of the NIR-SERRS dots are detectable at the single-particle level, and the signal intensity is 100-fold stronger than that from nonresonant molecule-labeled spherical Au NPs (80 nm). The result of the SERRS-based antibody validation is comparable to that of the conventional method using single-photon-emission computed tomography. The NIR-SERRS-based strategy is an alternative validation method that provides cost-effective and accurate multiplexed measurements for antibody-based drug development. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Elaboration and validation of the method for the quantification of the emetic toxin of Bacillus cereus as described in EN-ISO 18465 - Microbiology of the food chain - Quantitative determination of emetic toxin (cereulide) using LC-MS/MS.

    PubMed

    In 't Veld, P H; van der Laak, L F J; van Zon, M; Biesta-Peters, E G

    2018-04-12

    A method for the quantification of the Bacillus cereus emetic toxin (cereulide) was developed and validated. The method principle is based on LC-MS/MS, as this is the most sensitive and specific technique for cereulide; the study design therefore differs from that of the microbiological methods validated under this mandate. As the method had to be developed, a two-stage validation study approach was used. The first stage (pre-study) focused on the applicability of the method and the experience of the laboratories with it. Based on the outcome of the pre-study and comments received during voting at CEN and ISO level, a final method was agreed for use in the second stage, the final validation of the method. In the final validation study, samples of cooked rice (either artificially contaminated with cereulide or contaminated with B. cereus for production of cereulide in the rice) and six other food matrices (fried rice dish, cream pastry with chocolate, hotdog sausage, mini pancakes, vanilla custard and infant formula) were used. All these samples were spiked by the participating laboratories using standard solutions of cereulide supplied by the organising laboratory. The results of the study indicate that the method is fit for purpose. Repeatability values of 0.6 μg/kg were obtained at the low spike level (ca. 5 μg/kg) and 7 to 9.6 μg/kg at the high spike level (ca. 75 μg/kg). Reproducibility ranged from 0.6 to 0.9 μg/kg at the low spike level and from 8.7 to 14.5 μg/kg at the high spike level. Recovery from the spiked samples ranged from 96.5% for mini pancakes to 99.3% for the fried rice dish. Copyright © 2018. Published by Elsevier B.V.

  20. [Simultaneous quantitative analysis of five alkaloids in Sophora flavescens by multi-components assay by single marker].

    PubMed

    Chen, Jing; Wang, Shu-Mei; Meng, Jiang; Sun, Fei; Liang, Sheng-Wang

    2013-05-01

    To establish a new method for quality evaluation and to validate its feasibility through the simultaneous quantitative assay of five alkaloids in Sophora flavescens. The new quality evaluation method, quantitative analysis of multi-components by single marker (QAMS), was established and validated with S. flavescens. Five main alkaloids, oxymatrine, sophocarpine, matrine, oxysophocarpine and sophoridine, were selected as analytes to evaluate the quality of the rhizome of S. flavescens, and the relative correction factors showed good repeatability. Their contents in 21 batches of samples, collected from different areas, were determined by both the external standard method and QAMS. The method was evaluated by comparing the quantitative results obtained with the external standard method and with QAMS. No significant differences were found between the quantitative results for the five alkaloids in the 21 batches of S. flavescens determined by the two methods. It is feasible and suitable to evaluate the quality of the rhizome of S. flavescens by QAMS.

  1. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods for two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issues of adaptive finite element methods; the new methodology is validated by computing demonstration problems and comparing the resulting stress intensity factors with analytical results.

  2. Using Experiential Methods To Teach about Measurement Validity.

    ERIC Educational Resources Information Center

    Alderfer, Clayton P.

    2003-01-01

    Indirectly, instructor behavior provided two models for using a multitrait-multimethod matrix. Students who formulated their own concept, created empirical indicators, and assessed convergent and discriminant validity had better results than those who, influenced by classroom authority dynamics, followed a poorly formulated concept with a…

  3. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency and not from its beginning. PMID:24706821

  4. Method validation for simultaneous determination of chromium, molybdenum and selenium in infant formulas by ICP-OES and ICP-MS.

    PubMed

    Khan, Naeem; Jeong, In Seon; Hwang, In Min; Kim, Jae Sung; Choi, Sung Hwa; Nho, Eun Yeong; Choi, Ji Yeon; Kwak, Byung-Man; Ahn, Jang-Hyuk; Yoon, Taehyung; Kim, Kyong Su

    2013-12-15

    This study aimed to validate an analytical method for the simultaneous determination of chromium (Cr), molybdenum (Mo), and selenium (Se) in infant formulas available in South Korea. Various digestion methods (dry-ashing, wet-digestion and microwave) were evaluated for sample preparation, and both inductively coupled plasma optical emission spectrometry (ICP-OES) and inductively coupled plasma mass spectrometry (ICP-MS) were compared for analysis. The analytical techniques were validated by detection limits, precision, accuracy and recovery experiments. Results showed that the wet-digestion and microwave methods gave satisfactory results for sample preparation, while ICP-MS was found to be a more sensitive and effective technique than ICP-OES. The recoveries (%) of Se, Mo and Cr were 40.9, 109.4 and 0 by ICP-OES, compared with 99.1, 98.7 and 98.4, respectively, by ICP-MS. The contents of Cr, Mo and Se in infant formulas determined by ICP-MS were in good agreement with the CODEX nutrient standards for infant formulas. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. IMPLEMENTATION AND VALIDATION OF A FULLY IMPLICIT ACCUMULATOR MODEL IN RELAP-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zou, Ling; Zhang, Hongbin

    2016-01-01

    This paper presents the implementation and validation of an accumulator model in RELAP-7 under the framework of the preconditioned Jacobian-free Newton Krylov (JFNK) method, based on the similar model used in RELAP5. RELAP-7 is a new nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). RELAP-7 is a fully implicit system code. The JFNK and preconditioning methods used in RELAP-7 are briefly discussed. The slightly modified accumulator model is summarized for completeness. The implemented model was validated with the LOFT L3-1 test and benchmarked against RELAP5 results. RELAP-7 and RELAP5 had almost identical results for the accumulator gas pressure and water level, although there were some minor differences in other parameters such as accumulator gas temperature and tank wall temperature. One advantage of the JFNK method is the ease of maintaining and modifying models, owing to the full separation of numerical methods from physical models. It would be straightforward to extend the current RELAP-7 accumulator model to simulate the advanced accumulator design.

  6. Development and Validation of a Multiplexed Protein Quantitation Assay for the Determination of Three Recombinant Proteins in Soybean Tissues by Liquid Chromatography with Tandem Mass Spectrometry.

    PubMed

    Hill, Ryan C; Oman, Trent J; Shan, Guomin; Schafer, Barry; Eble, Julie; Chen, Cynthia

    2015-08-26

    Currently, traditional immunochemistry technologies such as enzyme-linked immunosorbent assays (ELISA) are the predominant analytical tool used to measure levels of recombinant proteins expressed in genetically engineered (GE) plants. Recent advances in agricultural biotechnology have created a need to develop methods capable of selectively detecting and quantifying multiple proteins in complex matrices because of increasing numbers of transgenic proteins being coexpressed or "stacked" to achieve tolerance to multiple herbicides or to provide multiple modes of action for insect control. A multiplexing analytical method utilizing liquid chromatography with tandem mass spectrometry (LC-MS/MS) has been developed and validated to quantify three herbicide-tolerant proteins in soybean tissues: aryloxyalkanoate dioxygenase (AAD-12), 5-enol-pyruvylshikimate-3-phosphate synthase (2mEPSPS), and phosphinothricin acetyltransferase (PAT). Results from the validation showed high recovery and precision over multiple analysts and laboratories. Results from this method were comparable to those obtained with ELISA with respect to protein quantitation, and the described method was demonstrated to be suitable for multiplex quantitation of transgenic proteins in GE crops.

  7. Optimization and single-laboratory validation of a method for the determination of flavonolignans in milk thistle seeds by high-performance liquid chromatography with ultraviolet detection.

    PubMed

    Mudge, Elizabeth; Paley, Lori; Schieber, Andreas; Brown, Paula N

    2015-10-01

    Seeds of milk thistle, Silybum marianum (L.) Gaertn., are used for the treatment and prevention of liver disorders and were identified as a high-priority ingredient requiring a validated analytical method. An AOAC International expert panel reviewed existing methods and made recommendations concerning method optimization prior to validation. A series of extraction and separation studies were undertaken on the selected method for determining flavonolignans in milk thistle seeds and finished products to address the review panel recommendations. Once optimized, a single-laboratory validation study was conducted. The method was assessed for repeatability, accuracy, selectivity, LOD, LOQ, analyte stability, and linearity. Flavonolignan content ranged from 1.40 to 52.86% in raw materials and dry finished products and from 36.16 to 1570.7 μg/mL in liquid tinctures. Repeatability for the individual flavonolignans in raw materials and finished products ranged from 1.03 to 9.88% RSDr, with HorRat values between 0.21 and 1.55. Calibration curves for all flavonolignan concentrations had correlation coefficients of >99.8%. The LODs for the flavonolignans ranged from 0.20 to 0.48 μg/mL at 288 nm. Based on the results of this single-laboratory validation, the method is suitable for the quantitation of the six major flavonolignans in milk thistle raw materials and finished products, as well as in multicomponent products containing dandelion, schizandra berry, and artichoke extracts. It is recommended that this method be adopted as a First Action Official Method by AOAC International.

  8. Reliability and Validity of Modified Service Quality Instrument (SERVQUAL) in Patients’ Motivation to Adhere to Insulin Therapy

    PubMed Central

    Jakupovic, Vedran; Solakovic, Suajb; Celebic, Nedim; Kulovic, Dzenan

    2018-01-01

    Introduction: Diabetes is a progressive condition which requires various modes of treatment. Adequate therapy prescribed at the right time helps patients postpone the development of complications. Adherence to complicated therapy is a challenge for both patients and HCPs and is a subject of research in many disciplines. Improvement in communication between HCPs and patients is very important for patients' adherence to therapy. Aim: The aim of this research was to explore the validity and reliability of a modified SERVQUAL instrument in an attempt to explore ways of motivating diabetic patients to accept prescribed insulin therapy. Material and Methods: We used a modified SERVQUAL questionnaire as the research instrument. It was necessary to check the validity and reliability of the newly modified instrument. Results: The results show that the modified SERVQUAL instrument has excellent reliability (α=0.908), so it can be said to measure patients' Expectations, Perceptions and Motivation precisely. Factor analysis (EFA) with Varimax rotation extracted 4 factors which together explain 52.902% of the variance of the results on this subscale. A bifactorial solution could be seen on the scree-plot diagram (break at the second factor). Conclusion: The results of this research show that the modified SERVQUAL instrument, created to measure the expectations and perceptions of patients, is valid and reliable. Reliability and validity were also demonstrated for the additional dimension created originally for this research: motivation to accept insulin therapy. PMID:29670478

  9. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melius, J.; Margolis, R.; Ong, S.

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  10. Item Development and Validity Testing for a Self- and Proxy Report: The Safe Driving Behavior Measure

    PubMed Central

    Classen, Sherrilene; Winter, Sandra M.; Velozo, Craig A.; Bédard, Michel; Lanford, Desiree N.; Brumback, Babette; Lutz, Barbara J.

    2010-01-01

    OBJECTIVE We report on item development and validity testing of a self-report older adult safe driving behaviors measure (SDBM). METHOD On the basis of theoretical frameworks (Precede–Proceed Model of Health Promotion, Haddon’s matrix, and Michon’s model), existing driving measures, and previous research and guided by measurement theory, we developed items capturing safe driving behavior. Item development was further informed by focus groups. We established face validity using peer reviewers and content validity using expert raters. RESULTS Peer review indicated acceptable face validity. Initial expert rater review yielded a scale content validity index (CVI) rating of 0.78, with 44 of 60 items rated ≥0.75. Sixteen unacceptable items (≤0.5) required major revision or deletion. The next CVI scale average was 0.84, indicating acceptable content validity. CONCLUSION The SDBM has relevance as a self-report to rate older drivers. Future pilot testing of the SDBM comparing results with on-road testing will define criterion validity. PMID:20437917

  11. DEVELOPMENT AND VALIDATION OF BROMATOMETRIC, DIAZOTIZATION AND VIS-SPECTROPHOTOMETRIC METHODS FOR THE DETERMINATION OF MESALAZINE IN PHARMACEUTICAL FORMULATION.

    PubMed

    Zawada, Elzabieta; Pirianowicz-Chaber, Elzabieta; Somogi, Aleksander; Pawinski, Tomasz

    2017-03-01

    Three new methods were developed for the quantitative determination of mesalazine in the form of the pure substance or in the form of suppositories and tablets: a bromatometric method, a diazotization method, and a visible-light spectrophotometric method. By optimizing the time and temperature of the bromination reaction (50 °C, 50 min), 4-amino-2,3,5,6-tetrabromophenol was obtained. The results obtained were reproducible, accurate and precise. The developed methods were compared with the pharmacopoeial approach, alkalimetry in an aqueous medium, and the validation parameters of all methods were comparable. The developed methods for the quantification of mesalazine are a viable alternative to other, more expensive approaches.

  12. Postcraniometric sex and ancestry estimation in South Africa: a validation study.

    PubMed

    Liebenberg, Leandi; Krüger, Gabriele C; L'Abbé, Ericka N; Stull, Kyra E

    2018-05-24

    With the acceptance of the Daubert criteria as the standards for best practice in forensic anthropological research, more emphasis is being placed on the validation of published methods. Methods, both traditional and novel, need to be validated, adjusted, and refined for optimal performance within forensic anthropological analyses. Recently, a custom postcranial database of modern South Africans was created for use in Fordisc 3.1. Classification accuracies of up to 85% for ancestry estimation and 98% for sex estimation were achieved using a multivariate approach. To measure the external validity and report more realistic performance statistics, an independent sample was tested. The postcrania from 180 black, white, and colored South Africans were measured and classified using the custom postcranial database. A decrease in accuracy was observed for both ancestry estimation (79%) and sex estimation (95%) of the validation sample. When incorporating both sex and ancestry simultaneously, the method achieved 70% accuracy, and 79% accuracy when sex-specific ancestry analyses were run. Classification matrices revealed that postcrania were more likely to misclassify as a result of ancestry rather than sex. While both sex and ancestry influence the size of an individual, sex differences are more marked in the postcranial skeleton and are therefore easier to identify. The external validity of the postcranial database was verified and therefore shown to be a useful tool for forensic casework in South Africa. While the classification rates were slightly lower than the original method, this is expected when a method is generalized.

  13. Validation of a multi-residue method for the determination of several antibiotic groups in honey by LC-MS/MS.

    PubMed

    Bohm, Detlef A; Stachel, Carolin S; Gowik, Petra

    2012-07-01

    The presented multi-method was developed for the confirmation of 37 antibiotic substances from six antibiotic groups: macrolides, lincosamides, quinolones, tetracyclines, pleuromutilins and diaminopyrimidine derivatives. All substances were analysed simultaneously in a single analytical run with the same procedure, including extraction with buffer, clean-up by solid-phase extraction, and measurement by liquid chromatography tandem mass spectrometry in ESI+ mode. The method was validated on the basis of an in-house validation concept with factorial design, combining seven factors to check robustness over a concentration range of 5-50 μg kg(-1). The honeys used were of different types with regard to colour and origin. The values calculated for the validation parameters were acceptable and in agreement with the criteria of Commission Decision 2002/657/EC: decision limit CCα, 7.5-12.9 μg kg(-1); detection capability CCβ, 9.4-19.9 μg kg(-1); within-laboratory reproducibility RSD(wR), <20% except for tulathromycin (23.5%) and tylvalosin (21.4%); repeatability RSD(r), <20% except for tylvalosin (21.1%); and recovery, 92-106%. The validation results showed that the method is applicable for the residue analysis of antibiotics in honey, for substances with and without recommended concentrations, although some changes were tested during validation to determine the robustness of the method.

  14. Cross-validation to select Bayesian hierarchical models in phylogenetics.

    PubMed

    Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C

    2016-05-26

    Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.
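
    Setting aside the phylogenetic machinery, the underlying idea of ranking candidate models by cross-validated predictive performance can be sketched generically (synthetic data, with scikit-learn regressors standing in for the molecular clock and demographic models):

    ```python
    # Model selection by cross-validated predictive performance:
    # the model with the better mean held-out score is preferred.
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression, Ridge
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=200, n_features=10,
                           noise=5.0, random_state=0)

    candidates = {"ols": LinearRegression(), "ridge": Ridge(alpha=1.0)}
    for name, model in candidates.items():
        # Mean score across 5 held-out folds approximates how well the
        # model predicts data it was not fitted on
        score = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: mean CV R^2 = {score:.3f}")
    ```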

  15. Robust d-wave pairing symmetry in multiorbital cobalt high-temperature superconductors

    NASA Astrophysics Data System (ADS)

    Li, Yinxiang; Han, Xinloong; Qin, Shengshan; Le, Congcong; Wang, Qiang-Hua; Hu, Jiangping

    2017-07-01

    The pairing symmetry of the cobalt high-temperature (high-Tc) superconductors formed by vertex-shared cation-anion tetrahedral complexes is studied by the methods of mean-field, random phase approximation (RPA), and functional renormalization-group (FRG) analyses. The results of all of these methods show that the dx²-y² pairing symmetry is robustly favored near half filling. The RPA and FRG methods, which are valid in weak-interaction regions, predict that the superconducting state is also strongly orbital selective, namely, the dx²-y² orbital that has the largest density near half filling among the three t2g orbitals dominates the superconducting pairing. These results suggest that these materials, if synthesized, can provide an indisputable test of the high-Tc pairing mechanism and the validity of different theoretical methods.

  16. Cleaning verification: A five parameter study of a Total Organic Carbon method development and validation for the cleaning assessment of residual detergents in manufacturing equipment.

    PubMed

    Li, Xue; Ahmad, Imad A Haidar; Tam, James; Wang, Yan; Dao, Gina; Blasko, Andrei

    2018-02-05

    A Total Organic Carbon (TOC) based analytical method to quantitate trace residues of the clean-in-place (CIP) detergents CIP100® and CIP200® on the surfaces of pharmaceutical manufacturing equipment was developed and validated. Five factors affecting the development and validation of the method were identified: diluent composition, diluent volume, extraction method, location for TOC sample preparation, and oxidant flow rate. Key experimental parameters were optimized to minimize contamination and to improve the sensitivity, recovery, and reliability of the method. The optimized concentration of phosphoric acid in the swabbing solution was 0.05 M, and the optimal volume of the sample solution was 30 mL. The swab extraction method was 1 min of sonication. The use of a clean room, as compared to an isolated lab environment, was not required for method validation. The method was demonstrated to be linear, with a correlation coefficient (R) of 0.9999. The average recoveries from stainless steel surfaces at multiple spike levels were >90%. The repeatability and intermediate precision results were ≤5% across the 2.2-6.6 ppm range (50-150% of the target maximum carryover, MACO, limit). The method was also shown to be sensitive, with a detection limit (DL) of 38 ppb and a quantitation limit (QL) of 114 ppb. The method validation demonstrated that the developed method is suitable for its intended use. The methodology developed in this study is generally applicable to the cleaning verification of any organic detergents used for the cleaning of pharmaceutical manufacturing equipment made of electropolished stainless steel. Copyright © 2017 Elsevier B.V. All rights reserved.
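
    The paper reports DL and QL values without the formulas; one common approach, and purely an assumption here, is the ICH-style calibration-based estimate DL = 3.3σ/S and QL = 10σ/S, where σ is the residual standard deviation and S the calibration slope:

    ```python
    # Calibration-based DL/QL estimates (ICH-style formulas; hypothetical
    # standards and responses, not the paper's data).
    import numpy as np

    conc = np.array([0.5, 1.0, 2.0, 4.0, 6.0])             # ppm standards
    resp = np.array([0.051, 0.103, 0.198, 0.402, 0.597])   # TOC responses

    slope, intercept = np.polyfit(conc, resp, 1)
    resid = resp - (slope * conc + intercept)
    sigma = resid.std(ddof=2)          # residual SD with n - 2 dof

    print(f"DL = {3.3 * sigma / slope * 1000:.0f} ppb")
    print(f"QL = {10 * sigma / slope * 1000:.0f} ppb")
    ```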

  17. Validation Study on a Rapid Method for Simultaneous Determination of Pesticide Residues in Vegetables and Fruits by LC-MS/MS.

    PubMed

    Sato, Tamaki; Miyamoto, Iori; Uemura, Masako; Nakatani, Tadashi; Kakutani, Naoya; Yamano, Tetsuo

    2016-01-01

    A validation study was carried out on a rapid method for the simultaneous determination of pesticide residues in vegetables and fruits by LC-MS/MS. The test solution was prepared by a solid-phase extraction technique with QuEChERS (STQ method). Pesticide residues were extracted with acetonitrile using a homogenizer, followed by simultaneous salting-out and dehydration. The acetonitrile layer was purified with C18 and PSA mini-columns. The method was assessed for 130 pesticide residues in 14 kinds of vegetables and fruits at a concentration level of 0.01 μg/g according to the method validation guideline of the Ministry of Health, Labour and Welfare of Japan. As a result, 75 to 120 of the pesticide residues were determined satisfactorily in the tested samples, depending on the sample. Thus, this method could be useful for the rapid, simultaneous determination of multi-class pesticide residues in various vegetables and fruits.

  18. Fractal Clustering and Knowledge-driven Validation Assessment for Gene Expression Profiling.

    PubMed

    Wang, Lu-Yong; Balasubramanian, Ammaiappan; Chakraborty, Amit; Comaniciu, Dorin

    2005-01-01

    DNA microarray experiments generate a substantial amount of information about global gene expression. Gene expression profiles can be represented as points in multi-dimensional space. It is essential to identify relevant groups of genes in biomedical research. Clustering is helpful for pattern recognition in gene expression profiles. A number of clustering techniques have been introduced. However, these traditional methods mainly utilize shape-based assumptions or some distance metric to cluster the points in multi-dimensional linear Euclidean space. Their results show poor consistency with the functional annotation of genes in previous validation studies. From a different perspective, we propose a fractal clustering method to cluster genes using the intrinsic (fractal) dimension from modern geometry. This method clusters points in such a way that points in the same cluster are more self-affine among themselves than with points in other clusters. We assess this method using annotation-based validation for gene clusters, and show that it is superior to other traditional methods in identifying functionally related gene groups.

  19. Assessing the generalizability of randomized trial results to target populations.

    PubMed

    Stuart, Elizabeth A; Bradshaw, Catherine P; Leaf, Philip J

    2015-04-01

    Recent years have seen increasing interest in and attention to evidence-based practices, where the "evidence" generally comes from well-conducted randomized trials. However, while those trials yield accurate estimates of the effect of the intervention for the participants in the trial (known as "internal validity"), they do not always yield relevant information about the effects in a particular target population (known as "external validity"). This may be due to a lack of specification of a target population when designing the trial, difficulties recruiting a sample that is representative of a prespecified target population, or to interest in considering a target population somewhat different from the population directly targeted by the trial. This paper first provides an overview of existing design and analysis methods for assessing and enhancing the ability of a randomized trial to estimate treatment effects in a target population. It then provides a case study using one particular method, which weights the subjects in a randomized trial to match the population on a set of observed characteristics. The case study uses data from a randomized trial of school-wide positive behavioral interventions and supports (PBIS); our interest is in generalizing the results to the state of Maryland. In the case of PBIS, after weighting, estimated effects in the target population were similar to those observed in the randomized trial. The paper illustrates that statistical methods can be used to assess and enhance the external validity of randomized trials, making the results more applicable to policy and clinical questions. However, there are also many open research questions; future research should focus on questions of treatment effect heterogeneity and further developing these methods for enhancing external validity. Researchers should think carefully about the external validity of randomized trials and be cautious about extrapolating results to specific populations unless they are confident of the similarity between the trial sample and that target population.
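
    The case-study method weights trial subjects to match the target population on observed characteristics; one common implementation (assumed here for illustration, not necessarily the authors' exact estimator) is inverse-odds-of-participation weighting, where a model of trial membership is fit on the combined data:

    ```python
    # Weight trial subjects so their covariate mix matches a target
    # population (synthetic covariates; variable names are hypothetical).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_trial = rng.normal(0.3, 1.0, size=(500, 3))   # trial sample covariates
    X_pop = rng.normal(0.0, 1.0, size=(5000, 3))    # target population covariates

    X = np.vstack([X_trial, X_pop])
    s = np.r_[np.ones(len(X_trial)), np.zeros(len(X_pop))]  # 1 = in trial

    # Probability of trial membership given covariates, then inverse odds
    p = LogisticRegression().fit(X, s).predict_proba(X_trial)[:, 1]
    weights = (1 - p) / p

    # Weighted outcome/effect estimates in the trial then target the
    # population rather than the trial sample
    print(weights[:5])
    ```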

  20. Developmental validation of a Nextera XT mitogenome Illumina MiSeq sequencing method for high-quality samples.

    PubMed

    Peck, Michelle A; Sturk-Andreaggi, Kimberly; Thomas, Jacqueline T; Oliver, Robert S; Barritt-Ross, Suzanne; Marshall, Charla

    2018-05-01

    Generating mitochondrial genome (mitogenome) data from reference samples in a rapid and efficient manner is critical to harnessing the greater power of discrimination of the entire mitochondrial DNA (mtDNA) marker. The method of long-range target enrichment, Nextera XT library preparation, and Illumina sequencing on the MiSeq is a well-established technique for generating mitogenome data from high-quality samples. To this end, a validation was conducted for this mitogenome method processing up to 24 samples simultaneously along with analysis in the CLC Genomics Workbench and utilizing the AQME (AFDIL-QIAGEN mtDNA Expert) tool to generate forensic profiles. This validation followed the Federal Bureau of Investigation's Quality Assurance Standards (QAS) for forensic DNA testing laboratories and the Scientific Working Group on DNA Analysis Methods (SWGDAM) validation guidelines. The evaluation of control DNA, non-probative samples, blank controls, mixtures, and nonhuman samples demonstrated the validity of this method. Specifically, the sensitivity was established at ≥25 pg of nuclear DNA input for accurate mitogenome profile generation. Unreproducible low-level variants were observed in samples with low amplicon yields. Further, variant quality was shown to be a useful metric for identifying sequencing error and crosstalk. Success of this method was demonstrated with a variety of reference sample substrates and extract types. These studies further demonstrate the advantages of using NGS techniques by highlighting the quantitative nature of heteroplasmy detection. The results presented herein from more than 175 samples processed in ten sequencing runs, show this mitogenome sequencing method and analysis strategy to be valid for the generation of reference data. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Development and Psychometric Evaluation of the HPV Clinical Trial Survey for Parents (CTSP‐HPV) Using Traditional Survey Development Methods and Community Engagement Principles

    PubMed Central

    Wallston, Kenneth A.; Wilkins, Consuelo H.; Hull, Pamela C.; Miller, Stephania T.

    2015-01-01

    Abstract Objective This study describes the development and psychometric evaluation of the HPV Clinical Trial Survey for Parents with Children Aged 9 to 15 (CTSP‐HPV) using traditional instrument development methods and community engagement principles. Methods An expert panel and parental input informed survey content, and parents recommended study design changes (e.g., flyer wording). A convenience sample of 256 parents completed the final survey measuring parental willingness to consent to HPV clinical trial (CT) participation and other factors hypothesized to influence willingness (e.g., HPV vaccine benefits). Cronbach's α, Spearman correlations, and multiple linear regression were used to estimate internal consistency, convergent and discriminant validity, and predictive validity, respectively. Results Internal reliability was confirmed for all scales (α ≥ 0.70). Parental willingness was positively associated (p < 0.05) with trust in medical researchers, adolescent CT knowledge, HPV vaccine benefits, and advantages of adolescent CTs (r range 0.33-0.42), supporting convergent validity. Moderate discriminant construct validity was also demonstrated. Regression results indicate reasonable predictive validity, with the six scales accounting for 31% of the variance in parents' willingness. Conclusions This instrument can inform interventions based on factors that influence parental willingness, which may lead to an eventual increase in trial participation. Further psychometric testing is warranted. PMID:26530324
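
    Cronbach's α, used here for internal consistency, is straightforward to compute from an items matrix; a minimal sketch with simulated responses:

    ```python
    # Cronbach's alpha for a scale (rows = respondents, columns = items);
    # the data below are simulated, not the survey's responses.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of scale totals
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(256, 1))                     # shared trait
    items = latent + rng.normal(scale=0.8, size=(256, 6))  # 6 correlated items
    print(f"alpha = {cronbach_alpha(items):.2f}")          # approaches 1 as
                                                           # items cohere
    ```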

  2. Towards a full integration of optimization and validation phases: An analytical-quality-by-design approach.

    PubMed

    Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph

    2015-05-22

    When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e., determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study, based on the LC-MS/MS determination of glucosamine and galactosamine in human plasma, was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally used. The results of all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, both qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space, rather than a single given condition, for the routine use of the method. This innovative strategy combines a learning process with a thorough assessment of the risk involved. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Cross validation issues in multiobjective clustering

    PubMed Central

    Brusco, Michael J.; Steinley, Douglas

    2018-01-01

    The implementation of multiobjective programming methods in combinatorial data analysis is an emergent area of study with a variety of pragmatic applications in the behavioural sciences. Most notably, multiobjective programming provides a tool for analysts to model trade-offs among competing criteria in clustering, seriation, and unidimensional scaling tasks. Although multiobjective programming has considerable promise, the technique can produce numerically appealing results that lack empirical validity. With this issue in mind, the purpose of this paper is to briefly review viable areas of application for multiobjective programming and, more importantly, to outline the importance of cross-validation when using this method in cluster analysis. PMID:19055857

  4. Internal Cluster Validation on Earthquake Data in the Province of Bengkulu

    NASA Astrophysics Data System (ADS)

    Rini, D. S.; Novianti, P.; Fransiska, H.

    2018-04-01

    The K-means method is an algorithm for clustering n objects, based on their attributes, into k partitions, where k < n. A deficiency of the algorithm is that the k initial points are chosen randomly before execution, so the resulting clustering can differ from run to run; if the random initialization is poor, the clustering is suboptimal. Cluster validation is a technique for determining the optimum number of clusters without prior information about the data. There are two types of cluster validation: internal cluster validation and external cluster validation. This study aims to examine and apply several internal cluster validation indices, including the Calinski-Harabasz (CH) index, Silhouette (S) index, Davies-Bouldin (DB) index, Dunn (D) index, and S-Dbw index, to earthquake data from Bengkulu Province. Based on internal cluster validation, the CH, S, and S-Dbw indices yield an optimum of k = 2, the DB index k = 6, and the D index k = 15. The optimum clustering (k = 6) based on the DB index gives good results for clustering earthquakes in Bengkulu Province.
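
    Three of the indices used in this study (Silhouette, Calinski-Harabasz, and Davies-Bouldin) are available in scikit-learn; the Dunn and S-Dbw indices are not and are omitted from this sketch, which runs on synthetic data rather than the earthquake catalog:

    ```python
    # Choosing k for k-means via internal cluster validation indices.
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import (silhouette_score,
                                 calinski_harabasz_score,
                                 davies_bouldin_score)

    X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

    for k in range(2, 8):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        print(k,
              f"S={silhouette_score(X, labels):.2f}",          # higher is better
              f"CH={calinski_harabasz_score(X, labels):.0f}",  # higher is better
              f"DB={davies_bouldin_score(X, labels):.2f}")     # lower is better
    ```

    Because the indices can disagree, as they did in this study, it is common to inspect several of them across a range of k before committing to one partition.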

  5. Validity in Qualitative Research: Application of Safeguards

    ERIC Educational Resources Information Center

    Daytner, Katrina M.

    2006-01-01

    The construct of validity has received considerable attention in qualitative methods literature (Denzin, 1989; Erickson, 1986; Geertz, 1973; Goetz & LeCompte, 1984; Howe & Eisenhart, 1990; Maxwell, 1992; Smith & Glass, 1987). Much of the attention has been focused upon the issue of whether qualitative results and interpretations accurately reflect…

  6. Validation of alternative methods for toxicity testing.

    PubMed Central

    Bruner, L H; Carr, G J; Curren, R D; Chamberlain, M

    1998-01-01

    Before nonanimal toxicity tests may be officially accepted by regulatory agencies, it is generally agreed that the validity of the new methods must be demonstrated in an independent, scientifically sound validation program. Validation has been defined as the demonstration of the reliability and relevance of a test method for a particular purpose. This paper provides a brief review of the development of the theoretical aspects of the validation process and updates current thinking about objectively testing the performance of an alternative method in a validation study. Validation of alternative methods for eye irritation testing is a specific example illustrating important concepts. Although discussion focuses on the validation of alternative methods intended to replace current in vivo toxicity tests, the procedures can be used to assess the performance of alternative methods intended for other uses. PMID:9599695

  7. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as the development and maintenance of expertise, the maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context that includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving changes in methods are discussed, and a decision process is proposed for selecting appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  8. A Tale of Two Methods: Chart and Interview Methods for Identifying Delirium

    PubMed Central

    Saczynski, Jane S.; Kosar, Cyrus M.; Xu, Guoquan; Puelle, Margaret R.; Schmitt, Eva; Jones, Richard N.; Marcantonio, Edward R.; Wong, Bonnie; Isaza, Ilean; Inouye, Sharon K.

    2014-01-01

    Background Interview-based and chart-based methods for identifying delirium have been validated. However, the relative strengths and limitations of each method have not been described, nor has a combined approach (using both interviews and the chart) been systematically examined. Objectives To compare chart- and interview-based methods for the identification of delirium. Design, Setting and Participants Participants were 300 patients aged 70+ undergoing major elective surgery (the majority orthopedic surgery) who were interviewed daily during hospitalization for delirium using the Confusion Assessment Method (CAM; interview-based method) and whose medical charts were reviewed for delirium using a validated chart-review method (chart-based method). We examined the rate of agreement between the two methods and the characteristics of the patients identified by each approach. Predictive validity for clinical outcomes (length of stay, postoperative complications, discharge disposition) was compared. In the absence of a gold standard, predictive value could not be calculated. Results The cumulative incidence of delirium was 23% (n=68) by the interview-based method, 12% (n=35) by the chart-based method, and 27% (n=82) by the combined approach. Overall agreement was 80%; kappa was 0.30. The methods differed in the detection of psychomotor features and time of onset. The chart-based method missed delirium in CAM-identified patients lacking features of psychomotor agitation or inappropriate behavior. The CAM-based method missed chart-identified cases occurring during the night shift. The combined method had high predictive validity for all clinical outcomes. Conclusions Interview- and chart-based methods have specific strengths for the identification of delirium. A combined approach captures the largest number and the broadest range of delirium cases. PMID:24512042

  9. Generalizing disease management program results: how to get from here to there.

    PubMed

    Linden, Ariel; Adams, John L; Roberts, Nancy

    2004-07-01

    For a disease management (DM) program, the ability to generalize results from the intervention group to the population, to other populations, or to other diseases is as important as demonstrating internal validity. This article provides an overview of the threats to external validity of DM programs, and offers methods to improve the capability for generalizing results obtained through the program. The external validity of DM programs must be evaluated even before a program is selected and implemented for a prospective new client. Any fundamental differences in characteristics between individuals in an established DM program and in a new population/environment may limit the ability to generalize.

  10. Quantitative analysis of γ-oryzanol content in cold pressed rice bran oil by TLC-image analysis method

    PubMed Central

    Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana

    2014-01-01

    Objective To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. Methods TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Results Both assays provided good linearity, accuracy, reproducibility and selectivity for determination of γ-oryzanol. Conclusions The TLC-densitometric and TLC-image analysis methods provided a similar reproducibility, accuracy and selectivity for the quantitative determination of γ-oryzanol in cold pressed rice bran oil. A statistical comparison of the quantitative determinations of γ-oryzanol in samples did not show any statistically significant difference between TLC-densitometric and TLC-image analysis methods. As both methods were found to be equal, they therefore can be used for the determination of γ-oryzanol in cold pressed rice bran oil. PMID:25182282

  11. Rainfall Observed Over Bangladesh 2000-2008: A Comparison of Spatial Interpolation Methods

    NASA Astrophysics Data System (ADS)

    Pervez, M.; Henebry, G. M.

    2010-12-01

    In preparation for a hydrometeorological study of freshwater resources in the greater Ganges-Brahmaputra region, we compared the results of four methods of spatial interpolation applied to point measurements of daily rainfall over Bangladesh during a seven-year period (2000-2008). Two univariate methods (inverse distance weighting and spline, in regularized and tension forms) and two multivariate geostatistical methods (ordinary kriging and kriging with external drift) were used to interpolate daily observations from a network of 221 rain gauges across Bangladesh, spanning an area of 143,000 sq km. Elevation and topographic index were used as the covariates in the geostatistical methods. The validity of the interpolated maps was analyzed through cross-validation. The quality of the methods was assessed through Pearson and Spearman correlations and root mean square error (RMSE) measurements of accuracy in cross-validation. Preliminary results indicated that the univariate methods performed better than the geostatistical methods at daily scales, likely due to the relatively dense point measurements and a weak correlation between rainfall and the covariates at daily scales in this region. Inverse distance weighting produced better results than the spline. For days with extreme or high rainfall—spatially and quantitatively—the correlation between observed and interpolated estimates was high (r² ≈ 0.6, RMSE ≈ 10 mm), although for low-rainfall days the correlations were poor (r² ≈ 0.1, RMSE ≈ 3 mm). The performance of these methods was influenced by the density of the sample point measurements, the quantity and spatial extent of the observed rainfall, and an appropriate search radius defining the neighboring points. Results indicated that interpolated rainfall estimates at daily scales may introduce uncertainties into the subsequent hydrometeorological analysis. Interpolations at 5-day, 10-day, 15-day, and monthly time scales are currently under investigation.
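
    As a rough illustration of the simplest of the methods compared above, the sketch below implements inverse distance weighted interpolation in Python with NumPy. The function name, power parameter, and search radius are illustrative assumptions, not the study's actual settings.

        import numpy as np

        def idw_interpolate(xy_obs, z_obs, xy_grid, power=2.0, radius=None):
            """Inverse distance weighted interpolation of point rainfall.

            xy_obs:  (n, 2) array of gauge coordinates; z_obs: (n,) observations;
            xy_grid: (m, 2) array of target locations. Returns (m,) estimates.
            """
            z_grid = np.empty(len(xy_grid))
            for i, p in enumerate(xy_grid):
                d = np.hypot(*(xy_obs - p).T)
                z = z_obs
                if radius is not None:        # restrict to neighboring gauges
                    keep = d <= radius
                    d, z = d[keep], z[keep]
                if np.any(d == 0):            # target coincides with a gauge
                    z_grid[i] = z[d == 0][0]
                    continue
                w = 1.0 / d ** power          # closer gauges weigh more
                z_grid[i] = np.sum(w * z) / np.sum(w)
            return z_grid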

  12. Assessment and risk classification protocol for patients in emergency units1

    PubMed Central

    Silva, Michele de Freitas Neves; Oliveira, Gabriela Novelli; Pergola-Marconato, Aline Maino; Marconato, Rafael Silva; Bargas, Eliete Boaventura; Araujo, Izilda Esmenia Muglia

    2014-01-01

    Objective to develop, validate the content and verify the reliability of a risk classification protocol for an Emergency Unit. Method the content validation was carried out at a University Hospital in an inland city of the state of Sao Paulo, in two stages: the first with the individual assessment of specialists and the second with a meeting between the researchers and the specialists. The use of the protocol followed a specific guide. Concerning reliability, the concordance (equivalence) method among observers was used. Results the protocol developed was shown to have content validity and, after the suggested changes were made, showed excellent results concerning reliability. Conclusion the assistance flow chart was shown to be easy to use, facilitating the search for the complaint in each assistance priority. PMID:26107828

  13. An enhanced multi-view vertical line locus matching algorithm of object space ground primitives based on positioning consistency for aerial and space images

    NASA Astrophysics Data System (ADS)

    Zhang, Ka; Sheng, Yehua; Wang, Meizhen; Fu, Suxia

    2018-05-01

    The traditional multi-view vertical line locus (TMVLL) matching method is an object-space-based method that is commonly used to directly acquire spatial 3D coordinates of ground objects in photogrammetry. However, the TMVLL method can only obtain one elevation and lacks an accurate means of validating the matching results. In this paper, we propose an enhanced multi-view vertical line locus (EMVLL) matching algorithm based on positioning consistency for aerial or space images. The algorithm involves three components: confirming candidate pixels of the ground primitive in the base image, multi-view image matching based on the object space constraints for all candidate pixels, and validating the consistency of the object space coordinates with the multi-view matching result. The proposed algorithm was tested using actual aerial images and space images. Experimental results show that the EMVLL method successfully solves the problems associated with the TMVLL method, and has greater reliability, accuracy and computing efficiency.
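
    As a rough illustration of the underlying vertical line locus idea (not the authors' EMVLL implementation), the sketch below scans candidate elevations along the vertical line, projects each trial ground point into every image through a caller-supplied collinearity projection function, and keeps the elevation with the best multi-view agreement. EMVLL's positioning-consistency check would additionally verify that the matched pixels back-project to the same object-space coordinates.

        import numpy as np

        def vll_match(project, images, xy, z_candidates, win=5):
            """Vertical line locus search over trial elevations Z at planimetric
            position xy = (X, Y); project(view_index, (X, Y, Z)) -> (row, col)."""
            def patch(img, rc):
                r, c = int(round(rc[0])), int(round(rc[1]))
                h = win // 2
                p = img[r - h: r + h + 1, c - h: c + h + 1].astype(float)
                return (p - p.mean()) / (p.std() + 1e-9)  # normalized window
            best_z, best_score = None, -np.inf
            for z in z_candidates:
                patches = [patch(img, project(i, (*xy, z)))
                           for i, img in enumerate(images)]
                # mean pairwise correlation across all views
                score = np.mean([np.mean(a * b) for k, a in enumerate(patches)
                                 for b in patches[k + 1:]])
                if score > best_score:
                    best_z, best_score = z, score
            return best_z, best_score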

  14. Validation of newly developed and redesigned key indicator methods for assessment of different working conditions with physical workloads based on mixed-methods design: a study protocol

    PubMed Central

    Liebers, Falk; Brandstädt, Felix; Schust, Marianne; Serafin, Patrick; Schäfer, Andreas; Gebhardt, Hansjürgen; Hartmann, Bernd; Steinberg, Ulf

    2017-01-01

    Introduction The impact of work-related musculoskeletal disorders is considerable. The assessment of work tasks with physical workloads is crucial to estimate the work-related health risks of exposed employees. Three key indicator methods (KIMs) are available for risk assessment regarding manual lifting, holding and carrying of loads; manual pulling and pushing of loads; and manual handling operations. Three further KIMs, for risk assessment regarding whole-body forces, awkward body postures and body movement, have been developed de novo. In addition, the development of a newly drafted combined method for mixed exposures is planned. All methods will be validated regarding face validity, reliability, convergent validity, criterion validity and further aspects of utility under practical conditions. Methods and analysis As part of the joint project MEGAPHYS (multilevel risk assessment of physical workloads), a mixed-methods study is being designed for the validation of KIMs and conducted in companies of different sizes and branches in Germany. Workplaces are documented and analysed by observations, applying KIMs, interviews and assessment of environmental conditions. Furthermore, a survey among the employees at the respective workplaces takes place with standardised questionnaires, interviews and physical examinations. It is intended to include 1200 employees at 120 different workplaces. For analysis of the quality criteria, recommendations of the COSMIN checklist (COnsensus-based Standards for the selection of health Measurement INstruments) will be taken into account. Ethics and dissemination The study was planned and conducted in accordance with the German Medical Professional Code and the Declaration of Helsinki as well as the German Federal Data Protection Act. The design of the study was approved by ethics committees. We intend to publish the validated KIMs in 2018. Results will be published in peer-reviewed journals, presented at international meetings and disseminated to actual users for practical application. PMID:28827239

  15. Comparing Thermal Process Validation Methods for Salmonella Inactivation on Almond Kernels.

    PubMed

    Jeong, Sanghyup; Marks, Bradley P; James, Michael K

    2017-01-01

    Ongoing regulatory changes are increasing the need for reliable process validation methods for pathogen reduction processes involving low-moisture products; however, the reliability of various validation methods has not been evaluated. Therefore, the objective was to quantify accuracy and repeatability of four validation methods (two biologically based and two based on time-temperature models) for thermal pasteurization of almonds. Almond kernels were inoculated with Salmonella Enteritidis phage type 30 or Enterococcus faecium (NRRL B-2354) at ~10⁸ CFU/g, equilibrated to 0.24, 0.45, 0.58, or 0.78 water activity (a_w), and then heated in a pilot-scale, moist-air impingement oven (dry bulb 121, 149, or 177°C; dew point <33.0, 69.4, 81.6, or 90.6°C; v_air = 2.7 m/s) to a target lethality of ~4 log. Almond surface temperatures were measured in two ways, and those temperatures were used to calculate Salmonella inactivation using a traditional (D, z) model and a modified model accounting for process humidity. Among the process validation methods, both methods based on time-temperature models had better repeatability, with replication errors approximately half those of the surrogate (E. faecium). Additionally, the modified model yielded the lowest root mean squared error in predicting Salmonella inactivation (1.1 to 1.5 log CFU/g); in contrast, E. faecium yielded a root mean squared error of 1.2 to 1.6 log CFU/g, and the traditional model yielded an unacceptably high error (3.4 to 4.4 log CFU/g). Importantly, the surrogate and modified model both yielded lethality predictions that were statistically equivalent (α = 0.05) to actual Salmonella lethality. The results demonstrate the importance of methodology, a_w, and process humidity when validating thermal pasteurization processes for low-moisture foods, which should help processors select and interpret validation methods to ensure product safety.
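
    As a sketch of how a time-temperature model converts a measured surface temperature history into predicted lethality, the fragment below integrates the traditional (D, z) model in Python; the reference D-value, reference temperature, and z-value are placeholder inputs, and the study's modified model would additionally adjust D for process humidity.

        import numpy as np

        def log_reduction(t_s, temp_c, d_ref_min, t_ref_c, z_c):
            """Predicted log reduction for a time-temperature history, using
            the traditional model D(T) = D_ref * 10**((T_ref - T) / z)."""
            temp_c = np.asarray(temp_c, dtype=float)
            d_sec = d_ref_min * 60.0 * 10.0 ** ((t_ref_c - temp_c) / z_c)
            # accumulated lethality = integral of 1/D(T(t)) dt
            return np.trapz(1.0 / d_sec, np.asarray(t_s, dtype=float))

        # e.g. a 10-minute ramp to 121°C with hypothetical D(121°C) = 0.5 min, z = 15°C
        print(log_reduction([0, 300, 600], [25, 100, 121], 0.5, 121.0, 15.0))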

  16. Optimization and validation of a method using UHPLC-fluorescence for the analysis of polycyclic aromatic hydrocarbons in cold-pressed vegetable oils.

    PubMed

    Silva, Simone Alves da; Sampaio, Geni Rodrigues; Torres, Elizabeth Aparecida Ferraz da Silva

    2017-04-15

    Among the different food categories, oils and fats are important sources of exposure to polycyclic aromatic hydrocarbons (PAHs), a group of organic chemical contaminants. The use of a validated method is essential to obtain reliable analytical results, since the legislation establishes maximum limits in different foods. The objective of this study was to optimize and validate a method for the quantification of four PAHs [benzo(a)anthracene, chrysene, benzo(b)fluoranthene, benzo(a)pyrene] in vegetable oils. The samples were submitted to liquid-liquid extraction, followed by solid-phase extraction, and analyzed by ultra-high performance liquid chromatography. Under the optimized conditions, the validation parameters were evaluated according to the INMETRO Guidelines: linearity (r² > 0.99), selectivity (no matrix interference), limits of detection (0.08-0.30 μg kg⁻¹) and quantification (0.25-1.00 μg kg⁻¹), recovery (80.13-100.04%), repeatability and intermediate precision (<10% RSD). The method was found to be adequate for routine analysis of PAHs in the vegetable oils evaluated. Copyright © 2016. Published by Elsevier Ltd.

  17. Does classroom-based Crew Resource Management training improve patient safety culture? A systematic review

    PubMed Central

    de Bruijne, Martine C; Zwijnenberg, Nicolien C; Jansma, Elise P; van Dyck, Cathy; Wagner, Cordula

    2014-01-01

    Aim: To evaluate the evidence on the effectiveness of classroom-based Crew Resource Management training on safety culture through a systematic review of the literature. Methods: Studies were identified in PubMed, Cochrane Library, PsycINFO, and Educational Resources Information Center up to 19 December 2012. The Methods Guide for Comparative Effectiveness Reviews was used to assess the risk of bias in the individual studies. Results: In total, 22 manuscripts were included for review. Training settings, study designs, and evaluation methods varied widely. Most studies reporting only a selection of culture dimensions found mainly positive results, whereas studies reporting all safety culture dimensions of the particular survey found mixed results. On average, studies were at moderate risk of bias. Conclusion: Evidence of the effectiveness of Crew Resource Management training in health care on safety culture is scarce, and the validity of most studies is limited. The results underline the necessity of more valid study designs, preferably using triangulation methods. PMID:26770720

  18. Statistical Calibration and Validation of a Homogeneous Ventilated Wall-Interference Correction Method for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2005-01-01

    Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less than or equal to 0.60. The scatter among the mean corrected results of the bodies of revolution validation cases was within one count of drag on a typical transport aircraft configuration for Mach numbers at or below 0.80 and two counts of drag for Mach numbers at or below 0.90.

  19. CCR+: Metadata Based Extended Personal Health Record Data Model Interoperable with the ASTM CCR Standard

    PubMed Central

    Park, Yu Rang; Yoon, Young Jo; Jang, Tae Hun; Seo, Hwa Jeong

    2014-01-01

    Objectives Extension of a standard model while retaining compliance with it is a challenging issue, because there is currently no method for semantically or syntactically verifying an extended data model. A metadata-based extended model, named CCR+, was designed and implemented to achieve interoperability between standard and extended models. Methods Furthermore, a multilayered validation method was devised to validate the standard and extended models. The American Society for Testing and Materials (ASTM) Continuity of Care Record (CCR) standard was selected to evaluate the CCR+ model; two CCR and one CCR+ XML files were evaluated. Results In total, 188 metadata items were extracted from the ASTM CCR standard; these metadata are semantically interconnected and registered in the metadata registry. An extended-data-model-specific validation file was generated from these metadata. This file can be used in a smartphone application (Health Avatar CCR+) as part of a multilayered validation. The new CCR+ model was successfully evaluated via a patient-centric exchange scenario involving multiple hospitals, with the results supporting both syntactic and semantic interoperability between the standard CCR and the extended CCR+ model. Conclusions A feasible method for delivering an extended model that complies with the standard model is presented herein. There is a great need to extend static standard models such as the ASTM CCR in various domains: the methods presented here represent an important reference for achieving interoperability between standard and extended models. PMID:24627817

  20. Validation of the Combined Comorbidity Index of Charlson and Elixhauser to Predict 30-Day Mortality Across ICD-9 and ICD-10.

    PubMed

    Simard, Marc; Sirois, Caroline; Candas, Bernard

    2018-05-01

    To validate and compare the performance of an International Classification of Diseases, tenth revision (ICD-10) version of a combined comorbidity index merging the conditions of the Charlson and Elixhauser measures against the individual measures in the prediction of 30-day mortality, and to select a weight derivation method providing optimal performance across ICD-9 and ICD-10 coding systems. Using 2 adult population-based cohorts of patients with hospital admissions coded in ICD-9 (2005, n=337,367) and ICD-10 (2011, n=348,820), we validated the combined comorbidity index by predicting 30-day mortality with logistic regression. To assess the performance of the Combined index against both individual measures, factors affecting index performance, such as population characteristics and weight derivation methods, were accounted for. We applied 3 scoring methods (Van Walraven, Schneeweiss, and Charlson) and determined which provides the best predictive values. The Combined index (c-statistic: 0.853; 95% confidence interval (CI), 0.848-0.856) performed better than the original Charlson (0.841; 95% CI, 0.835-0.844) or Elixhauser (0.841; 95% CI, 0.837-0.844) measures on the ICD-10 cohort. All weight derivation methods provided similarly high discrimination for the Combined index (Van Walraven: 0.852, Schneeweiss: 0.851, Charlson: 0.849). Results were consistent across both coding systems. The Combined index remains valid with both ICD-9 and ICD-10 coding systems, and the 3 weight derivation methods evaluated provided consistently high performance across those coding systems.
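
    To make the scoring step concrete, the sketch below computes a weighted comorbidity score from ICD-10 codes; the condition prefixes and integer weights are invented for illustration and are not the definitions or weights of the Combined, Charlson, or Elixhauser indices.

        # Hypothetical condition -> (ICD-10 prefixes, weight) map, for illustration only.
        CONDITIONS = {
            "heart_failure": (("I50",), 2),
            "diabetes": (("E10", "E11"), 1),
            "metastatic_cancer": (("C77", "C78", "C79"), 6),
        }

        def comorbidity_score(icd10_codes):
            """Sum the weight of each condition coded at least once."""
            score = 0
            for prefixes, weight in CONDITIONS.values():
                if any(code.startswith(prefixes) for code in icd10_codes):
                    score += weight
            return score

        # The score would then enter a logistic regression predicting 30-day
        # mortality, whose c-statistic measures discrimination.
        print(comorbidity_score(["I500", "E119", "J18"]))  # -> 3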

  1. Bibliometrics for Social Validation.

    PubMed

    Hicks, Daniel J

    2016-01-01

    This paper introduces a bibliometric, citation network-based method for assessing the social validation of novel research, and applies this method to the development of high-throughput toxicology research at the US Environmental Protection Agency. Social validation refers to the acceptance of novel research methods by a relevant scientific community; it is formally independent of the technical validation of methods, and is frequently studied in history, philosophy, and social studies of science using qualitative methods. The quantitative methods introduced here find that high-throughput toxicology methods are spread throughout a large and well-connected research community, which suggests high social validation. Further assessment of social validation involving mixed qualitative and quantitative methods are discussed in the conclusion.

  3. Quantification of Rifaximin in Tablets by Spectrophotometric Method Ecofriendly in Ultraviolet Region

    PubMed Central

    2016-01-01

    Rifaximin is an oral nonabsorbable antibiotic that acts locally in the gastrointestinal tract with minimal systemic adverse effects. No ecofriendly spectrophotometric method in the ultraviolet region is described for it in official compendia or the literature. The analytical techniques reported for the determination of rifaximin require a long time to release results and are significantly onerous. Furthermore, they use reagents that are toxic to both the operator and the environment and therefore cannot be considered environmentally friendly analytical techniques. The objective of this study was to develop and validate an ecofriendly spectrophotometric method in the ultraviolet region to quantify rifaximin in tablets. The method was validated, showing linearity, selectivity, precision, accuracy, and robustness. It was linear over the concentration range of 10–30 mg L⁻¹, with correlation coefficients greater than 0.9999 and limits of detection and quantification of 1.39 and 4.22 mg L⁻¹, respectively. The validated method is useful for the routine quality control of rifaximin: it is simple and inexpensive, fast in releasing results, makes efficient use of analysts and equipment, and uses environmentally friendly solvents, so it can be considered a green method that harms neither the operator nor the environment. PMID:27429835
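
    As an illustration of how linearity and detection limits of this kind are typically derived from a calibration line (ICH-style 3.3*s/S and 10*s/S estimates from the residual standard deviation s and slope S), consider the sketch below; the absorbance values are invented.

        import numpy as np

        conc = np.array([10.0, 15.0, 20.0, 25.0, 30.0])  # mg/L, hypothetical standards
        absb = np.array([0.21, 0.31, 0.42, 0.52, 0.63])  # absorbance, hypothetical

        slope, intercept = np.polyfit(conc, absb, 1)
        resid = absb - (slope * conc + intercept)
        s_res = np.sqrt((resid ** 2).sum() / (len(conc) - 2))  # residual std dev

        lod = 3.3 * s_res / slope   # limit of detection
        loq = 10.0 * s_res / slope  # limit of quantification
        r = np.corrcoef(conc, absb)[0, 1]
        print(f"r = {r:.5f}, LOD = {lod:.2f} mg/L, LOQ = {loq:.2f} mg/L")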

  4. Detrended fluctuation analysis for major depressive disorder.

    PubMed

    Mumtaz, Wajid; Malik, Aamir Saeed; Ali, Syed Saad Azhar; Yasin, Mohd Azhar Mohd; Amin, Hafeezullah

    2015-01-01

    The clinical utility of electroencephalography (EEG)-based diagnostic studies is less clear for major depressive disorder (MDD). In this paper, a novel machine learning (ML) scheme is presented to discriminate MDD patients from healthy controls. The proposed method involves feature extraction, feature selection, classification and validation. EEG data acquisition involved eyes-closed (EC) and eyes-open (EO) conditions. At the feature extraction stage, detrended fluctuation analysis (DFA) was performed on the EEG data to obtain scaling exponents; DFA analyzes the presence or absence of long-range temporal correlations (LRTC) in the recorded EEG data. The scaling exponents were used as input features to the proposed system. At the feature selection stage, 3 different techniques were used for comparison purposes. A logistic regression (LR) classifier was employed, and the method was validated by 10-fold cross-validation. We observed an effect of 3 different reference montages on the computed features: the DFA features performed better with LE data than with the IR and AR data, while under Wilcoxon ranking the AR montage performed better than LE and IR. Based on the results, it was concluded that DFA provides useful information for discriminating MDD patients and, with further validation, could be employed clinically in the diagnosis of MDD.
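
    DFA itself is a standard algorithm: integrate the mean-centred signal, detrend it within windows of increasing size, and read the scaling exponent off the slope of log fluctuation versus log window size. A minimal NumPy sketch follows; the window sizes and first-order (linear) detrending are illustrative choices, not necessarily those of the study.

        import numpy as np

        def dfa_exponent(x, scales=None):
            """Scaling exponent alpha from detrended fluctuation analysis;
            alpha ~ 0.5 for uncorrelated noise, alpha > 0.5 suggests LRTC."""
            x = np.asarray(x, dtype=float)
            y = np.cumsum(x - x.mean())  # integrated profile
            if scales is None:
                scales = np.unique(
                    np.logspace(1, np.log10(len(x) // 4), 20).astype(int))
            f = []
            for n in scales:
                segs = y[: (len(y) // n) * n].reshape(-1, n)  # non-overlapping windows
                t = np.arange(n)
                ms = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2)
                      for s in segs]  # detrended mean square per window
                f.append(np.sqrt(np.mean(ms)))  # fluctuation F(n)
            alpha, _ = np.polyfit(np.log(scales), np.log(f), 1)
            return alpha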

  5. Designing, optimization and validation of tetra-primer ARMS PCR protocol for genotyping mutations in caprine Fec genes

    PubMed Central

    Ahlawat, Sonika; Sharma, Rekha; Maitra, A.; Roy, Manoranjan; Tantia, M.S.

    2014-01-01

    New, quick, and inexpensive methods for genotyping novel caprine Fec gene polymorphisms through tetra-primer ARMS PCR were developed in the present investigation. Single nucleotide polymorphism (SNP) genotyping needs to be attempted to establish associations between the identified mutations and traits of economic importance. In the current study, we successfully genotyped three new SNPs identified in caprine fecundity genes, viz. T(-242)C (BMPR1B), G1189A (GDF9) and G735A (BMP15). The tetra-primer ARMS PCR protocol was optimized and validated for these SNPs, with short turnaround times and low costs. The optimized techniques were tested on 158 random samples of the Black Bengal goat breed. Samples with known genotypes for the described genes, previously tested in duplicate by sequencing, were employed for validation of the assay. Upon validation, complete concordance was observed between the tetra-primer ARMS PCR assays and the sequencing results. These results highlight the ability of tetra-primer ARMS PCR to genotype mutations in Fec genes. Any associated SNP could be used to accelerate the improvement of goat reproductive traits by identifying highly prolific animals at an early stage of life. Our results provide direct evidence that tetra-primer ARMS-PCR is a rapid, reliable, and cost-effective method for SNP genotyping of mutations in caprine Fec genes. PMID:25606428

  6. Ink dating using thermal desorption and gas chromatography/mass spectrometry: comparison of results obtained in two laboratories.

    PubMed

    Koenig, Agnès; Bügler, Jürgen; Kirsch, Dieter; Köhler, Fritz; Weyermann, Céline

    2015-01-01

    An ink dating method based on solvent analysis was recently developed using thermal desorption followed by gas chromatography/mass spectrometry (GC/MS) and is currently implemented in several forensic laboratories. The main aims of this work were to implement this method in a new laboratory and to evaluate whether results were comparable at three levels: (i) validation criteria, (ii) aging curves, and (iii) results interpretation. While the results were indeed comparable in terms of validation, the method proved to be very sensitive to instrument maintenance. Moreover, the aging curves were influenced by ink composition, as well as by storage conditions (particularly when the samples were not stored under "normal" room conditions). Finally, as current interpretation models showed limitations, an alternative model based on slope calculation was proposed. However, in the future, a probabilistic approach may represent a better solution for dealing with ink sample inhomogeneity. © 2014 American Academy of Forensic Science.

  7. Preliminary Structural Sensitivity Study of Hypersonic Inflatable Aerodynamic Decelerator Using Probabilistic Methods

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.

    2014-01-01

    Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology validation via flight testing. This paper explores the implementation of probabilistic methods in the sensitivity analysis of the structural response of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD). HIAD architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during re-entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. In the example presented here, the structural parameters of an existing HIAD model have been varied to illustrate the design approach utilizing uncertainty-based methods. Surrogate models have been used to reduce computational expense by several orders of magnitude. The suitability of the design is based on assessing variation in the resulting cone angle; the acceptable cone angle variation would rely on the aerodynamic requirements.

  8. A Deep Machine Learning Method for Classifying Cyclic Time Series of Biological Signals Using Time-Growing Neural Network.

    PubMed

    Gharehbaghi, Arash; Linden, Maria

    2017-10-12

    This paper presents a novel method for learning the cyclic contents of stochastic time series: the deep time-growing neural network (DTGNN). The DTGNN combines supervised and unsupervised methods at different levels of learning for enhanced performance. It is employed by a multiscale learning structure to classify cyclic time series (CTS), in which the dynamic contents of the time series are preserved in an efficient manner. This paper suggests a systematic procedure for finding the design parameters of the classification method for a one-versus-multiple class application. A novel validation method is also suggested for evaluating the structural risk, both quantitatively and qualitatively. The effect of the DTGNN on the performance of the classifier is statistically validated through repeated random subsampling using different sets of CTS from different medical applications. The validation involves four medical databases, comprising 108 recordings of the electroencephalogram signal, 90 recordings of the electromyogram signal, 130 recordings of the heart sound signal, and 50 recordings of the respiratory sound signal. Results of the statistical validations show that the DTGNN significantly improves the performance of the classification and also exhibits an optimal structural risk.

  9. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using a solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issues of the adaptive finite element method, validating the application of the new methodology to fracture mechanics by solving demonstration problems and comparing the computed stress intensity factors with analytical results.

  10. Assessing Primary Representational System (PRS) Preference for Neurolinguistic Programming (NLP) Using Three Methods.

    ERIC Educational Resources Information Center

    Dorn, Fred J.

    1983-01-01

    Considered three methods of identifying Primary Representational System (PRS)--an interview, a word list, and a self-report--in a study of 120 college students. Results suggested the three methods offer little to counselors either collectively or individually. Results did not validate the PRS construct, suggesting the need for further research.…

  11. Extension and Validation of a Hybrid Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 2

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.; Shivarama, Ravishankar

    2004-01-01

    The hybrid particle-finite element method of Fahrenthold and Horban, developed for the simulation of hypervelocity impact problems, has been extended to include new formulations of the particle-element kinematics, additional constitutive models, and an improved numerical implementation. The extended formulation has been validated in three dimensional simulations of published impact experiments. The test cases demonstrate good agreement with experiment, good parallel speedup, and numerical convergence of the simulation results.

  12. Gold-standard evaluation of a folksonomy-based ontology learning model

    NASA Astrophysics Data System (ADS)

    Djuana, E.

    2018-03-01

    Folksonomy, a product of the collaborative tagging process, has been acknowledged for its potential to improve the categorization and searching of web resources. However, folksonomy contains ambiguities such as synonymy and polysemy, as well as differences in abstraction or generality. To maximize its potential, methods for associating folksonomy tags with semantics and structural relationships have been proposed, such as ontology learning. This paper evaluates our previous work on ontology learning using the gold-standard evaluation approach, in comparison with a notable state-of-the-art work and several baselines. The results show that our method is comparable to the state-of-the-art work, further validating our approach, which had previously been validated using a task-based evaluation approach.
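
    Gold-standard evaluation typically reduces to comparing the learned relations against the reference ontology with set-based precision, recall, and F1. A minimal sketch under that assumption follows; the paper's exact matching criteria may differ.

        def precision_recall_f1(learned, gold):
            """Score learned taxonomic relations, given as (child, parent) pairs,
            against a gold-standard ontology."""
            learned, gold = set(learned), set(gold)
            tp = len(learned & gold)  # relations recovered exactly
            p = tp / len(learned) if learned else 0.0
            r = tp / len(gold) if gold else 0.0
            f1 = 2 * p * r / (p + r) if p + r else 0.0
            return p, r, f1

        print(precision_recall_f1(
            learned=[("jaguar", "cat"), ("jaguar", "car")],
            gold=[("jaguar", "cat"), ("tabby", "cat")]))  # -> (0.5, 0.5, 0.5)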

  13. Methodology, Methods, and Metrics for Testing and Evaluating Augmented Cognition Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greitzer, Frank L.

    The augmented cognition research community seeks cognitive neuroscience-based solutions to improve warfighter performance by applying and managing mitigation strategies to reduce workload and improve the throughput and quality of decisions. The focus of augmented cognition mitigation research is to define, demonstrate, and exploit neuroscience and behavioral measures that support inferences about the warfighter's cognitive state that prescribe the nature and timing of mitigation. A research challenge is to develop valid evaluation methodologies, metrics and measures to assess the impact of augmented cognition mitigations. Two considerations are external validity, which is the extent to which the results apply to operational contexts; and internal validity, which reflects the reliability of performance measures and the conclusions based on analysis of results. The scientific rigor of the research methodology employed in conducting empirical investigations largely affects the validity of the findings. External validity requirements also compel us to demonstrate operational significance of mitigations. Thus it is important to demonstrate effectiveness of mitigations under specific conditions. This chapter reviews some cognitive science and methodological considerations in designing augmented cognition research studies and associated human performance metrics and analysis methods to assess the impact of augmented cognition mitigations.

  14. Validation of the ULCEAT methodology by applying it in retrospect to the Roboticbed.

    PubMed

    Nakamura, Mio; Suzurikawa, Jun; Tsukada, Shohei; Kume, Yohei; Kawakami, Hideo; Inoue, Kaoru; Inoue, Takenobu

    2015-01-01

    In answer to the increasing demand for care by the oldest segment of the Japanese population, an extensive programme of life support robots is under development, advocated by the Japanese government. Roboticbed® (RB) was developed to help patients make independent transfers to and from the bed in their daily lives. The bed is intended both for elderly persons and for persons with a disability. The purpose of this study is to examine the validity of the user and user's life centred clinical evaluation of assistive technology (ULCEAT) methodology, which was developed to support user-centred development of life support robots. By means of the ULCEAT method, the target users and the use environment were re-established in an earlier study. The validity of the method is tested by re-evaluating the development of RB in retrospect. Six participants used the first prototype of RB (RB1) and eight participants used the second prototype of RB (RB2). The results indicated that the functionality was improved owing to the end-user evaluations; we therefore confirmed the content validity of the proposed ULCEAT method. In summary, we confirmed the validity of the ULCEAT methodology by applying it in retrospect to the RB development process. This method will be used for the development of life support robots and prototype assistive technologies.

  15. Reliability and Validity of 10 Different Standard Setting Procedures.

    ERIC Educational Resources Information Center

    Halpin, Glennelle; Halpin, Gerald

    Research indicating that different cut-off points result from the use of different standard-setting techniques leaves decision makers with a disturbing dilemma: Which standard-setting method is best? This investigation of the reliability and validity of 10 different standard-setting approaches was designed to provide information that might help…

  16. The Predictive Validity of Dynamic Assessment: A Review

    ERIC Educational Resources Information Center

    Caffrey, Erin; Fuchs, Douglas; Fuchs, Lynn S.

    2008-01-01

    The authors report on a mixed-methods review of 24 studies that explores the predictive validity of dynamic assessment (DA). For 15 of the studies, they conducted quantitative analyses using Pearson's correlation coefficients. They descriptively examined the remaining studies to determine if their results were consistent with findings from the…

  17. 3D inversion of full gravity gradient tensor data in spherical coordinate system using local north-oriented frame

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Wu, Yulong; Yan, Jianguo; Wang, Haoran; Rodriguez, J. Alexis P.; Qiu, Yue

    2018-04-01

    In this paper, we propose an inverse method for full gravity gradient tensor data in the spherical coordinate system. As opposed to traditional gravity inversion in the Cartesian coordinate system, our proposed method takes the curvature of the Earth, the Moon, or other planets into account, using tesseroid bodies to produce gravity gradient effects in forward modeling. We used both synthetic and observed datasets to test the stability and validity of the proposed method. Our results using synthetic gravity data show that the new method predicts the depth of the anomalous density body efficiently and accurately. Applied to observed gravity data for the Mare Smythii area on the Moon, the method recovers a crustal density distribution that reveals the geological structure of the area. These results validate the proposed method and its potential application to large-area data inversion of planetary geological structures.

  18. Validation of the analytical methods in the LWR code BOXER for gadolinium-loaded fuel pins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Arkuszewski, J.J.; Kamboj, B.K.

    1990-01-01

    Due to the very high absorption occurring in gadolinium-loaded fuel pins, calculations of lattices with such pins present are a demanding test of the analysis methods in light water reactor (LWR) cell and assembly codes. Considerable effort has, therefore, been devoted to the validation of code methods for gadolinia fuel. The goal of the work reported in this paper is to check the analysis methods in the LWR cell/assembly code BOXER and its associated cross-section processing code ETOBOX, by comparison of BOXER results with those from a very accurate Monte Carlo calculation for a gadolinium benchmark problem. Initial results of such a comparison have been previously reported. However, the Monte Carlo calculations, done with the MCNP code, were performed at Los Alamos National Laboratory using ENDF/B-V data, while the BOXER calculations were performed at the Paul Scherrer Institute using JEF-1 nuclear data. This difference in the basic nuclear data used for the two calculations, caused by the restricted nature of these evaluated data files, led to associated uncertainties in a comparison of the results for methods validation. In the joint investigations at the Georgia Institute of Technology and PSI, such uncertainty in this comparison was eliminated by using ENDF/B-V data for the BOXER calculations at Georgia Tech.

  19. A rapid and sensitive LC-MS/MS method for the determination of Pulsatilla saponin D in rat plasma and its application in a rat pharmacokinetic and bioavailability study.

    PubMed

    Ouyang, Hui; Guo, Yicheng; He, Mingzhen; Zhang, Jinlian; Huang, Xiaofang; Zhou, Xin; Jiang, Hongliang; Feng, Yulin; Yang, Shilin

    2015-03-01

    A simple, sensitive and specific liquid chromatography-tandem mass spectrometry method was developed and validated for the determination of Pulsatilla saponin D, a potential antitumor constituent isolated from Pulsatilla chinensis, in rat plasma. Rat plasma samples were pretreated by protein precipitation with methanol. The method validation was performed in accordance with US Food and Drug Administration guidelines and the results met the acceptance criteria. The method was successfully applied to assess the pharmacokinetics and oral bioavailability of Pulsatilla saponin D in rats. Copyright © 2014 John Wiley & Sons, Ltd.

  20. Study on evaluation methods for Rayleigh wave dispersion characteristic

    USGS Publications Warehouse

    Shi, L.; Tao, X.; Kayen, R.; Shi, H.; Yan, S.

    2005-01-01

    The evaluation of Rayleigh wave dispersion characteristics is the key step in detecting S-wave velocity structure. By comparing dispersion curves directly with those from the spectral analysis of surface waves (SASW) method, rather than comparing S-wave velocity structures, the validity and precision of the microtremor-array method (MAM) can be evaluated more objectively. The results from the China-US joint surface wave investigation at 26 sites in Tangshan, China, show that the MAM has the same precision as the SASW method at 83% of the 26 sites. The MAM is valid for Rayleigh wave dispersion characteristic testing and has great application potential for detecting site S-wave velocity structure.

  1. Fall Risk Assessment Through Automatic Combination of Clinical Fall Risk Factors and Body-Worn Sensor Data.

    PubMed

    Greene, Barry R; Redmond, Stephen J; Caulfield, Brian

    2017-05-01

    Falls are the leading global cause of accidental death and disability in older adults and are the most common cause of injury and hospitalization. Accurate, early identification of patients at risk of falling could lead to timely intervention and a reduction in the incidence of fall-related injury and associated costs. We report a statistical method for fall risk assessment using standard clinical fall risk factors (N = 748). We also report a means of improving this method by automatically combining it with a fall risk assessment algorithm based on inertial sensor data and the timed-up-and-go test. Furthermore, we provide validation data on the sensor-based fall risk assessment method using a statistically independent dataset. Results obtained using cross-validation on a sample of 292 community-dwelling older adults suggest that a combined clinical and sensor-based approach yields a classification accuracy of 76.0%, compared to either 73.6% for sensor-based assessment alone, or 68.8% for clinical risk factors alone. Increasing the cohort size by adding an additional 130 subjects from a separate recruitment wave (N = 422), and applying the same model building and validation method, resulted in a decrease in classification performance (68.5% for the combined classifier, 66.8% for sensor data alone, and 58.5% for clinical data alone). This suggests that heterogeneity between cohorts may be a major challenge when attempting to develop fall risk assessment algorithms which generalize well. Independent validation of the sensor-based fall risk assessment algorithm on an independent cohort of 22 community-dwelling older adults yielded a classification accuracy of 72.7%. Results suggest that the present method compares well to previously reported sensor-based fall risk assessment methods. Implementation of objective fall risk assessment methods on a large scale has the potential to improve quality of care and lead to a reduction in associated hospital costs, due to fewer admissions and reduced injuries due to falling.
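
    The combination step can be pictured as fitting one classifier on the concatenated clinical and sensor features and comparing cross-validated accuracy across feature sets. The scikit-learn sketch below uses random placeholder data and logistic regression; the authors' actual statistical combination method is not detailed here.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X_clin = rng.normal(size=(292, 6))   # clinical risk factors (placeholder)
        X_sens = rng.normal(size=(292, 12))  # TUG inertial-sensor features (placeholder)
        y = rng.integers(0, 2, size=292)     # faller / non-faller labels (placeholder)

        model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        for name, X in [("clinical", X_clin), ("sensor", X_sens),
                        ("combined", np.hstack([X_clin, X_sens]))]:
            acc = cross_val_score(model, X, y, cv=5).mean()
            print(f"{name:9s} cross-validated accuracy: {acc:.3f}")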

  2. Alternative methods to evaluate trial level surrogacy.

    PubMed

    Abrahantes, Josè Cortiñas; Shkedy, Ziv; Molenberghs, Geert

    2008-01-01

    The evaluation and validation of surrogate endpoints have been extensively studied in the last decade. Prentice [1] and Freedman, Graubard and Schatzkin [2] laid the foundations for the evaluation of surrogate endpoints in randomized clinical trials. Later, Buyse et al. [5] proposed a meta-analytic methodology, producing different methods for different settings, which was further studied by Alonso and Molenberghs [9] in their unifying approach based on information theory. In this article, we focus our attention on trial-level surrogacy and propose alternative procedures to evaluate this surrogacy measure that do not pre-specify the type of association. A promising correction based on cross-validation is investigated, as well as the construction of confidence intervals for this measure. To avoid making assumptions about the type of relationship between the treatment effects and its distribution, a collection of alternative methods, based on regression trees, bagging, random forests, and support vector machines, combined with bootstrap-based confidence intervals and, should one wish, a cross-validation based correction, is proposed and applied. We apply the various strategies to data from three clinical studies: in ophthalmology, in advanced colorectal cancer, and in schizophrenia. The results obtained for the three case studies are compared; they indicate that using random forest or bagging models produces larger estimated values for the surrogacy measure, which are in general more stable, with narrower confidence intervals, than linear regression and support vector regression. For the advanced colorectal cancer studies, we even found that the trial-level surrogacy is considerably different from what has been reported. In general, the alternative methods are more computationally demanding; in particular, the calculation of the confidence intervals requires more computational time than the delta-method counterpart. First, more flexible modeling techniques can be used, allowing for other types of association. Second, when no cross-validation-based correction is applied, overly optimistic trial-level surrogacy estimates will be found; thus cross-validation is highly recommendable. Third, the use of the delta method to calculate confidence intervals is not recommendable, since it makes assumptions valid only in very large samples and may produce range-violating limits. We therefore recommend alternatives, bootstrap methods in general. Also, the information-theoretic approach produces results comparable with the bagging and random forest approaches when the cross-validation correction is applied. It is also important to observe that, even for cases in which a linear model might be a good option, bagging methods perform well, and their confidence intervals are narrower.
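
    A minimal sketch of the flexible recipe advocated above: estimate trial-level surrogacy as a cross-validated R² from a random forest regressing per-trial treatment effects on the true endpoint against effects on the surrogate, then bootstrap trials for a confidence interval. The data and settings are placeholders, not the case-study values.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(1)
        alpha = rng.normal(size=(50, 1))  # per-trial effects on the surrogate
        beta = 0.8 * alpha[:, 0] + rng.normal(scale=0.5, size=50)  # effects on true endpoint

        def cv_r2(a, b):
            rf = RandomForestRegressor(n_estimators=100, random_state=0)
            pred = cross_val_predict(rf, a, b, cv=5)  # cross-validation correction
            return 1 - ((b - pred) ** 2).sum() / ((b - b.mean()) ** 2).sum()

        boot = []
        for _ in range(100):  # resample trials with replacement
            idx = rng.integers(0, len(beta), len(beta))
            boot.append(cv_r2(alpha[idx], beta[idx]))
        print(round(cv_r2(alpha, beta), 2), np.percentile(boot, [2.5, 97.5]).round(2))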

  3. On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.

    PubMed

    Westgate, Philip M; Burchett, Woodrow W

    2017-03-15

    The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Investigation of Super Learner Methodology on HIV-1 Small Sample: Application on Jaguar Trial Data.

    PubMed

    Houssaïni, Allal; Assoumou, Lambert; Marcelin, Anne Geneviève; Molina, Jean Michel; Calvez, Vincent; Flandre, Philippe

    2012-01-01

    Background. Many statistical models have been tested to predict phenotypic or virological response from genotypic data. A statistical framework called Super Learner has been introduced either to compare different methods/learners (discrete Super Learner) or to combine them in a Super Learner prediction method. Methods. The Jaguar trial is used to apply the Super Learner framework. The Jaguar study is an "add-on" trial comparing the efficacy of adding didanosine to an ongoing failing regimen. Our aim was also to investigate the impact of using different cross-validation strategies and different loss functions. Four different splits between training and validation sets were tested with two loss functions. Six statistical methods were compared. We assess performance by evaluating R² values and accuracy by calculating the rates of patients being correctly classified. Results. Our results indicated that the more recent Super Learner methodology of building a new predictor based on a weighted combination of different methods/learners provided good performance. A simple linear model provided similar results to those of this new predictor. Slight discrepancies arose between the two loss functions investigated, and also between results based on cross-validated risks and results from the full dataset. The Super Learner methodology and the linear model correctly classified around 80% of patients, with a difference between the lowest and highest rates of about 10 percentage points. The number of mutations retained by the different learners also varied from one to 41. Conclusions. The more recent Super Learner methodology, combining the predictions of many learners, provided good performance on our small dataset.
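
    The core of the Super Learner recipe is short enough to sketch: build a level-one matrix of cross-validated predictions from each candidate learner, then either keep the learner with the lowest cross-validated risk (discrete Super Learner) or combine all learners with non-negative weights minimizing that risk. The sketch below uses placeholder data and generic scikit-learn learners, not the six methods of the study.

        import numpy as np
        from scipy.optimize import nnls
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_predict
        from sklearn.svm import SVR

        rng = np.random.default_rng(2)
        X = rng.integers(0, 2, size=(80, 20)).astype(float)  # mutation indicators (placeholder)
        y = 1.5 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=80)

        learners = [LinearRegression(), RandomForestRegressor(random_state=0), SVR()]
        # Level-one matrix: cross-validated predictions from each candidate learner
        Z = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in learners])

        cv_risk = ((Z - y[:, None]) ** 2).mean(axis=0)  # squared-error loss
        print("discrete Super Learner picks learner", int(np.argmin(cv_risk)))

        w, _ = nnls(Z, y)  # non-negative weights minimizing CV risk
        w /= w.sum()
        print("Super Learner combination weights:", w.round(3))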

  5. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    PubMed

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
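
    Of the rational division methods named above, Kennard-Stone is the easiest to sketch: seed the training set with the two most distant points, then repeatedly add the candidate whose distance to its nearest selected point is largest. A NumPy/SciPy sketch under those standard definitions:

        import numpy as np
        from scipy.spatial.distance import cdist

        def kennard_stone(X, n_train):
            """Return indices of a training set chosen by the Kennard-Stone
            algorithm; the remaining indices form the test set."""
            d = cdist(X, X)
            sel = list(np.unravel_index(np.argmax(d), d.shape))  # two farthest points
            while len(sel) < n_train:
                rest = [i for i in range(len(X)) if i not in sel]
                nearest = d[np.ix_(rest, sel)].min(axis=1)  # gap to selected set
                sel.append(rest[int(np.argmax(nearest))])   # most isolated candidate
            return sel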

  6. Validation of the SCEC broadband platform V14.3 simulation methods using pseudo spectral acceleration data

    USGS Publications Warehouse

    Dreger, Douglas S.; Beroza, Gregory C.; Day, Steven M.; Goulet, Christine A.; Jordan, Thomas H; Spudich, Paul A.; Stewart, Jonathan P.

    2015-01-01

    This paper summarizes the evaluation of ground motion simulation methods implemented on the SCEC Broadband Platform (BBP), version 14.3 (as of March 2014). A seven-member panel, the authorship of this article, was formed to evaluate those methods for the prediction of pseudo-­‐spectral accelerations (PSAs) of ground motion. The panel’s mandate was to evaluate the methods using tools developed through the validation exercise (Goulet et al. ,2014), and to define validation metrics for the assessment of the methods’ performance. This paper summarizes the evaluation process and conclusions from the panel. The five broadband, finite-source simulation methods on the BBP include two deterministic approaches herein referred to as CSM (Anderson, 2014) and UCSB (Crempien and Archuleta, 2014); a band-­‐limited stochastic white noise method called EXSIM (Atkinson and Assatourians, 2014); and two hybrid approaches, referred to as G&P (Graves and Pitarka, 2014) and SDSU (Olsen and Takedatsu, 2014), which utilize a deterministic Green’s function approach for periods longer than 1 second and stochastic methods for periods shorter than 1 second. Two acceptance tests were defined to validate the broadband finite‐source ground methods (Goulet et al., 2014). Part A compared observed and simulated PSAs for periods from 0.01 to 10 seconds for 12 moderate to large earthquakes located in California, Japan, and the eastern US. Part B compared the median simulated PSAs to published NGA-­‐West1 (Abrahamson and Silva, 2008; Boore and Atkinson, 2008; Campbell and Bozorgnia, 2008; and Chiou and Youngs, 2008) ground motion prediction equations (GMPEs) for specific magnitude and distance cases using a pass-­‐fail criteria based on a defined acceptable range around the spectral shape of the GMPEs. For the initial Part A and Part B validation exercises during the summer of 2013, the software for the five methods was locked in at version 13.6 (see Maechling et al., 2014). In the spring of 2014, additional moderate events were considered for the Part A validation, and additional magnitude and distance cases were considered for the Part B validation, for the software locked in at version 14.3. Several of the simulation procedures, specifically UCSB and SDSU, changed significantly between versions 13.6 and 14.3. The CSM code was not submitted in time for the v14.3 evaluation and its detailed performance is not addressed in this paper. As described in Goulet et al. (2014) and Maechling et al. (2014), the BBP generates a variety of products, including three-­‐component acceleration time series. A series of post-­‐processing codes were developed to provide individual component PSAs and average median horizontal-­‐component PSA (referred to as RotD50; Boore, 2010) for oscillator periods ranging from 0.01 to 10 seconds, as well as median PSA values computed using the NGA-­‐West 1 GMPEs. The BBP was also configured to provide statistical analysis of simulation results relative to recordings (Part A) and GMPEs (Part B) as described further in sections below. As part of our evaluation, we reviewed documentation provided by each of the developers, which included the technical basis behind the methods and the developer’s self-­‐assessments regarding the extrapolation capabilities (in terms of magnitude and distance ranges) of their methods. Two workshops were held in which methods and results were presented, and the panel was given the opportunity to question the developers and to have detailed technical discussions. 
A SCEC report (Dreger et al., 2013) describes the results of this review for BBP version 13.6. This paper summarizes that work and presents results for the more recent BBP 14.3 validation.
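
    The RotD50 measure referenced above can be made concrete with a short sketch: rotate the two horizontal components through all non-redundant azimuths, compute the peak oscillator response at each angle, and take the median (Boore, 2010). The Python fragment below is an illustration only, not the BBP post-processing code; the simple time stepper and the synthetic input records are assumptions made for brevity.

        # Illustrative RotD50 at a single oscillator period (not the BBP code).
        import numpy as np

        def sdof_psa(acc, dt, period, damping=0.05):
            """Peak pseudo-spectral acceleration of a damped SDOF oscillator."""
            wn = 2 * np.pi / period
            u = v = peak = 0.0
            for ag in acc:                       # semi-implicit Euler stepping
                a = -ag - 2 * damping * wn * v - wn**2 * u
                v += a * dt
                u += v * dt
                peak = max(peak, abs(u))
            return wn**2 * peak                  # PSA = wn^2 * max|u|

        def rotd50(a1, a2, dt, period):
            psas = []
            for theta in np.radians(np.arange(180.0)):   # non-redundant rotations
                rot = a1 * np.cos(theta) + a2 * np.sin(theta)
                psas.append(sdof_psa(rot, dt, period))
            return np.median(psas)               # 50th percentile over angles

        # Demo with synthetic records standing in for simulated output:
        rng = np.random.default_rng(0)
        a1, a2, dt = rng.normal(size=2000), rng.normal(size=2000), 0.01
        print(rotd50(a1, a2, dt, period=1.0))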

  7. 78 FR 56718 - Draft Guidance for Industry on Bioanalytical Method Validation; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ...] Draft Guidance for Industry on Bioanalytical Method Validation; Availability AGENCY: Food and Drug... availability of a draft guidance for industry entitled ``Bioanalytical Method Validation.'' The draft guidance is intended to provide recommendations regarding analytical method development and validation for the...

  8. Validity of self-reported mechanical demands for occupational epidemiologic research of musculoskeletal disorders

    PubMed Central

    Barrero, Lope H; Katz, Jeffrey N; Dennerlein, Jack T

    2012-01-01

    Objectives To describe the relation of the measured validity of self-reported mechanical demands (self-reports) with the quality of validity assessments and the variability of the assessed exposure in the study population. Methods We searched for original articles, published between 1990 and 2008, reporting the validity of self-reports in three major databases: EBSCOhost, Web of Science, and PubMed. Identified assessments were classified by methodological characteristics (eg, type of self-report and reference method) and by the exposure dimension measured. We also classified assessments by the degree of comparability between the self-report and the employed reference method, and the variability of the assessed exposure in the study population. Finally, we examined the association of the published validity (r) with this degree of comparability, as well as with the variability of the exposure variable in the study population. Results Of the 490 assessments identified, 75% used observation-based reference measures and 55% tested self-reports of posture duration and movement frequency. Frequently, validity studies did not report demographic information (eg, education, age, and gender distribution). Among assessments reporting correlations as a measure of validity, studies with a better match between the self-report and the reference method, and studies conducted in more heterogeneous populations tended to report higher correlations [odds ratio (OR) 2.03, 95% confidence interval (95% CI) 0.89–4.65 and OR 1.60, 95% CI 0.96–2.61, respectively]. Conclusions The reported data support the hypothesis that validity depends on study-specific factors often not examined. Experimentally manipulating the testing setting could lead to a better understanding of the capabilities and limitations of self-reported information. PMID:19562235

  9. Development and validation of a method for mercury determination in seawater for the process control of a candidate certified reference material.

    PubMed

    Sánchez, Raquel; Snell, James; Held, Andrea; Emons, Hendrik

    2015-08-01

    A simple, robust and reliable method for mercury determination in seawater matrices based on the combination of cold vapour generation and inductively coupled plasma mass spectrometry (CV-ICP-MS) and its complete in-house validation are described. The method validation covers parameters such as linearity, limit of detection (LOD), limit of quantification (LOQ), trueness, repeatability, intermediate precision and robustness. A calibration curve covering the whole working range was achieved with coefficients of determination typically higher than 0.9992. The repeatability of the method (RSDrep) was 0.5 %, and the intermediate precision was 2.3 % at the target mass fraction of 20 ng/kg. Moreover, the method was robust with respect to the salinity of the seawater. The limit of quantification was 2.7 ng/kg, which corresponds to 13.5 % of the target mass fraction in the future certified reference material (20 ng/kg). An uncertainty budget for the measurement of mercury in seawater has been established. The relative expanded (k = 2) combined uncertainty is 6 %. The performance of the validated method was demonstrated by generating results for process control and a homogeneity study for the production of a candidate certified reference material.
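
    Two of the validation figures quoted above follow from standard formulas: detection limits from the scatter of blank replicates and the calibration slope, and an expanded uncertainty with coverage factor k = 2. Below is a hedged Python sketch with invented numbers, not the paper's data; the relative uncertainty components are placeholders.

        import numpy as np

        blanks = np.array([0.9, 1.1, 1.0, 1.2, 0.8, 1.0])  # blank signals (counts), invented
        slope = 0.5                                          # counts per (ng/kg), assumed

        sd_blank = blanks.std(ddof=1)
        lod = 3 * sd_blank / slope            # common 3s criterion
        loq = 10 * sd_blank / slope           # common 10s criterion

        # Combine independent relative components (e.g., calibration,
        # repeatability, recovery), then expand with k = 2:
        u_rel = np.sqrt(0.02**2 + 0.015**2 + 0.01**2)
        U_rel = 2 * u_rel
        print(f"LOD = {lod:.2f} ng/kg, LOQ = {loq:.2f} ng/kg, U(k=2) = {100*U_rel:.1f} %")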

  10. External Standards or Standard Addition? Selecting and Validating a Method of Standardization

    NASA Astrophysics Data System (ADS)

    Harvey, David T.

    2002-05-01

    A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and obtain a practical experience in the difference between performing an external standardization and a standard addition.
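
    The contrast the experiment teaches can be shown in a few lines: an external calibration interpolates the sample signal on matrix-free standards, while standard addition spikes the sample itself and extrapolates to zero signal so that matrix effects cancel. A Python sketch with invented data:

        import numpy as np

        # External standardization: signal vs. standard concentration (ppm)
        c_std = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
        s_std = np.array([0.02, 1.98, 4.05, 5.96, 8.01])
        m, b = np.polyfit(c_std, s_std, 1)
        s_sample = 3.10
        print("external standards:", (s_sample - b) / m, "ppm")

        # Standard addition: signal vs. concentration ADDED to sample aliquots;
        # the x-intercept magnitude b2/m2 is the sample concentration.
        c_add = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
        s_add = np.array([2.51, 3.32, 4.14, 4.93, 5.76])
        m2, b2 = np.polyfit(c_add, s_add, 1)
        print("standard addition:", b2 / m2, "ppm")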

  11. Validation of quantitative and qualitative methods for detecting allergenic ingredients in processed foods in Japan.

    PubMed

    Sakai, Shinobu; Adachi, Reiko; Akiyama, Hiroshi; Teshima, Reiko

    2013-06-19

    A labeling system for food allergenic ingredients was established in Japan in April 2002. To monitor the labeling, the Japanese government announced official methods for detecting allergens in processed foods in November 2002. The official methods consist of quantitative screening tests using enzyme-linked immunosorbent assays (ELISAs) and qualitative confirmation tests using Western blotting or polymerase chain reactions (PCR). In addition, the Japanese government designated 10 μg protein/g food (the corresponding allergenic ingredient soluble protein weight/food weight), determined by ELISA, as the labeling threshold. To standardize the official methods, the criteria for the validation protocol were described in the official guidelines. This paper, which was presented at the Advances in Food Allergen Detection Symposium, ACS National Meeting and Expo, San Diego, CA, Spring 2012, describes the validation protocol outlined in the official Japanese guidelines, the results of interlaboratory studies for the quantitative detection method (ELISA for crustacean proteins) and the qualitative detection method (PCR for shrimp and crab DNAs), and the reliability of the detection methods.

  12. DNS of Flows over Periodic Hills using a Discontinuous-Galerkin Spectral-Element Method

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo T.; Murman, Scott M.

    2014-01-01

    Direct numerical simulation (DNS) of turbulent compressible flows is performed using a higher-order space-time discontinuous-Galerkin finite-element method. The numerical scheme is validated by performing DNS of the evolution of the Taylor-Green vortex and turbulent flow in a channel. The higher-order method is shown to provide increased accuracy relative to low-order methods at a given number of degrees of freedom. The turbulent flow over a periodic array of hills in a channel is simulated at Reynolds number 10,595 using an 8th-order scheme in space and a 4th-order scheme in time. These results are validated against previous large eddy simulation (LES) results. A preliminary analysis provides insight into how these detailed simulations can be used to improve Reynolds-averaged Navier-Stokes (RANS) modeling.

  13. Ultrasound-Assisted Extraction of Stilbenes from Grape Canes.

    PubMed

    Piñeiro, Zulema; Marrufo-Curtido, Almudena; Serrano, Maria Jose; Palma, Miguel

    2016-06-16

    An analytical ultrasound-assisted extraction (UAE) method has been optimized and validated for the rapid extraction of stilbenes from grape canes. The influence of sample pre-treatment (oven or freeze-drying) and several extraction variables (solvent, sample-solvent ratio and extraction time, among others) on the extraction process was analyzed. The new method allowed the main stilbenes in grape canes to be extracted in just 10 min, with an extraction temperature of 75 °C and 60% ethanol in water as the extraction solvent. Validation of the extraction method was based on analytical properties. The resulting RSDs (n = 5) for interday/intraday precision were less than 10%. Furthermore, the method was successfully applied in the analysis of 20 different grape cane samples. The results showed that grape cane byproducts are potential sources of bioactive compounds of interest for the pharmaceutical and food industries.

  14. Method of Analysis by the U.S. Geological Survey California District Sacramento Laboratory: Determination of Trihalomethane Formation Potential, Method Validation, and Quality-Control Practices

    USGS Publications Warehouse

    Crepeau, Kathryn L.; Fram, Miranda S.; Bush, Noel

    2004-01-01

    An analytical method for the determination of the trihalomethane formation potential of water samples has been developed. The trihalomethane formation potential is measured by dosing samples with chlorine under specified conditions of pH, temperature, incubation time, darkness, and residual-free chlorine, and then analyzing the resulting trihalomethanes by purge and trap/gas chromatography equipped with an electron capture detector. Detailed explanations of the method and quality-control practices are provided. Method validation experiments showed that the trihalomethane formation potential varies as a function of time between sample collection and analysis, residual-free chlorine concentration, method of sample dilution, and the concentration of bromide in the sample.

  15. Validity of Willingness to Pay Measures under Preference Uncertainty.

    PubMed

    Braun, Carola; Rehdanz, Katrin; Schmidt, Ulrich

    2016-01-01

    Recent studies in the marketing literature developed a new method for eliciting willingness to pay (WTP) with an open-ended elicitation format: the Range-WTP method. In contrast to the traditional approach of eliciting WTP as a single value (Point-WTP), Range-WTP explicitly allows for preference uncertainty in responses. The aim of this paper is to apply Range-WTP to the domain of contingent valuation and to test for its theoretical validity and robustness in comparison to the Point-WTP. Using data from two novel large-scale surveys on the perception of solar radiation management (SRM), a little-known technique for counteracting climate change, we compare the performance of both methods in the field. In addition to the theoretical validity (i.e. the degree to which WTP values are consistent with theoretical expectations), we analyse the test-retest reliability and stability of our results over time. Our evidence suggests that the Range-WTP method clearly outperforms the Point-WTP method.

  16. Multimethod Investigation of Interpersonal Functioning in Borderline Personality Disorder

    PubMed Central

    Stepp, Stephanie D.; Hallquist, Michael N.; Morse, Jennifer Q.; Pilkonis, Paul A.

    2011-01-01

    Even though interpersonal functioning is of great clinical importance for patients with borderline personality disorder (BPD), the comparative validity of different assessment methods for interpersonal dysfunction has not yet been tested. This study examined multiple methods of assessing interpersonal functioning, including self- and other-reports, clinical ratings, electronic diaries, and social cognitions in three groups of psychiatric patients (N=138): patients with (1) BPD, (2) another personality disorder, and (3) Axis I psychopathology only. Using dominance analysis, we examined the predictive validity of each method in detecting changes in symptom distress and social functioning six months later. Across multiple methods, the BPD group often reported higher interpersonal dysfunction scores compared to other groups. Predictive validity results demonstrated that self-report and electronic diary ratings were the most important predictors of distress and social functioning. Our findings suggest that self-report scores and electronic diary ratings have high clinical utility, as these methods appear most sensitive to change. PMID:21808661
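
    Dominance analysis, as used above, scores each predictor by its average increment to R^2 across all subset models containing the other predictors. A toy Python sketch (numpy only, invented data, not the study's variables):

        import numpy as np
        from itertools import combinations

        def r2(X, y):
            """R^2 of an OLS fit with intercept; X may have zero columns."""
            X1 = np.column_stack([np.ones(len(y)), X]) if X.size else np.ones((len(y), 1))
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            resid = y - X1 @ beta
            return 1 - resid.var() / y.var()

        rng = np.random.default_rng(1)
        n, p = 200, 3
        X = rng.normal(size=(n, p))
        y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)   # predictor 2 is noise

        for j in range(p):
            others = [k for k in range(p) if k != j]
            gains = []
            for r in range(p):
                for S in combinations(others, r):
                    base = r2(X[:, list(S)], y)
                    gains.append(r2(X[:, list(S) + [j]], y) - base)
            print(f"predictor {j}: mean R^2 gain = {np.mean(gains):.3f}")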

  17. Optimization and validation of a minicolumn method for determining aflatoxins in copra meal.

    PubMed

    Arim, R H; Aguinaldo, A R; Tanaka, T; Yoshizawa, T

    1999-01-01

    A minicolumn (MC) method for determining aflatoxins in copra meal was optimized and validated. The method uses methanol-4% KCl solution as extractant and CuSO4 solution as clarifying agent. The chloroform extract is applied to an MC that incorporates "lahar," an indigenous material, as substitute for silica gel. The "lahar"-containing MC produces a more distinct and intense blue fluorescence on the Florisil layer than an earlier MC. The method has a detection limit of 15 micrograms total aflatoxins/kg sample. Confirmatory tests using 50% H2SO4 and trifluoroacetic acid in benzene with 25% HNO3 showed that copra meal samples contained aflatoxins and no interfering agents. The MC responses of the copra meal samples were in good agreement with their behavior in thin-layer chromatography. This modified MC method is accurate, giving valid results within its linear range; rapid, being done in 15 min; economical, using low-volume reagents; relatively safe, having low-exposure risk of analysts to chemicals; and simple, making its field application feasible.

  18. Validity of Willingness to Pay Measures under Preference Uncertainty

    PubMed Central

    Braun, Carola; Rehdanz, Katrin; Schmidt, Ulrich

    2016-01-01

    Recent studies in the marketing literature developed a new method for eliciting willingness to pay (WTP) with an open-ended elicitation format: the Range-WTP method. In contrast to the traditional approach of eliciting WTP as a single value (Point-WTP), Range-WTP explicitly allows for preference uncertainty in responses. The aim of this paper is to apply Range-WTP to the domain of contingent valuation and to test for its theoretical validity and robustness in comparison to the Point-WTP. Using data from two novel large-scale surveys on the perception of solar radiation management (SRM), a little-known technique for counteracting climate change, we compare the performance of both methods in the field. In addition to the theoretical validity (i.e. the degree to which WTP values are consistent with theoretical expectations), we analyse the test-retest reliability and stability of our results over time. Our evidence suggests that the Range-WTP method clearly outperforms the Point-WTP method. PMID:27096163

  19. Development and Validation of an Extractive Spectrophotometric Method for Miconazole Nitrate Assay in Pharmaceutical Formulations.

    PubMed

    Eticha, Tadele; Kahsay, Getu; Hailu, Teklebrhan; Gebretsadikan, Tesfamichael; Asefa, Fitsum; Gebretsadik, Hailekiros; Thangabalan, Boovizhikannan

    2018-01-01

    A simple extractive spectrophotometric technique has been developed and validated for the determination of miconazole nitrate in pure and pharmaceutical formulations. The method is based on the formation of a chloroform-soluble ion-pair complex between the drug and bromocresol green (BCG) dye in an acidic medium. The complex showed absorption maxima at 422 nm, and the system obeys Beer's law in the concentration range of 1-30 µg/mL with a molar absorptivity of 2.285 × 10^4 L/mol/cm. The composition of the complex was studied by Job's method of continuous variation, and the results revealed that the mole ratio of drug:BCG is 1:1. Full factorial design was used to optimize the effect of variable factors, and the method was validated based on the ICH guidelines. The method was applied for the determination of miconazole nitrate in real samples.
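
    The reported molar absorptivity can be sanity-checked against the Beer-Lambert law, A = εcl. The sketch below assumes a 1 cm path length and an approximate molar mass of 479 g/mol for miconazole nitrate; both values are assumptions, not from the abstract.

        eps = 2.285e4        # molar absorptivity, L/mol/cm (from the abstract)
        path = 1.0           # cuvette path length in cm, assumed
        mw = 479.1           # g/mol for miconazole nitrate, approximate

        for c_ug_ml in (1.0, 15.0, 30.0):          # the reported linear range
            c_mol_l = c_ug_ml * 1e-3 / mw          # µg/mL -> g/L -> mol/L
            A = eps * c_mol_l * path               # Beer-Lambert: A = eps * c * l
            print(f"{c_ug_ml:5.1f} µg/mL -> A = {A:.3f}")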

  20. Validity, efficacy and reliability of 3 nutritional screening tools regarding the nutritional assessment in different social and health areas.

    PubMed

    Castro-Vega, Iciar; Veses Martín, Silvia; Cantero Llorca, Juana; Barrios Marta, Cristina; Bañuls, Celia; Hernández-Mijares, Antonio

    2018-03-09

    Nutritional screening allows for the detection of nutritional risk. Validated tools should be implemented, and their usefulness should be contrasted with a gold standard. The aim of this study is to determine the validity, efficacy and reliability of 3 nutritional screening tools in relation to complete nutritional assessment. This was a sub-analysis of a cross-sectional, descriptive study on the prevalence of disease-related malnutrition. The sample was selected from outpatients, hospitalized and institutionalized patients. MUST, MNAsf and MST screening were employed. A nutritional assessment of all the patients was undertaken. The SENPE-SEDOM consensus was used for the diagnosis. In the outpatients, both MUST and MNAsf have a similar validity in relation to the nutritional assessment (AUC 0.871 and 0.883, respectively). In the institutionalized patients, the MUST screening method is the one that shows the greatest validity (AUC 0.815), whereas in the hospitalized patients, the most valid methods are both MUST and MST (AUC 0.868 and 0.853, respectively). It is essential to use nutritional screening to invest the available resources wisely. Based on our results, MUST is the most suitable screening method in hospitalized and institutionalized patients. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
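
    The AUC values reported above are c-statistics of the screening result against the gold-standard assessment; the rank (Mann-Whitney) form is easy to sketch in Python. The scores and diagnoses below are invented.

        import numpy as np

        def auc(scores, labels):
            scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
            pos, neg = scores[labels], scores[~labels]
            # AUC = P(score_pos > score_neg) + 0.5 * P(tie)
            greater = (pos[:, None] > neg[None, :]).mean()
            ties = (pos[:, None] == neg[None, :]).mean()
            return greater + 0.5 * ties

        screen = [2, 0, 1, 3, 0, 2, 1, 0, 3, 1]          # screening-tool scores
        malnourished = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]    # gold-standard diagnosis
        print(f"AUC = {auc(screen, malnourished):.3f}")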

  1. Validation of a Russian Language Oswestry Disability Index Questionnaire.

    PubMed

    Yu, Elizabeth M; Nosova, Emily V; Falkenstein, Yuri; Prasad, Priya; Leasure, Jeremi M; Kondrashov, Dimitriy G

    2016-11-01

    Study Design Retrospective reliability and validity study. Objective To validate a recently translated Russian language version of the Oswestry Disability Index (R-ODI) using standardized methods detailed in previous validations in other languages. Methods We included all subjects who were seen in our spine surgery clinic, over the age of 18, and fluent in the Russian language. R-ODI was translated by six bilingual people and combined into a consensus version. R-ODI and visual analog scale (VAS) questionnaires for leg and back pain were distributed to subjects during both their initial and follow-up visits. Test validity, stability, and internal consistency were measured using standardized psychometric methods. Results Ninety-seven subjects participated in the study. No change in the meaning of the questions on R-ODI was noted with translation from English to Russian. There was a significant positive correlation between R-ODI and VAS scores for both the leg and back during both the initial and follow-up visits (p < 0.01 for all). The instrument was shown to have high internal consistency (Cronbach α = 0.82) and moderate test-retest stability (intraclass correlation coefficient = 0.70). Conclusions The R-ODI is both valid and reliable for use among the Russian-speaking population in the United States.
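
    The internal-consistency figure quoted above (Cronbach α = 0.82) comes from the usual items-by-respondents formula. A Python sketch with a simulated response matrix (the data are invented, not the study's):

        import numpy as np

        def cronbach_alpha(items):          # rows = respondents, cols = items
            items = np.asarray(items, float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(2)
        trait = rng.normal(size=(100, 1))                     # latent disability level
        responses = trait + 0.7 * rng.normal(size=(100, 10))  # 10 correlated items
        print(f"alpha = {cronbach_alpha(responses):.2f}")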

  2. State of the art in the validation of screening methods for the control of antibiotic residues: is there a need for further development?

    PubMed

    Gaudin, Valérie

    2017-09-01

    Screening methods are used as a first-line approach to detect the presence of antibiotic residues in food of animal origin. The validation process guarantees that the method is fit-for-purpose, suited to regulatory requirements, and provides evidence of its performance. This article is focused on intra-laboratory validation. The first step in validation is characterisation of performance, and the second step is the validation itself with regard to pre-established criteria. The validation approaches can be absolute (a single method) or relative (comparison of methods), overall (combination of several characteristics in one) or criterion-by-criterion. Various approaches to validation, in the form of regulations, guidelines or standards, are presented and discussed to draw conclusions on their potential application for different residue screening methods, and to determine whether or not they reach the same conclusions. The approach by comparison of methods is not suitable for screening methods for antibiotic residues. The overall approaches, such as probability of detection (POD) and accuracy profile, are increasingly used in other fields of application. They may be of interest for screening methods for antibiotic residues. Finally, the criterion-by-criterion approach (Decision 2002/657/EC and the European guideline for the validation of screening methods), usually applied to screening methods for antibiotic residues, introduced a major characteristic and an improvement in the validation, i.e. the detection capability (CCβ). In conclusion, screening methods are constantly evolving, thanks to the development of new biosensors or liquid chromatography coupled to tandem-mass spectrometry (LC-MS/MS) methods. There have been clear changes in validation approaches over the last 20 years. Continued progress is required, and perspectives for future development of guidelines, regulations and standards for validation are presented here.

  3. Validation of Gujarati Version of ABILOCO-Kids Questionnaire

    PubMed Central

    Diwan, Jasmin; Patel, Pankaj; Bansal, Ankita B.

    2015-01-01

    Background ABILOCO-Kids is a measure of locomotion ability for children with cerebral palsy (CP) aged 6 to 15 years & is available in English & French. Aim To validate the Gujarati version of the ABILOCO-Kids questionnaire for use in clinical research on the Gujarati population. Materials and Methods The ABILOCO-Kids questionnaire was translated from English into Gujarati using the forward-backward-forward method. To ensure face & content validity of the Gujarati version using the group consensus method, each item was examined by a group of experts with mean experience of 24.62 years in the fields of paediatrics and paediatric physiotherapy. Each item was analysed for content, meaning, wording, format, ease of administration & scoring. Each item was scored by the expert group as accepted, rejected or accepted with modification. The procedure was continued until 80% consensus was reached for all items. Concurrent validity was examined on 55 children with Cerebral Palsy (6-15 years) of all Gross Motor Function Classification System (GMFCS) levels & all clinical types by correlating the score of ABILOCO-Kids with the Gross Motor Function Measure & GMFCS. Results In phase 1 of validation, 16 items were accepted as is; 22 items were accepted with modification & 3 items went to phase 2 validation. For concurrent validity, a highly significant positive correlation was found between the score of ABILOCO-Kids & total GMFM (r=0.713, p<0.005) & a highly significant negative correlation with GMFCS (r= -0.778, p<0.005). Conclusion The Gujarati version of the ABILOCO-Kids questionnaire has good face & content validity as well as concurrent validity, and can be used to measure caregiver-reported locomotion ability in children with CP. PMID:26557603

  4. Assessment of the Validity of the Research Diagnostic Criteria for Temporomandibular Disorders: Overview and Methodology

    PubMed Central

    Schiffman, Eric L.; Truelove, Edmond L.; Ohrbach, Richard; Anderson, Gary C.; John, Mike T.; List, Thomas; Look, John O.

    2011-01-01

    AIMS The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. An overview is presented, including Axis I and II methodology and descriptive statistics for the study participant sample. This paper details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. Validity testing for the Axis II biobehavioral instruments was based on previously validated reference standards. METHODS The Axis I reference standards were based on the consensus of 2 criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion exam reliability was also assessed within study sites. RESULTS Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion exam agreement with reference standards was excellent (k ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). CONCLUSION The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods. PMID:20213028
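
    The agreement statistics above are Cohen's kappas; the computation for two raters is short enough to sketch in Python. The diagnoses below are invented labels, not Validation Project data.

        import numpy as np

        def cohens_kappa(r1, r2):
            r1, r2 = np.asarray(r1), np.asarray(r2)
            cats = np.union1d(r1, r2)
            po = (r1 == r2).mean()                                       # observed agreement
            pe = sum((r1 == c).mean() * (r2 == c).mean() for c in cats)  # chance agreement
            return (po - pe) / (1 - pe)

        examiner_a = ["TMD", "TMD", "none", "OA", "TMD", "none", "OA", "TMD"]
        examiner_b = ["TMD", "none", "none", "OA", "TMD", "none", "TMD", "TMD"]
        print(f"kappa = {cohens_kappa(examiner_a, examiner_b):.2f}")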

  5. Reliability and validity assessment of gastrointestinal dystemperaments questionnaire: a novel scale in Persian traditional medicine

    PubMed Central

    Hoseinzadeh, Hamidreza; Taghipour, Ali; Yousefi, Mahdi

    2018-01-01

    Background Development of a questionnaire based on the resources of Persian traditional medicine seems necessary. One of the problems faced by practitioners of traditional medicine is the divergence of opinions regarding the diagnosis of general temperament or the temperament of a member. One of the reasons is the lack of valid tools, which has led to difficulties in training students of traditional medicine and in the treatment of patients. The differences in detection methods have given rise to several treatment methods. Objective The present study aimed to develop a questionnaire and standard software for diagnosis of gastrointestinal dystemperaments. Methods The present research is a tool-development study which included 8 stages: developing the items, determining the statements based on the items, assessing the face validity, assessing the content validity, assessing the reliability, rating the items, developing software (named GDS v.1.1) for calculating the total score of the questionnaire, and evaluating the concurrent validity using statistical tests including Cronbach’s alpha and Cohen’s kappa coefficients. Results Based on the results, 112 notes including 62 symptoms were extracted from resources, and 58 items were obtained from in-person interview sessions with a panel of experts. A statement was selected for each item and, after merging a number of statements, a total of 49 statements were finally obtained. By calculating the statement impact score and determining the content validity, respectively, 6 and 10 other items were removed from the list of statements. Standardized Cronbach’s alpha for this questionnaire was 0.795 and its concurrent validity was 0.8. Conclusion A quantitative tool was developed for diagnosis and examination of gastrointestinal dystemperaments. The developed questionnaire is adequately reliable and valid for this purpose. In addition, the software can be used for clinical diagnosis. PMID:29629060

  6. Interlaboratory Validation of the Leaching Environmental Assessment Framework (LEAF) Method 1314 and Method 1315

    EPA Science Inventory

    This report summarizes the results of an interlaboratory study conducted to generate precision estimates for two leaching methods under review by the U.S. EPA’s OSWER for inclusion into the EPA’s SW-846: Method 1314: Liquid-Solid Partitioning as a Function of Liquid...

  7. Member Checking: A Tool to Enhance Trustworthiness or Merely a Nod to Validation?

    PubMed

    Birt, Linda; Scott, Suzanne; Cavers, Debbie; Campbell, Christine; Walter, Fiona

    2016-06-22

    The trustworthiness of results is the bedrock of high quality qualitative research. Member checking, also known as participant or respondent validation, is a technique for exploring the credibility of results. Data or results are returned to participants to check for accuracy and resonance with their experiences. Member checking is often mentioned as one in a list of validation techniques. This simplistic reporting might not acknowledge the value of using the method, nor its juxtaposition with the interpretative stance of qualitative research. In this commentary, we critique how member checking has been used in published research, before describing and evaluating an innovative in-depth member checking technique, Synthesized Member Checking. The method was used in a study with patients diagnosed with melanoma. Synthesized Member Checking addresses the co-constructed nature of knowledge by providing participants with the opportunity to engage with, and add to, interview and interpreted data, several months after their semi-structured interview. © The Author(s) 2016.

  8. Design and landing dynamic analysis of reusable landing leg for a near-space manned capsule

    NASA Astrophysics Data System (ADS)

    Yue, Shuai; Nie, Hong; Zhang, Ming; Wei, Xiaohui; Gan, Shengyong

    2018-06-01

    To improve the landing performance of a near-space manned capsule under various landing conditions, a novel landing system is designed that employs double-chamber and single-chamber dampers in the primary and auxiliary struts, respectively. A dynamic model of the landing system is established, and the damper parameters are determined by employing the design method. A single-leg drop test with different initial pitch angles is then conducted to compare and validate the simulation model. Based on the validated simulation model, seven critical landing conditions regarding nine crucial landing responses are found by combining the radial basis function (RBF) surrogate model with the adaptive simulated annealing (ASA) optimization method. Subsequently, the adaptability of the landing system under critical landing conditions is analyzed. The results show that the simulation results closely match the test results, which validates the accuracy of the dynamic model. In addition, all of the crucial responses under their corresponding critical landing conditions satisfy the design specifications, demonstrating the feasibility of the landing system.
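
    The surrogate-based search described above can be sketched with off-the-shelf tools: fit an RBF interpolant to a handful of expensive simulations, then run simulated annealing on the cheap surrogate to locate a critical condition. In the Python sketch below, the response function, bounds and design points are stand-ins for the authors' multibody landing model, not their actual setup.

        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.optimize import dual_annealing

        def expensive_response(x):   # stand-in: peak strut load vs. (pitch, sink speed)
            pitch, vel = x
            return (vel * 1.5) ** 2 + 10 * np.abs(pitch - 3.0)

        rng = np.random.default_rng(3)
        samples = rng.uniform([-8.0, 0.5], [8.0, 4.0], size=(40, 2))  # design points
        loads = np.array([expensive_response(x) for x in samples])

        surrogate = RBFInterpolator(samples, loads)   # RBF surrogate of the response

        # Maximize the surrogate load (minimize its negative) over the condition box:
        res = dual_annealing(lambda x: -surrogate([x])[0],
                             bounds=[(-8.0, 8.0), (0.5, 4.0)], seed=3)
        print("critical condition (pitch deg, sink m/s):", res.x)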

  9. Benchmark tests for a Formula SAE Student car prototyping

    NASA Astrophysics Data System (ADS)

    Mariasiu, Florin

    2011-12-01

    The aerodynamic characteristics of a vehicle are important elements in its design and construction. A low drag coefficient brings significant fuel savings and increased engine power efficiency. In vehicle design and development, dedicated CFD (Computational Fluid Dynamics) software packages are used to determine a vehicle's aerodynamic characteristics through computer simulation. However, the results obtained by this faster and cheaper method are usually validated by wind tunnel experiments, which are expensive and require complex testing equipment operated at relatively high cost. Therefore, the emergence and development of new low-cost testing methods to validate CFD simulation results would bring great economic benefits to the vehicle prototyping process. This paper presents the initial development process of a Formula SAE Student race-car prototype using CFD simulation and also presents a measurement system based on low-cost sensors with which the CFD simulation results were experimentally validated. The CFD software package used for simulation was SolidWorks with the FloXpress add-on, and the experimental measurement system was built using four piezoresistive FlexiForce force sensors.
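
    The validation idea reduces to inverting the drag equation, F = (1/2)ρv²ACd, using the force measured by the load sensors. A back-of-envelope Python sketch; the force, speed and frontal area below are placeholders, not the paper's measurements.

        rho = 1.2        # air density, kg/m^3
        v = 22.0         # test speed, m/s (placeholder)
        area = 1.1       # frontal area, m^2, assumed
        force = 180.0    # drag force from the load sensors, N (placeholder)

        cd_measured = 2 * force / (rho * v**2 * area)   # invert F = 0.5*rho*v^2*A*Cd
        cd_cfd = 0.58                                   # hypothetical CFD prediction
        print(f"Cd measured = {cd_measured:.3f}, CFD = {cd_cfd:.3f}, "
              f"diff = {100 * (cd_measured - cd_cfd) / cd_cfd:+.1f} %")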

  10. What do Demand-Control and Effort-Reward work stress questionnaires really measure? A discriminant content validity study of relevance and representativeness of measures.

    PubMed

    Bell, Cheryl; Johnston, Derek; Allan, Julia; Pollard, Beth; Johnston, Marie

    2017-05-01

    The Demand-Control (DC) and Effort-Reward Imbalance (ERI) models predict health in a work context. Self-report measures of the four key constructs (demand, control, effort, and reward) have been developed and it is important that these measures have good content validity uncontaminated by content from other constructs. We assessed relevance (whether items reflect the constructs) and representativeness (whether all aspects of the construct are assessed, and all items contribute to that assessment) across the instruments and items. Two studies examined fourteen demand/control items from the Job Content Questionnaire and seventeen effort/reward items from the Effort-Reward Imbalance measure using discriminant content validation and a third study developed new methods to assess instrument representativeness. Both methods use judges' ratings and construct definitions to get transparent quantitative estimates of construct validity. Study 1 used dictionary definitions while studies 2 and 3 used published phrases to define constructs. Overall, 3/5 demand items, 4/9 control items, 1/6 effort items, and 7/11 reward items were uniquely classified to the appropriate theoretical construct and were therefore 'pure' items with discriminant content validity (DCV). All pure items measured a defining phrase. However, both the DC and ERI assessment instruments failed to assess all defining aspects. Finding good discriminant content validity for demand and reward measures means these measures are usable and our quantitative results can guide item selection. By contrast, effort and control measures had limitations (in relevance and representativeness) presenting a challenge to the implementation of the theories. Statement of contribution What is already known on this subject? While the reliability and construct validity of Demand-Control and Effort-Reward-Imbalance (DC and ERI) work stress measures are routinely reported, there has not been adequate investigation of their content validity. This paper investigates their content validity in terms of both relevance and representativeness and provides a model for the investigation of content validity of measures in health psychology more generally. What does this study add? A new application of an existing method, discriminant content validity, and a new method of assessing instrument representativeness. 'Pure' DC and ERI items are identified, as are constructs that are not fully represented by their assessment instruments. The findings are important for studies attempting to distinguish between the main DC and ERI work stress constructs. The quantitative results can be used to guide item selection for future studies. © 2017 The British Psychological Society.

  11. Student mathematical imagination instruments: construction, cultural adaptation and validity

    NASA Astrophysics Data System (ADS)

    Dwijayanti, I.; Budayasa, I. K.; Siswono, T. Y. E.

    2018-03-01

    Imagination plays an important role as the center of students' sensorimotor activity. The purpose of this research is to construct an instrument for students' mathematical imagination in understanding the concept of algebraic expressions. Validity was assessed using questionnaire and test techniques, and the data were analyzed descriptively. The stages performed include: 1) constructing the embodiment of the imagination; 2) determining the learning style questionnaire; 3) constructing the instruments; 4) translating into Indonesian and adapting the learning style questionnaire content to student culture; and 5) performing content validation. The results show that the constructed instrument is valid by both content validation and empirical validation, so it can be used with revisions. Content validation involved Indonesian linguists, English linguists and mathematics subject-matter experts. Empirical validation was done through a legibility test (10 students), which showed that in general the language used can be understood. In addition, a questionnaire trial (86 students) was analyzed using the point-biserial correlation technique, yielding 16 valid items, with a KR-20 reliability test meeting the medium-reliability criterion. A trial of the test instrument (32 students) found all items valid, with a KR-21 reliability of 0.62.

  12. Evidence flow graph methods for validation and verification of expert systems

    NASA Technical Reports Server (NTRS)

    Becker, Lee A.; Green, Peter G.; Bhatnagar, Jayant

    1989-01-01

    The results of an investigation into the use of evidence flow graph techniques for performing validation and verification of expert systems are given. A translator to convert horn-clause rule bases into evidence flow graphs, a simulation program, and methods of analysis were developed. These tools were then applied to a simple rule base which contained errors. It was found that the method was capable of identifying a variety of problems, for example that the order of presentation of input data or small changes in critical parameters could affect the output from a set of rules.

  13. Determination of total arsenic in fish by hydride-generation atomic absorption spectrometry: method validation, traceability and uncertainty evaluation

    NASA Astrophysics Data System (ADS)

    Nugraha, W. C.; Elishian, C.; Ketrin, R.

    2017-03-01

    Arsenic compounds in fish are an important indicator of arsenic contamination in water monitoring. The high level of arsenic in fish is due to absorption through the food chain and accumulation in its habitat. Hydride generation (HG) coupled with atomic absorption spectrometric (AAS) detection is one of the most popular techniques employed for arsenic determination in a variety of matrices, including fish. This study aimed to develop a method for the determination of total arsenic in fish by HG-AAS. The sample preparation method of the Association of Official Analytical Chemists (AOAC) Method 999.10-2005 was adopted for acid digestion using a microwave digestion system, and AOAC Method 986.15-2005 for dry ashing. The method was developed and validated using the Certified Reference Material DORM-3 Fish Protein for trace metals, ensuring the accuracy and traceability of the results. The sources of uncertainty of the method were also evaluated. Using the method, the total arsenic concentration in the fish was found to be 45.6 ± 1.22 mg/kg with a coverage factor of 2 at the 95% confidence level. The evaluation of uncertainty was highly influenced by the calibration curve. This result was also traceable to the international measurement system through analysis of Certified Reference Material DORM-3, with 97.5% recovery. In summary, the sample preparation method and HG-AAS technique for total arsenic determination in fish were shown to be valid and reliable.

  14. Optimisation of an analytical method and results from the inter-laboratory comparison of the migration of regulated substances from food packaging into the new mandatory European Union simulant for dry foodstuffs.

    PubMed

    Jakubowska, Natalia; Beldì, Giorgia; Peychès Bach, Aurélie; Simoneau, Catherine

    2014-01-01

    This paper presents the outcome of the development, optimisation and validation at European Union level of an analytical method for using poly(2,6-diphenyl phenylene oxide; PPPO), which is stipulated in Regulation (EU) No. 10/2011, as food simulant E for testing specific migration from plastics into dry foodstuffs. Two methods for fortifying respectively PPPO and a low-density polyethylene (LDPE) film with surrogate substances that are relevant to food contact were developed. A protocol for cleaning the PPPO and an efficient analytical method were developed for the quantification of butylhydroxytoluene (BHT), benzophenone (BP), diisobutylphthalate (DiBP), bis(2-ethylhexyl) adipate (DEHA) and 1,2-cyclohexanedicarboxylic acid, diisononyl ester (DINCH) from PPPO. A protocol for a migration test from plastics using small migration cells was also developed. The method was validated by an inter-laboratory comparison (ILC) with 16 national reference laboratories for food contact materials in the European Union. This allowed, for the first time, data to be obtained on the precision and laboratory performance of both migration and quantification. The results showed that the validation ILC was successful even when taking into account the complexity of the exercise. The results showed that the method performance was 7-9% repeatability standard deviation (rSD) for most substances (regardless of concentration), with 12% rSD for the high level of BHT and for DiBP at very low levels. The reproducibility standard deviation results for the 16 European Union laboratories were in the range of 20-30% for the quantification from PPPO (for the three levels of concentrations of the five substances) and 15-40% from migration experiments from the fortified plastic at 60°C for 10 days and subsequent quantification. Considering the lack of data previously available in the literature, this work has demonstrated that the validation of a method is possible both for migration from a film and for quantification into a corresponding simulant for specific migration.

  15. Validity and Reliability of Perinatal Biomarkers after Storage as Dry Blood Spots on Paper

    PubMed Central

    Mihalopoulos, Nicole L.; Phillips, Terry M.; Slater, Hillarie; Thomson, J. Anne; Varner, Michael W.; Moyer-Mileur, Laurie J.

    2013-01-01

    Objective To validate use of chip-based immunoaffinity capillary electrophoresis on dry blood spot samples (DBSS) to measure obesity-related cytokines. Methods Chip-based immunoaffinity capillary electrophoresis was used to measure adiponectin, leptin and insulin in serum and DBSS in pregnant women, cord blood, and infant heelstick at birth and 6 weeks. Concordance of measurements was determined with Pearson's correlation. Results We report high concordance between results obtained from serum and DBSS, with the exception of cord blood specimens. Conclusions Ease of sample collection and storage makes DBSS an optimal method for use in studies involving neonates and young children. PMID:21735507

  16. System and method for forward error correction

    NASA Technical Reports Server (NTRS)

    Cole, Robert M. (Inventor); Bishop, James E. (Inventor)

    2006-01-01

    A system and method are provided for transferring a packet across a data link. The packet may include a stream of data symbols which is delimited by one or more framing symbols. Corruptions of the framing symbol which result in valid data symbols may be mapped to invalid symbols. If it is desired to transfer one of the valid data symbols that has been mapped to an invalid symbol, the data symbol may be replaced with an unused symbol. At the receiving end, these unused symbols are replaced with the corresponding valid data symbols. The data stream of the packet may be encoded with forward error correction information to detect and correct errors in the data stream.
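
    The symbol-substitution idea in this abstract can be illustrated with a toy code book: data symbols that a corrupted framing symbol could mimic are swapped for unused symbols before transmission and restored on receipt. The specific symbol values in the Python sketch below are invented for illustration, not taken from the patent.

        FRAME = 0x7E
        RISKY = {0x7F, 0x7C}                 # data symbols a corrupted frame could mimic (assumed)
        UNUSED = {0x7F: 0xE1, 0x7C: 0xE2}    # spare symbols never used for data (assumed)
        RESTORE = {v: k for k, v in UNUSED.items()}

        def encode(payload):
            """Replace risky data symbols with unused ones, then add framing."""
            body = [UNUSED[b] if b in RISKY else b for b in payload]
            return [FRAME] + body + [FRAME]

        def decode(packet):
            """Strip framing and map the spare symbols back to the originals."""
            assert packet[0] == FRAME and packet[-1] == FRAME
            return [RESTORE.get(b, b) for b in packet[1:-1]]

        pkt = encode([0x10, 0x7F, 0x22, 0x7C])
        print(pkt)            # risky symbols replaced by spares
        print(decode(pkt))    # original payload restored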

  17. System and method for transferring data on a data link

    NASA Technical Reports Server (NTRS)

    Cole, Robert M. (Inventor); Bishop, James E. (Inventor)

    2007-01-01

    A system and method are provided for transferring a packet across a data link. The packet may include a stream of data symbols which is delimited by one or more framing symbols. Corruptions of the framing symbol which result in valid data symbols may be mapped to invalid symbols. If it is desired to transfer one of the valid data symbols that has been mapped to an invalid symbol, the data symbol may be replaced with an unused symbol. At the receiving end, these unused symbols are replaced with the corresponding valid data symbols. The data stream of the packet may be encoded with forward error correction information to detect and correct errors in the data stream.

  18. [Econometric and ethical validation of logistic regression. Reducing the number of patients in the evaluation of mortality].

    PubMed

    Castiel, D; Herve, C

    1992-01-01

    In general, a large number of patients is needed to conclude whether the results of a therapeutic strategy are significant or not. One can lower this number with a logit. The method has been proposed in an article published recently (Cost-utility analysis of early thrombolytic therapy, Pharmaco Economics, 1992). The present article is an essay aimed at validating the method, both from the econometric and ethical points of view.

  19. Validating Savings Claims of Cold Climate Zero Energy Ready Homes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, J.; Puttagunta, S.

    This report details the validation methods used to analyze consumption at each of these homes. It includes a detailed end-use examination of consumption from the following categories: 1) Heating, 2) Cooling, 3) Lights, Appliances, and Miscellaneous Electric Loads (LAMELS) along with Domestic Hot Water Use, 4) Ventilation, and 5) PV generation. A utility bill disaggregation method, which allows a crude estimation of space conditioning loads based on outdoor air temperature, was also applied, and the results were compared to the actual measured data.
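
    The utility-bill disaggregation mentioned above is essentially a degree-day regression: daily energy use is regressed on heating degree-days so the weather-dependent share separates from the constant base load. A Python sketch on synthetic data (the base temperature and coefficients are assumptions):

        import numpy as np

        rng = np.random.default_rng(4)
        t_out = rng.uniform(-10, 25, size=120)             # daily mean outdoor temp, C
        hdd = np.maximum(18.0 - t_out, 0.0)                # heating degree-days, base 18 C assumed
        use = 8.0 + 1.3 * hdd + rng.normal(0, 1.5, 120)    # kWh/day: base + heating + noise

        slope, base = np.polyfit(hdd, use, 1)              # kWh per degree-day, base load
        heating = slope * hdd.sum()                        # estimated space-heating energy
        total = use.sum()
        print(f"base load ~ {base:.1f} kWh/day; heating ~ {100*heating/total:.0f} % of total")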

  20. Validation of a Monte Carlo code system for grid evaluation with interference effect on Rayleigh scattering

    NASA Astrophysics Data System (ADS)

    Zhou, Abel; White, Graeme L.; Davidson, Rob

    2018-02-01

    Anti-scatter grids are commonly used in x-ray imaging systems to reduce scatter radiation reaching the image receptor. Anti-scatter grid performance can be simulated and validated through the use of Monte Carlo (MC) methods. Our recently reported work modified existing MC codes, resulting in improved performance when simulating x-ray imaging. The aim of this work is to validate the transmission of x-ray photons in grids computed by the recently reported new MC codes against experimental results and results previously reported in the literature. The results of this work show that the scatter-to-primary ratio (SPR) and the transmissions of primary (Tp), scatter (Ts), and total (Tt) radiation determined using this new MC code system agree strongly with the experimental results and the results reported in the literature. Tp, Ts, Tt, and SPR determined with this new MC simulation code system are valid. These results also show that the interference effect on Rayleigh scattering should not be neglected in the evaluation of both mammographic and general grids. Our new MC simulation code system has been shown to be valid and can be used for analysing and evaluating grid designs.
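
    The quantities compared above follow directly from tallied photon counts. A minimal Python sketch with made-up counts (not results from the paper):

        P_in, S_in = 8.0e5, 6.0e5      # primary/scatter photons incident on the grid
        P_out, S_out = 5.6e5, 0.9e5    # photons reaching the receptor with the grid

        Tp = P_out / P_in                        # primary transmission
        Ts = S_out / S_in                        # scatter transmission
        Tt = (P_out + S_out) / (P_in + S_in)     # total transmission
        print(f"Tp={Tp:.2f} Ts={Ts:.2f} Tt={Tt:.2f}")
        print(f"SPR without grid = {S_in/P_in:.2f}, with grid = {S_out/P_out:.2f}")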

  1. A new method for assessing content validity in model-based creation and iteration of eHealth interventions.

    PubMed

    Kassam-Adams, Nancy; Marsac, Meghan L; Kohser, Kristen L; Kenardy, Justin A; March, Sonja; Winston, Flaura K

    2015-04-15

    The advent of eHealth interventions to address psychological concerns and health behaviors has created new opportunities, including the ability to optimize the effectiveness of intervention activities and then deliver these activities consistently to a large number of individuals in need. Given that eHealth interventions grounded in a well-delineated theoretical model for change are more likely to be effective and that eHealth interventions can be costly to develop, assuring the match of final intervention content and activities to the underlying model is a key step. We propose to apply the concept of "content validity" as a crucial checkpoint to evaluate the extent to which proposed intervention activities in an eHealth intervention program are valid (eg, relevant and likely to be effective) for the specific mechanism of change that each is intended to target and the intended target population for the intervention. The aims of this paper are to define content validity as it applies to model-based eHealth intervention development, to present a feasible method for assessing content validity in this context, and to describe the implementation of this new method during the development of a Web-based intervention for children. We designed a practical 5-step method for assessing content validity in eHealth interventions that includes defining key intervention targets, delineating intervention activity-target pairings, identifying experts and using a survey tool to gather expert ratings of the relevance of each activity to its intended target, its likely effectiveness in achieving the intended target, and its appropriateness with a specific intended audience, and then using quantitative and qualitative results to identify intervention activities that may need modification. We applied this method during our development of the Coping Coach Web-based intervention for school-age children. In the evaluation of Coping Coach content validity, 15 experts from five countries rated each of 15 intervention activity-target pairings. Based on quantitative indices, content validity was excellent for relevance and good for likely effectiveness and age-appropriateness. Two intervention activities had item-level indicators that suggested the need for further review and potential revision by the development team. This project demonstrated that assessment of content validity can be straightforward and feasible to implement and that results of this assessment provide useful information for ongoing development and iterations of new eHealth interventions, complementing other sources of information (eg, user feedback, effectiveness evaluations). This approach can be utilized at one or more points during the development process to guide ongoing optimization of eHealth interventions.
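
    The quantitative indices such an expert survey yields are typically content validity indices: the share of experts rating an activity-target pairing 3 or 4 on a 4-point scale. A Python sketch with invented ratings; the 0.78 flagging cutoff is a common convention from the CVI literature, not a figure from this paper.

        import numpy as np

        ratings = np.array([                 # rows: 15 experts; cols: 3 example pairings
            [4, 3, 2], [4, 4, 3], [3, 3, 2], [4, 4, 4], [3, 4, 2],
            [4, 3, 3], [4, 4, 2], [3, 3, 3], [4, 4, 2], [4, 3, 3],
            [3, 4, 2], [4, 4, 3], [4, 3, 2], [3, 4, 3], [4, 4, 2],
        ])
        i_cvi = (ratings >= 3).mean(axis=0)    # item-level CVI per pairing
        s_cvi = i_cvi.mean()                   # scale-level average
        flag = np.where(i_cvi < 0.78)[0]       # pairings needing review
        print("I-CVI:", np.round(i_cvi, 2), "S-CVI/Ave:", round(s_cvi, 2),
              "review items:", flag)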

  2. [A method to estimate the short-term fractal dimension of heart rate variability based on wavelet transform].

    PubMed

    Zhonggang, Liang; Hong, Yan

    2006-10-01

    A new method for calculating the fractal dimension of short-term heart rate variability (HRV) signals is presented. The method is based on the wavelet transform and filter banks. The implementation is as follows: first, we extract the fractal component from the HRV signal using the wavelet transform. Next, we estimate the power spectrum distribution of the fractal component using an auto-regressive model, and we estimate the parameter gamma using the least-squares method. Finally, according to the formula D = 2 - (gamma - 1)/2, we estimate the fractal dimension of the HRV signal. To validate the stability and reliability of the proposed method, 24 fractal signals with a fractal dimension of 1.6 were simulated using fractional Brownian motion; the results show that the method is stable and reliable.
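
    The spectral route to D can be sketched end to end: estimate the log-log slope gamma of the power spectrum (S(f) proportional to 1/f^gamma) by least squares, then apply D = 2 - (gamma - 1)/2. The Python fragment below uses a plain periodogram on synthetic 1/f-type noise rather than the paper's wavelet/AR pipeline, so it is an illustration of the formula, not a reimplementation.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 4096
        freqs = np.fft.rfftfreq(n, d=1.0)[1:]
        # Build noise with amplitude ~ f^-1, i.e. power ~ f^-2 (gamma = 2):
        spectrum = (freqs ** -1.0) * np.exp(1j * rng.uniform(0, 2*np.pi, freqs.size))
        signal = np.fft.irfft(np.concatenate([[0], spectrum]), n)

        psd = np.abs(np.fft.rfft(signal)[1:]) ** 2
        gamma = -np.polyfit(np.log(freqs), np.log(psd), 1)[0]   # least-squares slope
        D = 2 - (gamma - 1) / 2
        print(f"estimated gamma = {gamma:.2f}, fractal dimension D = {D:.2f}")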

  3. Registration of in vivo MR to histology of rodent brains using blockface imaging

    NASA Astrophysics Data System (ADS)

    Uberti, Mariano; Liu, Yutong; Dou, Huanyu; Mosley, R. Lee; Gendelman, Howard E.; Boska, Michael

    2009-02-01

    Registration of MRI to histopathological sections can enhance bioimaging validation for use in pathobiologic, diagnostic, and therapeutic evaluations. However, commonly used registration methods fall short of this goal due to tissue shrinkage and tearing after brain extraction and preparation. In an attempt to overcome these limitations, we developed a software toolbox using 3D blockface imaging as the common space of reference. This toolbox includes a semi-automatic brain extraction technique using constraint level sets (CLS), 3D reconstruction methods for the blockface and MR volume, and a 2D warping technique using thin-plate splines with landmark optimization. Using this toolbox, the rodent brain volume is first extracted from the whole-head MRI using CLS. The blockface volume is reconstructed, followed by 3D brain MRI registration to the blockface volume to correct the global deformations due to brain extraction and fixation. Finally, registered MRI and histological slices are warped to corresponding blockface images to correct slice-specific deformations. The CLS brain extraction technique was validated by comparison with manual extraction results, showing 94% overlap. The image warping technique was validated by calculating target registration error (TRE). Results showed a registration accuracy of TRE < 1 pixel. Lastly, the registration method and the software tools developed were used to validate cell migration in murine human immunodeficiency virus type one encephalitis.

  4. A meta-analysis of an implicit measure of personality functioning: the Mutuality of Autonomy Scale.

    PubMed

    Graceffo, Robert A; Mihura, Joni L; Meyer, Gregory J

    2014-01-01

    The Mutuality of Autonomy scale (MA) is a Rorschach variable designed to capture the degree to which individuals mentally represent self and other as mutually autonomous versus pathologically destructive (Urist, 1977). Discussions of the MA's validity found in articles and chapters usually claim good support, which we evaluated by a systematic review and meta-analysis of its construct validity. Overall, in a random effects analysis across 24 samples (N = 1,801) and 91 effect sizes, the MA scale was found to maintain a relationship of r = .20, 95% CI [.16, .25], with relevant validity criteria. We hypothesized that MA summary scores that aggregate more MA response-level data would maintain the strongest relationship with relevant validity criteria. Results supported this hypothesis (aggregated scoring method: r = .24, k = 57, S = 24; nonaggregated scoring methods: r = .15, k = 34, S = 10; p = .039, 2-tailed). Across 7 exploratory moderator analyses, only 1 (criterion method) produced significant results. Criteria derived from the Thematic Apperception Test produced smaller effects than clinician ratings, diagnostic differentiation, and self-attributed characteristics; criteria derived from observer reports produced smaller effects than clinician ratings and self-attributed characteristics. Implications of the study's findings are discussed in terms of both research and clinical work.

  5. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753

  6. Development and Validation of HPLC-DAD and UHPLC-DAD Methods for the Simultaneous Determination of Guanylhydrazone Derivatives Employing a Factorial Design.

    PubMed

    Azevedo de Brito, Wanessa; Gomes Dantas, Monique; Andrade Nogueira, Fernando Henrique; Ferreira da Silva-Júnior, Edeildo; Xavier de Araújo-Júnior, João; Aquino, Thiago Mendonça de; Adélia Nogueira Ribeiro, Êurica; da Silva Solon, Lilian Grace; Soares Aragão, Cícero Flávio; Barreto Gomes, Ana Paula

    2017-08-30

    Guanylhydrazones are molecules with great pharmacological potential in various therapeutic areas, including antitumoral activity. Factorial design is an excellent tool in the optimization of a chromatographic method, because factors such as temperature, mobile phase composition, mobile phase pH and column length can be changed quickly to establish the optimal conditions of analysis. The aim of the present work was to develop and validate HPLC and UHPLC methods for the simultaneous determination of guanylhydrazones with anticancer activity employing experimental design. Precise, accurate, linear and robust HPLC and UHPLC methods were developed and validated for the simultaneous quantification of the guanylhydrazones LQM10, LQM14, and LQM17. The UHPLC method was more economical, consuming four times less solvent and requiring a 20 times smaller injection volume, which allowed better column performance. Comparing the empirical approach employed in the HPLC method development to the DoE approach employed in the UHPLC method development, we can conclude that the factorial design made the method development faster, more practical and rational. This resulted in methods that can be employed in the analysis, evaluation and quality control of these new synthetic guanylhydrazones.
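
    As a minimal illustration of the factorial-design idea behind such optimization, a full-factorial run list can be generated directly; the factor names and levels below are hypothetical, not taken from the paper:

        from itertools import product

        # hypothetical chromatographic factors, two levels each
        factors = {
            "column_temp_C": [25, 40],
            "mobile_phase_pH": [3.0, 4.5],
            "organic_fraction_pct": [20, 35],
        }

        # full factorial: every combination of levels, 2**3 = 8 runs
        runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
        for i, run in enumerate(runs, 1):
            print(f"run {i}: {run}")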

  7. [Reliability and validity of the Chinese version on Comprehensive Scores for Financial Toxicity based on the patient-reported outcome measures].

    PubMed

    Yu, H H; Bi, X; Liu, Y Y

    2017-08-10

    Objective: To evaluate the reliability and validity of the Chinese version of the comprehensive scores for financial toxicity (COST), based on the patient-reported outcome measures. Methods: A total of 118 cancer patients were interviewed face-to-face by well-trained investigators. Cronbach's α and Pearson correlation coefficients were used to evaluate reliability. The content validity index (CVI) and exploratory factor analysis (EFA) were used to evaluate content validity and construct validity, respectively. Results: The Cronbach's α coefficient was 0.889 for the whole questionnaire, with test-retest coefficients between 0.77 and 0.98. The scale-content validity index (S-CVI) was 0.82, with item-content validity indices (I-CVI) between 0.83 and 1.00. Two components were extracted by exploratory factor analysis, with a cumulative variance contribution of 68.04% and loadings >0.60 for every item. Conclusion: The Chinese version of the COST scale showed high reliability and good validity, and can thus be applied to assess the financial situation of cancer patients.
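
    For reference, Cronbach's α as reported here is computed from the item-score matrix; a self-contained sketch (the data are synthetic, with the record's 118 respondents and an assumed 11-item questionnaire):

        import numpy as np

        def cronbach_alpha(items):
            """items: (n_respondents, n_items) array of item scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)      # variance of each item
            total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
            return k / (k - 1) * (1 - item_vars.sum() / total_var)

        rng = np.random.default_rng(0)
        latent = rng.normal(size=(118, 1))             # one underlying trait
        scores = latent + 0.8 * rng.normal(size=(118, 11))
        print(f"alpha = {cronbach_alpha(scores):.3f}")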

  8. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, the Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and the parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and the observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
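
    The information criteria named above reduce to simple formulas once each model's maximized log-likelihood is known; a sketch for ranking alternative models (the log-likelihoods, parameter counts and observation count below are invented):

        import numpy as np

        def aicc(log_lik, k, n):
            # corrected Akaike information criterion (small-sample correction)
            return 2 * k - 2 * log_lik + 2 * k * (k + 1) / (n - k - 1)

        def bic(log_lik, k, n):
            # Bayesian information criterion
            return k * np.log(n) - 2 * log_lik

        # hypothetical alternative models differing in hydraulic conductivity zonation
        models = [("uniform K", -210.4, 2), ("zoned K", -198.7, 5), ("interpolated K", -196.2, 9)]
        n_obs = 80
        for name, ll, k in sorted(models, key=lambda m: aicc(m[1], m[2], n_obs)):
            print(f"{name}: AICc = {aicc(ll, k, n_obs):.1f}, BIC = {bic(ll, k, n_obs):.1f}")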

  9. Validation of newly developed and redesigned key indicator methods for assessment of different working conditions with physical workloads based on mixed-methods design: a study protocol.

    PubMed

    Klussmann, Andre; Liebers, Falk; Brandstädt, Felix; Schust, Marianne; Serafin, Patrick; Schäfer, Andreas; Gebhardt, Hansjürgen; Hartmann, Bernd; Steinberg, Ulf

    2017-08-21

    The impact of work-related musculoskeletal disorders is considerable. The assessment of work tasks with physical workloads is crucial to estimate the work-related health risks of exposed employees. Three key indicator methods (KIMs) are available for risk assessment regarding manual lifting, holding and carrying of loads; manual pulling and pushing of loads; and manual handling operations. Three further KIMs for risk assessment regarding whole-body forces, awkward body postures and body movement have been developed de novo. In addition, the development of a newly drafted combined method for mixed exposures is planned. All methods will be validated regarding face validity, reliability, convergent validity, criterion validity and further aspects of utility under practical conditions. As part of the joint project MEGAPHYS (multilevel risk assessment of physical workloads), a mixed-methods study is being designed for the validation of KIMs and conducted in companies of different sizes and sectors in Germany. Workplaces are documented and analysed by observations, applying KIMs, interviews and assessment of environmental conditions. Furthermore, a survey among the employees at the respective workplaces takes place with standardised questionnaires, interviews and physical examinations. It is intended to include 1200 employees at 120 different workplaces. For analysis of the quality criteria, recommendations of the COSMIN checklist (COnsensus-based Standards for the selection of health Measurement INstruments) will be taken into account. The study was planned and conducted in accordance with the German Medical Professional Code and the Declaration of Helsinki as well as the German Federal Data Protection Act. The design of the study was approved by ethics committees. We intend to publish the validated KIMs in 2018. Results will be published in peer-reviewed journals, presented at international meetings and disseminated to actual users for practical application. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  10. Experimental validation of calculated atomic charges in ionic liquids

    NASA Astrophysics Data System (ADS)

    Fogarty, Richard M.; Matthews, Richard P.; Ashworth, Claire R.; Brandt-Talbot, Agnieszka; Palgrave, Robert G.; Bourne, Richard A.; Vander Hoogerstraete, Tom; Hunt, Patricia A.; Lovelock, Kevin R. J.

    2018-05-01

    A combination of X-ray photoelectron spectroscopy and near edge X-ray absorption fine structure spectroscopy has been used to provide an experimental measure of nitrogen atomic charges in nine ionic liquids (ILs). These experimental results are used to validate charges calculated with three computational methods: charges from electrostatic potentials using a grid-based method (ChelpG), natural bond orbital population analysis, and the atoms in molecules approach. By combining these results with those from a previous study on sulfur, we find that ChelpG charges provide the best description of the charge distribution in ILs. However, we find that ChelpG charges can lead to significant conformational dependence and therefore advise that small differences in ChelpG charges (<0.3 e) should be interpreted with care. We use these validated charges to provide physical insight into nitrogen atomic charges for the ILs probed.

  11. Validation of Physics Standardized Test Items

    NASA Astrophysics Data System (ADS)

    Marshall, Jill

    2008-10-01

    The Texas Physics Assessment Team (TPAT) examined the Texas Assessment of Knowledge and Skills (TAKS) to determine whether it is a valid indicator of physics preparation for future course work and employment, and of the knowledge and skills needed to act as an informed citizen in a technological society. We categorized science items from the 2003 and 2004 10th and 11th grade TAKS by content area(s) covered, knowledge and skills required to select the correct answer, and overall quality. We also analyzed a 5,000-student sample of item-level results from the 2004 11th grade exam using standard statistical methods employed by test developers (factor analysis and Item Response Theory). Triangulation of our results revealed strengths and weaknesses of the different methods of analysis. The TAKS was found to be only weakly indicative of physics preparation and we make recommendations for increasing the validity of standardized physics testing.

  12. Validated spectrofluorimetric method for determination of two phosphodiesterase inhibitors tadalafil and vardenafil in pharmaceutical preparations and spiked human plasma.

    PubMed

    Abu El-Enin, Mohammed Abu Bakr; Al-Ghaffar Hammouda, Mohammed El-Sayed Abd; El-Sherbiny, Dina Tawfik; El-Wasseef, Dalia Rashad; El-Ashry, Saadia Mahmoud

    2016-02-01

    A sensitive and rapid spectrofluorimetric method has been developed and validated for the determination of tadalafil (TAD) and vardenafil (VAR), either in pure form, in tablet dosage forms, or spiked in human plasma. The method is based on measurement of the native fluorescence of both drugs in acetonitrile at λem 330 and 470 nm after excitation at 280 and 275 nm for tadalafil and vardenafil, respectively. Linear relationships were obtained over the concentration ranges 4-40 and 10-250 ng/mL, with detection limits of 1 and 3 ng/mL for tadalafil and vardenafil, respectively. Various experimental parameters affecting the fluorescence intensity were carefully studied and optimized. The developed method was applied successfully to the determination of tadalafil and vardenafil in bulk drugs and tablet dosage forms. Moreover, the high sensitivity of the proposed method permitted their determination in spiked human plasma. The developed method was validated in terms of specificity, linearity, lower limit of quantification (LOQ), lower limit of detection (LOD), precision and accuracy. The mean recoveries of the analytes in pharmaceutical preparations were in agreement with those obtained from the comparison methods, as revealed by statistical analysis of the obtained results using Student's t-test and the variance ratio F-test. Copyright © 2015 John Wiley & Sons, Ltd.
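
    Detection and quantification limits like those reported are commonly derived from the calibration line via the ICH relations LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation and S the slope; a sketch with made-up calibration data over the tadalafil range:

        import numpy as np

        # hypothetical calibration points (ng/mL vs fluorescence intensity)
        x = np.array([4, 8, 16, 24, 32, 40], dtype=float)
        y = np.array([10.2, 19.8, 40.5, 59.7, 81.0, 99.6])

        slope, intercept = np.polyfit(x, y, 1)
        residuals = y - (slope * x + intercept)
        sigma = residuals.std(ddof=2)      # residual SD with 2 fitted parameters

        lod = 3.3 * sigma / slope          # ICH Q2(R1) limit of detection
        loq = 10.0 * sigma / slope         # ICH Q2(R1) limit of quantification
        print(f"LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")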

  13. Simultaneous determination of centchroman and tamoxifen along with their metabolites in rat plasma using LC-MS/MS.

    PubMed

    Rama Raju, Kanumuri Siva; Taneja, Isha; Singh, Sheelendra Pratap; Tripathi, Amit; Mishra, Durga Prasad; Hussain, K Mahaboob; Gayen, Jiaur Rahman; Singh, Shio Kumar; Wahajuddin, Muhammad

    2015-01-01

    Tamoxifen and centchroman are two non-steroidal, selective estrogen receptor modulators intended for long-term therapy in women. Because of their widespread use, there is a possibility of co-prescription of these agents. We studied the probable pharmacokinetic interaction between these agents in breast cancer model rats. A simple, sensitive and rapid LC-ESI-MS/MS method was developed and validated for the simultaneous determination of tamoxifen, centchroman and their active metabolites. The method was linear over a range of 0.2-200 ng/ml. All validation parameters met the acceptance criteria according to regulatory guidelines. The results show the potential for drug-drug interaction upon co-administration of these two marketed drugs.

  14. Evaluation of objectivity, reliability and criterion validity of the key indicator method for manual handling operations (KIM-MHO), draft 2007.

    PubMed

    Klußmann, André; Gebhardt, Hansjürgen; Rieger, Monika; Liebers, Falk; Steinberg, Ulf

    2012-01-01

    Upper extremity musculoskeletal symptoms and disorders are common in the working population. The economic and social impact of such disorders is considerable. Long-term, dynamic repetitive exposure of the hand-arm system during manual handling operations (MHO), alone or in combination with static and postural effort, is recognised as a cause of musculoskeletal symptoms and disorders. The assessment of these manual work tasks is crucial to estimate the health risks of exposed employees. For these work tasks, a new method for the assessment of the working conditions was developed and a validation study was performed. The results suggest satisfactory criterion validity and moderate objectivity of the KIM-MHO draft 2007. The method was modified and evaluated again. It is planned to release a new version of KIM-MHO in spring 2012.

  15. UNCLES: method for the identification of genes differentially consistently co-expressed in a specific subset of datasets.

    PubMed

    Abu-Jamous, Basel; Fa, Rui; Roberts, David J; Nandi, Asoke K

    2015-06-04

    Collective analysis of the increasing number of available gene expression datasets is required. The recently proposed binarisation of consensus partition matrices (Bi-CoPaM) method can combine clustering results from multiple datasets to identify the subsets of genes which are consistently co-expressed in all of the provided datasets in a tuneable manner. However, results validation and parameter setting are issues that complicate the design of such methods. Moreover, although it is a common practice to test methods by application to synthetic datasets, the mathematical models used to synthesise such datasets are usually based on approximations which may not always be sufficiently representative of real datasets. Here, we propose an unsupervised method for the unification of clustering results from multiple datasets using external specifications (UNCLES). This method has the ability to identify the subsets of genes consistently co-expressed in a subset of datasets while being poorly co-expressed in another subset of datasets, as well as the subsets of genes consistently co-expressed in all given datasets. We also propose the M-N scatter plots validation technique and adopt it to set the parameters of UNCLES, such as the number of clusters, automatically. Additionally, we propose an approach for the synthesis of gene expression datasets using real data profiles in a way which combines the ground-truth knowledge of synthetic data with the realistic expression values of real data, and therefore overcomes the problem of faithfulness of synthetic expression data modelling. By application to those datasets, we validate UNCLES while comparing it with other conventional clustering methods and, of particular relevance, biclustering methods. We further validate UNCLES by application to a set of 14 real genome-wide yeast datasets, as it produces focused clusters that conform well to known biological facts. Furthermore, in-silico-based hypotheses regarding the function of a few previously unknown genes in those focused clusters are drawn. The UNCLES method, the M-N scatter plots technique, and the expression data synthesis approach will have wide application for the comprehensive analysis of genomic and other sources of multiple complex biological datasets. Moreover, the derived in-silico-based biological hypotheses represent subjects for future functional studies.

  16. Basis Selection for Wavelet Regression

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)

    1998-01-01

    A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature and are contrasted with other published methods.
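
    A rough sketch of cross-validated threshold selection for wavelet regression, using the PyWavelets package and an even/odd split in the spirit of classic wavelet cross-validation (the wavelet choice and threshold grid are assumptions, not details from the paper):

        import numpy as np
        import pywt

        def cv_soft_threshold(y, wavelet="db4", n_grid=40):
            """Choose a soft threshold by even/odd-split cross-validation, then denoise."""
            coeffs_half = pywt.wavedec(y[::2], wavelet)
            best_t, best_err = 0.0, np.inf
            for t in np.linspace(0.0, 4 * np.std(y), n_grid):
                den = [coeffs_half[0]] + [pywt.threshold(c, t, mode="soft")
                                          for c in coeffs_half[1:]]
                rec = pywt.waverec(den, wavelet)[: y[::2].size]
                pred = 0.5 * (rec[:-1] + rec[1:])   # interpolate to held-out odd samples
                err = np.mean((pred - y[1::2][: pred.size]) ** 2)
                if err < best_err:
                    best_t, best_err = t, err
            coeffs = pywt.wavedec(y, wavelet)
            den = [coeffs[0]] + [pywt.threshold(c, best_t, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(den, wavelet)[: y.size], best_t

        # demo on a noisy piecewise-constant signal
        x = np.linspace(0, 1, 512)
        noisy = np.sign(np.sin(8 * np.pi * x)) + 0.3 * np.random.default_rng(0).normal(size=x.size)
        denoised, t_star = cv_soft_threshold(noisy)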

  17. Bias correction for selecting the minimal-error classifier from many machine learning models.

    PubMed

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate-size real datasets and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction with smaller variance, and it has the additional advantage of extrapolating error estimates for larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier and accuracy. An R package 'MLbias' and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
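
    The inverse power law learning curve at the core of IPL can be fitted with ordinary nonlinear least squares; a sketch with invented cross-validation error rates measured at increasing training sizes (not the paper's data):

        import numpy as np
        from scipy.optimize import curve_fit

        def ipl(n, a, b, c):
            # inverse power law: error decays as a * n**(-b) toward asymptote c
            return a * np.power(n, -b) + c

        sizes = np.array([10, 15, 20, 30, 40, 50], dtype=float)
        errors = np.array([0.42, 0.37, 0.33, 0.29, 0.27, 0.26])

        params, _ = curve_fit(ipl, sizes, errors, p0=(1.0, 0.5, 0.2), maxfev=10000)
        print(f"extrapolated error at n=100: {ipl(100, *params):.3f}")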

  18. A Complex Systems Approach to Causal Discovery in Psychiatry.

    PubMed

    Saxe, Glenn N; Statnikov, Alexander; Fenyo, David; Ren, Jiwen; Li, Zhiguo; Prasad, Meera; Wall, Dennis; Bergman, Nora; Briggs, Ernestine C; Aliferis, Constantin

    2016-01-01

    Conventional research methodologies and data analytic approaches in psychiatric research are unable to reliably infer causal relations without experimental designs, or to make inferences about the functional properties of the complex systems in which psychiatric disorders are embedded. This article describes a series of studies to validate a novel hybrid computational approach, the Complex Systems-Causal Network (CS-CN) method, designed to integrate causal discovery within a complex systems framework for psychiatric research. The CS-CN method was first applied to an existing dataset on psychopathology in 163 children hospitalized with injuries (validation study). Next, it was applied to a much larger dataset of traumatized children (replication study). Finally, the CS-CN method was applied in a controlled experiment using a 'gold standard' dataset for causal discovery and compared with other methods for accurately detecting causal variables (resimulation controlled experiment). The CS-CN method successfully detected a causal network of 111 variables and 167 bivariate relations in the initial validation study. This causal network had well-defined adaptive properties, and a set of variables was found that disproportionally contributed to these properties. Modeling the removal of these variables resulted in significant loss of adaptive properties. The CS-CN method was successfully applied in the replication study, performed better than traditional statistical methods, and performed similarly to state-of-the-art causal discovery algorithms in the causal detection experiment. The CS-CN method was validated, replicated, and yielded both novel and previously validated findings related to risk factors and potential treatments of psychiatric disorders. The novel approach yields both fine-grain (micro) and high-level (macro) insights and thus represents a promising approach for complex systems-oriented research in psychiatry.

  19. Face, content, and construct validity of human placenta as a haptic training tool in neurointerventional surgery.

    PubMed

    Ribeiro de Oliveira, Marcelo Magaldi; Nicolato, Arthur; Santos, Marcilea; Godinho, Joao Victor; Brito, Rafael; Alvarenga, Alexandre; Martins, Ana Luiza Valle; Prosdocimi, André; Trivelato, Felipe Padovani; Sabbagh, Abdulrahman J; Reis, Augusto Barbosa; Maestro, Rolando Del

    2016-05-01

    OBJECT The development of neurointerventional treatments of central nervous system disorders has resulted in the need for adequate training environments for novice interventionalists. Virtual simulators offer anatomical definition but lack adequate tactile feedback. Animal models, which provide more lifelike training, require an appropriate infrastructure base. The authors describe a training model for neurointerventional procedures using the human placenta (HP), which affords haptic training with significantly fewer resource requirements, and discuss its validation. METHODS Twelve HPs were prepared for simulated endovascular procedures. Training exercises performed by interventional neuroradiologists and novice fellows were placental angiography, stent placement, aneurysm coiling, and intravascular liquid embolic agent injection. RESULTS The endovascular training exercises proposed can be easily reproduced in the HP. Face, content, and construct validity were assessed by 6 neurointerventional radiologists and 6 novice fellows in interventional radiology. CONCLUSIONS The use of HP provides an inexpensive training model for the training of neurointerventionalists. Preliminary validation results show that this simulation model has face and content validity and has demonstrated construct validity for the interventions assessed in this study.

  20. Validation of Self-Report on Smoking among University Students in Korea

    ERIC Educational Resources Information Center

    Lee, Chung Yul; Shin, Sunmi; Lee, Hyeon Kyeong; Hong, Yoon Mi

    2009-01-01

    Objective: To validate the self-reported smoking status of Korean university students. Methods: Subjects included 322 Korean university students who participated in an annual health screening. Data on smoking were collected through a self-reported questionnaire and urine test. The data were analyzed by the McNemar test. Results: In the…

  1. Development and Validation of the Spanish-English Language Proficiency Scale (SELPS)

    ERIC Educational Resources Information Center

    Smyk, Ekaterina; Restrepo, M. Adelaida; Gorin, Joanna S.; Gray, Shelley

    2013-01-01

    Purpose: This study examined the development and validation of a criterion-referenced Spanish-English Language Proficiency Scale (SELPS) that was designed to assess the oral language skills of sequential bilingual children ages 4-8. This article reports results for the English proficiency portion of the scale. Method: The SELPS assesses syntactic…

  2. Stages of Psychometric Measure Development: The Example of the Generalized Expertise Measure (GEM)

    ERIC Educational Resources Information Center

    Germain, Marie-Line

    2006-01-01

    This paper chronicles the steps and methods of, and presents hypothetical results from, quantitative and qualitative studies being conducted to develop a Generalized Expertise Measure (GEM). Per Hinkin (1995), the stages of scale development are domain and item generation, content expert validation, and pilot test. Content/face validity and internal…

  3. In vivo measurement of aerodynamic weight support in freely flying birds

    NASA Astrophysics Data System (ADS)

    Lentink, David; Haselsteiner, Andreas; Ingersoll, Rivers

    2014-11-01

    Birds dynamically change the shape of their wings during the stroke to support their body weight aerodynamically. The wing is partially folded during the upstroke, which suggests that the upstroke of birds might not actively contribute to aerodynamic force production. This hypothesis is supported by the significant mass difference between the large pectoralis muscle that powers the downstroke and the much smaller supracoracoideus that drives the upstroke. Previous works used indirect or incomplete techniques to measure the total force generated by bird wings, ranging from muscle force, airflow and wing surface pressure measurements to detailed kinematics measurements coupled with bird mass-distribution models to derive net force through second derivatives. We have validated a new method that measures aerodynamic force in vivo, directly and time-resolved, in freely flying birds, which can resolve this question. The validation of the method, using independent force measurements on a quadcopter with pulsating thrust, shows that the aerodynamic force and impulse are measured within 2% accuracy and time-resolved. We demonstrate results for quadcopters and birds of similar weight and size. The method is scalable and can be applied to both engineered and natural flyers across taxa. The first author invented the method; the second and third authors validated the method and present results for quadcopters and birds.

  4. Validation of a two-dimensional liquid chromatography method for quality control testing of pharmaceutical materials.

    PubMed

    Yang, Samuel H; Wang, Jenny; Zhang, Kelly

    2017-04-07

    Despite the advantages of 2D-LC, there is currently little to no work demonstrating the suitability of these 2D-LC methods for use in a quality control (QC) environment for good manufacturing practice (GMP) tests. This lack of information becomes more critical as the availability of commercial 2D-LC instrumentation has significantly increased, and more testing facilities begin to acquire these 2D-LC capabilities. It is increasingly important that the transferability of developed 2D-LC methods be assessed in terms of reproducibility, robustness and performance across different laboratories worldwide. The work presented here focuses on the evaluation of a heart-cutting 2D-LC method used for the analysis of a pharmaceutical material, where a key, co-eluting impurity in the first dimension (¹D) is resolved from the main peak and analyzed in the second dimension (²D). A design-of-experiments (DOE) approach was taken in the collection of the data, and the results were then modeled in order to evaluate method robustness using statistical modeling software. This quality-by-design (QbD) approach gives a deeper understanding of the impact of these 2D-LC critical method attributes (CMAs) and how they affect overall method performance. Although there are multiple parameters that may be critical from a method development point of view, a special focus of this work is devoted to the evaluation of unique 2D-LC critical method attributes from a method validation perspective that transcend conventional method development and validation. The 2D-LC method attributes are evaluated for their recovery, peak shape, and resolution of the two co-eluting compounds in question on the ²D. In the method, linearity, accuracy, precision, repeatability, and sensitivity are assessed along with day-to-day, analyst-to-analyst, and lab-to-lab (instrument-to-instrument) assessments. The results of this validation study demonstrate that the 2D-LC method is accurate, sensitive, and robust and is ultimately suitable for QC testing with good method transferability. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. EPA (ENVIRONMENTAL PROTECTION AGENCY) METHOD STUDY 28, PCB'S (POLYCHLORINATED BIPHENYLS) IN OIL

    EPA Science Inventory

    This report describes the experimental design and the results of the validation study for two analytical methods to detect polychlorinated byphenyls in oil. The methods analyzed for four PCB Aroclors (1016, 1242, 1254, and 1260), 2-chlorobiphenyl, and decachlorobiphenyl. The firs...

  6. Sampling Frequency Optimisation and Nonlinear Distortion Mitigation in Subsampling Receiver

    NASA Astrophysics Data System (ADS)

    Castanheira, Pedro Xavier Melo Fernandes

    Subsampling receivers utilise the subsampling method to down-convert signals from radio frequency (RF) to a lower frequency location. Multiple signals can also be down-converted using the subsampling receiver, but using an incorrect subsampling frequency could result in signals aliasing one another after down-conversion. The existing method for subsampling multiband signals focused on down-converting all the signals without any aliasing between them. The case considered initially was a dual-band signal, which was then extended to a more general multiband case. In this thesis, a new method is proposed under the assumption that only one signal needs to remain free of overlap from the other multiband signals that are down-converted at the same time. The proposed method introduces formulas based on this assumption to calculate the valid subsampling frequencies, ensuring that the target signal is not aliased by the other signals. Simulation results show that the proposed method provides lower valid subsampling frequencies for down-conversion compared to the existing methods.
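
    For background, the single-band case has a well-known closed form: a band occupying [fL, fH] can be subsampled without self-aliasing whenever 2fH/n <= fs <= 2fL/(n-1) for some integer n up to floor(fH/(fH - fL)). A sketch enumerating those ranges (this is the classical condition, not the thesis's multiband formulas):

        import math

        def valid_subsampling_ranges(f_low, f_high):
            """Classical bandpass-sampling ranges avoiding self-aliasing of one band."""
            bandwidth = f_high - f_low
            ranges = []
            for n in range(1, math.floor(f_high / bandwidth) + 1):
                fs_min = 2 * f_high / n
                fs_max = 2 * f_low / (n - 1) if n > 1 else float("inf")
                if fs_min <= fs_max:
                    ranges.append((n, fs_min, fs_max))
            return ranges

        # example: a 10 MHz wide band from 70 to 80 MHz
        for n, lo, hi in valid_subsampling_ranges(70e6, 80e6):
            print(f"n={n}: fs in [{lo / 1e6:.2f}, {hi / 1e6:.2f}] MHz")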

  7. Validity of a Manual Soft Tissue Profile Prediction Method Following Mandibular Setback Osteotomy

    PubMed Central

    Kolokitha, Olga-Elpis

    2007-01-01

    Objectives: The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery, and to compare it to a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism, taken at the end of pre-surgical orthodontics and approximately one year after surgery, were used. Methods: To test the validity of the manual method the prediction tracings were compared to the actual post-operative tracings. The Dentofacial Planner software was used to develop the computerized post-surgical prediction tracings. Both manual and computerized prediction printouts were analyzed by using the cephalometric system PORDIOS. Statistical analysis was performed by means of t-test. Results: Comparison between manual prediction tracings and the actual post-operative profile showed that the manual method results in more convex soft tissue profiles; the upper lip was found in a more prominent position, upper lip thickness was increased and the mandible and lower lip were found in a less posterior position than in the actual profiles. Comparison between computerized and manual prediction methods showed that in the manual method upper lip thickness was increased, the upper lip was found in a more anterior position and the lower anterior facial height was increased as compared to the computerized prediction method. Conclusions: Cephalometric simulation of post-operative soft tissue profile following orthodontic-surgical management of mandibular prognathism imposes certain limitations related to the methods implied. However, both manual and computerized prediction methods remain a useful tool for patient communication. PMID:19212468

  8. Using Lunar Observations to Validate In-Flight Calibrations of Clouds and Earth Radiant Energy System Instruments

    NASA Technical Reports Server (NTRS)

    Daniels, Janet L.; Smith, G. Louis; Priestley, Kory J.; Thomas, Susan

    2014-01-01

    The validation of in-orbit instrument performance requires stability in both instrument and calibration source. This paper describes a method of validation using lunar observations, scanning near full moon, by the Clouds and Earth Radiant Energy System (CERES) instruments. Unlike internal calibrations, the Moon offers an external source whose signal variance is predictable and non-degrading. From 2006 to present, in-orbit observations have been standardized and compiled for Flight Models-1 and -2 aboard the Terra satellite, for Flight Models-3 and -4 aboard the Aqua satellite, and, beginning 2012, for Flight Model-5 aboard Suomi-NPP. Instrument performance parameters which can be gleaned are detector gain, pointing accuracy and static detector point response function. Lunar observations are used to examine the stability of all three detectors on each of these instruments from 2006 to present. This validation method has yielded results showing trends per CERES data channel of 1.2% per decade or less.

  9. USING CFD TO ANALYZE NUCLEAR SYSTEMS BEHAVIOR: DEFINING THE VALIDATION REQUIREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard Schultz

    2012-09-01

    A recommended protocol to formulate numeric tool specifications and validation needs in concert with practices accepted by regulatory agencies for advanced reactors is described. The protocol is based on the plant type and perceived transient and accident envelopes, which translate to boundary conditions for a process that gives: (a) the key phenomena and figures-of-merit which must be analyzed to ensure that the advanced plant can be licensed, (b) the specification of the numeric tool capabilities necessary to perform the required analyses, including bounding calculational uncertainties, and (c) the specification of the validation matrices and experiments, including the desired validation data. The result of applying the process enables a complete program to be defined, including costs, for creating and benchmarking transient and accident analysis methods for advanced reactors. By following a process that is in concert with regulatory agency licensing requirements from start to finish, based on historical acceptance of past licensing submittals, the methods derived and validated have a high probability of regulatory agency acceptance.

  10. Predicting non-familial major physical violent crime perpetration in the U.S. Army from administrative data

    PubMed Central

    Rosellini, Anthony J.; Monahan, John; Street, Amy E.; Heeringa, Steven G.; Hill, Eric D.; Petukhova, Maria; Reis, Ben Y.; Sampson, Nancy A.; Bliese, Paul; Schoenbaum, Michael; Stein, Murray B.; Ursano, Robert; Kessler, Ronald C.

    2016-01-01

    BACKGROUND Although interventions exist to reduce violent crime, optimal implementation requires accurate targeting. We report the results of an attempt to develop an actuarial model using machine learning methods to predict future violent crimes among U.S. Army soldiers. METHODS A consolidated administrative database for all 975,057 soldiers in the U.S. Army in 2004-2009 was created in the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). 5,771 of these soldiers committed a first founded major physical violent crime (murder-manslaughter, kidnapping, aggravated arson, aggravated assault, robbery) over that time period. Temporally prior administrative records measuring socio-demographic, Army career, criminal justice, medical/pharmacy, and contextual variables were used to build an actuarial model for these crimes separately among men and women using machine learning methods (cross-validated stepwise regression; random forests; penalized regressions). The model was then validated in an independent 2011-2013 sample. RESULTS Key predictors were indicators of disadvantaged social/socio-economic status, early career stage, prior crime, and mental disorder treatment. Area under the receiver operating characteristic curve was .80-.82 in 2004-2009 and .77 in a 2011-2013 validation sample. 36.2-33.1% (male-female) of all administratively-recorded crimes were committed by the 5% of soldiers having highest predicted risk in 2004-2009 and an even higher proportion (50.5%) in the 2011-2013 validation sample. CONCLUSIONS Although these results suggest that the models could be used to target soldiers at high risk of violent crime perpetration for preventive interventions, final implementation decisions would require further validation and weighing of predicted effectiveness against intervention costs and competing risks. PMID:26436603

  11. Psychometric Properties of the Persian Version of Self-Transcendence Scale: Adolescent Version

    PubMed Central

    Farahani, Azam Shirinabadi; Rassouli, Maryam; Yaghmaie, Farideh; Majd, Hamid Alavi; Sajjadi, Moosa

    2016-01-01

    Background: Given the greater tendency during adolescence toward risk-taking, identifying and measuring the factors affecting adolescents' health is highly important to ensure the efficacy of health-promoting interventions. One of these factors is self-transcendence. The aim of this study was to assess the psychometric features of the Self-Transcendence Scale (adolescents' version) in students in Tehran, the capital city of Iran. Methods: This research was conducted in 2015. For this purpose, 1210 high school students were selected through the multistage cluster sampling method. After backward-forward translation, the psychometric properties of the scale were examined through assessment of face and construct validity and of reliability (internal consistency and stability). Construct validity was assessed using two methods: factor analysis, and convergence of the scale with the Hopefulness Scale for Adolescents. Results: Face validation resulted in minor modifications to some words. The exploratory factor analysis extracted two dimensions, which collectively explained 52.79% of the variance. In determining convergent validity, the correlation between hopefulness score and self-transcendence score was r=0.47 (P<0.001). The internal consistency of the scale was determined using Cronbach's alpha, which was 0.82 for the whole scale and 0.75 and 0.70 for the two sub-scales. Stability was found to have an intraclass correlation coefficient (ICC) of 0.86 at the 95% confidence level. Conclusion: The Persian version of the Adolescents' Self-Transcendence Scale showed acceptable validity and reliability and can be used in the assessment of self-transcendence in Iranian adolescents. PMID:27218113
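
    Dimension extraction of the kind reported (two components explaining roughly half the variance) can be sketched with principal component analysis as a simple stand-in for a full exploratory factor analysis with rotation; the data below are simulated, and scikit-learn is an assumption rather than the study's software:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        n = 1210                                  # sample size as in the record
        f1, f2 = rng.normal(size=(n, 1)), rng.normal(size=(n, 1))
        X = np.hstack([
            f1 + 0.7 * rng.normal(size=(n, 4)),   # items loading on dimension 1
            f2 + 0.7 * rng.normal(size=(n, 4)),   # items loading on dimension 2
        ])

        pca = PCA(n_components=2).fit(X)
        print("cumulative variance explained:",
              round(float(pca.explained_variance_ratio_.cumsum()[-1]), 3))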

  12. The Use of Virtual Reality in the Study of People's Responses to Violent Incidents

    PubMed Central

    Rovira, Aitor; Swapp, David; Spanlang, Bernhard; Slater, Mel

    2009-01-01

    This paper reviews experimental methods for the study of the responses of people to violence in digital media, and in particular considers the issues of internal validity and ecological validity or generalisability of results to events in the real world. Experimental methods typically involve a significant level of abstraction from reality, with participants required to carry out tasks that are far removed from violence in real life, and hence their ecological validity is questionable. On the other hand studies based on field data, while having ecological validity, cannot control multiple confounding variables that may have an impact on observed results, so that their internal validity is questionable. It is argued that immersive virtual reality may provide a unification of these two approaches. Since people tend to respond realistically to situations and events that occur in virtual reality, and since virtual reality simulations can be completely controlled for experimental purposes, studies of responses to violence within virtual reality are likely to have both ecological and internal validity. This depends on a property that we call ‘plausibility’ – including the fidelity of the depicted situation with prior knowledge and expectations. We illustrate this with data from a previously published experiment, a virtual reprise of Stanley Milgram's 1960s obedience experiment, and also with pilot data from a new study being developed that looks at bystander responses to violent incidents. PMID:20076762

  14. Development and validation of a matrix solid-phase dispersion method to determine acrylamide in coffee and coffee substitutes.

    PubMed

    Soares, Cristina M Dias; Alves, Rita C; Casal, Susana; Oliveira, M Beatriz P P; Fernandes, José Oliveira

    2010-04-01

    The present study describes the development and validation of a new method based on a matrix solid-phase dispersion (MSPD) sample preparation procedure followed by GC-MS for determination of acrylamide levels in coffee (ground coffee and brewed coffee) and coffee substitute samples. Samples were dispersed in C18 sorbent and the mixture was further packed into a preconditioned custom-made ISOLUTE bilayered SPE column (C18/Multimode; 1 g + 1 g). Acrylamide was subsequently eluted with water, then derivatized with bromine and quantified by GC-MS in SIM mode. The MSPD/GC-MS method presented a LOD of 5 μg/kg and a LOQ of 10 μg/kg. Intra- and interday precisions ranged from 2% to 4% and 4% to 10%, respectively. To evaluate the performance of the method, 11 samples of ground and brewed coffee and coffee substitutes were analyzed simultaneously by the developed method and by a previously validated method based on a liquid-extraction (LE) procedure; the results showed a high correlation between the two methods.

  15. Development and Validation of Stability Indicating Spectroscopic Method for Content Analysis of Ceftriaxone Sodium in Pharmaceuticals

    PubMed Central

    Ethiraj, Revathi; Thiruvengadam, Ethiraj; Sampath, Venkattapuram Saravanan; Vahid, Abdul; Raj, Jithin

    2014-01-01

    A simple, selective, and stability-indicating spectroscopic method has been selected and validated for the assay of ceftriaxone sodium in powder for injection dosage forms. The proposed method is based on the measurement of the absorbance of ceftriaxone sodium in aqueous medium at 241 nm. The method obeys Beer's law in the range of 5–50 μg/mL with a correlation coefficient of 0.9983. The apparent molar absorptivity and Sandell's sensitivity were found to be 2.046 × 10³ L mol⁻¹ cm⁻¹ and 0.02732 μg/cm² per 0.001 absorbance unit. This study indicated that ceftriaxone sodium was degraded in acid medium and also underwent oxidative degradation. The percent relative standard deviation associated with all the validation parameters was less than 2, showing compliance with the acceptance criteria of the Q2(R1) International Conference on Harmonization (2005) guidelines. The proposed method was then successfully applied to the determination of ceftriaxone sodium in a sterile preparation, and the results were comparable with reported methods. PMID:27355020
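
    The two figures of merit quoted here follow directly from Beer's law, A = εcl: with c converted to mol/L, ε = A/(cl), and Sandell's sensitivity is MW/ε, the areal concentration giving A = 0.001 in a 1 cm cell. A worked sketch with assumed readings (not the paper's raw data):

        MW = 554.6                       # g/mol, ceftriaxone free acid (approximate)
        conc_ug_mL = 20.0                # assumed calibration concentration
        absorbance, path_cm = 0.45, 1.0  # assumed reading at 241 nm

        c_molar = conc_ug_mL * 1e-3 / MW             # ug/mL -> g/L -> mol/L
        epsilon = absorbance / (c_molar * path_cm)   # L mol^-1 cm^-1
        sandell = MW / epsilon                       # ug/cm^2 per 0.001 absorbance

        print(f"molar absorptivity = {epsilon:.3e} L/(mol cm)")
        print(f"Sandell's sensitivity = {sandell:.4f} ug/cm^2")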

  16. Validation of design procedure and performance modeling of a heat and fluid transport field experiment in the unsaturated zone

    NASA Astrophysics Data System (ADS)

    Nir, A.; Doughty, C.; Tsang, C. F.

    Validation methods that were developed in the context of the deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no attempt to validate a specific model, but several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde [26], that different constituencies have different objectives for the validation process and that their acceptance criteria therefore differ as well.

  17. Design rainfall depth estimation through two regional frequency analysis methods in Hanjiang River Basin, China

    NASA Astrophysics Data System (ADS)

    Xu, Yue-Ping; Yu, Chaofeng; Zhang, Xujie; Zhang, Qingqing; Xu, Xiao

    2012-02-01

    Hydrological predictions in ungauged basins are of significant importance for water resources management. In hydrological frequency analysis, regional methods are regarded as useful tools for estimating design rainfall/flood for areas with little data available. The purpose of this paper is to investigate the performance of two regional methods, namely Hosking's approach and the cokriging approach, in hydrological frequency analysis. These two methods are employed to estimate 24-h design rainfall depths in the Hanjiang River Basin, one of the largest tributaries of the Yangtze River, China. Validation is made by comparing the results to those calculated from the provincial handbook approach, which uses hundreds of rainfall gauge stations. Also for validation purposes, five hypothetically ungauged sites from the middle basin are chosen. The final results show that, compared to the provincial handbook approach, Hosking's approach often overestimated the 24-h design rainfall depths while the cokriging approach mostly underestimated them.
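
    Hosking's regional approach is built on L-moments. The first two sample L-moments, and the L-CV ratio used in regional homogeneity screening, follow from probability-weighted moments as sketched below with made-up 24-h rainfall depths:

        import numpy as np

        def sample_l_moments(x):
            """First two sample L-moments via unbiased probability-weighted moments."""
            x = np.sort(np.asarray(x, dtype=float))
            n = x.size
            j = np.arange(1, n + 1)
            b0 = x.mean()
            b1 = np.sum((j - 1) / (n - 1) * x) / n
            return b0, 2 * b1 - b0        # l1 (L-location), l2 (L-scale)

        rain = np.array([78.0, 95.5, 61.2, 110.3, 88.7, 70.4, 102.9, 84.1])
        l1, l2 = sample_l_moments(rain)
        print(f"L-CV = {l2 / l1:.3f}")    # dimensionless ratio compared across sites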

  18. [Method for evaluating the competence of specialists--the validation of 360-degree-questionnaire].

    PubMed

    Nørgaard, Kirsten; Pedersen, Juri; Ravn, Lisbeth; Albrecht-Beste, Elisabeth; Holck, Kim; Fredløv, Maj; Møller, Lars Krag

    2010-04-19

    Assessment of physicians' performance focuses on the quality of their work. The aim of this study was to develop a valid, usable and acceptable multisource feedback assessment tool (MFAT) for hospital consultants. Statements were produced on consultant competencies within non-medical areas such as collaboration, professionalism, communication, health promotion, academics and administration. The statements were validated by physicians and later by non-physician professionals after adjustments had been made. In a pilot test, a group of consultants was assessed using the final collection of statements of the MFAT. They received a report with their personal results and subsequently evaluated the assessment method. In total, 66 statements were developed; after validation they were reduced and reformulated to 35. Mean scores for the relevance and understandability of the statements ranged between 'very high degree' and 'high degree'. In the pilot test, 18 consultants were assessed by themselves, by 141 other physicians and by 125 other professionals in the hospital. About two thirds benefited greatly from the assessment report and half identified areas for personal development. About a third did not want the head of their department to know the assessment results directly; however, two thirds saw potential value in discussing the results with the head. We developed an MFAT for consultants with relevant and understandable statements. A pilot test confirmed that most of the consultants gained from the assessment, but some did not want to share their results with their heads. For these specialists other methods should be used.

  19. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits combining the two levels of the dual model-based architecture in one modeling framework which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for serious efforts to improve archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Development and Validation of a Monte Carlo Simulation Tool for Multi-Pinhole SPECT

    PubMed Central

    Mok, Greta S. P.; Du, Yong; Wang, Yuchuan; Frey, Eric C.; Tsui, Benjamin M. W.

    2011-01-01

    Purpose: In this work, we developed and validated a Monte Carlo simulation (MCS) tool for investigation and evaluation of multi-pinhole (MPH) SPECT imaging. Procedures: This tool was based on a combination of the SimSET and MCNP codes. Photon attenuation and scatter in the object, as well as penetration and scatter through the collimator detector, are modeled in this tool. It allows accurate and efficient simulation of MPH SPECT with focused pinhole apertures and user-specified photon energy, aperture material, and imaging geometry. The MCS method was validated by comparing the point response function (PRF), detection efficiency (DE), and image profiles obtained from point sources and phantom experiments. A prototype single-pinhole collimator and focused four- and five-pinhole collimators fitted on a small animal imager were used for the experimental validations. We have also compared computational speed among various simulation tools for MPH SPECT, including SimSET-MCNP, MCNP, SimSET-GATE, and GATE, for simulating projections of a hot sphere phantom. Results: We found good agreement between the MCS and experimental results for PRF, DE, and image profiles, indicating the validity of the simulation method. The relative computational speeds for SimSET-MCNP, MCNP, SimSET-GATE, and GATE are 1 : 2.73 : 3.54 : 7.34, respectively, for 120-view simulations. We also demonstrated the application of this MCS tool in small animal imaging by generating a set of low-noise MPH projection data of a 3D digital mouse whole body phantom. Conclusions: The new method is useful for studying MPH collimator designs, data acquisition protocols, image reconstructions, and compensation techniques. It also has great potential to be applied for modeling the collimator-detector response with penetration and scatter effects for MPH in the quantitative reconstruction method. PMID:19779896

  1. Development and validation of a food frequency questionnaire for dietary intake assessment among multi-ethnic primary school-aged children

    PubMed Central

    Fatihah, Fadil; Ng, Boon Koon; Hazwanie, Husin; Norimah, A Karim; Shanita, Safii Nik; Ruzita, Abd Talib; Poh, Bee Koon

    2015-01-01

    INTRODUCTION This study aimed to develop and validate a food frequency questionnaire (FFQ) to assess habitual diets of multi-ethnic Malaysian children aged 7–12 years. METHODS A total of 236 primary school children participated in the development of the FFQ and 209 subjects participated in the validation study, with a subsample of 30 subjects participating in the reproducibility study. The FFQ, consisting of 94 food items from 12 food groups, was compared with a three-day dietary record (3DR) as the reference method. The reproducibility of the FFQ was assessed through repeat administration (FFQ2), seven days after the first administration (FFQ1). RESULTS The results of the validation study demonstrated good acceptance of the FFQ. Mean intake of macronutrients in FFQ1 and 3DR correlated well, although the FFQ intake data tended to be higher. Cross-classification of nutrient intake between the two methods showed that < 7% of subjects were grossly misclassified. Moderate correlations noted between the two methods ranged from r = 0.310 (p < 0.001) for fat to r = 0.497 (p < 0.001) for energy. The reproducibility of the FFQ, as assessed by Cronbach’s alpha, ranged from 0.61 (protein) to 0.70 (energy, carbohydrates and fat). Spearman’s correlations between FFQ1 and FFQ2 ranged from rho = 0.333 (p = 0.072) for protein to rho = 0.479 (p < 0.01) for fat. CONCLUSION These findings indicate that the FFQ is valid and reliable for measuring the average intake of energy and macronutrients in a population of multi-ethnic children aged 7–12 years in Malaysia. PMID:26702165
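
    Gross misclassification in FFQ validation is typically checked by cross-classifying subjects into intake tertiles under both methods; a small sketch with simulated intakes standing in for the study's data:

        import numpy as np

        def tertile_cross_classification(a, b):
            """Agreement and gross misclassification between two dietary methods."""
            def tertiles(x):
                q1, q2 = np.quantile(x, [1 / 3, 2 / 3])
                return np.digitize(x, [q1, q2])        # tertile index 0, 1 or 2
            ta, tb = tertiles(np.asarray(a)), tertiles(np.asarray(b))
            same = np.mean(ta == tb)                   # same tertile in both methods
            gross = np.mean(np.abs(ta - tb) == 2)      # opposite-tertile misclassification
            return same, gross

        rng = np.random.default_rng(3)
        true_intake = rng.lognormal(7.5, 0.25, size=209)            # kcal/day, simulated
        ffq = true_intake * rng.normal(1.1, 0.15, size=209)         # FFQ reads higher
        record_3d = true_intake * rng.normal(1.0, 0.10, size=209)   # 3-day record
        same, gross = tertile_cross_classification(ffq, record_3d)
        print(f"same tertile: {same:.1%}, grossly misclassified: {gross:.1%}")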

  2. Development and full validation of an UPLC-MS/MS method for the determination of an anti-allergic indolinone derivative in rat plasma, and application to a preliminary pharmacokinetic study.

    PubMed

    Oufir, Mouhssin; Sampath, Chethan; Butterweck, Veronika; Hamburger, Matthias

    2012-08-01

    The natural product (E,Z)-3-(4-hydroxy-3,5-dimethoxybenzylidene)indolin-2-one (indolinone) was identified some years ago as a nanomolar inhibitor of FcɛRI-receptor dependent mast cell degranulation. To further explore the potential of the compound, we established a UPLC-MS/MS assay for its quantification in rat plasma. The method was fully validated according to the FDA Guidance for Industry. Results of this validation and of a long-term stability study demonstrate that the method in lithium-heparinized rat plasma is specific, accurate, precise and capable of producing reliable results according to the recommendations of international guidelines. The method was validated with an LLOQ of 30.0 ng/mL and an ULOQ of 3000 ng/mL. The response versus concentration data were fitted with a first-order polynomial with 1/x² weighting. No matrix effect was observed when using three independent sources of rat plasma. The average extraction recovery was consistent over the investigated range. This validation in rat plasma demonstrated that indolinone was stable for 190 days when stored below -65 °C, for 4 days at 10 °C in the autosampler, for 4 h at room temperature, and during three successive freeze/thaw cycles at -65 °C. Preliminary pharmacokinetic data were obtained in male Sprague-Dawley rats (2 mg/kg BW i.v.). Blood samples were collected from 0 to 12 h after injection, and the data were analyzed with WinNonlin. A short half-life (4.30±0.14 min) and a relatively high clearance (3.83±1.46 L/h/kg) were found. Copyright © 2012 Elsevier B.V. All rights reserved.
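
    The calibration model described above (first-order polynomial with 1/x² weighting) can be reproduced with a few lines of weighted least squares; the concentrations and responses below are invented standards spanning the stated LLOQ-ULOQ range, not the published data:

```python
import numpy as np

conc = np.array([30, 100, 300, 1000, 2000, 3000], float)  # ng/mL standards
resp = np.array([0.031, 0.102, 0.310, 1.05, 2.02, 3.05])  # peak-area ratios

w = 1.0 / conc**2                       # 1/x^2 weighting
X = np.vstack([conc, np.ones_like(conc)]).T
W = np.diag(w)
# Weighted least squares: solve (X'WX) beta = X'W y
slope, intercept = np.linalg.solve(X.T @ W @ X, X.T @ W @ resp)

back = (resp - intercept) / slope       # back-calculated concentrations
print(f"slope={slope:.3e}, intercept={intercept:.3e}")
print("relative error (%):", np.round(100 * (back - conc) / conc, 1))
```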

  3. Measurement properties of existing clinical assessment methods evaluating scapular positioning and function. A systematic review.

    PubMed

    Larsen, Camilla Marie; Juul-Kristensen, Birgit; Lund, Hans; Søgaard, Karen

    2014-10-01

    The aims were to compile a schematic overview of clinical scapular assessment methods and to critically appraise the methodological quality of the involved studies. A systematic, computer-assisted literature search using Medline, CINAHL, SportDiscus and EMBASE was performed from inception to October 2013. Reference lists in articles were also screened for publications. From 50 articles, 54 method names were identified and categorized into three groups: (1) static positioning assessment (n = 19); (2) semi-dynamic assessment (n = 13); and (3) dynamic functional assessment (n = 22). Fifteen studies were excluded from evaluation because they reported no or few clinimetric results, leaving 35 studies for evaluation. Graded according to the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist, the methodological quality in the reliability and validity domains was "fair" (57%) to "poor" (43%), with only one study rated as "good". The reliability domain was the most often investigated. Few of the assessment methods in the included studies with "fair" or "good" measurement property ratings demonstrated acceptable results for both reliability and validity. We found a substantially larger number of clinical scapular assessment methods than previously reported. Using the COSMIN checklist, the methodological quality of the included measurement properties in the reliability and validity domains was in general "fair" to "poor". None were examined for all three domains: (1) reliability; (2) validity; and (3) responsiveness. Observational evaluation systems and assessment of scapular upward rotation seem sufficiently evidence-based for clinical use. Future studies should test and improve the clinimetric properties, especially diagnostic accuracy and responsiveness, to increase utility for clinical practice.

  4. New clinical validation method for automated sphygmomanometer: a proposal by Japan ISO-WG for sphygmomanometer standard.

    PubMed

    Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio

    2007-12-01

    Owing to fast or stepwise cuff deflation, or to measurement at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, that is, two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods might not be sufficient to compensate for the suspect accuracy arising from lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing its immunity to LD and BPV with those of the current validation methods (methods 1 and 2). The validation accuracy of the above three methods was assessed in human participants [N = 120; age 45 ± 15.3 years (mean ± SD)]. An oscillometric automated monitor, Omron HEM-762, was used as the tested device. When compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting their higher reproducibility of validation. The SD1 obtained by method 2 correlated significantly with the participants' BP (P = 0.004), supporting our hypothesis that the increased SD of device error with method 2 is at least partially caused by essential BPV. Method 3 showed a significantly (P = 0.0044) smaller interparticipant SD of device error (SD2), suggesting its higher interparticipant consistency of validation. Among the methods of validating the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and the highest interparticipant consistency, can be proposed as the most appropriate.

  5. An empirical assessment of validation practices for molecular classifiers

    PubMed Central

    Castaldi, Peter J.; Dahabreh, Issa J.

    2011-01-01

    Proposed molecular classifiers may be overfit to idiosyncrasies of noisy genomic and proteomic data. Cross-validation methods are often used to obtain estimates of classification accuracy, but both simulations and case studies suggest that, when inappropriate methods are used, bias may ensue. Bias can be bypassed and generalizability can be tested by external (independent) validation. We evaluated 35 studies that have reported on external validation of a molecular classifier. We extracted information on study design and methodological features, and compared the performance of molecular classifiers in internal cross-validation versus external validation for 28 studies where both had been performed. We demonstrate that the majority of studies pursued cross-validation practices that are likely to overestimate classifier performance. Most studies were markedly underpowered to detect a 20% decrease in sensitivity or specificity between internal cross-validation and external validation [median power was 36% (IQR, 21–61%) and 29% (IQR, 15–65%), respectively]. The median reported classification performance for sensitivity and specificity was 94% and 98%, respectively, in cross-validation and 88% and 81% for independent validation. The relative diagnostic odds ratio was 3.26 (95% CI 2.04–5.21) for cross-validation versus independent validation. Finally, we reviewed all studies (n = 758) which cited those in our study sample, and identified only one instance of additional subsequent independent validation of these classifiers. In conclusion, these results document that many cross-validation practices employed in the literature are potentially biased and genuine progress in this field will require adoption of routine external validation of molecular classifiers, preferably in much larger studies than in current practice. PMID:21300697
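
    The bias the authors describe typically arises when feature selection is performed on the full data set before cross-validation. A minimal demonstration on pure-noise data (all names and sizes are arbitrary; with random labels the true accuracy is 50%):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))   # pure-noise "omics" matrix
y = rng.integers(0, 2, 100)        # random labels: true accuracy is 50%

# Biased: select features on ALL samples, then cross-validate the classifier
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
biased = cross_val_score(LogisticRegression(max_iter=1000), X_sel, y, cv=5).mean()

# Correct: nest the feature selection inside each training fold
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression(max_iter=1000))
nested = cross_val_score(pipe, X, y, cv=5).mean()
print(f"biased CV: {biased:.2f}  nested CV: {nested:.2f}")  # ~0.9 vs ~0.5
```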

  6. Experiences Using Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1996-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, the formal modeling provided a cost-effective enhancement of the existing verification and validation processes. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  7. Method of Analysis by the U.S. Geological Survey California District Sacramento Laboratory-- Determination of Dissolved Organic Carbon in Water by High Temperature Catalytic Oxidation, Method Validation, and Quality-Control Practices

    USGS Publications Warehouse

    Bird, Susan M.; Fram, Miranda S.; Crepeau, Kathryn L.

    2003-01-01

    An analytical method has been developed for the determination of dissolved organic carbon concentration in water samples. This report presents the results of the tests used to validate the method and the quality-control practices used for dissolved organic carbon analysis. Prior to analysis, water samples are filtered to remove suspended particulate matter. A Shimadzu TOC-5000A Total Organic Carbon Analyzer in the nonpurgeable organic carbon mode is used to analyze the samples by high temperature catalytic oxidation. The analysis usually is completed within 48 hours of sample collection. The laboratory reporting level is 0.22 milligrams per liter.

  8. Simultaneous determination of some cholesterol-lowering drugs in their binary mixture by novel spectrophotometric methods

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam Mahmoud; Hegazy, Maha Abdel Monem

    2013-09-01

    Four simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for the simultaneous determination of simvastatin (SM) and ezetimibe (EZ), namely extended ratio subtraction (EXRSM), simultaneous ratio subtraction (SRSM), ratio difference (RDSM) and absorption factor (AFM). The proposed spectrophotometric procedures do not require any preliminary separation step. The accuracy, precision and linearity ranges of the proposed methods were determined; the methods were validated and their specificity assessed by analyzing synthetic mixtures containing the cited drugs. The four methods were applied to the determination of the cited drugs in tablets, and the obtained results were statistically compared with each other and with those of a reported HPLC method. The comparison showed no significant difference between the proposed methods and the reported method regarding either accuracy or precision.

  9. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    PubMed

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
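
    A compact sketch of the two statistics discussed above — Bland-Altman bias with 95% limits of agreement, and Deming regression to separate constant from proportional error — on simulated paired assay values (the error magnitudes and the variance ratio lam are assumptions):

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements."""
    diff = a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def deming(x, y, lam=1.0):
    """Deming regression; lam = assumed ratio of the two error variances."""
    sxx = ((x - x.mean()) ** 2).sum()
    syy = ((y - y.mean()) ** 2).sum()
    sxy = ((x - x.mean()) * (y - y.mean())).sum()
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
                                       + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, y.mean() - slope * x.mean()   # slope ~ proportional error,
                                                # intercept ~ constant error

rng = np.random.default_rng(1)
truth = rng.uniform(5, 50, 40)                       # e.g., variant allele %
old = truth + rng.normal(0, 1.5, 40)                 # validated assay
new = 1.05 * truth + 0.8 + rng.normal(0, 1.5, 40)    # new assay, both errors
print(bland_altman(new, old))
print(deming(old, new))
```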

  10. Assessment of Lower Limb Muscle Strength and Power Using Hand-Held and Fixed Dynamometry: A Reliability and Validity Study

    PubMed Central

    Perraton, Luke G.; Bower, Kelly J.; Adair, Brooke; Pua, Yong-Hao; Williams, Gavin P.; McGaw, Rebekah

    2015-01-01

    Introduction Hand-held dynamometry (HHD) has never previously been used to examine isometric muscle power. Rate of force development (RFD) is often used for muscle power assessment; however, no consensus currently exists on the most appropriate method of calculation. The aim of this study was to examine the reliability of different algorithms for RFD calculation and to examine the intra-rater, inter-rater, and inter-device reliability of HHD, as well as the concurrent validity of HHD, for the assessment of isometric lower limb muscle strength and power. Methods 30 healthy young adults (age: 23 ± 5 years; 15 male) were assessed in two sessions. Isometric muscle strength and power were measured using peak force and RFD, respectively, using two HHDs (Lafayette Model-01165 and Hoggan microFET2) and a criterion-reference KinCom dynamometer. Statistical analysis of reliability and validity comprised intraclass correlation coefficients (ICC), Pearson correlations, concordance correlations, standard error of measurement, and minimal detectable change. Results Comparison of RFD methods revealed that a peak 200 ms moving window algorithm provided optimal reliability results. Intra-rater, inter-rater, and inter-device reliability analysis of peak force and RFD revealed mostly good to excellent reliability (coefficients ≥ 0.70) for all muscle groups. Concurrent validity analysis showed moderate to excellent relationships between HHD and fixed dynamometry for the hip and knee (ICCs ≥ 0.70) for both peak force and RFD, with mostly poor to good results for the ankle muscles (ICCs = 0.31–0.79). Conclusions Hand-held dynamometry has good to excellent reliability and validity for most measures of isometric lower limb strength and power in a healthy population, particularly for proximal muscle groups. To aid implementation we have created freely available software to extract these variables from data stored on the Lafayette device. Future research should examine the reliability and validity of these variables in clinical populations. PMID:26509265
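
    The RFD algorithm that performed best above (a peak 200 ms moving-window slope) reduces to a one-line computation on a sampled force trace; the sigmoidal force curve below is synthetic, and the sampling rate is an assumption:

```python
import numpy as np

def peak_rfd(force, fs, window_ms=200):
    """Peak rate of force development (N/s): the steepest average slope of
    the force trace over any window of the given width."""
    n = int(round(window_ms / 1000 * fs))
    slopes = (force[n:] - force[:-n]) * fs / n   # mean slope of each window
    return slopes.max()

fs = 1000                                   # sampling rate (Hz), assumed
t = np.arange(0, 3, 1 / fs)
force = 400 / (1 + np.exp(-8 * (t - 1)))    # synthetic sigmoidal rise to 400 N
print(f"peak 200 ms RFD ~ {peak_rfd(force, fs):.0f} N/s")   # ~760 N/s
```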

  11. Assessment of Interobserver Reliability in Nutrition Studies that Use Direct Observation of School Meals

    PubMed Central

    BAGLIO, MICHELLE L.; BAXTER, SUZANNE DOMEL; GUINN, CAROLINE H.; THOMPSON, WILLIAM O.; SHAFFER, NICOLE M.; FRYE, FRANCESCA H. A.

    2005-01-01

    This article (a) provides a general review of interobserver reliability (IOR) and (b) describes our method for assessing IOR for items and amounts consumed during school meals for a series of studies regarding the accuracy of fourth-grade children's dietary recalls validated with direct observation of school meals. A widely used validation method for dietary assessment is direct observation of meals. Although many studies utilize several people to conduct direct observations, few published studies indicate whether IOR was assessed. Assessment of IOR is necessary to determine that the information collected does not depend on who conducted the observation. Two strengths of our method for assessing IOR are that IOR was assessed regularly throughout the data collection period and that IOR was assessed for foods at the item and amount level instead of at the nutrient level. Adequate agreement among observers is essential to the reasoning behind using observation as a validation tool. Readers are encouraged to question the results of studies that fail to mention and/or to include the results for assessment of IOR when multiple people have conducted observations. PMID:15354155

  12. Validated spectrophotometric methods for simultaneous determination of troxerutin and carbazochrome in dosage form

    NASA Astrophysics Data System (ADS)

    Khattab, Fatma I.; Ramadan, Nesrin K.; Hegazy, Maha A.; Al-Ghobashy, Medhat A.; Ghoniem, Nermine S.

    2015-03-01

    Four simple, accurate, sensitive and precise spectrophotometric methods were developed and validated for the simultaneous determination of troxerutin (TXN) and carbazochrome (CZM) in their bulk powders, laboratory-prepared mixtures and pharmaceutical dosage forms. Method A is first derivative spectrophotometry (D1), in which TXN and CZM were determined at 294 and 483.5 nm, respectively. Method B is first derivative of ratio spectra (DD1), in which the peak amplitudes at 248 nm for TXN and 439 nm for CZM were used for their determination. Method C is ratio subtraction (RS), in which TXN was determined at its λmax (352 nm) in the presence of CZM, which was determined by D1 at 483.5 nm. Method D is mean centering of the ratio spectra (MCR), in which the mean-centered values at 300 nm and 340 nm were used for the two drugs, respectively. The two compounds were simultaneously determined in the concentration ranges of 5.00-50.00 μg mL-1 for TXN and 0.5-10.0 μg mL-1 for CZM. The methods were validated according to the ICH guidelines and the results were statistically compared to the manufacturer's method.

  13. Panel cutting method: new approach to generate panels on a hull in Rankine source potential approximation

    NASA Astrophysics Data System (ADS)

    Choi, Hee-Jong; Chun, Ho-Hwan; Park, Il-Ryong; Kim, Jin

    2011-12-01

    In the present study, a new hull panel generation algorithm, namely the panel cutting method, was developed to predict the flow phenomena around a ship using a Rankine source potential-based panel method, in which an iterative procedure was used to satisfy the nonlinear free surface condition, and the trim and sinkage of the ship were taken into account. Numerical computations were performed to investigate the validity of the proposed hull panel generation algorithm for the Series 60 (CB = 0.60) hull and the KRISO container ship (KCS), a container ship designed by the Maritime and Ocean Engineering Research Institute (MOERI). The computational results were validated by comparison with existing experimental data.

  14. Validation of Broadband Ground Motion Simulations for Japanese Crustal Earthquakes by the Recipe

    NASA Astrophysics Data System (ADS)

    Iwaki, A.; Maeda, T.; Morikawa, N.; Miyake, H.; Fujiwara, H.

    2015-12-01

    The Headquarters for Earthquake Research Promotion (HERP) of Japan has organized the broadband ground motion simulation method into a standard procedure called the "recipe" (HERP, 2009). In the recipe, the source rupture is represented by the characterized source model (Irikura and Miyake, 2011). The broadband ground motion time histories are computed by a hybrid approach: the 3-D finite-difference method (Aoi et al. 2004) and the stochastic Green's function method (Dan and Sato, 1998; Dan et al. 2000) for the long- (> 1 s) and short-period (< 1 s) components, respectively, using the 3-D velocity structure model. As the engineering significance of scenario earthquake ground motion prediction is increasing, thorough verification and validation are required for the simulation methods. This study presents the self-validation of the recipe for two MW6.6 crustal events in Japan, the 2000 Tottori and 2004 Chuetsu (Niigata) earthquakes. We first compare the simulated velocity time series with the observations. Main features of the velocity waveforms, such as the near-fault pulses and the large later phases at deep sediment sites, are well reproduced by the simulations. Then we evaluate 5% damped pseudo-acceleration spectra (PSA) in the framework of the SCEC Broadband Platform (BBP) validation (Dreger et al. 2015). The validation results are generally acceptable in the period range 0.1–10 s, whereas those in the shortest period range (0.01–0.1 s) are less satisfactory. We also evaluate the simulations with the 1-D velocity structure models used in the SCEC BBP validation exercise. Although the goodness-of-fit parameters for PSA do not differ significantly from those for the 3-D velocity structure model, noticeable differences in the velocity waveforms are observed. Our results suggest the importance of (1) a well-constrained 3-D velocity structure model for broadband ground motion simulations and (2) evaluation of time series of ground motion as well as response spectra.

  15. Hyperspectral Image Classification for Land Cover Based on an Improved Interval Type-II Fuzzy C-Means Approach

    PubMed Central

    Li, Zhao-Liang

    2018-01-01

    Few studies have examined hyperspectral remote-sensing image classification with type-II fuzzy sets. This paper addresses image classification based on a hyperspectral remote-sensing technique using an improved interval type-II fuzzy c-means (IT2FCM*) approach. In this study, in contrast to other traditional fuzzy c-means-based approaches, the IT2FCM* algorithm considers the ranking of interval numbers and the spectral uncertainty. The classification results on a hyperspectral dataset using the FCM, IT2FCM, and the proposed improved IT2FCM* algorithms show that the IT2FCM* method achieves the best performance in terms of clustering accuracy. In order to validate and demonstrate the separability of the IT2FCM*, four type-I fuzzy validity indexes are employed, and a comparative analysis of these fuzzy validity indexes as applied to the FCM and IT2FCM methods is made. These four indexes are also applied to datasets of different spatial and spectral resolution to analyze the effects of spectral and spatial scaling factors on the separability of the FCM, IT2FCM, and IT2FCM* methods. The results of these validity indexes on the hyperspectral datasets show that the improved IT2FCM* algorithm has the best values among the three algorithms in general. The results demonstrate that the IT2FCM* exhibits good performance in hyperspectral remote-sensing image classification because of its ability to handle hyperspectral uncertainty. PMID:29373548
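
    For orientation, the sketch below implements plain type-I fuzzy c-means, the baseline that IT2FCM and the proposed IT2FCM* extend with interval (type-II) memberships; the data are synthetic "spectral" clusters and all parameters are illustrative:

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Plain (type-I) fuzzy c-means; IT2FCM/IT2FCM* extend this with
    interval-valued memberships to model spectral uncertainty."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # n x c memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)        # standard FCM update
    return U, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.3, (50, 4)) for mu in (0.0, 1.5, 3.0)])
U, centers = fcm(X)
print(U.argmax(axis=1)[:10], centers.round(2))
```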

  16. [Finite Element Modelling of the Eye for the Investigation of Accommodation].

    PubMed

    Martin, H; Stachs, O; Guthoff, R; Grabow, N

    2016-12-01

    Background: Accommodation research increasingly uses engineering methods. This article presents the use of the finite element method in accommodation research. Material and Methods: Geometry, material data and boundary conditions are prerequisites for the application of the finite element method. Published data on geometry and materials are reviewed. It is shown how boundary conditions are important and how they influence the results. Results: Two dimensional and three dimensional models of the anterior chamber of the eye are presented. With simple two dimensional models, it is shown that realistic results for the accommodation amplitude can always be achieved. More complex three dimensional models of the accommodation mechanism - including the ciliary muscle - require further investigations of the material data and of the morphology of the ciliary muscle, if they are to achieve realistic results for accommodation. Discussion and Conclusion: The efficiency and the limitations of the finite element method are especially clear for accommodation. Application of the method requires extensive preparation, including acquisition of geometric and material data and experimental validation. However, a validated model can be used as a basis for parametric studies, by systematically varying material data and geometric dimensions. This allows systematic investigation of how essential input parameters influence the results. Georg Thieme Verlag KG Stuttgart · New York.

  17. Using cluster ensemble and validation to identify subtypes of pervasive developmental disorders.

    PubMed

    Shen, Jess J; Lee, Phil-Hyoun; Holden, Jeanette J A; Shatkay, Hagit

    2007-10-11

    Pervasive Developmental Disorders (PDD) are neurodevelopmental disorders characterized by impairments in social interaction, communication and behavior. Given the diversity and varying severity of PDD, diagnostic tools attempt to identify homogeneous subtypes within PDD. Identifying subtypes can lead to targeted etiology studies and to effective type-specific intervention. Cluster analysis can suggest coherent subsets in data; however, different methods and assumptions lead to different results. Several previous studies applied clustering to PDD data, varying in number and characteristics of the produced subtypes. Most studies used a relatively small dataset (fewer than 150 subjects), and all applied only a single clustering method. Here we study a relatively large dataset (358 PDD patients), using an ensemble of three clustering methods. The results are evaluated using several validation methods, and consolidated through an integration step. Four clusters are identified, analyzed and compared to subtypes previously defined by the widely used diagnostic tool DSM-IV.

  18. Using Cluster Ensemble and Validation to Identify Subtypes of Pervasive Developmental Disorders

    PubMed Central

    Shen, Jess J.; Lee, Phil Hyoun; Holden, Jeanette J.A.; Shatkay, Hagit

    2007-01-01

    Pervasive Developmental Disorders (PDD) are neurodevelopmental disorders characterized by impairments in social interaction, communication and behavior. Given the diversity and varying severity of PDD, diagnostic tools attempt to identify homogeneous subtypes within PDD. Identifying subtypes can lead to targeted etiology studies and to effective type-specific intervention. Cluster analysis can suggest coherent subsets in data; however, different methods and assumptions lead to different results. Several previous studies applied clustering to PDD data, varying in number and characteristics of the produced subtypes. Most studies used a relatively small dataset (fewer than 150 subjects), and all applied only a single clustering method. Here we study a relatively large dataset (358 PDD patients), using an ensemble of three clustering methods. The results are evaluated using several validation methods, and consolidated through an integration step. Four clusters are identified, analyzed and compared to subtypes previously defined by the widely used diagnostic tool DSM-IV. PMID:18693920
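
    One common way to realize the ensemble-plus-integration step described above is a co-association (consensus) matrix built from several base clusterings, then clustered again; the sketch below uses synthetic data and an internal validity index, not the PDD data set (on scikit-learn < 1.2 the precomputed-distance argument is named affinity rather than metric):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans, SpectralClustering
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=358, centers=4, n_features=6, random_state=0)

labelings = [
    KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X),
    AgglomerativeClustering(n_clusters=4).fit_predict(X),
    SpectralClustering(n_clusters=4, random_state=0).fit_predict(X),
]

# Integration step: co-association matrix = fraction of methods that put
# each pair of subjects in the same cluster; then cluster its complement.
co = sum((lab[:, None] == lab[None, :]).astype(float) for lab in labelings) / 3
consensus = AgglomerativeClustering(
    n_clusters=4, metric="precomputed", linkage="average"
).fit_predict(1 - co)

print("silhouette of consensus:", round(silhouette_score(X, consensus), 2))
```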

  19. Methods for Geometric Data Validation of 3d City Models

    NASA Astrophysics Data System (ADS)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data have a far wider range of aspects which influence their quality, and the idea of quality itself is application-dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point is correct geometry in accordance with ISO 19107: a valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges, with no gaps between them, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon-level checks to validate the correctness of each polygon, i.e. closure of the bounding linear ring and planarity. On the solid level, which is validated only if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges are detected, among additional criteria. Self-intersection might produce different results, e.g. intersection points, lines or areas. Depending on the geometric constellation, these might represent gaps between bounding polygons of the solids, overlaps, or violations of 2-manifoldness. Not least because of floating-point effects in digital numbers, tolerances must be considered in some algorithms, e.g. planarity and solid self-intersection. The effects of different tolerance values and their handling are discussed, and recommendations for suitable values are given. The goal of the paper is to give a clear understanding of geometric validation in the context of 3D city models. This should also enable the data holder to get a better comprehension of the validation results and their consequences for the deployment fields of the validated data set.
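
    The polygon-level planarity check discussed above can be phrased as a best-fit-plane test with an explicit tolerance; the sketch below fits the plane by SVD, and the vertex coordinates and tolerance value are illustrative assumptions (the paper discusses suitable tolerances):

```python
import numpy as np

def planarity_deviation(vertices):
    """Max distance of polygon vertices from their least-squares plane."""
    P = np.asarray(vertices, float)
    centered = P - P.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)   # last right-singular vector = normal
    return np.abs(centered @ vt[-1]).max()

roof = [(0, 0, 0.0), (4, 0, 0.001), (4, 3, 0.0), (0, 3, 0.02)]  # metres
tol = 0.01  # tolerance in metres; suitable values are discussed in the paper
print("planar within tolerance:", planarity_deviation(roof) <= tol)
```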

  20. Development and Content Validation of the Transition Readiness Inventory Item Pool for Adolescent and Young Adult Survivors of Childhood Cancer.

    PubMed

    Schwartz, Lisa A; Hamilton, Jessica L; Brumley, Lauren D; Barakat, Lamia P; Deatrick, Janet A; Szalda, Dava E; Bevans, Katherine B; Tucker, Carole A; Daniel, Lauren C; Butler, Eliana; Kazak, Anne E; Hobbie, Wendy L; Ginsberg, Jill P; Psihogios, Alexandra M; Ver Hoeve, Elizabeth; Tuchman, Lisa K

    2017-10-01

    The development of the Transition Readiness Inventory (TRI) item pool for adolescent and young adult childhood cancer survivors is described, aiming both to advance transition research and to provide an example of the application of NIH Patient Reported Outcomes Information System methods. Using rigorous measurement development methods, including mixed methods, patient and parent versions of the TRI item pool were created based on the Social-ecological Model of Adolescent and young adult Readiness for Transition (SMART). Each stage informed the development and refinement of the item pool. Content validity ratings and cognitive interviews resulted in 81 content-valid items for the patient version and 85 items for the parent version. TRI represents the first multi-informant, rigorously developed transition readiness item pool that comprehensively measures the social-ecological components of transition readiness. The discussion includes clinical implications, the application of the TRI and its development methods to other populations, and next steps for further validation and refinement. © The Author 2017. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  1. Development and validation of RP HPLC method to determine nandrolone phenylpropionate in different pharmaceutical formulations.

    PubMed

    Mukherjee, Jayanti; Das, Ayan; Chakrabarty, Uday Sankar; Sahoo, Bijay Kumar; Dey, Goutam; Choudhury, Hira; Pal, Tapan Kumar

    2011-01-01

    This study describes the development and subsequent validation of a reversed phase high performance liquid chromatographic (RP-HPLC) method for the estimation of nandrolone phenylpropionate, an anabolic steroid, in bulk drug, in a conventional parenteral dosage formulation and in a prepared nanoparticle dosage form. The chromatographic system consisted of a Luna Phenomenex CN (250 mm × 4.6 mm, 5 µm) column, an isocratic mobile phase comprising 10 mM phosphate buffer and acetonitrile (50:50, v/v), and UV detection at 240 nm. Nandrolone phenylpropionate eluted at about 6.3 min with no interfering peaks from the excipients used for the preparation of the dosage forms. The method was linear over the range 0.050 to 25 µg/mL in raw drug (r² = 0.9994). The intra-day and inter-day precision values were in the ranges of 0.219-0.609% and 0.441-0.875%, respectively. The limits of detection and quantitation were 0.010 µg/mL and 0.050 µg/mL, respectively. The results were validated according to International Conference on Harmonization (ICH) guidelines for the parenteral and prepared nanoparticle formulations. The validated HPLC method is simple, sensitive, precise, accurate and reproducible.

  2. Experiences Using Lightweight Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1997-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  3. Development of a Research Methods and Statistics Concept Inventory

    ERIC Educational Resources Information Center

    Veilleux, Jennifer C.; Chapman, Kate M.

    2017-01-01

    Research methods and statistics are core courses in the undergraduate psychology major. To assess learning outcomes, it would be useful to have a measure that assesses research methods and statistical literacy beyond course grades. In two studies, we developed and provided initial validation results for a research methods and statistical knowledge…

  4. Automated Gait Analysis Through Hues and Areas (AGATHA): a method to characterize the spatiotemporal pattern of rat gait

    PubMed Central

    Kloefkorn, Heidi E.; Pettengill, Travis R.; Turner, Sara M. F.; Streeter, Kristi A.; Gonzalez-Rothi, Elisa J.; Fuller, David D.; Allen, Kyle D.

    2016-01-01

    While rodent gait analysis can quantify the behavioral consequences of disease, significant methodological differences exist between analysis platforms and little validation has been performed to understand or mitigate these sources of variance. By providing the algorithms used to quantify gait, open-source gait analysis software can be validated and used to explore methodological differences. Our group is introducing, for the first time, a fully-automated, open-source method for the characterization of rodent spatiotemporal gait patterns, termed Automated Gait Analysis Through Hues and Areas (AGATHA). This study describes how AGATHA identifies gait events, validates AGATHA relative to manual digitization methods, and utilizes AGATHA to detect gait compensations in orthopaedic and spinal cord injury models. To validate AGATHA against manual digitization, results from videos of rodent gait, recorded at 1000 frames per second (fps), were compared. To assess one common source of variance (the effects of video frame rate), these 1000 fps videos were re-sampled to mimic several lower fps and compared again. While spatial variables were indistinguishable between AGATHA and manual digitization, low video frame rates resulted in temporal errors for both methods. At frame rates over 125 fps, AGATHA achieved a comparable accuracy and precision to manual digitization for all gait variables. Moreover, AGATHA detected unique gait changes in each injury model. These data demonstrate AGATHA is an accurate and precise platform for the analysis of rodent spatiotemporal gait patterns. PMID:27554674

  5. Automated Gait Analysis Through Hues and Areas (AGATHA): A Method to Characterize the Spatiotemporal Pattern of Rat Gait.

    PubMed

    Kloefkorn, Heidi E; Pettengill, Travis R; Turner, Sara M F; Streeter, Kristi A; Gonzalez-Rothi, Elisa J; Fuller, David D; Allen, Kyle D

    2017-03-01

    While rodent gait analysis can quantify the behavioral consequences of disease, significant methodological differences exist between analysis platforms and little validation has been performed to understand or mitigate these sources of variance. By providing the algorithms used to quantify gait, open-source gait analysis software can be validated and used to explore methodological differences. Our group is introducing, for the first time, a fully-automated, open-source method for the characterization of rodent spatiotemporal gait patterns, termed Automated Gait Analysis Through Hues and Areas (AGATHA). This study describes how AGATHA identifies gait events, validates AGATHA relative to manual digitization methods, and utilizes AGATHA to detect gait compensations in orthopaedic and spinal cord injury models. To validate AGATHA against manual digitization, results from videos of rodent gait, recorded at 1000 frames per second (fps), were compared. To assess one common source of variance (the effects of video frame rate), these 1000 fps videos were re-sampled to mimic several lower fps and compared again. While spatial variables were indistinguishable between AGATHA and manual digitization, low video frame rates resulted in temporal errors for both methods. At frame rates over 125 fps, AGATHA achieved a comparable accuracy and precision to manual digitization for all gait variables. Moreover, AGATHA detected unique gait changes in each injury model. These data demonstrate AGATHA is an accurate and precise platform for the analysis of rodent spatiotemporal gait patterns.
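
    The frame-rate effect reported above (temporal errors below 125 fps) is easy to reproduce: down-sample a high-speed contact signal and recompute a temporal gait variable. The stance interval below is synthetic, and the helper is an illustrative assumption, not AGATHA's algorithm:

```python
import numpy as np

def stance_time(contact, fps):
    """Duration (s) of the longest contiguous paw-contact run."""
    idx = np.flatnonzero(contact)
    runs = np.split(idx, np.where(np.diff(idx) > 1)[0] + 1)
    return max(len(r) for r in runs) / fps

fps_hi = 1000
t = np.arange(0, 1, 1 / fps_hi)
contact_hi = (t >= 0.2034) & (t <= 0.4187)      # "true" stance ~215 ms

for fps in (1000, 250, 125, 50, 25):
    step = fps_hi // fps                         # mimic a lower frame rate
    st = stance_time(contact_hi[::step], fps)
    print(f"{fps:>4} fps: stance = {st * 1000:6.1f} ms")
```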

  6. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions.

    PubMed

    Belal, Tarek S; El-Kafrawy, Dina S; Mahrous, Mohamed S; Abdel-Khalek, Magdi M; Abo-Gharam, Amira H

    2016-02-15

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions

    NASA Astrophysics Data System (ADS)

    Belal, Tarek S.; El-Kafrawy, Dina S.; Mahrous, Mohamed S.; Abdel-Khalek, Magdi M.; Abo-Gharam, Amira H.

    2016-02-01

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method.

  8. Consortium on Methods Evaluating Tobacco: Research Tools to Inform FDA Regulation of Snus.

    PubMed

    Berman, Micah L; Bickel, Warren K; Harris, Andrew C; LeSage, Mark G; O'Connor, Richard J; Stepanov, Irina; Shields, Peter G; Hatsukami, Dorothy K

    2017-10-04

    The U.S. Food and Drug Administration (FDA) has purview over tobacco products. To set policy, the FDA must rely on sound science, yet most existing tobacco research methods have not been designed specifically to inform regulation. The NCI- and FDA-funded Consortium on Methods Evaluating Tobacco (COMET) was established to develop and assess valid and reliable methods for tobacco product evaluation. The goal of this paper is to describe these assessment methods using a U.S.-manufactured "snus" as the test product. In designing studies that could inform FDA regulation, COMET has taken a multidisciplinary approach that includes experimental animal models and a range of human studies examining tobacco product appeal, addictiveness, and toxicity. This paper integrates COMET's findings over the last 4 years. Consistency in results was observed across the various studies, lending validity to our methods. Studies showed low abuse liability for snus and low levels of consumer demand. Toxicity was lower than that of cigarettes on some biomarkers but higher than that of medicinal nicotine. Given our study methods and the convergence of results, the snus that we tested as a potential modified risk tobacco product is likely to result in neither substantial public health harm nor benefit. This review describes the methods that were used to assess the appeal, abuse liability, and toxicity of snus. These methods included animal, behavioral economics, and consumer perception studies, and clinical trials. Across these varied methods, study results showed low abuse liability and appeal of the snus product we tested. In several studies, demand for snus was lower than for less toxic nicotine gum. The consistency and convergence of results across a range of multidisciplinary studies lend validity to our methods and suggest that promotion of snus as a modified risk tobacco product is unlikely to produce substantial public health benefit or harm. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. A Critical Review of Methods to Evaluate the Impact of FDA Regulatory Actions

    PubMed Central

    Briesacher, Becky A.; Soumerai, Stephen B.; Zhang, Fang; Toh, Sengwee; Andrade, Susan E.; Wagner, Joann L.; Shoaibi, Azadeh; Gurwitz, Jerry H.

    2013-01-01

    Purpose To conduct a synthesis of the literature on methods to evaluate the impacts of FDA regulatory actions, and identify best practices for future evaluations. Methods We searched MEDLINE for manuscripts published between January 1948 and August 2011 that included terms related to FDA, regulatory actions, and empirical evaluation; the review additionally included FDA-identified literature. We used a modified Delphi method to identify preferred methodologies. We included studies with explicit methods to address threats to validity, and identified designs and analytic methods with strong internal validity that have been applied to other policy evaluations. Results We included 18 studies out of 243 abstracts and papers screened. Overall, analytic rigor in prior evaluations of FDA regulatory actions varied considerably; less than a quarter of studies (22%) included control groups. Only 56% assessed changes in the use of substitute products/services, and 11% examined patient health outcomes. Among studies meeting minimal criteria of rigor, 50% found no impact or weak/modest impacts of FDA actions and 33% detected unintended consequences. Among those studies finding significant intended effects of FDA actions, all cited the importance of intensive communication efforts. There are preferred methods with strong internal validity that have yet to be applied to evaluations of FDA regulatory actions. Conclusions Rigorous evaluations of the impact of FDA regulatory actions have been limited and infrequent. Several methods with strong internal validity are available to improve trustworthiness of future evaluations of FDA policies. PMID:23847020

  10. Validation of an Empathy Scale in Pharmacy and Nursing Students

    PubMed Central

    Chen, Aleda M. H.; Yehle, Karen S.; Plake, Kimberly S.

    2013-01-01

    Objective. To validate an empathy scale for measuring empathy in pharmacy and nursing students. Methods. A 15-item instrument comprising cognitive and affective empathy domains was created. Each item was rated on a 7-point Likert scale ranging from strongly disagree to strongly agree. Concurrent validity was demonstrated with the Jefferson Scale of Empathy – Health Professional Students (JSE-HPS). Results. Reliability analysis of data from 216 students (pharmacy, N=158; nursing, N=58) showed that scores on the empathy scale were positively associated with JSE-HPS scores (p<0.001). Factor analysis confirmed that 14 of the 15 items were significantly associated with their respective domains, but the overall instrument had limited goodness of fit. Conclusions. Results of this study demonstrate the reliability and validity of a new scale for evaluating student empathy. Further testing of the scale at other universities is needed to establish validity. PMID:23788805

  11. PSI-Center Simulations of Validation Platform Experiments

    NASA Astrophysics Data System (ADS)

    Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.

    2013-10-01

    The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with extended MHD simulations. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), FRX-L (Los Alamos National Laboratory), HIT-SI (U Wash - UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), PHD/ELF (UW/MSNW), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). Modifications have been made to the NIMROD, HiFi, and PSI-Tet codes to specifically model these experiments, including mesh generation/refinement, non-local closures, appropriate boundary conditions (external fields, insulating BCs, etc.), and kinetic and neutral particle interactions. The PSI-Center is exploring the application of validation metrics between experimental data and simulation results. Biorthogonal decomposition is proving to be a powerful method for comparing global temporal and spatial structures for validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status, will be presented.
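
    Biorthogonal decomposition of a space × time data matrix, as mentioned above for validation metrics, is numerically an SVD into spatial "topos" and temporal "chronos" modes; the probe layout, time base, and mode shapes below are invented for illustration:

```python
import numpy as np

def bod(data):
    """Biorthogonal decomposition of a (space x time) signal matrix via SVD:
    data = sum_k sigma_k * topo_k (outer) chrono_k."""
    topos, sigma, chronos = np.linalg.svd(data, full_matrices=False)
    return topos, sigma, chronos

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)          # e.g., probe positions
t = np.linspace(0, 1e-3, 400)              # time base (s)
signal = (np.outer(np.sin(x), np.cos(2 * np.pi * 5e3 * t))   # coherent mode
          + 0.1 * rng.normal(size=(64, 400)))                # noise

topos, sigma, chronos = bod(signal)
energy = sigma**2 / (sigma**2).sum()
print("energy fraction of first mode:", energy[0].round(3))
# Validation idea: compare dominant topos/chronos between simulation and
# experiment rather than raw time traces.
```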

  12. [Research on the reliability and validity of postural workload assessment method and the relation to work-related musculoskeletal disorders of workers].

    PubMed

    Qin, D L; Jin, X N; Wang, S J; Wang, J J; Mamat, N; Wang, F J; Wang, Y; Shen, Z A; Sheng, L G; Forsman, M; Yang, L Y; Wang, S; Zhang, Z B; He, L H

    2018-06-18

    To develop a new method for the comprehensive assessment of postural workload, analyzing both the dynamic and static postural workload of workers during their work process; to analyze its reliability and validity; and to study the relation between workers' postural workload and work-related musculoskeletal disorders (WMSDs). In the study, 844 workers from electronic and railway vehicle manufacturing factories were selected as subjects and investigated using the China Musculoskeletal Questionnaire (CMQ) to form the postural workload comprehensive assessment method. Cronbach's α, cluster analysis and factor analysis were used to assess the reliability and validity of the new assessment method. Non-conditional logistic regression was used to analyze the relation between workers' postural workload and WMSDs. Reliability of the assessment method: internal consistency analysis showed a Cronbach's α of 0.934; split-half reliability gave a Spearman-Brown coefficient of 0.881, with a correlation coefficient of 0.787 between the two halves. Validity of the assessment method: cluster analysis indicated that the squared Euclidean distance between the dynamic and static workload assessments of the same body part or work posture was the shortest. Factor analysis extracted 2 components with a cumulative percentage of variance of 65.604%. Postural workload scores of workers in different occupations differed significantly (P<0.05) by covariance analysis. Non-conditional logistic regression indicated that alcohol intake (OR=2.141, 95%CI 1.337-3.428) and obesity (OR=3.408, 95%CI 1.629-7.130) were risk factors for WMSDs, and the risk of WMSDs rose as workers' postural workload rose (OR=1.035, 95%CI 1.022-1.048). The risk of WMSDs differed significantly across groups of workers distinguished by work type, gender and age: female workers exhibited a higher prevalence of WMSDs (OR=2.626, 95%CI 1.414-4.879), as did workers aged 30-40 years (OR=1.909, 95%CI 1.237-2.946) compared with those under 30. This method for comprehensively assessing postural workload is reliable and effective when used with assembly workers, and there is a relation between postural workload and WMSDs.
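
    The two reliability statistics reported above follow directly from their definitions; the sketch below computes Cronbach's α and the Spearman-Brown split-half coefficient on a simulated item matrix (the item count, sample size, and noise level are assumptions):

```python
import numpy as np

def cronbach_alpha(items):
    """items: n_subjects x k_items score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def split_half(items):
    """Spearman-Brown coefficient from an odd/even item split."""
    a = items[:, 0::2].sum(axis=1)
    b = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(a, b)[0, 1]
    return 2 * r / (1 + r)

rng = np.random.default_rng(0)
trait = rng.normal(size=844)                            # latent workload score
items = trait[:, None] + rng.normal(0, 0.8, (844, 12))  # 12 questionnaire items
print(round(cronbach_alpha(items), 3), round(split_half(items), 3))
```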

  13. Establishing high resolution melting analysis: method validation and evaluation for c-RET proto-oncogene mutation screening.

    PubMed

    Benej, Martin; Bendlova, Bela; Vaclavikova, Eliska; Poturnajova, Martina

    2011-10-06

    Reliable and effective primary screening of mutation carriers is the key condition for common diagnostic use. The objective of this study is to validate high resolution melting (HRM) analysis for routine primary mutation screening and to accomplish its optimization and evaluation. Due to their heterozygous nature, germline point mutations of the c-RET proto-oncogene, associated with multiple endocrine neoplasia type 2 (MEN2), are suitable for HRM analysis. Early identification of mutation carriers has a major impact on patients' survival due to the early onset of medullary thyroid carcinoma (MTC) and its resistance to conventional therapy. The authors performed a series of validation assays according to the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) guidelines for validation of analytical procedures, along with appropriate design and optimization experiments. After this validated evaluation of HRM, the method was utilized for primary screening of 28 pathogenic c-RET mutations distributed among nine exons of the c-RET gene. The validation experiments confirm the repeatability, robustness, accuracy and reproducibility of HRM. All pathogenic c-RET gene variants were detected with no occurrence of false-positive or false-negative results. The data provide basic information about the design, establishment and validation of HRM for primary screening of genetic variants in order to distinguish heterozygous point mutation carriers among wild-type sequence carriers. HRM analysis is a powerful and reliable tool for rapid and cost-effective primary screening, e.g., of c-RET gene germline and/or sporadic mutations, and can be used as a first-line potential diagnostic tool.

  14. Laser ultrasonics for measurements of high-temperature elastic properties and internal temperature distribution

    NASA Astrophysics Data System (ADS)

    Matsumoto, Takahiro; Nagata, Yasuaki; Nose, Tetsuro; Kawashima, Katsuhiro

    2001-06-01

    We present two demonstrations of a laser ultrasonic method. First, we present results for the Young's modulus of ceramics at temperatures above 1600 °C. Second, we introduce a method to determine the internal temperature distribution of a hot steel plate with errors of less than 3%. We compare the results obtained by this laser ultrasonic method with those of conventional contact techniques to show the validity of the method.
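
    For context on the first demonstration, Young's modulus follows from measured ultrasonic wave speeds and density via the standard isotropic-elasticity relation; the numbers below are assumed, room-temperature ceramic values, not the paper's high-temperature measurements:

```python
def young_modulus(rho, v_l, v_s):
    """Young's modulus (Pa) of an isotropic solid from longitudinal and
    shear wave speeds (m/s): E = rho*v_s^2*(3*v_l^2 - 4*v_s^2)/(v_l^2 - v_s^2)."""
    return rho * v_s**2 * (3 * v_l**2 - 4 * v_s**2) / (v_l**2 - v_s**2)

# Illustrative (assumed) values for a dense ceramic at room temperature:
rho, v_l, v_s = 3900.0, 10000.0, 6000.0   # kg/m^3, m/s, m/s
print(f"E ~ {young_modulus(rho, v_l, v_s) / 1e9:.0f} GPa")   # ~342 GPa
```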

  15. Revealed and stated preference valuation and transfer: A within-sample comparison of water quality improvement values

    NASA Astrophysics Data System (ADS)

    Ferrini, Silvia; Schaafsma, Marije; Bateman, Ian

    2014-06-01

    Benefit transfer (BT) methods are becoming increasingly important for environmental policy, but the empirical findings regarding transfer validity are mixed. A novel valuation survey was designed to obtain both stated preference (SP) and revealed preference (RP) data concerning river water quality values from a large sample of households. Both dichotomous choice and payment card contingent valuation (CV) and travel cost (TC) data were collected. Resulting valuations were directly compared and used for BT analyses using both unit value and function transfer approaches. WTP estimates are found to pass the convergence validity test. BT results show that the CV data produce lower transfer errors, below 20% for both unit value and function transfer, than TC data especially when using function transfer. Further, comparison of WTP estimates suggests that in all cases, differences between methods are larger than differences between study areas. Results show that when multiple studies are available, using welfare estimates from the same area but based on a different method consistently results in larger errors than transfers across space keeping the method constant.
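
    The transfer errors quoted above are conventionally computed as the absolute relative difference between the transferred and locally estimated welfare measure; a minimal sketch with invented WTP values:

```python
def transfer_error(wtp_transferred, wtp_observed):
    """Benefit-transfer error (%) at the policy site."""
    return abs(wtp_transferred - wtp_observed) / wtp_observed * 100

# Hypothetical mean WTP (per household per year) for one water-quality step:
print(f"{transfer_error(18.0, 21.5):.1f}% error")  # unit value transfer
```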

  16. Alternative Methods of Accounting for Underreporting and Overreporting When Measuring Dietary Intake-Obesity Relations

    PubMed Central

    Mendez, Michelle A.; Popkin, Barry M.; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R.; Sánchez, María-José; González, Carlos A

    2011-01-01

    Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation Into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29–65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = −0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes. PMID:21242302

  17. Alternative methods of accounting for underreporting and overreporting when measuring dietary intake-obesity relations.

    PubMed

    Mendez, Michelle A; Popkin, Barry M; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R; Sánchez, María-José; González, Carlos A

    2011-02-15

    Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation Into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29-65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = -0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes.
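
    The Goldberg approach referred to in both records compares the ratio of reported energy intake to basal metabolic rate against confidence limits around an assumed physical activity level. The sketch below deliberately collapses the published method's several variance components into one pooled coefficient of variation; all constants and names are illustrative.

```python
import math

def goldberg_flag(ei_kcal, bmr_kcal, pal=1.55, pooled_cv=0.30, z=2.0):
    """Simplified Goldberg check: compare reported energy intake over
    basal metabolic rate (EI/BMR) against confidence limits around an
    assumed physical activity level (PAL). pooled_cv stands in for the
    pooled variance components of the published method."""
    lower = pal * math.exp(-z * pooled_cv)  # below this -> under-reporter
    upper = pal * math.exp(+z * pooled_cv)  # above this -> over-reporter
    ratio = ei_kcal / bmr_kcal
    if ratio < lower:
        return "under-reporter"
    if ratio > upper:
        return "over-reporter"
    return "plausible"

print(goldberg_flag(ei_kcal=1100, bmr_kcal=1400))  # ratio 0.79 -> under-reporter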

  18. Prognostics of Power Electronics, Methods and Validation Experiments

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.; Celaya, Jose R.; Biswas, Gautam; Goebel, Kai

    2012-01-01

    Failure of electronic devices is a concern for future electric aircraft that will see an increase in electronics used to drive and control safety-critical equipment throughout the aircraft. As a result, investigation of precursors to failure in electronics and prediction of the remaining life of electronic components are of key importance. DC-DC power converters are power electronics systems typically employed as sourcing elements for avionics equipment. Current research efforts in prognostics for these power systems focus on the identification of failure mechanisms and the development of accelerated aging methodologies and systems to accelerate the aging process of test devices while continuously measuring key electrical and thermal parameters. Preliminary model-based prognostics algorithms have been developed making use of empirical degradation models and physics-inspired degradation models, with focus on key components like electrolytic capacitors and power MOSFETs (metal-oxide-semiconductor field-effect transistors). This paper presents current results on the development of validation methods for prognostics algorithms for power electrolytic capacitors, particularly the use of accelerated aging systems for algorithm validation. Validation of prognostics algorithms presents difficulties in practice due to the lack of run-to-failure experiments in deployed systems. By using accelerated aging experiments, we circumvent this problem in order to define initial validation activities.
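
    The abstract does not spell out its validation metrics; common choices in the prognostics literature for scoring remaining-useful-life (RUL) predictions against accelerated-aging ground truth include relative accuracy and alpha-bound checks, sketched here under that assumption.

```python
def relative_accuracy(rul_true, rul_pred):
    """Relative accuracy of a remaining-useful-life prediction at one
    point in time: 1 - |true - predicted| / true (1.0 is perfect)."""
    return 1.0 - abs(rul_true - rul_pred) / rul_true

def within_alpha_bounds(rul_true, rul_pred, alpha=0.2):
    """Alpha-lambda style check: is the prediction within +/- alpha
    of the ground-truth RUL?"""
    return abs(rul_pred - rul_true) <= alpha * rul_true

# If an aging run ends at 200 h, then at 150 h the true RUL is 50 h.
print(relative_accuracy(50.0, 42.0))    # 0.84
print(within_alpha_bounds(50.0, 42.0))  # True: within the 20% band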

  19. Assessment of dietary sodium intake using a food frequency questionnaire and 24-hour urinary sodium excretion: a systematic literature review.

    PubMed

    McLean, Rachael M; Farmer, Victoria L; Nettleton, Alice; Cameron, Claire M; Cook, Nancy R; Campbell, Norman R C

    2017-12-01

    Food frequency questionnaires (FFQs) are often used to assess dietary sodium intake, although 24-hour urinary excretion is the most accurate measure of intake. The authors conducted a systematic review to investigate whether FFQs are a reliable and valid way of measuring usual dietary sodium intake. Results from 18 studies are described in this review, including 16 validation studies. The methods of study design and analysis varied widely with respect to FFQ instrument, number of 24-hour urine collections collected per participant, methods used to assess completeness of urine collections, and statistical analysis. Overall, there was poor agreement between estimates from FFQ and 24-hour urine. The authors suggest a framework for validation and reporting based on a consensus statement (2004), and recommend that all FFQs used to estimate dietary sodium intake undergo validation against multiple 24-hour urine collections. ©2017 Wiley Periodicals, Inc.

  20. Evaluation of ultrasonic cavitation of metallic and non-metallic surfaces

    NASA Technical Reports Server (NTRS)

    Mehta, Narinder K.

    1992-01-01

    1,1,2-Trichloro-1,2,2-trifluoroethane (CFC-113), commercially known as Freon-113, is the primary test solvent used for validating cleaned hardware at the Kennedy Space Center (KSC). Due to the ozone depletion problem, the current United States policy calls for the phase-out of Freons by 1995. NASA's chlorofluorocarbon (CFC) replacement group at KSC has opted to use water as a replacement fluid for the validation process, since water is non-toxic, inexpensive, and environmentally friendly. The replacement validation method calls for ultrasonication of small parts in water at 52 °C for one or two 10-minute wash cycles in commercial ultrasonic baths. In this project, experimental data were obtained to assess whether the proposed validation method causes any damage to metallic and non-metallic surfaces through ultrasonic cavitation.

  1. Translation of the Neck Disability Index and validation of the Greek version in a sample of neck pain patients

    PubMed Central

    Trouli, Marianna N; Vernon, Howard T; Kakavelakis, Kyriakos N; Antonopoulou, Maria D; Paganas, Aristofanis N; Lionis, Christos D

    2008-01-01

    Background Neck pain is a highly prevalent condition resulting in major disability. Standard scales for measuring disability in patients with neck pain have a pivotal role in research and clinical settings. The Neck Disability Index (NDI) is a valid and reliable tool, designed to measure disability in activities of daily living due to neck pain. The purpose of our study was the translation and validation of the NDI in a Greek primary care population with neck complaints. Methods The original version of the questionnaire was used. Based on international standards, the translation strategy comprised forward translations, reconciliation, backward translation and pre-testing steps. The validation procedure covered the internal consistency (Cronbach alpha), test-retest reliability (Intraclass Correlation Coefficient, Bland and Altman method), construct validity (exploratory factor analysis) and responsiveness (Spearman correlation coefficient, Standard Error of Measurement and Minimal Detectable Change) of the questionnaire. Data quality was also assessed through completeness of data and floor/ceiling effects. Results The translation procedure resulted in the Greek modified version of the NDI, which was culturally adapted through the pre-testing phase. The validation procedure revealed a large amount of missing data due to low applicability, which was handled with two methods. Floor or ceiling effects were not observed. Cronbach alpha was calculated as 0.85, indicating good internal consistency. The intraclass correlation coefficient was found to be 0.93 (95% CI 0.84–0.97), indicating very good test-retest reliability. Factor analysis yielded one factor with Eigenvalue 4.48 explaining 44.77% of variance. The Spearman correlation coefficient (0.3; P = 0.02) revealed some relation between the change score in the NDI and the Global Rating of Change (GROC). The SEM and MDC were calculated as 0.64 and 1.78, respectively. Conclusion The Greek version of the NDI measures disability in patients with neck pain in a reliable, valid and responsive manner. It is considered a useful tool for research and clinical settings in Greek Primary Health Care. PMID:18647393
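
    The responsiveness statistics above follow standard formulas: SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. A short sketch makes this concrete; with the reported SEM of 0.64, the MDC comes out at 1.77, matching the reported 1.78 up to rounding (the baseline SD itself is not given in the abstract).

```python
import math

def sem_from_icc(sd_baseline, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd_baseline * math.sqrt(1.0 - icc)

def mdc95(sem):
    """Minimal detectable change at 95% confidence:
    MDC95 = 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem

print(round(mdc95(0.64), 2))  # 1.77, vs. the reported 1.78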

  2. BEaST: brain extraction based on nonlocal segmentation technique.

    PubMed

    Eskildsen, Simon F; Coupé, Pierrick; Fonov, Vladimir; Manjón, José V; Leung, Kelvin K; Guizard, Nicolas; Wassef, Shafik N; Østergaard, Lasse Riis; Collins, D Louis

    2012-02-01

    Brain extraction is an important step in the analysis of brain images. The variability in brain morphology and the difference in intensity characteristics due to imaging sequences make the development of a general purpose brain extraction algorithm challenging. To address this issue, we propose a new robust method (BEaST) designed to produce consistent and accurate brain extraction. This method is based on nonlocal segmentation embedded in a multi-resolution framework. A library of 80 priors is semi-automatically constructed from the NIH-sponsored MRI study of normal brain development, the International Consortium for Brain Mapping, and the Alzheimer's Disease Neuroimaging Initiative databases. In testing, a mean Dice similarity coefficient of 0.9834±0.0053 was obtained when performing leave-one-out cross validation selecting only 20 priors from the library. Validation using the online Segmentation Validation Engine resulted in a top ranking position with a mean Dice coefficient of 0.9781±0.0047. Robustness of BEaST is demonstrated on all baseline ADNI data, resulting in a very low failure rate. The segmentation accuracy of the method is better than that of two widely used publicly available methods and recent state-of-the-art hybrid approaches. BEaST provides results comparable to a recent label fusion approach, while being 40 times faster and requiring a much smaller library of priors. Copyright © 2011 Elsevier Inc. All rights reserved.
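
    The Dice similarity coefficient used above to score brain-extraction quality is 2|A∩B|/(|A|+|B|) for two binary masks; a minimal sketch with toy masks (not study data) follows.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Two toy "brain masks" that mostly overlap.
a = np.array([[0, 1, 1], [0, 1, 1]])
b = np.array([[0, 1, 1], [1, 1, 0]])
print(dice_coefficient(a, b))  # 0.75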

  3. Validity of endothelial cell analysis methods and recommendations for calibration in Topcon SP-2000P specular microscopy.

    PubMed

    van Schaick, Willem; van Dooren, Bart T H; Mulder, Paul G H; Völker-Dieben, Hennie J M

    2005-07-01

    To report on the calibration of the Topcon SP-2000P specular microscope and the Endothelial Cell Analysis Module of the IMAGEnet 2000 software, and to establish the validity of the different endothelial cell density (ECD) assessment methods available in these instruments. Using an external microgrid, we calibrated the magnification of the SP-2000P and the IMAGEnet software. In both eyes of 36 volunteers, we validated 4 ECD assessment methods by comparing these methods to the gold standard manual ECD, manual counting of cells on a video print. These methods were: the estimated ECD, estimation of ECD with a reference grid on the camera screen; the SP-2000P ECD, pointing out whole contiguous cells on the camera screen; the uncorrected IMAGEnet ECD, using automatically drawn cell borders, and the corrected IMAGEnet ECD, with manual correction of incorrectly drawn cell borders in the automated analysis. Validity of each method was evaluated by calculating both the mean difference with the manual ECD and the limits of agreement as described by Bland and Altman. Preset factory values of magnification were incorrect, resulting in errors in ECD of up to 9%. All assessments except 1 of the estimated ECDs differed significantly from manual ECDs, with most differences being similar (≤6.5%), except for uncorrected IMAGEnet ECD (30.2%). Corrected IMAGEnet ECD showed the narrowest limits of agreement (-4.9 to +19.3%). We advise checking the calibration of magnification in any specular microscope or endothelial analysis software as it may be erroneous. Corrected IMAGEnet ECD is the most valid of the investigated methods in the Topcon SP-2000P/IMAGEnet 2000 combination.
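
    The Bland and Altman analysis cited above reduces to a mean difference (bias) and 95% limits of agreement computed from paired differences. The sketch below uses raw differences and invented ECD values; the abstract itself reports the differences as percentages.

```python
import numpy as np

def bland_altman_limits(method, gold_standard):
    """Mean difference and 95% limits of agreement between a test
    method and the gold standard (Bland & Altman)."""
    diffs = np.asarray(method, dtype=float) - np.asarray(gold_standard, dtype=float)
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented ECD estimates (cells/mm^2): automated method vs manual counts.
auto = [2650, 2810, 2400, 2990, 2550]
manual = [2600, 2750, 2450, 2900, 2500]
bias, (lo, hi) = bland_altman_limits(auto, manual)
print(round(bias), round(lo), round(hi))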

  4. Validity and reliability of the session-RPE method for quantifying training in Australian football: a comparison of the CR10 and CR100 scales.

    PubMed

    Scott, Tannath J; Black, Cameron R; Quinn, John; Coutts, Aaron J

    2013-01-01

    The purpose of this study was to examine and compare the criterion validity and test-retest reliability of the CR10 and CR100 rating of perceived exertion (RPE) scales for team sport athletes who undertake high-intensity, intermittent exercise. Twenty-one male Australian football (AF) players (age: 19.0 ± 1.8 years, body mass: 83.92 ± 7.88 kg) participated in the first part (part A) of this study, which examined the construct validity of the session-RPE (sRPE) method for quantifying training load in AF. Ten male athletes (age: 16.1 ± 0.5 years) participated in the second part of the study (part B), which compared the test-retest reliability of the CR10 and CR100 RPE scales. In part A, the validity of the sRPE method was assessed by examining the relationships between sRPE and objective measures of internal (i.e., heart rate) and external training load (i.e., distance traveled) collected from AF training sessions. Part B of the study assessed the reliability of sRPE by examining its test-retest reliability during 3 different intensities of controlled intermittent running (10, 11.5, and 13 km·h−1). Results from part A demonstrated strong correlations for CR10- and CR100-derived sRPE with measures of internal training load (Banister's TRIMP and Edwards' TRIMP) (CR10: r = 0.83 and 0.83; CR100: r = 0.80 and 0.81, p < 0.05). Correlations between sRPE and external training load (distance, higher-speed running and player load) for both the CR10 (r = 0.81, 0.71, and 0.83) and CR100 (r = 0.78, 0.69, and 0.80) were significant (p < 0.05). Results from part B demonstrated poor reliability for both the CR10 (31.9% CV) and CR100 (38.6% CV) RPE scales after short bouts of intermittent running. Collectively, these results suggest that both CR10- and CR100-derived sRPE methods have good construct validity for assessing training load in AF. The poor reliability revealed under field testing indicates that the sRPE method may not be sensitive enough to detect small changes in exercise intensity during brief intermittent running bouts. Despite this limitation, the sRPE remains a valid method to quantify training loads in high-intensity, intermittent team sports.
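
    The sRPE load is conventionally the product of the session RPE and its duration in minutes, and Edwards' TRIMP weights time spent in five heart-rate zones (50-60% up to 90-100% of HRmax) by factors 1-5. A sketch under those conventions, with invented numbers:

```python
def session_rpe_load(rpe, duration_min):
    """Session load in arbitrary units: RPE (CR10 or CR100) x minutes."""
    return rpe * duration_min

def edwards_trimp(minutes_in_zone):
    """Edwards' TRIMP: minutes in five HR zones (50-60% ... 90-100%
    of HRmax) weighted by 1 through 5."""
    return sum(w * m for w, m in zip(range(1, 6), minutes_in_zone))

print(session_rpe_load(7, 90))              # 630 AU for a hard 90-min session
print(edwards_trimp([10, 20, 30, 20, 10]))  # 10 + 40 + 90 + 80 + 50 = 270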

  5. The predictive validity of selection for entry into postgraduate training in general practice: evidence from three longitudinal studies.

    PubMed

    Patterson, Fiona; Lievens, Filip; Kerrin, Máire; Munro, Neil; Irish, Bill

    2013-11-01

    The selection methodology for UK general practice is designed to accommodate several thousand applicants per year and targets six core attributes identified in a multi-method job-analysis study. The aim was to evaluate the predictive validity of selection methods for entry into postgraduate training, comprising a clinical problem-solving test, a situational judgement test, and a selection centre. A three-part longitudinal predictive validity study of selection into training for UK general practice was conducted. In sample 1, participants were junior doctors applying for training in general practice (n = 6824). In sample 2, participants were GP registrars 1 year into training (n = 196). In sample 3, participants were GP registrars sitting the licensing examination after 3 years, at the end of training (n = 2292). The outcome measures included: assessor ratings of performance in a selection centre comprising job simulation exercises (sample 1); supervisor ratings of trainee job performance 1 year into training (sample 2); and licensing examination results, including an applied knowledge examination and a 12-station clinical skills objective structured clinical examination (OSCE; sample 3). Performance ratings at selection predicted subsequent supervisor ratings of job performance 1 year later. Selection results also significantly predicted performance on both the clinical skills OSCE and the applied knowledge examination for licensing at the end of training. In combination, these longitudinal findings provide good evidence of the predictive validity of the selection methods, and are the first reported for entry into postgraduate training. Results show that the best predictor of work performance and training outcomes is a combination of a clinical problem-solving test, a situational judgement test, and a selection centre. Implications for selection methods for all postgraduate specialties are considered.

  6. Meeting Report: Validation of Toxicogenomics-Based Test Systems: ECVAM–ICCVAM/NICEATM Considerations for Regulatory Use

    PubMed Central

    Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H.; Clerici, Libero; Coecke, Sandra; Douglas, George R.; Gribaldo, Laura; Groten, John P.; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R.; Toda, Eisaku; Tong, Weida; van Delft, Joost H.; Weis, Brenda; Schechtman, Leonard M.

    2006-01-01

    This is the report of the first workshop “Validation of Toxicogenomics-Based Test Systems” held 11–12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that doing so now, while the technology is evolving and associated challenges are being identified, would provide a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas were a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities. PMID:16507466

  7. Meeting report: Validation of toxicogenomics-based test systems: ECVAM-ICCVAM/NICEATM considerations for regulatory use.

    PubMed

    Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H; Clerici, Libero; Coecke, Sandra; Douglas, George R; Gribaldo, Laura; Groten, John P; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R; Toda, Eisaku; Tong, Weida; van Delft, Joost H; Weis, Brenda; Schechtman, Leonard M

    2006-03-01

    This is the report of the first workshop "Validation of Toxicogenomics-Based Test Systems" held 11-12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that doing so now, while the technology is evolving and associated challenges are being identified, would provide a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas were a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities.

  8. Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods

    NASA Astrophysics Data System (ADS)

    Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan

    2017-03-01

    Two variants of the Muskingum flood routing method, formulated to account for the nonlinearity of the channel routing process, are investigated in this study. These variants are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require the rigorous calibration and validation procedures required by the NLM method, because its parameters are related to flow and channel characteristics through established relationships based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO) and Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph through a set of 40-km-long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same set of channel reaches used for the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on pertinent evaluation measures. The results of the study reveal that the physically based VPMM method is capable of accounting for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires tedious calibration and validation procedures.
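
    The conceptual core of the NLM method is the nonlinear storage equation S = K[xI + (1 − x)Q]^m (after Gill, 1978). Below is a minimal Euler-scheme routing sketch under that equation; in practice K, x and m would be calibrated with the AIA techniques named above, and the hydrograph and parameter values here are illustrative only.

```python
def route_nonlinear_muskingum(inflow, K, x, m, dt):
    """Euler-scheme sketch of the three-parameter nonlinear Muskingum
    (NLM) method with storage S = K * (x*I + (1-x)*Q)**m.
    inflow: hydrograph ordinates spaced dt apart; returns outflow."""
    S = K * inflow[0] ** m  # steady state at t = 0 implies Q0 = I0
    outflow = []
    for t, I in enumerate(inflow):
        Q = ((S / K) ** (1.0 / m) - x * I) / (1.0 - x)  # outflow from storage
        outflow.append(Q)
        if t < len(inflow) - 1:
            S += dt * (I - Q)  # continuity: dS/dt = I - Q
    return outflow

# Illustrative triangular inflow hydrograph (m^3/s) at 1-h spacing.
hydrograph = [10, 30, 60, 45, 30, 20, 15, 12, 10]
print([round(q, 1) for q in route_nonlinear_muskingum(hydrograph, K=0.5, x=0.2, m=1.5, dt=1.0)])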

  9. Review on pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

    PubMed

    Rahman, Mohd Nasrull Abdol; Mohamad, Siti Shafika

    2017-01-01

    Computer work is associated with musculoskeletal disorders (MSDs), and several methods have been developed to assess the risk factors of computer work related to MSDs. This review aims to give an overview of the pen-and-paper-based observational techniques currently available for assessing ergonomic risk factors of computer work. We searched an electronic database for materials from 1992 until 2015, focusing on computer work, pen-and-paper observational methods, office risk factors and musculoskeletal disorders. This review was developed to assess the risk factors covered by, and the reliability and validity of, pen-and-paper observational methods associated with computer work. Two evaluators independently carried out this review. Seven observational methods used to assess exposure to office risk factors for work-related musculoskeletal disorders were identified. The risk factors covered by current pen-and-paper-based observational tools were posture, office components, force and repetition. Of the seven methods, only five had been tested for reliability; they were proven to be reliable and were rated as moderate to good. Only four of the seven methods were tested for validity, with moderate results. Many observational tools already exist, but no single tool appears to cover all of the risk factors, including working posture, office components, force, repetition and the office environment, at office workstations and in computer work. Although proper validation of exposure assessment techniques is the most important factor in tool development, not all existing observational methods have been tested for reliability and validity. Furthermore, this review could provide researchers with ways to improve pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

  10. Biotransformation of lignan glycoside to its aglycone by Woodfordia fruticosa flowers: quantification of compounds using a validated HPTLC method.

    PubMed

    Mishra, Shikha; Aeri, Vidhu

    2017-12-01

    Saraca asoca Linn. (Caesalpiniaceae) is an important traditional remedy for gynaecological disorders; it contains lyoniside, an aryl tetralin lignan glycoside. The aglycone of lyoniside, lyoniresinol, possesses structural similarity to enterolignan precursors, which are established phytoestrogens. This work illustrates the biotransformation of lyoniside to lyoniresinol using Woodfordia fruticosa Kurz. (Lythraceae) flowers and the simultaneous quantification of lyoniside and lyoniresinol using a validated HPTLC method. The aqueous extract prepared from S. asoca bark was fermented using W. fruticosa flowers. The substrate and fermented product were analyzed simultaneously using the solvent system toluene:ethyl acetate:formic acid (4:3:0.4) at 254 nm. The method was validated for specificity, accuracy, precision, linearity, sensitivity and robustness as per ICH guidelines. The substrate showed the presence of lyoniside; however, lyoniside decreased as the fermentation proceeded. On the 3rd day, lyoniresinol started appearing in the medium, and within 8 days most of the lyoniside was converted to lyoniresinol. The developed method was specific for lyoniside and lyoniresinol, which showed linearity in the ranges of 250-3000 and 500-2500 ng, respectively. The method was accurate, with recoveries of 99.84% and 99.83% for lyoniside and lyoniresinol, respectively. The aryl tetralin lignan glycoside lyoniside was successfully transformed into lyoniresinol using W. fruticosa flowers, and the contents of both compounds were simultaneously analyzed using the developed and validated HPTLC method.

  11. Validation of methods on formalin testing in tofu and determination of 3,5-diacetyl-dihydrolutidine stability by UV-Vis spectrophotometry

    NASA Astrophysics Data System (ADS)

    Rohyami, Yuli; Pribadi, Rizki Maulana

    2017-12-01

    Formalin is a food preservative prohibited by the government, but abuse of this chemical is still widely found. The presence of formalin can be detected using a specific reagent that can confirm the presence of formaldehyde qualitatively and quantitatively. This research was conducted to validate a method for determining formalin in tofu using Nash reagent and UV-Vis spectrophotometry. The addition of Nash reagent leads to the formation of the 3,5-diacetyl-dihydrolutidine complex. The stability of this complex was tested against time and pH, and the method for formalin testing in tofu was then validated. The results showed that the 3,5-diacetyl-dihydrolutidine complex is stable at pH 7 and in the range of 70-120 minutes. The validation shows that the method gives good precision and an accuracy of 83.78%. The method has a limit of detection of 1.3681 µg/mL, a limit of quantification of 4.5603 µg/mL, and an estimated measurement uncertainty of 1.30 µg/mL. The test showed that the tofu contained formalin at 3.09 ± 1.30 µg/mL. These values provide information that this method can be used as a procedure for the determination of formalin in tofu.
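
    The reported LOQ/LOD ratio (4.5603/1.3681 ≈ 3.3) is consistent with the ICH calibration-based formulas LOD = 3.3σ/S and LOQ = 10σ/S, where S is the calibration slope and σ the residual standard deviation. A sketch with invented calibration data:

```python
import numpy as np

def ich_lod_loq(conc, response):
    """ICH-style detection limits from a calibration line:
    LOD = 3.3 * sigma / S and LOQ = 10 * sigma / S, where S is the
    slope and sigma the residual standard deviation of the fit."""
    slope, intercept = np.polyfit(conc, response, 1)
    residuals = np.asarray(response) - (slope * np.asarray(conc) + intercept)
    sigma = residuals.std(ddof=2)  # n - 2 degrees of freedom for a line
    return 3.3 * sigma / slope, 10.0 * sigma / slope

conc = [2, 4, 6, 8, 10]                     # standards, µg/mL (illustrative)
resp = [0.101, 0.205, 0.296, 0.404, 0.498]  # absorbance readings
lod, loq = ich_lod_loq(conc, resp)
print(round(lod, 2), round(loq, 2))         # LOQ/LOD ratio is 10/3.3 by construction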

  12. Development and Testing of a Method for Validating Chemical Inactivation of Ebola Virus.

    PubMed

    Alfson, Kendra J; Griffiths, Anthony

    2018-03-13

    Complete inactivation of infectious Ebola virus (EBOV) is required before a sample may be removed from a Biosafety Level 4 laboratory. The United States Federal Select Agent Program regulations require that procedures used to demonstrate chemical inactivation be validated in-house to confirm complete inactivation. The objective of this study was to develop a method for validating chemical inactivation of EBOV and then demonstrate the effectiveness of several commonly used inactivation methods. Samples containing infectious EBOV (Zaire ebolavirus) in different matrices were treated, and the sample was diluted to limit the cytotoxic effect of the inactivating chemical. The presence of infectious virus was determined by assessing the cytopathic effect in Vero E6 cells. Crucially, this method did not result in a loss of infectivity in control samples, and we were able to detect fewer than five infectious units of EBOV (Zaire ebolavirus). We found that TRIzol LS reagent and RNA-Bee inactivated EBOV in serum; TRIzol LS reagent inactivated EBOV in clarified cell culture media; TRIzol reagent inactivated EBOV in tissue and infected Vero E6 cells; 10% neutral buffered formalin inactivated EBOV in tissue; and osmium tetroxide vapors inactivated EBOV on transmission electron microscopy grids. The methods described herein are easily performed and can be adapted to validate the inactivation of viruses in various matrices and by various chemical methods.

  13. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture, namely Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied were PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and in a pharmaceutical dosage form via handling of the UV spectral data. A 3-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was further assessed using the standard addition technique.
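
    As a minimal illustration of the PLS workflow described above, the sketch below uses scikit-learn's PLSRegression with random arrays standing in for the UV spectra and the three analyte concentrations; the 15/10 split mirrors the study's calibration/validation design, while the number of latent variables and all data are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Illustrative stand-in for UV spectral data: 25 design mixtures,
# absorbances at 100 wavelengths, 3 analyte concentrations each.
rng = np.random.default_rng(0)
X = rng.random((25, 100))  # spectra (rows = mixtures)
Y = rng.random((25, 3))    # AML, VAL, HCT concentrations

# 15 calibration mixtures, 10 validation mixtures, as in the design.
pls = PLSRegression(n_components=5)  # latent variables: assumed value
pls.fit(X[:15], Y[:15])
Y_pred = pls.predict(X[15:])

rmsep = np.sqrt(((Y_pred - Y[15:]) ** 2).mean(axis=0))
print(rmsep)  # one RMSEP per analyte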

  14. Determination of methylmercury in marine biota samples with advanced mercury analyzer: method validation.

    PubMed

    Azemard, Sabine; Vassileva, Emilia

    2015-06-01

    In this paper, we present a simple, fast and cost-effective method for the determination of methylmercury (MeHg) in marine samples. All important parameters influencing the sample preparation process were investigated and optimized. Full validation of the method was performed in accordance with ISO-17025 (ISO/IEC, 2005) and Eurachem guidelines. Blanks, selectivity, working range (0.09-3.0 ng), recovery (92-108%), intermediate precision (1.7-4.5%), traceability, limit of detection (0.009 ng), limit of quantification (0.045 ng) and expanded uncertainty (15%, k=2) were assessed. An estimation of the uncertainty contribution of each parameter and a demonstration of the traceability of measurement results were provided as well. Furthermore, the selectivity of the method was studied by analyzing the same sample extracts by advanced mercury analyzer (AMA) and gas chromatography-atomic fluorescence spectrometry (GC-AFS). Additional validation of the proposed procedure was achieved through participation in the IAEA-461 worldwide inter-laboratory comparison exercises. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Validation approach for a fast and simple targeted screening method for 75 antibiotics in meat and aquaculture products using LC-MS/MS.

    PubMed

    Dubreil, Estelle; Gautier, Sophie; Fourmond, Marie-Pierre; Bessiral, Mélaine; Gaugain, Murielle; Verdon, Eric; Pessel, Dominique

    2017-04-01

    An approach is described to validate a fast and simple targeted screening method for antibiotic analysis in meat and aquaculture products by LC-MS/MS. The strategy of validation was applied for a panel of 75 antibiotics belonging to different families, i.e., penicillins, cephalosporins, sulfonamides, macrolides, quinolones and phenicols. The samples were extracted once with acetonitrile, concentrated by evaporation and injected into the LC-MS/MS system. The approach chosen for the validation was based on the Community Reference Laboratory (CRL) guidelines for the validation of screening qualitative methods. The aim of the validation was to prove sufficient sensitivity of the method to detect all the targeted antibiotics at the level of interest, generally the maximum residue limit (MRL). A robustness study was also performed to test the influence of different factors. The validation showed that the method is valid to detect and identify 73 antibiotics of the 75 antibiotics studied in meat and aquaculture products at the validation levels.

  16. A Short Study on the Validity of Miller's Theorem Applied to Transistor Amplifier High-Frequency Performance

    ERIC Educational Resources Information Center

    Schubert, T. F., Jr.; Kim, E. M.

    2009-01-01

    The use of Miller's Theorem in the determination of the high-frequency cutoff frequency of transistor amplifiers was recently challenged by a paper published in this TRANSACTIONS. Unfortunately, that paper provided no simulation or experimental results to bring credence to the challenge or to validate the alternate method of determination…

  17. Spanish Adaptation and Validation of the Family Quality of Life Survey

    ERIC Educational Resources Information Center

    Verdugo, M. A.; Cordoba, L.; Gomez, J.

    2005-01-01

    Background: Assessing the quality of life (QOL) of families that include a person with a disability has recently become a major emphasis in cross-cultural QOL studies. The present study examined the reliability and validity of the Family Quality of Life Survey (FQOL) on a Spanish sample. Method and Results: The sample comprised 385 families who…

  18. The Impact of Model Parameterization and Estimation Methods on Tests of Measurement Invariance with Ordered Polytomous Data

    ERIC Educational Resources Information Center

    Koziol, Natalie A.; Bovaird, James A.

    2018-01-01

    Evaluations of measurement invariance provide essential construct validity evidence--a prerequisite for seeking meaning in psychological and educational research and ensuring fair testing procedures in high-stakes settings. However, the quality of such evidence is partly dependent on the validity of the resulting statistical conclusions. Type I or…

  19. Development and Validation of an Instrument to Measure Indonesian Pre-Service Teachers' Conceptions of Statistics

    ERIC Educational Resources Information Center

    Idris, Khairiani; Yang, Kai-Lin

    2017-01-01

    This article reports the results of a mixed-methods approach to develop and validate an instrument to measure Indonesian pre-service teachers' conceptions of statistics. First, a phenomenographic study involving a sample of 44 participants uncovered six categories of conceptions of statistics. Second, an instrument of conceptions of statistics was…

  20. A New Method for Analyzing Content Validity Data Using Multidimensional Scaling

    ERIC Educational Resources Information Center

    Li, Xueming; Sireci, Stephen G.

    2013-01-01

    Validity evidence based on test content is of essential importance in educational testing. One source for such evidence is an alignment study, which helps evaluate the congruence between tested objectives and those specified in the curriculum. However, the results of an alignment study do not always sufficiently capture the degree to which a test…

  1. Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions

    DTIC Science & Technology

    2017-09-01

    Experimental validation of model updating and damage detection via eigenvalue sensitivity methods with artificial boundary conditions, by Matthew D. Bouwense.

  2. Validity of body composition methods across ethnic population groups.

    PubMed

    Deurenberg, P; Deurenberg-Yap, M

    2003-10-01

    Most in vivo body composition methods rely on assumptions that may vary among different population groups as well as within the same population group. The assumptions are based on in vitro body composition (carcass) analyses. The majority of body composition studies were performed on Caucasians, and much of the information on the validity of methods and assumptions was available only for this ethnic group. It is assumed that these assumptions are also valid for other ethnic groups. However, if apparent differences across ethnic groups in body composition 'constants' and body composition 'rules' are not taken into account, biased information on body composition will be the result. This in turn may lead to misclassification of obesity or underweight at an individual as well as a population level. There is a need for more cross-ethnic population studies on body composition. Those studies should be carried out carefully, with adequate methodology and standardization, for the obtained information to be valuable.

  3. Validation of a partial coherence interferometry method for estimating retinal shape

    PubMed Central

    Verkicharla, Pavan K.; Suheimat, Marwan; Pope, James M.; Sepehrband, Farshid; Mathur, Ankit; Schmid, Katrina L.; Atchison, David A.

    2015-01-01

    To validate a simple partial coherence interferometry (PCI) based retinal shape method, estimates of retinal shape were determined in 60 young adults using off-axis PCI, with three stages of modeling using variants of the Le Grand model eye, and magnetic resonance imaging (MRI). Stages 1 and 2 involved a basic model eye without and with surface ray deviation, respectively, and Stage 3 used a model with individual ocular biometry and ray deviation at surfaces. Considering the theoretical uncertainty of MRI (12-14%), the results of the study indicate good agreement between MRI and all three stages of PCI modeling, with <4% and <7% differences in retinal shapes along the horizontal and vertical meridians, respectively. Stage 2 and Stage 3 gave slightly different retinal co-ordinates than Stage 1, and we recommend the intermediate Stage 2 as providing a simple and valid method of determining retinal shape from PCI data. PMID:26417496

  4. Validation of a partial coherence interferometry method for estimating retinal shape.

    PubMed

    Verkicharla, Pavan K; Suheimat, Marwan; Pope, James M; Sepehrband, Farshid; Mathur, Ankit; Schmid, Katrina L; Atchison, David A

    2015-09-01

    To validate a simple partial coherence interferometry (PCI) based retinal shape method, estimates of retinal shape were determined in 60 young adults using off-axis PCI, with three stages of modeling using variants of the Le Grand model eye, and magnetic resonance imaging (MRI). Stages 1 and 2 involved a basic model eye without and with surface ray deviation, respectively, and Stage 3 used a model with individual ocular biometry and ray deviation at surfaces. Considering the theoretical uncertainty of MRI (12-14%), the results of the study indicate good agreement between MRI and all three stages of PCI modeling, with <4% and <7% differences in retinal shapes along the horizontal and vertical meridians, respectively. Stage 2 and Stage 3 gave slightly different retinal co-ordinates than Stage 1, and we recommend the intermediate Stage 2 as providing a simple and valid method of determining retinal shape from PCI data.

  5. Validation of an HPLC method for the quantification of ambroxol hydrochloride and benzoic acid in a syrup as pharmaceutical form stress test for stability evaluation.

    PubMed

    Heinänen, M; Barbas, C

    2001-03-01

    A method is described for the HPLC separation of ambroxol (trans-4-(2-amino-3,5-dibromobenzylamino)cyclohexanol hydrochloride) and benzoic acid, with UV detection at 247 nm, in a syrup pharmaceutical presentation. Optimal conditions were: column Symmetry Shield RPC8, 5 µm, 250 x 4.6 mm, and methanol/(H3PO4 8.5 mM/triethylamine, pH = 2.8) 40:60 v/v. Validation was performed using standards and the pharmaceutical preparation containing the compounds described above. Results from both standards and samples show suitable validation parameters. The pharmaceutical-grade substances were subjected to factors that could influence their chemical stability, and these reaction mixtures were analysed to evaluate the capability of the method to separate degradation products. Degradation products did not interfere with the determination of the substances tested by the assay.

  6. Evidence flow graph methods for validation and verification of expert systems

    NASA Technical Reports Server (NTRS)

    Becker, Lee A.; Green, Peter G.; Bhatnagar, Jayant

    1988-01-01

    This final report describes the results of an investigation into the use of evidence flow graph techniques for performing validation and verification of expert systems. This was approached by developing a translator to convert horn-clause rule bases into evidence flow graphs, a simulation program, and methods of analysis. These tools were then applied to a simple rule base which contained errors. It was found that the method was capable of identifying a variety of problems, for example that the order of presentation of input data or small changes in critical parameters could affect the output from a set of rules.
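
    The translation step described above, from horn-clause rules to an evidence flow graph, can be pictured with a toy sketch; the rule encoding as (antecedents, consequent) pairs and the example rules are assumptions, since the report's actual syntax is not given here.

```python
def rules_to_evidence_flow_graph(rules):
    """Translate horn-clause rules (antecedents -> consequent) into a
    directed evidence-flow graph given as an edge list, so that paths
    from input facts to conclusions can be traced and analyzed."""
    edges = []
    for antecedents, consequent in rules:
        for fact in antecedents:
            edges.append((fact, consequent))
    return edges

# Hypothetical rule base: two chained rules.
rules = [
    (("pressure_high", "temp_rising"), "alarm_state"),
    (("alarm_state",), "shutdown_advice"),
]
for src, dst in rules_to_evidence_flow_graph(rules):
    print(f"{src} -> {dst}")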

  7. Simulation verification techniques study

    NASA Technical Reports Server (NTRS)

    Schoonmaker, P. B.; Wenglinski, T. H.

    1975-01-01

    Results are summarized of the simulation verification techniques study which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.

  8. Spectral signature verification using statistical analysis and text mining

    NASA Astrophysics Data System (ADS)

    DeCoster, Mallory E.; Firpi, Alexe H.; Jacobs, Samantha K.; Cone, Shelli R.; Tzeng, Nigel H.; Rodriguez, Benjamin M.

    2016-05-01

    In the spectral science community, numerous spectral signatures are stored in databases representative of many sample materials collected from a variety of spectrometers and spectroscopists. Due to the variety and variability of the spectra that comprise many spectral databases, it is necessary to establish a metric for validating the quality of spectral signatures. This has been an area of great discussion and debate in the spectral science community. This paper discusses a method that independently validates two different aspects of a spectral signature to arrive at a final qualitative assessment: the textual meta-data and the numerical spectral data. Results associated with the spectral data stored in the Signature Database (SigDB) are presented. The numerical data comprising a sample material's spectrum are validated based on statistical properties derived from an ideal population set. The quality of the test spectrum is ranked based on a spectral angle mapper (SAM) comparison to the mean spectrum derived from the population set. Additionally, the contextual data of a test spectrum are qualitatively analyzed using lexical-analysis text mining. This technique analyzes the syntax of the meta-data to uncover local learning patterns and trends within the spectral data, indicative of the test spectrum's quality. Text mining applications have successfully been implemented for security (text encryption/decryption), biomedical, and marketing applications. The text mining lexical-analysis algorithm is trained on the meta-data patterns of a subset of high- and low-quality spectra, in order to have a model to apply to the entire SigDB data set. The statistical and textual methods combine to assess the quality of a test spectrum existing in a database without the need for an expert user. This method has been compared to other validation methods accepted by the spectral science community and has provided promising results when a baseline spectral signature is present for comparison. The spectral validation method proposed is described from both a practical application and an analytical perspective.
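
    The spectral angle mapper (SAM) comparison mentioned above treats two spectra as vectors and scores their similarity by the angle between them. A minimal sketch, with invented spectra standing in for the SigDB population set:

```python
import numpy as np

def spectral_angle(test_spectrum, reference_spectrum):
    """Spectral angle mapper (SAM): the angle (radians) between two
    spectra viewed as vectors; smaller means more similar."""
    a = np.asarray(test_spectrum, dtype=float)
    b = np.asarray(reference_spectrum, dtype=float)
    cos_theta = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Mean spectrum of an invented population set, then one test spectrum.
population = np.array([[0.10, 0.40, 0.80],
                       [0.12, 0.38, 0.83],
                       [0.09, 0.41, 0.79]])
mean_spectrum = population.mean(axis=0)
print(spectral_angle([0.11, 0.40, 0.81], mean_spectrum))  # near 0: high quality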

  9. Experimental validation of finite element and boundary element methods for predicting structural vibration and radiated noise

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Wu, T. W.; Wu, X. F.

    1994-01-01

    This research report is presented in three parts. In the first part, acoustical analyses were performed on modes of vibration of the housing of a transmission of a gear test rig developed by NASA. The modes of vibration of the transmission housing were measured using experimental modal analysis. The boundary element method (BEM) was used to calculate the sound pressure and sound intensity on the surface of the housing and the radiation efficiency of each mode. The radiation efficiency of each of the transmission housing modes was then compared to theoretical results for a finite baffled plate. In the second part, analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise level radiated from the box. The FEM was used to predict the vibration, while the BEM was used to predict the sound intensity and total radiated sound power using surface vibration as the input data. Vibration predicted by the FEM model was validated by experimental modal analysis; noise predicted by the BEM was validated by measurements of sound intensity. Three types of results are presented for the total radiated sound power: sound power predicted by the BEM model using vibration data measured on the surface of the box; sound power predicted by the FEM/BEM model; and sound power measured by an acoustic intensity scan. In the third part, the structure used in part two was modified. A rib was attached to the top plate of the structure. The FEM and BEM were then used to predict structural vibration and radiated noise respectively. The predicted vibration and radiated noise were then validated through experimentation.

  10. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population.

    PubMed

    Carvalho, Suzana Papile Maciel; Brito, Liz Magalhães; Paiva, Luiz Airton Saavedra de; Bicudo, Lucilene Arilho Ribeiro; Crosato, Edgard Michel; Oliveira, Rogério Nogueira de

    2013-01-01

    Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. It was concluded that methods involving physical anthropology present high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological adjustment is recommended as demonstrated in this study.
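
    The abstract does not reproduce the new discriminant formula, but the general logistic-discriminant idea with the two mandibular measurements can be sketched as below; all measurement values are hypothetical and the fitted model is not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [bigonial distance, ramus height] in mm.
X = np.array([[88, 52], [92, 57], [95, 60], [99, 63],   # males
              [84, 47], [86, 50], [89, 51], [91, 54]])  # females
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])                  # 1 = male, 0 = female

model = LogisticRegression().fit(X, y)
print(model.predict([[93, 58]]))        # predicted sex for a new mandible
print(model.predict_proba([[93, 58]]))  # class probabilities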

  11. Validation of a Salmonella loop-mediated isothermal amplification assay in animal food.

    PubMed

    Domesle, Kelly J; Yang, Qianru; Hammack, Thomas S; Ge, Beilei

    2018-01-02

    Loop-mediated isothermal amplification (LAMP) has emerged as a promising alternative to PCR for pathogen detection in food testing and clinical diagnostics. This study aimed to validate a Salmonella LAMP method run on both turbidimetry (LAMP I) and fluorescence (LAMP II) platforms in representative animal food commodities. The U.S. Food and Drug Administration (FDA)'s culture-based Bacteriological Analytical Manual (BAM) method was used as the reference method, and a real-time quantitative PCR (qPCR) assay was also performed. The method comparison study followed the FDA's microbiological methods validation guidelines, which align well with those from AOAC International and ISO. Both LAMP assays were 100% specific among 300 strains (247 Salmonella of 185 serovars and 53 non-Salmonella) tested. The detection limits ranged from 1.3 to 28 cells for six Salmonella strains of various serovars. Six commodities consisting of four animal feed items (cattle feed, chicken feed, horse feed, and swine feed) and two pet food items (dry cat food and dry dog food) all yielded satisfactory results. Compared to the BAM method, the relative levels of detection (RLODs) for LAMP I ranged from 0.317 to 1 with a combined value of 0.610, while those for LAMP II ranged from 0.394 to 1.152 with a combined value of 0.783, all falling within the acceptability limit (2.5) for an unpaired study. This also suggests that LAMP was more sensitive than the BAM method at detecting low-level Salmonella contamination in animal food, and results were available 3 days sooner. The performance of LAMP on both platforms was comparable to that of qPCR but notably faster, particularly LAMP II. Given the importance of Salmonella in animal food safety, the LAMP assays validated in this study hold great promise as a rapid, reliable, and robust method for routine screening of Salmonella in these commodities. Published by Elsevier B.V.

  12. Promoting and inhibiting factors for the use of validated dietary assessment questionnaires in health check-up counseling: from occupational health nurses and dietitians' perspective.

    PubMed

    Katagiri, Ryoko; Muto, Go; Sasaki, Satoshi

    2018-06-01

    A validated questionnaire is not typically used for dietary assessment in health check-up counseling provided by occupational health nurses in Japan. We conducted a qualitative study to investigate the barriers and promoting factors affecting the use of validated questionnaires. Ten occupational health nurses and three registered dietitians, working at a health insurance society, were recruited for this study using an open-ended, free-description questionnaire. Inhibiting factors included "Feeling of satisfaction with the current method," "Recognition of importance," and "Sense of burden from the questionnaire"; promoting factors included "Feeling the current method is insufficient," "Recognition of importance," "Reduction in the feeling of burden after the answer," "Expectation of and reaction to the result," and "Expectation for the effect of the counseling." Since a standardized dietary assessment method in health counseling might be desirable for harmonizing work with disease prevention in an occupational field, the findings of this study could suggest appropriate targets for reducing confusion among health professionals concerning the use of validated questionnaires.

  13. Determination of perfluorinated compounds in fish fillet homogenates: method validation and application to fillet homogenates from the Mississippi River.

    PubMed

    Malinsky, Michelle Duval; Jacoby, Cliffton B; Reagen, William K

    2011-01-10

    We report herein a simple protein-precipitation extraction liquid chromatography-tandem mass spectrometry (LC/MS/MS) method, its validation, and its application for the analysis of perfluorinated carboxylic acids (C7-C12), perfluorinated sulfonic acids (C4, C6, and C8), and perfluorooctane sulfonamide (FOSA) in fish fillet tissue. The method combines a rapid homogenization and protein-precipitation tissue extraction procedure with stable-isotope internal standard (IS) calibration. Method validation in bluegill (Lepomis macrochirus) fillet tissue evaluated the following: (1) method accuracy and precision with both extracted matrix-matched calibration and solvent (unextracted) calibration, (2) quantitation of mixed branched and linear isomers of perfluorooctanoate (PFOA) and perfluorooctanesulfonate (PFOS) with linear-isomer calibration, (3) quantitation of low-level (ppb) perfluorinated compounds (PFCs) in the presence of high-level (ppm) PFOS, and (4) specificity from matrix interferences. Both calibration techniques produced method accuracies within 100±13%, with precision (%RSD) ≤18% for all target analytes. Method accuracy and precision results for fillet samples from nine different fish species taken from the Mississippi River in 2008 and 2009 are also presented. Copyright © 2010 Elsevier B.V. All rights reserved.
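
    Accuracy (recovery) and precision (%RSD) figures of the kind quoted here reduce to a few lines of arithmetic; a minimal sketch with invented replicate data:

    ```python
    import numpy as np

    spiked = 10.0                                      # ng/g added to the matrix
    measured = np.array([9.1, 9.8, 10.4, 9.5, 10.9])   # replicate results, ng/g

    recovery = measured.mean() / spiked * 100            # accuracy, %
    rsd = measured.std(ddof=1) / measured.mean() * 100   # precision, %RSD

    print(f"recovery = {recovery:.1f}%, RSD = {rsd:.1f}%")
    ```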

  14. The Validation of a Case-Based, Cumulative Assessment and Progressions Examination

    PubMed Central

    Coker, Adeola O.; Copeland, Jeffrey T.; Gottlieb, Helmut B.; Horlen, Cheryl; Smith, Helen E.; Urteaga, Elizabeth M.; Ramsinghani, Sushma; Zertuche, Alejandra; Maize, David

    2016-01-01

    Objective. To assess the content validity, criterion validity, and reliability of an internally developed, case-based, cumulative, high-stakes third-year Annual Student Assessment and Progression Examination (P3 ASAP Exam). Methods. Content validity was assessed through the writing-reviewing process. Criterion validity was assessed by comparing student scores on the P3 ASAP Exam with scores on the nationally validated Pharmacy Curriculum Outcomes Assessment (PCOA). Reliability was assessed with psychometric analysis comparing student performance over four years. Results. The P3 ASAP Exam showed content validity through representation of didactic courses and professional outcomes. Agreement between P3 ASAP Exam and PCOA scores, assessed with the Pearson correlation coefficient, established criterion validity. Consistent student performance since 2012, assessed with the Kuder-Richardson coefficient (KR-20), reflected the reliability of the examination. Conclusion. Pharmacy schools can implement internally developed, high-stakes, cumulative progression examinations that are valid and reliable by using a robust writing-reviewing process and psychometric analyses. PMID:26941435
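
    The KR-20 coefficient mentioned here is computable directly from a binary item-score matrix; a minimal sketch on simulated exam data (no real scores are used):

    ```python
    import numpy as np

    def kr20(X):
        """Kuder-Richardson formula 20 for an examinees x items matrix of 0/1 scores."""
        k = X.shape[1]
        p = X.mean(axis=0)                      # proportion correct per item
        total_var = X.sum(axis=1).var(ddof=1)   # variance of total scores
        return k / (k - 1) * (1 - (p * (1 - p)).sum() / total_var)

    # Simulated cohort whose item responses share a common ability factor.
    rng = np.random.default_rng(0)
    ability = rng.normal(size=(120, 1))
    scores = (rng.random((120, 50)) < 1 / (1 + np.exp(-ability))).astype(int)
    print(f"KR-20 = {kr20(scores):.2f}")
    ```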

  15. 77 FR 27135 - HACCP Systems Validation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-09

    .... Comments may be submitted by either of the following methods: Federal eRulemaking Portal: This Web site... those CCPs and the method of monitoring of them and provides certificates of analysis that specify the sampling method that the supplier uses and the results of that sampling. The receiving establishment should...

  16. The Development, Validation, and User Evaluation of Foodbook24: A Web-Based Dietary Assessment Tool Developed for the Irish Adult Population

    PubMed Central

    McNulty, Breige; Kehoe, Laura; Evans, Katie; Walton, Janette; Flynn, Albert

    2017-01-01

    Background The application of technology in the area of dietary assessment has resulted in the development of an array of tools, which are often specifically designed for a particular country or region. Objective The aim of this study was to describe the development, validation, and user evaluation of a Web-based dietary assessment tool, "Foodbook24." Methods Foodbook24 is a Web-based dietary assessment tool consisting of a 24-hour dietary recall (24HDR) and food frequency questionnaire (FFQ) alongside supplementary questionnaires. Validity of the 24HDR component was assessed by 40 participants, who completed 3 nonconsecutive, self-administered 24HDR using Foodbook24 and a 4-day semi-weighed food diary at separate time points. Participants also provided fasted blood samples and 24-hour urine collections for the identification of biomarkers of nutrient and food group intake during each recording period. Statistical analyses on the nutrient and food group intake data derived from each method were performed in SPSS version 20.0 (SPSS Inc). Mean nutrient intakes (and standard deviations) recorded using each method of dietary assessment were calculated. Spearman and Pearson correlations, the Wilcoxon signed-rank test, and the paired t test were used to investigate the agreement and differences between the nutritional output from Foodbook24 (test method) and the 4-day semi-weighed food diary (reference method). Urinary and plasma biomarkers of nutrient intake were used as an objective validation of Foodbook24. To investigate the user acceptability of Foodbook24, participants from different studies involved with Foodbook24 were asked to complete an evaluation questionnaire. Results For nutrient intake, correlations between the dietary assessment methods were acceptable to very good in strength and statistically significant (range r=.32 to .75). There were some significant differences between reported mean intakes of micronutrients recorded by the two methods; however, with the exception of protein (P=.03), there were no significant differences in the reporting of energy or macronutrient intake. Of the 19 food groups investigated in this analysis, there were significant differences between the methods for 6 food groups. Spearman correlations between biomarkers of nutrient and food group intake and reported intake were similar for both methods. A total of 118 participants evaluated the acceptability of Foodbook24. The tool was well received, and the majority, 67.8% (80/118), opted for Foodbook24 as the preferred method for future dietary intake assessment when compared against a traditional interviewer-led recall and semi-weighed food diary. Conclusions The results of this study demonstrate the validity and user acceptability of Foodbook24. The results also highlight the potential of Foodbook24, a Web-based dietary assessment method, as a viable alternative for nutritional surveillance in Ireland. PMID:28495662
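
    The agreement statistics named here (Pearson, Spearman, paired t, Wilcoxon signed-rank) can be reproduced on any paired test/reference data; a minimal sketch with simulated energy intakes (numbers invented):

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical paired energy intakes (kcal/day) from the two methods.
    rng = np.random.default_rng(1)
    diary = rng.normal(2000, 350, 40)            # reference: 4-day food diary
    recall = diary + rng.normal(-40, 250, 40)    # test: Web-based 24-h recall

    r, _ = stats.pearsonr(diary, recall)
    rho, _ = stats.spearmanr(diary, recall)
    t_p = stats.ttest_rel(recall, diary).pvalue        # paired t test
    w_p = stats.wilcoxon(recall - diary).pvalue        # Wilcoxon signed-rank

    print(f"Pearson r={r:.2f}, Spearman rho={rho:.2f}, "
          f"paired-t p={t_p:.2f}, Wilcoxon p={w_p:.2f}")
    ```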

  17. Detection of Mycoplasma hyopneumoniae by polymerase chain reaction in swine presenting respiratory problems

    PubMed Central

    Yamaguti, M.; Muller, E.E.; Piffer, A.I.; Kich, J.D.; Klein, C.S.; Kuchiishi, S.S.

    2008-01-01

    Since Mycoplasma hyopneumoniae isolation in appropriate media is a difficult task and impractical for routine diagnostics, nested PCR (N-PCR) techniques are currently used to improve the direct diagnostic sensitivity for swine enzootic pneumonia. In a first experiment, this paper describes an N-PCR optimization based on three variables: sampling site, sample transport medium, and DNA extraction method, using eight pigs. Based on the optimization results, a second experiment was conducted to test validity using 40 animals. In conclusion, the results obtained from the N-PCR optimization and validation allow us to recommend this test as a routine monitoring diagnostic method for Mycoplasma hyopneumoniae infection in swine herds. PMID:24031248

  18. Measuring engagement in nurses: the psychometric properties of the Persian version of Utrecht Work Engagement Scale

    PubMed Central

    Torabinia, Mansour; Mahmoudi, Sara; Dolatshahi, Mojtaba; Abyaz, Mohamad Reza

    2017-01-01

    Background: In keeping with the overall trend in psychology, researchers in the field of work and organizational psychology have become progressively interested in employees' affective and positive experiences at work, such as work engagement. This study was conducted with 2 main purposes: assessing the psychometric properties of the Utrecht Work Engagement Scale (UWES), and examining the association between work engagement and burnout in nurses. Methods: The present methodological study was conducted in 2015 and included 248 females and 34 males with 6 months to 30 years of job experience. After the translation process, face and content validity were assessed by qualitative and quantitative methods. Moreover, the content validity ratio, scale-level content validity index, and item-level content validity index were calculated for this scale. Construct validity was determined by factor analysis, and internal consistency and stability reliability were assessed. Factor analysis, test-retest, Cronbach's alpha, and association analysis were used as statistical methods. Results: Face and content validity were acceptable. Exploratory factor analysis suggested a new 3-factor model in which the same 17 items were retained but some items loaded on different factors than in the original version. The new model, as the Persian version of the UWES, was supported by divergent validity against the Copenhagen Burnout Inventory. Internal consistency reliability for the total scale and the subscales was 0.76 to 0.89. Results from the Pearson correlation test indicated a high degree of test-retest reliability (r = 0.89), and the ICC was 0.91. Engagement was negatively related to burnout and overtime per month, whereas it was positively related to age and job experience. Conclusion: The Persian 3-factor model of the Utrecht Work Engagement Scale is a valid and reliable instrument for measuring work engagement in Iranian nurses as well as in other medical professionals. PMID:28955665
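
    The exploratory factor analysis step can be sketched as follows, assuming simulated 17-item data with a planted 3-factor structure (nothing here comes from the study's data):

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical data: 282 respondents x 17 items driven by 3 latent factors,
    # loosely mimicking the vigor/dedication/absorption structure of the UWES.
    rng = np.random.default_rng(2)
    latent = rng.normal(size=(282, 3))
    loading = np.zeros((17, 3))
    loading[:6, 0] = loading[6:11, 1] = loading[11:, 2] = 1.0
    X = latent @ loading.T + rng.normal(scale=0.7, size=(282, 17))

    fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
    item_factor = np.abs(fa.components_.T).argmax(axis=1)  # item -> factor map
    print(item_factor)   # recovers the planted 6/5/6 item grouping
    ```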

  19. Stereological Analysis of Liver Biopsy Histology Sections as a Reference Standard for Validating Non-Invasive Liver Fat Fraction Measurements by MRI

    PubMed Central

    St. Pierre, Tim G.; House, Michael J.; Bangma, Sander J.; Pang, Wenjie; Bathgate, Andrew; Gan, Eng K.; Ayonrinde, Oyekoya T.; Bhathal, Prithi S.; Clouston, Andrew; Olynyk, John K.; Adams, Leon A.

    2016-01-01

    Background and Aims Validation of non-invasive methods of liver fat quantification requires a reference standard. However, using standard histopathology assessment of liver biopsies is problematic because of poor repeatability. We aimed to assess a stereological method of measuring volumetric liver fat fraction (VLFF) in liver biopsies and to use the method to validate a magnetic resonance imaging method for measurement of VLFF. Methods VLFFs were measured in 59 subjects (1) by three independent analysts using a stereological point-counting technique combined with the Delesse principle on liver biopsy histological sections, and (2) by three independent analysts using the HepaFat-Scan® technique on magnetic resonance images of the liver. Bland-Altman statistics and intraclass correlation (ICC) were used to assess the repeatability of each method and the bias between the methods of liver fat fraction measurement. Results Inter-analyst repeatability coefficients for the stereology and HepaFat-Scan® methods were 8.2 (95% CI 7.7–8.8)% and 2.4 (95% CI 2.2–2.5)% VLFF, respectively. ICCs were 0.86 (95% CI 0.69–0.93) and 0.990 (95% CI 0.985–0.994), respectively. Small biases (≤3.4%) were observable between two pairs of analysts using stereology, while no significant biases were observable between any of the three pairs of analysts using HepaFat-Scan®. A bias of 1.4±0.5% VLFF was observed between the HepaFat-Scan® method and the stereological method. Conclusions Repeatability of the stereological method is superior to the previously reported performance of assessment of hepatic steatosis by histopathologists, and the method is a suitable reference standard for validating non-invasive methods of measurement of VLFF. PMID:27501242
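
    The inter-analyst repeatability coefficient reported above is a Bland-Altman quantity; a minimal sketch with invented paired VLFF readings:

    ```python
    import numpy as np

    def repeatability_coefficient(a, b):
        """Bland-Altman coefficient of repeatability for paired readings of
        the same quantity by two analysts: 1.96 x SD of the differences."""
        d = np.asarray(a) - np.asarray(b)
        return 1.96 * d.std(ddof=1), d.mean()   # (repeatability, bias)

    # Hypothetical volumetric liver fat fractions (%) from two analysts.
    analyst1 = np.array([5.1, 12.3, 8.7, 22.5, 3.2, 15.8])
    analyst2 = np.array([6.0, 11.1, 9.9, 20.8, 4.1, 14.9])
    rc, bias = repeatability_coefficient(analyst1, analyst2)
    print(f"repeatability = {rc:.1f}% VLFF, bias = {bias:.1f}% VLFF")
    ```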

  20. Validity and Reliability of the Turkish Chronic Pain Acceptance Questionnaire

    PubMed Central

    Akmaz, Hazel Ekin; Uyar, Meltem; Kuzeyli Yıldırım, Yasemin; Akın Korhan, Esra

    2018-01-01

    Background: Pain acceptance is the process of giving up the struggle with pain and learning to live a worthwhile life despite it. In assessing patients with chronic pain in Turkey, diagnosis and tracking of treatment effectiveness are done with scales that have been translated into Turkish. However, there is as yet no valid and reliable scale in Turkish to assess the acceptance of pain. Aims: To validate a Turkish version of the Chronic Pain Acceptance Questionnaire developed by McCracken and colleagues. Study Design: Methodological, cross-sectional study. Methods: Simple random sampling was used in selecting the study sample. The sample comprised 201 patients, more than 10 times the scale's 20 items examined for validity and reliability. A patient identification form, the Chronic Pain Acceptance Questionnaire, and the Brief Pain Inventory were used to collect data. Data were collected by face-to-face interviews. In validity testing, the content validity index was used to evaluate linguistic equivalence, content validity, and expert views, and construct validity was also examined. In reliability testing of the scale, Cronbach's α coefficient was calculated, and item analysis and split-half reliability methods were used. Principal component analysis and varimax rotation were used in factor analysis to examine the factor structure for construct validity. Results: The item analysis established that the scale, all items, and item-total correlations were satisfactory. The mean total score of the scale was 21.78. The internal consistency coefficient was 0.94, and the correlation between the two halves of the scale was 0.89. Conclusion: The Chronic Pain Acceptance Questionnaire is an evaluation instrument with sufficient validity and reliability for use in Turkey, and it can be reliably used to examine patients' acceptance of chronic pain. PMID:29843496
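
    Split-half reliability with the Spearman-Brown correction, as used here, looks like this on simulated data (a sketch, not the study's computation):

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical data: 201 patients x 20 Likert-type items (0-6) sharing a
    # common underlying trait, so the two halves correlate.
    rng = np.random.default_rng(3)
    trait = rng.normal(3, 1, size=(201, 1))
    X = np.clip(np.rint(trait + rng.normal(0, 1, size=(201, 20))), 0, 6)

    odd, even = X[:, 0::2].sum(axis=1), X[:, 1::2].sum(axis=1)
    r_half = stats.pearsonr(odd, even)[0]
    r_full = 2 * r_half / (1 + r_half)       # Spearman-Brown prophecy formula
    print(f"split-half r = {r_half:.2f}, corrected = {r_full:.2f}")
    ```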

  1. Validating e-learning in continuing pharmacy education: user acceptance and knowledge change

    PubMed Central

    2014-01-01

    Background Continuing pharmacy education is becoming mandatory in most countries in order to keep the professional license valid. An increasing number of pharmacists now use e-learning as part of their continuing education. Consequently, the increasing popularity of this method of education calls for standardization and validation practices. The research reported here explored validation aspects of e-learning in terms of knowledge increase and user acceptance. Methods Two e-courses were conducted as e-based continuing pharmacy education for graduated pharmacists. Knowledge increase and user acceptance were the two outcomes measured. The change of knowledge in the first e-course was measured by a pre- and post-test, and the results were analysed by the Wilcoxon signed-rank test. The acceptance of e-learning in the second e-course was investigated by a questionnaire, and the results were analysed using descriptive statistics. Results Knowledge increased significantly (p < 0.001), by 16 percentage points, after participation in the first e-course. Among the participants who responded to the survey in the second course, 92% stated that e-courses were effective and 91% stated that they enjoyed the course. Conclusions The study shows that e-learning is a viable medium for conducting continuing pharmacy education; e-learning is effective in increasing knowledge and highly accepted by pharmacists from various working environments such as community and hospital pharmacies, faculties of pharmacy, and wholesale companies. PMID:24528547
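
    The pre/post comparison by Wilcoxon signed-rank test is a one-liner in most statistics libraries; a minimal sketch with invented scores:

    ```python
    import numpy as np
    from scipy import stats

    pre = np.array([55, 60, 48, 70, 62, 58, 65, 52])    # hypothetical % scores
    post = np.array([72, 75, 66, 85, 80, 70, 84, 69])

    stat, p = stats.wilcoxon(pre, post)                  # paired, non-parametric
    print(f"median change = {np.median(post - pre):.0f} pp, p = {p:.4f}")
    ```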

  2. An IMU-to-Body Alignment Method Applied to Human Gait Analysis.

    PubMed

    Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo

    2016-12-10

    This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need for any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured regardless of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of the simplified joint and the real gait test with human volunteers, the method also performs correctly, although secondary-plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option for solving the alignment problem when using IMUs for gait analysis.
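
    The core of any IMU-to-segment alignment is composing sensor orientations with a calibration rotation and forming the relative joint rotation; a minimal sketch (conventions and values are ours, not the paper's):

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    # Ground-truth segment orientations (world <- segment): thigh at 70 deg,
    # shank at 30 deg about the flexion axis, i.e. 40 deg of knee flexion.
    thigh_seg = R.from_euler("y", 70, degrees=True)
    shank_seg = R.from_euler("y", 30, degrees=True)

    # Arbitrary mounting of each IMU on its segment (segment <- sensor),
    # assumed recovered by the calibration routine.
    mount_thigh = R.from_euler("xyz", [10, -5, 3], degrees=True)
    mount_shank = R.from_euler("xyz", [-7, 12, -2], degrees=True)

    # What the sensors report (world <- sensor):
    imu_thigh = thigh_seg * mount_thigh
    imu_shank = shank_seg * mount_shank

    # Apply the sensor-to-segment calibration, then form the joint rotation.
    thigh = imu_thigh * mount_thigh.inv()
    shank = imu_shank * mount_shank.inv()
    knee = thigh.inv() * shank
    print(knee.as_euler("yxz", degrees=True))   # ~ [-40, 0, 0]
    ```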

  3. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    NASA Astrophysics Data System (ADS)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multimedia, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies provided data on multi-sensory interaction and audio-visual synchronization timing that informed the design of the new sound synthesis method. The method uses a four-phase development process, comprising analysis, parameterization, synthesis, and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable to the validation of any sound synthesis technique.
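
    The analysis-parameterization-synthesis loop for stochastic components can be caricatured with a wavelet decomposition whose per-band energies are the model parameters; a minimal sketch, assuming the PyWavelets package (an illustration, not the patented method):

    ```python
    import numpy as np
    import pywt  # PyWavelets

    # Analysis: decompose a recorded sound into wavelet coefficients.
    fs = 8000
    t = np.arange(0, 1.0, 1 / fs)
    sound = np.random.default_rng(4).normal(size=t.size)  # stand-in "wind" noise

    coeffs = pywt.wavedec(sound, "db4", level=5)

    # Parameterization: keep only per-band RMS energies (a very compact model).
    energies = [np.sqrt(np.mean(c ** 2)) for c in coeffs]

    # Synthesis: re-draw each band as noise with the stored energy, then invert.
    rng = np.random.default_rng(5)
    new_coeffs = [rng.normal(scale=e, size=c.size) for e, c in zip(energies, coeffs)]
    resynth = pywt.waverec(new_coeffs, "db4")   # perceptually similar texture
    ```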

  4. Simultaneous determination of α-asarone and β-asarone in Acorus tatarinowii using excitation-emission matrix fluorescence coupled with chemometrics methods

    NASA Astrophysics Data System (ADS)

    Bai, Xue-Mei; Liu, Tie; Liu, De-Long; Wei, Yong-Ju

    2018-02-01

    A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method was proposed for the simultaneous determination of α-asarone and β-asarone in Acorus tatarinowii. Using the strategy of combining EEM data with chemometrics methods, the simultaneous determination of α-asarone and β-asarone in this complex Traditional Chinese Medicine system was achieved successfully, even in the presence of unexpected interferents. A physical or chemical separation step was avoided through the use of "mathematical separation". Six second-order calibration methods were used: parallel factor analysis (PARAFAC), alternating trilinear decomposition (ATLD), alternating penalty trilinear decomposition (APTLD), self-weighted alternating trilinear decomposition (SWATLD), and unfolded partial least-squares (U-PLS) and multidimensional partial least-squares (N-PLS), both with residual bilinearization (RBL). In addition, an HPLC method was developed to further validate the presented strategy. For the validation samples, the analytical results obtained by the six second-order calibration methods were all acceptably accurate; for the Acorus tatarinowii samples, however, the results indicated a slightly better predictive ability for the N-PLS/RBL procedure than for the other methods.

  5. Developing Enhanced Blood–Brain Barrier Permeability Models: Integrating External Bio-Assay Data in QSAR Modeling

    PubMed Central

    Wang, Wenyi; Kim, Marlene T.; Sedykh, Alexander

    2015-01-01

    Purpose Experimental Blood–Brain Barrier (BBB) permeability models for drug molecules are expensive and time-consuming. As alternative methods, several traditional Quantitative Structure-Activity Relationship (QSAR) models have been developed previously. In this study, we aimed to improve the predictivity of traditional QSAR BBB permeability models by employing relevant public bio-assay data in the modeling process. Methods We compiled a BBB permeability database consisting of 439 unique compounds from various sources. The database was split into a modeling set of 341 compounds and a validation set of 98 compounds. A consensus QSAR modeling workflow was employed on the modeling set to develop various QSAR models. A five-fold cross-validation approach was used to validate the developed models, and the resulting models were used to predict the external validation set compounds. Furthermore, we used previously published membrane transporter models to generate relevant transporter profiles for target compounds. The transporter profiles were used as additional biological descriptors to develop hybrid QSAR BBB models. Results The consensus QSAR models have R2=0.638 for five-fold cross-validation and R2=0.504 for external validation. The consensus model developed by pooling chemical and transporter descriptors showed better predictivity (R2=0.646 for five-fold cross-validation and R2=0.526 for external validation). Moreover, several external bio-assays that correlate with BBB permeability were identified using our automatic profiling tool. Conclusions The BBB permeability models developed in this study can be useful for early evaluation of new compounds (e.g., new drug candidates). The combination of chemical and biological descriptors shows a promising direction for improving current traditional QSAR models. PMID:25862462
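
    Five-fold cross-validation of a QSAR-style regressor follows a standard pattern; a minimal sketch on synthetic descriptors (the model and data merely stand in for the paper's consensus workflow):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_predict, KFold
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(6)
    X = rng.normal(size=(341, 200))             # hypothetical chemical descriptors
    y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=341)  # logBB stand-in

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    y_hat = cross_val_predict(model, X, y, cv=cv)   # out-of-fold predictions
    print(f"five-fold CV R^2 = {r2_score(y, y_hat):.2f}")
    ```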

  6. Development and validation of a headspace gas chromatographic method for the determination of residual solvents in arterolane (RBx11160) maleate bulk drug

    PubMed Central

    Gupta, Abhishek; Singh, Yogendra; Srinivas, Kona S.; Jain, Garima; Sreekumar, V. B.; Semwal, Vinod Prasad

    2010-01-01

    Objective: Arterolane maleate is an antimalarial drug currently under Phase III clinical evaluation; it presents a simple, economical, and scalable synthesis and does not suffer from safety problems. Arterolane maleate is more active than artemisinin and is cheap to produce. It has a longer lifetime in plasma, so it stays active longer in the body. To provide quality control over the manufacture of any active pharmaceutical ingredient (API), it is essential to develop highly selective analytical methods. In the current article we report the development and validation of a rapid and specific headspace gas chromatographic (HSGC) method for the determination of organic volatile impurities (residual solvents) in arterolane maleate bulk drug. Materials and Methods: The method development and validation were performed on a Perkin Elmer gas chromatographic system equipped with a flame ionization detector (FID) and a headspace analyzer. The method involved a temperature-programmed separation of the ten residual solvents present in the arterolane maleate salt on an RTx-624 column (30 m × 0.32 mm, 1.8 μm film thickness) using nitrogen as the carrier gas at a flow rate of 0.5 mL/min. Results: During method validation, parameters such as precision, linearity, accuracy, limits of detection and quantification, and specificity were evaluated, and all remained within acceptable limits. Conclusions: The method has been successfully applied for quantifying the residual solvents present in arterolane maleate bulk drug, and it presents a simple and reliable solution for the routine quantitative analysis of residual solvents in arterolane maleate bulk drug. PMID:21814428

  7. Development and validation of a stability-indicating capillary zone electrophoretic method for the assessment of entecavir and its correlation with liquid chromatographic methods.

    PubMed

    Dalmora, Sergio Luiz; Nogueira, Daniele Rubert; D'Avila, Felipe Bianchini; Souto, Ricardo Bizogne; Leal, Diogo Paim

    2011-01-01

    A stability-indicating capillary zone electrophoresis (CZE) method was validated for the analysis of entecavir in pharmaceutical formulations, using nimesulide as an internal standard. A fused-silica capillary (50 µm i.d.; effective length, 40 cm) maintained at 25°C was used, with an applied voltage of 25 kV. The background electrolyte was a 20 mM sodium tetraborate solution at pH 10. Injections were performed in pressure mode at 50 mbar for 5 s, with detection at 216 nm. The specificity and stability-indicating capability were proven through forced degradation studies, along with in vitro cytotoxicity testing of the degradation products. The method was linear over the concentration range of 1-200 µg mL(-1) (r(2) = 0.9999) and was applied to the analysis of entecavir in tablet dosage forms. The results were correlated with those of validated conventional and fast LC methods, showing non-significant differences (p > 0.05).

  8. Validation of a reversed-phase high-performance liquid chromatographic method for the determination of free amino acids in rice using l-theanine as the internal standard.

    PubMed

    Liyanaarachchi, G V V; Mahanama, K R R; Somasiri, H P P S; Punyasiri, P A N

    2018-02-01

    This study presents the validation results of a method for the analysis of free amino acids (FAAs) in rice using l-theanine as the internal standard (IS), with o-phthalaldehyde (OPA) derivatization and high-performance liquid chromatography-fluorescence detection. The detection and quantification limits of the method were in the ranges 2-16 μmol/kg and 3-19 μmol/kg, respectively. The method had a wide working range, from 25 to 600 μmol/kg for each individual amino acid, and good linearity, with regression coefficients greater than 0.999. Precision, measured in terms of repeatability and reproducibility and expressed as percentage relative standard deviation (% RSD), was below 9% for all the amino acids analyzed. The recoveries obtained after fortification at three concentration levels were in the range 75-105%. In comparison to l-norvaline, the findings revealed that l-theanine is suitable as an IS, and the validated method can be used for FAA determination in rice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Validated spectrofluorimetric method for the determination of carbamazepine in pharmaceutical dosage forms after reaction with 4-chloro-7-nitrobenzo-2-oxa-1,3-diazole (NBD-Cl).

    PubMed

    Walash, Mohammed I; El-Enany, Nahed; Askar, Hanany

    2015-11-01

    A sensitive and simple spectrofluorimetric method has been developed and validated for the determination of the anti-epileptic drug carbamazepine (CBZ) in its dosage forms. The method was based on a nucleophilic substitution reaction of CBZ with 4-chloro-7-nitrobenzo-2-oxa-1,3-diazole (NBD-Cl) in borate buffer (pH 9) to form a highly fluorescent derivative that was measured at 530 nm after excitation at 460 nm. Factors affecting the formation of the reaction product were studied and optimized, and the reaction mechanism was postulated. The fluorescence-concentration plot is rectilinear over the range of 0.6-8 µg/mL, with a limit of detection of 0.06 µg/mL and a limit of quantitation of 0.19 µg/mL. The method was applied to the analysis of commercial tablets, and the results were in good agreement with those obtained using the reference method. The analytical procedures were validated according to ICH guidelines. Copyright © 2015 John Wiley & Sons, Ltd.
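
    Limits of detection and quantitation of this kind typically follow the ICH Q2(R1) calibration-curve formulas, LOD = 3.3σ/S and LOQ = 10σ/S; a minimal sketch with invented calibration points:

    ```python
    import numpy as np

    # Hypothetical calibration data (concentration in ug/mL vs fluorescence).
    conc = np.array([0.6, 1.0, 2.0, 4.0, 6.0, 8.0])
    signal = np.array([61, 99, 204, 398, 601, 795])

    slope, intercept = np.polyfit(conc, signal, 1)
    residuals = signal - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)      # SD of the regression residuals

    lod = 3.3 * sigma / slope          # ICH Q2(R1) detection limit
    loq = 10.0 * sigma / slope         # ICH Q2(R1) quantitation limit
    print(f"LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")
    ```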

  10. Multi-body modeling method for rollover using MADYMO

    NASA Astrophysics Data System (ADS)

    Liu, Changye; Lin, Zhigui; Lv, Juncheng; Luo, Qinyue; Qin, Zhenyao; Zhang, Pu; Chen, Tao

    2017-04-01

    Rollovers are complex road accidents that cause a large number of fatalities. A finite element (FE) model for rollover study costs too much computation time owing to the event's long duration. A new multi-body modeling method is proposed in this paper which saves substantial time while maintaining high fidelity. The following work was carried out to validate the new method. First, a small van was tested following the FMVSS 208 protocol for validation of the proposed modeling method. Second, a MADYMO model of this small van was reconstructed. The vehicle body was divided into two main parts, the deformable upper body and the rigid lower body, modeled in different ways based on an FE model. The specific modeling method is described in this paper. Finally, the trajectories of the vehicle from test and simulation were compared, and the match was very good. The acceleration of the left B-pillar was also considered and fit the test result well over the duration of the event. The final deformation of the vehicle in test and simulation showed a similar trend. This validated model provides a reliable basis for further research on occupant injuries during rollovers.

  11. Validation of standard method EN ISO 11290-part 2 for the enumeration of Listeria monocytogenes in food.

    PubMed

    Rollier, Patricia; Lombard, Bertrand; Guillier, Laurent; François, Danièle; Romero, Karol; Pierru, Sylvie; Bouhier, Laurence; Gnanou Besse, Nathalie

    2018-05-01

    The reference methods for the detection and enumeration of L. monocytogenes in food (Standards EN ISO 11290-1 and 11290-2) have been validated by inter-laboratory studies in the framework of Mandate M381 from the European Commission to CEN. In this paper, the inter-laboratory studies conducted in 2013 on 5 matrices (cold-smoked salmon, powdered milk infant formula, vegetables, environment, and cheese) to validate Standard EN ISO 11290-2 are reported. According to the results obtained, the method of the revised Standard EN ISO 11290-2 can be considered a good method for the enumeration of L. monocytogenes in foods and the food processing environment, in particular for the matrices included in the study. Values of repeatability and reproducibility standard deviations can be considered satisfactory for this type of method with a confirmation stage, since most were below 0.3 log10, even at low levels close to the regulatory limit of 100 CFU/g. Copyright © 2018 Elsevier B.V. All rights reserved.
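
    Repeatability (sr) and reproducibility (sR) standard deviations from an inter-laboratory study follow the ISO 5725 variance decomposition; a minimal sketch with invented lab data:

    ```python
    import numpy as np

    # Hypothetical inter-laboratory counts, log10 CFU/g: rows = labs, cols = replicates.
    counts = np.array([[2.10, 2.25], [2.05, 2.00], [2.30, 2.18],
                       [1.95, 2.08], [2.12, 2.22]])
    n = counts.shape[1]

    s_r2 = counts.var(axis=1, ddof=1).mean()          # within-lab (repeatability) variance
    s_lab2 = max(counts.mean(axis=1).var(ddof=1) - s_r2 / n, 0.0)  # between-lab variance
    s_R = np.sqrt(s_r2 + s_lab2)                      # reproducibility SD

    print(f"sr = {np.sqrt(s_r2):.2f} log10, sR = {s_R:.2f} log10")
    ```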

  12. In Vitro Validation of Rapid MR Measurement of Wave Velocity

    PubMed

    Kraft; Fatouros; Corwin; Fei

    1997-05-01

    A one-dimensional time-of-flight MR sequence, having a total acquisition time of approximately 60 ms, has been employed to determine flow-wave propagation velocities for pulsatile flow in compliant latex tubes. The results were compared with those of two independent methods and were found to be in good agreement. An extension of the same MR method was used to test the validity of the "water-hammer" relationship as a means to assess pulse pressure. Very good agreement was found with direct manometric determinations of pulse pressure.
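
    The water-hammer relationship referred to here links pulse pressure to wave speed and the change in flow velocity, ΔP = ρcΔv; a worked example with assumed values (not the study's measurements):

    ```python
    # Water-hammer estimate of pulse pressure from flow-wave velocity.
    rho = 1060.0      # kg/m^3, assumed blood-like fluid density
    c = 6.0           # m/s, measured wave propagation velocity
    dv = 0.8          # m/s, peak change in flow velocity

    dP_pa = rho * c * dv              # pulse pressure in pascals
    print(f"pulse pressure ~ {dP_pa / 133.322:.0f} mmHg")   # ~38 mmHg
    ```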

  13. Measurement of patient safety: a systematic review of the reliability and validity of adverse event detection with record review

    PubMed Central

    Hanskamp-Sebregts, Mirelle; Zegers, Marieke; Vincent, Charles; van Gurp, Petra J; de Vet, Henrica C W; Wollersheim, Hub

    2016-01-01

    Objectives Record review is the most widely used method to quantify patient safety. We systematically reviewed the reliability and validity of adverse event detection with record review. Design A systematic review of the literature. Methods We searched PubMed, EMBASE, CINAHL, PsycINFO and the Cochrane Library from their inception through February 2015. We included all studies that aimed to describe the reliability and/or validity of record review. Two reviewers conducted data extraction. We pooled kappa (κ) values and analysed the differences in subgroups according to number of reviewers, reviewer experience and training level, adjusted for the prevalence of adverse events. Results In 25 studies, the psychometric data of the Global Trigger Tool (GTT) and the Harvard Medical Practice Study (HMPS) method were reported, and 24 studies were included for statistical pooling. The inter-rater reliability of the GTT and HMPS showed pooled κ values of 0.65 and 0.55, respectively. The inter-rater agreement was statistically significantly higher when the group of reviewers within a study consisted of at most five reviewers. We found no studies reporting on the validity of the GTT and HMPS. Conclusions The reliability of record review is moderate to substantial and improved when a small group of reviewers carried out the record review. The validity of the record review method has never been evaluated, while clinical data registries, autopsy or direct observations of patient care are potential reference methods that could be used to test concurrent validity. PMID:27550650
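
    Pooling κ values across studies can be sketched as computing Cohen's κ per study and size-weighting; a minimal illustration with invented 2×2 agreement tables (the review's actual pooling model may differ):

    ```python
    import numpy as np

    def cohens_kappa(table):
        """Cohen's kappa from a 2x2 agreement table between two reviewers."""
        table = np.asarray(table, dtype=float)
        n = table.sum()
        po = np.trace(table) / n                          # observed agreement
        pe = (table.sum(0) * table.sum(1)).sum() / n**2   # chance agreement
        return (po - pe) / (1 - pe)

    # Hypothetical per-study tables; pool weighted by study size.
    studies = [np.array([[30, 5], [7, 58]]), np.array([[12, 4], [3, 41]])]
    sizes = np.array([t.sum() for t in studies])
    kappas = np.array([cohens_kappa(t) for t in studies])
    pooled = (kappas * sizes).sum() / sizes.sum()
    print(f"pooled kappa = {pooled:.2f}")
    ```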

  14. Screening for Posttraumatic Stress Disorder among Somali ex-combatants: A validation study

    PubMed Central

    Odenwald, Michael; Lingenfelder, Birke; Schauer, Maggie; Neuner, Frank; Rockstroh, Brigitte; Hinkel, Harald; Elbert, Thomas

    2007-01-01

    Background In Somalia, a large number of active and former combatants are affected by psychological problems such as Posttraumatic Stress Disorder (PTSD). This disorder impairs their ability to re-integrate into civilian life. However, many screening instruments for Posttraumatic Stress Disorder used in post-conflict settings have limited validity. Here we report on the development and validation of a screening tool for PTSD in the Somali language with a sample of ex-combatants. Methods We adapted the Posttraumatic Diagnostic Scale (PDS) to reflect linguistic and cultural differences within the Somali community so that local interviewers could be trained to administer the scale. For validation purposes, a randomly selected group of 135 Somali ex-combatants was screened by trained local interviewers; 64 of them were then re-assessed by trained clinical psychologists using the Composite International Diagnostic Interview (CIDI) and the Self-Report Questionnaire (SRQ-20). Results The screening instrument showed good internal consistency (Cronbach's α = .86) and convergent validity with the CIDI (sensitivity = .90; specificity = .90), as well as concurrent validity: positive cases showed higher SRQ-20 scores, higher prevalence of psychotic symptoms, and higher levels of intake of the local stimulant drug khat. Compared to a single cut-off score, multi-criteria scoring in keeping with the DSM-IV produced greater diagnostic specificity. Conclusion The results provide evidence that our screening instrument is a reliable and valid method for detecting PTSD among Somali ex-combatants. A future Disarmament, Demobilization and Reintegration Program in Somalia is recommended to screen for PTSD in order to identify ex-combatants with special psycho-social needs. PMID:17822562
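
    Sensitivity and specificity against a reference diagnosis come straight from the 2×2 table; a minimal sketch with invented counts:

    ```python
    # Hypothetical 2x2 table: screening result vs CIDI reference diagnosis.
    tp, fn = 27, 3     # reference-positive cases: screened positive / negative
    fp, tn = 4, 30     # reference-negative cases: screened positive / negative

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
    ```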

  15. A Psychometric Analysis of the Italian Version of the eHealth Literacy Scale Using Item Response and Classical Test Theory Methods

    PubMed Central

    Dima, Alexandra Lelia; Schulz, Peter Johannes

    2017-01-01

    Background The eHealth Literacy Scale (eHEALS) is a tool to assess consumers’ comfort and skills in using information technologies for health. Although evidence exists of reliability and construct validity of the scale, less agreement exists on structural validity. Objective The aim of this study was to validate the Italian version of the eHealth Literacy Scale (I-eHEALS) in a community sample with a focus on its structural validity, by applying psychometric techniques that account for item difficulty. Methods Two Web-based surveys were conducted among a total of 296 people living in the Italian-speaking region of Switzerland (Ticino). After examining the latent variables underlying the observed variables of the Italian scale via principal component analysis (PCA), fit indices for two alternative models were calculated using confirmatory factor analysis (CFA). The scale structure was examined via parametric and nonparametric item response theory (IRT) analyses accounting for differences between items regarding the proportion of answers indicating high ability. Convergent validity was assessed by correlations with theoretically related constructs. Results CFA showed a suboptimal model fit for both models. IRT analyses confirmed all items measure a single dimension as intended. Reliability and construct validity of the final scale were also confirmed. The contrasting results of factor analysis (FA) and IRT analyses highlight the importance of considering differences in item difficulty when examining health literacy scales. Conclusions The findings support the reliability and validity of the translated scale and its use for assessing Italian-speaking consumers’ eHealth literacy. PMID:28400356

  16. A simple mass-conserved level set method for simulation of multiphase flows

    NASA Astrophysics Data System (ADS)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for the simulation of multiphase flows with large density ratios and high Reynolds numbers. The present method simply introduces a source or sink term into the level set equation to compensate for mass loss or offset mass gain. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it guarantees overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method is capable of accurately capturing the interface while conserving mass. The proposed method is then further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with a high density ratio, and the Rayleigh-Taylor instability at a high Reynolds number. Numerical results show that mass is well conserved by the present method.
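
    A hedged sketch of the construction in equations (notation ours, not necessarily the paper's): the level set function φ is advected as usual, with a spatially uniform source S chosen so that the volume enclosed by the zero level set stays constant:

    ```latex
    % Level set advection with a mass-correction source term S:
    \frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = S
    % Enforcing d/dt of the enclosed volume to vanish,
    %   d/dt \int_\Omega H(\phi)\,d\Omega
    %     = \int_\Omega \delta(\phi)\,\partial_t\phi\,d\Omega = 0,
    % and taking S uniform in space gives
    S = \frac{\int_{\Omega} \delta(\phi)\,\mathbf{u}\cdot\nabla\phi\,d\Omega}
             {\int_{\Omega} \delta(\phi)\,d\Omega}
    ```

    Here H is the Heaviside function marking one phase and δ = H' its smoothed surface delta; S > 0 compensates mass loss and S < 0 offsets mass gain.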

  17. Development and validation of multivariate calibration methods for simultaneous estimation of Paracetamol, Enalapril maleate and hydrochlorothiazide in pharmaceutical dosage form

    NASA Astrophysics Data System (ADS)

    Singh, Veena D.; Daharwal, Sanjay J.

    2017-01-01

    Three multivariate calibration spectrophotometric methods were developed for the simultaneous estimation of Paracetamol (PARA), Enalapril maleate (ENM), and Hydrochlorothiazide (HCTZ) in tablet dosage form, namely the multi-linear regression calibration (MLRC), trilinear regression calibration (TLRC), and classical least squares (CLS) methods. The selectivity of the proposed methods was studied by analyzing laboratory-prepared ternary mixtures, and the methods were successfully applied to the combined dosage form. The proposed methods were validated as per ICH guidelines, and good accuracy, precision, and specificity were confirmed within the concentration ranges of 5-35 μg mL-1, 5-40 μg mL-1, and 5-40 μg mL-1 for PARA, HCTZ, and ENM, respectively. The results were statistically compared with those of a reported HPLC method. Thus, the proposed methods can be effectively used for the routine quality control analysis of these drugs in commercial tablet dosage form.
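
    The CLS member of this family can be condensed to two least-squares steps, calibration and prediction; a minimal sketch on synthetic spectra (all values invented):

    ```python
    import numpy as np

    # Classical least squares: A = C @ K, with A (mixtures x wavelengths),
    # C (mixtures x components), K (components x wavelengths).
    rng = np.random.default_rng(7)
    K_true = rng.random((3, 60))                    # pure-component spectra
    C_cal = rng.random((10, 3)) * 30                # calibration concentrations
    A_cal = C_cal @ K_true + rng.normal(0, 0.01, (10, 60))

    K_hat, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)   # calibration step

    a_unknown = np.array([20.0, 10.0, 5.0]) @ K_true        # "measured" mixture
    c_hat, *_ = np.linalg.lstsq(K_hat.T, a_unknown, rcond=None)  # prediction step
    print(np.round(c_hat, 1))   # ~ [20. 10.  5.]
    ```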

  18. Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.

    PubMed

    Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier

    2017-02-01

    The data to which the authors refer throughout this article are likelihood ratios (LRs) computed from the comparison of 5-12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. They are a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. These data can also be used as a baseline for comparing fingermark evidence in the same minutiae configuration as presented in Meuwly, Ramos, and Haraksim [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports for forensic methods, according to [1]. Alongside the data, a justification and motivation for the choice of methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validating the LR methods, which are used to calculate the LR values from the data and the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared, but these images do not constitute the core data for the validation, unlike the LRs, which are shared.
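
    A common summary metric in this kind of LR-method validation is the log-likelihood-ratio cost (Cllr); a minimal sketch with invented LR values:

    ```python
    import numpy as np

    def cllr(lr_same, lr_diff):
        """Log-likelihood-ratio cost: lower is better, 1.0 ~ uninformative."""
        lr_same = np.asarray(lr_same, float)   # LRs for same-source comparisons
        lr_diff = np.asarray(lr_diff, float)   # LRs for different-source comparisons
        return 0.5 * (np.mean(np.log2(1 + 1 / lr_same)) +
                      np.mean(np.log2(1 + lr_diff)))

    print(f"Cllr = {cllr([20, 8, 150, 3], [0.05, 0.4, 0.01, 0.2]):.3f}")
    ```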

  19. Development and validation of a simplified titration method for monitoring volatile fatty acids in anaerobic digestion.

    PubMed

    Sun, Hao; Guo, Jianbin; Wu, Shubiao; Liu, Fang; Dong, Renjie

    2017-09-01

    The volatile fatty acid (VFA) concentration is considered one of the most sensitive process performance indicators in the anaerobic digestion (AD) process. However, accurate determination of the VFA concentration in AD processes normally requires advanced equipment and complex pretreatment procedures. A simplified method with fewer sample pretreatment procedures and improved accuracy is greatly needed, particularly for on-site application. This report outlines improvements to the Nordmann method, one of the most popular titrations used for VFA monitoring. The influence of interfering ionic and solid subsystems in titrated samples on result accuracy is discussed. The total solids content of titrated samples was the main factor affecting accuracy in VFA monitoring, and a strong linear correlation was established between total solids content and the difference between VFA values from the traditional Nordmann equation and gas chromatography (GC). Accordingly, a simplified titration method was developed and validated using a semi-continuous experiment on chicken manure anaerobic digestion at various organic loading rates. The good fit of the results obtained by this method to the GC results strongly supports the potential application of this method to VFA monitoring. Copyright © 2017. Published by Elsevier Ltd.
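
    The reported correlation suggests a simple solids-based correction of the titration result; a minimal sketch in which every number is invented for illustration:

    ```python
    import numpy as np

    # Hypothetical paired data: total solids (%) and the error of the
    # titration VFA result relative to GC (g/L) at each solids level.
    ts = np.array([1.0, 2.5, 4.0, 6.0, 8.0])
    error = np.array([0.1, 0.4, 0.8, 1.3, 1.9])   # titration minus GC

    a, b = np.polyfit(ts, error, 1)               # linear TS-vs-error relation

    def corrected_vfa(vfa_titration, total_solids):
        """Subtract the solids-dependent bias predicted by the linear fit."""
        return vfa_titration - (a * total_solids + b)

    print(f"corrected VFA = {corrected_vfa(3.0, 5.0):.2f} g/L")
    ```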

  20. Smart stability-indicating spectrophotometric methods for determination of binary mixtures without prior separation.

    PubMed

    El-Bardicy, Mohammad G; Lotfy, Hayam M; El-Sayed, Mohammad A; El-Tarras, Mohammad F

    2008-01-01

    Ratio subtraction and isosbestic point methods are two innovative spectrophotometric methods used to determine vincamine in the presence of its acid degradation product, and a mixture of cinnarizine (CN) and nicergoline (NIC). Linear correlations were obtained in the concentration ranges of 8-40 µg/mL for vincamine (I), 6-22 µg/mL for CN (II), and 6-36 µg/mL for NIC (III), with mean accuracies of 99.72 ± 0.917% for I, 99.91 ± 0.703% for II, and 99.58 ± 0.847 and 99.83 ± 1.039% for III. The ratio subtraction method was utilized for the analysis of laboratory-prepared mixtures containing different ratios of vincamine and its degradation product, and it was valid in the presence of up to 80% degradation product. CN and NIC in synthetic mixtures were analyzed by the two proposed methods, with the total content of the mixture determined at their respective isosbestic points of 270.2 and 235.8 nm and the content of CN determined by the ratio subtraction method. The proposed method was validated and found to be suitable as a stability-indicating assay for vincamine in pharmaceutical formulations. The standard addition technique was applied to validate the results and to ensure the specificity of the proposed methods.
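
    The ratio subtraction procedure itself is easy to demonstrate numerically: divide the mixture spectrum by the interferent spectrum, subtract the constant plateau observed where only the interferent absorbs, and multiply back. A minimal sketch with synthetic Gaussian "spectra" (all values invented):

    ```python
    import numpy as np

    wl = np.arange(200, 400)                        # wavelengths, nm
    drug = np.exp(-0.5 * ((wl - 270) / 15) ** 2)    # synthetic drug spectrum
    degr = np.exp(-0.5 * ((wl - 330) / 20) ** 2)    # synthetic degradant spectrum
    mixture = 0.8 * drug + 0.5 * degr               # measured binary mixture

    ratio = mixture / degr                          # divide by the divisor spectrum
    plateau = ratio[wl > 370].mean()                # constant region: drug ~ 0 there
    drug_recovered = (ratio - plateau) * degr       # subtract, then multiply back

    print(np.max(np.abs(drug_recovered - 0.8 * drug)))  # ~ 0: drug spectrum recovered
    ```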

  1. Time series modeling of human operator dynamics in manual control tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.
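
    The flavor of such time-series identification can be sketched by fitting a discrete ARX model of the operator by least squares and reading off its frequency response (a simplified stand-in for the authors' multi-channel technique):

    ```python
    import numpy as np
    from scipy import signal

    # Simulated single-channel data: tracking-error input u, operator output y
    # generated by simple first-order dynamics plus remnant noise.
    rng = np.random.default_rng(8)
    u = rng.normal(size=2000)
    y = signal.lfilter([0.0, 0.2, 0.1], [1, -0.8], u) + 0.05 * rng.normal(size=2000)

    # ARX fit: y[k] = -a1 y[k-1] - a2 y[k-2] + b1 u[k-1] + b2 u[k-2].
    na, nb = 2, 2
    start = max(na, nb)
    rows = [np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(start, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[start:], rcond=None)

    a = np.concatenate([[1.0], theta[:na]])          # denominator coefficients
    b = np.concatenate([[0.0], theta[na:]])          # numerator (one-step delay)
    w, h = signal.freqz(b, a)                        # estimated frequency response
    print(np.abs(h[:5]))
    ```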

  2. Time Series Modeling of Human Operator Dynamics in Manual Control Tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency response of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that was previously modeled to demonstrate the strengths of the method.

  3. Analytical method development and validation of spectrofluorimetric and spectrophotometric determination of some antimicrobial drugs in their pharmaceuticals.

    PubMed

    Ibrahim, F; Wahba, M E K; Magdy, G

    2018-01-05

    In this study, three novel, sensitive, simple and validated spectrophotometric and spectrofluorimetric methods have been proposed for estimation of some important antimicrobial drugs. The first two methods have been proposed for estimation of two important third-generation cephalosporin antibiotics, namely cefixime and cefdinir. Both methods were based on condensation of the primary amino group of the studied drugs with acetyl acetone and formaldehyde in acidic medium. The resulting products were measured by spectrophotometric (Method I) and spectrofluorimetric (Method II) tools. Regarding method I, the absorbance was measured at 315 nm and 403 nm with linearity ranges of 5.0-140.0 and 10.0-100.0 μg/mL for cefixime and cefdinir, respectively. Meanwhile in method II, the produced fluorophore was measured at λem 488 nm or 491 nm after excitation at λex 410 nm with linearity ranges of 0.20-10.0 and 0.20-36.0 μg/mL for cefixime and cefdinir, respectively. On the other hand, method III was devoted to estimating nifuroxazide spectrofluorimetrically, depending on the formation of a highly fluorescent product upon reduction of the studied drug with zinc powder in acidic medium. Measurement of the fluorescent product was carried out at λem 335 nm following excitation at λex 255 nm, with a linearity range of 0.05 to 1.6 μg/mL. The developed methods were subjected to a detailed validation procedure; moreover, they were used for the estimation of the concerned drugs in their pharmaceuticals. It was found that there is good agreement between the obtained results and those obtained by the reported methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Analytical method development and validation of spectrofluorimetric and spectrophotometric determination of some antimicrobial drugs in their pharmaceuticals

    NASA Astrophysics Data System (ADS)

    Ibrahim, F.; Wahba, M. E. K.; Magdy, G.

    2018-01-01

    In this study, three novel, sensitive, simple and validated spectrophotometric and spectrofluorimetric methods have been proposed for estimation of some important antimicrobial drugs. The first two methods have been proposed for estimation of two important third-generation cephalosporin antibiotics namely, cefixime and cefdinir. Both methods were based on condensation of the primary amino group of the studied drugs with acetyl acetone and formaldehyde in acidic medium. The resulting products were measured by spectrophotometric (Method I) and spectrofluorimetric (Method II) tools. Regarding method I, the absorbance was measured at 315 nm and 403 nm with linearity ranges of 5.0-140.0 and 10.0-100.0 μg/mL for cefixime and cefdinir, respectively. Meanwhile in method II, the produced fluorophore was measured at λem 488 nm or 491 nm after excitation at λex 410 nm with linearity ranges of 0.20-10.0 and 0.20-36.0 μg/mL for cefixime and cefdinir, respectively. On the other hand, method III was devoted to estimate nifuroxazide spectrofluorimetrically depending on formation of highly fluorescent product upon reduction of the studied drug with Zinc powder in acidic medium. Measurement of the fluorescent product was carried out at λem 335 nm following excitation at λex 255 nm with linearity range of 0.05 to 1.6 μg/mL. The developed methods were subjected to detailed validation procedure, moreover they were used for the estimation of the concerned drugs in their pharmaceuticals. It was found that there is a good agreement between the obtained results and those obtained by the reported methods.

  5. Transfer impedance measurements of the space shuttle Solid Rocket Motor (SRM) joints, wire meshes and a carbon graphite motor case

    NASA Technical Reports Server (NTRS)

    Papazian, Peter B.; Perala, Rodney A.; Curry, John D.; Lankford, Alan B.; Keller, J. David

    1988-01-01

    Using three different current injection methods and a simple voltage probe, transfer impedances for Solid Rocket Motor (SRM) joints, wire meshes, aluminum foil, Thorstrand and a graphite composite motor case were measured. In all cases, the surface current distribution for the particular current injection device was calculated analytically or by finite difference methods. The results of these calculations were used to generate a geometric factor, the ratio of total injected current to surface current density. The results were validated in several ways. For wire mesh measurements, results showed good agreement with calculated results for a 14 by 18 Al screen. SRM joint impedances were independently verified. The filament-wound case measurement results were validated only to the extent that their curve shape agrees with the expected form of transfer impedance for a homogeneous slab excited by a plane-wave source.

  6. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance

    PubMed Central

    2015-01-01

    Introduction Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. Methods To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Results Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. Conclusion The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation. PMID:26517553

  7. Ensemble Methods for MiRNA Target Prediction from Expression Data

    PubMed Central

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    Background microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved in theory to outperform each of their individual component methods. Results In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and that the ensemble methods perform better than the individual methods across different datasets. The ensemble method Pearson+IDA+Lasso, which combines methods with different approaches, including a correlation method, a causal inference method, and a regression method, is the best-performing ensemble method in this study. Further analysis of the results of this ensemble method shows that it can obtain more targets that could not be found by any of the single methods, and that the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials. PMID:26114448
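
    Rank-averaging (Borda-style) aggregation is one simple way to build such an ensemble; a minimal sketch with invented predictor scores:

    ```python
    import numpy as np
    from scipy.stats import rankdata

    # Scores from three hypothetical miRNA-target predictors for the same
    # candidate gene list (higher = stronger predicted interaction).
    scores = np.array([
        [0.9, 0.2, 0.5, 0.7, 0.1],   # e.g. a correlation-based method
        [0.8, 0.1, 0.6, 0.9, 0.3],   # e.g. a causal-inference method
        [0.7, 0.3, 0.4, 0.8, 0.2],   # e.g. a regression (Lasso-like) method
    ])

    # Borda-style ensemble: rank within each method, then average the ranks.
    ranks = np.vstack([rankdata(-s) for s in scores])   # rank 1 = top target
    ensemble_rank = ranks.mean(axis=0)
    print(ensemble_rank.argsort())   # candidates ordered best-first
    ```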

  8. No Impact of the Analytical Method Used for Determining Cystatin C on Estimating Glomerular Filtration Rate in Children.

    PubMed

    Alberer, Martin; Hoefele, Julia; Benz, Marcus R; Bökenkamp, Arend; Weber, Lutz T

    2017-01-01

    Measurement of inulin clearance is considered the gold standard for determining kidney function in children, but this method is time-consuming and expensive. The glomerular filtration rate (GFR), on the other hand, is easier to estimate using various creatinine- and/or cystatin C (Cys C)-based formulas. However, different and non-interchangeable analytical methods exist for the determination of serum creatinine (Scr) and Cys C. Given that different analytical methods for the determination of creatinine and Cys C were used to validate existing GFR formulas, clinicians should be aware of the type used in their local laboratory. In this study, we compared GFR results calculated on the basis of different GFR formulas, using Scr and Cys C values determined either by the analytical method originally employed for validation or by an alternative analytical method, to evaluate any possible effects on performance. Cys C values determined by means of an immunoturbidimetric assay were used for calculating the GFR using equations in which this analytical method had originally been used for validation. Additionally, these same values were then used in other GFR formulas that had originally been validated using a nephelometric immunoassay for determining Cys C. The effect of using either the compatible or the possibly incompatible analytical method for determining Cys C in the calculation of GFR was assessed in comparison with the GFR measured by creatinine clearance (CrCl). Unexpectedly, using GFR equations that employed Cys C values derived from a possibly incompatible analytical method did not result in a significant difference in the classification of patients as having normal or reduced GFR compared with the classification obtained on the basis of CrCl. Sensitivity and specificity were adequate. On the other hand, formulas using Cys C values derived from a compatible analytical method partly showed insufficient performance when compared with CrCl. Although clinicians should be aware of applying a GFR formula that is compatible with the locally used analytical method for determining Cys C and creatinine, other factors might be more crucial for the calculation of correct GFR values.

  9. Ranking of physiotherapeutic evaluation methods as outcome measures of stifle functionality in dogs

    PubMed Central

    2013-01-01

    Background Various physiotherapeutic evaluation methods are used to assess the functionality of dogs with stifle problems. Neither the validity nor the sensitivity of these methods has been investigated. This study aimed to determine the most valid and sensitive physiotherapeutic evaluation methods for assessing functional capacity in the hind limbs of dogs with stifle problems and to serve as a basis for developing an indexed test for these dogs. A group of 43 dogs with unilateral surgically treated cranial cruciate ligament deficiency and osteoarthritic findings was used to test different physiotherapeutic evaluation methods. Twenty-one healthy dogs served as the control group and were used to determine normal variation in static weight bearing and range of motion. The protocol consisted of 14 different evaluation methods: visual evaluation of lameness, visual evaluation of diagonal movement, visual evaluation of functional active range of motion and difference in thrust of hind limbs via functional tests (sit-to-move and lie-to-move), movement on stairs, evaluation of hind limb muscle atrophy, manual evaluation of hind limb static weight bearing, quantitative measurement of static weight bearing of hind limbs with bathroom scales, and passive range of motion of hind limb stifle (flexion and extension) and tarsal (flexion and extension) joints using a universal goniometer. The results were compared with those from an orthopaedic examination, force plate analysis, radiographic evaluation, and a conclusive assessment. Congruity of the methods was assessed with a combination of three statistical approaches (Fisher’s exact test and two differently calculated proportions of agreeing observations), and the components were ranked from best to worst. Sensitivities of all of the physiotherapeutic evaluation methods against each standard were calculated. Results Evaluation of asymmetry in a sitting and lying position, assessment of muscle atrophy, manual and measured static weight bearing, and measurement of stifle passive range of motion were the most valid and sensitive physiotherapeutic evaluation methods. Conclusions Ranking of the various physiotherapeutic evaluation methods was accomplished. Several of these methods can be considered valid and sensitive when examining the functionality of dogs with stifle problems. PMID:23566355
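
    A minimal sketch of the congruity statistics named above, the proportion of agreeing observations and Fisher's exact test, applied to an invented 2x2 cross-tabulation of a physiotherapeutic test against a reference standard:

```python
# Sketch: agreement proportion, sensitivity and Fisher's exact test for an
# invented 2x2 table (physio test call vs reference standard).
import numpy as np
from scipy.stats import fisher_exact

#                 reference abnormal   reference normal
table = np.array([[28, 6],    # physio test abnormal
                  [5, 25]])   # physio test normal

agreement = (table[0, 0] + table[1, 1]) / table.sum()
sensitivity = table[0, 0] / table[:, 0].sum()
odds_ratio, p_value = fisher_exact(table)

print(f"agreement={agreement:.2f}, sensitivity={sensitivity:.2f}, p={p_value:.4f}")
```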

  10. The face of pain--a pilot study to validate the measurement of facial pain expression with an improved electromyogram method.

    PubMed

    Wolf, Karsten; Raedler, Thomas; Henke, Kai; Kiefer, Falk; Mass, Reinhard; Quante, Markus; Wiedemann, Klaus

    2005-01-01

    The purpose of this pilot study was to establish the validity of an improved facial electromyogram (EMG) method for the measurement of facial pain expression. Darwin defined pain in connection with fear as a simultaneous occurrence of eye staring, brow contraction and teeth chattering. Prkachin was the first to use the video-based Facial Action Coding System to measure facial expressions while using four different types of pain triggers, identifying a group of facial muscles around the eyes. The activity of nine facial muscles in 10 healthy male subjects was analyzed. Pain was induced through a laser system with a randomized sequence of different intensities. Muscle activity was measured with a new, highly sensitive and selective facial EMG. The results indicate two groups of muscles as key for pain expression. These results are in concordance with Darwin's definition. As in Prkachin's findings, one muscle group is assembled around the orbicularis oculi muscle, initiating eye staring. The second group consists of the mentalis and depressor anguli oris muscles, which trigger mouth movements. The results demonstrate the validity of the facial EMG method for measuring facial pain expression. Further studies with psychometric measurements, a larger sample size and a female test group should be conducted.

  11. A systematic review of the asymmetric inheritance of cellular organelles in eukaryotes: A critique of basic science validity and imprecision

    PubMed Central

    Collins, Anne; Ross, Janine

    2017-01-01

    We performed a systematic review to identify all original publications describing the asymmetric inheritance of cellular organelles in normal animal eukaryotic cells and to critique the validity and imprecision of the evidence. Searches were performed in Embase, MEDLINE and PubMed up to November 2015. Screening of titles, abstracts and full papers was performed by two independent reviewers. Data extraction and validity assessment were performed by one reviewer and checked by a second reviewer. Study quality was assessed using the SYRCLE risk of bias tool for animal studies, and by developing validity tools for the experimental model, organelle markers and imprecision. A narrative data synthesis was performed. We identified 31 studies (34 publications) of the asymmetric inheritance of organelles after mitotic or meiotic division. Studies of the asymmetric inheritance of centrosomes (n = 9), endosomes (n = 6), P granules (n = 4), the midbody (n = 3), mitochondria (n = 3), proteasomes (n = 2), spectrosomes (n = 2), cilia (n = 2) and endoplasmic reticulum (n = 2) were identified. Asymmetry was defined and quantified by variable methods. Assessment of the statistical reliability of the results indicated that only two studies (7%) were judged to have low concern; the majority of studies (77%) were 'unclear' and five (16%) were judged to have 'high concerns', the main reason being low technical repeats (<10). Assessment of model validity indicated that the majority of studies (61%) were judged to be valid, ten studies (32%) were unclear and two studies (7%) were judged to have 'high concerns'; both described 'stem cells' without providing experimental evidence to confirm this (pluripotency and self-renewal). Assessment of marker validity indicated that no studies had low concern; most studies were unclear (96.5%), indicating there were insufficient details to judge if the markers were appropriate. One study had high concern for marker validity due to the contradictory results of two markers for the same organelle. For most studies the validity and imprecision of the results could not be confirmed. In particular, data were limited due to a lack of reporting of interassay variability, sample size calculations, controls and functional validation of organelle markers. An evaluation of 16 systematic reviews containing cell assays found that only 50% reported adherence to PRISMA or ARRIVE reporting guidelines and 38% reported a formal risk of bias assessment. 44% of the reviews did not consider how relevant or valid the models were to the research question. 75% of the reviews did not consider how valid the markers were. 69% of the reviews did not consider the impact of the statistical reliability of the results. Future systematic reviews in basic or preclinical research should ensure the rigorous reporting of the statistical reliability of the results in addition to the validity of the methods. Increased awareness of the importance of reporting guidelines and validation tools is needed for the scientific community. PMID:28562636

  12. METAPHOR: Probability density estimation for machine learning based photometric redshifts

    NASA Astrophysics Data System (ADS)

    Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-06-01

    We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but offering the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on the galaxies from SDSS-DR9, also showing the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).
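
    A generic way to obtain a photo-z PDF from any point-estimate engine, in the spirit of the modular workflow described, is to perturb the input photometry within assumed errors and histogram the re-predictions. The sketch below does this with a KNN regressor standing in for MLPQNA, on simulated magnitudes; the error model and binning are assumptions.

```python
# Sketch: photo-z PDF from photometry perturbations around any regressor.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(7)
n_train, n_bands = 5000, 5
mags = rng.uniform(18, 24, size=(n_train, n_bands))      # fake magnitudes
z = 0.1 * (mags.mean(axis=1) - 18) + rng.normal(0, 0.02, n_train)

model = KNeighborsRegressor(n_neighbors=15).fit(mags, z)  # stand-in engine

def photo_z_pdf(mag_row, mag_err=0.05, n_draws=500):
    """Histogram of predictions over perturbed photometry (assumed errors)."""
    draws = mag_row + rng.normal(0, mag_err, size=(n_draws, mag_row.size))
    preds = model.predict(draws)
    pdf, edges = np.histogram(preds, bins=np.arange(0, 1.0, 0.01), density=True)
    return pdf, edges

pdf, edges = photo_z_pdf(mags[0])
print("PDF peak at z ~", edges[pdf.argmax()])
```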

  13. Prospective, Multicenter Validation Study of Magnetic Resonance Volumetry for Response Assessment After Preoperative Chemoradiation in Rectal Cancer: Can the Results in the Literature be Reproduced?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martens, Milou H., E-mail: mh.martens@hotmail.com; Department of Surgery, Maastricht University Medical Center, Maastricht; GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht

    2015-12-01

    Purpose: To review the available literature on tumor size/volume measurements on magnetic resonance imaging for response assessment after chemoradiotherapy, and validate these cut-offs in an independent multicenter patient cohort. Methods and Materials: The study included 2 parts. (1) Review of the literature: articles were included that assessed the accuracy of tumor size/volume measurements on magnetic resonance imaging for tumor response assessment. Size/volume cut-offs were extracted. (2) Multicenter validation: extracted cut-offs from the literature were tested in a multicenter cohort (n=146). Accuracies were calculated and compared with reported results from the literature. Results: The review included 14 articles, in which 3 different measurement methods were assessed: (1) tumor length; (2) 3-dimensional tumor size; and (3) whole volume. Study outcomes consisted of (1) complete response (ypT0) versus residual tumor; (2) tumor regression grade 1 to 2 versus 3 to 5; and (3) T-downstaging (ypT…

  14. Construct Validity and Scoring Methods of the World Health Organization: Health and Work Performance Questionnaire Among Workers With Arthritis and Rheumatological Conditions.

    PubMed

    AlHeresh, Rawan; LaValley, Michael P; Coster, Wendy; Keysor, Julie J

    2017-06-01

    To evaluate construct validity and scoring methods of the World Health Organization Health and Work Performance Questionnaire (HPQ) for people with arthritis. Construct validity was examined through hypothesis testing using the recommended guidelines of the Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN). The HPQ using the absolute scoring method showed moderate construct validity, as four of the seven hypotheses were met. The HPQ using the relative scoring method had weak construct validity, as only one of the seven hypotheses was met. The absolute scoring method for the HPQ is superior in construct validity to the relative scoring method in assessing work performance among people with arthritis and related rheumatic conditions; however, more research is needed to further explore other psychometric properties of the HPQ.

  15. Portuguese version of a stress and well-being evaluation tool (ASSET) at the workplace: validation of the psychometric properties

    PubMed Central

    Moreira, Sérgio; Carreiras, Joana; Cooper, Cary; Smeed, Matthew; Reis, Maria de Fátima; Pereira Miguel, José

    2018-01-01

    Objective The main objective of this work was to translate the English version of ASSET (A Shortened Stress Evaluation Tool) into Portuguese and to validate its psychometric properties. Additionally, this work tested the convergent validity of the instrument. Methods The translation and retroversion were conducted by experts and submitted to the authors for approval. Within an observational, cross-sectional study regarding mental health at the workplace, ASSET, together with other scales, was applied to a sample of 405 participants. The psychometric validity of the subscales was studied using confirmatory factor analysis. Results The factorial structure of ASSET is globally supported by the results, with the Perceptions of Your Job and Attitudes Towards Your Organisation subscales requiring slight adjustments in the item structure and the Your Health subscales replicating the original structure. The convergent validity also supports the ASSET, showing that all subscales are significantly correlated with the variables used to test convergence. Conclusions Globally, the results constitute an important contribution to ASSET and open the possibility of its use in Portuguese-speaking countries. The results provide evidence of the validity of the instrument and, in particular, of the mental and physical health subscales. PMID:29440211

  16. German Translation and Validation of the Cognitive Style Questionnaire Short Form (CSQ-SF-D)

    PubMed Central

    Huys, Quentin J. M.; Renz, Daniel; Petzschner, Frederike; Berwian, Isabel; Stoppel, Christian; Haker, Helene

    2016-01-01

    Background The Cognitive Style Questionnaire is a valuable tool for the assessment of hopeless cognitive styles in depression research, with predictive power in longitudinal studies. However, it is very burdensome to administer. Even the short form is still long, and neither this nor the original version exists in a validated German translation. Methods The questionnaire was translated from English to German, back-translated and commented on by clinicians. The reliability, factor structure and external validity of an online form of the questionnaire were examined in 214 participants. External validity was measured on a subset of 90 subjects. Results The resulting CSQ-SF-D had good to excellent reliability, both across items and subscales, and external validity similar to that of the original English version. The internality subscale appeared less robust than the other subscales. A detailed analysis of individual item performance suggests that stable results could be achieved with a very short form (CSQ-VSF-D) including only 27 of the 72 items. Conclusions The CSQ-SF-D is a validated and freely distributed translation of the CSQ-SF into German. This should make efficient assessment of cognitive style in German samples more accessible to researchers. PMID:26934499

  17. Accurate prediction of secreted substrates and identification of a conserved putative secretion signal for type III secretion systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samudrala, Ram; Heffron, Fred; McDermott, Jason E.

    2009-04-24

    The type III secretion system is an essential component for virulence in many Gram-negative bacteria. Though components of the secretion system apparatus are conserved, its substrates, effector proteins, are not. We have used a machine learning approach to identify new secreted effectors. The method integrates evolutionary measures, such as the pattern of homologs in a range of other organisms, and sequence-based features, such as G+C content, amino acid composition and the N-terminal 30 residues of the protein sequence. The method was trained on known effectors from Salmonella typhimurium and validated on a corresponding set of effectors from Pseudomonas syringae, after eliminating effectors with detectable sequence similarity. The method was able to identify all of the known effectors in P. syringae with a specificity of 84% and sensitivity of 82%. The reciprocal validation, training on P. syringae and validating on S. typhimurium, gave similar results, with a specificity of 86% at a sensitivity of 87%. These results show that type III effectors in disparate organisms share common features. We found that maximal performance is attained by including an N-terminal sequence of only 30 residues, which agrees with previous studies indicating that this region contains the secretion signal. We then used the method to define the most important residues in this putative secretion signal. Finally, we present novel predictions of secreted effectors in S. typhimurium, some of which have been experimentally validated, and apply the method to predict secreted effectors in the genetically intractable human pathogen Chlamydia trachomatis. This approach is a novel and effective way to identify secreted effectors in a broad range of pathogenic bacteria for further experimental characterization and provides insight into the nature of the type III secretion signal.
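
    The feature-plus-classifier setup can be sketched as follows. Only one of the features named above, the amino acid composition of the N-terminal 30 residues, is implemented here; the sequences are simulated with an artificial compositional bias, and a random forest stands in for the authors' machine learning method.

```python
# Sketch: N-terminal amino acid composition features feeding a classifier,
# scored by sensitivity and specificity. Sequences are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

AAS = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(3)

def aa_composition(seq, n_term=30):
    """Fraction of each amino acid in the first n_term residues."""
    head = seq[:n_term]
    return [head.count(a) / len(head) for a in AAS]

def simulate(n, serine_bias):
    """Random 100-mers with an artificial serine bias (toy signal)."""
    p = np.where([a == "S" for a in AAS], serine_bias,
                 (1 - serine_bias) / 19)
    seqs = ["".join(rng.choice(list(AAS), 100, p=p)) for _ in range(n)]
    return np.array([aa_composition(s) for s in seqs])

X = np.vstack([simulate(200, 0.15), simulate(200, 0.05)])  # effectors vs not
y = np.array([1] * 200 + [0] * 200)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
pred = RandomForestClassifier(random_state=0).fit(Xtr, ytr).predict(Xte)
print("sensitivity:", recall_score(yte, pred))
print("specificity:", recall_score(yte, pred, pos_label=0))
```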

  18. Determination of Aflatoxins and Ochratoxin A in Traditional Turkish Concentrated Fruit Juice Products by Multi-Immunoaffinity Column Cleanup and LC Fluorescence Detection: Single-Laboratory Validation.

    PubMed

    Kaymak, Tugrul; Türker, Levent; Tulay, Hüseyin; Stroka, Joerg

    2018-04-27

    Background: Pekmez and pestil are traditional Turkish foods made from concentrated grape juice, which can be contaminated with mycotoxins such as aflatoxins and ochratoxin A (OTA). Objective: To carry out a single-laboratory validation of a method to simultaneously determine aflatoxins B₁, B₂, G₁, and G₂ and ochratoxin A in pekmez and pestil. Methods: The homogenized sample is extracted with methanol-water (80 + 20) using a high-speed blender. The sample extract is filtered, diluted with phosphate-buffered saline solution, and applied to a multi-immunoaffinity column (AFLAOCHRA PREP®). Aflatoxins and ochratoxin A are removed with neat methanol and then directly analyzed by reversed-phase LC with fluorescence detection using post-column bromination (Kobra cell®). Results: Test portions of blank pekmez and pestil were spiked with a mixture of aflatoxins and ochratoxin A to give levels ranging from 2.6 to 10.4 μg/kg and from 1.0 to 4.0 μg/kg, respectively. Recoveries for total aflatoxins and ochratoxin A ranged from 84 to 106% and from 80 to 97%, respectively, for spiked samples. Based on results for spiked pekmez and pestil (30 replicates each at three levels), the repeatability RSD ranged from 1.6 to 12% and from 2.7 to 11% for total aflatoxins and ochratoxin A, respectively. Conclusions: The method performance in terms of recovery, repeatability, and detection limits has been demonstrated to be suitable for use as an Official Method. Highlights: First immunoaffinity column method validated for simultaneous analysis of aflatoxins and ochratoxin A in pekmez and pestil. Suitability for official purposes in Turkey demonstrated by single-laboratory validation. Co-occurrence of aflatoxins and OTA in mulberry and carob pekmez reported for the first time.
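
    The two headline statistics of such a single-laboratory validation, spike recovery and repeatability RSD, reduce to a few lines of arithmetic; the replicate values below are invented but lie in the reported range.

```python
# Sketch: recovery against a spiked level and repeatability RSD across
# replicates, the two statistics reported above. Values are invented.
import numpy as np

spiked_level = 2.6                                   # ug/kg added to blank
measured = np.array([2.31, 2.48, 2.55, 2.40, 2.62])  # replicate results

recovery_pct = 100 * measured.mean() / spiked_level
rsd_pct = 100 * measured.std(ddof=1) / measured.mean()

print(f"recovery = {recovery_pct:.1f}%, repeatability RSD = {rsd_pct:.1f}%")
```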

  19. Validation of high throughput sequencing and microbial forensics applications

    PubMed Central

    2014-01-01

    High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics is the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results allow analysis and interpretation, significantly influencing the course and/or focus of an investigation, and can impact the response of the government to an attack having individual, political, economic or military consequences. Interpretation of the results of microbial forensic analyses relies on understanding the performance and limitations of HTS methods, including analytical processes, assays and data interpretation. The utility of HTS must be defined carefully within established operating conditions and tolerances. Validation is essential in the development and implementation of microbial forensics methods used for formulating investigative leads and attribution. HTS strategies vary, requiring guiding principles for HTS system validation. Three initial aspects of HTS, irrespective of chemistry, instrumentation or software, are: 1) sample preparation, 2) sequencing, and 3) data analysis. Criteria that should be considered for HTS validation for microbial forensics are presented here. Validation should be defined in terms of the specific application, and the criteria described here comprise a foundation for investigators to establish, validate and implement HTS as a tool in microbial forensics, enhancing public safety and national security. PMID:25101166

  20. Validation of high throughput sequencing and microbial forensics applications.

    PubMed

    Budowle, Bruce; Connell, Nancy D; Bielecka-Oder, Anna; Colwell, Rita R; Corbett, Cindi R; Fletcher, Jacqueline; Forsman, Mats; Kadavy, Dana R; Markotic, Alemka; Morse, Stephen A; Murch, Randall S; Sajantila, Antti; Schmedes, Sarah E; Ternus, Krista L; Turner, Stephen D; Minot, Samuel

    2014-01-01

    High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics is the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results allow analysis and interpretation, significantly influencing the course and/or focus of an investigation, and can impact the response of the government to an attack having individual, political, economic or military consequences. Interpretation of the results of microbial forensic analyses relies on understanding the performance and limitations of HTS methods, including analytical processes, assays and data interpretation. The utility of HTS must be defined carefully within established operating conditions and tolerances. Validation is essential in the development and implementation of microbial forensics methods used for formulating investigative leads and attribution. HTS strategies vary, requiring guiding principles for HTS system validation. Three initial aspects of HTS, irrespective of chemistry, instrumentation or software, are: 1) sample preparation, 2) sequencing, and 3) data analysis. Criteria that should be considered for HTS validation for microbial forensics are presented here. Validation should be defined in terms of the specific application, and the criteria described here comprise a foundation for investigators to establish, validate and implement HTS as a tool in microbial forensics, enhancing public safety and national security.

  1. Gamifying Self-Management of Chronic Illnesses: A Mixed-Methods Study.

    PubMed

    AlMarshedi, Alaa; Wills, Gary; Ranchhod, Ashok

    2016-09-09

    Self-management of chronic illnesses is an ongoing issue in health care research. Gamification is a concept that arose in the field of computer science and has been borrowed by many other disciplines. It is perceived by many that gamification can improve the self-management experience of people with chronic illnesses. This paper discusses the validation of a framework (called The Wheel of Sukr) that was introduced to achieve this goal. This research aims to (1) discuss a gamification framework targeting the self-management of chronic illnesses and (2) validate the framework by diabetic patients, medical professionals, and game experts. A mixed-method approach was used to validate the framework. Expert interviews (N=8) were conducted in order to validate the themes of the framework. Additionally, diabetic participants completed a questionnaire (N=42) in order to measure their attitudes toward the themes of the framework. The results provide a validation of the framework. This indicates that gamification might improve the self-management of chronic illnesses, such as diabetes. Namely, the eight themes in the Wheel of Sukr (fun, esteem, socializing, self-management, self-representation, motivation, growth, sustainability) were perceived positively by 71% (30/42) of the participants with P value <.001. In this research, both the interviews and the questionnaire yielded positive results that validate the framework (The Wheel of Sukr). Generally, this study indicates an overall acceptance of the notion of gamification in the self-management of diabetes.

  2. Spanish validation of Bad Sobernheim Stress Questionnaire (BSSQ (brace).es) for adolescents with braces

    PubMed Central

    2010-01-01

    Background As scientific and medical professionals gain interest in stress and Health Related Quality of Life (HRQL), the aim of our research is to validate into Spanish the German questionnaire Bad Sobernheim Stress Questionnaire (BSSQ) (mit Korsett) for adolescents wearing braces. Methods The methodology follows the literature on trans-cultural adaptation, using a translation and a back-translation; it involved 35 adolescents, ages ranging between 10 and 16, with Adolescent Idiopathic Scoliosis (AIS) and wearing the same kind of brace (Rigo System Chêneau Brace). The materials used were a socio-demographics data questionnaire, the SRS-22 and the Spanish version of BSSQ(brace).es. The statistical analysis calculated the reliability (test-retest reliability and internal consistency) and the validity (convergent and construct validity) of the BSSQ(brace).es. Results BSSQ(brace).es is reliable, with satisfactory internal consistency (Cronbach's alpha coefficient of 0.809, p < 0.001) and temporal stability (test-retest method with a Pearson correlation coefficient of 0.902, p < 0.01). It demonstrated convergent validity with the SRS-22, with a Pearson correlation coefficient of 0.656 (p < 0.01). An exploratory principal components analysis found a latent structure based on two components which explain 60.8% of the variance. Conclusions BSSQ(brace).es is reliable and valid and can be used with Spanish adolescents to assess the stress level caused by the brace. PMID:20633253
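
    The two reliability statistics used here, Cronbach's alpha and a test-retest Pearson correlation, can be computed as in the sketch below; the item-response matrix is simulated rather than the study's data.

```python
# Sketch: Cronbach's alpha from an item-score matrix and test-retest
# reliability as a Pearson correlation between two administrations.
import numpy as np

rng = np.random.default_rng(5)
n_subjects, n_items = 35, 8
latent = rng.normal(size=n_subjects)                       # true stress level
items = latent[:, None] + rng.normal(0, 0.8, size=(n_subjects, n_items))

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

retest = items + rng.normal(0, 0.4, size=items.shape)      # second session
r = np.corrcoef(items.sum(axis=1), retest.sum(axis=1))[0, 1]

print(f"alpha = {cronbach_alpha(items):.3f}, test-retest r = {r:.3f}")
```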

  3. The development of thematic materials using project based learning for elementary school

    NASA Astrophysics Data System (ADS)

    Yuliana, M.; Wiryawan, S. A.; Riyadi

    2018-05-01

    Teaching materials are one of the important factors supporting the learning process. This paper discusses the development of thematic materials using project-based learning. Thematic materials are designed to make students active, creative, and cooperative, and to ease their thinking in solving problems. The purpose of the research was to develop valid thematic materials using project-based learning. The research followed the four-stage research and development model proposed by Thiagarajan: (1) definition, (2) design, (3) development, and (4) dissemination. The first stage was research and information collection, in the form of a needs analysis using questionnaires, observation, interviews, and document analysis. The design stage was based on the competencies and indicators. The third stage, development, covered product validation by experts. The validation involved media, material, and linguistic validators. Expert validation of the thematic materials showed very good overall ratings on a 5-point Likert scale: media validation had a mean score of 4.83, material validation 4.68, and linguistic validation 4.74. This showed that the thematic materials using project-based learning were valid and feasible to implement in thematic learning.

  4. Validated spectroscopic methods for determination of anti-histaminic drug azelastine in pure form: Analytical application for quality control of its pharmaceutical preparations.

    PubMed

    El-Masry, Amal A; Hammouda, Mohammed E A; El-Wasseef, Dalia R; El-Ashry, Saadia M

    2018-02-15

    Two simple, sensitive, rapid, validated and cost-effective spectroscopic methods were established for the quantification of the antihistaminic drug azelastine (AZL) in bulk powder as well as in pharmaceutical dosage forms. In the first method (A), the absorbance difference between acidic and basic solutions was measured at 228 nm, whereas in the second method (B), the binary complex formed between AZL and Eosin Y in acetate buffer solution (pH 3) was measured at 550 nm. Different criteria that have a critical influence on the absorption intensity were studied in depth and optimized so as to achieve the highest absorption. The proposed methods obeyed Beer's law in the concentration ranges of 2.0-20.0 μg·mL⁻¹ and 0.5-15.0 μg·mL⁻¹, with % recovery ± SD of 99.84 ± 0.87 and 100.02 ± 0.78 for methods (A) and (B), respectively. Furthermore, the proposed methods were easily applied for quality control of pharmaceutical preparations without any interference from co-formulated additives, and the analytical results were compatible with those obtained by the comparison method, with no significant difference as confirmed by Student's t-test and the variance ratio F-test. Validation of the proposed methods was performed according to the ICH guidelines in terms of linearity, limit of quantification, limit of detection, accuracy, precision and specificity, and the analytical results were satisfactory. Copyright © 2017 Elsevier B.V. All rights reserved.
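
    The quantification step underlying both methods is a Beer's-law calibration: fit absorbance against concentration over the linear range, then back-calculate unknowns. A minimal sketch with simulated absorbances over a range like that of method (A); the slope, intercept and unknown value are invented.

```python
# Sketch: Beer's-law calibration line and back-calculation of an unknown.
import numpy as np

conc = np.array([2, 5, 8, 12, 16, 20], dtype=float)   # ug/mL standards
noise = np.random.default_rng(2).normal(0, 0.002, conc.size)
absorbance = 0.042 * conc + 0.003 + noise             # simulated readings

slope, intercept = np.polyfit(conc, absorbance, 1)
r2 = np.corrcoef(conc, absorbance)[0, 1] ** 2

unknown_abs = 0.345
unknown_conc = (unknown_abs - intercept) / slope
print(f"slope={slope:.4f}, intercept={intercept:.4f}, r^2={r2:.5f}")
print(f"unknown = {unknown_conc:.2f} ug/mL")
```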

  5. Validated selective spectrophotometric methods for the kinetic determination of desloratadine in tablets and in the presence of its parent drug.

    PubMed

    Derayea, S M Sayed

    2014-11-01

    Two novel, selective, validated methods have been developed for the analysis of desloratadine (DSL) in its tablet formulation. Both are kinetic spectrophotometric methods that depend on the interaction of the secondary amino group in DSL with acetaldehyde to give an N-vinylpiperidyl product. The formed N-vinylpiperidyl compound was reacted with 2,3,5,6-tetrachloro-1,4-benzoquinone (chloranil) to form colored N-vinylpiperidyl-substituted benzoquinone derivatives. The formed blue-colored derivative was measured at 672 nm. The reaction conditions were carefully studied and all factors were optimized. The molar ratio between the reactants was estimated and a suggested reaction mechanism was presented. The analysis was carried out using initial rate and fixed time (at 6 min) methods. The linear concentration ranges were 3-50 and 10-60 μg mL⁻¹, with limits of detection of 3.2 and 2.2 μg mL⁻¹ for the initial rate and fixed time methods, respectively. ICH guidelines were applied for analytical performance validation of the proposed methods. The presence of common excipients in the pharmaceutical formulation did not produce any significant interference, nor did loratadine, the parent compound of DSL. Different commercially available tablet formulations were successfully analyzed, with percentage recoveries ranging from 97.28 to 100.90% (±0.72-1.41%). The obtained results were compared statistically with the reported method results. The proposed methods have accuracy and precision similar to those of the reported method, as indicated by the F- and t-test data.

  6. Spectrophotometric and chemometric methods for determination of imipenem, ciprofloxacin hydrochloride, dexamethasone sodium phosphate, paracetamol and cilastatin sodium in human urine

    NASA Astrophysics Data System (ADS)

    El-Kosasy, A. M.; Abdel-Aziz, Omar; Magdy, N.; El Zahar, N. M.

    2016-03-01

    New accurate, sensitive and selective spectrophotometric and chemometric methods were developed and subsequently validated for the determination of Imipenem (IMP), ciprofloxacin hydrochloride (CIPRO), dexamethasone sodium phosphate (DEX), paracetamol (PAR) and cilastatin sodium (CIL) in human urine. These methods include a new derivative ratio method, namely extended derivative ratio (EDR), principal component regression (PCR) and partial least-squares (PLS) methods. A novel EDR method was developed for the determination of these drugs, where each component in the mixture was determined by using a mixture of the other four components as divisor. Peak amplitudes were recorded at 293.0 nm, 284.0 nm, 276.0 nm, 257.0 nm and 221.0 nm within linear concentration ranges of 3.00-45.00, 1.00-15.00, 4.00-40.00, 1.50-25.00 and 4.00-50.00 μg mL⁻¹ for IMP, CIPRO, DEX, PAR and CIL, respectively. PCR and PLS-2 models were established for simultaneous determination of the studied drugs in the ranges of 3.00-15.00, 1.00-13.00, 4.00-12.00, 1.50-9.50 and 4.00-12.00 μg mL⁻¹ for IMP, CIPRO, DEX, PAR and CIL, respectively, by using eighteen mixtures as the calibration set and seven mixtures as the validation set. The suggested methods were validated according to the International Conference on Harmonisation (ICH) guidelines and the results revealed that they were accurate, precise and reproducible. The obtained results were statistically compared with those of the published methods and there was no significant difference.
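
    A minimal sketch of the PLS step, with synthetic Gaussian-band "spectra" replacing the measured ones and the same 18-mixture calibration / 7-mixture validation split; scikit-learn's PLSRegression stands in for whatever chemometric software was actually used.

```python
# Sketch: PLS model mapping mixture spectra to five analyte concentrations,
# fit on a calibration set and scored (RMSEP) on a validation set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(9)
wavelengths = np.linspace(220, 320, 200)
pure = np.array([np.exp(-0.5 * ((wavelengths - c) / 8) ** 2)
                 for c in (293, 284, 276, 257, 221)])   # 5 pure "spectra"

C_cal = rng.uniform(1, 15, size=(18, 5))                # calibration concs
C_val = rng.uniform(1, 15, size=(7, 5))
S_cal = C_cal @ pure + rng.normal(0, 0.01, (18, 200))   # mixture spectra
S_val = C_val @ pure + rng.normal(0, 0.01, (7, 200))

pls = PLSRegression(n_components=5).fit(S_cal, C_cal)
rmsep = np.sqrt(((pls.predict(S_val) - C_val) ** 2).mean(axis=0))
print("RMSEP per analyte:", np.round(rmsep, 3))
```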

  7. The Validity and Reliability Test of the Indonesian Version of Gastroesophageal Reflux Disease Quality of Life (GERD-QOL) Questionnaire.

    PubMed

    Siahaan, Laura A; Syam, Ari F; Simadibrata, Marcellus; Setiati, Siti

    2017-01-01

    To obtain a valid and reliable GERD-QOL questionnaire for Indonesian application. At the initial stage, the GERD-QOL questionnaire was first translated into Indonesian and the translated questionnaire was subsequently translated back into the original language (back-translation). The results were evaluated by the research team and an Indonesian version of the GERD-QOL questionnaire was thus developed. Ninety-one patients who had been clinically diagnosed with GERD based on the Montreal criteria were interviewed using the Indonesian version of the GERD-QOL questionnaire and the SF-36 questionnaire. Validity was evaluated using construct validity and external validity methods, and reliability was tested by internal consistency and test-retest methods. The Indonesian version of the GERD-QOL questionnaire had good internal consistency reliability, with a Cronbach's alpha of 0.687-0.842, and good test-retest reliability, with an intra-class correlation coefficient of 0.756-0.936 (p<0.05). The questionnaire was also demonstrated to have good validity, with a high correlation to each question of the SF-36 (p<0.05). The Indonesian version of the GERD-QOL questionnaire has been proven valid and reliable for evaluating the quality of life of GERD patients.

  8. A pan-European ring trial to validate an International Standard for detection of Vibrio cholerae, Vibrio parahaemolyticus and Vibrio vulnificus in seafoods.

    PubMed

    Hartnell, R E; Stockley, L; Keay, W; Rosec, J-P; Hervio-Heath, D; Van den Berg, H; Leoni, F; Ottaviani, D; Henigman, U; Denayer, S; Serbruyns, B; Georgsson, F; Krumova-Valcheva, G; Gyurova, E; Blanco, C; Copin, S; Strauch, E; Wieczorek, K; Lopatek, M; Britova, A; Hardouin, G; Lombard, B; In't Veld, P; Leclercq, A; Baker-Austin, C

    2018-02-10

    Globally, vibrios represent an important and well-established group of bacterial foodborne pathogens. The European Commission (EC) mandated the Comité Européen de Normalisation (CEN) to undertake work to provide validation data for 15 methods in microbiology to support EC legislation. As part of this mandated work programme, merging of ISO/TS 21872-1:2007, which specifies a horizontal method for the detection of V. parahaemolyticus and V. cholerae, and ISO/TS 21872-2:2007, a similar horizontal method for the detection of potentially pathogenic vibrios other than V. cholerae and V. parahaemolyticus, was proposed. Both parts of ISO/TS 21872 utilized classical culture-based isolation techniques coupled with biochemical confirmation steps. The work also considered simplification of the biochemical confirmation steps. In addition, because of advances in molecular methods for the identification of human pathogenic Vibrio spp., classical and real-time PCR options were also included within the scope of the validation. These considerations formed the basis of a multi-laboratory validation study with the aim of improving the precision of this ISO technical specification and providing a single ISO standard method to enable detection of these important foodborne Vibrio spp. To achieve this aim, an international validation study involving 13 laboratories from 9 countries in Europe was conducted in 2013. The results of this validation have enabled integration of the two existing technical specifications targeting the detection of the major foodborne Vibrio spp., simplification of the suite of recommended biochemical identification tests, and the introduction of molecular procedures that provide both species-level identification and discrimination of putatively pathogenic strains of V. parahaemolyticus by determining the presence of the thermostable direct and thermostable direct-related haemolysins. The method performance characteristics generated in this study have been included in the revised international standard, ISO 21872:2017, published in July 2017. Copyright © 2018. Published by Elsevier B.V.

  9. Comparative assessment of bioanalytical method validation guidelines for pharmaceutical industry.

    PubMed

    Kadian, Naveen; Raju, Kanumuri Siva Rama; Rashid, Mamunur; Malik, Mohd Yaseen; Taneja, Isha; Wahajuddin, Muhammad

    2016-07-15

    The concepts, importance, and application of bioanalytical method validation have been discussed for a long time, and validation of bioanalytical methods is widely accepted as pivotal before they are taken into routine use. The United States Food and Drug Administration (USFDA) guidelines issued in 2001 have been referred to by every guideline released ever since, be it the European Medicines Agency (EMA) in Europe, the National Health Surveillance Agency (ANVISA) in Brazil, the Ministry of Health, Labour and Welfare (MHLW) in Japan, or any other guideline on bioanalytical method validation. After 12 years, the USFDA released its new draft guideline for comments in 2013, which covers the latest parameters and topics encountered in bioanalytical method validation and moves towards global harmonization of bioanalytical method validation. Even though the regulatory agencies are in general agreement, significant variations exist in acceptance criteria and methodology. The present review highlights the variations, similarities and comparisons between bioanalytical method validation guidelines issued by major regulatory authorities worldwide. Additionally, other evaluation parameters such as matrix effect and incurred sample reanalysis, including other stability aspects, have been discussed to provide ease of access for designing a bioanalytical method and its validation complying with the majority of drug authority guidelines. Copyright © 2016. Published by Elsevier B.V.

  10. Determination of glomerular filtration rate (GFR) from fractional renal accumulation of iodinated contrast material: a convenient and rapid single-kidney CT-GFR technique.

    PubMed

    Yuan, XiaoDong; Tang, Wei; Shi, WenWei; Yu, Libao; Zhang, Jing; Yuan, Qing; You, Shan; Wu, Ning; Ao, Guokun; Ma, Tingting

    2018-07-01

    To develop a convenient and rapid single-kidney CT-GFR technique. One hundred and twelve patients referred for multiphasic renal CT and 99mTc-DTPA renal dynamic imaging Gates-GFR measurement were prospectively included and randomly divided into two groups of 56 patients each: the training group and the validation group. On the basis of the nephrographic phase images, the fractional renal accumulation (FRA) was calculated and correlated with the Gates-GFR in the training group. From this correlation a formula was derived for single-kidney CT-GFR calculation, which was validated by a paired t test and linear regression analysis with the single-kidney Gates-GFR in the validation group. In the training group, the FRA (x-axis) correlated well (r = 0.95, p < 0.001) with single-kidney Gates-GFR (y-axis), producing a regression equation of y = 1665x + 1.5 for single-kidney CT-GFR calculation. In the validation group, the difference between the methods of single-kidney GFR measurements was 0.38 ± 5.57 mL/min (p = 0.471); the regression line is identical to the diagonal (intercept = 0 and slope = 1) (p = 0.727 and p = 0.473, respectively), with a standard deviation of residuals of 5.56 mL/min. A convenient and rapid single-kidney CT-GFR technique was presented and validated in this investigation. • The new CT-GFR method takes about 2.5 min of patient time. • The CT-GFR method demonstrated identical results to the Gates-GFR method. • The CT-GFR method is based on the fractional renal accumulation of iodinated CM. • The CT-GFR method is achieved without additional radiation dose to the patient.
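
    The two-step design, fitting a linear FRA-to-GFR map in the training group and then checking agreement in the validation group with a paired t-test and a regression against the reference, can be sketched as below, with data simulated around the reported fit y = 1665x + 1.5.

```python
# Sketch: train a linear FRA -> GFR map, then validate against the
# reference with a paired t-test and an identity-line regression check.
import numpy as np
from scipy.stats import ttest_rel, linregress

rng = np.random.default_rng(11)
fra_train = rng.uniform(0.01, 0.05, 56)                    # fractional renal accumulation
gates_train = 1665 * fra_train + 1.5 + rng.normal(0, 5, 56)
a, b = np.polyfit(fra_train, gates_train, 1)               # fitted calibration

fra_val = rng.uniform(0.01, 0.05, 56)
gates_val = 1665 * fra_val + 1.5 + rng.normal(0, 5, 56)
ct_gfr = a * fra_val + b

t, p = ttest_rel(ct_gfr, gates_val)
fit = linregress(gates_val, ct_gfr)
print(f"fit: GFR = {a:.0f}*FRA + {b:.1f}")
print(f"paired t-test p = {p:.3f}; slope = {fit.slope:.2f}, "
      f"intercept = {fit.intercept:.2f}")
```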

  11. Validating fatty acid intake as estimated by an FFQ: how does the 24 h recall perform as reference method compared with the duplicate portion?

    PubMed

    Trijsburg, Laura; de Vries, Jeanne Hm; Hollman, Peter Ch; Hulshof, Paul Jm; van 't Veer, Pieter; Boshuizen, Hendriek C; Geelen, Anouk

    2018-05-08

    To compare the performance of the commonly used 24 h recall (24hR) with the more distinct duplicate portion (DP) as reference method for validation of fatty acid intake estimated with an FFQ. Intakes of SFA, MUFA, n-3 fatty acids and linoleic acid (LA) were estimated by chemical analysis of two DP and by, on average, five 24hR and two FFQ. Plasma n-3 fatty acids and LA were used to objectively compare the ranking of individuals based on DP and 24hR. Multivariate measurement error models were used to estimate validity coefficients and attenuation factors for the FFQ with the DP and 24hR as reference methods. Wageningen, the Netherlands. Ninety-two men and 106 women (aged 20-70 years). Validity coefficients for the fatty acid estimates by the FFQ tended to be lower when using the DP as reference method compared with the 24hR. Attenuation factors for the FFQ tended to be slightly higher based on the DP than those based on the 24hR as reference method. Furthermore, when using plasma fatty acids as reference, the DP showed comparable to slightly better ranking of participants according to their intake of n-3 fatty acids (0.33) and n-3:LA (0.34) than the 24hR (0.22 and 0.24, respectively). The 24hR gives only slightly different results compared with the distinctive but less feasible DP; therefore, use of the 24hR seems appropriate as the reference method for FFQ validation of fatty acid intake.
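
    Under a classical measurement-error model, the two quantities estimated here can be illustrated as follows: the attenuation factor as the slope of the reference method on the FFQ, and a validity coefficient via the method of triads using the FFQ, a reference method and a biomarker. All values are simulated, and the paper's multivariate models are more elaborate than this sketch.

```python
# Sketch: attenuation factor and method-of-triads validity coefficient
# under a classical measurement-error model with simulated intakes.
import numpy as np

rng = np.random.default_rng(13)
n = 198
true_intake = rng.normal(20, 5, n)              # "true" fatty acid intake
ffq = 0.8 * true_intake + rng.normal(0, 4, n)   # FFQ estimate
ref = true_intake + rng.normal(0, 2, n)         # DP or 24hR reference
biomarker = 0.5 * true_intake + rng.normal(0, 3, n)

attenuation = np.polyfit(ffq, ref, 1)[0]        # slope of reference on FFQ

r_qr = np.corrcoef(ffq, ref)[0, 1]
r_qb = np.corrcoef(ffq, biomarker)[0, 1]
r_rb = np.corrcoef(ref, biomarker)[0, 1]
validity_ffq = np.sqrt(r_qr * r_qb / r_rb)      # method of triads

print(f"attenuation = {attenuation:.2f}, validity coefficient = {validity_ffq:.2f}")
```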

  12. 76 FR 28664 - Method 301-Field Validation of Pollutant Measurement Methods From Various Waste Media

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 63 [OAR-2004-0080, FRL-9306-8] RIN 2060-AF00 Method 301--Field Validation of Pollutant Measurement Methods From Various Waste Media AGENCY: Environmental Protection Agency (EPA). ACTION: Final rule. SUMMARY: This action amends EPA's Method 301, Field Validation...

  13. A catch-up validation study of an in vitro skin irritation test method using reconstructed human epidermis LabCyte EPI-MODEL24.

    PubMed

    Kojima, Hajime; Katoh, Masakazu; Shinoda, Shinsuke; Hagiwara, Saori; Suzuki, Tamie; Izumi, Runa; Yamaguchi, Yoshihiro; Nakamura, Maki; Kasahawa, Toshihiko; Shibai, Aya

    2014-07-01

    Three validation studies were conducted by the Japanese Society for Alternatives to Animal Experiments in order to assess the performance of a skin irritation assay using reconstructed human epidermis (RhE) LabCyte EPI-MODEL24 (LabCyte EPI-MODEL24 SIT) developed by the Japan Tissue Engineering Co., Ltd. (J-TEC), and the results of these studies were submitted to the Organisation for Economic Co-operation and Development (OECD) for the creation of a Test Guideline (TG). In the summary review report from the OECD, the peer review panel indicated the need to resolve an issue regarding the misclassification of 1-bromohexane. To this end, a rinsing operation intended to remove exposed chemicals was reviewed and the standard operating procedure (SOP) revised by J-TEC. Thereafter, in order to confirm general versatility of the revised SOP, a new validation management team was organized by the Japanese Center for the Validation of Alternative Methods (JaCVAM) to undertake a catch-up validation study that would compare the revised assay with similar in vitro skin irritation assays, per OECD TG No. 439 (2010). The catch-up validation and supplementary studies for LabCyte EPI-MODEL24 SIT using the revised SOPs were conducted at three laboratories. These results showed that the revised SOP of LabCyte EPI-MODEL24 SIT conformed more accurately to the classifications for skin irritation under the United Nations Globally Harmonised System of Classification and Labelling of Chemicals (UN GHS), thereby highlighting the importance of an optimized rinsing operation for the removal of exposed chemicals in obtaining consistent results from in vitro skin irritation assays. Copyright © 2013 John Wiley & Sons, Ltd.

  14. Validation of Alternative In Vitro Methods to Animal Testing: Concepts, Challenges, Processes and Tools.

    PubMed

    Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie

    This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, and the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems and software, up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods and non-testing approaches such as predictive computer models, up to entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. the EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised at the international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities. Validation of alternative methods is conducted through scientific studies assessing two key hypotheses, the reliability and relevance of the test method for a given purpose. Relevance encapsulates the scientific basis of the test method, its capacity to predict adverse effects in the "target system" (i.e. human health or the environment) as well as its applicability for the intended purpose. In this chapter we focus on the validation of non-animal in vitro alternative testing methods and review the concepts, challenges, processes and tools fundamental to the validation of in vitro methods intended for hazard testing of chemicals. We explore major challenges and peculiarities of validation in this area. Based on the notion that validation per se is a scientific endeavour that needs to adhere to key scientific principles, namely objectivity and appropriate choice of methodology, we examine basic aspects of study design and management, and provide illustrations of statistical approaches to describe the predictive performance of validated test methods as well as their reliability.

  15. Effect of blood sampling schedule and method of calculating the area under the curve on validity and precision of glycaemic index values.

    PubMed

    Wolever, Thomas M S

    2004-02-01

    To evaluate the suitability for glycaemic index (GI) calculations of blood sampling schedules and methods of calculating the area under the curve (AUC) different from those recommended, the GI values of five foods were determined by the recommended method (capillary blood glucose measured seven times over 2.0 h) in forty-seven normal subjects, and different calculations were performed on the same data set. The AUC was calculated in four ways: incremental AUC (iAUC; the recommended method), iAUC above the minimum blood glucose value (AUCmin), net AUC (netAUC) and iAUC including only the area before the glycaemic response curve cuts the baseline (AUCcut). In addition, iAUC was calculated using four different sets of fewer than seven blood samples. GI values were derived using each AUC calculation. The mean GI values of the foods varied significantly according to the method of calculating GI. The standard deviation of GI values calculated using iAUC (20.4) was lower than for six of the seven other methods, and significantly less (P<0.05) than that using netAUC (24.0). To be a valid index of food glycaemic response independent of subject characteristics, GI values in subjects should not be related to their AUC after oral glucose. However, calculating GI using AUCmin or fewer than seven blood samples resulted in significant (P<0.05) relationships between GI and mean AUC. It is concluded that, in subjects without diabetes, the recommended blood sampling schedule and method of AUC calculation yield more valid and/or more precise GI values than the seven other methods tested here. The only method whose results agreed reasonably well with the recommended method (i.e. within ±5%) was AUCcut.
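
    A sketch of the recommended iAUC calculation, the trapezoidal area above the fasting baseline with area below it ignored (approximated here by clipping increments at zero), and the resulting GI, using invented glucose curves:

```python
# Sketch: incremental AUC above the fasting baseline (clipping
# approximation), trapezoidal rule; GI relative to a glucose reference.
import numpy as np

t = np.array([0.0, 15, 30, 45, 60, 90, 120])               # minutes, 7 samples

def iauc(glucose):
    """Trapezoidal area of the above-baseline increments (clipped at 0)."""
    inc = np.maximum(glucose - glucose[0], 0.0)
    return np.sum((inc[1:] + inc[:-1]) / 2 * np.diff(t))

food = np.array([4.5, 6.0, 7.2, 6.8, 6.0, 5.0, 4.4])       # mmol/L, test food
reference = np.array([4.5, 7.0, 8.5, 7.8, 6.5, 5.2, 4.5])  # oral glucose

gi = 100 * iauc(food) / iauc(reference)
print(f"GI = {gi:.0f}")
```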

  16. Percent body fat estimations in college men using field and laboratory methods: A three-compartment model approach

    PubMed Central

    Moon, Jordan R; Tobkin, Sarah E; Smith, Abbie E; Roberts, Michael D; Ryan, Eric D; Dalbo, Vincent J; Lockwood, Chris M; Walter, Ashley A; Cramer, Joel T; Beck, Travis W; Stout, Jeffrey R

    2008-01-01

    Background Methods used to estimate percent body fat can be classified as a laboratory or field technique. However, the validity of these methods compared to multiple-compartment models has not been fully established. The purpose of this study was to determine the validity of field and laboratory methods for estimating percent fat (%fat) in healthy college-age men compared to the Siri three-compartment model (3C). Methods Thirty-one Caucasian men (22.5 ± 2.7 yrs; 175.6 ± 6.3 cm; 76.4 ± 10.3 kg) had their %fat estimated by bioelectrical impedance analysis (BIA) using the BodyGram™ computer program (BIA-AK) and population-specific equation (BIA-Lohman), near-infrared interactance (NIR) (Futrex® 6100/XL), four circumference-based military equations [Marine Corps (MC), Navy and Air Force (NAF), Army (A), and Friedl], air-displacement plethysmography (BP), and hydrostatic weighing (HW). Results All circumference-based military equations (MC = 4.7% fat, NAF = 5.2% fat, A = 4.7% fat, Friedl = 4.7% fat) along with NIR (NIR = 5.1% fat) produced an unacceptable total error (TE). Both laboratory methods produced acceptable TE values (HW = 2.5% fat; BP = 2.7% fat). The BIA-AK, and BIA-Lohman field methods produced acceptable TE values (2.1% fat). A significant difference was observed for the MC and NAF equations compared to both the 3C model and HW (p < 0.006). Conclusion Results indicate that the BP and HW are valid laboratory methods when compared to the 3C model to estimate %fat in college-age Caucasian men. When the use of a laboratory method is not feasible, BIA-AK, and BIA-Lohman are acceptable field methods to estimate %fat in this population. PMID:18426582
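
    The criterion and the validity statistic used above can be sketched as follows: the Siri three-compartment %fat computed from body density and total body water, and the total error (TE) of a field method against that criterion. The subject values are invented.

```python
# Sketch: Siri (1961) three-compartment %fat and total error (TE) of a
# field method against that criterion. Subject values are invented.
import numpy as np

def siri_3c(db, tbw_kg, mass_kg):
    """Siri 3C percent fat from body density and body-water fraction."""
    w = tbw_kg / mass_kg
    return (2.118 / db - 0.78 * w - 1.354) * 100

db = np.array([1.060, 1.072, 1.055])              # g/cm^3, from HW or BP
tbw = np.array([45.2, 48.1, 41.0])                # kg total body water
mass = np.array([76.0, 80.5, 70.2])               # kg body mass

criterion = siri_3c(db, tbw, mass)
field = criterion + np.array([1.8, -2.2, 2.5])    # a field method's estimates

te = np.sqrt(np.mean((field - criterion) ** 2))   # total error
print(f"criterion %fat = {np.round(criterion, 1)}, TE = {te:.1f}% fat")
```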

  17. Determination of Free and Total Carnitine and Choline in Infant Formulas and Adult Nutritional Products by UPLC/MS/MS: Single-Laboratory Validation, First Action 2014.04.

    PubMed

    Jing, Wei; Thompson, Joseph J; Jacobs, Wesley A; Salvati, Louis M

    2015-01-01

    A single-laboratory validation (SLV) has been performed for a method that simultaneously determines choline and carnitine in nutritional products by ultra performance LC (UPLC)/MS/MS. All 11 matrixes from the AOAC Stakeholder Panel on Infant Formula and Adult Nutritionals (SPIFAN) were tested. Depending on the sample preparation, either the added (free, with a water dilution and filtering) or total (after microwave digestion at 120°C in nitric acid and subsequent neutralization with ammonia) species can be detected. For nonmilk containing products, the total carnitine is almost always equal to the free carnitine. A substantial difference was noted between free and total choline in all products. All Standard Method Performance Requirements for carnitine and choline have been met. This report summarizes the material sent to the AOAC Expert Review Panel for SPIFAN nutrient methods for the review of this method, as well as some additional data from an internal validation. The method was granted AOAC First Action status for carnitine in 2014 (2014.04), but the choline data are also being presented. A comparison of choline results to those from other AOAC methods is given.

  18. The Arthroscopic Surgical Skill Evaluation Tool (ASSET)

    PubMed Central

    Koehler, Ryan J.; Amsdell, Simon; Arendt, Elizabeth A; Bisson, Leslie J; Braman, Jonathan P; Butler, Aaron; Cosgarea, Andrew J; Harner, Christopher D; Garrett, William E; Olson, Tyson; Warme, Winston J.; Nicandri, Gregg T.

    2014-01-01

    Background Surgeries employing arthroscopic techniques are among the most commonly performed in orthopaedic clinical practice; however, valid and reliable methods of assessing the arthroscopic skill of orthopaedic surgeons are lacking. Hypothesis The Arthroscopic Surgery Skill Evaluation Tool (ASSET) will demonstrate content validity, concurrent criterion-oriented validity, and reliability when used to assess the technical ability of surgeons performing diagnostic knee arthroscopy on cadaveric specimens. Study Design Cross-sectional study; Level of evidence, 3. Methods Content validity was determined by a group of seven experts using a Delphi process. Intra-articular performance of a right and left diagnostic knee arthroscopy was recorded for twenty-eight residents and two sports medicine fellowship-trained attending surgeons. Subject performance was assessed by two blinded raters using the ASSET. Concurrent criterion-oriented validity, inter-rater reliability, and test-retest reliability were evaluated. Results Content validity: The content development group identified 8 arthroscopic skill domains to evaluate using the ASSET. Concurrent criterion-oriented validity: Significant differences in total ASSET score (p<0.05) between novice, intermediate, and advanced experience groups were identified. Inter-rater reliability: The ASSET scores assigned by each rater were strongly correlated (r=0.91, p<0.01) and the intra-class correlation coefficient between raters for the total ASSET score was 0.90. Test-retest reliability: There was a significant correlation between ASSET scores for both procedures attempted by each individual (r=0.79, p<0.01). Conclusion The ASSET appears to be a useful, valid, and reliable method for assessing surgeon performance of diagnostic knee arthroscopy in cadaveric specimens. Studies are ongoing to determine its generalizability to other procedures as well as to the live OR and other simulated environments. PMID:23548808

  19. Tissue Preservation Assessment Preliminary Results

    NASA Technical Reports Server (NTRS)

    Globus, Ruth; Costes, Sylvain

    2017-01-01

    Pre-flight ground-based testing was done to prepare for the first Rodent Research mission validation flight, RR1 (Choi et al., 2016, PLoS ONE). We purified RNA and measured RIN values to assess the quality of the samples. For protein, we measured liver enzyme activities. We tested the preservation protocols and methods used to date. Here we present an overview of results related to tissue preservation from the RR1 validation mission and a summary of findings to date from investigators who received RR1 tissues via the Biospecimen Sharing Program.

  20. Evaluation of Fission Product Critical Experiments and Associated Biases for Burnup Credit Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Don; Rearden, Bradley T; Reed, Davis Allan

    2010-01-01

    One of the challenges associated with implementation of burnup credit is the validation of criticality calculations used in the safety evaluation; in particular, the availability and use of applicable critical experiment data. The purpose of the validation is to quantify the relationship between reality and calculated results. Validation and determination of bias and bias uncertainty require the identification of sets of critical experiments that are similar to the criticality safety models. A principal challenge for crediting fission products (FP) in a burnup credit safety evaluation is the limited availability of relevant FP critical experiments for bias and bias uncertainty determination. This paper provides an evaluation of the available critical experiments that include FPs, along with bounding, burnup-dependent estimates of FP biases generated by combining energy-dependent sensitivity data for a typical burnup credit application with the nuclear data uncertainty information distributed with SCALE 6. A method for determining separate bias and bias uncertainty values for individual FPs is presented, with illustrative results. Finally, an FP bias calculation method based on data adjustment techniques and reactivity sensitivity coefficients calculated with the SCALE sensitivity/uncertainty tools is presented, with some typical results. Using the methods described in this paper, the cross-section bias for a representative high-capacity spent fuel cask associated with the ENDF/B-VII nuclear data for the 16 most important stable or near-stable FPs is predicted to be no greater than 2% of the total worth of the 16 FPs, or less than 0.13% Δk/k.
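
    Combining sensitivity data with nuclear-data covariances, as described above, typically follows the "sandwich rule": the cross-section-induced uncertainty in k-eff is sqrt(S^T C S) for a sensitivity vector S and a relative covariance matrix C. A toy three-group example with invented numbers:

```python
# Sketch: sandwich rule for k-eff uncertainty from cross-section
# covariances. S and C are invented 3-group values, not SCALE output.
import numpy as np

S = np.array([0.012, -0.034, 0.008])       # dk/k per dsigma/sigma, 3 groups
C = np.array([[0.0400, 0.0100, 0.0000],    # relative covariance matrix
              [0.0100, 0.0900, 0.0200],
              [0.0000, 0.0200, 0.0025]])

dk_over_k = np.sqrt(S @ C @ S)
print(f"cross-section-induced uncertainty = {100 * dk_over_k:.3f} % dk/k")
```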
