Sample records for method validation demonstrated

  1. Validation of alternative methods for toxicity testing.

    PubMed Central

    Bruner, L H; Carr, G J; Curren, R D; Chamberlain, M

    1998-01-01

    Before nonanimal toxicity tests may be officially accepted by regulatory agencies, it is generally agreed that the validity of the new methods must be demonstrated in an independent, scientifically sound validation program. Validation has been defined as the demonstration of the reliability and relevance of a test method for a particular purpose. This paper provides a brief review of the development of the theoretical aspects of the validation process and updates current thinking about objectively testing the performance of an alternative method in a validation study. Validation of alternative methods for eye irritation testing is a specific example illustrating important concepts. Although discussion focuses on the validation of alternative methods intended to replace current in vivo toxicity tests, the procedures can be used to assess the performance of alternative methods intended for other uses. PMID:9599695

  2. A Design to Improve Internal Validity of Assessments of Teaching Demonstrations

    ERIC Educational Resources Information Center

    Bartsch, Robert A.; Engelhardt Bittner, Wendy M.; Moreno, Jesse E., Jr.

    2008-01-01

    Internal validity is important in assessing teaching demonstrations both for one's knowledge and for quality assessment demanded by outside sources. We describe a method to improve the internal validity of assessments of teaching demonstrations: a 1-group pretest-posttest design with alternative forms. This design is often more practical and…

  3. In-vitro Equilibrium Phosphate Binding Study of Sevelamer Carbonate by UV-Vis Spectrophotometry.

    PubMed

    Prasaja, Budi; Syabani, M Maulana; Sari, Endah; Chilmi, Uci; Cahyaningsih, Prawitasari; Kosasih, Theresia Weliana

    2018-06-12

    Sevelamer carbonate is a cross-linked polymeric amine; it is the active ingredient in Renvela® tablets. The US FDA recommends demonstrating bioequivalence for a generic sevelamer carbonate product using an in-vitro equilibrium binding study. A simple UV-vis spectrophotometry method was developed and validated for quantification of free phosphate to determine the binding parameter constant of sevelamer. The method validation demonstrated the specificity, limit of quantification, accuracy and precision of the measurements. The validated method has been successfully used to analyze samples in an in-vitro equilibrium binding study for demonstrating bioequivalence. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Validation in Support of Internationally Harmonised OECD Test Guidelines for Assessing the Safety of Chemicals.

    PubMed

    Gourmelon, Anne; Delrue, Nathalie

    Ten years have elapsed since the OECD published the Guidance Document on the validation and international regulatory acceptance of test methods for hazard assessment. Much experience has been gained since then in validation centres, in countries, and at the OECD on a variety of test methods that have been subjected to validation studies. This chapter reviews validation principles and highlights common features that appear, across studies, to be important for regulatory acceptance. Existing OECD-agreed validation principles will most likely remain relevant and applicable to the challenges associated with validating future test methods. Some adaptations may be needed to take into account the level of technology introduced in test systems, but demonstration of relevance and reliability will continue to play a central role as a prerequisite for regulatory acceptance. Demonstrating relevance will become more challenging for test methods that form part of a set of predictive tools and methods and do not stand alone. The OECD is keen to ensure that, as these concepts evolve, countries can continue to rely on valid methods and harmonised approaches for efficient testing and assessment of chemicals.

  5. Demonstrating Experimenter "Ineptitude" as a Means of Teaching Internal and External Validity

    ERIC Educational Resources Information Center

    Treadwell, Kimberli R.H.

    2008-01-01

    Internal and external validity are key concepts in understanding the scientific method and fostering critical thinking. This article describes a class demonstration of a "botched" experiment to teach validity to undergraduates. Psychology students (N = 75) completed assessments at the beginning of the semester, prior to and immediately following…

  6. How to test validity in orthodontic research: a mixed dentition analysis example.

    PubMed

    Donatelli, Richard E; Lee, Shin-Jae

    2015-02-01

    The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
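The leave-one-out procedure the study favors can be sketched for a simple linear prediction model. This is an illustrative example with synthetic data, not the authors' mixed dentition measurements:

```python
# Sketch of leave-one-out cross-validation (LOOCV) for a linear prediction
# model: each sample is predicted from a model fit to all remaining samples.
# Data below are synthetic; the variable roles are hypothetical.
import random
import statistics

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def loocv_rmse(xs, ys):
    """Leave each sample out once and predict it from the rest."""
    errs = []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = fit_line(tx, ty)
        errs.append((ys[i] - (a + b * xs[i])) ** 2)
    return statistics.fmean(errs) ** 0.5

random.seed(0)
xs = [random.uniform(20, 35) for _ in range(40)]         # e.g. predictor tooth widths
ys = [4.0 + 0.6 * x + random.gauss(0, 1.0) for x in xs]  # linear relation + noise

print(f"LOOCV RMSE: {loocv_rmse(xs, ys):.2f}")
```

Because every observation serves once as the validation case, LOOCV reuses a limited sample fully, which is consistent with the study's finding that it gave the smallest errors at small sample sizes.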

  7. Quantitative bioanalysis of strontium in human serum by inductively coupled plasma-mass spectrometry

    PubMed Central

    Somarouthu, Srikanth; Ohh, Jayoung; Shaked, Jonathan; Cunico, Robert L; Yakatan, Gerald; Corritori, Suzana; Tami, Joe; Foehr, Erik D

    2015-01-01

    Aim: A bioanalytical method using inductively-coupled plasma-mass spectrometry to measure endogenous levels of strontium in human serum was developed and validated. Results & methodology: This article details the experimental procedures used for the method development and validation thus demonstrating the application of the inductively-coupled plasma-mass spectrometry method for quantification of strontium in human serum samples. The assay was validated for specificity, linearity, accuracy, precision, recovery and stability. Significant endogenous levels of strontium are present in human serum samples ranging from 19 to 96 ng/ml with a mean of 34.6 ± 15.2 ng/ml (SD). Discussion & conclusion: Calibration procedures and sample pretreatment were simplified for high throughput analysis. The validation demonstrates that the method was sensitive, selective for quantification of strontium (88Sr) and is suitable for routine clinical testing of strontium in human serum samples. PMID:28031925

  8. External validation of a Cox prognostic model: principles and methods

    PubMed Central

    2013-01-01

    Background A prognostic model should not enter clinical practice unless it has been demonstrated that it performs a useful role. External validation denotes evaluation of model performance in a sample independent of that used to develop the model. Unlike for logistic regression models, external validation of Cox models is sparsely treated in the literature. Successful validation of a model means achieving satisfactory discrimination and calibration (prediction accuracy) in the validation sample. Validating Cox models is not straightforward because event probabilities are estimated relative to an unspecified baseline function. Methods We describe statistical approaches to external validation of a published Cox model according to the level of published information, specifically (1) the prognostic index only, (2) the prognostic index together with Kaplan-Meier curves for risk groups, and (3) the first two plus the baseline survival curve (the estimated survival function at the mean prognostic index across the sample). The most challenging task, requiring level 3 information, is assessing calibration, for which we suggest a method of approximating the baseline survival function. Results We apply the methods to two comparable datasets in primary breast cancer, treating one as derivation and the other as validation sample. Results are presented for discrimination and calibration. We demonstrate plots of survival probabilities that can assist model evaluation. Conclusions Our validation methods are applicable to a wide range of prognostic studies and provide researchers with a toolkit for external validation of a published Cox model. PMID:23496923
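Discrimination of a published prognostic index in a validation sample is commonly summarized with Harrell's concordance index. A minimal sketch with hypothetical survival data (not the breast-cancer datasets used in the paper):

```python
# Harrell's C-index: among usable pairs, the fraction where the patient with
# the higher prognostic index fails earlier. Times/events are invented.
def c_index(prog_index, time, event):
    conc = ties = usable = 0
    n = len(time)
    for i in range(n):
        for j in range(i + 1, n):
            if time[i] == time[j]:
                continue
            first, second = (i, j) if time[i] < time[j] else (j, i)
            if not event[first]:       # shorter follow-up censored: pair unusable
                continue
            usable += 1
            if prog_index[first] > prog_index[second]:
                conc += 1
            elif prog_index[first] == prog_index[second]:
                ties += 1
    return (conc + 0.5 * ties) / usable

pi    = [2.1, 0.4, 0.3, 1.9, 0.8, 2.5]   # hypothetical prognostic index values
time  = [12, 30, 60, 18, 55, 9]          # months of follow-up
event = [1, 1, 0, 1, 0, 1]               # 1 = event observed, 0 = censored
print(f"C-index: {c_index(pi, time, event):.3f}")
```

A C-index of 0.5 indicates no discrimination and 1.0 perfect ranking; calibration, as the paper notes, is the harder part and needs the baseline survival curve.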

  9. A Case Study for Probabilistic Methods Validation (MSFC Center Director's Discretionary Fund, Project No. 94-26)

    NASA Technical Reports Server (NTRS)

    Price, J. M.; Ortega, R.

    1998-01-01

    Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of the approach must be demonstrated to encourage its acceptance as a viable design and analysis tool for estimating structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure-mode level.
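The basic principle behind such probabilistic analyses can be sketched by propagating a random material driver through a simple structural model and estimating a failure probability by Monte Carlo. The beam model, load, and deflection limit below are illustrative, not from the MSFC study:

```python
# Monte Carlo reliability sketch: treat elastic modulus E as a random driver,
# push it through a cantilever deflection model, and count limit exceedances.
import random

random.seed(1)
P, L, I = 1000.0, 2.0, 8.0e-6     # load [N], length [m], second moment [m^4]
LIMIT = 0.0050                    # allowable tip deflection [m]

def tip_deflection(E):
    """Cantilever tip deflection: delta = P*L^3 / (3*E*I)."""
    return P * L ** 3 / (3.0 * E * I)

n, failures = 100_000, 0
for _ in range(n):
    E = random.gauss(70e9, 7e9)   # elastic modulus ~ N(70 GPa, 10% CV)
    if tip_deflection(E) > LIMIT:
        failures += 1

print(f"Estimated P(failure) = {failures / n:.4f}")
```

Sensitivity analysis at the component/failure-mode level then asks how this estimate shifts as the distribution assigned to each driver is perturbed.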

  10. Developmental validation of a Nextera XT mitogenome Illumina MiSeq sequencing method for high-quality samples.

    PubMed

    Peck, Michelle A; Sturk-Andreaggi, Kimberly; Thomas, Jacqueline T; Oliver, Robert S; Barritt-Ross, Suzanne; Marshall, Charla

    2018-05-01

    Generating mitochondrial genome (mitogenome) data from reference samples in a rapid and efficient manner is critical to harnessing the greater power of discrimination of the entire mitochondrial DNA (mtDNA) marker. The method of long-range target enrichment, Nextera XT library preparation, and Illumina sequencing on the MiSeq is a well-established technique for generating mitogenome data from high-quality samples. To this end, a validation was conducted for this mitogenome method processing up to 24 samples simultaneously along with analysis in the CLC Genomics Workbench and utilizing the AQME (AFDIL-QIAGEN mtDNA Expert) tool to generate forensic profiles. This validation followed the Federal Bureau of Investigation's Quality Assurance Standards (QAS) for forensic DNA testing laboratories and the Scientific Working Group on DNA Analysis Methods (SWGDAM) validation guidelines. The evaluation of control DNA, non-probative samples, blank controls, mixtures, and nonhuman samples demonstrated the validity of this method. Specifically, the sensitivity was established at ≥25 pg of nuclear DNA input for accurate mitogenome profile generation. Unreproducible low-level variants were observed in samples with low amplicon yields. Further, variant quality was shown to be a useful metric for identifying sequencing error and crosstalk. Success of this method was demonstrated with a variety of reference sample substrates and extract types. These studies further demonstrate the advantages of using NGS techniques by highlighting the quantitative nature of heteroplasmy detection. The results presented herein from more than 175 samples processed in ten sequencing runs, show this mitogenome sequencing method and analysis strategy to be valid for the generation of reference data. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Standard Setting Methods for Pass/Fail Decisions on High-Stakes Objective Structured Clinical Examinations: A Validity Study.

    PubMed

    Yousuf, Naveed; Violato, Claudio; Zuberi, Rukhsana W

    2015-01-01

    CONSTRUCT: Authentic standard setting methods will demonstrate high convergent validity evidence of their outcomes, that is, cutoff scores and pass/fail decisions, with most other methods when compared with each other. The objective structured clinical examination (OSCE) was established for valid, reliable, and objective assessment of clinical skills in health professions education. Various standard setting methods have been proposed to identify objective, reliable, and valid cutoff scores on OSCEs. These methods may identify different cutoff scores for the same examinations. Identification of valid and reliable cutoff scores for OSCEs remains an important issue and a challenge. Thirty OSCE stations administered at least twice in the years 2010-2012 to 393 medical students in Years 2 and 3 at Aga Khan University are included. Psychometric properties of the scores are determined. Cutoff scores and pass/fail decisions of the Wijnen, Cohen, Mean-1.5SD, Mean-1SD, Angoff, borderline group, and borderline regression (BL-R) methods are compared with each other and with three variants of cluster analysis using repeated measures analysis of variance and Cohen's kappa. The mean psychometric indices on the 30 OSCE stations are reliability coefficient = 0.76 (SD = 0.12); standard error of measurement = 5.66 (SD = 1.38); coefficient of determination = 0.47 (SD = 0.19), and intergrade discrimination = 7.19 (SD = 1.89). The BL-R and Wijnen methods show the highest convergent validity evidence among the methods on the defined criteria. Angoff and Mean-1.5SD demonstrated the least convergent validity evidence. The three cluster variants showed substantial convergent validity with the borderline methods. Although the Wijnen method showed a high level of convergent validity, it lacks the theoretical strength to be used for competency-based assessments. The BL-R method showed the highest convergent validity evidence with the other standard setting methods used in the present study. We also found that cluster analysis using the mean method can be used for quality assurance of borderline methods. These findings should be further confirmed by studies in other settings.
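The borderline regression (BL-R) method that performed best here can be sketched in a few lines: regress station checklist scores on examiners' global ratings, then read off the predicted score at the borderline grade. The ratings and scores below are hypothetical:

```python
# Borderline regression standard setting sketch: the cutoff is the fitted
# checklist score at the "borderline" point of the global rating scale.
import statistics

def blr_cutoff(global_ratings, checklist_scores, borderline=2):
    """OLS regression of score on rating, evaluated at the borderline grade."""
    mx = statistics.fmean(global_ratings)
    my = statistics.fmean(checklist_scores)
    b = sum((x - mx) * (y - my) for x, y in zip(global_ratings, checklist_scores)) \
        / sum((x - mx) ** 2 for x in global_ratings)
    a = my - b * mx
    return a + b * borderline

# global rating scale: 1=fail, 2=borderline, 3=pass, 4=good, 5=excellent
ratings = [1, 2, 2, 3, 3, 4, 4, 5, 5, 3]
scores  = [35, 48, 52, 60, 58, 71, 69, 84, 88, 62]
print(f"BL-R cutoff score: {blr_cutoff(ratings, scores):.1f}")
```

Unlike the borderline group method, which averages only the scores of borderline-rated candidates, BL-R uses every candidate's data, which is one reason it tends to be more stable on small stations.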

  12. The LEAP™ Gesture Interface Device and Take-Home Laparoscopic Simulators: A Study of Construct and Concurrent Validity.

    PubMed

    Partridge, Roland W; Brown, Fraser S; Brennan, Paul M; Hennessey, Iain A M; Hughes, Mark A

    2016-02-01

    To assess the potential of the LEAP™ infrared motion tracking device to map laparoscopic instrument movement in a simulated environment. Simulator training is optimized when augmented by objective performance feedback. We explore the potential LEAP has to provide this in a way compatible with affordable take-home simulators. LEAP and the previously validated InsTrac visual tracking tool mapped expert and novice performances of a standardized simulated laparoscopic task. Ability to distinguish between the 2 groups (construct validity) and correlation between techniques (concurrent validity) were the primary outcome measures. Forty-three expert and 38 novice performances demonstrated significant differences in LEAP-derived metrics for instrument path distance (P < .001), speed (P = .002), acceleration (P < .001), motion smoothness (P < .001), and distance between the instruments (P = .019). Only instrument path distance demonstrated a correlation between LEAP and InsTrac tracking methods (novices: r = .663, P < .001; experts: r = .536, P < .001). Consistency of LEAP tracking was poor (average % time hands not tracked: 31.9%). The LEAP motion device is able to track the movement of hands using instruments in a laparoscopic box simulator. Construct validity is demonstrated by its ability to distinguish novice from expert performances. Only time and instrument path distance demonstrated concurrent validity with an existing tracking method however. A number of limitations to the tracking method used by LEAP have been identified. These need to be addressed before it can be considered an alternative to visual tracking for the delivery of objective performance metrics in take-home laparoscopic simulators. © The Author(s) 2015.
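Metrics of the kind reported here (instrument path distance, mean speed) are derived from sampled 3-D positions. A minimal sketch with made-up coordinates, not actual LEAP output:

```python
# Path-distance and mean-speed metrics from positions sampled at a fixed
# rate, as motion-tracking simulator feedback typically computes them.
import math

def path_metrics(points, dt):
    """Total path length and mean speed from positions sampled every dt seconds."""
    dist = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    return dist, dist / (dt * (len(points) - 1))

samples = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0), (2.0, 1.0, 0.5), (2.5, 1.0, 1.0)]  # cm
dt = 0.05   # 20 Hz sampling
dist, speed = path_metrics(samples, dt)
print(f"path = {dist:.2f} cm, mean speed = {speed:.1f} cm/s")
```

Dropped frames (the 31.9% untracked time reported) bias such metrics downward, which is why tracking consistency matters before these numbers can be used for assessment.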

  13. Method validation for chemical composition determination by electron microprobe with wavelength dispersive spectrometer

    NASA Astrophysics Data System (ADS)

    Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.

    2016-07-01

    The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. A key benefit of analytical method validation is the confidence it provides that the reported measurement results satisfy a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis is used across a wide range of fields, mainly in materials science and for impurity determinations in geological, biological, and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house method validation for elemental composition determination by WDS are shown. SRM 482, a series of Cu-Au binary alloys of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently required for accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness, and uncertainty. A model for uncertainty estimation was proposed, including systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.
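An uncertainty model of the kind described, combining random and systematic contributions, typically sums standard uncertainties in quadrature and applies a coverage factor. The component values below are invented for illustration:

```python
# Combined and expanded uncertainty sketch: u_c = sqrt(sum u_i^2), U = k*u_c
# with coverage factor k = 2 (~95% coverage). Component values are invented.
import math

components = {
    "repeatability (random)": 0.15,   # mass-fraction %, Type A
    "reference material":     0.10,   # certified-value uncertainty, Type B
    "calibration/drift":      0.08,   # systematic contribution, Type B
}
u_c = math.sqrt(sum(u * u for u in components.values()))
U = 2 * u_c
print(f"combined u = {u_c:.3f} %, expanded U (k=2) = {U:.3f} %")
```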

  14. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy, based on a current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  15. Simulation-based training for prostate surgery.

    PubMed

    Khan, Raheej; Aydin, Abdullatif; Khan, Muhammad Shamim; Dasgupta, Prokar; Ahmed, Kamran

    2015-10-01

    To identify and review the currently available simulators for prostate surgery and to explore the evidence supporting their validity for training purposes. A review of the literature between 1999 and 2014 was performed. The search terms included a combination of urology, prostate surgery, robotic prostatectomy, laparoscopic prostatectomy, transurethral resection of the prostate (TURP), simulation, virtual reality, animal model, human cadavers, training, assessment, technical skills, validation and learning curves. Furthermore, relevant abstracts from the American Urological Association, European Association of Urology, British Association of Urological Surgeons and World Congress of Endourology meetings, between 1999 and 2013, were included. Only studies related to prostate surgery simulators were included; studies regarding other urological simulators were excluded. A total of 22 studies that carried out a validation study were identified. Five validated models and/or simulators were identified for TURP, one for photoselective vaporisation of the prostate, two for holmium enucleation of the prostate, three for laparoscopic radical prostatectomy (LRP) and four for robot-assisted surgery. Of the TURP simulators, all five have demonstrated content validity, three face validity and four construct validity. The GreenLight laser simulator has demonstrated face, content and construct validities. The Kansai HoLEP Simulator has demonstrated face and content validity whilst the UroSim HoLEP Simulator has demonstrated face, content and construct validity. All three animal models for LRP have been shown to have construct validity whilst the chicken skin model was also content valid. Only two robotic simulators were identified with relevance to robot-assisted laparoscopic prostatectomy, both of which demonstrated construct validity. A wide range of different simulators are available for prostate surgery, including synthetic bench models, virtual-reality platforms, animal models, human cadavers, distributed simulation and advanced training programmes and modules. The currently validated simulators can be used by healthcare organisations to provide supplementary training sessions for trainee surgeons. Further research should be conducted to validate simulated environments, to determine which simulators have greater efficacy than others and to assess the cost-effectiveness of the simulators and the transferability of skills learnt. With surgeons investigating new possibilities for easily reproducible and valid methods of training, simulation offers great scope for implementation alongside traditional methods of training. © 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.

  16. Bioanalytical method development and validation for the determination of glycine in human cerebrospinal fluid by ion-pair reversed-phase liquid chromatography-tandem mass spectrometry.

    PubMed

    Jiang, Jian; James, Christopher A; Wong, Philip

    2016-09-05

    A LC-MS/MS method has been developed and validated for the determination of glycine in human cerebrospinal fluid (CSF). The validated method used artificial cerebrospinal fluid as a surrogate matrix for calibration standards. The calibration curve range for the assay was 100-10,000 ng/mL, and ¹³C₂,¹⁵N-glycine was used as an internal standard (IS). Pre-validation experiments were performed to demonstrate parallelism with surrogate matrix and standard addition methods. The mean endogenous glycine concentration in a pooled human CSF determined on three days by using artificial CSF as a surrogate matrix and the method of standard addition was found to be 748 ± 30.6 and 768 ± 18.1 ng/mL, respectively. A percentage difference of -2.6% indicated that artificial CSF could be used as a surrogate calibration matrix for the determination of glycine in human CSF. Quality control (QC) samples, except the lower limit of quantitation (LLOQ) QC and low QC samples, were prepared by spiking glycine into aliquots of pooled human CSF sample. The low QC sample was prepared from a separate pooled human CSF sample containing low endogenous glycine concentrations, while the LLOQ QC sample was prepared in artificial CSF. Standard addition was used extensively to evaluate matrix effects during validation. The validated method was used to determine the endogenous glycine concentrations in human CSF samples. Incurred sample reanalysis demonstrated reproducibility of the method. Copyright © 2016 Elsevier B.V. All rights reserved.
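The standard addition calculation used here to estimate an endogenous concentration can be sketched simply: fit instrument response against added concentration and extrapolate to zero response. The responses below are hypothetical, chosen to land near the endogenous level the abstract reports:

```python
# Standard addition sketch: the endogenous concentration is the magnitude of
# the x-intercept of the response-vs-added-concentration line.
import statistics

def standard_addition(added, response):
    """Endogenous concentration from the x-intercept of the fitted line."""
    mx, my = statistics.fmean(added), statistics.fmean(response)
    slope = sum((x - mx) * (y - my) for x, y in zip(added, response)) / \
            sum((x - mx) ** 2 for x in added)
    intercept = my - slope * mx
    return intercept / slope

added    = [0, 250, 500, 1000, 2000]        # ng/mL glycine spiked (hypothetical)
response = [7.5, 10.1, 12.4, 17.6, 27.5]    # instrument response (arbitrary units)
print(f"endogenous concentration ~ {standard_addition(added, response):.0f} ng/mL")
```

Because the calibration happens in the authentic matrix itself, standard addition serves as the benchmark against which the surrogate-matrix approach is judged (the -2.6% difference reported).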

  17. The selection of air traffic control specialists : two studies demonstrating methods to insure an accurate validity coefficient for selection devices.

    DOT National Transportation Integrated Search

    1979-03-01

    There are several conditions that can influence the calculation of the statistical validity of a test battery such as that used to selected Air Traffic Control Specialists. Two conditions of prime importance to statistical validity are recruitment pr...

  18. Verification of an Analytical Method for Measuring Crystal Nucleation Rates in Glasses from DTA Data

    NASA Technical Reports Server (NTRS)

    Ranasinghe, K. S.; Wei, P. F.; Kelton, K. F.; Ray, C. S.; Day, D. E.

    2004-01-01

    A recently proposed analytical (DTA) method for estimating the nucleation rates in glasses has been evaluated by comparing experimental data with numerically computed nucleation rates for a model lithium disilicate glass. The time- and temperature-dependent nucleation rates were predicted using the model and compared with those values from an analysis of numerically calculated DTA curves. The validity of the numerical approach was demonstrated earlier by a comparison with experimental data. The excellent agreement between the nucleation rates from the model calculations and from the computer-generated DTA data demonstrates the validity of the proposed analytical DTA method.

  19. Methodology, Methods, and Metrics for Testing and Evaluating Augmented Cognition Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greitzer, Frank L.

    The augmented cognition research community seeks cognitive neuroscience-based solutions to improve warfighter performance by applying and managing mitigation strategies to reduce workload and improve the throughput and quality of decisions. The focus of augmented cognition mitigation research is to define, demonstrate, and exploit neuroscience and behavioral measures that support inferences about the warfighter’s cognitive state that prescribe the nature and timing of mitigation. A research challenge is to develop valid evaluation methodologies, metrics and measures to assess the impact of augmented cognition mitigations. Two considerations are external validity, which is the extent to which the results apply to operational contexts; and internal validity, which reflects the reliability of performance measures and the conclusions based on analysis of results. The scientific rigor of the research methodology employed in conducting empirical investigations largely affects the validity of the findings. External validity requirements also compel us to demonstrate operational significance of mitigations. Thus it is important to demonstrate effectiveness of mitigations under specific conditions. This chapter reviews some cognitive science and methodological considerations in designing augmented cognition research studies and associated human performance metrics and analysis methods to assess the impact of augmented cognition mitigations.

  20. Assessing Risk for Sexual Offenders in New Zealand: Development and Validation of a Computer-Scored Risk Measure

    ERIC Educational Resources Information Center

    Skelton, Alexander; Riley, David; Wales, David; Vess, James

    2006-01-01

    A growing research base supports the predictive validity of actuarial methods of risk assessment with sexual offenders. These methods use clearly defined variables with demonstrated empirical association with re-offending. The advantages of actuarial measures for screening large numbers of offenders quickly and economically are further enhanced…

  1. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, and thus have the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
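The total-error idea behind these "fit for purpose" criteria can be sketched with a simple point-estimate check (this is not the generalized pivotal quantity or tolerance-interval procedure itself, just the underlying criterion): a method is acceptable when bias plus a multiple of the precision stays within the acceptance limit. The recovery values are hypothetical:

```python
# Point-estimate total-error check: |bias| + 2*SD approximates how far an
# individual future result may fall from the true value; compare it to the
# acceptance limit lambda. Validation-run recoveries below are invented.
import statistics

def total_error_ok(results, true_value, lam):
    bias = statistics.fmean(results) - true_value
    sd = statistics.stdev(results)
    return abs(bias) + 2 * sd <= lam

runs = [99.1, 100.4, 98.8, 101.0, 99.7, 100.2]   # % recovery, hypothetical
print(total_error_ok(runs, 100.0, lam=5.0))
```

The formal procedures in the paper replace this point estimate with interval-based statistics (tolerance intervals, pivotal quantities) precisely so that the Type I error, and hence the consumer's risk, is controlled.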

  2. Cleaning verification: A five parameter study of a Total Organic Carbon method development and validation for the cleaning assessment of residual detergents in manufacturing equipment.

    PubMed

    Li, Xue; Ahmad, Imad A Haidar; Tam, James; Wang, Yan; Dao, Gina; Blasko, Andrei

    2018-02-05

    A Total Organic Carbon (TOC) based analytical method to quantitate trace residues of clean-in-place (CIP) detergents CIP100® and CIP200® on the surfaces of pharmaceutical manufacturing equipment was developed and validated. Five factors affecting the development and validation of the method were identified: diluent composition, diluent volume, extraction method, location for TOC sample preparation, and oxidant flow rate. Key experimental parameters were optimized to minimize contamination and to improve the sensitivity, recovery, and reliability of the method. The optimized concentration of the phosphoric acid in the swabbing solution was 0.05 M, and the optimal volume of the sample solution was 30 mL. The swab extraction method was 1 min of sonication. The use of a clean room, as compared to an isolated lab environment, was not required for method validation. The method was demonstrated to be linear with a correlation coefficient (R) of 0.9999. The average recoveries from stainless steel surfaces at multiple spike levels were >90%. The repeatability and intermediate precision results were ≤5% across the 2.2-6.6 ppm range (50-150% of the target maximum carry over, MACO, limit). The method was also shown to be sensitive, with a detection limit (DL) of 38 ppb and a quantitation limit (QL) of 114 ppb. The method validation demonstrated that the developed method is suitable for its intended use. The methodology developed in this study is generally applicable to the cleaning verification of any organic detergents used for the cleaning of pharmaceutical manufacturing equipment made of electropolished stainless steel material. Copyright © 2017 Elsevier B.V. All rights reserved.
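Detection and quantitation limits of the kind reported are commonly derived from a calibration line via the DL = 3.3σ/S and QL = 10σ/S relations (σ = residual standard deviation, S = slope). A sketch with hypothetical TOC calibration data, not the cited study's numbers:

```python
# DL/QL from a calibration line: sigma is the residual standard deviation of
# the fit, S its slope. Calibration data below are hypothetical.
import statistics

conc = [0.5, 1.0, 2.0, 4.0, 6.0]              # ppm carbon
resp = [0.052, 0.101, 0.198, 0.402, 0.603]    # detector response (arbitrary)

mx, my = statistics.fmean(conc), statistics.fmean(resp)
S = sum((x - mx) * (y - my) for x, y in zip(conc, resp)) / \
    sum((x - mx) ** 2 for x in conc)
a = my - S * mx
resid = [y - (a + S * x) for x, y in zip(conc, resp)]
sigma = (sum(r * r for r in resid) / (len(conc) - 2)) ** 0.5   # n-2 dof

print(f"DL = {3.3 * sigma / S * 1000:.0f} ppb, QL = {10 * sigma / S * 1000:.0f} ppb")
```

With the same calibration, tightening the residual scatter lowers both limits proportionally, which is why minimizing background contamination was central to the sensitivity achieved here.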

  3. Validation of a HLA-A2 tetramer flow cytometric method, IFNgamma real time RT-PCR, and IFNgamma ELISPOT for detection of immunologic response to gp100 and MelanA/MART-1 in melanoma patients

    PubMed Central

    Xu, Yuanxin; Theobald, Valerie; Sung, Crystal; DePalma, Kathleen; Atwater, Laura; Seiger, Keirsten; Perricone, Michael A; Richards, Susan M

    2008-01-01

Background HLA-A2 tetramer flow cytometry, IFNγ real time RT-PCR and IFNγ ELISPOT assays are commonly used as surrogate immunological endpoints for cancer immunotherapy. While these are often used as research assays to assess patients' immunologic responses, assay validation is necessary to ensure reliable and reproducible results and enable more accurate data interpretation. Here we describe a rigorous validation approach for each of these assays prior to their use for clinical sample analysis. Methods Standard operating procedures for each assay were established. The HLA-A2 (A*0201) tetramer assay specific for gp100 209(210M) and MART-1 26-35(27L), IFNγ real time RT-PCR and ELISPOT methods were validated using tumor infiltrating lymphocyte (TIL) cell lines isolated from HLA-A2 melanoma patients. TIL cells, specific for gp100 (TIL 1520) or MART-1 (TIL 1143 and TIL 1235), were used alone or spiked into cryopreserved HLA-A2 PBMC from healthy subjects. TIL/PBMC were stimulated with peptides (gp100 209, gp100 pool, MART-1 27-35, or influenza-M1 and negative control peptide HIV) to further assess assay performance characteristics for the real time RT-PCR and ELISPOT methods. Validation parameters included specificity, accuracy, precision, linearity of dilution, limit of detection (LOD) and limit of quantification (LOQ). In addition, distribution was established in normal HLA-A2 PBMC samples. Reference ranges for assay controls were established. Results The validation process demonstrated that the HLA-A2 tetramer, IFNγ real time RT-PCR, and IFNγ ELISPOT assays were highly specific for each antigen, with minimal cross-reactivity between gp100 and MelanA/MART-1. The assays were sensitive; detection could be achieved at as few as 1/4545–1/6667 cells by tetramer analysis, 1/50,000 cells by real time RT-PCR, and 1/10,000–1/20,000 by ELISPOT. The assays met criteria for precision with %CV < 20% (except ELISPOT using high PBMC numbers, with %CV < 25%), although flow cytometric assays and cell-based functional assays are known to have high assay variability. Most importantly, the assays were demonstrated to be effective for their intended use. A positive IFNγ response (by RT-PCR and ELISPOT) to gp100 was demonstrated in PBMC from 3 melanoma patients. Another patient showed a positive MART-1 response measured by all 3 validated methods. Conclusion Our results demonstrated that the tetramer flow cytometry assay, IFNγ real-time RT-PCR, and IFNγ ELISPOT met validation criteria. Validation approaches provide a guide for others in the field to validate these and other similar assays for assessment of patient T cell responses. These methods can be applied not only to cancer vaccines but to other therapeutic proteins as part of immunogenicity and safety analyses. PMID:18945350

  4. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    PubMed

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
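The variability the authors highlight enters through the fold assignment itself, which is random: repeating m-fold cross-validation with a different shuffle gives a different partition, so any fold-dependent quantity (such as the number of variables SCAD or the Adaptive Lasso selects at the CV-chosen smoothing parameter) is a random variable. A stdlib sketch of just that partitioning step, with no model fitting:

```python
import random

def mfold_indices(n, m, seed):
    """Randomly partition indices 0..n-1 into m folds, as a typical
    m-fold cross-validation implementation does."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::m] for i in range(m)]

# Five repeated 10-fold splits of 100 observations
splits = [mfold_indices(100, 10, seed) for seed in range(5)]

# Every split is a true partition: each object is validated exactly once...
for folds in splits:
    assert sorted(i for fold in folds for i in fold) == list(range(100))

# ...but the partitions themselves differ from run to run, which is why a
# single run of m-fold cross-validation can be misleading.
assert splits[0] != splits[1]
```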

  5. A practical approach for linearity assessment of calibration curves under the International Union of Pure and Applied Chemistry (IUPAC) guidelines for an in-house validation of method of analysis.

    PubMed

    Sanagi, M Marsin; Nasir, Zalilah; Ling, Susie Lu; Hermawan, Dadan; Ibrahim, Wan Aini Wan; Naim, Ahmedy Abu

    2010-01-01

    Linearity assessment as required in method validation has always been subject to different interpretations and definitions by various guidelines and protocols. However, there are very limited applicable implementation procedures that can be followed by a laboratory chemist in assessing linearity. Thus, this work proposes a simple method for linearity assessment in method validation by a regression analysis that covers experimental design, estimation of the parameters, outlier treatment, and evaluation of the assumptions according to the International Union of Pure and Applied Chemistry guidelines. The suitability of this procedure was demonstrated by its application to an in-house validation for the determination of plasticizers in plastic food packaging by GC.
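A regression-based linearity assessment of the kind described combines a least-squares fit with residual inspection for outlier treatment. A minimal sketch, using standardized residuals with an illustrative |r| > 2 cutoff (the IUPAC procedure in the paper is more elaborate):

```python
import math

def fit_and_flag_outliers(x, y, cutoff=2.0):
    """Least-squares calibration fit plus a standardized-residual
    outlier screen; returns slope, intercept and flagged indices."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    resid = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
    s = math.sqrt(sum(e * e for e in resid) / (n - 2))
    flagged = [i for i, e in enumerate(resid) if abs(e / s) > cutoff]
    return slope, intercept, flagged

# Hypothetical calibration with one gross error at index 3
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1.0, 2.1, 2.9, 9.0, 5.1, 5.9, 7.0, 8.1]
slope, intercept, flagged = fit_and_flag_outliers(x, y)
```

After removing flagged points the fit would be repeated, which is the usual iterate-and-refit outlier treatment in calibration work.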

  6. Patterns of Cognitive Strengths and Weaknesses: Identification Rates, Agreement, and Validity for Learning Disabilities Identification

    PubMed Central

    Miciak, Jeremy; Fletcher, Jack M.; Stuebing, Karla; Vaughn, Sharon; Tolar, Tammy D.

    2014-01-01

Purpose Few empirical investigations have evaluated LD identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability and validity of two proposed PSW methods: the concordance/discordance method (C/DM) and the cross-battery assessment (XBA) method. Methods Cognitive assessment data for 139 adolescents demonstrating inadequate response to intervention were used to empirically classify participants as meeting or not meeting PSW LD identification criteria using the two approaches, permitting an analysis of: (1) LD identification rates; (2) agreement between methods; and (3) external validity. Results LD identification rates varied between the two methods depending upon the cut point for low achievement, with low agreement for LD identification decisions. Comparisons of groups that met and did not meet LD identification criteria on external academic variables were largely null, raising questions of external validity. Conclusions This study found low agreement and little evidence of validity for LD identification decisions based on PSW methods. An alternative may be to use multiple measures of academic achievement to guide intervention. PMID:24274155

  7. Learning to recognize rat social behavior: Novel dataset and cross-dataset application.

    PubMed

    Lorbach, Malte; Kyriakou, Elisavet I; Poppe, Ronald; van Dam, Elsbeth A; Noldus, Lucas P J J; Veltkamp, Remco C

    2018-04-15

    Social behavior is an important aspect of rodent models. Automated measuring tools that make use of video analysis and machine learning are an increasingly attractive alternative to manual annotation. Because machine learning-based methods need to be trained, it is important that they are validated using data from different experiment settings. To develop and validate automated measuring tools, there is a need for annotated rodent interaction datasets. Currently, the availability of such datasets is limited to two mouse datasets. We introduce the first, publicly available rat social interaction dataset, RatSI. We demonstrate the practical value of the novel dataset by using it as the training set for a rat interaction recognition method. We show that behavior variations induced by the experiment setting can lead to reduced performance, which illustrates the importance of cross-dataset validation. Consequently, we add a simple adaptation step to our method and improve the recognition performance. Most existing methods are trained and evaluated in one experimental setting, which limits the predictive power of the evaluation to that particular setting. We demonstrate that cross-dataset experiments provide more insight in the performance of classifiers. With our novel, public dataset we encourage the development and validation of automated recognition methods. We are convinced that cross-dataset validation enhances our understanding of rodent interactions and facilitates the development of more sophisticated recognition methods. Combining them with adaptation techniques may enable us to apply automated recognition methods to a variety of animals and experiment settings. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    PubMed

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to the high stakes in the clinical setting, it is critical to calculate the effect of these assumptions on the CFD simulation results. However, existing CFD validation approaches do not quantify error in the simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
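Parsing model error out of an observed simulation-experiment discrepancy is commonly done in the spirit of ASME V&V 20: the comparison error is E = S - D, and the numerical, input, and experimental uncertainties combine in quadrature into a validation uncertainty that bounds the model error. A hedged sketch of that bookkeeping (the paper's exact decomposition may differ; all numbers below are illustrative):

```python
import math

def model_error_interval(sim, exp, u_num, u_input, u_exp):
    """ASME V&V 20-style estimate: comparison error E = S - D and
    validation uncertainty u_val = sqrt(u_num^2 + u_input^2 + u_exp^2);
    the model error is then estimated to lie within E +/- u_val."""
    e = sim - exp
    u_val = math.sqrt(u_num ** 2 + u_input ** 2 + u_exp ** 2)
    return e - u_val, e + u_val

# Hypothetical velocity magnitudes (m/s) at one validation point
lo, hi = model_error_interval(sim=1.05, exp=1.00,
                              u_num=0.02, u_input=0.01, u_exp=0.03)
```

The interval is centered on the raw discrepancy; only when the other error sources are small does the discrepancy itself approximate the model error.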

  9. Objective assessment based on motion-related metrics and technical performance in laparoscopic suturing.

    PubMed

    Sánchez-Margallo, Juan A; Sánchez-Margallo, Francisco M; Oropesa, Ignacio; Enciso, Silvia; Gómez, Enrique J

    2017-02-01

    The aim of this study is to present the construct and concurrent validity of a motion-tracking method of laparoscopic instruments based on an optical pose tracker and determine its feasibility as an objective assessment tool of psychomotor skills during laparoscopic suturing. A group of novice ([Formula: see text] laparoscopic procedures), intermediate (11-100 laparoscopic procedures) and experienced ([Formula: see text] laparoscopic procedures) surgeons performed three intracorporeal sutures on an ex vivo porcine stomach. Motion analysis metrics were recorded using the proposed tracking method, which employs an optical pose tracker to determine the laparoscopic instruments' position. Construct validation was measured for all 10 metrics across the three groups and between pairs of groups. Concurrent validation was measured against a previously validated suturing checklist. Checklists were completed by two independent surgeons over blinded video recordings of the task. Eighteen novices, 15 intermediates and 11 experienced surgeons took part in this study. Execution time and path length travelled by the laparoscopic dissector presented construct validity. Experienced surgeons required significantly less time ([Formula: see text]), travelled less distance using both laparoscopic instruments ([Formula: see text]) and made more efficient use of the work space ([Formula: see text]) compared with novice and intermediate surgeons. Concurrent validation showed strong correlation between both the execution time and path length and the checklist score ([Formula: see text] and [Formula: see text], [Formula: see text]). The suturing performance was successfully assessed by the motion analysis method. Construct and concurrent validity of the motion-based assessment method has been demonstrated for the execution time and path length metrics. This study demonstrates the efficacy of the presented method for objective evaluation of psychomotor skills in laparoscopic suturing. 
However, this method does not take into account the quality of the suture. Thus, future works will focus on developing new methods combining motion analysis and qualitative outcome evaluation to provide a complete performance assessment to trainees.

  10. Valid methods: the quality assurance of test method development, validation, approval, and transfer for veterinary testing laboratories.

    PubMed

    Wiegers, Ann L

    2003-07-01

Third-party accreditation is a valuable tool to demonstrate a laboratory's competence to conduct testing. Accreditation, internationally and in the United States, has been discussed previously. However, accreditation is only one part of establishing data credibility. A validated test method is the first component of a valid measurement system. Validation is defined as confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled. The international and national standard ISO/IEC 17025 recognizes the importance of validated methods and requires that laboratory-developed methods or methods adopted by the laboratory be appropriate for the intended use. Validated methods are therefore required and their use agreed to by the client (i.e., end users of the test results such as veterinarians, animal health programs, and owners). ISO/IEC 17025 also requires that the introduction of methods developed by the laboratory for its own use be a planned activity conducted by qualified personnel with adequate resources. This article discusses considerations and recommendations for the conduct of veterinary diagnostic test method development, validation, evaluation, approval, and transfer to the user laboratory in the ISO/IEC 17025 environment. These recommendations are based on those of nationally and internationally accepted standards and guidelines, as well as those of reputable and experienced technical bodies. They are also based on the author's experience in the evaluation of method development and transfer projects, validation data, and the implementation of quality management systems in the area of method development.

  11. The Bland-Altman Method Should Not Be Used in Regression Cross-Validation Studies

    ERIC Educational Resources Information Center

    O'Connor, Daniel P.; Mahar, Matthew T.; Laughlin, Mitzi S.; Jackson, Andrew S.

    2011-01-01

    The purpose of this study was to demonstrate the bias in the Bland-Altman (BA) limits of agreement method when it is used to validate regression models. Data from 1,158 men were used to develop three regression equations to estimate maximum oxygen uptake (R[superscript 2] = 0.40, 0.61, and 0.82, respectively). The equations were evaluated in a…
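For reference, the Bland-Altman limits of agreement are the mean paired difference (bias) plus or minus 1.96 standard deviations of the differences; the bias the authors demonstrate arises when those "differences" come from regression predictions, whose errors are correlated with the criterion. A minimal stdlib sketch of the computation itself, on hypothetical paired values:

```python
import math

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement for paired measurements a and b."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    bias = sum(d) / n
    sd = math.sqrt(sum((x - bias) ** 2 for x in d) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical measured vs. regression-predicted VO2max values (ml/kg/min)
measured  = [42.0, 55.1, 38.3, 61.0, 47.5, 50.2]
predicted = [43.1, 53.8, 39.0, 59.5, 48.6, 49.0]
bias, lo, hi = bland_altman_limits(measured, predicted)
```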

  12. Development and Testing of a Method for Validating Chemical Inactivation of Ebola Virus.

    PubMed

    Alfson, Kendra J; Griffiths, Anthony

    2018-03-13

Complete inactivation of infectious Ebola virus (EBOV) is required before a sample may be removed from a Biosafety Level 4 laboratory. The United States Federal Select Agent Program regulations require that procedures used to demonstrate chemical inactivation be validated in-house to confirm complete inactivation. The objective of this study was to develop a method for validating chemical inactivation of EBOV and then demonstrate the effectiveness of several commonly used inactivation methods. Samples containing infectious EBOV (Zaire ebolavirus) in different matrices were treated, and the sample was diluted to limit the cytopathic effect of the inactivant. The presence of infectious virus was determined by assessing the cytopathic effect in Vero E6 cells. Crucially, this method did not result in a loss of infectivity in control samples, and we were able to detect fewer than five infectious units of EBOV (Zaire ebolavirus). We found that TRIzol LS reagent and RNA-Bee inactivated EBOV in serum; TRIzol LS reagent inactivated EBOV in clarified cell culture media; TRIzol reagent inactivated EBOV in tissue and infected Vero E6 cells; 10% neutral buffered formalin inactivated EBOV in tissue; and osmium tetroxide vapors inactivated EBOV on transmission electron microscopy grids. The methods described herein are easily performed and can be adapted to validate inactivation of viruses in various matrices and by various chemical methods.

  13. Towards a conceptual framework demonstrating the effectiveness of audiovisual patient descriptions (patient video cases): a review of the current literature

    PubMed Central

    2012-01-01

Background Technological advances have enabled the widespread use of video cases via web-streaming and online download as an educational medium. The use of real subjects to demonstrate acute pathology should aid the education of health care professionals. However, the methodology by which this effect may be tested is not clear. Methods We undertook a literature review of major databases, identified articles relevant to using patient video cases as educational interventions, extracted the methodologies used and assessed these methods for internal and construct validity. Results A review of 2532 abstracts revealed 23 studies meeting the inclusion criteria and a final review of 18 of relevance. Medical students were the most commonly studied group (10 articles), with a spread of learner satisfaction, knowledge and behaviour tested. Only two of the studies fulfilled defined criteria on achieving internal and construct validity. The heterogeneity of the articles meant it was not possible to perform any meta-analysis. Conclusions Previous studies have not clearly specified which facet of training or educational outcome they aimed to explore, and had poor internal and construct validity. Future research should aim to validate a particular outcome measure, preferably by reproducing previous work rather than adopting new methods. In particular, cognitive processing enhancement, demonstrated in a number of the medical student studies, should be tested at a postgraduate level. PMID:23256787

  14. Multiple Versus Single Set Validation of Multivariate Models to Avoid Mistakes.

    PubMed

    Harrington, Peter de Boves

    2018-01-02

Validation of multivariate models is of current importance for a wide range of chemical applications. Although important, it is often neglected. The common practice is to use a single external validation set for evaluation. This approach is deficient and may mislead investigators with results that are specific to the single validation set of data. In addition, no statistics are available regarding the precision of a derived figure of merit (FOM). A statistical approach using bootstrapped Latin partitions is advocated. This validation method makes efficient use of the data because each object is used once for validation. It was reviewed a decade earlier, but primarily for the optimization of chemometric models; this review presents the reasons it should be used for generalized statistical validation. Average FOMs with confidence intervals are reported, and powerful matched-sample statistics may be applied for comparing models and methods. Examples demonstrate the problems with single validation sets.
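In a Latin partition scheme each object appears in exactly one validation set per partition set, and the whole partitioning is repeated with fresh random assignments (the bootstrap) so the figure of merit can be averaged with confidence intervals. A stdlib sketch of the partitioning machinery only (no chemometric model is fitted here):

```python
import random

def latin_partitions(n_objects, n_parts, seed):
    """Split objects into n_parts disjoint validation sets whose union is
    the whole data set, so each object is validated exactly once."""
    idx = list(range(n_objects))
    random.Random(seed).shuffle(idx)
    return [idx[i::n_parts] for i in range(n_parts)]

def bootstrapped_latin_partitions(n_objects, n_parts, n_boot):
    """Repeat the Latin partitioning n_boot times with fresh shuffles; a
    figure of merit computed per repetition can then be averaged and
    given a confidence interval."""
    return [latin_partitions(n_objects, n_parts, seed) for seed in range(n_boot)]

runs = bootstrapped_latin_partitions(n_objects=30, n_parts=3, n_boot=10)
for parts in runs:
    # each repetition covers every object exactly once
    assert sorted(i for p in parts for i in p) == list(range(30))
```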

  15. Validation of the thermal challenge problem using Bayesian Belief Networks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McFarland, John; Swiler, Laura Painton

The thermal challenge problem has been developed at Sandia National Laboratories as a testbed for demonstrating various types of validation approaches and prediction methods. This report discusses one particular methodology to assess the validity of a computational model given experimental data. This methodology is based on Bayesian Belief Networks (BBNs) and can incorporate uncertainty in experimental measurements, in physical quantities, and in the model itself. The approach uses the prior and posterior distributions of model output to compute a validation metric based on Bayesian hypothesis testing (a Bayes' factor). This report discusses various aspects of the BBN, specifically in the context of the thermal challenge problem. A BBN is developed for a given set of experimental data in a particular experimental configuration. The development of the BBN and the method for "solving" the BBN to develop the posterior distribution of model output through Markov chain Monte Carlo sampling is discussed in detail. The use of the BBN to compute a Bayes' factor is demonstrated.
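The Bayes' factor used as the validation metric is, at its core, a ratio of the probability of the observed data under competing hypotheses. An illustrative stdlib sketch for two simple Gaussian hypotheses (this is the bare metric only, not the report's BBN machinery; all values are hypothetical):

```python
import math

def gaussian_likelihood(data, mu, sigma):
    """Product of normal densities N(mu, sigma) over the data."""
    c = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return math.prod(c * math.exp(-0.5 * ((x - mu) / sigma) ** 2) for x in data)

def bayes_factor(data, mu0, mu1, sigma):
    """B10 = P(data | H1) / P(data | H0); B10 > 1 favors H1."""
    return gaussian_likelihood(data, mu1, sigma) / gaussian_likelihood(data, mu0, sigma)

# Hypothetical measurements clustered near the model prediction mu1 = 5.0,
# compared against a competing hypothesis mu0 = 6.0
data = [4.9, 5.1, 5.0, 4.8, 5.2]
b10 = bayes_factor(data, mu0=6.0, mu1=5.0, sigma=0.5)
```

A large B10 indicates the data strongly favor the model prediction over the alternative; in the report this ratio is computed from the prior and posterior distributions of model output rather than from point hypotheses.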

  16. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context

    PubMed Central

    Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan

    2012-01-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720

  17. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods.

    PubMed

    Liu, Boquan; Polce, Evan; Sprott, Julien C; Jiang, Jack J

    2018-05-17

    The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100 Monte Carlo experiments were applied to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos. A diffusive behavior detection-based chaos level test was used to investigate the performances of different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions. Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study. The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.
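Jitter and shimmer, two of the four measures evaluated, are cycle-to-cycle perturbation statistics: the mean absolute difference between consecutive cycles divided by the mean value, applied to periods (jitter) or amplitudes (shimmer). A minimal sketch on hypothetical cycle data (this is the standard relative definition, not necessarily the exact variant used in the study):

```python
def relative_perturbation(values):
    """Mean absolute difference between consecutive cycles, divided by the
    mean value: jitter when applied to periods, shimmer for amplitudes."""
    diffs = [abs(a - b) for a, b in zip(values, values[1:])]
    return (sum(diffs) / len(diffs)) / (sum(values) / len(values))

# Hypothetical glottal cycle periods (s) and peak amplitudes
periods    = [0.0100, 0.0102, 0.0099, 0.0101, 0.0100]
amplitudes = [1.00, 0.98, 1.01, 0.99, 1.00]
jitter = relative_perturbation(periods)
shimmer = relative_perturbation(amplitudes)
```

Because both statistics are ratios of consecutive-cycle variation to the mean, added noise inflates them directly, which is why their usefulness degrades as the chaos level of the signal rises.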

  18. On the accuracy of aerosol photoacoustic spectrometer calibrations using absorption by ozone

    NASA Astrophysics Data System (ADS)

    Davies, Nicholas W.; Cotterell, Michael I.; Fox, Cathryn; Szpek, Kate; Haywood, Jim M.; Langridge, Justin M.

    2018-04-01

    In recent years, photoacoustic spectroscopy has emerged as an invaluable tool for the accurate measurement of light absorption by atmospheric aerosol. Photoacoustic instruments require calibration, which can be achieved by measuring the photoacoustic signal generated by known quantities of gaseous ozone. Recent work has questioned the validity of this approach at short visible wavelengths (404 nm), indicating systematic calibration errors of the order of a factor of 2. We revisit this result and test the validity of the ozone calibration method using a suite of multipass photoacoustic cells operating at wavelengths 405, 514 and 658 nm. Using aerosolised nigrosin with mobility-selected diameters in the range 250-425 nm, we demonstrate excellent agreement between measured and modelled ensemble absorption cross sections at all wavelengths, thus demonstrating the validity of the ozone-based calibration method for aerosol photoacoustic spectroscopy at visible wavelengths.

  19. Analysis of Ethanolamines: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS888

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Vu, A; Koester, C

The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method titled 'Analysis of Diethanolamine, Triethanolamine, n-Methyldiethanolamine, and n-Ethyldiethanolamine in Water by Single Reaction Monitoring Liquid Chromatography/Tandem Mass Spectrometry (LC/MS/MS): EPA Method MS888'. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in 'EPA Method MS888' for analysis of the listed ethanolamines in aqueous samples. The data gathered from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of 'EPA Method MS888' can be determined.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundstrom, Blake; Chakraborty, Sudipta; Lauss, Georg

    This paper presents a concise description of state-of-the-art real-time simulation-based testing methods and demonstrates how they can be used independently and/or in combination as an integrated development and validation approach for smart grid DERs and systems. A three-part case study demonstrating the application of this integrated approach at the different stages of development and validation of a system-integrated smart photovoltaic (PV) inverter is also presented. Laboratory testing results and perspectives from two international research laboratories are included in the case study.

  1. Identifying and classifying hyperostosis frontalis interna via computerized tomography.

    PubMed

    May, Hila; Peled, Nathan; Dar, Gali; Hay, Ori; Abbas, Janan; Masharawi, Youssef; Hershkovitz, Israel

    2010-12-01

    The aim of this study was to recognize the radiological characteristics of hyperostosis frontalis interna (HFI) and to establish a valid and reliable method for its identification and classification. A reliability test was carried out on 27 individuals who had undergone a head computerized tomography (CT) scan. Intra-observer reliability was obtained by examining the images three times, by the same researcher, with a 2-week interval between each sample ranking. The inter-observer test was performed by three independent researchers. A validity test was carried out using two methods for identifying and classifying HFI: 46 cadaver skullcaps were ranked twice via computerized tomography scans and then by direct observation. Reliability and validity were calculated using Kappa test (SPSS 15.0). Reliability tests of ranking HFI via CT scans demonstrated good results (K > 0.7). As for validity, a very good consensus was obtained between the CT and direct observation, when moderate and advanced types of HFI were present (K = 0.82). The suggested classification method for HFI, using CT, demonstrated a sensitivity of 84%, specificity of 90.5%, and positive predictive value of 91.3%. In conclusion, volume rendering is a reliable and valid tool for identifying HFI. The suggested three-scale classification is most suitable for radiological diagnosis of the phenomena. Considering the increasing awareness of HFI as an early indicator of a developing malady, this study may assist radiologists in identifying and classifying the phenomena.
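The agreement and diagnostic figures above can all be derived from a 2x2 confusion matrix. A stdlib sketch of Cohen's kappa plus sensitivity, specificity, and positive predictive value, with hypothetical counts chosen to reproduce figures of the same order as those reported (the study's actual tabulation is not given in the abstract):

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for two raters on a binary classification."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                               # observed agreement
    pe = ((tp + fn) / n) * ((tp + fp) / n) \
       + ((fp + tn) / n) * ((fn + tn) / n)           # chance agreement
    return (po - pe) / (1 - pe)

def diagnostics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value."""
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

# Hypothetical CT-vs-direct-observation counts for HFI presence
tp, fp, fn, tn = 21, 2, 4, 19
kappa = cohens_kappa(tp, fp, fn, tn)
sens, spec, ppv = diagnostics(tp, fp, fn, tn)
```

With these counts the sketch yields sensitivity 0.84, specificity about 0.905, and PPV about 0.913, matching the magnitudes quoted in the abstract.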

  2. Evaluating the dynamic response of in-flight thrust calculation techniques during throttle transients

    NASA Technical Reports Server (NTRS)

    Ray, Ronald J.

    1994-01-01

    New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency domain analysis method, were also developed and evaluated. They provide quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed the test methods as valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.

  3. [Nursing on the Web: the creation and validation process of a web site on coronary artery disease].

    PubMed

    Marques, Isaac Rosa; Marin, Heimar de Fátima

    2002-01-01

The World Wide Web is an important source of health information. A challenge for the Brazilian Nursing Informatics area is to use its potential to promote health education. This paper presents the model used to develop and validate an educational Web site, named CardioSite, whose subject is coronary heart disease. Its creation followed a method with phases of conceptual modeling, development, implementation, and evaluation. In the evaluation phase, validation was performed through an online panel of informatics and health experts. The results demonstrated that the information was reliable and valid. Given that no official national systems are available for this type of evaluation, this model demonstrated effectiveness in assessing the quality of the Web site's content.

  4. Validation of an Empathy Scale in Pharmacy and Nursing Students

    PubMed Central

    Chen, Aleda M. H.; Yehle, Karen S.; Plake, Kimberly S.

    2013-01-01

    Objective. To validate an empathy scale to measure empathy in pharmacy and nursing students. Methods. A 15-item instrument comprising the cognitive and affective empathy domains was created. Each item was rated on a 7-point Likert scale ranging from strongly disagree to strongly agree. Concurrent validity was assessed against the Jefferson Scale of Empathy – Health Professional Students (JSE-HPS). Results. Analysis of data from 216 students (pharmacy, N=158; nursing, N=58) showed that scores on the empathy scale were positively associated with JSE-HPS scores (p<0.001). Factor analysis confirmed that 14 of the 15 items were significantly associated with their respective domain, but the overall instrument had limited goodness of fit. Conclusions. Results of this study demonstrate the reliability and validity of a new scale for evaluating student empathy. Further testing of the scale at other universities is needed to establish validity. PMID:23788805
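Concurrent validity of the kind reported here is typically quantified as the correlation between scores on the new scale and an established instrument (here, the JSE-HPS). A minimal sketch; the paired totals below are hypothetical, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired scores, e.g. a new empathy
    scale vs. an established instrument such as the JSE-HPS."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired score totals for five students (illustration only)
new_scale = [78, 85, 62, 90, 70]
jse_hps   = [95, 104, 80, 112, 88]
r = pearson_r(new_scale, jse_hps)
```

A strong positive correlation between the two instruments is the evidence of concurrent validity reported in abstracts like this one.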

  5. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    PubMed

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods are included in the revised Technical Report No. 33 from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained with a rapid microbiological method system being evaluated for water bioburden testing. The results demonstrate that the statistical methods described in the PDA Technical Report No. 33 chapter can all be successfully applied to the rapid method's data sets and give the same interpretation of equivalence to the standard method. The rapid microbiological method was, in general, able to pass the requirements of PDA Technical Report No. 33, though the study shows that occasional outlying results can occur and that caution is warranted when applying statistical methods to low average colony-forming-unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the intended purpose. © PDA, Inc. 2015.
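The equivalence testing mentioned above can be sketched as a two-one-sided-tests (TOST) style check on paired log-transformed counts. The equivalence margin, critical t value, and CFU data below are illustrative assumptions, not values mandated by Technical Report No. 33:

```python
import math

def equivalence_check(rapid, standard, margin=0.3, t_crit=1.833):
    """TOST-style equivalence check on paired log10-transformed counts.
    margin (±0.3 log10) and t_crit (one-sided 0.05, df = 9 for n = 10)
    are illustrative choices, not TR33-mandated values."""
    diffs = [math.log10(r + 1) - math.log10(s + 1)       # +1 guards zero counts
             for r, s in zip(rapid, standard)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    half = t_crit * sd / math.sqrt(n)                    # 90% CI half-width
    # Equivalent only if the whole CI lies inside (-margin, +margin)
    return (mean - half > -margin) and (mean + half < margin)

rapid    = [52, 48, 61, 55, 47, 58, 50, 53, 49, 60]      # hypothetical CFU
standard = [50, 51, 58, 54, 49, 55, 52, 50, 51, 57]
```

The log transform reflects the roughly multiplicative behavior of microbial counts; the caution about low average CFU values arises because the +1 offset and count discreteness distort the scale as counts approach zero.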

  6. Development of a quantitative multi-compound method for the detection of 14 nitrogen-rich adulterants by LC-MS/MS in food materials.

    PubMed

    Frank, Nancy; Bessaire, Thomas; Tarres, Adrienne; Goyon, Alexandre; Delatour, Thierry

    2017-11-01

    The increasing number of food frauds that use exogenous nitrogen-rich adulterants to artificially raise the apparent protein content for economic gain has demonstrated the need for a robust analytical methodology. Such a method should be applicable to quality-control operations covering a wide range of analyte concentrations, so that it can measure both the high levels typical of adulteration and the low levels arising from contamination. This paper describes an LC-MS/MS method covering 14 nitrogen-rich adulterants that uses a simple and fast sample preparation based on dilution and clean-up by dispersive SPE. Quantification is carried out by isotope dilution, reaching LOQs of 0.05-0.20 mg/kg in a broad range of food matrices (infant formula, liquid milk, dairy ingredients, high-protein meals, cereals, infant cereals, and meat/fish powders). Validation for seven commodity groups was performed according to SANCO 12571/2013, with satisfactory results demonstrating the method's fitness for purpose across the validated range at contamination levels. Method ruggedness was further assessed by transferring the method to another laboratory devoted to routine quality-control testing. Beyond the method description, emphasis is placed on the challenges encountered during method development and validation; these are discussed in detail and solutions are provided.

  7. Analysis of Carbamate Pesticides: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS666

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Koester, C

    The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method for analysis of aldicarb, bromadiolone, carbofuran, oxamyl, and methomyl in water by high performance liquid chromatography tandem mass spectrometry (HPLC-MS/MS), titled Method EPA MS666. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in MS666 for analysis of carbamate pesticides in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of Method EPA MS666 can be determined.

  8. Approach to method development and validation in capillary electrophoresis for enantiomeric purity testing of active basic pharmaceutical ingredients.

    PubMed

    Sokoliess, Torsten; Köller, Gerhard

    2005-06-01

    A chiral capillary electrophoresis system allowing the determination of the enantiomeric purity of an investigational new drug was developed using a generic method development approach for basic analytes. The method was optimized in terms of type and concentration of both cyclodextrin (CD) and electrolyte, buffer pH, temperature, voltage, and rinsing procedure. Optimal chiral separation of the analyte was obtained using an electrolyte with 2.5% carboxymethyl-beta-CD in 25 mM NaH2PO4 (pH 4.0). Interchanging the inlet and outlet vials after each run improved the method's precision. To assure the method's suitability for the control of enantiomeric impurities in pharmaceutical quality control, its specificity, linearity, precision, accuracy, and robustness were validated according to the requirements of the International Conference on Harmonization. The usefulness of our generic method development approach for the validation of robustness was demonstrated.

  9. International ring trial for the validation of an event-specific Golden Rice 2 quantitative real-time polymerase chain reaction method.

    PubMed

    Jacchia, Sara; Nardini, Elena; Bassani, Niccolò; Savini, Christian; Shim, Jung-Hyun; Trijatmiko, Kurniawan; Kreysa, Joachim; Mazzara, Marco

    2015-05-27

    This article describes the international validation of the quantitative real-time polymerase chain reaction (PCR) detection method for Golden Rice 2. The method consists of a taxon-specific assay amplifying a fragment of the rice Phospholipase D α2 gene, and an event-specific assay designed on the 3' junction between the transgenic insert and plant DNA. We validated the two assays independently, with absolute quantification, and in combination, with relative quantification, on DNA samples prepared in haploid genome equivalents. We assessed trueness, precision, efficiency, and linearity of the two assays, and the results demonstrate that the assays, both individually and in combination, fulfill European and international requirements for genetically modified organism (GMO) testing methods within the dynamic range tested. The homogeneity of the collaborative trial results between Europe and Asia is a good indicator of the robustness of the method.
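Relative quantification of this kind divides the event-specific copy number by the taxon-specific copy number, each read off a standard curve relating Ct to log10 copies. A sketch with an illustrative curve; the intercept and slope are placeholders, not the validated Golden Rice 2 parameters:

```python
def copies_from_ct(ct, intercept=40.0, slope=-3.32):
    """Convert a Ct value to a copy number via a standard curve
    Ct = intercept + slope * log10(copies). The intercept and slope
    here are illustrative, not the validated assay's curve."""
    return 10 ** ((ct - intercept) / slope)

def gm_percent(ct_event, ct_taxon):
    """GM content as event-specific copies relative to taxon-specific
    (endogenous reference gene) copies, expressed as a percentage."""
    return 100.0 * copies_from_ct(ct_event) / copies_from_ct(ct_taxon)
```

With a slope of -3.32 (100% PCR efficiency), a 3.32-cycle difference between the two assays corresponds to exactly a tenfold copy-number ratio, i.e. 10% GM content.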

  10. Cosmetics Europe multi-laboratory pre-validation of the SkinEthic™ reconstituted human corneal epithelium test method for the prediction of eye irritation.

    PubMed

    Alépée, N; Bessou-Touya, S; Cotovio, J; de Smedt, A; de Wever, B; Faller, C; Jones, P; Le Varlet, B; Marrec-Fairley, M; Pfannenbecker, U; Tailhardat, M; van Goethem, F; McNamee, P

    2013-08-01

    Cosmetics Europe, The Personal Care Association, known as Colipa before 2012, conducted a program of technology transfer and assessment of Within/Between Laboratory (WLV/BLV) reproducibility of the SkinEthic™ Reconstituted Human Corneal Epithelium (HCE) as one of two human reconstructed tissue eye irritation test methods. The SkinEthic™ HCE test method involves two exposure time treatment procedures - one for short time exposure (10 min - SE) and the other for long time exposure (60 min - LE) of tissues to test substance. This paper describes pre-validation studies of the SkinEthic™ HCE test method (SE and LE protocols) as well as the Eye Peptide Reactivity Assay (EPRA). In the SE WLV study, 30 substances were evaluated. A consistent outcome with respect to viability measurement across all runs was observed with all substances showing an SD of less than 18%. In the LE WLV study, 44 out of 45 substances were consistently classified. These data demonstrated a high level of reproducibility within laboratory for both the SE and LE treatment procedures. For the LE BLV, 19 out of 20 substances were consistently classified between the three laboratories, again demonstrating a high level of reproducibility between laboratories. The results for EPRA WLV and BLV studies demonstrated that all substances analysed were categorised similarly and that the method is reproducible. The SkinEthic™ HCE test method entered into the experimental phase of a formal ECVAM validation program in 2010. Copyright © 2013. Published by Elsevier Ltd.

  11. Extension and Validation of a Hybrid Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 2

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.; Shivarama, Ravishankar

    2004-01-01

    The hybrid particle-finite element method of Fahrenthold and Horban, developed for the simulation of hypervelocity impact problems, has been extended to include new formulations of the particle-element kinematics, additional constitutive models, and an improved numerical implementation. The extended formulation has been validated in three dimensional simulations of published impact experiments. The test cases demonstrate good agreement with experiment, good parallel speedup, and numerical convergence of the simulation results.

  12. An integrated bioanalytical method development and validation approach: case studies.

    PubMed

    Xue, Y-J; Melo, Brian; Vallejo, Martha; Zhao, Yuwen; Tang, Lina; Chen, Yuan-Shek; Keller, Karin M

    2012-10-01

    We proposed an integrated bioanalytical method development and validation approach: (1) method screening based on the analyte's physicochemical properties and metabolism information to determine the most appropriate extraction/analysis conditions; (2) preliminary stability evaluation using both quality control and incurred samples to establish sample collection, storage, and processing conditions; (3) mock validation to examine method accuracy and precision and incurred sample reproducibility; and (4) method validation to confirm the results obtained during method development. This integrated approach was applied to the determination of compound I in rat plasma and compound II in rat and dog plasma. The effectiveness of the approach was demonstrated by the superior quality of three method validations: (1) a zero run failure rate; (2) >93% of quality control results within 10% of nominal values; and (3) 99% of incurred samples within 9.2% of the original values. In addition, rat and dog plasma methods for compound II were successfully applied to analyze more than 900 plasma samples obtained from Investigational New Drug (IND) toxicology studies in rats and dogs with near perfect results: (1) a zero run failure rate; (2) excellent accuracy and precision for standards and quality controls; and (3) 98% of incurred samples within 15% of the original values. Copyright © 2011 John Wiley & Sons, Ltd.

  13. Nuclear Forensics Applications of Principal Component Analysis on Micro X-ray Fluorescence Images

    DTIC Science & Technology

    analysis on quantified micro X-ray fluorescence intensity values. This method is then applied to address goals of nuclear forensics. The first ... researchers in the development and validation of nuclear forensics methods. A method for determining material homogeneity is developed and demonstrated

  14. Reliability and validity of the test of incremental respiratory endurance measures of inspiratory muscle performance in COPD

    PubMed Central

    Formiga, Magno F; Roach, Kathryn E; Vital, Isabel; Urdaneta, Gisel; Balestrini, Kira; Calderon-Candelario, Rafael A

    2018-01-01

    Purpose The Test of Incremental Respiratory Endurance (TIRE) provides a comprehensive assessment of inspiratory muscle performance by measuring maximal inspiratory pressure (MIP) over time. The integration of MIP over inspiratory duration (ID) provides the sustained maximal inspiratory pressure (SMIP). Evidence on the reliability and validity of these measurements in COPD is not currently available. Therefore, we assessed the reliability, responsiveness and construct validity of the TIRE measures of inspiratory muscle performance in subjects with COPD. Patients and methods Test–retest reliability, known-groups and convergent validity assessments were implemented simultaneously in 81 male subjects with mild to very severe COPD. TIRE measures were obtained using the portable PrO2 device, following standard guidelines. Results All TIRE measures were found to be highly reliable, with SMIP demonstrating the strongest test–retest reliability with a nearly perfect intraclass correlation coefficient (ICC) of 0.99, while MIP and ID clustered closely together behind SMIP with ICC values of about 0.97. Our findings also demonstrated known-groups validity of all TIRE measures, with SMIP and ID yielding larger effect sizes when compared to MIP in distinguishing between subjects of different COPD status. Finally, our analyses confirmed convergent validity for both SMIP and ID, but not MIP. Conclusion The TIRE measures of MIP, SMIP and ID have excellent test–retest reliability and demonstrated known-groups validity in subjects with COPD. SMIP and ID also demonstrated evidence of moderate convergent validity and appear to be more stable measures in this patient population than the traditional MIP. PMID:29805255
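The test-retest statistic reported here, the two-way random-effects ICC for absolute agreement (often written ICC(2,1)), can be computed directly from an ANOVA mean-squares decomposition. A self-contained sketch, assuming one row of repeated measurements per subject (the demo data are hypothetical):

```python
def icc_2_1(data):
    """Two-way random-effects, absolute-agreement, single-measure ICC.
    data: list of subjects, each a list of k repeated measurements."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between sessions
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Example: perfectly reproduced measurements give an ICC of exactly 1.0
perfect = icc_2_1([[100, 100], [80, 80], [120, 120]])
```

Values near 0.99, as reported for SMIP, mean that almost all variance comes from differences between subjects rather than between test sessions.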

  15. Performance Validity Testing in Neuropsychology: Methods for Measurement Development and Maximizing Diagnostic Accuracy.

    PubMed

    Wodushek, Thomas R; Greher, Michael R

    2017-05-01

    In the first column in this 2-part series, Performance Validity Testing in Neuropsychology: Scientific Basis and Clinical Application-A Brief Review, the authors introduced performance validity tests (PVTs) and their function, provided a justification for why they are necessary, traced their ongoing endorsement by neuropsychological organizations, and described how they are used and interpreted by ever increasing numbers of clinical neuropsychologists. To enhance readers' understanding of these measures, this second column briefly describes common detection strategies used in PVTs as well as the typical methods used to validate new PVTs and determine cut scores for valid/invalid determinations. We provide a discussion of the latest research demonstrating how neuropsychologists can combine multiple PVTs in a single battery to improve sensitivity/specificity to invalid responding. Finally, we discuss future directions for the research and application of PVTs.

  16. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    PubMed

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    The goal was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We conducted a comprehensive survey of purity methods and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results, utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.

  17. Construction of measurement uncertainty profiles for quantitative analysis of genetically modified organisms based on interlaboratory validation data.

    PubMed

    Macarthur, Roy; Feinberg, Max; Bertheau, Yves

    2010-01-01

    A method is presented for estimating the size of uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods collected through a number of interlaboratory studies followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with European Network of GM Laboratories and the European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness for purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be < 0.45% to demonstrate compliance, and > 1.8% to demonstrate noncompliance with a labeling threshold of 0.9%.
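The enforcement numbers quoted above follow directly from the 50-200% uncertainty range. A sketch of the resulting decision rule, where the multiplicative factor of 2 encodes that range (values are GM content in percent):

```python
def labeling_decision(measured_pct, threshold=0.9, factor=2.0):
    """Decision rule under a multiplicative (log-normal) measurement
    uncertainty: the true value is expected to lie between measured/factor
    and measured*factor (the 50-200% range from the interlaboratory data)."""
    lo, hi = measured_pct / factor, measured_pct * factor
    if hi < threshold:
        return "compliant"       # even the upper bound is below 0.9%
    if lo > threshold:
        return "noncompliant"    # even the lower bound exceeds 0.9%
    return "inconclusive"
```

This reproduces the abstract's figures: a result must fall below 0.45% (so that twice the result is still under 0.9%) to demonstrate compliance, and above 1.8% (so that half the result still exceeds 0.9%) to demonstrate noncompliance.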

  18. Working towards accreditation by the International Standards Organization 15189 Standard: how to validate an in-house developed method an example of lead determination in whole blood by electrothermal atomic absorption spectrometry.

    PubMed

    Garcia Hejl, Carine; Ramirez, Jose Manuel; Vest, Philippe; Chianea, Denis; Renard, Christophe

    2014-09-01

    Laboratories working towards accreditation by the International Standards Organization (ISO) 15189 standard are required to demonstrate the validity of their analytical methods. The differing guidelines set by various accreditation organizations make it difficult to provide objective evidence that an in-house method is fit for the intended purpose, and the required performance characteristics tests and acceptance criteria are not always detailed. The laboratory must therefore choose the most suitable validation protocol and set its own acceptance criteria. We propose a validation protocol to evaluate the performance of an in-house method, using as an example the detection and quantification of lead in whole blood by electrothermal atomic absorption spectrometry (ETAAS). The fundamental parameters tested were selectivity, calibration model, precision, accuracy (and uncertainty of measurement), contamination, sample stability, reference interval, and analytical interference. The protocol was applied successfully: in particular, the method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics.

  19. Generator Dynamic Model Validation and Parameter Calibration Using Phasor Measurements at the Point of Connection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry

    2013-05-01

    Disturbance data recorded by phasor measurement units (PMUs) offer an opportunity to improve the integrity of dynamic models. However, manually tuning parameters through play-back of events demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of the EKF with parameter calibration is discussed, and case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective and complementary to traditional equipment testing for improving dynamic model quality.
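Joint state-and-parameter calibration with an EKF augments the state vector with the unknown parameter, so that the filter's innovations drive the parameter toward values consistent with the recorded measurements. The toy stand-in below uses a scalar first-order model rather than a generator model; all numbers are illustrative, not from the paper:

```python
import numpy as np

def ekf_calibrate(ys, us, dt, q=1e-6, r=1e-4, a0=0.5):
    """Augmented-state EKF: jointly estimate the state x and an unknown
    parameter a of the toy model x' = -a*x + u from noisy measurements ys.
    A stand-in for PMU-based model calibration, not the paper's formulation."""
    z = np.array([ys[0], a0])          # augmented state [x, a]
    P = np.diag([r, 1.0])              # initial covariance (uncertain parameter)
    Q = np.diag([q, q])                # process noise, incl. parameter random walk
    H = np.array([[1.0, 0.0]])         # we measure x only
    for y, u in zip(ys[1:], us[:-1]):
        x, a = z
        z = np.array([x + dt * (-a * x + u), a])      # predict (Euler step)
        F = np.array([[1.0 - dt * a, -dt * x],        # Jacobian of the model
                      [0.0,          1.0]])
        P = F @ P @ F.T + Q
        S = (H @ P @ H.T)[0, 0] + r                   # innovation variance
        K = (P @ H.T / S).ravel()                     # Kalman gain
        z = z + K * (y - z[0])                        # correct with innovation
        P = P - np.outer(K, H @ P)
    return z[1]                                       # calibrated parameter

# Demo: simulate the true system (a = 2.0) with a switching input, add
# measurement noise, and recover the parameter from a poor initial guess.
rng = np.random.default_rng(0)
dt, n, a_true = 0.05, 2000, 2.0
us = np.array([1.0 if (k // 100) % 2 == 0 else 0.0 for k in range(n)])
xs = np.zeros(n)
for k in range(1, n):
    xs[k] = xs[k - 1] + dt * (-a_true * xs[k - 1] + us[k - 1])
ys = xs + rng.normal(0.0, 0.01, n)
a_est = ekf_calibrate(ys, us, dt)
```

The switching input plays the role of a recorded disturbance: without persistent excitation, the innovations carry no information about the parameter and the estimate cannot move.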

  20. What Do You Think You Are Measuring? A Mixed-Methods Procedure for Assessing the Content Validity of Test Items and Theory-Based Scaling

    PubMed Central

    Koller, Ingrid; Levenson, Michael R.; Glück, Judith

    2017-01-01

    The valid measurement of latent constructs is crucial for psychological research. Here, we present a mixed-methods procedure for improving the precision of construct definitions, determining the content validity of items, evaluating the representativeness of items for the target construct, generating test items, and analyzing items on a theoretical basis. To illustrate the mixed-methods content-scaling-structure (CSS) procedure, we analyze the Adult Self-Transcendence Inventory, a self-report measure of wisdom (ASTI, Levenson et al., 2005). A content-validity analysis of the ASTI items was used as the basis of psychometric analyses using multidimensional item response models (N = 1215). We found that the new procedure produced important suggestions concerning five subdimensions of the ASTI that were not identifiable using exploratory methods. The study shows that the application of the suggested procedure leads to a deeper understanding of latent constructs. It also demonstrates the advantages of theory-based item analysis. PMID:28270777

  1. Validation of Clinical Testing for Warfarin Sensitivity

    PubMed Central

    Langley, Michael R.; Booker, Jessica K.; Evans, James P.; McLeod, Howard L.; Weck, Karen E.

    2009-01-01

    Responses to warfarin (Coumadin) anticoagulation therapy are affected by genetic variability in both the CYP2C9 and VKORC1 genes. Validation of pharmacogenetic testing for warfarin responses includes demonstration of analytical validity of testing platforms and of the clinical validity of testing. We compared four platforms for determining the relevant single nucleotide polymorphisms (SNPs) in both CYP2C9 and VKORC1 that are associated with warfarin sensitivity (Third Wave Invader Plus, ParagonDx/Cepheid Smart Cycler, Idaho Technology LightCycler, and AutoGenomics Infiniti). Each method was examined for accuracy, cost, and turnaround time. All genotyping methods demonstrated greater than 95% accuracy for identifying the relevant SNPs (CYP2C9 *2 and *3; VKORC1 −1639 or 1173). The ParagonDx and Idaho Technology assays had the shortest turnaround and hands-on times. The Third Wave assay was readily scalable to higher test volumes but had the longest hands-on time. The AutoGenomics assay interrogated the largest number of SNPs but had the longest turnaround time. Four published warfarin-dosing algorithms (Washington University, UCSF, Louisville, and Newcastle) were compared for accuracy for predicting warfarin dose in a retrospective analysis of a local patient population on long-term, stable warfarin therapy. The predicted doses from both the Washington University and UCSF algorithms demonstrated the best correlation with actual warfarin doses. PMID:19324988

  2. Validation of clinical testing for warfarin sensitivity: comparison of CYP2C9-VKORC1 genotyping assays and warfarin-dosing algorithms.

    PubMed

    Langley, Michael R; Booker, Jessica K; Evans, James P; McLeod, Howard L; Weck, Karen E

    2009-05-01

    Responses to warfarin (Coumadin) anticoagulation therapy are affected by genetic variability in both the CYP2C9 and VKORC1 genes. Validation of pharmacogenetic testing for warfarin responses includes demonstration of analytical validity of testing platforms and of the clinical validity of testing. We compared four platforms for determining the relevant single nucleotide polymorphisms (SNPs) in both CYP2C9 and VKORC1 that are associated with warfarin sensitivity (Third Wave Invader Plus, ParagonDx/Cepheid Smart Cycler, Idaho Technology LightCycler, and AutoGenomics Infiniti). Each method was examined for accuracy, cost, and turnaround time. All genotyping methods demonstrated greater than 95% accuracy for identifying the relevant SNPs (CYP2C9 *2 and *3; VKORC1 -1639 or 1173). The ParagonDx and Idaho Technology assays had the shortest turnaround and hands-on times. The Third Wave assay was readily scalable to higher test volumes but had the longest hands-on time. The AutoGenomics assay interrogated the largest number of SNPs but had the longest turnaround time. Four published warfarin-dosing algorithms (Washington University, UCSF, Louisville, and Newcastle) were compared for accuracy for predicting warfarin dose in a retrospective analysis of a local patient population on long-term, stable warfarin therapy. The predicted doses from both the Washington University and UCSF algorithms demonstrated the best correlation with actual warfarin doses.

  3. Validity of Computer Adaptive Tests of Daily Routines for Youth with Spinal Cord Injury

    PubMed Central

    Haley, Stephen M.

    2013-01-01

    Objective: To evaluate the accuracy of computer adaptive tests (CATs) of daily routines for child- and parent-reported outcomes following pediatric spinal cord injury (SCI) and to evaluate the validity of the scales. Methods: One hundred ninety-six daily routine items were administered to 381 youths and 322 parents. Pearson correlations, intraclass correlation coefficients (ICC), and 95% confidence intervals (CI) were calculated to evaluate the accuracy of simulated 5-item, 10-item, and 15-item CATs against the full-item banks and to evaluate concurrent validity. Independent samples t tests and analysis of variance were used to evaluate the ability of the daily routine scales to discriminate between children with tetraplegia and paraplegia and among 5 motor groups. Results: ICC and 95% CI demonstrated that simulated 5-, 10-, and 15-item CATs accurately represented the full-item banks for both child- and parent-report scales. The daily routine scales demonstrated discriminative validity, except between 2 motor groups of children with paraplegia. Concurrent validity of the daily routine scales was demonstrated through significant relationships with the FIM scores. Conclusion: Child- and parent-reported outcomes of daily routines can be obtained using CATs with the same relative precision of a full-item bank. Five-item, 10-item, and 15-item CATs have discriminative and concurrent validity. PMID:23671380

  4. Case Comparison of Response To Aquatic Exercise: Acute versus Chronic Conditions.

    ERIC Educational Resources Information Center

    Mobily, Kenneth E.; Mobily, Paula R.; Lessard, Kerry A.; Berkenpas, Molly S.

    2000-01-01

    Describes the effects of individualized aquatic exercise programs on people with knee impairments. An adolescent athlete with an acute injury demonstrated significant functional improvement. A 33-year-old with arthritis demonstrated only marginal progress. Comparison of cases relative to valid data collection methods and response to aquatic…

  5. Application of computational aerodynamics methods to the design and analysis of transport aircraft

    NASA Technical Reports Server (NTRS)

    Da Costa, A. L.

    1978-01-01

    The application and validation of several computational aerodynamic methods in the design and analysis of transport aircraft are described. An assessment is made of more recently developed methods that solve three-dimensional transonic flow and boundary layers on wings. Capabilities of subsonic aerodynamic methods are demonstrated by several design and analysis efforts, among them the B747 Space Shuttle Carrier Aircraft analysis, nacelle integration for transport aircraft, and winglet optimization. The accuracy and applicability of a new three-dimensional viscous transonic method is demonstrated by comparison of computed results to experimental data.

  6. History and development of the Schmidt-Hunter meta-analysis methods.

    PubMed

    Schmidt, Frank L

    2015-09-01

    In this article, I provide answers to the questions posed by Will Shadish about the history and development of the Schmidt-Hunter methods of meta-analysis. In the 1970s, I headed a research program on personnel selection at the US Office of Personnel Management (OPM). After our research showed that validity studies have low statistical power, OPM felt a need for a better way to demonstrate test validity, especially in light of court cases challenging selection methods. In response, we created our method of meta-analysis (initially called validity generalization). Results showed that most of the variability of validity estimates from study to study was because of sampling error and other research artifacts such as variations in range restriction and measurement error. Corrections for these artifacts in our research and in replications by others showed that the predictive validity of most tests was high and generalizable. This conclusion challenged long-standing beliefs and so provoked resistance, which over time was overcome. The 1982 book that we published extending these methods to research areas beyond personnel selection was positively received and was followed by expanded books in 1990, 2004, and 2014. Today, these methods are being applied in a wide variety of areas. Copyright © 2015 John Wiley & Sons, Ltd.
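The core validity-generalization computation compares the observed variance of study correlations with the variance expected from sampling error alone. A bare-bones sketch with hypothetical study values; it omits the corrections for range restriction and measurement error that the full Schmidt-Hunter method also applies:

```python
def validity_generalization(rs, ns):
    """Bare-bones Schmidt-Hunter analysis: how much of the observed
    variance in study correlations is attributable to sampling error?
    rs: observed validity coefficients; ns: study sample sizes."""
    total_n = sum(ns)
    r_bar = sum(r * n for r, n in zip(rs, ns)) / total_n      # N-weighted mean
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total_n
    n_bar = total_n / len(rs)
    # Expected sampling-error variance for a study of average size
    var_err = (1.0 - r_bar ** 2) ** 2 / (n_bar - 1.0)
    var_true = max(0.0, var_obs - var_err)                    # residual variance
    return r_bar, var_obs, var_err, var_true

studies_r = [0.42, 0.55, 0.48, 0.61, 0.37]   # hypothetical validities
studies_n = [68, 120, 90, 45, 75]            # hypothetical sample sizes
r_bar, var_obs, var_err, var_true = validity_generalization(studies_r, studies_n)
```

In this example the expected sampling-error variance exceeds the observed variance, so the estimated true variance is zero: the situation the article describes, where study-to-study variability is an artifact of small samples rather than evidence against generalizability.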

  7. Analysis of Phosphonic Acids: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS999

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Vu, A; Koester, C

    The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method titled Analysis of Diisopropyl Methylphosphonate, Ethyl Hydrogen Dimethylamidophosphate, Isopropyl Methylphosphonic Acid, Methylphosphonic Acid, and Pinacolyl Methylphosphonic Acid in Water by Multiple Reaction Monitoring Liquid Chromatography/Tandem Mass Spectrometry: EPA Version MS999. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in EPA Method MS999 for analysis of the listed phosphonic acids and surrogates in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of EPA Method MS999 can be determined.

  8. Improving the quality of discrete-choice experiments in health: how can we assess validity and reliability?

    PubMed

    Janssen, Ellen M; Marshall, Deborah A; Hauber, A Brett; Bridges, John F P

    2017-12-01

    The recent endorsement of discrete-choice experiments (DCEs) and other stated-preference methods by regulatory and health technology assessment (HTA) agencies has placed a greater focus on demonstrating the validity and reliability of preference results. Areas covered: We present a practical overview of tests of validity and reliability that have been applied in the health DCE literature and explore other study qualities of DCEs. From the published literature, we identify a variety of methods to assess the validity and reliability of DCEs. We conceptualize these methods to create a conceptual model with four domains: measurement validity, measurement reliability, choice validity, and choice reliability. Each domain consists of three categories that can be assessed using one to four procedures (for a total of 24 tests). We present how these tests have been applied in the literature and direct readers to applications of these tests in the health DCE literature. Based on a stakeholder engagement exercise, we consider the importance of study characteristics beyond traditional concepts of validity and reliability. Expert commentary: We discuss study design considerations to assess the validity and reliability of a DCE, consider limitations to the current application of tests, and discuss future work to consider the quality of DCEs in healthcare.

  9. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

  10. Validation of Bayesian analysis of compartmental kinetic models in medical imaging.

    PubMed

    Sitek, Arkadiusz; Li, Quanzheng; El Fakhri, Georges; Alpert, Nathaniel M

    2016-10-01

    Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. The closed-form of the posterior distribution of kinetic parameters is derived with a hierarchical prior to model the standard deviation of normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of kinetic parameters and covariance of those estimates are determined using the classical non-linear least squares approach. Posteriors obtained using methods proposed in this work are accurate as no significant deviation from the expected shape of the posterior was found (one-sided P>0.08). It is demonstrated that the results obtained by the standard non-linear least-square methods fail to provide accurate estimation of uncertainty for the same data set (P<0.0001). The results of this work validate new methods for a computer simulations of FDG kinetics. Results show that in situations where the classical approach fails in accurate estimation of uncertainty, Bayesian estimation provides an accurate information about the uncertainties in the parameters. Although a particular example of FDG kinetics was used in the paper, the methods can be extended for different pharmaceuticals and imaging modalities. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
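
    The Bayesian approach described above can be illustrated with a deliberately simplified sketch: a mono-exponential tracer model (not the paper's FDG compartmental model) fitted with a random-walk Metropolis sampler. All data, priors, and step sizes below are hypothetical.

```python
import math
import random

random.seed(0)

# Hypothetical mono-exponential tracer model C(t) = A * exp(-k t) with Gaussian noise.
A_true, k_true, sigma = 10.0, 0.5, 0.3
times = [0.5 * i for i in range(1, 21)]
data = [A_true * math.exp(-k_true * t) + random.gauss(0, sigma) for t in times]

def log_posterior(A, k):
    # Flat priors on A > 0 and k > 0; Gaussian likelihood with known sigma.
    if A <= 0 or k <= 0:
        return -math.inf
    return -sum((y - A * math.exp(-k * t)) ** 2 for t, y in zip(times, data)) / (2 * sigma ** 2)

# Random-walk Metropolis sampling over (A, k).
A, k = 5.0, 1.0
lp = log_posterior(A, k)
samples = []
for step in range(20000):
    A_new, k_new = A + random.gauss(0, 0.2), k + random.gauss(0, 0.02)
    lp_new = log_posterior(A_new, k_new)
    if math.log(random.random()) < lp_new - lp:
        A, k, lp = A_new, k_new, lp_new
    if step >= 5000:                      # discard burn-in
        samples.append((A, k))

k_draws = sorted(s[1] for s in samples)
mean_k = sum(k_draws) / len(k_draws)
ci = (k_draws[int(0.025 * len(k_draws))], k_draws[int(0.975 * len(k_draws))])
print(f"posterior mean k = {mean_k:.3f}, 95% credible interval = ({ci[0]:.3f}, {ci[1]:.3f})")
```

    The width of the credible interval is the direct uncertainty estimate that, per the abstract, point estimation by non-linear least squares can fail to capture.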

  11. Evaluation of multivariate calibration models with different pre-processing and processing algorithms for a novel resolution and quantitation of spectrally overlapped quaternary mixture in syrup

    NASA Astrophysics Data System (ADS)

    Moustafa, Azza A.; Hegazy, Maha A.; Mohamed, Dalia; Ali, Omnia

    2016-02-01

    A novel approach for the resolution and quantitation of severely overlapped quaternary mixture of carbinoxamine maleate (CAR), pholcodine (PHL), ephedrine hydrochloride (EPH) and sunset yellow (SUN) in syrup was demonstrated utilizing different spectrophotometric assisted multivariate calibration methods. The applied methods have used different processing and pre-processing algorithms. The proposed methods were partial least squares (PLS), concentration residuals augmented classical least squares (CRACLS), and a novel method; continuous wavelet transforms coupled with partial least squares (CWT-PLS). These methods were applied to a training set in the concentration ranges of 40-100 μg/mL, 40-160 μg/mL, 100-500 μg/mL and 8-24 μg/mL for the four components, respectively. The utilized methods have not required any preliminary separation step or chemical pretreatment. The validity of the methods was evaluated by an external validation set. The selectivity of the developed methods was demonstrated by analyzing the drugs in their combined pharmaceutical formulation without any interference from additives. The obtained results were statistically compared with the official and reported methods where no significant difference was observed regarding both accuracy and precision.
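
    As a minimal illustration of resolving overlapped spectra by direct calibration, the sketch below uses classical least squares (the simplest relative of the CRACLS method mentioned above; the paper's PLS and CWT-PLS models are more involved) on a noise-free two-component mixture. The spectra and concentrations are invented.

```python
def cls_concentrations(mix, s1, s2):
    """Classical least squares for a two-component mixture: solve the 2x2 normal equations."""
    a11 = sum(v * v for v in s1)
    a22 = sum(v * v for v in s2)
    a12 = sum(u * v for u, v in zip(s1, s2))
    b1 = sum(u * v for u, v in zip(s1, mix))
    b2 = sum(u * v for u, v in zip(s2, mix))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det)

# Hypothetical overlapped pure-component spectra sampled at 6 wavelengths.
s1 = [0.10, 0.40, 0.80, 0.60, 0.30, 0.10]
s2 = [0.05, 0.15, 0.50, 0.90, 0.70, 0.20]
mix = [2.0 * a + 3.0 * b for a, b in zip(s1, s2)]   # noise-free mixture, c1=2, c2=3
c1, c2 = cls_concentrations(mix, s1, s2)
print(round(c1, 3), round(c2, 3))
```

    With real, noisy spectra the normal equations become ill-conditioned as overlap increases, which is why factor-based methods such as PLS are preferred in the study.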

  12. STANDARD OPERATING PROCEDURE FOR QUALITY ASSURANCE IN ANALYTICAL CHEMISTRY METHODS DEVELOPMENT

    EPA Science Inventory

    The Environmental Protection Agency's (EPA) Office of Research and Development (ORD) is engaged in the development, demonstration, and validation of new or newly adapted methods of analysis for environmentally related samples. Recognizing that a "one size fits all" approach to qu...

  13. Cluster analysis of molecular simulation trajectories for systems where both conformation and orientation of the sampled states are important.

    PubMed

    Abramyan, Tigran M; Snyder, James A; Thyparambil, Aby A; Stuart, Steven J; Latour, Robert A

    2016-08-05

    Clustering methods have been widely used to group together similar conformational states from molecular simulations of biomolecules in solution. For applications such as the interaction of a protein with a surface, the orientation of the protein relative to the surface is also an important clustering parameter because of its potential effect on adsorbed-state bioactivity. This study presents cluster analysis methods that are specifically designed for systems where both molecular orientation and conformation are important, and the methods are demonstrated using test cases of adsorbed proteins for validation. Additionally, because cluster analysis can be a very subjective process, an objective procedure for identifying both the optimal number of clusters and the best clustering algorithm to be applied to analyze a given dataset is presented. The method is demonstrated for several agglomerative hierarchical clustering algorithms used in conjunction with three cluster validation techniques. © 2016 Wiley Periodicals, Inc.

  14. Derivation of a Performance Checklist for Ultrasound-Guided Arthrocentesis Using the Modified Delphi Method.

    PubMed

    Kunz, Derek; Pariyadath, Manoj; Wittler, Mary; Askew, Kim; Manthey, David; Hartman, Nicholas

    2017-06-01

    Arthrocentesis is an important skill for physicians in multiple specialties. Recent studies indicate a superior safety and performance profile for this procedure using ultrasound guidance for needle placement, and improving quality of care requires a valid measurement of competency using this modality. We endeavored to create a validated tool to assess the performance of this procedure using the modified Delphi technique and experts in multiple disciplines across the United States. We derived a 22-item checklist designed to assess competency for the completion of ultrasound-guided arthrocentesis, which demonstrated a Cronbach's alpha of 0.89, indicating an excellent degree of internal consistency. Although we were able to demonstrate content validity for this tool, further validity evidence should be acquired after the tool is used and studied in clinical and simulated contexts. © 2017 by the American Institute of Ultrasound in Medicine.
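
    Cronbach's alpha, the internal-consistency statistic reported above, can be computed directly from item-level scores. The sketch below uses a hypothetical 4-item excerpt scored over 5 performances (the actual checklist has 22 items).

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list of scores per checklist item, aligned across raters/performances."""
    k = len(item_scores)
    raters = list(zip(*item_scores))            # one tuple of item scores per performance
    totals = [sum(r) for r in raters]
    item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 0/1 scoring of 5 performances on a 4-item excerpt of the checklist.
items = [
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [1, 1, 0, 1, 1],
]
print(round(cronbach_alpha(items), 3))
```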

  15. Validation and Clinical Evaluation of a Novel Method To Measure Miltefosine in Leishmaniasis Patients Using Dried Blood Spot Sample Collection

    PubMed Central

    Rosing, H.; Hillebrand, M. J. X.; Blesson, S.; Mengesha, B.; Diro, E.; Hailu, A.; Schellens, J. H. M.; Beijnen, J. H.

    2016-01-01

    To facilitate future pharmacokinetic studies of combination treatments against leishmaniasis in remote regions in which the disease is endemic, a simple cheap sampling method is required for miltefosine quantification. The aims of this study were to validate a liquid chromatography-tandem mass spectrometry method to quantify miltefosine in dried blood spot (DBS) samples and to validate its use with Ethiopian patients with visceral leishmaniasis (VL). Since hematocrit (Ht) levels are typically severely decreased in VL patients, returning to normal during treatment, the method was evaluated over a range of clinically relevant Ht values. Miltefosine was extracted from DBS samples using a simple method of pretreatment with methanol, resulting in >97% recovery. The method was validated over a calibration range of 10 to 2,000 ng/ml, and accuracy and precision were within ±11.2% and ≤7.0% (≤19.1% at the lower limit of quantification), respectively. The method was accurate and precise for blood spot volumes between 10 and 30 μl and for Ht levels of 20 to 35%, although a linear effect of Ht levels on miltefosine quantification was observed in the bioanalytical validation. DBS samples were stable for at least 162 days at 37°C. Clinical validation of the method using paired DBS and plasma samples from 16 VL patients showed a median observed DBS/plasma miltefosine concentration ratio of 0.99, with good correlation (Pearson's r = 0.946). Correcting for patient-specific Ht levels did not further improve the concordance between the sampling methods. This successfully validated method to quantify miltefosine in DBS samples was demonstrated to be a valid and practical alternative to venous blood sampling that can be applied in future miltefosine pharmacokinetic studies with leishmaniasis patients, without Ht correction. PMID:26787691

  16. Development and full validation of an UPLC-MS/MS method for the determination of an anti-allergic indolinone derivative in rat plasma, and application to a preliminary pharmacokinetic study.

    PubMed

    Oufir, Mouhssin; Sampath, Chethan; Butterweck, Veronika; Hamburger, Matthias

    2012-08-01

    The natural product (E,Z)-3-(4-hydroxy-3,5-dimethoxybenzylidene)indolin-2-one (indolinone) was identified some years ago as a nanomolar inhibitor of FcɛRI-receptor dependent mast cell degranulation. To further explore the potential of the compound, we established an UPLC-MS/MS assay for dosage in rat plasma. The method was fully validated according to FDA Guidance for industry. Results of this validation and long-term stability study demonstrate that the method in lithium heparinized rat plasma is specific, accurate, precise and capable of producing reliable results according to recommendations of international guidelines. The method was validated with a LLOQ of 30.0 ng/mL and an ULOQ of 3000 ng/mL. The response versus concentration data were fitted with a first order polynomial with 1/X² weighting. No matrix effect was observed when using three independent sources of rat plasma. The average extraction recovery was consistent over the investigated range. This validation in rat plasma demonstrated that indolinone was stable for 190 days when stored below -65 °C; for 4 days at 10 °C in the autosampler; for 4 h at RT, and during three successive freeze/thaw cycles at -65 °C. Preliminary pharmacokinetic data were obtained in male Sprague-Dawley rats (2 mg/kg BW i.v.). Blood samples taken from 0 to 12 h after injection were collected, and data analyzed with WinNonlin. A short half-life (4.30±0.14 min) and a relatively high clearance (3.83±1.46 L/h/kg) were found. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Improving the sensitivity and specificity of a bioanalytical assay for the measurement of certolizumab pegol.

    PubMed

    Smeraglia, John; Silva, John-Paul; Jones, Kieran

    2017-08-01

    In order to evaluate placental transfer of certolizumab pegol (CZP), a more sensitive and selective bioanalytical assay was required to accurately measure low CZP concentrations in infant and umbilical cord blood. Results & methodology: A new electrochemiluminescence immunoassay was developed to measure CZP levels in human plasma. Validation experiments demonstrated improved selectivity (no matrix interference observed) and a detection range of 0.032-5.0 μg/ml. Accuracy and precision met acceptance criteria (mean total error ≤20.8%). Dilution linearity and sample stability were acceptable and sufficient to support the method. The electrochemiluminescence immunoassay was validated for measuring low CZP concentrations in human plasma. The method demonstrated a more than tenfold increase in sensitivity compared with previous assays, and improved selectivity for intact CZP.

  18. Quantifying frontal plane knee motion during single limb squats: reliability and validity of 2-dimensional measures.

    PubMed

    Gwynne, Craig R; Curran, Sarah A

    2014-12-01

    Clinical assessment of lower limb kinematics during dynamic tasks may identify individuals who demonstrate abnormal movement patterns that may contribute to the etiology or exacerbation of knee conditions such as patellofemoral joint (PFJt) pain. The purpose of this study was to determine the reliability, validity and associated measurement error of a clinically appropriate two-dimensional (2-D) procedure of quantifying frontal plane knee alignment during single limb squats. Nine female and nine male recreationally active subjects with no history of PFJt pain had frontal plane limb alignment assessed using three-dimensional (3-D) motion analysis and digital video cameras (2-D analysis) while performing single limb squats. The association between 2-D and 3-D measures was quantified using Pearson's product correlation coefficients. Intraclass correlation coefficients (ICCs) were determined for within- and between-session reliability of 2-D data and standard error of measurement (SEM) was used to establish measurement error. Frontal plane limb alignment assessed with 2-D analysis demonstrated good correlation compared with 3-D methods (r = 0.64 to 0.78, p < 0.001). Within-session (0.86) and between-session ICCs (0.74) demonstrated good reliability for 2-D measures and SEM scores ranged from 2° to 4°. 2-D measures have good consistency and may provide a valid measure of lower limb alignment when compared to existing 3-D methods. Assessment of lower limb kinematics using 2-D methods may be an accurate and clinically useful alternative to 3-D motion analysis when identifying individuals who demonstrate abnormal movement patterns associated with PFJt pain. Level of evidence: 2b.
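
    The validity and measurement-error statistics used above can be reproduced on toy data: Pearson's r between paired 2-D and 3-D angle measures, and the standard error of measurement from SEM = SD × √(1 − ICC). The angles below are hypothetical; the ICC of 0.74 is the between-session value from the abstract.

```python
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical frontal plane knee angles (degrees) for 8 subjects,
# measured by the 2-D video method and by 3-D motion capture.
two_d   = [8.0, 12.5, 6.0, 15.0, 10.0, 4.5, 13.0, 9.0]
three_d = [7.0, 13.0, 5.5, 14.0, 11.5, 5.0, 12.0, 8.0]
r = pearson_r(two_d, three_d)

# SEM from the between-session reliability: SEM = SD * sqrt(1 - ICC).
icc = 0.74
mean_2d = sum(two_d) / len(two_d)
sd = math.sqrt(sum((a - mean_2d) ** 2 for a in two_d) / (len(two_d) - 1))
sem = sd * math.sqrt(1 - icc)
print(f"r = {r:.2f}, SEM = {sem:.1f} degrees")
```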

  19. Testing the feasibility of eliciting preferences for health states from adolescents using direct methods.

    PubMed

    Crump, R Trafford; Lau, Ryan; Cox, Elizabeth; Currie, Gillian; Panepinto, Julie

    2018-06-22

    Measuring adolescents' preferences for health states can play an important role in evaluating the delivery of pediatric healthcare. However, formal evaluation of the common direct preference elicitation methods for health states has not been done with adolescents. Therefore, the purpose of this study is to test how these methods perform in terms of their feasibility, reliability, and validity for measuring health state preferences in adolescents. This study used a web-based survey of adolescents, 18 years of age or younger, living in the United States. The survey included four health states, each comprised of six attributes. Preferences for these health states were elicited using the visual analogue scale, time trade-off, and standard gamble. The feasibility, test-retest reliability, and construct validity of each of these preference elicitation methods were tested and compared. A total of 144 participants were included in this study. Using a web-based survey format to elicit preferences for health states from adolescents was feasible. A majority of participants completed all three elicitation methods, ranked those methods as being easy, with very few requiring assistance from someone else. However, all three elicitation methods demonstrated weak test-retest reliability, with Kendall's tau-a values ranging from 0.204 to 0.402. Similarly, all three methods demonstrated poor construct validity, with 9-50% of all rankings aligning with our expectations. There were no significant differences across age groups. Using a web-based survey format to elicit preferences for health states from adolescents is feasible. However, the reliability and construct validity of the methods used to elicit these preferences when using this survey format are poor. Further research into the effects of a web-based survey approach to eliciting preferences for health states from adolescents is needed before health services researchers or pediatric clinicians widely employ these methods.
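
    Kendall's tau-a, the test-retest statistic reported above, is the count of concordant minus discordant pairs divided by all possible pairs. A sketch with hypothetical visual analogue scale ratings of four health states:

```python
from itertools import combinations

def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs; ties count as neither."""
    conc = disc = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    n = len(x)
    return (conc - disc) / (n * (n - 1) / 2)

# Hypothetical test and retest ratings of four health states by one respondent.
test   = [80, 55, 40, 20]
retest = [75, 45, 50, 25]
print(round(kendall_tau_a(test, retest), 3))
```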

  20. Evaluation of the Thermo Scientific SureTect Listeria species assay. AOAC Performance Tested Method 071304.

    PubMed

    Cloke, Jonathan; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron

    2014-01-01

    The Thermo Scientific SureTect Listeria species Assay is a new real-time PCR assay for the detection of all species of Listeria in food and environmental samples. This validation study was conducted using the AOAC Research Institute (RI) Performance Tested Methods program to validate the SureTect Listeria species Assay in comparison to the reference method detailed in International Organization for Standardization 11290-1:1996 including amendment 1:2004 in a variety of foods plus plastic and stainless steel. The food matrixes validated were smoked salmon, processed cheese, fresh bagged spinach, cantaloupe, cooked prawns, cooked sliced turkey meat, cooked sliced ham, salami, pork frankfurters, and raw ground beef. All matrixes were tested by Thermo Fisher Scientific, Microbiology Division, Basingstoke, UK. In addition, three matrixes (pork frankfurters, fresh bagged spinach, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled independent laboratory study by the University of Guelph, Canada. Using probability of detection statistical analysis, a significant difference in favour of the SureTect assay was demonstrated between the SureTect and reference method for high-level spiked samples of pork frankfurters, smoked salmon, cooked prawns, stainless steel, and low-level spiked samples of salami. For all other matrixes, no significant difference was seen between the two methods during the study. Inclusivity testing was conducted with 68 different isolates of Listeria species, all of which were detected by the SureTect Listeria species Assay. None of the 33 exclusivity isolates were detected by the SureTect Listeria species Assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation, which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay shelf life.
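
    The probability-of-detection comparison used in this and similar AOAC studies can be sketched as a difference in detection proportions, dPOD, with a confidence interval; a Wald interval is used below as a simplified stand-in for the AOAC interval construction, and the counts are hypothetical.

```python
import math

def dpod_ci(x1, n1, x2, n2, z=1.96):
    """Difference in probability of detection (candidate - reference) with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, (d - z * se, d + z * se)

# Hypothetical counts: 18/20 positives by the candidate PCR assay vs 12/20 by the reference.
d, (lo, hi) = dpod_ci(18, 20, 12, 20)
print(f"dPOD = {d:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# A difference is flagged as significant when the interval excludes zero.
```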

  1. Multiple Imputation based Clustering Validation (MIV) for Big Longitudinal Trial Data with Missing Values in eHealth.

    PubMed

    Zhang, Zhaoyang; Fang, Hua; Wang, Honggang

    2016-06-01

    Web-delivered trials are an important component in eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal, high dimensional with missing values. Unsupervised learning methods have been widely applied in this area, however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values, more generally for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap or cross-validation based) and emerging (modularity-based) validation indices for general clustering methods as well as the specific one (Xie and Beni) for fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate that the MI-based Xie and Beni index for fuzzy clustering is more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could be easily adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services.
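
    The Xie and Beni index singled out above measures fuzzy-cluster compactness relative to separation; lower values indicate a better partition, so it can be minimized over candidate numbers of clusters. A one-dimensional sketch with hypothetical memberships:

```python
def xie_beni(points, centers, memberships, m=2.0):
    """Xie-Beni index: lower is better; memberships[k][i] = u_ik for point k, cluster i."""
    n = len(points)
    compactness = sum(
        memberships[k][i] ** m * (points[k] - centers[i]) ** 2
        for k in range(n) for i in range(len(centers))
    )
    separation = min(
        (centers[i] - centers[j]) ** 2
        for i in range(len(centers)) for j in range(len(centers)) if i != j
    )
    return compactness / (n * separation)

# Toy 1-D data with two obvious groups and hypothetical fuzzy memberships.
pts = [1.0, 1.2, 0.9, 5.0, 5.1, 4.8]
centers = [1.03, 4.97]
u = [[0.95, 0.05], [0.9, 0.1], [0.92, 0.08], [0.06, 0.94], [0.04, 0.96], [0.08, 0.92]]
print(round(xie_beni(pts, centers, u), 4))
```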

  2. Multiple Imputation based Clustering Validation (MIV) for Big Longitudinal Trial Data with Missing Values in eHealth

    PubMed Central

    Zhang, Zhaoyang; Wang, Honggang

    2016-01-01

    Web-delivered trials are an important component in eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal, high dimensional with missing values. Unsupervised learning methods have been widely applied in this area, however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values, more generally for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap or cross-validation based) and emerging (modularity-based) validation indices for general clustering methods as well as the specific one (Xie and Beni) for fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate MI-based Xie and Beni index for fuzzy-clustering is more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could be easily adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services. PMID:27126063

  3. Measuring Adverse Events in Helicopter Emergency Medical Services: Establishing Content Validity

    PubMed Central

    Patterson, P. Daniel; Lave, Judith R.; Martin-Gill, Christian; Weaver, Matthew D.; Wadas, Richard J.; Arnold, Robert M.; Roth, Ronald N.; Mosesso, Vincent N.; Guyette, Francis X.; Rittenberger, Jon C.; Yealy, Donald M.

    2015-01-01

    Introduction We sought to create a valid framework for detecting Adverse Events (AEs) in the high-risk setting of Helicopter Emergency Medical Services (HEMS). Methods We assembled a panel of 10 expert clinicians (n=6 emergency medicine physicians and n=4 prehospital nurses and flight paramedics) affiliated with a large multi-state HEMS organization in the Northeast U.S. We used a modified Delphi technique to develop a framework for detecting AEs associated with the treatment of critically ill or injured patients. We used a widely applied measure, the Content Validity Index (CVI), to quantify the validity of the framework’s content. Results The expert panel of 10 clinicians reached consensus on a common AE definition and four-step protocol/process for AE detection in HEMS. The consensus-based framework is composed of three main components: 1) a trigger tool, 2) a method for rating proximal cause, and 3) a method for rating AE severity. The CVI findings isolate components of the framework considered content valid. Conclusions We demonstrate a standardized process for the development of a content valid framework for AE detection. The framework is a model for the development of a method for AE identification in other settings, including ground-based EMS. PMID:24003951
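
    The Content Validity Index used above is computed per item as the proportion of experts rating the item relevant (3 or 4 on a 4-point scale). The ratings below are hypothetical, and the 0.78 cutoff is a commonly cited threshold for panels of roughly this size, not a figure from the paper.

```python
def item_cvi(ratings):
    """Item-level CVI: share of experts rating the item 3 or 4 on a 4-point relevance scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

# Hypothetical ratings by 10 experts for the three framework components.
panel = {
    "trigger tool":    [4, 4, 3, 4, 3, 4, 4, 3, 4, 4],
    "proximal cause":  [3, 4, 4, 3, 4, 3, 4, 4, 2, 3],
    "severity rating": [4, 3, 4, 4, 4, 3, 3, 4, 4, 3],
}
for name, ratings in panel.items():
    cvi = item_cvi(ratings)
    verdict = "content valid" if cvi >= 0.78 else "revise"
    print(f"{name}: I-CVI = {cvi:.2f} ({verdict})")
```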

  4. Methods of Measurement in epidemiology: Sedentary Behaviour

    PubMed Central

    Atkin, Andrew J; Gorely, Trish; Clemes, Stacy A; Yates, Thomas; Edwardson, Charlotte; Brage, Soren; Salmon, Jo; Marshall, Simon J; Biddle, Stuart JH

    2012-01-01

    Background Research examining sedentary behaviour as a potentially independent risk factor for chronic disease morbidity and mortality has expanded rapidly in recent years. Methods We present a narrative overview of the sedentary behaviour measurement literature. Subjective and objective methods of measuring sedentary behaviour suitable for use in population-based research with children and adults are examined. The validity and reliability of each method is considered, gaps in the literature specific to each method identified and potential future directions discussed. Results To date, subjective approaches to sedentary behaviour measurement, e.g. questionnaires, have focused predominantly on TV viewing or other screen-based behaviours. Typically, such measures demonstrate moderate reliability but slight to moderate validity. Accelerometry is increasingly being used for sedentary behaviour assessments; this approach overcomes some of the limitations of subjective methods, but detection of specific postures and postural changes by this method is somewhat limited. Instruments developed specifically for the assessment of body posture have demonstrated good reliability and validity in the limited research conducted to date. Miniaturization of monitoring devices, interoperability between measurement and communication technologies and advanced analytical approaches are potential avenues for future developments in this field. Conclusions High-quality measurement is essential in all elements of sedentary behaviour epidemiology, from determining associations with health outcomes to the development and evaluation of behaviour change interventions. Sedentary behaviour measurement remains relatively under-developed, although new instruments, both objective and subjective, show considerable promise and warrant further testing. PMID:23045206

  5. Validation of the Thermo Scientific SureTect Escherichia coli O157:H7 Real-Time PCR Assay for Raw Beef and Produce Matrixes.

    PubMed

    Cloke, Jonathan; Crowley, Erin; Bird, Patrick; Bastin, Ben; Flannery, Jonathan; Agin, James; Goins, David; Clark, Dorn; Radcliff, Roy; Wickstrand, Nina; Kauppinen, Mikko

    2015-01-01

    The Thermo Scientific™ SureTect™ Escherichia coli O157:H7 Assay is a new real-time PCR assay which has been validated through the AOAC Research Institute (RI) Performance Tested Methods(SM) program for raw beef and produce matrixes. This validation study specifically validated the assay with 375 g 1:4 and 1:5 ratios of raw ground beef and raw beef trim in comparison to the U.S. Department of Agriculture, Food Safety and Inspection Service, Microbiology Laboratory Guidebook (USDA-FSIS/MLG) reference method and 25 g bagged spinach and fresh apple juice at a ratio of 1:10, in comparison to the International Organization for Standardization 16654:2001 reference method. For raw beef matrixes, the validation of both 1:4 and 1:5 allows user flexibility with the enrichment protocol, although the choice between these two ratios should be based on the laboratory's specific test requirements. All matrixes were analyzed by Thermo Fisher Scientific, Microbiology Division, Vantaa, Finland, and Q Laboratories, Inc., Cincinnati, Ohio, in the method developer study. Two of the matrixes (raw ground beef at both 1:4 and 1:5 ratios) and bagged spinach were additionally analyzed in the AOAC-RI controlled independent laboratory study, which was conducted by Marshfield Food Safety, Marshfield, Wisconsin. Using probability of detection statistical analysis, no significant difference was demonstrated by the SureTect kit in comparison to the USDA FSIS reference method for raw beef matrixes, or with the ISO reference method for matrixes of bagged spinach and apple juice. Inclusivity and exclusivity testing was conducted with 58 E. coli O157:H7 and 54 non-E. coli O157:H7 isolates, respectively, which demonstrated that the SureTect assay was able to detect all isolates of E. coli O157:H7 analyzed. In addition, all but one of the nontarget isolates were correctly interpreted as negative by the SureTect Software. The single isolate giving a positive result was an E. coli O157:NM isolate. Nonmotile isolates of E. coli O157 have been demonstrated to still contain the H7 gene; therefore, this result is not unexpected. Robustness testing was conducted to evaluate the performance of the SureTect assay with specific deviations to the assay protocol, which were outside the recommended parameters and which are open to variation. This study demonstrated that the SureTect assay gave reliable performance. A final study to verify the shelf life of the product under accelerated conditions was also conducted.

  6. Validation of Alternative In Vitro Methods to Animal Testing: Concepts, Challenges, Processes and Tools.

    PubMed

    Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie

    This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems, software up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods, non-testing approaches such as predictive computer models up to entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised on international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities. 
Validation of alternative methods is conducted through scientific studies assessing two key hypotheses, reliability and relevance of the test method for a given purpose. Relevance encapsulates the scientific basis of the test method, its capacity to predict adverse effects in the "target system" (i.e. human health or the environment) as well as its applicability for the intended purpose. In this chapter we focus on the validation of non-animal in vitro alternative testing methods and review the concepts, challenges, processes and tools fundamental to the validation of in vitro methods intended for hazard testing of chemicals. We explore major challenges and peculiarities of validation in this area. Based on the notion that validation per se is a scientific endeavour that needs to adhere to key scientific principles, namely objectivity and appropriate choice of methodology, we examine basic aspects of study design and management, and provide illustrations of statistical approaches to describe predictive performance of validated test methods as well as their reliability.
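The predictive-performance side of relevance described above reduces, at its core, to a 2×2 comparison of the alternative method's calls against the accepted reference result. A minimal sketch, with invented classification outcomes (the counts and labels are hypothetical, not from any real validation study):

```python
# Sketch: summarizing an alternative method's predictive performance against
# a reference (e.g. in vivo) classification. All data here are invented.

def predictive_performance(pairs):
    """pairs: list of (reference, alternative) labels, 1 = toxic, 0 = non-toxic."""
    tp = sum(1 for r, a in pairs if r == 1 and a == 1)
    tn = sum(1 for r, a in pairs if r == 0 and a == 0)
    fp = sum(1 for r, a in pairs if r == 0 and a == 1)
    fn = sum(1 for r, a in pairs if r == 1 and a == 0)
    return {
        "sensitivity": tp / (tp + fn),      # fraction of true toxicants detected
        "specificity": tn / (tn + fp),      # fraction of non-toxicants cleared
        "accuracy": (tp + tn) / len(pairs),
    }

# 50 hypothetical substances: 20 toxic, 30 non-toxic by the reference method
pairs = [(1, 1)] * 18 + [(1, 0)] * 2 + [(0, 0)] * 25 + [(0, 1)] * 5
perf = predictive_performance(pairs)
print(perf)
```

Sensitivity, specificity and accuracy of this kind are the usual summary statistics for how well an alternative method predicts the reference outcome.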

  7. Comprehensive validation scheme for in situ fiber optics dissolution method for pharmaceutical drug product testing.

    PubMed

    Mirza, Tahseen; Liu, Qian Julie; Vivilecchia, Richard; Joshi, Yatindra

    2009-03-01

    There has been a growing interest during the past decade in the use of fiber optics dissolution testing. Use of this novel technology is mainly confined to research and development laboratories. It has not yet emerged as a tool for end product release testing despite its ability to generate in situ results and efficiency improvement. One potential reason may be the lack of clear validation guidelines that can be applied for the assessment of suitability of fiber optics. This article describes a comprehensive validation scheme and development of a reliable, robust, reproducible and cost-effective dissolution test using fiber optics technology. The test was successfully applied for characterizing the dissolution behavior of a 40-mg immediate-release tablet dosage form that is under development at Novartis Pharmaceuticals, East Hanover, New Jersey. The method was validated for the following parameters: linearity, precision, accuracy, specificity, and robustness. In particular, robustness was evaluated in terms of probe sampling depth and probe orientation. The in situ fiber optic method was found to be comparable to the existing manual sampling dissolution method. Finally, the fiber optic dissolution test was successfully performed by different operators on different days, to further enhance the validity of the method. The results demonstrate that the fiber optics technology can be successfully validated for end product dissolution/release testing. (c) 2008 Wiley-Liss, Inc. and the American Pharmacists Association
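Several of the validation parameters listed above, linearity in particular, come down to fitting a straight line to calibration standards and checking the quality of the fit. A self-contained sketch using invented calibration data (the concentrations and responses are illustrative only, not the study's values):

```python
# Sketch of a linearity check as used in analytical method validation:
# least-squares fit of response vs concentration, plus correlation coefficient.
import statistics

conc = [20, 40, 60, 80, 100]                 # % of nominal (hypothetical)
resp = [0.101, 0.198, 0.305, 0.402, 0.498]   # detector response (hypothetical)

mx, my = statistics.mean(conc), statistics.mean(resp)
sxx = sum((x - mx) ** 2 for x in conc)
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, resp))
syy = sum((y - my) ** 2 for y in resp)

slope = sxy / sxx
intercept = my - slope * mx
r = sxy / (sxx * syy) ** 0.5     # Pearson correlation coefficient

print(slope, intercept, r)       # r close to 1 indicates acceptable linearity
```

A typical acceptance criterion is r above some threshold (often 0.999 for this kind of assay), together with residuals showing no systematic trend.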

  8. Global approach for the validation of an in-line Raman spectroscopic method to determine the API content in real-time during a hot-melt extrusion process.

    PubMed

    Netchacovitch, L; Thiry, J; De Bleye, C; Dumont, E; Cailletaud, J; Sacré, P-Y; Evrard, B; Hubert, Ph; Ziemons, E

    2017-08-15

    Since the Food and Drug Administration (FDA) published a guidance based on the Process Analytical Technology (PAT) approach, real-time analyses during manufacturing processes have been expanding rapidly. In this study, in-line Raman spectroscopic analyses were performed during a Hot-Melt Extrusion (HME) process to determine the Active Pharmaceutical Ingredient (API) content in real-time. The method was validated based on a univariate and a multivariate approach, and the analytical performances of the obtained models were compared. Moreover, on one hand, in-line data were correlated with the real API concentration present in the sample, as quantified by a previously validated off-line confocal Raman microspectroscopic method. On the other hand, in-line data were also treated as a function of the concentration based on the weighing of the components in the prepared mixture. The importance of developing quantitative methods based on the use of a reference method was thus highlighted. The method was validated according to the total error approach, fixing the acceptance limits at ±15% and the α risk at 5%. The method meets the requirements of the European Pharmacopeia for the uniformity of content of single-dose preparations. The validation proves that future results will be within the acceptance limits with a previously defined probability. Finally, the in-line validated method was compared with the off-line one to demonstrate its ability to be used in routine analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
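The total error approach mentioned above combines systematic error (bias) and random error (precision) and compares the result against the ±15% acceptance limits. The sketch below is a deliberately simplified illustration with invented results; a real accuracy profile uses β-expectation tolerance intervals rather than this crude two-sigma envelope:

```python
# Simplified total-error style check at one concentration level.
# Numbers are hypothetical; a rigorous treatment would use
# beta-expectation tolerance intervals per the total error approach.
import statistics

nominal = 100.0                                      # % API (hypothetical)
measured = [98.2, 101.5, 99.1, 102.3, 97.8, 100.6]   # back-calculated results

bias_pct = 100 * (statistics.mean(measured) - nominal) / nominal
cv_pct = 100 * statistics.stdev(measured) / statistics.mean(measured)
total_error_pct = abs(bias_pct) + 2 * cv_pct         # crude 2-sigma envelope

print(bias_pct, cv_pct, total_error_pct)
print("within +/-15% acceptance limits:", total_error_pct < 15)
```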

  9. Correlates of the Rosenberg Self-Esteem Scale Method Effects

    ERIC Educational Resources Information Center

    Quilty, Lena C.; Oakman, Jonathan M.; Risko, Evan

    2006-01-01

    Investigators of personality assessment are becoming aware that using positively and negatively worded items in questionnaires to prevent acquiescence may negatively impact construct validity. The Rosenberg Self-Esteem Scale (RSES) has demonstrated a bifactorial structure typically proposed to result from these method effects. Recent work suggests…

  10. UV Spectrophotometric Method for Estimation of Polypeptide-K in Bulk and Tablet Dosage Forms

    NASA Astrophysics Data System (ADS)

    Kaur, P.; Singh, S. Kumar; Gulati, M.; Vaidya, Y.

    2016-01-01

    An analytical method for estimation of polypeptide-k using UV spectrophotometry has been developed and validated for bulk as well as tablet dosage form. The developed method was validated for linearity, precision, accuracy, specificity, robustness, detection, and quantitation limits. The method has shown good linearity over the range from 100.0 to 300.0 μg/ml with a correlation coefficient of 0.9943. The percentage recovery of 99.88% showed that the method was highly accurate. The precision demonstrated relative standard deviation of less than 2.0%. The LOD and LOQ of the method were found to be 4.4 and 13.33, respectively. The study established that the proposed method is reliable, specific, reproducible, and cost-effective for the determination of polypeptide-k.
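The LOD and LOQ values reported above are consistent with the usual ICH-style estimates LOD = 3.3σ/S and LOQ = 10σ/S (note that 13.33/4.4 ≈ 10/3.3). The σ and slope values below are assumptions chosen to illustrate the formulas, not the study's raw data:

```python
# ICH-style detection and quantitation limits from the SD of the response
# (sigma) and the calibration slope (S). Input values are assumed.
sigma = 0.008   # SD of low-level/blank response (assumed)
slope = 0.006   # calibration slope, response per ug/ml (assumed)

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(round(lod, 2), round(loq, 2))   # prints: 4.4 13.33
```

Whatever σ and S are, the LOQ/LOD ratio under these formulas is fixed at 10/3.3 ≈ 3.03, which is why the two reported values track each other.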

  11. Method validation for methanol quantification present in working places

    NASA Astrophysics Data System (ADS)

    Muna, E. D. M.; Bizarri, C. H. B.; Maciel, J. R. M.; da Rocha, G. P.; de Araújo, I. O.

    2015-01-01

    Given the widespread use of methanol by different industry sectors and the high toxicity associated with this substance, an analytical method is needed that can determine methanol levels in the air of working environments with adequate sensitivity, precision and accuracy. Based on the methodology established by the National Institute for Occupational Safety and Health (NIOSH), a method for the determination of methanol collected in silica gel tubes was validated; its effectiveness was demonstrated through participation in the international collaborative program sponsored by the American Industrial Hygiene Association (AIHA).

  12. Measuring adverse events in helicopter emergency medical services: establishing content validity.

    PubMed

    Patterson, P Daniel; Lave, Judith R; Martin-Gill, Christian; Weaver, Matthew D; Wadas, Richard J; Arnold, Robert M; Roth, Ronald N; Mosesso, Vincent N; Guyette, Francis X; Rittenberger, Jon C; Yealy, Donald M

    2014-01-01

    We sought to create a valid framework for detecting adverse events (AEs) in the high-risk setting of helicopter emergency medical services (HEMS). We assembled a panel of 10 expert clinicians (n = 6 emergency medicine physicians and n = 4 prehospital nurses and flight paramedics) affiliated with a large multistate HEMS organization in the Northeast US. We used a modified Delphi technique to develop a framework for detecting AEs associated with the treatment of critically ill or injured patients. We used a widely applied measure, the content validity index (CVI), to quantify the validity of the framework's content. The expert panel of 10 clinicians reached consensus on a common AE definition and four-step protocol/process for AE detection in HEMS. The consensus-based framework is composed of three main components: (1) a trigger tool, (2) a method for rating proximal cause, and (3) a method for rating AE severity. The CVI findings isolate components of the framework considered content valid. We demonstrate a standardized process for the development of a content-valid framework for AE detection. The framework is a model for the development of a method for AE identification in other settings, including ground-based EMS.
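The content validity index mentioned above is a simple proportion-based statistic: for each item, the fraction of expert raters who score it as relevant (3 or 4 on a 4-point scale). A sketch with invented expert ratings (the item names loosely follow the framework's three components; the scores are hypothetical):

```python
# Sketch of item-level and scale-level content validity index (CVI).
# Ratings are invented: 10 experts score each item 1-4 for relevance.

ratings = {
    "trigger_tool":   [4, 4, 3, 4, 3, 4, 4, 3, 4, 4],
    "proximal_cause": [3, 4, 4, 3, 4, 3, 3, 4, 2, 4],
    "severity":       [4, 3, 4, 4, 4, 3, 4, 4, 4, 3],
}

# I-CVI: proportion of experts rating the item 3 or 4
item_cvi = {k: sum(r >= 3 for r in v) / len(v) for k, v in ratings.items()}
# S-CVI/Ave: mean of the item-level indices
scale_cvi = sum(item_cvi.values()) / len(item_cvi)

print(item_cvi, scale_cvi)
```

With panels of this size, items with an I-CVI of roughly 0.78 or higher are typically considered content valid and retained.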

  13. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    PubMed Central

    CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira

    2013-01-01

    Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. Objective: This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods: The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results: The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. Conclusion: It was concluded that methods involving physical anthropology present a high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological adjustment is recommended, as demonstrated in this study. PMID:24037076

  14. Fluorescent adduct formation with terbium: a novel strategy for transferrin glycoform identification in human body fluids and carbohydrate-deficient transferrin HPLC method validation.

    PubMed

    Sorio, Daniela; De Palo, Elio Franco; Bertaso, Anna; Bortolotti, Federica; Tagliaro, Franco

    2017-02-01

    This paper puts forward a new method for transferrin (Tf) glycoform analysis in body fluids that involves the formation of a transferrin-terbium fluorescent adduct (TfFluo). The key idea is to validate the analytical procedure for carbohydrate-deficient transferrin (CDT), a traditional biochemical serum marker used to identify chronic alcohol abuse. Terbium added to a human body-fluid sample produced TfFluo. An anion exchange HPLC technique with fluorescence detection (λexc 298 nm, λem 550 nm) permitted clear separation and identification of Tf glycoform peaks without any interfering signals, allowing selective Tf sialoform analysis in human serum and in body fluids (cadaveric blood, cerebrospinal fluid, and dried blood spots) that are not amenable to the routine test. Serum samples (n = 78) were analyzed by both the traditional absorbance (Abs) and the fluorescence (Fl) HPLC methods, and the CDT% levels demonstrated a significant correlation (p < 0.001, Pearson). Intra- and inter-run CV% were 3.1 and 4.6%, respectively. The 1.9 CDT% cut-off of the HPLC Abs method, proposed as the reference method, corresponded by interpolation in the correlation curve to a 1.3 CDT% cut-off for the present method. Method comparison by Passing-Bablok and Bland-Altman tests demonstrated agreement between Fl and Abs. In conclusion, the novel method is a reliable test for CDT% analysis and provides a substantial analytical improvement, offering important advantages in terms of the types of body fluid that can be analyzed. Its sensitivity and freedom from interferences extend the clinical applications, making the CDT assay reliable for body fluids usually not suitable for the routine test. Graphical Abstract: The formation of a transferrin-terbium fluorescent adduct can be used to analyze the transferrin glycoforms. The HPLC method for carbohydrate-deficient transferrin (CDT%) measurement was validated and employed to determine the levels in different body fluids.
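The Bland-Altman comparison used above summarizes agreement between two methods as the mean of the paired differences (bias) and its 95% limits of agreement. A sketch with invented paired CDT% values:

```python
# Sketch of a Bland-Altman agreement check between two methods,
# here labelled abs_m and fl_m. Paired CDT% values are invented.
import statistics

abs_m = [1.2, 1.8, 2.5, 3.1, 1.6, 2.0, 4.2, 1.4]
fl_m  = [1.1, 1.7, 2.7, 3.0, 1.5, 2.2, 4.0, 1.3]

diffs = [a - f for a, f in zip(abs_m, fl_m)]
bias = statistics.mean(diffs)
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement

print(bias, loa)
```

Agreement is typically judged by whether the bias is close to zero and the limits of agreement are narrow enough to be clinically acceptable.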

  15. An extended protocol for usability validation of medical devices: Research design and reference model.

    PubMed

    Schmettow, Martin; Schnittker, Raphaela; Schraagen, Jan Maarten

    2017-05-01

    This paper proposes and demonstrates an extended protocol for usability validation testing of medical devices. A review of currently used methods for the usability evaluation of medical devices revealed two main shortcomings: first, a lack of methods to closely trace interaction sequences and derive performance measures from them; second, a prevailing focus on cross-sectional validation studies, which ignores the issues of learnability and training. The U.S. Food and Drug Administration's recent proposal for a validation testing protocol for medical devices is then extended to address these shortcomings: (1) a novel process measure, 'normative path deviations', is introduced that is useful for both quantitative and qualitative usability studies, and (2) a longitudinal, completely within-subject study design is presented that assesses learnability and training effects and allows analysis of the diversity of users. A reference regression model is introduced to analyze data from this and similar studies, drawing upon generalized linear mixed-effects models and a Bayesian estimation approach. The extended protocol is implemented and demonstrated in a study comparing a novel syringe infusion pump prototype to an existing design with a sample of 25 healthcare professionals. Strong performance differences between designs were observed with a variety of usability measures, as well as varying training-on-the-job effects. We discuss our findings with regard to validation testing guidelines, reflect on the extensions and discuss the perspectives they add to the validation process. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Segmental analysis of amphetamines in hair using a sensitive UHPLC-MS/MS method.

    PubMed

    Jakobsson, Gerd; Kronstrand, Robert

    2014-06-01

    A sensitive and robust ultra high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for quantification of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine and 3,4-methylenedioxymethamphetamine in hair samples. Segmented hair (10 mg) was incubated in 2M sodium hydroxide (80°C, 10 min) before liquid-liquid extraction with isooctane, followed by centrifugation and evaporation of the organic phase to dryness. The residue was reconstituted in methanol:formate buffer pH 3 (20:80). The total run time was 4 min, and after optimization of the UHPLC-MS/MS parameters, validation covered selectivity, matrix effects, recovery, process efficiency, calibration model and range, lower limit of quantification, precision and bias. The calibration curve ranged from 0.02 to 12.5 ng/mg, and the recovery was between 62 and 83%. During validation the bias was less than ±7% and the imprecision was less than 5% for all analytes. In routine analysis, fortified control samples demonstrated an imprecision <13% and control samples made from authentic hair demonstrated an imprecision <26%. The method was applied to samples from a controlled study of amphetamine intake as well as forensic hair samples previously analyzed with an ultra high performance liquid chromatography time-of-flight mass spectrometry (UHPLC-TOF-MS) screening method. The proposed method was suitable for quantification of these drugs in forensic cases including violent crimes, autopsy cases, drug testing and re-granting of driving licences. This study also demonstrated that if hair samples are divided into several short segments, the time point for intake of a small dose of amphetamine can be estimated, which might be useful when drug-facilitated crimes are investigated. Copyright © 2014 John Wiley & Sons, Ltd.

  17. Validity of Dietary Assessment in Athletes: A Systematic Review

    PubMed Central

    Beck, Kathryn L.; Gifford, Janelle A.; Slater, Gary; Flood, Victoria M.; O’Connor, Helen

    2017-01-01

    Dietary assessment methods that are recognized as appropriate for the general population are usually applied in a similar manner to athletes, despite the knowledge that sport-specific factors can complicate assessment and impact accuracy in unique ways. As dietary assessment methods are used extensively within the field of sports nutrition, there is concern that the validity of these methodologies has not undergone sufficiently rigorous evaluation in this unique population sub-group. The purpose of this systematic review was to compare two or more methods of dietary assessment, including dietary intake measured against biomarkers or reference measures of energy expenditure, in athletes. Six electronic databases were searched for English-language, full-text articles published from January 1980 until June 2016. The search strategy combined the following keywords: diet, nutrition assessment, athlete, and validity; reported outcomes included, but were not limited to, energy intake, macro- and/or micronutrient intake, food intake, nutritional adequacy, diet quality, and nutritional status. Meta-analysis was performed on studies with sufficient methodological similarity, with between-group standardized mean differences (or effect sizes) and 95% confidence intervals (CI) being calculated. Of the 1624 studies identified, 18 were eligible for inclusion. Studies comparing self-reported energy intake (EI) to energy expenditure assessed via doubly labelled water were grouped for comparison (n = 11) and demonstrated that mean EI was under-estimated by 19% (−2793 ± 1134 kJ/day). Meta-analysis revealed a large pooled effect size of −1.006 (95% CI: −1.3 to −0.7; p < 0.001). The remaining studies (n = 7) compared a new dietary tool or instrument to a reference method(s) (e.g., food record, 24-h dietary recall, biomarker) as part of a validation study. This systematic review revealed there are limited robust studies evaluating dietary assessment methods in athletes. 
Existing literature demonstrates the substantial variability between methods, with under- and misreporting of intake being frequently observed. There is a clear need for careful validation of dietary assessment methods, including emerging technical innovations, among athlete populations. PMID:29207495
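A pooled effect size with a 95% CI, like the one reported above, is the kind of quantity produced by inverse-variance weighting of per-study standardized mean differences. A fixed-effect sketch with invented study-level effects (the SMDs and standard errors are not taken from the review):

```python
# Sketch of fixed-effect, inverse-variance pooling of standardized mean
# differences (SMDs). Study-level effects and SEs are invented.

effects = [(-1.2, 0.25), (-0.8, 0.30), (-1.1, 0.20)]   # (SMD, SE) per study

weights = [1 / se ** 2 for _, se in effects]            # inverse-variance weights
pooled = sum(w * es for (es, _), w in zip(effects, weights)) / sum(weights)
se_pooled = (1 / sum(weights)) ** 0.5
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

print(round(pooled, 3), ci)
```

Real meta-analyses would also assess heterogeneity (e.g. with a random-effects model) before settling on a pooled estimate.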

  18. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using a solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of using the adaptive finite element method to validate the application of this new methodology to fracture mechanics problems, by computing demonstration problems and comparing the resulting stress intensity factors with analytical results.

  19. VARSEDIG: an algorithm for morphometric characters selection and statistical validation in morphological taxonomy.

    PubMed

    Guisande, Cástor; Vari, Richard P; Heine, Jürgen; García-Roselló, Emilio; González-Dacosta, Jacinto; Perez-Schofield, Baltasar J García; González-Vilas, Luis; Pelayo-Villamil, Patricia

    2016-09-12

    We present and discuss VARSEDIG, an algorithm which identifies the morphometric features that significantly discriminate two taxa and validates the morphological distinctness between them via a Monte-Carlo test. VARSEDIG is freely available as a function of the RWizard application PlotsR (http://www.ipez.es/RWizard) and as an R package on CRAN. The variables selected by VARSEDIG with the overlap method were very similar to those selected by logistic regression and discriminant analysis, but the algorithm overcomes some shortcomings of these methods. VARSEDIG is, therefore, a good alternative to current classical classification methods for identifying morphometric features that significantly discriminate a taxon and for validating its morphological distinctness from other taxa. As a demonstration of its potential for this purpose, we analyze morphological discrimination among some species of the Neotropical freshwater family Characidae.
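Monte-Carlo validation of distinctness can be pictured as a permutation test: relabel the pooled specimens at random many times and ask how often the between-taxa difference is at least as large as the observed one. This is a generic sketch of that idea, not VARSEDIG's actual algorithm, using invented measurements of a single morphometric character:

```python
# Generic Monte-Carlo permutation test of between-taxa separation on one
# morphometric character. Measurements are invented for illustration.
import random

taxon_a = [10.2, 10.8, 11.1, 10.5, 10.9, 11.3]
taxon_b = [12.0, 12.4, 11.8, 12.6, 12.1, 11.9]

def mean_diff(a, b):
    return abs(sum(a) / len(a) - sum(b) / len(b))

observed = mean_diff(taxon_a, taxon_b)
pooled = taxon_a + taxon_b
random.seed(1)                      # reproducible resampling

n_iter, extreme = 5000, 0
for _ in range(n_iter):
    random.shuffle(pooled)          # random relabelling of all specimens
    if mean_diff(pooled[:6], pooled[6:]) >= observed:
        extreme += 1

p_value = (extreme + 1) / (n_iter + 1)
print(observed, p_value)            # small p supports morphological distinctness
```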

  20. Validation of chemistry models employed in a particle simulation method

    NASA Technical Reports Server (NTRS)

    Haas, Brian L.; Mcdonald, Jeffrey D.

    1991-01-01

    The chemistry models employed in a statistical particle simulation method, as implemented on the Intel iPSC/860 multiprocessor computer, are validated and applied. Chemical relaxation of five-species air in adiabatic gas reservoirs involves 34 simultaneous dissociation, recombination, and atomic-exchange reactions. The reaction rates employed in the analytic solutions are obtained from Arrhenius experimental correlations as functions of temperature for adiabatic gas reservoirs in thermal equilibrium. Favorable agreement with the analytic solutions validates the simulation when applied to relaxation of O2 toward equilibrium in reservoirs dominated by dissociation and recombination, respectively, and when applied to relaxation of air in the temperature range 5000 to 30,000 K. A flow of O2 over a circular cylinder at high Mach number is simulated to demonstrate application of the method to multidimensional reactive flows.
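Arrhenius correlations of the kind mentioned above give the rate coefficient in a modified form, k(T) = A * T**n * exp(-Ea/(R*T)). The constants below are placeholders chosen for illustration, not the study's actual reaction parameters:

```python
# Sketch of evaluating a modified Arrhenius rate coefficient over the
# temperature range of interest. A, n and Ea are assumed placeholders.
import math

R = 8.314                        # J/(mol*K)
A, n, Ea = 2.0e18, -1.0, 4.0e5   # pre-exponential, T exponent, activation energy

def k(T):
    """Modified Arrhenius rate coefficient at temperature T (K)."""
    return A * T ** n * math.exp(-Ea / (R * T))

for T in (5000.0, 15000.0, 30000.0):
    print(T, k(T))               # rate rises steeply with temperature
```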

  1. Validation of a two-dimensional liquid chromatography method for quality control testing of pharmaceutical materials.

    PubMed

    Yang, Samuel H; Wang, Jenny; Zhang, Kelly

    2017-04-07

    Despite the advantages of 2D-LC, there is currently little to no published work demonstrating the suitability of 2D-LC methods for use in a quality control (QC) environment for good manufacturing practice (GMP) tests. This lack of information becomes more critical as the availability of commercial 2D-LC instrumentation has significantly increased, and more testing facilities begin to acquire 2D-LC capabilities. It is increasingly important that the transferability of developed 2D-LC methods be assessed in terms of reproducibility, robustness and performance across different laboratories worldwide. The work presented here focuses on the evaluation of a heart-cutting 2D-LC method used for the analysis of a pharmaceutical material, where a key, co-eluting impurity in the first dimension (1D) is resolved from the main peak and analyzed in the second dimension (2D). A design-of-experiments (DOE) approach was taken in the collection of the data, and the results were then modeled in order to evaluate method robustness using statistical modeling software. This quality by design (QbD) approach gives a deeper understanding of the impact of these 2D-LC critical method attributes (CMAs) and how they affect overall method performance. Although there are multiple parameters that may be critical from a method development point of view, a special focus of this work is the evaluation of unique 2D-LC critical method attributes from a method validation perspective that transcend conventional method development and validation. The 2D-LC method attributes are evaluated for recovery, peak shape, and resolution of the two co-eluting compounds in question in the 2D separation. For the method, linearity, accuracy, precision, repeatability, and sensitivity are assessed along with day-to-day, analyst-to-analyst, and lab-to-lab (instrument-to-instrument) assessments. 
The results of this validation study demonstrate that the 2D-LC method is accurate, sensitive, and robust and is ultimately suitable for QC testing with good method transferability. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. In-house validation study of the DuPont Qualicon BAX system Q7 instrument with the BAX system PCR Assay for Salmonella (modification of AOAC Official Method 2003.09 and AOAC Research Institute Performance-Tested Method 100201).

    PubMed

    Tice, George; Andaloro, Bridget; White, H Kirk; Bolton, Lance; Wang, Siqun; Davis, Eugene; Wallace, Morgan

    2009-01-01

    In 2006, DuPont Qualicon introduced the BAX system Q7 instrument for use with its assays. To demonstrate the equivalence of the new and old instruments, a validation study was conducted using the BAX system PCR Assay for Salmonella, AOAC Official Method 2003.09, on three food types. The foods were simultaneously analyzed with the BAX system Q7 instrument and either the U.S. Food and Drug Administration Bacteriological Analytical Manual or the U.S. Department of Agriculture-Food Safety and Inspection Service Microbiology Laboratory Guidebook reference method for detecting Salmonella. Comparable performance between the BAX system and the reference methods was observed. Of the 75 paired samples analyzed, 39 samples were positive by both the BAX system and reference methods, and 36 samples were negative by both the BAX system and reference methods, demonstrating 100% correlation. Inclusivity and exclusivity for the BAX system Q7 instrument were also established by testing 50 Salmonella strains and 20 non-Salmonella isolates. All Salmonella strains returned positive results, and all non-Salmonella isolates returned a negative response.

  3. 75 FR 2523 - Office of Innovation and Improvement; Overview Information; Arts in Education Model Development...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... that is based on rigorous scientifically based research methods to assess the effectiveness of a...) Relies on measurements or observational methods that provide reliable and valid data across evaluators... of innovative, cohesive models that are based on research and have demonstrated that they effectively...

  4. Multimethod Investigation of Interpersonal Functioning in Borderline Personality Disorder

    PubMed Central

    Stepp, Stephanie D.; Hallquist, Michael N.; Morse, Jennifer Q.; Pilkonis, Paul A.

    2011-01-01

    Even though interpersonal functioning is of great clinical importance for patients with borderline personality disorder (BPD), the comparative validity of different assessment methods for interpersonal dysfunction has not yet been tested. This study examined multiple methods of assessing interpersonal functioning, including self- and other-reports, clinical ratings, electronic diaries, and social cognitions in three groups of psychiatric patients (N=138): patients with (1) BPD, (2) another personality disorder, and (3) Axis I psychopathology only. Using dominance analysis, we examined the predictive validity of each method in detecting changes in symptom distress and social functioning six months later. Across multiple methods, the BPD group often reported higher interpersonal dysfunction scores compared to other groups. Predictive validity results demonstrated that self-report and electronic diary ratings were the most important predictors of distress and social functioning. Our findings suggest that self-report scores and electronic diary ratings have high clinical utility, as these methods appear most sensitive to change. PMID:21808661

  5. A Serious Game for Clinical Assessment of Cognitive Status: Validation Study

    PubMed Central

    Chignell, Mark; Tierney, Mary C.; Lee, Jacques

    2016-01-01

    Background: We propose the use of serious games to screen for abnormal cognitive status in situations where it may be too costly or impractical to use standard cognitive assessments (eg, emergency departments). If validated, serious games in health care could enable broader availability of efficient and engaging cognitive screening. Objective: The objective of this work is to demonstrate the feasibility of a game-based cognitive assessment delivered on tablet technology to a clinical sample and to conduct preliminary validation against standard mental status tools commonly used in elderly populations. Methods: We carried out a feasibility study in a hospital emergency department to evaluate the use of a serious game by elderly adults (N=146; age: mean 80.59, SD 6.00, range 70-94 years). We correlated game performance against a number of standard assessments, including the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and the Confusion Assessment Method (CAM). Results: After a series of modifications, the game could be used by a wide range of elderly patients in the emergency department, demonstrating its feasibility for use with these users. Of 146 patients, 141 (96.6%) consented to participate and played our serious game. Refusals to play the game were typically due to concerns of family members rather than unwillingness of the patient to play the game. Performance on the serious game correlated significantly with the MoCA (r=–.339, P<.001) and MMSE (r=–.558, P<.001), and correlated (point-biserial correlation) with the CAM (r=.565, P<.001) and with other cognitive assessments. Conclusions: This research demonstrates the feasibility of using serious games in a clinical setting. Further research is required to demonstrate the validity and reliability of game-based assessments for clinical decision making. PMID:27234145
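The point-biserial correlation used above is simply Pearson's r between a binary group label and a continuous score, and it can be computed directly from the group means. A sketch with invented binary labels and game scores (not the study's data):

```python
# Sketch of a point-biserial correlation between a binary outcome
# (e.g. screen-positive = 1, screen-negative = 0) and a continuous
# game score. All values are invented.
import statistics

label = [1, 1, 1, 0, 0, 0, 0, 0]                   # binary group membership
score = [8.0, 7.5, 9.0, 4.0, 5.0, 3.5, 4.5, 5.5]   # continuous game score

g1 = [s for c, s in zip(label, score) if c == 1]
g0 = [s for c, s in zip(label, score) if c == 0]
n, n1, n0 = len(score), len(g1), len(g0)

sd = statistics.pstdev(score)   # population SD, as the formula requires
r_pb = (statistics.mean(g1) - statistics.mean(g0)) / sd * ((n1 * n0) / n ** 2) ** 0.5

print(round(r_pb, 3))
```

The result is identical to computing ordinary Pearson's r on the (label, score) pairs, which is why point-biserial values can be reported alongside the other correlations.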

  6. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population.

    PubMed

    Carvalho, Suzana Papile Maciel; Brito, Liz Magalhães; Paiva, Luiz Airton Saavedra de; Bicudo, Lucilene Arilho Ribeiro; Crosato, Edgard Michel; Oliveira, Rogério Nogueira de

    2013-01-01

    Validation studies of physical anthropology methods in different population groups are extremely important, especially where population variation may cause problems in the identification of an individual by the application of norms developed for other communities. This study aimed to estimate the gender of skeletons by applying the method of Oliveira et al. (1995), previously used in a population sample from Northeast Brazil. The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls, and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis, and logistic regression. The application of the method of Oliveira et al. (1995) to this population achieved very different accuracy between genders, 100% for females but only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of the measurement data for the population analyzed, with the creation of a new discriminant formula, allowed accuracy of 76.47% for males and 78.13% for females. It was concluded that physical anthropology methods offer a high rate of accuracy for human identification, easy application, low cost, and simplicity; however, the methodologies must be validated for different populations because differences in ethnic patterns are directly related to phenotypic aspects. In this specific case, the method of Oliveira et al. (1995) presented good accuracy and may be used for gender estimation in two geographic regions of Brazil, namely Northeast and Southeast; for other regions of the country (North, Central-West and South), prior methodological adjustment is recommended, as demonstrated in this study.
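
    The adjusted discriminant approach described above can be sketched as a logistic model over the two mandibular measurements. The coefficients below are purely hypothetical placeholders for illustration; the paper's actual discriminant formula is not reproduced in the abstract:

```python
import math

# Hypothetical coefficients for illustration only -- the study derives its own
# population-specific discriminant formula, which the abstract does not give.
B0, B_BIGONIAL, B_RAMUS = -60.0, 0.35, 0.45

def estimate_sex(bigonial_mm, ramus_height_mm, threshold=0.5):
    """Logistic discriminant over two mandibular measurements.

    Returns ('M' or 'F', estimated probability of male).
    """
    score = B0 + B_BIGONIAL * bigonial_mm + B_RAMUS * ramus_height_mm
    p_male = 1.0 / (1.0 + math.exp(-score))
    return ("M" if p_male >= threshold else "F"), p_male
```

    Refitting the coefficients on a local sample, as the study did, is what turns a poorly transferring formula into a usable one for a new population.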

  7. Evaluation of the Thermo Scientific™ SureTect™ Listeria species Assay.

    PubMed

    Cloke, Jonathan; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko

    2014-03-01

    The Thermo Scientific™ SureTect™ Listeria species Assay is a new real-time PCR assay for the detection of all species of Listeria in food and environmental samples. This validation study was conducted through the AOAC Research Institute (RI) Performance Tested Methods (SM) program to validate the SureTect Listeria species Assay against the reference method detailed in International Organization for Standardization 11290-1:1996, including amendment 1:2004, in a variety of foods plus plastic and stainless steel. The food matrixes validated were smoked salmon, processed cheese, fresh bagged spinach, cantaloupe, cooked prawns, cooked sliced turkey meat, cooked sliced ham, salami, pork frankfurters, and raw ground beef. All matrixes were tested by Thermo Fisher Scientific, Microbiology Division, Basingstoke, UK. In addition, three matrixes (pork frankfurters, fresh bagged spinach, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled independent laboratory study by the University of Guelph, Canada. Using probability of detection statistical analysis, a significant difference in favour of the SureTect assay over the reference method was demonstrated for high-level spiked samples of pork frankfurters, smoked salmon, cooked prawns, and stainless steel, and for low-level spiked samples of salami. For all other matrixes, no significant difference was seen between the two methods during the study. Inclusivity testing was conducted with 68 different isolates of Listeria species, all of which were detected by the SureTect Listeria species Assay. None of the 33 exclusivity isolates were detected by the SureTect Listeria species Assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation, which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay shelf life.

  8. Determination of Aflatoxins and Ochratoxin A in Traditional Turkish Concentrated Fruit Juice Products by Multi-Immunoaffinity Column Cleanup and LC Fluorescence Detection: Single-Laboratory Validation.

    PubMed

    Kaymak, Tugrul; Türker, Levent; Tulay, Hüseyin; Stroka, Joerg

    2018-04-27

    Background: Pekmez and pestil are traditional Turkish foods made from concentrated grape juice, which can be contaminated with mycotoxins such as aflatoxins and ochratoxin A (OTA). Objective: To carry out a single-laboratory validation of a method to simultaneously determine aflatoxins B₁, B₂, G₁, and G₂ and ochratoxin A in pekmez and pestil. Methods: The homogenized sample is extracted with methanol-water (80 + 20) using a high-speed blender. The (sample) extract is filtered, diluted with phosphate-buffered saline solution, and applied to a multi-immunoaffinity column (AFLAOCHRA PREP®). Aflatoxins and ochratoxin A are removed with (neat) methanol and then directly analyzed by reversed-phase LC with fluorescence detection using post-column bromination (Kobra cell®). Results: Test portions of blank pekmez and pestil were spiked with a mixture of aflatoxins and ochratoxin A to give levels ranging from 2.6 to 10.4 μg/kg and 1.0 to 4.0 μg/kg, respectively. Recoveries for total aflatoxins and ochratoxin A ranged from 84 to 106% and 80 to 97%, respectively, for spiked samples. Based on results for spiked pekmez and pestil (30 replicates each at three levels), the repeatability RSD ranged from 1.6 to 12% and 2.7 to 11% for total aflatoxins and ochratoxin A, respectively. Conclusions: The method performance in terms of recovery, repeatability, and detection limits has been demonstrated to be suitable for use as an Official Method. Highlights: First immunoaffinity column method validated for simultaneous analysis of aflatoxins and ochratoxin A in pekmez and pestil. Suitability for use for official purposes in Turkey, demonstrated by single-laboratory validation. Co-occurrence of aflatoxins and OTA in mulberry and carob pekmez reported for the first time.
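
    The recovery and repeatability figures reported above follow from standard formulas; a minimal sketch, with invented replicate values at a hypothetical 4.0 μg/kg OTA spike level:

```python
from statistics import mean, stdev

def recovery_percent(measured, spiked_level):
    """Mean recovery of replicate spiked measurements, in percent."""
    return 100.0 * mean(measured) / spiked_level

def repeatability_rsd(measured):
    """Relative standard deviation (%) of replicate measurements."""
    return 100.0 * stdev(measured) / mean(measured)

# Hypothetical replicate results (ug/kg) at a 4.0 ug/kg spike level
reps = [3.6, 3.9, 3.8, 3.7, 3.5]
```

    A validation exercise like the one above repeats this over 30 replicates at each of three spike levels per analyte.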

  9. Quantification of myocardial blood flow using dynamic 320-row multi-detector CT as compared with ¹⁵O-H₂O PET.

    PubMed

    Kikuchi, Yasuka; Oyama-Manabe, Noriko; Naya, Masanao; Manabe, Osamu; Tomiyama, Yuuki; Sasaki, Tsukasa; Katoh, Chietsugu; Kudo, Kohsuke; Tamaki, Nagara; Shirato, Hiroki

    2014-07-01

    This study introduces a method to calculate myocardial blood flow (MBF) and coronary flow reserve (CFR) using relatively low-dose dynamic 320-row multi-detector computed tomography (MDCT), validates the method against ¹⁵O-H₂O positron-emission tomography (PET), and assesses the CFRs of coronary artery disease (CAD) patients. Thirty-two subjects underwent both dynamic CT perfusion (CTP) and PET perfusion imaging at rest and during pharmacological stress. In 12 normal subjects (pilot group), the calculation method for MBF and CFR was established. In another 13 normal subjects (validation group), MBF and CFR obtained by dynamic CTP and PET were compared. Finally, the CFRs obtained by dynamic CTP and PET were compared between the validation group and CAD patients (n = 7). Correlation between MBF of MDCT and PET was strong (r = 0.95, P < 0.0001). CFR showed good correlation between dynamic CTP and PET (r = 0.67, P = 0.0126). CFR by CT in the CAD group (2.3 ± 0.8) was significantly lower than that in the validation group (5.2 ± 1.8) (P = 0.0011). We established a method for measuring MBF and CFR with relatively low-dose dynamic MDCT. Lower CFR was well demonstrated in CAD patients by dynamic CTP. • MBF and CFR can be calculated using dynamic CTP with 320-row MDCT. • MBF and CFR showed good correlation between dynamic CTP and PET. • Lower CFR was well demonstrated in CAD patients by dynamic CTP.
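
    CFR is simply the ratio of hyperaemic (stress) to resting MBF, and the reported agreement between modalities is a Pearson correlation; a minimal sketch of both computations, assuming nothing about the authors' image-processing pipeline:

```python
def cfr(mbf_stress, mbf_rest):
    """Coronary flow reserve: ratio of stress to rest myocardial blood flow."""
    return mbf_stress / mbf_rest

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5
```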

  10. [Validation and verification of microbiology methods].

    PubMed

    Camaró-Sala, María Luisa; Martínez-García, Rosana; Olmos-Martínez, Piedad; Catalá-Cuenca, Vicente; Ocete-Mochón, María Dolores; Gimeno-Cardona, Concepción

    2015-01-01

    Clinical microbiologists should ensure, to the maximum level allowed by scientific and technical development, the reliability of their results. This implies that, in addition to meeting the technical criteria that ensure validity, tests must be performed under conditions that allow comparable results to be obtained regardless of the laboratory performing them. In this sense, the use of recognized and accepted reference methods is the most effective tool to provide these guarantees. Activities related to the verification and validation of analytical methods have become very important, as techniques and increasingly complex analytical equipment are continuously developed and updated, and professionals have an interest in ensuring quality processes and results. The definitions of validation and verification are described, along with the different types of validation/verification, the types of methods, and the level of validation necessary depending on the degree of standardization. The situations in which validation/verification is mandatory and/or recommended are discussed, including those particularly related to validation in microbiology. The importance of promoting the use of reference strains and standard controls in microbiology is stressed, as is participation in External Quality Assessment programs to demonstrate technical competence. Emphasis is placed on how to calculate some of the parameters required for validation/verification, such as accuracy and precision. These concepts are developed in SEIMC microbiological procedure number 48: «Validation and verification of microbiological methods» (www.seimc.org/protocols/microbiology). Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  11. Validation of SCIAMACHY and TOMS UV Radiances Using Ground and Space Observations

    NASA Technical Reports Server (NTRS)

    Hilsenrath, E.; Bhartia, P. K.; Bojkov, B. R.; Kowalewski, M.; Labow, G.; Ahmad, Z.

    2004-01-01

    Verification of stratospheric ozone recovery remains a high priority for environmental research and policy definition. Models predict an ozone recovery at a much lower rate than the measured depletion rate observed to date. Therefore, improved precision of the satellite and ground ozone observing systems is required over the long term to verify the recovery. We show that validation of satellite radiances from space and from the ground can be a very effective means of correcting long-term drifts of backscatter-type satellite measurements and can be used to cross-calibrate all BUV instruments in orbit (TOMS, SBUV/2, GOME, SCIAMACHY, OMI, GOME-2, OMPS). This method bypasses the retrieval algorithms normally used to validate and correct the satellite data for both satellite and ground-based measurements. Radiance comparisons employ forward models and are inherently more accurate than inverse (retrieval) algorithms. This approach, however, requires well-calibrated instruments and an accurate radiative transfer model that accounts for aerosols. TOMS and SCIAMACHY calibrations are checked to demonstrate this method and its applicability to long-term trends.

  12. [Symptom and complaint validation of chronic pain in social medical evaluation. Part I: Terminological and methodological approaches].

    PubMed

    Dohrenbusch, R

    2009-06-01

    Chronic pain accompanied by disability and handicap is a frequent symptom necessitating medical assessment. Current guidelines for the assessment of malingering suggest discrimination between explanatory demonstration, aggravation and simulation. However, this distinction has not clearly been put into operation and validated. The necessity of assessment strategies based on general principles of psychological assessment and testing is emphasized. Standardized and normalized psychological assessment methods and symptom validation techniques should be used in the assessment of subjects with chronic pain problems. An adaptive procedure for assessing the validity of complaints is suggested to minimize effort and costs.

  13. Self-homodyne free-space optical communication system based on orthogonally polarized binary phase shift keying.

    PubMed

    Cai, Guangyu; Sun, Jianfeng; Li, Guangyuan; Zhang, Guo; Xu, Mengmeng; Zhang, Bo; Yue, Chaolei; Liu, Liren

    2016-06-10

    A self-homodyne laser communication system based on orthogonally polarized binary phase shift keying is demonstrated. The working principles of this method and the structure of a transceiver are described using theoretical calculations. Moreover, the signal-to-noise ratio, sensitivity, and bit error rate are analyzed for the amplifier-noise-limited case. The reported experiment validates the feasibility of the proposed method and demonstrates its advantageous sensitivity as a self-homodyne communication system.
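
    The paper analyzes the amplifier-noise-limited case for its orthogonally polarized scheme; as a simpler, generic reference point (an assumption, not the paper's analysis), the textbook bit error rate of coherent BPSK over an AWGN channel can be sketched as:

```python
import math

def bpsk_ber(ebn0_db):
    """Theoretical BER of coherent BPSK over AWGN: 0.5 * erfc(sqrt(Eb/N0)).

    ebn0_db is the per-bit signal-to-noise ratio Eb/N0 in decibels.
    """
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebn0))
```

    At Eb/N0 = 0 dB this gives roughly 7.9e-2, falling steeply as the SNR improves, which is why sensitivity (required power for a target BER) is the headline figure for such receivers.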

  14. Validation of Milliflex® Quantum for Bioburden Testing of Pharmaceutical Products.

    PubMed

    Gordon, Oliver; Goverde, Marcel; Staerk, Alexandra; Roesti, David

    2017-01-01

    This article reports the validation strategy used to demonstrate that the Milliflex® Quantum yielded non-inferior results to the traditional bioburden method. It was validated according to USP <1223>, European Pharmacopoeia 5.1.6, and Parenteral Drug Association Technical Report No. 33, and comprised the validation parameters robustness, ruggedness, repeatability, specificity, limit of detection and quantification, accuracy, precision, linearity, range, and equivalence in routine operation. For the validation, a combination of pharmacopeial ATCC strains as well as a broad selection of in-house isolates was used. In-house isolates were used in a stressed state. Results were statistically evaluated against the pharmacopeial acceptance criterion of ≥70% recovery compared to the traditional method. Post-hoc test power calculations verified the appropriateness of the sample size used to detect such a difference. Furthermore, equivalence tests verified non-inferiority of the rapid method compared to the traditional method. In conclusion, the rapid bioburden method based on the Milliflex® Quantum was successfully validated as an alternative to the traditional bioburden test. LAY ABSTRACT: Pharmaceutical drug products must fulfill specified quality criteria regarding their microbial content in order to ensure patient safety. Drugs that are delivered into the body via injection, infusion, or implantation must be sterile (i.e., devoid of living microorganisms). Bioburden testing measures the levels of microbes present in the bulk solution of a drug before sterilization, and thus it provides important information for manufacturing a safe product. In general, bioburden testing has to be performed using the methods described in the pharmacopoeias (membrane filtration or plate count). These methods are well established and validated regarding their effectiveness; however, the incubation time required to visually identify microbial colonies is long. Thus, alternative methods that detect microbial contamination faster will improve control over the manufacturing process and speed up product release. Before alternative methods may be used, they must undergo a side-by-side comparison with pharmacopeial methods. In this comparison, referred to as validation, it must be shown in a statistically verified manner that the effectiveness of the alternative method is at least equivalent to that of the pharmacopeial methods. Here we describe the successful validation of an alternative bioburden testing method based on fluorescent staining of growing microorganisms applying the Milliflex® Quantum system by MilliporeSigma. © PDA, Inc. 2017.

  15. The Analysis of Likert Scales Using State Multipoles: An Application of Quantum Methods to Behavioral Sciences Data

    ERIC Educational Resources Information Center

    Camparo, James; Camparo, Lorinda B.

    2013-01-01

    Though ubiquitous, Likert scaling's traditional mode of analysis is often unable to uncover all of the valid information in a data set. Here, the authors discuss a solution to this problem based on methodology developed by quantum physicists: the state multipole method. The authors demonstrate the relative ease and value of this method by…

  16. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software.

    PubMed

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2015-05-01

    Several sophisticated methods of footprint analysis currently exist. However, it is sometimes useful to apply standard measurement methods of recognized evidence with an easy and quick application. We sought to assess the reliability and validity of a new method of footprint assessment in a healthy population using Photoshop CS5 software (Adobe Systems Inc, San Jose, California). Forty-two footprints, corresponding to 21 healthy individuals (11 men with a mean ± SD age of 20.45 ± 2.16 years and 10 women with a mean ± SD age of 20.00 ± 1.70 years) were analyzed. Footprints were recorded in static bipedal standing position using optical podography and digital photography. Three trials for each participant were performed. The Hernández-Corvo, Chippaux-Smirak, and Staheli indices and the Clarke angle were calculated by manual method and by computerized method using Photoshop CS5 software. Test-retest was used to determine reliability. Validity was obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed high values (ICC, 0.98-0.99). Moreover, the validity test clearly showed no difference between techniques (ICC, 0.99-1). The reliability and validity of a method to measure, assess, and record the podometric indices using Photoshop CS5 software has been demonstrated. This provides a quick and accurate tool useful for the digital recording of morphostatic foot study parameters and their control.
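
    The midfoot indices named above are simple width and angle measurements taken from the print; a sketch of two of them using their commonly cited definitions (extracting the widths from the photograph, whether manually or in Photoshop, is the hard part and is omitted here):

```python
def chippaux_smirak(midfoot_width, forefoot_width):
    """Chippaux-Smirak index: minimum midfoot width as a percentage of
    maximum forefoot width (commonly cited definition)."""
    return 100.0 * midfoot_width / forefoot_width

def staheli(midfoot_width, heel_width):
    """Staheli arch index: minimum midfoot width divided by maximum heel
    width (commonly cited definition)."""
    return midfoot_width / heel_width
```

    With both the manual and the Photoshop measurements expressed as these ratios, the reliability and validity comparison reduces to computing intraclass correlations between the paired index values.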

  17. Resolution of ab initio shapes determined from small-angle scattering.

    PubMed

    Tuukkanen, Anne T; Kleywegt, Gerard J; Svergun, Dmitri I

    2016-11-01

    Spatial resolution is an important characteristic of structural models, and the authors of structures determined by X-ray crystallography or electron cryo-microscopy always provide the resolution upon publication and deposition. Small-angle scattering of X-rays or neutrons (SAS) has recently become a mainstream structural method providing the overall three-dimensional structures of proteins, nucleic acids and complexes in solution. However, no quantitative resolution measure is available for SAS-derived models, which significantly hampers their validation and further use. Here, a method is derived for resolution assessment for ab initio shape reconstruction from scattering data. The inherent variability of the ab initio shapes is utilized and it is demonstrated how their average Fourier shell correlation function is related to the model resolution. The method is validated against simulated data for proteins with known high-resolution structures and its efficiency is demonstrated in applications to experimental data. It is proposed that henceforth the resolution be reported in publications and depositions of ab initio SAS models.
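
    The average Fourier shell correlation at the heart of the method can be sketched as follows; this is a generic FSC computation between two 3-D maps, not the authors' exact averaging scheme over ab initio reconstructions:

```python
import numpy as np

def fourier_shell_correlation(vol1, vol2, n_shells=16):
    """FSC between two cubic 3-D maps, averaged over spherical frequency shells."""
    f1 = np.fft.fftshift(np.fft.fftn(vol1))
    f2 = np.fft.fftshift(np.fft.fftn(vol2))
    n = vol1.shape[0]
    grid = np.indices(vol1.shape) - n // 2          # centred frequency coordinates
    radius = np.sqrt((grid ** 2).sum(axis=0))
    edges = np.linspace(0.0, n / 2.0, n_shells + 1)
    fsc = np.empty(n_shells)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        shell = (radius >= lo) & (radius < hi)
        num = (f1[shell] * np.conj(f2[shell])).sum()
        den = np.sqrt((np.abs(f1[shell]) ** 2).sum() *
                      (np.abs(f2[shell]) ** 2).sum())
        fsc[i] = (num / den).real if den > 0 else 0.0
    return fsc
```

    The resolution is then read off as the spatial frequency where the averaged FSC drops below a chosen threshold, in the same spirit as cryo-EM map comparisons.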

  18. Resolution of ab initio shapes determined from small-angle scattering

    PubMed Central

    Tuukkanen, Anne T.; Kleywegt, Gerard J.; Svergun, Dmitri I.

    2016-01-01

    Spatial resolution is an important characteristic of structural models, and the authors of structures determined by X-ray crystallography or electron cryo-microscopy always provide the resolution upon publication and deposition. Small-angle scattering of X-rays or neutrons (SAS) has recently become a mainstream structural method providing the overall three-dimensional structures of proteins, nucleic acids and complexes in solution. However, no quantitative resolution measure is available for SAS-derived models, which significantly hampers their validation and further use. Here, a method is derived for resolution assessment for ab initio shape reconstruction from scattering data. The inherent variability of the ab initio shapes is utilized and it is demonstrated how their average Fourier shell correlation function is related to the model resolution. The method is validated against simulated data for proteins with known high-resolution structures and its efficiency is demonstrated in applications to experimental data. It is proposed that henceforth the resolution be reported in publications and depositions of ab initio SAS models. PMID:27840683

  19. Development of Finer Spatial Resolution Optical Properties from MODIS

    DTIC Science & Technology

    2008-02-04

    infrared (SWIR) channels at 1240 nm and 2130 nm. The increased resolution spectral Rrs channels are input into bio-optical algorithms (Quasi...processes. Additionally, increased resolution is required for validation of ocean color products in coastal regions due to the shorter spatial scales of...with in situ Rrs data to determine the "best" method in coastal regimes. We demonstrate that finer resolution is required for validation of coastal

  20. Mixed group validation: a method to address the limitations of criterion group validation in research on malingering detection.

    PubMed

    Frederick, R I

    2000-01-01

    Mixed group validation (MGV) is offered as an alternative to criterion group validation (CGV) to estimate the true positive and false positive rates of tests and other diagnostic signs. CGV requires perfect confidence about each research participant's status with respect to the presence or absence of pathology. MGV determines diagnostic efficiencies based on group data; knowing an individual's status with respect to pathology is not required. MGV can use relatively weak indicators to validate better diagnostic signs, whereas CGV requires perfect diagnostic signs to avoid error in computing true positive and false positive rates. The process of MGV is explained, and a computer simulation demonstrates the soundness of the procedure. MGV of the Rey 15-Item Memory Test (Rey, 1958) for 723 pre-trial criminal defendants resulted in higher estimates of true positive rates and lower estimates of false positive rates as compared with prior research conducted with CGV. The author demonstrates how MGV addresses all the criticisms Rogers (1997b) outlined for differential prevalence designs in malingering detection research. Copyright 2000 John Wiley & Sons, Ltd.
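
    The core of MGV can be stated as a small linear system: in each group, the observed rate of positive test signs satisfies r = p*TPR + (1 - p)*FPR, where p is that group's (estimated) prevalence of the condition. Two groups with different prevalences give two equations in the two unknowns. A minimal sketch of the solve, under that simple two-group formulation:

```python
def mixed_group_validation(p1, r1, p2, r2):
    """Solve r_i = p_i*TPR + (1 - p_i)*FPR for TPR and FPR.

    p1, p2: estimated prevalence of the condition in groups 1 and 2.
    r1, r2: observed positive-sign rates in groups 1 and 2.
    """
    if p1 == p2:
        raise ValueError("groups must differ in prevalence")
    tpr = ((1 - p2) * r1 - (1 - p1) * r2) / (p1 - p2)
    fpr = (p2 * r1 - p1 * r2) / (p2 - p1)
    return tpr, fpr
```

    This is exactly why MGV needs no individual-level gold standard: only group prevalences and group sign rates enter the computation.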

  1. Challenges and opportunities in bioanalytical support for gene therapy medicinal product development.

    PubMed

    Ma, Mark; Balasubramanian, Nanda; Dodge, Robert; Zhang, Yan

    2017-09-01

    Gene and nucleic acid therapies have demonstrated patient benefits in addressing unmet medical needs. Besides considerations regarding the biological nature of the gene therapy, the quality of bioanalytical methods plays an important role in ensuring the success of these novel therapies. Inconsistent approaches among bioanalytical labs during preclinical and clinical phases have been observed. There are many underlying reasons for this inconsistency: the various platforms and reagents used in quantitative methods, the lack of detailed regulatory guidance on method validation, and uncertainty about the immunogenicity strategy for supporting gene therapy may all be influential. This review summarizes recent practices and considerations in bioanalytical support of pharmacokinetic/pharmacodynamic and immunogenicity evaluations in gene therapy development, with insight into method design, development and validation.

  2. Utility of NIST Whole-Genome Reference Materials for the Technical Validation of a Multigene Next-Generation Sequencing Test.

    PubMed

    Shum, Bennett O V; Henner, Ilya; Belluoccio, Daniele; Hinchcliffe, Marcus J

    2017-07-01

    The sensitivity and specificity of next-generation sequencing laboratory developed tests (LDTs) are typically determined by an analyte-specific approach. Analyte-specific validations use disease-specific controls to assess an LDT's ability to detect known pathogenic variants. Alternatively, a methods-based approach can be used for LDT technical validations. Methods-focused validations do not use disease-specific controls but use benchmark reference DNA that contains known variants (benign, variants of unknown significance, and pathogenic) to assess variant calling accuracy of a next-generation sequencing workflow. Recently, four whole-genome reference materials (RMs) from the National Institute of Standards and Technology (NIST) were released to standardize methods-based validations of next-generation sequencing panels across laboratories. We provide a practical method for using NIST RMs to validate multigene panels. We analyzed the utility of RMs in validating a novel newborn screening test that targets 70 genes, called NEO1. Despite the NIST RM variant truth set originating from multiple sequencing platforms, replicates, and library types, we discovered a 5.2% false-negative variant detection rate in the RM truth set genes that were assessed in our validation. We developed a strategy using complementary non-RM controls to demonstrate 99.6% sensitivity of the NEO1 test in detecting variants. Our findings have implications for laboratories or proficiency testing organizations using whole-genome NIST RMs for testing. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
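
    The reported false-negative rate and sensitivity come down to comparing the variants called by the workflow against the reference-material truth set; a minimal sketch, with purely illustrative variant keys:

```python
def sensitivity(expected_variants, detected_variants):
    """Fraction of truth-set variants detected: TP / (TP + FN)."""
    expected = set(expected_variants)
    tp = len(expected & set(detected_variants))
    return tp / len(expected)
```

    The study's point is that the truth set itself can be incomplete in the target genes, which is why complementary non-RM controls were needed to demonstrate the 99.6% figure.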

  3. Development of a novel observational measure for anxiety in young children: The Anxiety Dimensional Observation Scale

    PubMed Central

    Mian, Nicholas D.; Carter, Alice S.; Pine, Daniel S.; Wakschlag, Lauren S.; Briggs-Gowan, Margaret J.

    2015-01-01

    Background Identifying anxiety disorders in preschool-age children represents an important clinical challenge. Observation is essential to clinical assessment and can help differentiate normative variation from clinically significant anxiety. Yet, most anxiety assessment methods for young children rely on parent-reports. The goal of this article is to present and preliminarily test the reliability and validity of a novel observational paradigm for assessing a range of fearful and anxious behaviors in young children, the Anxiety Dimensional Observation Schedule (Anx-DOS). Methods A diverse sample of 403 children, aged 3 to 6 years, and their mothers was studied. Reliability and validity in relation to parent reports (Preschool Age Psychiatric Assessment) and known risk factors, including indicators of behavioral inhibition (latency to touch novel objects) and attention bias to threat (in the dot-probe task) were investigated. Results The Anx-DOS demonstrated good inter-rater reliability and internal consistency. Evidence for convergent validity was demonstrated relative to mother-reported separation anxiety, social anxiety, phobic avoidance, trauma symptoms, and past service use. Finally, fearfulness was associated with observed latency and attention bias toward threat. Conclusions Findings support the Anx-DOS as a method for capturing early manifestations of fearfulness and anxiety in young children. Multimethod assessments incorporating standardized methods for assessing discrete, observable manifestations of anxiety may be beneficial for early identification and clinical intervention efforts. PMID:25773515

  4. Hyperspectral Image Classification for Land Cover Based on an Improved Interval Type-II Fuzzy C-Means Approach

    PubMed Central

    Li, Zhao-Liang

    2018-01-01

    Few studies have examined hyperspectral remote-sensing image classification with type-II fuzzy sets. This paper addresses classification of hyperspectral remote-sensing imagery using an improved interval type-II fuzzy c-means (IT2FCM*) approach. In contrast to other traditional fuzzy c-means-based approaches, the IT2FCM* algorithm considers the ranking of interval numbers and the spectral uncertainty. Classification results on a hyperspectral dataset using the FCM, IT2FCM, and the proposed improved IT2FCM* algorithms show that the IT2FCM* method gives the best performance in terms of clustering accuracy. To validate and demonstrate the separability of the IT2FCM*, four type-I fuzzy validity indexes are employed, and a comparative analysis of these indexes as applied to the FCM and IT2FCM methods is made. The four indexes are also applied to datasets of different spatial and spectral resolution to analyze the effects of spectral and spatial scaling factors on the separability of the FCM, IT2FCM, and IT2FCM* methods. The results of these validity indexes on the hyperspectral datasets show that the improved IT2FCM* algorithm has the best values among the three algorithms in general. The results demonstrate that the IT2FCM* performs well in hyperspectral remote-sensing image classification because of its ability to handle hyperspectral uncertainty. PMID:29373548

  5. Validity and reliability of a simple, low cost measure to quantify children’s dietary intake in afterschool settings

    PubMed Central

    Davison, Kirsten K.; Austin, S. Bryn; Giles, Catherine; Cradock, Angie L.; Lee, Rebekka M.; Gortmaker, Steven L.

    2017-01-01

    Interest in evaluating and improving children’s diets in afterschool settings has grown, necessitating the development of feasible yet valid measures for capturing children’s intake in such settings. This study’s purpose was to test the criterion validity and cost of three unobtrusive visual estimation methods compared to a plate-weighing method: direct on-site observation using a 4-category rating scale and off-site rating of digital photographs taken on-site using 4- and 10-category scales. Participants were 111 children in grades 1–6 attending four afterschool programs in Boston, MA in December 2011. Researchers observed and photographed 174 total snack meals consumed across two days at each program. Visual estimates of consumption were compared to weighed estimates (the criterion measure) using intra-class correlations. All three methods were highly correlated with the criterion measure, ranging from 0.92–0.94 for total calories consumed, 0.86–0.94 for consumption of pre-packaged beverages, 0.90–0.93 for consumption of fruits/vegetables, and 0.92–0.96 for consumption of grains. For water, which was not pre-portioned, coefficients ranged from 0.47–0.52. The photographic methods also demonstrated excellent inter-rater reliability: 0.84–0.92 for the 4-point and 0.92–0.95 for the 10-point scale. The costs of the methods for estimating intake ranged from $0.62 per observation for the on-site direct visual method to $0.95 per observation for the criterion measure. This study demonstrates that feasible, inexpensive methods can validly and reliably measure children’s dietary intake in afterschool settings. Improving precision in measures of children’s dietary intake can reduce the likelihood of spurious or null findings in future studies. PMID:25596895
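
    The study quantifies agreement with intra-class correlations. As a sketch of the same idea that is simple to state in closed form, here is Lin's concordance correlation coefficient, a related but distinct agreement measure (not the ICC variant the study used), computed with population variances:

```python
def concordance_ccc(xs, ys):
    """Lin's concordance correlation coefficient: agreement of xs with ys.

    Equals 1 only for perfect agreement (y = x); penalizes both scatter
    and systematic shifts between the two measurement methods.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

    Unlike a plain Pearson correlation, this drops below 1 when one method is systematically biased relative to the other, which is the relevant failure mode when validating visual estimates against weighed portions.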

  6. Real-Time PCR Method for Detection of Salmonella spp. in Environmental Samples.

    PubMed

    Kasturi, Kuppuswamy N; Drgon, Tomas

    2017-07-15

    The methods currently used for detecting Salmonella in environmental samples require 2 days to produce results and have limited sensitivity. Here, we describe the development and validation of a real-time PCR Salmonella screening method that produces results in 18 to 24 h. Primers and probes specific to the gene invA, group D, and Salmonella enterica serovar Enteritidis organisms were designed and evaluated for inclusivity and exclusivity using a panel of 329 Salmonella isolates representing 126 serovars and 22 non-Salmonella organisms. The invA- and group D-specific sets identified all the isolates accurately. The PCR method had 100% inclusivity and detected 1 to 2 copies of Salmonella DNA per reaction. Primers specific for Salmonella-differentiating fragment 1 (Sdf-1) in conjunction with the group D set had 100% inclusivity for 32 S. Enteritidis isolates and 100% exclusivity for the 297 non-Enteritidis Salmonella isolates. Single-laboratory validation performed on 1,741 environmental samples demonstrated that the PCR method detected 55% more positives than the Vitek immunodiagnostic assay system (VIDAS) method. The PCR results correlated well with the culture results, and the method did not report any false-negative results. The receiver operating characteristic (ROC) analysis documented excellent agreement between the results from the culture and PCR methods (area under the curve, 0.90; 95% confidence interval of 0.76 to 1.0) confirming the validity of the PCR method. IMPORTANCE This validated PCR method detects 55% more positives for Salmonella in half the time required for the reference method, VIDAS. The validated PCR method will help to strengthen public health efforts through rapid screening of Salmonella spp. in environmental samples.
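
    The ROC area under the curve reported above can be computed directly from the Mann-Whitney pairwise interpretation of AUC; a minimal sketch, independent of the authors' tooling:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as P(pos > neg) + 0.5 * P(tie), over all positive/negative pairs."""
    wins = ties = 0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1
            elif p == q:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))
```

    An AUC of 0.90, as reported, means a randomly chosen culture-positive sample outranks a randomly chosen culture-negative one 90% of the time under the PCR readout.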

  7. Real-Time PCR Method for Detection of Salmonella spp. in Environmental Samples

    PubMed Central

    Drgon, Tomas

    2017-01-01

    ABSTRACT The methods currently used for detecting Salmonella in environmental samples require 2 days to produce results and have limited sensitivity. Here, we describe the development and validation of a real-time PCR Salmonella screening method that produces results in 18 to 24 h. Primers and probes specific to the gene invA, group D, and Salmonella enterica serovar Enteritidis organisms were designed and evaluated for inclusivity and exclusivity using a panel of 329 Salmonella isolates representing 126 serovars and 22 non-Salmonella organisms. The invA- and group D-specific sets identified all the isolates accurately. The PCR method had 100% inclusivity and detected 1 to 2 copies of Salmonella DNA per reaction. Primers specific for Salmonella-differentiating fragment 1 (Sdf-1) in conjunction with the group D set had 100% inclusivity for 32 S. Enteritidis isolates and 100% exclusivity for the 297 non-Enteritidis Salmonella isolates. Single-laboratory validation performed on 1,741 environmental samples demonstrated that the PCR method detected 55% more positives than the Vitek immunodiagnostic assay system (VIDAS) method. The PCR results correlated well with the culture results, and the method did not report any false-negative results. The receiver operating characteristic (ROC) analysis documented excellent agreement between the results from the culture and PCR methods (area under the curve, 0.90; 95% confidence interval of 0.76 to 1.0), confirming the validity of the PCR method. IMPORTANCE This validated PCR method detects 55% more positives for Salmonella in half the time required for the reference method, VIDAS. The validated PCR method will help to strengthen public health efforts through rapid screening of Salmonella spp. in environmental samples. PMID:28500041
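
    The ROC agreement analysis cited in this record rests on the rank interpretation of the area under the curve; a minimal sketch (pure Python, hypothetical data, not the study's code) computes AUC as the Mann-Whitney probability that a positive sample outscores a negative one:

```python
def roc_auc(labels, scores):
    """AUC as P(score of a positive > score of a negative); ties count 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Culture result as ground truth (1 = positive), hypothetical PCR signals as scores
auc = roc_auc([1, 1, 0, 1, 0, 0], [3.2, 2.7, 0.4, 1.9, 2.0, 0.1])
```

An AUC of 0.90, as reported, means a randomly chosen culture-positive sample outranks a randomly chosen negative one 90% of the time.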

  8. The design of a joined wing flight demonstrator aircraft

    NASA Technical Reports Server (NTRS)

    Smith, S. C.; Cliff, S. E.; Kroo, I. M.

    1987-01-01

    A joined-wing flight demonstrator aircraft has been developed at the NASA Ames Research Center in collaboration with ACA Industries. The aircraft is designed to utilize the fuselage, engines, and undercarriage of the existing NASA AD-1 flight demonstrator aircraft. The design objectives, methods, constraints, and the resulting aircraft design, called the JW-1, are presented. A wind-tunnel model of the JW-1 was tested in the NASA Ames 12-foot wind tunnel. The test results indicate that the JW-1 has satisfactory flying qualities for a flight demonstrator aircraft. Good agreement of test results with design predictions confirmed the validity of the design methods used for application to joined-wing configurations.

  9. On using sample selection methods in estimating the price elasticity of firms' demand for insurance.

    PubMed

    Marquis, M Susan; Louis, Thomas A

    2002-01-01

    We evaluate a technique based on sample selection models that has been used by health economists to estimate the price elasticity of firms' demand for insurance. We demonstrate that this technique produces inflated estimates of the price elasticity and show that alternative methods lead to valid estimates.

  10. Development and validation of a method for mercury determination in seawater for the process control of a candidate certified reference material.

    PubMed

    Sánchez, Raquel; Snell, James; Held, Andrea; Emons, Hendrik

    2015-08-01

    A simple, robust and reliable method for mercury determination in seawater matrices based on the combination of cold vapour generation and inductively coupled plasma mass spectrometry (CV-ICP-MS) and its complete in-house validation are described. The method validation covers parameters such as linearity, limit of detection (LOD), limit of quantification (LOQ), trueness, repeatability, intermediate precision and robustness. A calibration curve covering the whole working range was achieved with coefficients of determination typically higher than 0.9992. The repeatability of the method (RSDrep) was 0.5 %, and the intermediate precision was 2.3 % at the target mass fraction of 20 ng/kg. Moreover, the method was robust with respect to the salinity of the seawater. The limit of quantification was 2.7 ng/kg, which corresponds to 13.5 % of the target mass fraction in the future certified reference material (20 ng/kg). An uncertainty budget for the measurement of mercury in seawater has been established. The relative expanded (k = 2) combined uncertainty is 6 %. The performance of the validated method was demonstrated by generating results for process control and a homogeneity study for the production of a candidate certified reference material.
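
    Detection and quantification limits of the kind quoted here are conventionally derived from blank replicates and the calibration slope; the sketch below uses the generic 3s/10s criteria, which may differ in detail from the exact protocol applied in this work:

```python
from statistics import stdev

def lod_loq(blank_signals, slope):
    """LOD = 3*SD(blank)/slope and LOQ = 10*SD(blank)/slope (common 3s/10s rule)."""
    s = stdev(blank_signals)
    return 3 * s / slope, 10 * s / slope

# Hypothetical blank readings (counts) and calibration sensitivity (counts per ng/kg)
lod, loq = lod_loq([10.1, 9.8, 10.3, 9.9, 10.0, 10.2], 12.5)
```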

  11. Technical Note: Synchrotron-based high-energy x-ray phase sensitive microtomography for biomedical research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Huiqiang; Wu, Xizeng, E-mail: xwu@uabmc.edu; Xiao, Tiqiao, E-mail: tqxiao@sinap.ac.cn

    Purpose: Propagation-based phase-contrast CT (PPCT) applies highly sensitive phase-contrast technology to x-ray microtomography. Performing phase retrieval on the acquired angular projections can enhance image contrast and enable quantitative imaging. In this work, the authors demonstrate the validity and advantages of a novel technique for high-resolution PPCT using the generalized phase-attenuation duality (PAD) method of phase retrieval. Methods: A high-resolution angular projection data set of a fish head specimen was acquired with a monochromatic 60-keV x-ray beam. In one approach, the projection data were directly used for tomographic reconstruction. In two other approaches, the projection data were preprocessed by phase retrieval based on either the linearized PAD method or the generalized PAD method. The reconstructed images from all three approaches were then compared in terms of tissue contrast-to-noise ratio and spatial resolution. Results: The authors' experimental results demonstrated the validity of the PPCT technique based on the generalized PAD method. In addition, the results show that the authors' technique is superior to the direct PPCT technique as well as the linearized PAD-based PPCT technique in terms of their relative capabilities for tissue discrimination and characterization. Conclusions: This novel PPCT technique demonstrates great potential for biomedical imaging, especially for applications that require high spatial resolution and limited radiation exposure.

  12. [Validation of an in-house method for the determination of zinc in serum: Meeting the requirements of ISO 17025].

    PubMed

    Llorente Ballesteros, M T; Navarro Serrano, I; López Colón, J L

    2015-01-01

    The aim of this report is to propose a scheme for validation of an analytical technique according to ISO 17025. The fundamental parameters tested were selectivity, calibration model, precision, accuracy, uncertainty of measurement, and analytical interference. A protocol has been developed and applied successfully to quantify zinc in serum by atomic absorption spectrometry. The method is shown to be selective, linear, accurate, and precise, making it suitable for use in routine diagnostics. Copyright © 2015 SECA. Published by Elsevier España. All rights reserved.
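
    Of the parameters listed, the calibration model is the most mechanical to verify; an illustrative least-squares calibration sketch (the standards and numbers are hypothetical, not the paper's data) returns slope, intercept, and a coefficient of determination for the linearity check:

```python
def calibration_fit(conc, signal):
    """Least-squares line signal = slope*conc + intercept, with R^2 for linearity."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(conc, signal))
    ss_tot = sum((y - my) ** 2 for y in signal)
    return slope, intercept, 1.0 - ss_res / ss_tot
```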

  13. Resolution of Forces and Strain Measurements from an Acoustic Ground Test

    NASA Technical Reports Server (NTRS)

    Smith, Andrew M.; LaVerde, Bruce T.; Hunt, Ronald; Waldon, James M.

    2013-01-01

    The conservatism in typical vibration tests was demonstrated: vibration testing at the component level produced conservative force reactions, by approximately a factor of 4 (approx. 12 dB), compared to the integrated acoustic test in 2 out of 3 axes. Reaction forces estimated at the base of equipment using a finite-element-based method were validated: an FEM-based estimate of interface forces may be adequate to guide development of vibration test criteria with less conservatism. Element forces estimated in secondary structure struts were validated: the finite element approach provided the best estimate of axial strut forces in the frequency range below 200 Hz, where a rigid lumped-mass assumption for the entire electronics box was valid. Models with enough fidelity to represent the diminishing apparent mass of equipment are better suited for estimating force reactions across the frequency range. Forward work: demonstrate the reduction in conservatism provided by the current force-limited approach and an FEM-guided approach, and validate the proposed CMS approach to estimate coupled response from uncoupled system characteristics for vibroacoustics.

  14. Application of qualitative biospeckle methods for the identification of scar region in a green orange

    NASA Astrophysics Data System (ADS)

    Retheesh, R.; Ansari, Md. Zaheer; Radhakrishnan, P.; Mujeeb, A.

    2018-03-01

    This study demonstrates the feasibility of a view-based method, the motion history image (MHI), to map biospeckle activity around the scar region in a green orange fruit. Comparison of MHI with the routine intensity-based methods validated the effectiveness of the proposed method. The results show that MHI can be implemented as an alternative online image-processing tool in biospeckle analysis.

  15. Basis Selection for Wavelet Regression

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)

    1998-01-01

    A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge of the smoothness (or shape of the basis functions) into the basis selection procedure. The method is demonstrated on sampled functions widely used in the wavelet regression literature, and its results are contrasted with those of other published methods.

  16. Cross-validation pitfalls when selecting and assessing regression and classification models.

    PubMed

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
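
    The repetition the authors advocate can be sketched in a few lines; this toy version (illustrative names, a trivial mean-predictor "model", not the paper's code) averages squared error over many random V-fold splits, so the split-to-split variance the abstract warns about shows up in the spread of per-repeat scores rather than hiding in a single lucky split:

```python
import random
from statistics import mean

def vfold_splits(n, v, rng):
    """One shuffled partition of range(n) into v folds."""
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::v] for i in range(v)]

def repeated_cv_mse(y, v=5, repeats=20, seed=0):
    """Repeated V-fold CV of the mean predictor; returns one MSE per repeat."""
    rng = random.Random(seed)
    per_repeat = []
    for _ in range(repeats):
        errs = []
        for fold in vfold_splits(len(y), v, rng):
            held_out = set(fold)
            train = [y[i] for i in range(len(y)) if i not in held_out]
            pred = mean(train)  # the "model": predict the training mean
            errs.extend((y[i] - pred) ** 2 for i in fold)
        per_repeat.append(mean(errs))
    return per_repeat

scores = repeated_cv_mse([0.1, 0.4, 0.2, 0.9, 0.5, 0.7, 0.3, 0.6, 0.8, 0.0])
```

In the full procedure a grid search over model parameters replaces the mean predictor inside each training fold, and an outer loop of the same shape yields the repeated nested cross-validation used for model assessment.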

  17. Diagnostic accuracy of eye movements in assessing pedophilia.

    PubMed

    Fromberger, Peter; Jordan, Kirsten; Steinkrauss, Henrike; von Herder, Jakob; Witzel, Joachim; Stolpmann, Georg; Kröner-Herwig, Birgit; Müller, Jürgen Leo

    2012-07-01

    Given that recurrent sexual interest in prepubescent children is one of the strongest single predictors for pedosexual offense recidivism, valid and reliable diagnosis of pedophilia is of particular importance. Nevertheless, current assessment methods still fail to fulfill psychometric quality criteria. The aim of the study was to evaluate the diagnostic accuracy of eye-movement parameters in regard to pedophilic sexual preferences. Eye movements were measured while 22 pedophiles (according to ICD-10 F65.4 diagnosis), 8 non-pedophilic forensic controls, and 52 healthy controls simultaneously viewed the picture of a child and the picture of an adult. Fixation latency was assessed as a parameter for automatic attentional processes and relative fixation time to account for controlled attentional processes. Receiver operating characteristic (ROC) analyses, which are based on calculated age-preference indices, were carried out to determine the classifier performance. Cross-validation using the leave-one-out method was used to test the validity of classifiers. Pedophiles showed significantly shorter fixation latencies and significantly longer relative fixation times for child stimuli than either of the control groups. Classifier performance analysis revealed an area under the curve (AUC) = 0.902 for fixation latency and an AUC = 0.828 for relative fixation time. The eye-tracking method based on fixation latency discriminated between pedophiles and non-pedophiles with a sensitivity of 86.4% and a specificity of 90.0%. Cross-validation demonstrated good validity of eye-movement parameters. Despite some methodological limitations, measuring eye movements seems to be a promising approach to assess deviant pedophilic interests. Eye movements, which represent automatic attentional processes, demonstrated high diagnostic accuracy. © 2012 International Society for Sexual Medicine.
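
    The sensitivity and specificity figures reported (86.4% and 90.0%) reduce to confusion-matrix counts at a chosen cutoff on the age-preference index; a minimal sketch with hypothetical group labels and classifier calls (not the study's data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical group membership (1 = clinical group) and classifier calls
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
calls = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(truth, calls)
```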

  18. Validity of Bioelectrical Impedance Analysis to Estimate Fat-Free Mass in the Army Cadets.

    PubMed

    Langer, Raquel D; Borges, Juliano H; Pascoa, Mauro A; Cirolini, Vagner X; Guerra-Júnior, Gil; Gonçalves, Ezequiel M

    2016-03-11

    Bioelectrical impedance analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate published predictive BIA equations for FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. A total of 396 male Brazilian Army cadets, aged 17-24 years, were included. The study tested eight published predictive BIA equations and a newly developed specific equation for FFM estimation against dual-energy X-ray absorptiometry (DXA) as the reference method. Student's t-test (for paired samples), linear regression analysis, and the Bland-Altman method were used to test the validity of the BIA equations. The published predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05) and large limits of agreement by Bland-Altman, and explained 68% to 88% of FFM variance. The specific BIA equation showed no significant differences in FFM compared to DXA values. In summary, published BIA predictive equations showed poor accuracy in this sample, whereas the specific BIA equation developed in this study demonstrated validity for this sample, although it should be used with caution in samples with a large range of FFM.
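
    The Bland-Altman comparison used in this study summarizes agreement between two methods as a bias and 95% limits of agreement; an illustrative sketch (the FFM pairs are hypothetical, not the study's data):

```python
from statistics import mean, stdev

def bland_altman_limits(a, b):
    """Bias (mean difference) and 95% limits of agreement (bias +/- 1.96*SD)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

bia_ffm = [61.2, 58.4, 65.0, 70.3, 55.8]  # hypothetical BIA estimates (kg)
dxa_ffm = [60.1, 59.0, 63.8, 69.9, 56.5]  # hypothetical DXA reference (kg)
bias, lower, upper = bland_altman_limits(bia_ffm, dxa_ffm)
```

Wide limits relative to mean FFM, as found for the published equations, indicate poor individual-level agreement even when group means are close.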

  19. Determination of methylmercury in marine biota samples with advanced mercury analyzer: method validation.

    PubMed

    Azemard, Sabine; Vassileva, Emilia

    2015-06-01

    In this paper, we present a simple, fast and cost-effective method for determination of methylmercury (MeHg) in marine samples. All important parameters influencing the sample preparation process were investigated and optimized. Full validation of the method was performed in accordance with ISO 17025 (ISO/IEC, 2005) and Eurachem guidelines. Blanks, selectivity, working range (0.09-3.0 ng), recovery (92-108%), intermediate precision (1.7-4.5%), traceability, limit of detection (0.009 ng), limit of quantification (0.045 ng) and expanded uncertainty (15%, k=2) were assessed. Estimation of the uncertainty contribution of each parameter and demonstration of the traceability of measurement results were provided as well. Furthermore, the selectivity of the method was studied by analyzing the same sample extracts by advanced mercury analyzer (AMA) and gas chromatography-atomic fluorescence spectrometry (GC-AFS). Additional validation of the proposed procedure was achieved by participation in the IAEA-461 worldwide inter-laboratory comparison exercise. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Application of advanced sampling and analysis methods to predict the structure of adsorbed protein on a material surface

    PubMed Central

    Abramyan, Tigran M.; Hyde-Volpe, David L.; Stuart, Steven J.; Latour, Robert A.

    2017-01-01

    The use of standard molecular dynamics simulation methods to predict the interactions of a protein with a material surface has the inherent limitations of lacking the ability to determine the most likely conformations and orientations of the adsorbed protein on the surface and to determine the level of convergence attained by the simulation. In addition, standard mixing rules are typically applied to combine the nonbonded force field parameters of the solution and solid phases of the system to represent interfacial behavior, without validation. As a means to circumvent these problems, the authors demonstrate the application of an efficient advanced sampling method (TIGER2A) for the simulation of the adsorption of hen egg-white lysozyme on a crystalline (110) high-density polyethylene surface plane. Simulations are conducted to generate a Boltzmann-weighted ensemble of sampled states using force field parameters that were validated to represent interfacial behavior for this system. The resulting ensembles of sampled states were then analyzed using an in-house-developed cluster analysis method to predict the most probable orientations and conformations of the protein on the surface based on the amount of sampling performed, from which free energy differences between the adsorbed states could be calculated. In addition, by conducting two independent sets of TIGER2A simulations combined with cluster analyses, the authors demonstrate a method to estimate the degree of convergence achieved for a given amount of sampling. The results from these simulations demonstrate that these methods enable the most probable orientations and conformations of an adsorbed protein to be predicted and that the use of our validated interfacial force field parameter set provides closer agreement to available experimental results compared to using the standard CHARMM force field parameterization to represent molecular behavior at the interface. PMID:28514864

  1. Assessment of a condition-specific quality-of-life measure for patients with developmentally absent teeth: validity and reliability testing.

    PubMed

    Akram, A J; Ireland, A J; Postlethwaite, K C; Sandy, J R; Jerreat, A S

    2013-11-01

    This article describes the process of validity and reliability testing of a condition-specific quality-of-life measure for patients with hypodontia presenting for orthodontic treatment. The development of the instrument is described in a previous article. Royal Devon and Exeter NHS Foundation Trust & Musgrove Park Hospital, Taunton. The child perception questionnaire was used as a standard against which to test criterion validity. The Bland and Altman method was used to check agreement between the two questionnaires. Construct validity was tested using principal component analysis on the four sections of the questionnaire. Test-retest reliability was tested using intraclass correlation coefficient and Bland and Altman method. Cronbach's alpha was used to test internal consistency reliability. Overall the questionnaire showed good reliability, criterion and construct validity. This together with previous evidence of good face and content validity suggests that the instrument may prove useful in clinical practice and further research. This study has demonstrated that the newly developed condition-specific quality-of-life questionnaire is both valid and reliable for use in young patients with hypodontia. © 2013 John Wiley & Sons A/S. Published by Blackwell Publishing Ltd.

  2. Psychiatric OSCE Performance of Students with and without a Previous Core Psychiatry Clerkship

    ERIC Educational Resources Information Center

    Goisman, Robert M.; Levin, Robert M.; Krupat, Edward; Pelletier, Stephen R.; Alpert, Jonathan E.

    2010-01-01

    Objective: The OSCE has been demonstrated to be a reliable and valid method by which to assess students' clinical skills. An OSCE station was used to determine whether or not students who had completed a core psychiatry clerkship demonstrated skills that were superior to those who had not taken the clerkship and which areas discriminated between…

  3. Ask Him or Test Him?

    ERIC Educational Resources Information Center

    Rose, Harriett A.; Elton, Charles F.

    1970-01-01

    Proposes and demonstrates methodology for investigation of relationship between expressed and inventoried interests. Additional investigations comparing scores on the Vocational Preference Inventory, the SVIB, and expressed choice might establish the comparative validities of these methods of assessing vocational interest. (Author)

  4. Evaluation results for intelligent transportation systems

    DOT National Transportation Integrated Search

    2000-11-09

    This presentation covers the methods of evaluation set out for EC-funded ITS research and demonstration projects, known as the CONVERGE validation quality process and the lessons learned from that approach. The new approach to appraisal, which is bei...

  5. Extension of HCDstruct for Transonic Aeroservoelastic Analysis of Unconventional Aircraft Concepts

    NASA Technical Reports Server (NTRS)

    Quinlan, Jesse R.; Gern, Frank H.

    2017-01-01

    A substantial effort has been made to implement an enhanced aerodynamic modeling capability in the Higher-fidelity Conceptual Design and structural optimization tool (HCDstruct). This additional capability is needed for a rapid, physics-based method of modeling advanced aircraft concepts at risk of structural failure due to dynamic aeroelastic instabilities. To adequately predict these instabilities, in particular for transonic applications, a generalized aerodynamic matching algorithm was implemented to correct the doublet-lattice model available in Nastran using solution data from a priori computational fluid dynamics analysis. This new capability is demonstrated for two tube-and-wing aircraft configurations, including a Boeing 737-200 for implementation validation and the NASA D8 as a first use case. Results validate the current implementation of the aerodynamic matching utility and demonstrate the importance of using such a method for aircraft configurations featuring fuselage-wing aerodynamic interaction.

  6. Validation of a questionnaire method for estimating extent of menstrual blood loss in young adult women.

    PubMed

    Heath, A L; Skeaff, C M; Gibson, R S

    1999-04-01

    The objective of this study was to validate two indirect methods for estimating the extent of menstrual blood loss against a reference method, to determine which method would be most appropriate for use in a population of young adult women. Thirty-two women aged 18 to 29 years (mean ± SD: 22.4 ± 2.8) were recruited by poster in Dunedin (New Zealand); data are presented for 29 women. A recall method and a record method for estimating extent of menstrual loss were validated against a weighed reference method. The Spearman rank correlation coefficient between blood loss assessed by Weighed Menstrual Loss and by Menstrual Record was rs = 0.47 (p = 0.012), and between Weighed Menstrual Loss and Menstrual Recall was rs = 0.61 (p = 0.001). The Record method correctly classified 66% of participants into the same tertile, grossly misclassifying 14%. The Recall method correctly classified 59% of participants, grossly misclassifying 7%. Reference-method menstrual loss calculated for surrogate categories demonstrated a significant difference between the second and third tertiles for the Record method, and between the first and third tertiles for the Recall method. The Menstrual Recall method can differentiate between low and high levels of menstrual blood loss in young adult women, is quick to complete and analyse, and has a low participant burden.

  7. A Numerical and Theoretical Study of Seismic Wave Diffraction in Complex Geologic Structure

    DTIC Science & Technology

    1989-04-14

    element methods for analyzing linear and nonlinear seismic effects in the surficial geologies relevant to several Air Force missions. The second...exact solution evaluated here indicates that edge-diffracted seismic wave fields calculated by discrete numerical methods probably exhibit significant...study is to demonstrate and validate some discrete numerical methods essential for analyzing linear and nonlinear seismic effects in the surficial

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigg, Reid; McPherson, Brian; Lee, Rober

    The Southwest Regional Partnership on Carbon Sequestration (SWP), one of seven regional partnerships sponsored by the U.S. Department of Energy (USDOE), carried out five field pilot tests in its Phase II Carbon Sequestration Demonstration effort to validate the most promising sequestration technologies and infrastructure concepts, including three geologic pilot tests and two terrestrial pilot programs. This field testing demonstrated the efficacy of proposed sequestration technologies to reduce or offset greenhouse gas emissions in the region. Risk mitigation; optimization of monitoring, verification, and accounting (MVA) protocols; and effective outreach and communication were additional critical goals of these field validation tests. The program included geologic pilot tests located in Utah, New Mexico, and Texas, and a region-wide terrestrial analysis. Each geologic sequestration test site was intended to include injection of a minimum of ~75,000 tons/year CO2, with minimum injection duration of one year. These pilots represent medium-scale validation tests in sinks that host capacity for possible larger-scale sequestration operations in the future. These validation tests also demonstrated a broad variety of carbon sink targets and multiple value-added benefits, including testing of enhanced oil recovery and sequestration, enhanced coalbed methane production, and a geologic sequestration test combined with a local terrestrial sequestration pilot. A regional terrestrial sequestration demonstration was also carried out, with a focus on improved terrestrial MVA methods and reporting approaches specific to the Southwest region.

  9. Does virtual reality simulation have a role in training trauma and orthopaedic surgeons?

    PubMed

    Bartlett, J D; Lawrence, J E; Stewart, M E; Nakano, N; Khanduja, V

    2018-05-01

    Aims The aim of this study was to assess the current evidence relating to the benefits of virtual reality (VR) simulation in orthopaedic surgical training, and to identify areas of future research. Materials and Methods A literature search using the MEDLINE, Embase, and Google Scholar databases was performed. The results' titles, abstracts, and references were examined for relevance. Results A total of 31 articles published between 2004 and 2016 and relating to the objective validity and efficacy of specific virtual reality orthopaedic surgical simulators were identified. We found 18 studies demonstrating the construct validity of 16 different orthopaedic virtual reality simulators by comparing expert and novice performance. Eight studies have demonstrated skill acquisition on a simulator by showing improvements in performance with repeated use. A further five studies have demonstrated measurable improvements in operating theatre performance following a period of virtual reality simulator training. Conclusion The demonstration of 'real-world' benefits from the use of VR simulation in knee and shoulder arthroscopy is promising. However, evidence supporting its utility in other forms of orthopaedic surgery is lacking. Further studies of validity and utility should be combined with robust analyses of the cost efficiency of validated simulators to justify the financial investment required for their use in orthopaedic training. Cite this article: Bone Joint J 2018;100-B:559-65.

  10. Unavoidable Pressure Ulcers: Development and Testing of the Indiana University Health Pressure Ulcer Prevention Inventory.

    PubMed

    Pittman, Joyce; Beeson, Terrie; Terry, Colin; Dillon, Jill; Hampton, Charity; Kerley, Denise; Mosier, Judith; Gumiela, Ellen; Tucker, Jessica

    2016-01-01

    Despite prevention strategies, hospital-acquired pressure ulcers (HAPUs) continue to occur in the acute care setting. The purpose of this study was to develop an operational definition of and an instrument for identifying avoidable/unavoidable HAPUs in the acute care setting. The Indiana University Health Pressure Ulcer Prevention Inventory (PUPI) was developed and psychometric testing was performed. A retrospective pilot study of 31 adult hospitalized patients with an HAPU was conducted using the PUPI. Overall content validity index of 0.99 and individual item content validity index scores (0.9-1.0) demonstrated excellent content validity. Acceptable PUPI criterion validity was demonstrated with no statistically significant differences between wound specialists' and other panel experts' scoring. Construct validity findings were acceptable with no statistically significant differences among avoidable or unavoidable HAPU patients and their Braden Scale total scores. Interrater reliability was acceptable with perfect agreement on the total PUPI score between raters (κ = 1.0; P = .025). Raters were in total agreement 93% (242/260) of the time on all 12 individual PUPI items. No risk factors were found to be significantly associated with unavoidable HAPUs. An operational definition of and an instrument for identifying avoidable/unavoidable HAPUs in the acute care setting were developed and tested. The instrument provides an objective and structured method for identifying avoidable/unavoidable HAPUs. The PUPI provides an additional method that could be used in root-cause analyses and when reporting adverse pressure ulcer events.
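
    The perfect inter-rater agreement reported (κ = 1.0) is Cohen's kappa, which discounts the raw agreement by the agreement expected from chance alone; a minimal two-rater sketch with hypothetical avoidable(0)/unavoidable(1) calls (not the study's data):

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    cats = set(rater1) | set(rater2)
    p_chance = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in cats)
    return (p_obs - p_chance) / (1 - p_chance)

# Two hypothetical raters scoring eight pressure ulcer cases
kappa = cohens_kappa([0, 0, 1, 1, 0, 1, 0, 1], [0, 1, 1, 1, 0, 1, 0, 0])
```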

  11. Large-scale collision cross-section profiling on a travelling wave ion mobility mass spectrometer

    PubMed Central

    Lietz, Christopher B.; Yu, Qing; Li, Lingjun

    2014-01-01

    Ion mobility (IM) is a gas-phase electrophoretic method that separates ions according to charge and ion-neutral collision cross-section (CCS). Herein, we attempt to apply a travelling wave (TW) IM polyalanine calibration method to shotgun proteomics and create a large peptide CCS database. Mass spectrometry methods that utilize IM, such as HDMSE, often use high transmission voltages for sensitive analysis. However, polyalanine calibration has only been demonstrated with low voltage transmission used to prevent gas-phase activation. If polyalanine ions change conformation under higher transmission voltages used for HDMSE, the calibration may no longer be valid. Thus, we aimed to characterize the accuracy of calibration and CCS measurement under high transmission voltages on a TW IM instrument using the polyalanine calibration method and found that the additional error was not significant. We also evaluated the potential error introduced by liquid chromatography (LC)-HDMSE analysis, and found it to be insignificant as well, validating the calibration method. Finally, we demonstrated the utility of building a large-population peptide CCS database by investigating the effects of terminal lysine position, via LysC or LysN digestion, on the formation of two structural sub-families formed by triply charged ions. PMID:24845359

  12. Validation of the M. D. Anderson Symptom Inventory multiple myeloma module

    PubMed Central

    2013-01-01

    Background The symptom burden associated with multiple myeloma (MM) is often severe. Presently, no instrument comprehensively assesses disease-related and treatment-related symptoms in patients with MM. We sought to validate a module of the M. D. Anderson Symptom Inventory (MDASI) developed specifically for patients with MM (MDASI-MM). Methods The MDASI-MM was developed with clinician input, cognitive debriefing, and literature review, and administered to 132 patients undergoing induction chemotherapy or stem cell transplantation. We demonstrated the MDASI-MM’s reliability (Cronbach α values); criterion validity (item and subscale correlations between the MDASI-MM and the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire (EORTC QLQ-C30) and the EORTC MM module (QLQ-MY20)), and construct validity (differences between groups by performance status). Ratings from transplant patients were examined to demonstrate the MDASI-MM’s sensitivity in detecting the acute worsening of symptoms post-transplantation. Results The MDASI-MM demonstrated excellent correlations with subscales of the 2 EORTC instruments, strong ability to distinguish clinically different patient groups, high sensitivity in detecting change in patients’ performance status, and high reliability. Cognitive debriefing confirmed that the MDASI-MM encompasses the breadth of symptoms relevant to patients with MM. Conclusion The MDASI-MM is a valid, reliable, comprehensive-yet-concise tool that is recommended as a uniform symptom assessment instrument for patients with MM. PMID:23384030

  13. Development and validation of a liquid chromatography-isotope dilution tandem mass spectrometry for determination of olanzapine in human plasma and its application to bioavailability study.

    PubMed

    Zhang, Meng-Qi; Jia, Jing-Ying; Lu, Chuan; Liu, Gang-Yi; Yu, Cheng-Yin; Gui, Yu-Zhou; Liu, Yun; Liu, Yan-Mei; Wang, Wei; Li, Shui-Jun; Yu, Chen

    2010-06-01

    A simple, reliable and sensitive liquid chromatography-isotope dilution mass spectrometry (LC-ID/MS) method was developed and validated for quantification of olanzapine in human plasma. Plasma samples (50 microL) were extracted with tert-butyl methyl ether and an isotope-labeled internal standard (olanzapine-D3) was used. The chromatographic separation was performed on XBridge Shield RP 18 (100 mm x 2.1 mm, 3.5 microm, Waters). An isocratic program was used at a flow rate of 0.4 mL x min(-1) with mobile phase consisting of acetonitrile and ammonium buffer (pH 8). The protonated ions of analytes were detected in positive ionization mode by multiple reaction monitoring (MRM). The plasma method, with a lower limit of quantification (LLOQ) of 0.1 ng x mL(-1), demonstrated good linearity over a range of 0.1 - 30 ng x mL(-1) of olanzapine. Specificity, linearity, accuracy, precision, recovery, matrix effect and stability were evaluated during method validation. The validated method was successfully applied to the analysis of human plasma samples in a bioavailability study.

  14. Chemometric and biological validation of a capillary electrophoresis metabolomic experiment of Schistosoma mansoni infection in mice.

    PubMed

    Garcia-Perez, Isabel; Angulo, Santiago; Utzinger, Jürg; Holmes, Elaine; Legido-Quigley, Cristina; Barbas, Coral

    2010-07-01

    Metabonomic and metabolomic studies are increasingly utilized for biomarker identification in different fields, including biology of infection. The confluence of improved analytical platforms and the availability of powerful multivariate analysis software has rendered the multiparameter profiles generated by these omics platforms a user-friendly alternative to the established analysis methods where the quality and practice of a procedure is well defined. However, unlike traditional assays, validation methods for these new multivariate profiling tools have yet to be established. We propose a validation for models obtained by CE fingerprinting of urine from mice infected with the blood fluke Schistosoma mansoni. We have analysed urine samples from two sets of mice infected in an inter-laboratory experiment where different infection methods and animal husbandry procedures were employed in order to establish the core biological response to a S. mansoni infection. CE data were analysed using principal component analysis. Validation of the scores consisted of permutation scrambling (100 repetitions) and a manual validation method, using a third of the samples (not included in the model) as a test or prediction set. The validation yielded 100% specificity and 100% sensitivity, demonstrating the robustness of these models with respect to deciphering metabolic perturbations in the mouse due to a S. mansoni infection. A total of 20 metabolites across the two experiments were identified that significantly discriminated between S. mansoni-infected and noninfected control samples. Only one of these metabolites, allantoin, was identified as manifesting different behaviour in the two experiments. This study shows the reproducibility of CE-based metabolic profiling methods for disease characterization and screening and highlights the importance of much needed validation strategies in the emerging field of metabolomics.
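The validation scheme described (a held-out prediction set plus label scrambling) can be sketched generically. This is not the authors' CE workflow; it uses synthetic data and a simple nearest-centroid classifier as a stand-in for the PCA-based model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "metabolic fingerprints" (not the study's CE data):
# two groups separated along a few variables.
X = rng.normal(size=(60, 10))
y = np.array([0] * 30 + [1] * 30)
X[y == 1, :3] += 2.0              # infected group shifted in 3 variables

def nearest_centroid_accuracy(Xtr, ytr, Xte, yte):
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1) <
            np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

# Hold out a third of the samples as a prediction set, as in the paper
test = rng.permutation(60)[:20]
train = np.setdiff1d(np.arange(60), test)
acc = nearest_centroid_accuracy(X[train], y[train], X[test], y[test])

# Permutation scrambling: refit with shuffled labels 100 times
null = [nearest_centroid_accuracy(X[train], rng.permutation(y[train]),
                                  X[test], y[test]) for _ in range(100)]
print(acc, acc > np.quantile(null, 0.95))
```

A real-data model passes this check when its held-out accuracy exceeds what the label-scrambled null distribution can produce.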

  15. Towards a full integration of optimization and validation phases: An analytical-quality-by-design approach.

    PubMed

    Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph

    2015-05-22

    When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results of all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space rather than a given condition for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Experimental validation of the intrinsic spatial efficiency method over a wide range of sizes for cylindrical sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Larroquette, Philippe; Camilla, S.

    The intrinsic spatial efficiency method is a new absolute method to determine the efficiency of a gamma spectroscopy system for any extended source. In the original work the method was experimentally demonstrated and validated for homogeneous cylindrical sources containing 137Cs, whose sizes varied over a small range (29.5 mm radius and 15.0 to 25.9 mm height). In this work we present an extension of the validation over a wide range of sizes. The dimensions of the cylindrical sources vary between 10 to 40 mm height and 8 to 30 mm radius. The cylindrical sources were prepared using the reference material IAEA-372, which had a specific activity of 11320 Bq/kg in July 2006. The best results were obtained for the sources with 29 mm radius, showing relative bias less than 5%, and for the sources with 10 mm height, showing relative bias less than 6%. Compared with the results of the original work presenting the method, the majority of these results show excellent agreement.

  17. Laser ultrasonics for measurements of high-temperature elastic properties and internal temperature distribution

    NASA Astrophysics Data System (ADS)

    Matsumoto, Takahiro; Nagata, Yasuaki; Nose, Tetsuro; Kawashima, Katsuhiro

    2001-06-01

    We present two demonstrations of a laser ultrasonic method. First, we present results for the Young's modulus of ceramics at temperatures above 1600 °C. Second, we introduce a method to determine the internal temperature distribution of a hot steel plate with errors of less than 3%. We compare the results obtained by the laser ultrasonic method with conventional contact techniques to show the validity of the method.

  18. Application of a Method of Estimating DIF for Polytomous Test Items.

    ERIC Educational Resources Information Center

    Camilli, Gregory; Congdon, Peter

    1999-01-01

    Demonstrates a method for studying differential item functioning (DIF) that can be used with dichotomous or polytomous items and that is valid for data that follow a partial credit Item Response Theory model. A simulation study shows that positively biased Type I error rates are in accord with results from previous studies. (SLD)

  19. Spectroscopic characterization and quantitative determination of atorvastatin calcium impurities by novel HPLC method

    NASA Astrophysics Data System (ADS)

    Gupta, Lokesh Kumar

    2012-11-01

    Seven process-related impurities were identified by LC-MS in the atorvastatin calcium drug substance. Their structures were confirmed by modern spectroscopic techniques such as 1H NMR and IR and by physicochemical studies using synthesized authentic reference compounds. The synthesized reference samples of the impurity compounds were used for the quantitative HPLC determination. The impurities were detected by a newly developed gradient, reverse-phase high performance liquid chromatographic (HPLC) method. The system suitability of the HPLC analysis established the validity of the separation. The analytical method was validated according to International Conference on Harmonisation (ICH) guidelines with respect to specificity, precision, accuracy, linearity, robustness and stability of analytical solutions to demonstrate the power of the newly developed HPLC method.

  20. Evaluation results for intelligent transport systems (ITS) : abstract

    DOT National Transportation Integrated Search

    2000-11-09

    This paper summarizes the methods of evaluation set out for EC-funded ITS research and demonstration projects, known as the CONVERGE validation quality process and the lessons learned from that approach. The new approach to appraisal, which is being ...

  1. Task-oriented evaluation of electronic medical records systems: development and validation of a questionnaire for physicians

    PubMed Central

    2004-01-01

    Background Evaluation is a challenging but necessary part of the development cycle of clinical information systems like the electronic medical records (EMR) system. It is believed that such evaluations should include multiple perspectives, be comparative and employ both qualitative and quantitative methods. Self-administered questionnaires are frequently used as a quantitative evaluation method in medical informatics, but very few validated questionnaires address clinical use of EMR systems. Methods We have developed a task-oriented questionnaire for evaluating EMR systems from the clinician's perspective. The key feature of the questionnaire is a list of 24 general clinical tasks. It is applicable to physicians of most specialties and covers essential parts of their information-oriented work. The task list appears in two separate sections, about EMR use and task performance using the EMR, respectively. By combining these sections, the evaluator may estimate the potential impact of the EMR system on health care delivery. The results may also be compared across time, site or vendor. This paper describes the development, performance and validation of the questionnaire. Its performance is shown in two demonstration studies (n = 219 and 80). Its content is validated in an interview study (n = 10), and its reliability is investigated in a test-retest study (n = 37) and a scaling study (n = 31). Results In the interviews, the physicians found the general clinical tasks in the questionnaire relevant and comprehensible. The tasks were interpreted concordant to their definitions. However, the physicians found questions about tasks not explicitly or only partially supported by the EMR systems difficult to answer. The two demonstration studies provided unambiguous results and low percentages of missing responses. In addition, criterion validity was demonstrated for a majority of task-oriented questions. Their test-retest reliability was generally high, and the non-standard scale was found symmetric and ordinal. Conclusion This questionnaire is relevant for clinical work and EMR systems, provides reliable and interpretable results, and may be used as part of any evaluation effort involving the clinician's perspective of an EMR system. PMID:15018620

  2. Myocardial segmentation based on coronary anatomy using coronary computed tomography angiography: Development and validation in a pig model.

    PubMed

    Chung, Mi Sun; Yang, Dong Hyun; Kim, Young-Hak; Kang, Soo-Jin; Jung, Joonho; Kim, Namkug; Heo, Seung-Ho; Baek, Seunghee; Seo, Joon Beom; Choi, Byoung Wook; Kang, Joon-Won; Lim, Tae-Hwan

    2017-10-01

    To validate a method for performing myocardial segmentation based on coronary anatomy using coronary CT angiography (CCTA). Coronary artery-based myocardial segmentation (CAMS) was developed for use with CCTA. To validate and compare this method with the conventional American Heart Association (AHA) classification, a single coronary occlusion model was prepared and validated using six pigs. The unstained occluded coronary territories of the specimens and corresponding arterial territories from CAMS and AHA segmentations were compared using slice-by-slice matching and 100 virtual myocardial columns. CAMS predicted the ischaemic area more precisely than the AHA method, with matched-column percentages of 95% versus 76% (p < 0.001); the percentage of matched columns is defined as the number of matched columns of the segmentation method divided by the number of unstained columns in the specimen. According to the subgroup analyses, CAMS demonstrated a higher percentage of matched columns than the AHA method in the left anterior descending artery (100% vs. 77%; p < 0.001) and mid- (99% vs. 83%; p = 0.046) and apical-level territories of the left ventricle (90% vs. 52%; p = 0.011). CAMS is a feasible method for identifying the corresponding myocardial territories of the coronary arteries using CCTA. • CAMS is a feasible method for identifying corresponding coronary territory using CTA • CAMS is more accurate in predicting coronary territory than the AHA method • The AHA method may underestimate the ischaemic territory of LAD stenosis.

  3. Comparing Thermal Process Validation Methods for Salmonella Inactivation on Almond Kernels.

    PubMed

    Jeong, Sanghyup; Marks, Bradley P; James, Michael K

    2017-01-01

    Ongoing regulatory changes are increasing the need for reliable process validation methods for pathogen reduction processes involving low-moisture products; however, the reliability of various validation methods has not been evaluated. Therefore, the objective was to quantify accuracy and repeatability of four validation methods (two biologically based and two based on time-temperature models) for thermal pasteurization of almonds. Almond kernels were inoculated with Salmonella Enteritidis phage type 30 or Enterococcus faecium (NRRL B-2354) at ~10^8 CFU/g, equilibrated to 0.24, 0.45, 0.58, or 0.78 water activity (aw), and then heated in a pilot-scale, moist-air impingement oven (dry bulb 121, 149, or 177°C; dew point <33.0, 69.4, 81.6, or 90.6°C; v_air = 2.7 m/s) to a target lethality of ~4 log. Almond surface temperatures were measured in two ways, and those temperatures were used to calculate Salmonella inactivation using a traditional (D, z) model and a modified model accounting for process humidity. Among the process validation methods, both methods based on time-temperature models had better repeatability, with replication errors approximately half those of the surrogate (E. faecium). Additionally, the modified model yielded the lowest root mean squared error in predicting Salmonella inactivation (1.1 to 1.5 log CFU/g); in contrast, E. faecium yielded a root mean squared error of 1.2 to 1.6 log CFU/g, and the traditional model yielded an unacceptably high error (3.4 to 4.4 log CFU/g). Importantly, the surrogate and modified model both yielded lethality predictions that were statistically equivalent (α = 0.05) to actual Salmonella lethality. The results demonstrate the importance of methodology, aw, and process humidity when validating thermal pasteurization processes for low-moisture foods, which should help processors select and interpret validation methods to ensure product safety.
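The "traditional (D, z) model" mentioned above integrates the instantaneous lethal rate over the measured time-temperature history. A sketch with assumed parameters (not the study's fitted values for Salmonella on almonds; the humidity-modified model is omitted):

```python
import numpy as np

# Illustrative (D, z) thermal inactivation sketch. The parameters below
# are assumed for demonstration, NOT the study's fitted values.
D_ref, T_ref, z = 10.0, 80.0, 15.0   # D = 10 min at 80 C, z = 15 C (assumed)

def log_reduction(t_min, temps_C):
    """Traditional (D, z) model: log10(N0/N) = integral dt / D(T),
    with D(T) = D_ref * 10**((T_ref - T) / z)."""
    rate = 1.0 / (D_ref * 10.0 ** ((T_ref - np.asarray(temps_C)) / z))
    # trapezoidal integration of the instantaneous lethal rate
    return float(np.sum((rate[:-1] + rate[1:]) / 2 * np.diff(t_min)))

t = np.linspace(0, 8, 81)               # 8 min process, 0.1 min steps
T = 25 + (95 - 25) * (1 - np.exp(-t))   # surface heating toward 95 C
print(round(log_reduction(t, T), 2))    # predicted log10 reduction
```

The humidity-modified model in the paper replaces the dry D(T) with a form that also depends on process humidity, which is why it tracks moist-air processes more closely.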

  4. Determination of Nitrogen, Phosphorus, and Potassium Release Rates of Slow- and Controlled-Release Fertilizers: Single-Laboratory Validation, First Action 2015.15.

    PubMed

    Thiex, Nancy

    2016-01-01

    A previously validated method for the determination of nitrogen release patterns of slow- and controlled-release fertilizers (SRFs and CRFs, respectively) was submitted to the Expert Review Panel (ERP) for Fertilizers for consideration of First Action Official Method(SM) status. The ERP evaluated the single-laboratory validation results and recommended the method for First Action Official Method status and provided recommendations for achieving Final Action. The 180 day soil incubation-column leaching technique was demonstrated to be a robust and reliable method for characterizing N release patterns from SRFs and CRFs. The method was reproducible, and the results were only slightly affected by variations in environmental factors such as microbial activity, soil moisture, temperature, and texture. The release of P and K were also studied, but at fewer replications than for N. Optimization experiments on the accelerated 74 h extraction method indicated that temperature was the only factor found to substantially influence nutrient-release rates from the materials studied, and an optimized extraction profile was established as follows: 2 h at 25°C, 2 h at 50°C, 20 h at 55°C, and 50 h at 60°C.

  5. The composite dynamic method as evidence for age-specific waterfowl mortality

    USGS Publications Warehouse

    Burnham, Kenneth P.; Anderson, David R.

    1979-01-01

    For the past 25 years estimation of mortality rates for waterfowl has been based almost entirely on the composite dynamic life table. We examined the specific assumptions for this method and derived a valid goodness of fit test. We performed this test on 45 data sets representing a cross section of banding samples for various waterfowl species, geographic areas, banding periods, and age/sex classes. We found that: (1) the composite dynamic method was rejected (P <0.001) in 37 of the 45 data sets (in fact, 29 were rejected at P <0.00001) and (2) recovery and harvest rates are year-specific (a critical violation of the necessary assumptions). We conclude that the restrictive assumptions required for the composite dynamic method to produce valid estimates of mortality rates are not met in waterfowl data. Also we demonstrate that even when the required assumptions are met, the method produces very biased estimates of age-specific mortality rates. We believe the composite dynamic method should not be used in the analysis of waterfowl banding data. Furthermore, the composite dynamic method does not provide valid evidence for age-specific mortality rates in waterfowl.
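A chi-square goodness-of-fit computation of the general kind used to test such models can be sketched as follows; the counts are synthetic, not the paper's banding data:

```python
# Synthetic counts (not the paper's banding data): observed band recoveries
# per year versus a fitted model's expectation, compared by chi-square.
observed = [48, 35, 22, 30, 15]
expected = [40, 36, 28, 26, 20]
stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
# df = 4; the 5% chi-square critical value for df = 4 is 9.488
print(round(stat, 2), stat > 9.488)   # 4.78 False
```

With real data, rejection at small P values (as in 37 of the 45 data sets above) indicates the model's assumptions do not hold.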

  6. Validity and reliability of Patient-Reported Outcomes Measurement Information System (PROMIS) Instruments in Osteoarthritis

    PubMed Central

    Broderick, Joan E.; Schneider, Stefan; Junghaenel, Doerte U.; Schwartz, Joseph E.; Stone, Arthur A.

    2013-01-01

    Objective Evaluation of known group validity, ecological validity, and test-retest reliability of four domain instruments from the Patient-Reported Outcomes Measurement Information System (PROMIS) in osteoarthritis (OA) patients. Methods Recruitment of an osteoarthritis sample and a comparison general population (GP) through an Internet survey panel. Pain intensity, pain interference, physical functioning, and fatigue were assessed for 4 consecutive weeks with PROMIS short forms on a daily basis and compared with same-domain Computer Adaptive Test (CAT) instruments that use a 7-day recall. Known group validity (comparison of OA and GP), ecological validity (comparison of aggregated daily measures with CATs), and test-retest reliability were evaluated. Results The recruited samples matched (age, sex, race, ethnicity) the demographic characteristics of the U.S. sample for arthritis and the 2009 Census for the GP. Compliance with repeated measurements was excellent: > 95%. Known group validity for CATs was demonstrated with large effect sizes (pain intensity: 1.42, pain interference: 1.25, and fatigue: .85). Ecological validity was also established through high correlations between aggregated daily measures and weekly CATs (≥ .86). Test-retest reliability (7-day) was very good (≥ .80). Conclusion PROMIS CAT instruments demonstrated known group and ecological validity in a comparison of osteoarthritis patients with a general population sample. Adequate test-retest reliability was also observed. These results provide encouraging initial data on the utility of these PROMIS instruments for clinical and research outcomes in osteoarthritis patients. PMID:23592494
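Known-group effect sizes like those reported (e.g. 1.42 for pain intensity) are typically Cohen's d: the group mean difference over the pooled standard deviation. A sketch with fabricated T-scores, not the study's data:

```python
import math

def cohens_d(a, b):
    """Known-group effect size: mean difference over pooled SD."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

# Fabricated T-scores (not the study's data): OA group vs. general population
oa = [62, 58, 65, 60, 59, 63]
gp = [50, 48, 52, 51, 47, 49]
print(round(cohens_d(oa, gp), 2))   # 5.1, a very large separation by construction
```

Conventional benchmarks treat d ≈ 0.8 as large, so the reported 0.85-1.42 range indicates strong separation between the OA and GP groups.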

  7. Validation of microbiological testing in cardiovascular tissue banks: results of a quality round trial.

    PubMed

    de By, Theo M M H; McDonald, Carl; Süßner, Susanne; Davies, Jill; Heng, Wee Ling; Jashari, Ramadan; Bogers, Ad J J C; Petit, Pieter

    2017-11-01

    Surgeons needing human cardiovascular tissue for implantation in their patients are confronted with cardiovascular tissue banks that use different methods to identify and decontaminate micro-organisms. To elucidate these differences, we compared the quality of processing methods in 20 tissue banks and 1 reference laboratory. We did this to validate the results for accepting or rejecting tissue. We included the decontamination methods used and the influence of antibiotic cocktails and residues with results and controls. The minor details of the processes were not included. To compare the outcomes of microbiological testing and decontamination methods of heart valve allografts in cardiovascular tissue banks, an international quality round was organized. Twenty cardiovascular tissue banks participated in this quality round. The quality round method was validated first and consisted of sending purposely contaminated human heart valve tissue samples with known micro-organisms to the participants. The participants identified the micro-organisms using their local decontamination methods. Seventeen of the 20 participants correctly identified the micro-organisms; if these samples were heart valves to be released for implantation, 3 of the 20 participants would have decided to accept their result for release. Decontamination was shown not to be effective in 13 tissue banks because of growth of the organisms after decontamination. Articles in the literature revealed that antibiotics are effective at 36°C and not, or less so, at 2-8°C. The decontamination procedure, if it is validated, will ensure that the tissue contains no known micro-organisms. This study demonstrates that the quality round method of sending contaminated tissues and assessing the results of the microbiological cultures is an effective way of validating the processes of tissue banks. Only when harmonization, based on validated methods, has been achieved, will surgeons be able to fully rely on the methods used and have confidence in the consistent sterility of the tissue grafts. Tissue banks should validate their methods so that all stakeholders can trust the outcomes. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  8. Formulation of the relativistic moment implicit particle-in-cell method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noguchi, Koichi; Tronci, Cesare; Zuccaro, Gianluca

    2007-04-15

    A new formulation is presented for the implicit moment method applied to the time-dependent relativistic Vlasov-Maxwell system. The new approach is based on a specific formulation of the implicit moment method that allows us to retain the same formalism that is valid in the classical case despite the formidable complication introduced by the nonlinear nature of the relativistic equations of motion. To demonstrate the validity of the new formulation, an implicit finite difference algorithm is developed to solve Maxwell's equations and the equations of motion. A number of benchmark problems are run: two-stream instability, ion acoustic wave damping, Weibel instability, and Poynting flux acceleration. The numerical results are all in agreement with analytical solutions.

  9. Computation of leaky guided waves dispersion spectrum using vibroacoustic analyses and the Matrix Pencil Method: a validation study for immersed rectangular waveguides.

    PubMed

    Mazzotti, M; Bartoli, I; Castellazzi, G; Marzani, A

    2014-09-01

    The paper aims at validating a recently proposed Semi Analytical Finite Element (SAFE) formulation coupled with a 2.5D Boundary Element Method (2.5D BEM) for the extraction of dispersion data in immersed waveguides of generic cross-section. To this end, three-dimensional vibroacoustic analyses are carried out on two waveguides of square and rectangular cross-section immersed in water using the commercial Finite Element software Abaqus/Explicit. Real wavenumber and attenuation dispersive data are extracted by means of a modified Matrix Pencil Method. It is demonstrated that the results obtained using the two techniques are in very good agreement. Copyright © 2014 Elsevier B.V. All rights reserved.
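The Matrix Pencil Method referred to above estimates complex exponential poles (and hence wavenumber and attenuation) from sampled data via the eigenvalues of a shifted Hankel pencil. A minimal noise-free sketch on a synthetic two-pole signal, not the paper's vibroacoustic responses:

```python
import numpy as np

# Minimal Matrix Pencil sketch (synthetic signal, not the paper's data):
# recover complex poles z_k from samples x[n] = sum_k a_k * z_k**n.
z_true = np.array([0.95 * np.exp(0.4j), 0.85 * np.exp(1.1j)])
n = np.arange(40)
x = 1.0 * z_true[0] ** n + 0.7 * z_true[1] ** n

L = 6                                    # pencil parameter > number of poles
Y = np.array([x[i:i + L + 1] for i in range(len(x) - L)])
Y0, Y1 = Y[:, :-1], Y[:, 1:]
# Nonzero eigenvalues of pinv(Y0) @ Y1 estimate the poles z_k
ev = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
poles = sorted(ev, key=abs, reverse=True)[:2]
print(np.round(sorted(poles, key=lambda p: p.imag), 3))
```

For guided waves, the phase of each recovered pole maps to a real wavenumber and its magnitude to attenuation, which is how dispersion data are extracted from the vibroacoustic time histories.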

  10. Verification and Validation of a Coordinate Transformation Method in Axisymmetric Transient Magnetics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashcraft, C. Chace; Niederhaus, John Henry; Robinson, Allen C.

    We present a verification and validation analysis of a coordinate-transformation-based numerical solution method for the two-dimensional axisymmetric magnetic diffusion equation, implemented in the finite-element simulation code ALEGRA. The transformation, suggested by Melissen and Simkin, yields an equation set perfectly suited for linear finite elements and for problems with large jumps in material conductivity near the axis. The verification analysis examines transient magnetic diffusion in a rod or wire in a very low conductivity background by first deriving an approximate analytic solution using perturbation theory. This approach for generating a reference solution is shown to be not fully satisfactory. A specialized approach for manufacturing an exact solution is then used to demonstrate second-order convergence under spatial refinement and temporal refinement. For this new implementation, a significant improvement relative to previously available formulations is observed. Benefits in accuracy for computed current density and Joule heating are also demonstrated. The validation analysis examines the circuit-driven explosion of a copper wire using resistive magnetohydrodynamics modeling, in comparison to experimental tests. The new implementation matches the accuracy of the existing formulation, with both formulations capturing the experimental burst time and action to within approximately 2%.
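Second-order convergence claims of this kind are usually checked with the observed order of accuracy computed from error norms on successively refined grids. A sketch with illustrative error values, not ALEGRA's numbers:

```python
import math

# Standard observed-order-of-convergence computation used in verification
# studies: errors e1, e2 against a manufactured solution on grids with
# spacings h1 > h2 give p = log(e1/e2) / log(h1/h2).
def observed_order(e1, e2, h1, h2):
    return math.log(e1 / e2) / math.log(h1 / h2)

# Illustrative error norms (not ALEGRA's numbers): halving h quarters the
# error, consistent with second-order convergence.
print(round(observed_order(4.0e-3, 1.0e-3, 0.02, 0.01), 2))  # 2.0
```

The same formula applies to temporal refinement with time-step sizes in place of grid spacings.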

  11. Development, Validation, and Application of a Novel Ligand-Binding Assay to Selectively Measure PEGylated Recombinant Human Coagulation Factor VIII (BAX 855).

    PubMed

    Weber, Alfred; Engelmaier, Andrea; Hainzelmayer, Sandra; Minibeck, Eva; Anderle, Heinz; Schwarz, Hans Peter; Turecek, Peter L

    2015-10-21

    BAX 855 is a PEGylated recombinant factor VIII preparation that showed prolonged circulatory half-life in nonclinical and clinical studies. This paper describes the development, validation, and application of a novel ligand-binding assay (LBA) to selectively measure BAX 855 in plasma. The LBA is based on PEG-specific capture of BAX 855, followed by immunological factor VIII (FVIII)-specific detection of the antibody-bound BAX 855. This assay principle enabled sensitive measurement of BAX 855 down to the low nanomolar range without interference from non-PEGylated FVIII as demonstrated by validation data for plasma from animals typically used for nonclinical characterization of FVIII. The selectivity of an in-house-developed anti-PEG and a commercially available preparation, shown by competition studies to primarily target the terminating methoxy group of PEG, also allowed assessment of the intactness of the attached PEG chains. Altogether, this new LBA adds to the group of methods to selectively, accurately, and precisely measure a PEGylated drug in complex biological matrices. The feasibility and convenience of using this method was demonstrated during extensive nonclinical characterization of BAX 855.

  12. Rotational control of computer generated holograms.

    PubMed

    Preece, Daryl; Rubinsztein-Dunlop, Halina

    2017-11-15

    We develop a basis for three-dimensional rotation of arbitrary light fields created by computer generated holograms. By adding an extra phase function into the kinoform, any light field or holographic image can be tilted in the focal plane with minimized distortion. We present two different approaches to rotate an arbitrary hologram: the Scheimpflug method and a novel coordinate transformation method. Experimental results are presented to demonstrate the validity of both proposed methods.

  13. Development of a point-kinetic verification scheme for nuclear reactor applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demazière, C., E-mail: demaz@chalmers.se; Dykin, V.; Jareteg, K.

    In this paper, a new method that can be used for checking the proper implementation of time- or frequency-dependent neutron transport models and for verifying their ability to recover some basic reactor physics properties is proposed. This method makes use of the application of a stationary perturbation to the system at a given frequency and extraction of the point-kinetic component of the system response. Even for strongly heterogeneous systems for which an analytical solution does not exist, the point-kinetic component follows, as a function of frequency, a simple analytical form. The comparison between the extracted point-kinetic component and its expected analytical form provides an opportunity to verify and validate neutron transport solvers. The proposed method is tested on two diffusion-based codes, one working in the time domain and the other working in the frequency domain. As long as the applied perturbation has a non-zero reactivity effect, it is demonstrated that the method can be successfully applied to verify and validate time- or frequency-dependent neutron transport solvers. Although the method is demonstrated in the present paper in a diffusion theory framework, higher order neutron transport methods could be verified based on the same principles.
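In standard point kinetics, the "simple analytical form" of the point-kinetic frequency response is commonly the zero-power reactor transfer function. A sketch evaluating that form with illustrative one-group constants (assumed values, not the paper's):

```python
import numpy as np

# Hedged sketch: the zero-power point-kinetics transfer function commonly
# used as the analytical reference,
#   G0(w) = 1 / (jw * (Lambda + sum_i beta_i / (jw + lambda_i))),
# with illustrative one-group constants (assumed, not the paper's values).
Lam = 2.0e-5            # prompt neutron generation time [s] (assumed)
beta = np.array([0.0065])
lam = np.array([0.08])  # delayed precursor decay constant [1/s] (assumed)

def G0(omega):
    jw = 1j * omega
    return 1.0 / (jw * (Lam + np.sum(beta / (jw + lam))))

for f in (0.01, 1.0, 100.0):   # Hz
    w = 2 * np.pi * f
    print(f, round(abs(G0(w)), 1))
```

Between the delayed-neutron and prompt break frequencies the magnitude plateaus near 1/beta, which is the kind of frequency-dependent signature the extracted point-kinetic component can be checked against.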

  14. Climate change vulnerability for species-Assessing the assessments.

    PubMed

    Wheatley, Christopher J; Beale, Colin M; Bradbury, Richard B; Pearce-Higgins, James W; Critchlow, Rob; Thomas, Chris D

    2017-09-01

    Climate change vulnerability assessments are commonly used to identify species at risk from global climate change, but the wide range of methodologies available makes it difficult for end users, such as conservation practitioners or policymakers, to decide which method to use as a basis for decision-making. In this study, we evaluate whether different assessments consistently assign species to the same risk categories and whether any of the existing methodologies perform well at identifying climate-threatened species. We compare the outputs of 12 climate change vulnerability assessment methodologies, using both real and simulated species, and validate the methods using historic data for British birds and butterflies (i.e. using historical data to assign risks and more recent data for validation). Our results show that the different vulnerability assessment methods are not consistent with one another; different risk categories are assigned for both the real and simulated sets of species. Validation of the different vulnerability assessments suggests that methods incorporating historic trend data into the assessment perform best at predicting distribution trends in subsequent time periods. This study demonstrates that climate change vulnerability assessments should not be used interchangeably due to the poor overall agreement between methods when considering the same species. The results of our validation provide more support for the use of trend-based rather than purely trait-based approaches, although further validation will be required as data become available. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.

  15. A High-Order Method Using Unstructured Grids for the Aeroacoustic Analysis of Realistic Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Lockard, David P.

    1999-01-01

    A method for the prediction of acoustic scatter from complex geometries is presented. The discontinuous Galerkin method provides a framework for the development of a high-order method using unstructured grids. The method's compact form contributes to its accuracy and efficiency, and makes the method well suited for distributed-memory parallel computing platforms. Mesh refinement studies are presented to validate the expected convergence properties of the method, and to establish the absolute levels of error one can expect at a given level of resolution. For a two-dimensional shear-layer instability wave and for three-dimensional wave propagation, the method is demonstrated to be insensitive to mesh smoothness. Simulations of scatter from a two-dimensional slat configuration and a three-dimensional blended-wing-body demonstrate the capability of the method to efficiently treat realistic geometries.

  16. Principles for valid histopathologic scoring in research

    PubMed Central

    Gibson-Corley, Katherine N.; Olivier, Alicia K.; Meyerholz, David K.

    2013-01-01

    Histopathologic scoring is a tool by which semi-quantitative data can be obtained from tissues. Initially, a thorough understanding of the experimental design, study objectives and methods are required to allow the pathologist to appropriately examine tissues and develop lesion scoring approaches. Many principles go into the development of a scoring system such as tissue examination, lesion identification, scoring definitions and consistency in interpretation. Masking (a.k.a. “blinding”) of the pathologist to experimental groups is often necessary to constrain bias and multiple mechanisms are available. Development of a tissue scoring system requires appreciation of the attributes and limitations of the data (e.g. nominal, ordinal, interval and ratio data) to be evaluated. Incidence, ordinal and rank methods of tissue scoring are demonstrated along with key principles for statistical analyses and reporting. Validation of a scoring system occurs through two principal measures: 1) validation of repeatability and 2) validation of tissue pathobiology. Understanding key principles of tissue scoring can help in the development and/or optimization of scoring systems so as to consistently yield meaningful and valid scoring data. PMID:23558974

  17. Psychometric Evaluation of the PSQI in U.S. College Students

    PubMed Central

    Dietch, Jessica R.; Taylor, Daniel J.; Sethi, Kevin; Kelly, Kimberly; Bramoweth, Adam D.; Roane, Brandy M.

    2016-01-01

    Study Objectives: Examine the psychometric properties of the PSQI in two U.S. college samples. Methods: Study I assessed convergent and divergent validity in 866 undergraduates who completed a sleep diary, PSQI, and other sleep and psychosocial measures. Study II assessed PSQI insomnia diagnostic accuracy in a separate sample of 147 healthy undergraduates with and without insomnia. Results: The PSQI global score had only moderate convergent validity with sleep diary sleep efficiency (prospective global measure of sleep continuity; r = 0.53), the Insomnia Severity Index (r = 0.63), and fatigue (r = 0.44). The PSQI global score demonstrated good divergent validity with measures of excessive daytime sleepiness (r = 0.18), circadian preference (r = −0.08), alcohol (r = 0.08) and marijuana (r = 0.05) abuse scales, and poor divergent validity with depression (r = 0.48), anxiety (r = 0.40), and perceived stress (r = 0.33). Examination of other analogous PSQI and sleep diary components showed low to moderate convergent validity: sleep latency (r = 0.70), wake after sleep onset (r = 0.37), sleep duration (r = 0.51), and sleep efficiency (r = −0.32). Diagnostic accuracy of the PSQI to detect insomnia was very high (area under the curve = 0.999). Sensitivity and specificity were maximized at a cutoff of 6. Conclusions: The PSQI demonstrated moderate convergent validity compared to measures of insomnia and fatigue and good divergent validity with measures of daytime sleepiness, circadian phase preference, and alcohol and marijuana use. The PSQI demonstrated considerable overlap with depression, anxiety, and perceived stress. Therefore, caution should be used with interpretation. Citation: Dietch JR, Taylor DJ, Sethi K, Kelly K, Bramoweth AD, Roane BM. Psychometric evaluation of the PSQI in U.S. college students. J Clin Sleep Med 2016;12(8):1121–1129. PMID:27166299
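As an illustration of the cutoff-selection step described above, the following sketch picks the score threshold that maximizes Youden's J (sensitivity + specificity − 1) on toy data. The scores and labels are invented, not the study's, and the study's own ROC procedure may differ.

```python
# Hypothetical PSQI-style global scores for subjects with and without insomnia.
# Illustrates choosing a diagnostic cutoff by maximizing Youden's J.

def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity when score >= cutoff is called positive."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(scores, labels):
    # Evaluate every observed score as a candidate threshold.
    candidates = sorted(set(scores))
    return max(candidates,
               key=lambda c: sum(sens_spec(scores, labels, c)) - 1)

# Illustrative data (not the study's): insomnia cases score high, controls low.
scores = [2, 3, 3, 4, 5, 5, 6, 7, 8, 9, 10, 11]
labels = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
print(best_cutoff(scores, labels))  # -> 6
```

For this toy separation the cutoff of 6 happens to match the study's reported optimum, but that is a property of the invented data, not a replication.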

  18. Translation, Cross-cultural Adaptation and Psychometric Validation of the Korean-Language Cardiac Rehabilitation Barriers Scale (CRBS-K)

    PubMed Central

    2017-01-01

    Objective To perform a translation and cross-cultural adaptation of the Cardiac Rehabilitation Barriers Scale (CRBS) for use in Korea, followed by psychometric validation. The CRBS was developed to assess patients' perception of the degree to which patient-, provider- and health system-level barriers affect their cardiac rehabilitation (CR) participation. Methods The CRBS consists of 21 items (barriers to adherence) rated on a 5-point Likert scale. The first phase was to translate and cross-culturally adapt the CRBS to the Korean language. After back-translation, both versions were reviewed by a committee. Face validity was assessed through semi-structured interviews in a sample of Korean patients (n=53) with a history of acute myocardial infarction who did not participate in CR. The second phase was to assess the construct and criterion validity of the Korean translation as well as its internal reliability, through administration of the translated version to 104 patients, principal component analysis with varimax rotation, and cross-referencing against CR use, respectively. Results The length, readability, and clarity of the questionnaire were rated well, demonstrating face validity. Analysis revealed a six-factor solution, demonstrating construct validity. Cronbach's alpha was greater than 0.65. The highest-rated barriers included not knowing about CR and not being contacted by a program. The mean CRBS score was significantly higher among non-attendees (2.71±0.26) than CR attendees (2.51±0.18) (p<0.01). Conclusion The Korean version of the CRBS has demonstrated face, content and criterion validity, suggesting it may be useful for assessing barriers to CR utilization in Korea. PMID:29201826

  19. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.
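The Kramers–Kronig step mentioned above recovers the phase of the complex structure factor along the rod from its measured modulus via a logarithmic Hilbert transform. Schematically, with ℓ the coordinate along the rod and 𝒫 the Cauchy principal value (a generic textbook form; the paper's normalized-rod implementation may differ in sign convention and integration limits):

```latex
\varphi(\ell) \;=\; -\frac{1}{\pi}\,
\mathcal{P}\!\int_{-\infty}^{\infty}
\frac{\ln\lvert F(\ell')\rvert}{\ell' - \ell}\, d\ell'
```

Because the phase follows deterministically from the modulus, no iterative phase retrieval (and hence no initial model) is needed.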

  20. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE PAGES

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony; ...

    2018-04-20

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.

  1. A Conflict Management Scale for Pharmacy

    PubMed Central

    Gregory, Paul A.; Martin, Craig

    2009-01-01

    Objectives To develop and establish the validity and reliability of a conflict management scale specific to pharmacy practice and education. Methods A multistage inventory-item development process was undertaken involving 93 pharmacists and using a previously described explanatory model for conflict in pharmacy practice. A 19-item inventory was developed, field tested, and validated. Results The conflict management scale (CMS) demonstrated an acceptable degree of reliability and validity for use in educational or practice settings to promote self-reflection and self-awareness regarding individuals' conflict management styles. Conclusions The CMS provides a unique, pharmacy-specific method for individuals to determine and reflect upon their own conflict management styles. As part of an educational program to facilitate self-reflection and heighten self-awareness, the CMS may be a useful tool to promote discussions related to an important part of pharmacy practice. PMID:19960081

  2. Validity of Teacher Ratings in Selecting Influential Aggressive Adolescents for a Targeted Preventive Intervention

    PubMed Central

    Henry, David B.; Miller-Johnson, Shari; Simon, Thomas R.; Schoeny, Michael E.

    2009-01-01

    This study describes a method for using teacher nominations and ratings to identify socially influential, aggressive middle school students for participation in a targeted violence prevention intervention. The teacher nomination method is compared with peer nominations of aggression and influence to obtain validity evidence. Participants were urban, predominantly African American and Latino sixth-grade students who were involved in a pilot study for a large multi-site violence prevention project. Convergent validity was suggested by the high correlation of teacher ratings of peer influence with peer nominations of social influence. The teacher ratings of influence demonstrated acceptable sensitivity and specificity when predicting peer nominations of influence among the most aggressive children. Results are discussed in terms of the application of teacher nominations and ratings in large trials and full implementation of targeted prevention programs. PMID:16378226

  3. Validation of the Simple Shoulder Test in a Portuguese-Brazilian Population. Is the Latent Variable Structure and Validation of the Simple Shoulder Test Stable across Cultures?

    PubMed Central

    Neto, Jose Osni Bruggemann; Gesser, Rafael Lehmkuhl; Steglich, Valdir; Bonilauri Ferreira, Ana Paula; Gandhi, Mihir; Vissoci, João Ricardo Nickenig; Pietrobon, Ricardo

    2013-01-01

    Background The validation of widely used scales facilitates the comparison across international patient samples. Objective The objective of this study was to translate, culturally adapt and validate the Simple Shoulder Test into Brazilian Portuguese. We also tested the stability of the factor structure across cultures. Methods The Simple Shoulder Test was translated from English into Brazilian Portuguese, translated back into English, and evaluated for accuracy by an expert committee. It was then administered to 100 patients with shoulder conditions. Psychometric properties were analyzed including factor analysis, internal reliability, test-retest reliability at seven days, and construct validity in relation to the Short Form 36 health survey (SF-36). Results Factor analysis demonstrated a three-factor solution. Cronbach's alpha was 0.82. Test-retest reliability, as measured by the intra-class correlation coefficient (ICC), was 0.84. Associations were observed in the hypothesized direction with all subscales of the SF-36 questionnaire. Conclusion The Simple Shoulder Test translation and cultural adaptation to Brazilian Portuguese demonstrated adequate factor structure, internal reliability, and validity, allowing its use in comparisons with international patient samples. PMID:23675436

  4. Validation of Yoon's Critical Thinking Disposition Instrument.

    PubMed

    Shin, Hyunsook; Park, Chang Gi; Kim, Hyojin

    2015-12-01

    The lack of reliable and valid evaluation tools targeting Korean nursing students' critical thinking (CT) abilities has been reported as one of the barriers to instructing and evaluating students in undergraduate programs. Yoon's Critical Thinking Disposition (YCTD) instrument was developed for Korean nursing students, but few studies have assessed its validity. This study aimed to validate the YCTD; specifically, the YCTD was assessed to identify its cross-sectional and longitudinal measurement invariance. This was a validation study in which a cross-sectional and longitudinal (prenursing and postnursing practicum) survey was used to validate the YCTD with 345 nursing students at three universities in Seoul, Korea. The participants' CT abilities were assessed using the YCTD before and after completing an established pediatric nursing practicum. The validity of the YCTD was estimated, and a group invariance test using multigroup confirmatory factor analysis was then performed to confirm measurement compatibility across groups. A test of the seven-factor model showed that the YCTD demonstrated good construct validity. Multigroup confirmatory factor analysis findings for measurement invariance suggested that this model structure demonstrated strong invariance between groups (i.e., configural, factor loading, and intercept combined) but weak invariance within a group (i.e., configural and factor loading combined). In general, traditional methods for assessing instrument validity have been less than thorough. In this study, multigroup confirmatory factor analysis using cross-sectional and longitudinal measurement data allowed validation of the YCTD. This study concluded that the YCTD can be used for evaluating Korean nursing students' CT abilities. Copyright © 2015. Published by Elsevier B.V.

  5. Analysis of Thiodiglycol: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS777

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Koester, C

    The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method for the analysis of thiodiglycol, the breakdown product of the sulfur mustard HD, in water by high performance liquid chromatography tandem mass spectrometry (HPLC-MS/MS), titled Method EPA MS777 (hereafter referred to as EPA CRL SOP MS777). This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to verify the analytical procedures described in MS777 for analysis of thiodiglycol in aqueous samples. The gathered data from this study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of Method EPA MS777 can be determined.

  6. Application of Petri net theory for modelling and validation of the sucrose breakdown pathway in the potato tuber.

    PubMed

    Koch, Ina; Junker, Björn H; Heiner, Monika

    2005-04-01

    Because of the complexity of metabolic networks and their regulation, formal modelling is a useful method to improve the understanding of these systems. An essential step in network modelling is to validate the network model. Petri net theory provides algorithms and methods, which can be applied directly to metabolic network modelling and analysis in order to validate the model. The metabolism between sucrose and starch in the potato tuber is of great research interest. Although this metabolism is one of the best studied in sink organs, it is not yet fully understood. We provide an approach for model validation of metabolic networks using Petri net theory, which we demonstrate for the sucrose breakdown pathway in the potato tuber. We start with hierarchical modelling of the metabolic network as a Petri net and continue with the analysis of qualitative properties of the network. The results characterize the net structure and give insights into the complex net behaviour.
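To make the Petri net formalism concrete, here is a minimal sketch of the marking-update (firing) rule m' = m − pre + post for a single hypothetical transition. The net below is a toy sucrose-cleavage step, not the paper's hierarchical model.

```python
# Toy Petri net (assumed, not the paper's model): one transition "invertase"
# consumes a sucrose token and produces one glucose and one fructose token.

pre  = {"invertase": {"sucrose": 1}}                  # input arcs
post = {"invertase": {"glucose": 1, "fructose": 1}}   # output arcs

def enabled(marking, t):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre[t].items())

def fire(marking, t):
    """Apply the firing rule m' = m - pre + post, returning a new marking."""
    assert enabled(marking, t), f"{t} not enabled"
    m = dict(marking)
    for p, n in pre[t].items():
        m[p] = m.get(p, 0) - n
    for p, n in post[t].items():
        m[p] = m.get(p, 0) + n
    return m

m0 = {"sucrose": 2, "glucose": 0, "fructose": 0}
m1 = fire(m0, "invertase")
print(m1)  # -> {'sucrose': 1, 'glucose': 1, 'fructose': 1}
```

Qualitative validation in Petri net theory (invariants, liveness, boundedness) reasons over this same structure rather than simulating it, but the firing rule is the common foundation.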

  7. Nontechnical skill training and the use of scenarios in modern surgical education.

    PubMed

    Brunckhorst, Oliver; Khan, Muhammad S; Dasgupta, Prokar; Ahmed, Kamran

    2017-07-01

    Nontechnical skills are being increasingly recognized as a core cause of surgical errors. Combined with the changing nature of surgical training, there has therefore been an increase in nontechnical skill research in the literature. This review therefore aims to define nontechnical skillsets, assess current training methods, explore assessment modalities and suggest future research aims. The literature demonstrates an increasing understanding of the components of nontechnical skills within surgery. This has led to a greater availability of validated training methods, including the use of didactic teaching, e-learning and simulation-based scenarios. In addition, there are now various extensively validated assessment tools for nontechnical skills, including NOTSS, the Oxford NOTECHS and OTAS. Finally, there is now more focus on the development of tools which target individual nontechnical skill components and an attempt to understand which of these play a greater role in specific procedures such as laparoscopic or robotic surgery. Current evidence demonstrates various training methods and tools for the training of nontechnical skills. Future research is likely to focus increasingly on individual nontechnical skill components and procedure-specific skills.

  8. An empirical assessment of validation practices for molecular classifiers

    PubMed Central

    Castaldi, Peter J.; Dahabreh, Issa J.

    2011-01-01

    Proposed molecular classifiers may be overfit to idiosyncrasies of noisy genomic and proteomic data. Cross-validation methods are often used to obtain estimates of classification accuracy, but both simulations and case studies suggest that, when inappropriate methods are used, bias may ensue. Bias can be bypassed and generalizability can be tested by external (independent) validation. We evaluated 35 studies that have reported on external validation of a molecular classifier. We extracted information on study design and methodological features, and compared the performance of molecular classifiers in internal cross-validation versus external validation for 28 studies where both had been performed. We demonstrate that the majority of studies pursued cross-validation practices that are likely to overestimate classifier performance. Most studies were markedly underpowered to detect a 20% decrease in sensitivity or specificity between internal cross-validation and external validation [median power was 36% (IQR, 21–61%) and 29% (IQR, 15–65%), respectively]. The median reported classification performance for sensitivity and specificity was 94% and 98%, respectively, in cross-validation and 88% and 81% for independent validation. The relative diagnostic odds ratio was 3.26 (95% CI 2.04–5.21) for cross-validation versus independent validation. Finally, we reviewed all studies (n = 758) which cited those in our study sample, and identified only one instance of additional subsequent independent validation of these classifiers. In conclusion, these results document that many cross-validation practices employed in the literature are potentially biased and genuine progress in this field will require adoption of routine external validation of molecular classifiers, preferably in much larger studies than in current practice. PMID:21300697
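The diagnostic odds ratio mentioned above combines sensitivity and specificity into a single figure. The sketch below applies the standard formula to the median values quoted in the abstract; note that the study's relative DOR of 3.26 is estimated from paired per-study data, which this simple median-based calculation does not reproduce.

```python
# Diagnostic odds ratio (DOR) from sensitivity and specificity:
# DOR = [sens / (1 - sens)] * [spec / (1 - spec)]
# Input values are the medians quoted in the abstract.

def diagnostic_odds_ratio(sens, spec):
    return (sens / (1 - sens)) * (spec / (1 - spec))

dor_cv  = diagnostic_odds_ratio(0.94, 0.98)  # internal cross-validation medians
dor_ext = diagnostic_odds_ratio(0.88, 0.81)  # external validation medians
print(round(dor_cv), round(dor_ext))  # -> 768 31
```

The steep drop from internal to external performance is the abstract's central point: cross-validated estimates can badly overstate how a classifier generalizes.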

  9. Validation of asthma recording in electronic health records: a systematic review

    PubMed Central

    Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J

    2017-01-01

    Objective To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion Attaining high PPVs (>80%) is possible using each of the discussed validation methods. Identifying asthma cases in electronic health records is possible with high sensitivity, specificity or PPV, by combining multiple data sources, or by focusing on specific test measures. Studies testing a range of case definitions show wide variation in the validity of each definition, suggesting this may be important for obtaining asthma definitions with optimal validity. PMID:29238227
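A point worth keeping in mind when comparing PPVs across such studies is that PPV depends on disease prevalence in the database as well as on the case definition's sensitivity and specificity (Bayes' rule). The numbers below are illustrative, not taken from the reviewed studies.

```python
# Predictive values from sensitivity, specificity and prevalence (Bayes):
# PPV = sens*p / (sens*p + (1-spec)*(1-p))
# NPV = spec*(1-p) / (spec*(1-p) + (1-sens)*p)

def ppv(sens, spec, prev):
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

# Same test accuracy, two prevalences: PPV drops as the condition gets rarer.
print(round(ppv(0.90, 0.95, 0.20), 2))  # -> 0.82
print(round(ppv(0.90, 0.95, 0.05), 2))  # -> 0.49
print(round(npv(0.90, 0.95, 0.20), 2))  # -> 0.97
```

This is one reason case-definition PPVs measured in one database cannot be transplanted directly to another with a different underlying asthma prevalence.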

  10. Towards optical spectroscopic anatomical mapping (OSAM) for lesion validation in cardiac tissue (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Singh-Moon, Rajinder P.; Zaryab, Mohammad; Hendon, Christine P.

    2017-02-01

    Electroanatomical mapping (EAM) is an invaluable tool for guiding cardiac radiofrequency ablation (RFA) therapy. The principal roles of EAM are the identification of candidate ablation sites by detecting regions of abnormal electrogram activity and lesion validation subsequent to RF energy delivery. However, incomplete lesions may present interim electrical inactivity similar to effective treatment in the acute setting, despite efforts to reveal them with pacing or drugs such as adenosine. Studies report that the misidentification and recovery of such lesions is a leading cause of arrhythmia recurrence and repeat procedures. In previous work, we demonstrated spectroscopic characterization of cardiac tissues using a fiber optic-integrated RF ablation catheter. In this work, we introduce OSAM (optical spectroscopic anatomical mapping), the application of this spectroscopic technique to obtain two-dimensional biodistribution maps. We demonstrate its diagnostic potential as an auxiliary method for lesion validation in treated swine preparations. Endocardial lesion sets were created on fresh swine cardiac samples using a commercial RFA system. An optically integrated catheter console fabricated in-house was used for measurement of tissue optical spectra between 600 and 1000 nm. Three-dimensional spatio-spectral datasets were generated by raster scanning of the optical catheter across the treated sample surface in the presence of whole blood. Tissue optical parameters were recovered at each spatial position using an inverse Monte Carlo method. OSAM biodistribution maps showed stark correspondence with gross examination of tetrazolium chloride-stained tissue specimens. Specifically, we demonstrate the ability of OSAM to readily distinguish between shallow and deeper lesions, a limitation faced by current EAM techniques. These results showcase OSAM's potential for lesion validation strategies in the treatment of cardiac arrhythmias.

  11. Theoretical relationship between vibration transmissibility and driving-point response functions of the human body.

    PubMed

    Dong, Ren G; Welcome, Daniel E; McDowell, Thomas W; Wu, John Z

    2013-11-25

    The relationship between the vibration transmissibility and driving-point response functions (DPRFs) of the human body is important for understanding vibration exposures of the system and for developing valid models. This study identified their theoretical relationship and demonstrated that the sum of the DPRFs can be expressed as a linear combination of the transmissibility functions of the individual mass elements distributed throughout the system. The relationship is verified using several human vibration models. This study also clarified the requirements for reliably quantifying transmissibility values used as references for calibrating the system models. As an example application, this study used the developed theory to perform a preliminary analysis of the method for calibrating models using both vibration transmissibility and DPRFs. The results of the analysis show that the combined method can theoretically result in a unique and valid solution of the model parameters, at least for linear systems. However, the validation of the method itself does not guarantee the validation of the calibrated model, because the validation of the calibration also depends on the model structure and the reliability and appropriate representation of the reference functions. The basic theory developed in this study is also applicable to the vibration analyses of other structures.
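The core identity described above can be written schematically (in an assumed notation, not necessarily the paper's). For a linear lumped-parameter model driven at a single contact point, the total inertial force equals the sum of each element's mass times its acceleration, so the driving-point apparent mass is a mass-weighted sum of element transmissibilities:

```latex
M(\omega) \;=\; \frac{F(\omega)}{a_0(\omega)}
\;=\; \sum_{i} m_i \, \frac{a_i(\omega)}{a_0(\omega)}
\;=\; \sum_{i} m_i \, T_i(\omega)
```

where $m_i$ is the mass of the $i$-th element, $a_0$ the driving-point acceleration, and $T_i(\omega)$ the acceleration transmissibility of element $i$ relative to the driving point.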

  12. High Precision Optical Observations of Space Debris in the Geo Ring from Venezuela

    NASA Astrophysics Data System (ADS)

    Lacruz, E.; Abad, C.; Downes, J. J.; Casanova, D.; Tresaco, E.

    2018-01-01

    We present preliminary results demonstrating that our method for the detection and location of Space Debris (SD) in the geostationary Earth orbit (GEO) ring, based on observations at the OAN of Venezuela, achieves high astrometric precision. A detailed explanation of the method, its validation and first results is available in Lacruz et al. (2017).

  13. Curriculum-Based Measurement of Reading: An Evaluation of Frequentist and Bayesian Methods to Model Progress Monitoring Data

    ERIC Educational Resources Information Center

    Christ, Theodore J.; Desjardins, Christopher David

    2018-01-01

    Curriculum-Based Measurement of Oral Reading (CBM-R) is often used to monitor student progress and guide educational decisions. Ordinary least squares regression (OLSR) is the most widely used method to estimate the slope, or rate of improvement (ROI), even though published research demonstrates OLSR's lack of validity and reliability, and…
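The OLSR slope (rate of improvement) referred to above is simply the least-squares slope of scores regressed on time. A minimal sketch with invented weekly words-correct-per-minute data:

```python
# Ordinary least squares slope ("rate of improvement", ROI) for CBM-R data.
# The weekly scores below are invented for illustration.

def ols_slope(xs, ys):
    """Least-squares slope: Sxy / Sxx about the means."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

weeks = [0, 1, 2, 3, 4, 5]
wcpm  = [41, 44, 44, 47, 50, 51]  # words correct per minute, one probe/week
print(round(ols_slope(weeks, wcpm), 2))  # -> 2.03 words/week
```

The critiques cited in the abstract concern the reliability of exactly this slope when it is estimated from few, noisy probes, which motivates the Bayesian alternatives the paper evaluates.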

  14. The Arthroscopic Surgical Skill Evaluation Tool (ASSET).

    PubMed

    Koehler, Ryan J; Amsdell, Simon; Arendt, Elizabeth A; Bisson, Leslie J; Braman, Jonathan P; Butler, Aaron; Cosgarea, Andrew J; Harner, Christopher D; Garrett, William E; Olson, Tyson; Warme, Winston J; Nicandri, Gregg T

    2013-06-01

    Surgeries employing arthroscopic techniques are among the most commonly performed in orthopaedic clinical practice; however, valid and reliable methods of assessing the arthroscopic skill of orthopaedic surgeons are lacking. The Arthroscopic Surgery Skill Evaluation Tool (ASSET) will demonstrate content validity, concurrent criterion-oriented validity, and reliability when used to assess the technical ability of surgeons performing diagnostic knee arthroscopic surgery on cadaveric specimens. Cross-sectional study; Level of evidence, 3. Content validity was determined by a group of 7 experts using the Delphi method. Intra-articular performance of a right and left diagnostic knee arthroscopic procedure was recorded for 28 residents and 2 sports medicine fellowship-trained attending surgeons. Surgeon performance was assessed by 2 blinded raters using the ASSET. Concurrent criterion-oriented validity, interrater reliability, and test-retest reliability were evaluated. Content validity: The content development group identified 8 arthroscopic skill domains to evaluate using the ASSET. Concurrent criterion-oriented validity: Significant differences in the total ASSET score (P < .05) between novice, intermediate, and advanced experience groups were identified. Interrater reliability: The ASSET scores assigned by each rater were strongly correlated (r = 0.91, P < .01), and the intraclass correlation coefficient between raters for the total ASSET score was 0.90. Test-retest reliability: There was a significant correlation between ASSET scores for both procedures attempted by each surgeon (r = 0.79, P < .01). The ASSET appears to be a useful, valid, and reliable method for assessing surgeon performance of diagnostic knee arthroscopic surgery in cadaveric specimens. Studies are ongoing to determine its generalizability to other procedures as well as to the live operating room and other simulated environments.

  15. Evaluation of the Thermo Scientific SureTect Salmonella species assay. AOAC Performance Tested Method 051303.

    PubMed

    Cloke, Jonathan; Clark, Dorn; Radcliff, Roy; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko

    2014-01-01

    The Thermo Scientific SureTect Salmonella species Assay is a new real-time PCR assay for the detection of Salmonellae in food and environmental samples. This validation study was conducted using the AOAC Research Institute (RI) Performance Tested Methods program to validate the SureTect Salmonella species Assay in comparison to the reference method detailed in International Organization for Standardization 6579:2002 in a variety of food matrixes, namely, raw ground beef, raw chicken breast, raw ground pork, fresh bagged lettuce, pork frankfurters, nonfat dried milk powder, cooked peeled shrimp, pasteurized liquid whole egg, ready-to-eat meal containing beef, and stainless steel surface samples. With the exception of liquid whole egg and fresh bagged lettuce, which were tested in-house, all matrixes were tested by Marshfield Food Safety, Marshfield, WI, on behalf of Thermo Fisher Scientific. In addition, three matrixes (pork frankfurters, lettuce, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled laboratory study by the University of Guelph, Canada. No significant difference by probability of detection or McNemar's chi-squared statistical analysis was found between the candidate and reference methods for any of the food matrixes or environmental surface samples tested during the validation study. Inclusivity and exclusivity testing was conducted with 117 and 36 isolates, respectively, which demonstrated that the SureTect Salmonella species Assay was able to detect all the major groups of Salmonella enterica subspecies enterica (e.g., Typhimurium), the less common subspecies of S. enterica (e.g., arizonae), and the rarely encountered S. bongori. None of the exclusivity isolates analyzed were detected by the SureTect Salmonella species Assay.
Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation (enrichment time and temperature, and lysis temperature), which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay shelf life.
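
    The paired method comparison described above can be sketched in Python. This is a minimal illustration of the continuity-corrected McNemar chi-squared statistic, not the study's calculation; the discordant-pair counts below are hypothetical.

```python
import math

def mcnemar_chi2(b, c, correction=True):
    """Continuity-corrected McNemar chi-squared for a paired method comparison.

    b = samples positive by the candidate method only;
    c = samples positive by the reference method only.
    """
    stat = max(0.0, abs(b - c) - (1 if correction else 0)) ** 2 / (b + c)
    # For 1 degree of freedom, the chi-squared survival function is erfc(sqrt(x/2))
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value

# Hypothetical discordant counts for one food matrix: 3 vs. 5 disagreements
stat, p = mcnemar_chi2(3, 5)
print(stat, p)
```

A p-value above 0.05 is read, as in the study, as no significant difference between candidate and reference methods.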

  16. Development and validation of a streamlined method designed to detect residues of 62 veterinary drugs in bovine kidney using ultra-high performance liquid chromatography--tandem mass spectrometry.

    PubMed

    Lehotay, Steven J; Lightfield, Alan R; Geis-Asteggiante, Lucía; Schneider, Marilyn J; Dutko, Terry; Ng, Chilton; Bluhm, Louis; Mastovska, Katerina

    2012-08-01

    In the USA, the US Department of Agriculture's Food Safety and Inspection Service (FSIS) conducts the National Residue Program designed to monitor veterinary drug and other chemical residues in beef and other slaughtered food animals. Currently, FSIS uses a 7-plate bioassay in the laboratory to screen for antimicrobial drugs in bovine kidneys from those animals that tested positive by inspectors in the slaughter establishments. The microbial inhibition bioassay has several limitations in terms of monitoring scope, sensitivity, selectivity, and analysis time. Ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) has many advantages over the bioassay for this application, and this study was designed to develop, evaluate, and validate a fast UHPLC-MS/MS method for antibiotics and other high-priority veterinary drugs in bovine kidney. Five existing multi-class, multi-residue methods from the literature were tested and compared, and each performed similarly. Experiments with incurred samples demonstrated that a 5-min shake of 2 g homogenized kidney with 10 ml of 4/1 (v/v) acetonitrile/water followed by simultaneous clean-up of the initial extract with 0.5 g C18 and 10 ml hexane gave a fast, simple, and effective sample preparation method for the <10 min UHPLC-MS/MS analysis. An extensive 5-day validation process demonstrated that the final method could be used to acceptably screen for 54 of the 62 drugs tested, and 50 of those met qualitative MS identification criteria. Quantification was not needed in the application, but the method gave ≥ 70% recoveries and ≤ 25% reproducibilities for 30 of the drugs. Published 2012. This article is a U.S. Government work and is in the public domain of the USA.

  17. The Validity and Reliability Test of the Indonesian Version of Gastroesophageal Reflux Disease Quality of Life (GERD-QOL) Questionnaire.

    PubMed

    Siahaan, Laura A; Syam, Ari F; Simadibrata, Marcellus; Setiati, Siti

    2017-01-01

    To obtain a valid and reliable GERD-QOL questionnaire for Indonesian application. At the initial stage, the GERD-QOL questionnaire was first translated into Indonesian, and the translated questionnaire was subsequently translated back into the original language (back-to-back translation). The results were evaluated by the research team, and an Indonesian version of the GERD-QOL questionnaire was thereby developed. Ninety-one patients who had been clinically diagnosed with GERD based on the Montreal criteria were interviewed using the Indonesian version of the GERD-QOL questionnaire and the SF-36 questionnaire. Validity was evaluated using construct validity and external validity methods, and reliability was tested using internal consistency and test-retest methods. The Indonesian version of the GERD-QOL questionnaire had good internal consistency reliability, with a Cronbach's alpha of 0.687-0.842, and good test-retest reliability, with an intra-class correlation coefficient of 0.756-0.936 (p<0.05). The questionnaire was also demonstrated to have good validity, with a high correlation to each question of the SF-36 (p<0.05). The Indonesian version of the GERD-QOL questionnaire has been proven valid and reliable for evaluating the quality of life of GERD patients.
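
    Internal consistency figures of the kind reported above are typically summarized with Cronbach's alpha. A minimal sketch, using made-up item scores rather than the study's data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score lists (one list per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical responses: 3 items answered by 4 respondents
alpha = cronbach_alpha([[2, 3, 4, 5], [2, 3, 4, 4], [3, 3, 4, 5]])
```

Values in the 0.7-0.9 range, as in the abstract, are conventionally read as acceptable-to-good internal consistency.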

  18. The Predictive Validity of the Short-Term Assessment of Risk and Treatability (START) for Multiple Adverse Outcomes in a Secure Psychiatric Inpatient Setting.

    PubMed

    O'Shea, Laura E; Picchioni, Marco M; Dickens, Geoffrey L

    2016-04-01

    The Short-Term Assessment of Risk and Treatability (START) aims to assist mental health practitioners to estimate an individual's short-term risk for a range of adverse outcomes via structured consideration of their risk ("Vulnerabilities") and protective factors ("Strengths") in 20 areas. It has demonstrated predictive validity for aggression, but this is less established for other outcomes. We collated START assessments for N = 200 adults in a secure mental health hospital and ascertained 3-month risk event incidence using the START Outcomes Scale. The specific risk estimates, which are the tool developers' suggested method of overall assessment, predicted aggression, self-harm/suicidality, and victimization, and had incremental validity over the Strength and Vulnerability scales for these outcomes. The Strength scale had incremental validity over the Vulnerability scale for aggressive outcomes; therefore, consideration of protective factors had demonstrable value in their prediction. Further evidence is required to support use of the START for the full range of outcomes it aims to predict. © The Author(s) 2015.

  19. Spontaneous Swallow Frequency Compared with Clinical Screening in the Identification of Dysphagia in Acute Stroke

    PubMed Central

    Crary, Michael A.; Carnaby, Giselle D.; Sia, Isaac

    2017-01-01

    Background: The aim of this study was to compare spontaneous swallow frequency analysis (SFA) with clinical screening protocols for identification of dysphagia in acute stroke. Methods: In all, 62 patients with acute stroke were evaluated for spontaneous swallow frequency rates using a validated acoustic analysis technique. Independent of SFA, these same patients received a routine nurse-administered clinical dysphagia screening as part of standard stroke care. Both screening tools were compared against a validated clinical assessment of dysphagia for acute stroke. In addition, psychometric properties of SFA were compared against published, validated clinical screening protocols. Results: Spontaneous SFA differentiated patients with versus without dysphagia after acute stroke. Using a previously identified cut point based on swallows per minute, spontaneous SFA demonstrated superior ability to identify dysphagia cases compared with a nurse-administered clinical screening tool. In addition, spontaneous SFA demonstrated equal or superior psychometric properties to 4 validated, published clinical dysphagia screening tools. Conclusions: Spontaneous SFA has high potential to identify dysphagia in acute stroke with psychometric properties equal or superior to clinical screening protocols. PMID:25088166

  20. Simultaneous Determination of Potassium Sorbate, Sodium Benzoate, Quinoline Yellow and Sunset Yellow in Lemonades and Lemon Sauces by HPLC Using Experimental Design.

    PubMed

    Dinç Zor, Şule; Aşçı, Bürge; Aksu Dönmez, Özlem; Yıldırım Küçükkaraca, Dilek

    2016-07-01

    In this study, the development and validation of an HPLC method for the simultaneous determination of potassium sorbate, sodium benzoate, quinoline yellow and sunset yellow is described. A Box-Behnken design using three variables at three levels was employed to determine the optimum chromatographic separation conditions: pH of the mobile phase, 6.0-7.0; flow rate, 0.8-1.2 mL min^-1; and the proportion of the mobile phase composed of a 0.025 M sodium acetate/acetic acid buffer, 80-90%. Resolution was chosen as the response. The optimized method was validated for linearity, the limits of detection and quantification, accuracy, precision and stability. All the validation parameters were within the acceptance range. The applicability of the developed method to the determination of these food additives in commercial lemonade and lemon sauce samples was successfully demonstrated. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Validation of catchment models for predicting land-use and climate change impacts. 2. Case study for a Mediterranean catchment

    NASA Astrophysics Data System (ADS)

    Parkin, G.; O'Donnell, G.; Ewen, J.; Bathurst, J. C.; O'Connell, P. E.; Lavabre, J.

    1996-02-01

    Validation methods commonly used to test catchment models are not capable of demonstrating a model's fitness for making predictions for catchments where the catchment response is not known (including hypothetical catchments, and future conditions of existing catchments which are subject to land-use or climate change). This paper describes the first use of a new method of validation (Ewen and Parkin, 1996. J. Hydrol., 175: 583-594) designed to address these types of application; the method involves making 'blind' predictions of selected hydrological responses which are considered important for a particular application. SHETRAN (a physically based, distributed catchment modelling system) is tested on a small Mediterranean catchment. The test involves quantification of the uncertainty in four predicted features of the catchment response (continuous hydrograph, peak discharge rates, monthly runoff, and total runoff), and comparison of observations with the predicted ranges for these features. The results of this test are considered encouraging.

  2. Nanoliter microfluidic hybrid method for simultaneous screening and optimization validated with crystallization of membrane proteins

    PubMed Central

    Li, Liang; Mustafi, Debarshi; Fu, Qiang; Tereshko, Valentina; Chen, Delai L.; Tice, Joshua D.; Ismagilov, Rustem F.

    2006-01-01

    High-throughput screening and optimization experiments are critical to a number of fields, including chemistry and structural and molecular biology. The separation of these two steps may introduce false negatives and a time delay between initial screening and subsequent optimization. Although a hybrid method combining both steps may address these problems, miniaturization is required to minimize sample consumption. This article reports a “hybrid” droplet-based microfluidic approach that combines the steps of screening and optimization into one simple experiment and uses nanoliter-sized plugs to minimize sample consumption. Many distinct reagents were sequentially introduced as ≈140-nl plugs into a microfluidic device and combined with a substrate and a diluting buffer. Tests were conducted in ≈10-nl plugs containing different concentrations of a reagent. Methods were developed to form plugs of controlled concentrations, to index concentrations, and to incubate thousands of plugs inexpensively and without evaporation. To validate the hybrid method and demonstrate its applicability to challenging problems, crystallization of model membrane proteins and handling of solutions of detergents and viscous precipitants were demonstrated. By using 10 μl of protein solution, ≈1,300 crystallization trials were set up within 20 min by one researcher. This method was compatible with growth, manipulation, and extraction of high-quality crystals of membrane proteins, demonstrated by obtaining high-resolution diffraction images and solving a crystal structure. This robust method requires inexpensive equipment and supplies, should be especially suitable for use in individual laboratories, and could find applications in a number of areas that require chemical, biochemical, and biological screening and optimization. PMID:17159147

  3. Validity and Reliability of the 8-Item Work Limitations Questionnaire.

    PubMed

    Walker, Timothy J; Tullar, Jessica M; Diamond, Pamela M; Kohl, Harold W; Amick, Benjamin C

    2017-12-01

    Purpose: To evaluate the factorial validity, scale reliability, test-retest reliability, convergent validity, and discriminant validity of the 8-item Work Limitations Questionnaire (WLQ) among employees from a public university system. Methods: A secondary analysis using de-identified data from employees who completed an annual Health Assessment between 2009 and 2015 addressed the research aims. Confirmatory factor analysis (CFA) (n = 10,165) tested the latent structure of the 8-item WLQ. Scale reliability was determined using a CFA-based approach, while test-retest reliability was determined using the intraclass correlation coefficient. Convergent/discriminant validity was tested by evaluating relations between the 8-item WLQ and health/performance variables for convergent validity (health-related work performance, number of chronic conditions, and general health) and demographic variables for discriminant validity (gender and institution type). Results: A 1-factor model with three correlated residuals demonstrated excellent model fit (CFI = 0.99, TLI = 0.99, RMSEA = 0.03, and SRMR = 0.01). The scale reliability was acceptable (0.69, 95% CI 0.68-0.70) and the test-retest reliability was very good (ICC = 0.78). Low-to-moderate associations were observed between the 8-item WLQ and the health/performance variables, while weak associations were observed with the demographic variables. Conclusions: The 8-item WLQ demonstrated sufficient reliability and validity among employees from a public university system. Results suggest the 8-item WLQ is a usable alternative for studies when the more comprehensive 25-item WLQ is not available.

  4. Ambulant adults with spastic cerebral palsy: the validity of lower limb joint angle measurements from sagittal video recordings.

    PubMed

    Larsen, Kerstin L; Maanum, Grethe; Frøslie, Kathrine F; Jahnsen, Reidun

    2012-02-01

    In the development of a clinical program for ambulant adults with cerebral palsy (CP), we investigated the validity of joint angles measured from sagittal video recordings and explored whether movements in the transversal plane identified with three-dimensional gait analysis (3DGA) affected the validity of sagittal video joint angle measurements. Ten observers and 10 persons with spastic CP (19-63 years), Gross Motor Function Classification System I-II, participated in the study. Concurrent criterion validity between video joint angle measurements and 3DGA was assessed by Bland-Altman plots with mean differences and 95% limits of agreement (LoA). Pearson's correlation coefficients (r) and scatter plots were used supplementarily. Transversal kinematics ≥2 SD from our reference band were defined as increased movement in the transversal plane. The overall mean differences in degrees between joint angles measured by 3DGA and video recordings (3°, 5° and -7° for the hip, knee and ankle, respectively) and the corresponding LoA (18°, 10° and 15° for the hip, knee and ankle, respectively) demonstrated substantial discrepancies between the two methods. The correlations ranged from low (r=0.39) to moderate (r=0.68). Discrepancies between the two methods were seen in persons both with and without deviating transversal kinematics. Quantifying lower limb joint angles from sagittal video recordings in ambulant adults with spastic CP demonstrated low validity and should be conducted with caution. This has implications for selecting methods for evaluating gait. Copyright © 2011 Elsevier B.V. All rights reserved.
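
    The Bland-Altman agreement analysis used above reduces to a mean difference (bias) and 95% limits of agreement. A minimal sketch with hypothetical paired joint-angle measurements (degrees), not the study's data:

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)  # assumes approximately normal differences
    return bias, bias - half_width, bias + half_width

# Hypothetical knee angles from 3DGA vs. sagittal video (degrees)
bias, loa_low, loa_high = bland_altman([52, 48, 61, 57], [50, 50, 59, 60])
```

Wide limits of agreement relative to clinically acceptable error, as in the abstract, indicate poor interchangeability of the two methods even when the mean bias is small.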

  5. Identifying Wrist Fracture Patients with High Accuracy by Automatic Categorization of X-ray Reports

    PubMed Central

    de Bruijn, Berry; Cranney, Ann; O’Donnell, Siobhan; Martin, Joel D.; Forster, Alan J.

    2006-01-01

    The authors performed this study to determine the accuracy of several text classification methods for categorizing wrist x-ray reports. We randomly sampled 751 textual wrist x-ray reports. Two expert reviewers rated the presence (n = 301) or absence (n = 450) of an acute wrist fracture. We developed two information retrieval (IR) text classification methods and a machine learning method using a support vector machine (TC-1). In cross-validation on the derivation set (n = 493), TC-1 outperformed the two IR-based methods and six benchmark classifiers, including Naive Bayes and a neural network. In the validation set (n = 258), TC-1 demonstrated consistent performance with 93.8% accuracy, 95.5% sensitivity, 92.9% specificity, and 87.5% positive predictive value. TC-1 was easy to implement and superior in performance to the other classification methods. PMID:16929046
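
    The performance figures quoted for TC-1 (accuracy, sensitivity, specificity, positive predictive value) all derive from a 2x2 confusion matrix. A minimal sketch with hypothetical counts, not the study's data:

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # recall on fracture-present reports
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
    }

# Hypothetical counts for a validation set of 200 reports
m = screening_metrics(tp=45, fp=5, tn=140, fn=10)
```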

  6. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821

  7. Development and Validation of a Multiplexed Protein Quantitation Assay for the Determination of Three Recombinant Proteins in Soybean Tissues by Liquid Chromatography with Tandem Mass Spectrometry.

    PubMed

    Hill, Ryan C; Oman, Trent J; Shan, Guomin; Schafer, Barry; Eble, Julie; Chen, Cynthia

    2015-08-26

    Currently, traditional immunochemistry technologies such as enzyme-linked immunosorbent assays (ELISA) are the predominant analytical tool used to measure levels of recombinant proteins expressed in genetically engineered (GE) plants. Recent advances in agricultural biotechnology have created a need to develop methods capable of selectively detecting and quantifying multiple proteins in complex matrices because of increasing numbers of transgenic proteins being coexpressed or "stacked" to achieve tolerance to multiple herbicides or to provide multiple modes of action for insect control. A multiplexing analytical method utilizing liquid chromatography with tandem mass spectrometry (LC-MS/MS) has been developed and validated to quantify three herbicide-tolerant proteins in soybean tissues: aryloxyalkanoate dioxygenase (AAD-12), 5-enol-pyruvylshikimate-3-phosphate synthase (2mEPSPS), and phosphinothricin acetyltransferase (PAT). Results from the validation showed high recovery and precision over multiple analysts and laboratories. Results from this method were comparable to those obtained with ELISA with respect to protein quantitation, and the described method was demonstrated to be suitable for multiplex quantitation of transgenic proteins in GE crops.

  8. Evaluation of Informed Choice for contraceptive methods among women attending a family planning program: conceptual development; a case study in Chile.

    PubMed

    Valdés, Patricio R; Alarcon, Ana M; Munoz, Sergio R

    2013-03-01

    To generate and validate a scale to measure the Informed Choice of contraceptive methods among women attending a family health care service in Chile. The study follows a multimethod design that combined expert opinions from 13 physicians, 3 focus groups of 21 women each, and a sample survey of 1,446 women. Data analysis consisted of a qualitative text analysis of group interviews, a factor analysis for construct validity, and kappa statistic and Cronbach alpha to assess scale reliability. The instrument comprises 25 items grouped into six categories: information and orientation, quality of treatment, communication, participation in decision making, expression of reproductive rights, and method access and availability. Internal consistency measured with Cronbach alpha ranged from 0.75 to 0.89 for all subscales (kappa, 0.62; standard deviation, 0.06), and construct validity was demonstrated from the testing of several hypotheses. The use of mixed methods contributed to developing a scale of Informed Choice that was culturally appropriate for assessing the women who participated in the family planning program. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Generalizing disease management program results: how to get from here to there.

    PubMed

    Linden, Ariel; Adams, John L; Roberts, Nancy

    2004-07-01

    For a disease management (DM) program, the ability to generalize results from the intervention group to the population, to other populations, or to other diseases is as important as demonstrating internal validity. This article provides an overview of the threats to external validity of DM programs and offers methods to improve the capability for generalizing results obtained through the program. The external validity of DM programs must be evaluated even before program selection and implementation begin with a prospective new client. Any fundamental differences in characteristics between individuals in an established DM program and in a new population/environment may limit the ability to generalize.

  10. Determination of proline in honey: comparison between official methods, optimization and validation of the analytical methodology.

    PubMed

    Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Scarponi, Giuseppe

    2014-05-01

    The study compares official spectrophotometric methods for the determination of proline content in honey - those of the International Honey Commission (IHC) and the Association of Official Analytical Chemists (AOAC) - with the original Ough method. Results show that the extra time-consuming treatment stages added by the IHC method with respect to the Ough method are unnecessary. We demonstrate that the AOAC method proves to be the best in terms of accuracy and time saving. The optimized waiting time for the absorbance reading is set at 35 min from the removal of the reaction tubes from the boiling bath used in the sample treatment. The optimized method was validated in the matrix: linearity up to 1800 mg L^-1, limit of detection 20 mg L^-1, limit of quantification 61 mg L^-1. The method was applied to 43 unifloral honey samples from the Marche region, Italy. Copyright © 2013 Elsevier Ltd. All rights reserved.
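
    Detection and quantification limits like those reported above are commonly estimated from the calibration line, ICH-style, as 3.3·s/S and 10·s/S (s = residual standard deviation, S = slope). A sketch with hypothetical calibration points, not the study's data:

```python
import math

def calibration_lod_loq(conc, signal):
    """LOD = 3.3*s/S and LOQ = 10*s/S from an ordinary least-squares calibration line."""
    n = len(conc)
    mean_x, mean_y = sum(conc) / n, sum(signal) / n
    sxx = sum((x - mean_x) ** 2 for x in conc)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, signal)) / sxx
    intercept = mean_y - slope * mean_x
    residual_ss = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(conc, signal))
    s_res = math.sqrt(residual_ss / (n - 2))  # residual standard deviation
    return 3.3 * s_res / slope, 10.0 * s_res / slope

# Hypothetical proline standards (mg/L) vs. absorbance readings
lod, loq = calibration_lod_loq([0, 50, 100, 150, 200],
                               [0.010, 0.520, 0.995, 1.510, 2.000])
```

By construction the LOQ/LOD ratio is 10/3.3, so an abstract reporting LOD 20 and LOQ 61 mg/L is consistent with this style of estimate.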

  11. Validation of QuEChERS analytical technique for organochlorines and synthetic pyrethroids in fruits and vegetables using GC-ECD.

    PubMed

    Dubey, J K; Patyal, S K; Sharma, Ajay

    2018-03-19

    In the present-day scenario of increasing awareness and concern about pesticides, it is very important to ensure the quality of the data being generated in pesticide residue analysis. To impart confidence in the products, terms like quality assurance and quality control are used as an integral part of quality management. To ensure better quality of results in pesticide residue analysis, validation of the analytical methods to be used is extremely important. Keeping in view the importance of method validation, QuEChERS (quick, easy, cheap, effective, rugged, and safe), a multiresidue method for the extraction of 13 organochlorines and seven synthetic pyrethroids from fruits and vegetables followed by GC-ECD quantification, was validated so that this method could be used for the analysis of samples received in the laboratory. The method has been validated per the guidelines issued by SANCO (from the French Santé, health, and Consommateurs, consumers) in accordance with document SANCO/XXXX/2013. The parameters analyzed, viz., linearity, specificity, repeatability, reproducibility, and ruggedness, were found to have acceptable values, with a percent RSD of less than 10%. The limit of quantification (LOQ) was established at 0.01 mg kg^-1 for the organochlorines and 0.05 mg kg^-1 for the synthetic pyrethroids. The uncertainty of measurement (MU) for all these compounds ranged between 1 and 10%. Matrix-matched calibration was used to compensate for the matrix effect on the quantification of the compounds. The overall recovery of the method ranged between 80 and 120%. These results demonstrate the applicability and acceptability of this method for the routine estimation of residues of these 20 pesticides in fruits and vegetables by the laboratory.

  12. Recent Work in Hybrid Radiation Transport Methods with Applications to Commercial Nuclear Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulesza, Joel A.

    This talk will begin with an overview of hybrid radiation transport methods followed by a discussion of the author’s work to advance current capabilities. The talk will then describe applications for these methods in commercial nuclear power reactor analyses and techniques for experimental validation. When discussing these analytical and experimental activities, the importance of technical standards such as those created and maintained by ASTM International will be demonstrated.

  13. Reliability and Validity of a New Method for Isometric Back Extensor Strength Evaluation Using A Hand-Held Dynamometer.

    PubMed

    Park, Hee-Won; Baek, Sora; Kim, Hong Young; Park, Jung-Gyoo; Kang, Eun Kyoung

    2017-10-01

    To investigate the reliability and validity of a new method for isometric back extensor strength measurement using a portable dynamometer. A chair equipped with a small portable dynamometer (Power Track II Commander Muscle Tester) was designed. A total of 15 men (mean age, 34.8±7.5 years) and 15 women (mean age, 33.1±5.5 years) with no current back problems or previous history of back surgery were recruited. Subjects were asked to push against the back of the chair while seated, and their isometric back extensor strength was measured by the portable dynamometer. Test-retest reliability was assessed with the intraclass correlation coefficient (ICC). For the validity assessment, the isometric back extensor strength of all subjects was measured by a widely used physical performance evaluation instrument, the BTE PrimusRS system. The limits of agreement (LoA) from the Bland-Altman plot were evaluated between the two methods. The test-retest reliability was excellent (ICC=0.82; 95% confidence interval, 0.65-0.91). The Bland-Altman plots demonstrated acceptable agreement between the two methods: the lower 95% LoA was -63.1 N and the upper 95% LoA was 61.1 N. This study shows that isometric back extensor strength measurement using a portable dynamometer has good reliability and validity.
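
    Test-retest reliability of the kind reported above is quantified with an intraclass correlation coefficient. A sketch of the simple one-way random-effects ICC(1,1), using hypothetical strength readings (N) rather than the study's data (the specific ICC model used by the study is not stated here):

```python
def icc_oneway(measurements):
    """One-way random-effects ICC(1,1) for n subjects x k repeated measurements."""
    n, k = len(measurements), len(measurements[0])
    grand = sum(sum(row) for row in measurements) / (n * k)
    subj_means = [sum(row) / k for row in measurements]
    # Between-subjects and within-subjects mean squares from one-way ANOVA
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2 for row, m in zip(measurements, subj_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical isometric strength readings (N): 4 subjects, 2 sessions each
icc = icc_oneway([[310, 311], [414, 415], [520, 519], [208, 209]])
```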

  14. Overall uncertainty measurement for near infrared analysis of cryptotanshinone in tanshinone extract

    NASA Astrophysics Data System (ADS)

    Xue, Zhong; Xu, Bing; Shi, Xinyuan; Yang, Chan; Cui, Xianglong; Luo, Gan; Qiao, Yanjiang

    2017-01-01

    This study presented a new strategy for overall uncertainty measurement for near infrared (NIR) quantitative analysis of cryptotanshinone in tanshinone extract powders. The overall uncertainty of the NIR analysis was fully investigated and discussed using validation data from the precision, trueness and robustness studies. Quality by design (QbD) elements, such as risk assessment and design of experiments (DOE), were utilized to organize the validation data. An "I × J × K" (number of series I, number of repetitions J and number of concentration levels K) full factorial design was used to calculate uncertainty from the precision and trueness data. A 2^(7-4) Plackett-Burman matrix with four influence factors identified by failure mode and effects analysis (FMEA) was adopted for the robustness study. The overall uncertainty profile was introduced as a graphical decision-making tool to evaluate the validity of the NIR method over the predefined concentration range. In comparison with T. Saffaj's method (Analyst, 2013, 138, 4677) for overall uncertainty assessment, the proposed approach gave almost the same results, demonstrating that the proposed method is reasonable and valid. Moreover, the proposed method can help identify critical factors that influence the NIR prediction performance, which could be used for further optimization of the NIR analytical procedures in routine use.
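
    A 2^(7-4) robustness design of the kind mentioned above (eight runs covering seven two-level factors) can be generated from three base factors plus four generator columns. A sketch; the study's actual factor assignments are not given here:

```python
from itertools import product

def fractional_factorial_2_7_4():
    """8-run, 7-factor two-level screening design: base factors A, B, C with
    generators D = AB, E = AC, F = BC, G = ABC (a resolution III fraction)."""
    runs = []
    for a, b, c in product((-1, 1), repeat=3):  # full factorial in A, B, C
        runs.append((a, b, c, a * b, a * c, b * c, a * b * c))
    return runs

design = fractional_factorial_2_7_4()  # rows = runs, columns = factor settings
```

Each column is balanced and every pair of columns is orthogonal, which is what lets seven factors be screened for main effects in only eight runs.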

  15. Vacuum decay container closure integrity leak test method development and validation for a lyophilized product-package system.

    PubMed

    Patel, Jayshree; Mulhall, Brian; Wolf, Heinz; Klohr, Steven; Guazzo, Dana Morton

    2011-01-01

    A leak test performed according to ASTM F2338-09 Standard Test Method for Nondestructive Detection of Leaks in Packages by Vacuum Decay Method was developed and validated for container-closure integrity verification of a lyophilized product in a parenteral vial package system. This nondestructive leak test method is intended for use in manufacturing as an in-process package integrity check, and for testing product stored on stability in lieu of sterility tests. Method development and optimization challenge studies incorporated artificially defective packages representing a range of glass vial wall and sealing surface defects, as well as various elastomeric stopper defects. Method validation required 3 days of random-order replicate testing of a test sample population of negative-control, no-defect packages and positive-control, with-defect packages. Positive-control packages were prepared using vials each with a single hole laser-drilled through the glass vial wall. Hole creation and hole size certification was performed by Lenox Laser. Validation study results successfully demonstrated the vacuum decay leak test method's ability to accurately and reliably detect those packages with laser-drilled holes greater than or equal to approximately 5 μm in nominal diameter. Total test time is less than 1 min per package. All development and validation studies were performed at Whitehouse Analytical Laboratories in Whitehouse, NJ, under the direction of consultant Dana Guazzo of RxPax, LLC, using a VeriPac 455 Micro Leak Test System by Packaging Technologies & Inspection (Tuckahoe, NY). Bristol Myers Squibb (New Brunswick, NJ) fully subsidized all work.

  16. Automatic Cone Photoreceptor Localisation in Healthy and Stargardt Afflicted Retinas Using Deep Learning.

    PubMed

    Davidson, Benjamin; Kalitzeos, Angelos; Carroll, Joseph; Dubra, Alfredo; Ourselin, Sebastien; Michaelides, Michel; Bergeles, Christos

    2018-05-21

    We present a robust deep learning framework for the automatic localisation of cone photoreceptor cells in Adaptive Optics Scanning Light Ophthalmoscope (AOSLO) split-detection images. Monitoring cone photoreceptors with AOSLO imaging grants an excellent view into retinal structure and health, provides new perspectives on well-known pathologies, and allows clinicians to monitor the effectiveness of experimental treatments. The MultiDimensional Recurrent Neural Network (MDRNN) approach developed in this paper is the first method capable of reliably and automatically identifying cones in both healthy retinas and retinas afflicted with Stargardt disease. It therefore represents a leap forward in the computational image processing of AOSLO images, and can provide clinical support in ongoing longitudinal studies of disease progression and therapy. We validate our method using images from healthy subjects and subjects with the inherited retinal pathology Stargardt disease, which significantly alters image quality and cone density. We conduct a thorough comparison of our method with current state-of-the-art methods and demonstrate that the proposed approach is both more accurate and appreciably faster in localising cones. As further validation of the method's robustness, we demonstrate that it can be successfully applied to images of retinas with pathologies not present in the training data: achromatopsia and retinitis pigmentosa.

  17. Real-Time Onboard Global Nonlinear Aerodynamic Modeling from Flight Data

    NASA Technical Reports Server (NTRS)

    Brandon, Jay M.; Morelli, Eugene A.

    2014-01-01

    Flight test and modeling techniques were developed to accurately identify global nonlinear aerodynamic models onboard an aircraft. The techniques were developed and demonstrated during piloted flight testing of an Aermacchi MB-326M Impala jet aircraft. Advanced piloting techniques and nonlinear modeling techniques based on fuzzy logic and multivariate orthogonal function methods were implemented with efficient onboard calculations and flight operations to achieve real-time maneuver monitoring and analysis, and near-real-time global nonlinear aerodynamic modeling and prediction validation testing in flight. Results demonstrated that global nonlinear aerodynamic models for a large portion of the flight envelope were identified rapidly and accurately using piloted flight test maneuvers during a single flight, with the final identified and validated models available before the aircraft landed.

  18. Psychometric Assessment of the Chinese Version of the Supportive Care Needs Survey Short-Form (SCNS-SF34-C) among Hong Kong and Taiwanese Chinese Colorectal Cancer Patients

    PubMed Central

    Li, Wylie Wai Yee; Lam, Wendy Wing Tak; Shun, Shiow-Ching; Lai, Yeur-Hur; Law, Wai-Lun; Poon, Jensen; Fielding, Richard

    2013-01-01

    Background Accurate assessment of unmet supportive care needs is essential for optimal cancer patient care. This study used confirmatory factor analysis (CFA) to test the known factor structures of the short form of the Supportive Care Needs Survey (SCNS-SF34) in Hong Kong and Taiwanese Chinese patients diagnosed with colorectal cancer (CRC). Methods 360 Hong Kong and 263 Taiwanese Chinese CRC patients completed the Chinese version of the SCNS-SF34. Comparative measures (patient satisfaction, anxiety, depression, and symptom distress) tested convergent validity, while known-group differences were examined to test discriminant validity. Results The original 5-factor and recent 4-factor models of the SCNS demonstrated poor data fit using CFA in both the Hong Kong and Taiwan samples. A modified five-factor model with correlated residuals subsequently demonstrated acceptable fit in both samples. Correlations demonstrated convergent and divergent validity, and known-group differences were observed. Conclusions While the five-factor model demonstrated a better fit for data from Chinese colorectal cancer patients, some items within its domains overlapped, suggesting item redundancy. The five-factor model showed good psychometric properties in these samples, but the results also suggest that current conceptualizations of unmet supportive care needs are inadequate. PMID:24146774

  19. Reliability and Validity of a New Method for Isometric Back Extensor Strength Evaluation Using A Hand-Held Dynamometer

    PubMed Central

    2017-01-01

    Objective To investigate the reliability and validity of a new method for isometric back extensor strength measurement using a portable dynamometer. Methods A chair equipped with a small portable dynamometer (Power Track II Commander Muscle Tester) was designed. A total of 15 men (mean age, 34.8±7.5 years) and 15 women (mean age, 33.1±5.5 years) with no current back problems or previous history of back surgery were recruited. Subjects were asked to push against the back of the chair while seated, and their isometric back extensor strength was measured by the portable dynamometer. Test-retest reliability was assessed with the intraclass correlation coefficient (ICC). For the validity assessment, the isometric back extensor strength of all subjects was also measured by a widely used physical performance evaluation instrument, the BTE PrimusRS system. The limits of agreement (LoA) from the Bland-Altman plot were evaluated between the two methods. Results The test-retest reliability was excellent (ICC=0.82; 95% confidence interval, 0.65–0.91). The Bland-Altman plots demonstrated acceptable agreement between the two methods: the lower 95% LoA was −63.1 N and the upper 95% LoA was 61.1 N. Conclusion This study shows that isometric back extensor strength measurement using a portable dynamometer has good reliability and validity. PMID:29201818
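
    The two statistics reported in this record can be reproduced on synthetic data. Below is a minimal sketch, assuming a one-way random-effects ICC (the exact ICC model used in the study is not stated in this record) and the usual mean ± 1.96 SD limits of agreement; all numbers are simulated, not the study's data:

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_trials) array."""
    n, k = data.shape
    row_means = data.mean(axis=1)
    grand = data.mean()
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)            # between-subject mean square
    msw = np.sum((data - row_means[:, None]) ** 2) / (n * (k - 1))  # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

def bland_altman_loa(a, b):
    """95% limits of agreement between two measurement methods."""
    d = np.asarray(a) - np.asarray(b)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias - half_width, bias + half_width

rng = np.random.default_rng(0)
true_strength = rng.normal(300.0, 60.0, size=30)            # N, simulated subjects
trial1 = true_strength + rng.normal(0.0, 20.0, size=30)     # portable dynamometer, day 1
trial2 = true_strength + rng.normal(0.0, 20.0, size=30)     # portable dynamometer, day 2
reference = true_strength + rng.normal(0.0, 25.0, size=30)  # reference instrument

icc = icc_oneway(np.column_stack([trial1, trial2]))
lo, hi = bland_altman_loa(trial1, reference)
print(f"ICC = {icc:.2f}, LoA = ({lo:.1f} N, {hi:.1f} N)")
```

    A symmetric LoA interval around a bias near zero, as in the study's (−63.1 N, 61.1 N), indicates no systematic offset between the two instruments.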

  20. Advanced forensic validation for human spermatozoa identification using SPERM HY-LITER™ Express with quantitative image analysis.

    PubMed

    Takamura, Ayari; Watanabe, Ken; Akutsu, Tomoko

    2017-07-01

    Identification of human semen is indispensable for the investigation of sexual assaults. Fluorescence staining methods using commercial kits, such as the series of SPERM HY-LITER™ kits, have been useful for detecting human sperm via strong fluorescence. These kits have been examined from various forensic aspects. However, for lack of evaluation methods, these studies provided neither objective, quantitative descriptions of the results nor clear criteria for the decisions reached. In addition, the range of validations was considerably limited. In this study, we conducted more advanced validations of SPERM HY-LITER™ Express using our established image analysis method. This method enabled objective and specific identification of fluorescent sperm spots and quantitative comparisons of sperm detection performance under complex experimental conditions. For body fluid mixtures, we examined interference with the fluorescence staining from other body fluid components. Effects of sample decomposition were simulated under high-humidity and high-temperature conditions. Semen with very low sperm concentrations, such as azoospermia and oligospermia samples, represented the most challenging cases for application of the kit. Finally, the tolerance of the kit to various acidic and basic environments was analyzed. These validations provide useful information for practical applications of the SPERM HY-LITER™ Express kit that was previously unobtainable. Moreover, the versatility of our image analysis method in various complex cases was demonstrated.

  1. Validity and reliability of the session-RPE method for quantifying training in Australian football: a comparison of the CR10 and CR100 scales.

    PubMed

    Scott, Tannath J; Black, Cameron R; Quinn, John; Coutts, Aaron J

    2013-01-01

    The purpose of this study was to examine and compare the criterion validity and test-retest reliability of the CR10 and CR100 rating of perceived exertion (RPE) scales for team sport athletes who undertake high-intensity, intermittent exercise. Twenty-one male Australian football (AF) players (age: 19.0 ± 1.8 years, body mass: 83.92 ± 7.88 kg) participated in the first part (part A) of this study, which examined the construct validity of the session-RPE (sRPE) method for quantifying training load in AF. Ten male athletes (age: 16.1 ± 0.5 years) participated in the second part of the study (part B), which compared the test-retest reliability of the CR10 and CR100 RPE scales. In part A, the validity of the sRPE method was assessed by examining the relationships between sRPE and objective measures of internal (i.e., heart rate) and external training load (i.e., distance traveled) collected from AF training sessions. Part B assessed the reliability of sRPE by examining its test-retest reliability during 3 different intensities of controlled intermittent running (10, 11.5, and 13 km·h(-1)). Results from part A demonstrated strong correlations of CR10- and CR100-derived sRPE with measures of internal training load (Banister's TRIMP and Edwards' TRIMP) (CR10: r = 0.83 and 0.83; CR100: r = 0.80 and 0.81, p < 0.05). Correlations between sRPE and external training load (distance, higher-speed running, and player load) for both the CR10 (r = 0.81, 0.71, and 0.83) and CR100 (r = 0.78, 0.69, and 0.80) were significant (p < 0.05). Results from part B demonstrated poor reliability for both the CR10 (31.9% CV) and CR100 (38.6% CV) RPE scales after short bouts of intermittent running. Collectively, these results suggest both CR10- and CR100-derived sRPE methods have good construct validity for assessing training load in AF. The poor reliability observed under field testing indicates that the sRPE method may not be sensitive enough to detect small changes in exercise intensity during brief intermittent running bouts. Despite this limitation, the sRPE remains a valid method for quantifying training loads in high-intensity, intermittent team sports.
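
    The sRPE load itself is the standard Foster calculation (post-session RPE multiplied by session duration in minutes), and the reliability figures above are coefficients of variation over repeated ratings. A minimal sketch with hypothetical ratings (not the study's data):

```python
import statistics

def session_rpe(rpe, duration_min):
    """Session training load in arbitrary units: RPE x duration (Foster method)."""
    return rpe * duration_min

def coefficient_of_variation(values):
    """Test-retest CV (%) used to express reliability."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeated CR10 ratings of the same controlled running bout
repeat_ratings = [3, 4, 3, 5, 4, 3, 5, 4, 3, 5]

load = session_rpe(rpe=4, duration_min=60)  # 240 AU for a 60-min session
cv = coefficient_of_variation(repeat_ratings)
print(load, round(cv, 1))
```

    A CV above roughly 30%, as the study reports for brief bouts, means rating noise is of the same order as the changes in intensity one would hope to detect.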

  2. A homotopy analysis method for the nonlinear partial differential equations arising in engineering

    NASA Astrophysics Data System (ADS)

    Hariharan, G.

    2017-05-01

    In this article, we apply the homotopy analysis method (HAM) to solve several partial differential equations arising in engineering. This technique provides solutions as rapidly convergent series with computable terms for problems with strongly nonlinear terms in the governing differential equations. The convergence analysis of the proposed method is also discussed. Finally, we give some illustrative examples to demonstrate the validity and applicability of the proposed method.
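
    The HAM construction referred to above can be sketched in its standard form (due to Liao; the notation below is the conventional one and is assumed, not taken from this article):

```latex
% Zeroth-order deformation equation with embedding parameter q in [0,1],
% auxiliary linear operator L, nonlinear operator N from the governing PDE,
% initial guess u_0, and convergence-control parameter hbar:
(1-q)\,\mathcal{L}\bigl[\phi(x,t;q) - u_0(x,t)\bigr]
  = q\,\hbar\,\mathcal{N}\bigl[\phi(x,t;q)\bigr], \qquad q \in [0,1].

% As q runs from 0 to 1, phi deforms from u_0 to the solution u; expanding
% phi in a Taylor series in q yields the homotopy series solution
u(x,t) = u_0(x,t) + \sum_{m=1}^{\infty} u_m(x,t),
\qquad u_m(x,t) = \frac{1}{m!}\,
  \frac{\partial^m \phi(x,t;q)}{\partial q^m}\Big|_{q=0}.
```

    The free parameter ℏ is what distinguishes HAM from plain perturbation expansions: it can be tuned to enforce convergence of the series.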

  3. The convergence study of the homotopy analysis method for solving nonlinear Volterra-Fredholm integrodifferential equations.

    PubMed

    Ghanbari, Behzad

    2014-01-01

    We aim to study the convergence of the homotopy analysis method (HAM) for solving special nonlinear Volterra-Fredholm integrodifferential equations. A sufficient condition for the convergence of the method is briefly addressed. Some illustrative examples are also presented to demonstrate the validity and applicability of the technique. Comparison of the results obtained by HAM with the exact solutions shows that the method is reliable and capable of providing an analytic treatment for such equations.

  4. Ligand interaction scan: a general method for engineering ligand-sensitive protein alleles.

    PubMed

    Erster, Oran; Eisenstein, Miriam; Liscovitch, Mordechai

    2007-05-01

    The ligand interaction scan (LIScan) method is a general procedure for engineering small molecule ligand-regulated forms of a protein that is complementary to other 'reverse' genetic and chemical-genetic methods for drug-target validation. It involves insertional mutagenesis by a chemical-genetic 'switch', comprising a genetically encoded peptide module that binds with high affinity to a small-molecule ligand. We demonstrated the method with TEM-1 beta-lactamase, using a tetracysteine hexapeptide insert and a biarsenical fluorescein ligand (FlAsH).

  5. Quantitative nondestructive evaluation of ceramic matrix composite by the resonance method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T.; Aizawa, T.; Kihara, J.

    The resonance method was developed to provide quantitative nondestructive evaluation of mechanical properties without elaborate specimen preparation. Since the present method is insensitive to specimen geometry, both monolithic and ceramic matrix composite materials in process can be evaluated nondestructively. Al₂O₃, Si₃N₄, SiC/Si₃N₄, and various C/C composite materials are employed to demonstrate the validity and effectiveness of the present method.

  6. Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO

    NASA Technical Reports Server (NTRS)

    Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.

    2016-01-01

    A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient-based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis, and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite-difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
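
    The derivative-validation step described above amounts to comparing an analytic gradient against a finite-difference stencil. A minimal sketch on a toy temperature model (the function and constants are illustrative stand-ins, not the CEA thermodynamics):

```python
def combustion_temp(phi):
    """Toy stand-in for a thermodynamic analysis output (NOT the CEA model):
    temperature peaks near an equivalence ratio of about 1."""
    return 2200.0 - 900.0 * (phi - 1.05) ** 2

def d_combustion_temp(phi):
    """Hand-derived 'analytic' derivative of the toy model."""
    return -1800.0 * (phi - 1.05)

def finite_difference(f, x, h=1e-6):
    """Central finite-difference approximation used to check analytic derivatives."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

phi = 0.8
analytic = d_combustion_temp(phi)
approx = finite_difference(combustion_temp, phi)
assert abs(analytic - approx) < 1e-4  # derivative verification step
```

    In practice the analytic (adjoint) path wins on speed because one adjoint solve yields the whole gradient, whereas finite differencing requires one extra model evaluation per input.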

  7. SHERMAN, a shape-based thermophysical model. I. Model description and validation

    NASA Astrophysics Data System (ADS)

    Magri, Christopher; Howell, Ellen S.; Vervack, Ronald J.; Nolan, Michael C.; Fernández, Yanga R.; Marshall, Sean E.; Crowell, Jenna L.

    2018-03-01

    SHERMAN, a new thermophysical modeling package designed for analyzing near-infrared spectra of asteroids and other solid bodies, is presented. The model's features, the methods it uses to solve for surface and subsurface temperatures, and the synthetic data it outputs are described. A set of validation tests demonstrates that SHERMAN produces accurate output in a variety of special cases for which correct results can be derived from theory. These cases include a family of solutions to the heat equation for which thermal inertia can have any value and thermophysical properties can vary with depth and with temperature. An appendix describes a new approximation method for estimating surface temperatures within spherical-section craters, more suitable for modeling infrared beaming at short wavelengths than the standard method.

  8. Feasibility study on inverse four-dimensional dose reconstruction using the continuous dose-image of EPID

    PubMed Central

    Yeo, Inhwan Jason; Jung, Jae Won; Yi, Byong Yong; Kim, Jong Oh

    2013-01-01

    Purpose: When an intensity-modulated radiation beam is delivered to a moving target, the interplay effect between dynamic beam delivery and the target motion due to mis-synchronization can cause unpredictable dose delivery. The portal dose image in an electronic portal imaging device (EPID) represents radiation attenuated and scattered through target media. Thus, it may possess information about the radiation delivered to the target. Using a continuous scan (cine) mode of EPID, which provides temporal dose images related to target and beam movements, the authors’ goal is to perform four-dimensional (4D) dose reconstruction. Methods: To evaluate this hypothesis, first, the authors have derived and subsequently validated a fast method of dose reconstruction based on virtual beamlet calculations of dose responses using a test intensity-modulated beam. This method was necessary for processing a large number of EPID images pertinent for four-dimensional reconstruction. Second, cine mode acquisition after summation over all images was validated through comparison with integration mode acquisition on EPID (IAS3 and aS1000) for the test beam. This was to confirm the agreement of the cine mode with the integrated mode, specifically for the test beam, which is an accepted mode of image acquisition for dosimetry with EPID. Third, in-phantom film and exit EPID dosimetry were performed on a moving platform using the same beam. Heterogeneous as well as homogeneous phantoms were used. The cine images were temporally sorted at 10% intervals. The authors have performed dose reconstruction to the in-phantom plane from the sorted cine images using the above-validated method of dose reconstruction. The reconstructed dose from each cine image was summed to compose a total reconstructed dose from the test beam delivery, and was compared with film measurements.
Results: The new method of dose reconstruction was validated showing greater than 95.3% pass rates of the gamma test with the criteria of dose difference of 3% and distance to agreement of 3 mm. The dose comparison of the reconstructed dose with the measured dose for the two phantoms showed pass rates higher than 96.4% given the same criteria. Conclusions: Feasibility of 4D dose reconstruction was successfully demonstrated in this study. The 4D dose reconstruction demonstrated in this study can be a promising dose validation method for radiation delivery on moving organs. PMID:23635250

  9. Can quantile mapping improve precipitation extremes from regional climate models?

    NASA Astrophysics Data System (ADS)

    Tani, Satyanarayana; Gobiet, Andreas

    2015-04-01

    The ability of quantile mapping to accurately bias-correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots, and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
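
    Empirical quantile mapping of the kind the QMα baseline represents can be sketched as follows. This is generic nonparametric QM, not the authors' QMβ variants, and all data are synthetic:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical (nonparametric) quantile mapping: map each future model value
    through the historical model quantiles onto the observed quantiles."""
    quantiles = np.linspace(0.01, 0.99, 99)
    model_q = np.quantile(model_hist, quantiles)  # model correction nodes
    obs_q = np.quantile(obs_hist, quantiles)      # observed correction nodes
    # Linear interpolation between nodes; values beyond the calibration range
    # are clamped to the outermost nodes by np.interp -- precisely the
    # weakness at "new extremes" that shape-constrained extensions address.
    return np.interp(model_future, model_q, obs_q)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 4.0, size=3000)                 # pseudo-observed daily precipitation
model = 0.7 * rng.gamma(2.0, 4.0, size=3000) + 1.0   # biased RCM hindcast
future = 0.7 * rng.gamma(2.2, 4.5, size=1000) + 1.0  # shifted future climate
corrected = quantile_map(model, obs, future)
```

    Because `np.interp` clamps outside the calibration range, any future value larger than the largest hindcast value receives the same correction as that largest value, which is why purely empirical QM struggles with unprecedented extremes.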

  10. Modeling, simulation, and estimation of optical turbulence

    NASA Astrophysics Data System (ADS)

    Formwalt, Byron Paul

    This dissertation documents three new contributions to the simulation and modeling of optical turbulence. The first contribution is the formalization, optimization, and validation of a modeling technique called successively conditioned rendering (SCR). The SCR technique is empirically validated by comparing the statistical error of random phase screens generated with the technique. The second contribution is the derivation of the covariance delineation theorem, which provides theoretical bounds on the error associated with SCR. It is shown empirically that the theoretical bound may be used to predict relative algorithm performance. Therefore, the covariance delineation theorem is a powerful tool for optimizing SCR algorithms. For the third contribution, we introduce a new method for passively estimating optical turbulence parameters and demonstrate it using experimental data from a 100 m horizontal path at 1.25 m above sun-heated tarmac on a clear afternoon. For this experiment, we estimated Cn² ≈ 6.01 × 10⁻⁹ m⁻²/³, l₀ ≈ 17.9 mm, and L₀ ≈ 15.5 m.

  11. Transferability and inter-laboratory variability assessment of the in vitro bovine oocyte fertilization test.

    PubMed

    Tessaro, Irene; Modina, Silvia C; Crotti, Gabriella; Franciosi, Federica; Colleoni, Silvia; Lodde, Valentina; Galli, Cesare; Lazzari, Giovanna; Luciano, Alberto M

    2015-01-01

    The dramatic increase in the number of animals required for reproductive toxicity testing imposes the validation of alternative methods to reduce the use of laboratory animals. As we previously demonstrated for the in vitro maturation test of bovine oocytes, the present study describes the transferability assessment and inter-laboratory variability of an in vitro test able to identify chemical effects on the process of bovine oocyte fertilization. Eight chemicals with well-known toxic properties (benzo[a]pyrene, busulfan, cadmium chloride, cycloheximide, diethylstilbestrol, ketoconazole, methylacetoacetate, mifepristone/RU-486) were tested in two well-trained laboratories. The statistical analysis demonstrated no differences in the EC50 values for each chemical, either within laboratories (between runs) or between laboratories. We therefore conclude that the bovine in vitro fertilization test could advance toward the validation process as an alternative in vitro method and become part of an integrated testing strategy to predict chemical hazards to mammalian fertility. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Identifying areas with vitamin A deficiency: the validity of a semiquantitative food frequency method.

    PubMed

    Sloan, N L; Rosen, D; de la Paz, T; Arita, M; Temalilwa, C; Solomons, N W

    1997-02-01

    The prevalence of vitamin A deficiency has traditionally been assessed through xerophthalmia or biochemical surveys. The cost and complexity of implementing these methods limit the ability of nonresearch organizations to identify vitamin A deficiency. This study examined the validity of a simple, inexpensive food frequency method to identify areas with a high prevalence of vitamin A deficiency. The validity of the method was tested in 15 communities, 5 each from the Philippines, Guatemala, and Tanzania. Serum retinol concentrations of less than 20 micrograms/dL defined vitamin A deficiency. Weighted measures of vitamin A intake six or fewer times per week and unweighted measures of consumption of animal sources of vitamin A four or fewer times per week correctly classified seven of eight communities as having a high prevalence of vitamin A deficiency (i.e., 15% or more of preschool-aged children in the community had the deficiency) (sensitivity = 87.5%) and four of seven communities as having a low prevalence (specificity = 57.1%). The method correctly classified the vitamin A deficiency status of 73.3% of the communities but demonstrated a high false-positive rate (42.9%).
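
    The validity figures in this record follow directly from the community-level confusion counts (7 of 8 high-prevalence communities flagged, 4 of 7 low-prevalence communities cleared); a minimal check:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Screening-method validity against the serum-retinol gold standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec

# Community-level counts reported in the abstract
sens, spec = sensitivity_specificity(tp=7, fn=1, tn=4, fp=3)
accuracy = (7 + 4) / 15          # overall correct classification
false_positive_rate = 3 / (3 + 4)  # complement of specificity

print(round(100 * sens, 1), round(100 * spec, 1))  # 87.5 57.1
```

    The 73.3% accuracy and 42.9% false-positive rate quoted in the abstract fall out of the same four counts.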

  13. Determination of calcium, magnesium, sodium, and potassium in foodstuffs by using a microsampling flame atomic absorption spectrometric method after closed-vessel microwave digestion: method validation.

    PubMed

    Chekri, Rachida; Noël, Laurent; Vastel, Christelle; Millour, Sandrine; Kadar, Ali; Guérin, Thierry

    2010-01-01

    This paper describes a validation process in compliance with the NF EN ISO/IEC 17025 standard for the determination of the macrominerals calcium, magnesium, sodium, and potassium in foodstuffs by microsampling flame atomic absorption spectrometry after closed-vessel microwave digestion. The French Standards Commission (Agence Française de Normalisation) standards NF V03-110, NF EN V03-115, and XP T-90-210 were used to evaluate this method. The method was validated in the context of an analysis of the 1322 food samples of the second French Total Diet Study (TDS). Several performance criteria were evaluated: linearity, LOQ, specificity, trueness, precision under repeatability conditions, and intermediate precision (reproducibility). Furthermore, the method was monitored by several internal quality controls. The LOQ values obtained (25, 5, 8.3, and 8.3 mg/kg for Ca, Mg, Na, and K, respectively) were in compliance with the needs of the TDS. The method provided accurate results, as demonstrated by a repeatability CV (CVr) of < 7% and a reproducibility CV (CVR) of < 12% for all elements. Therefore, the results indicated that this method could be used in the laboratory for the routine determination of these four elements in foodstuffs with acceptable analytical performance.
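
    The repeatability criterion (CVr < 7%) is a plain coefficient of variation over replicate determinations; a minimal sketch with hypothetical replicate values (not the study's measurements):

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation (%) of replicate determinations."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical replicate Ca determinations (mg/kg) on a single food sample,
# measured under repeatability conditions (same run, same analyst)
repeatability_run = [412.0, 405.5, 418.2, 409.9, 414.3, 407.1]

cvr = cv_percent(repeatability_run)
assert cvr < 7.0  # acceptance criterion reported for CVr in the study
```

    The reproducibility CV (CVR) is computed the same way but over determinations spread across days, analysts, or instruments, which is why its acceptance limit (< 12%) is looser.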

  14. Selection of regularization parameter in total variation image restoration.

    PubMed

    Liao, Haiyong; Li, Fang; Ng, Michael K

    2009-11-01

    We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with automatic selection of the regularization parameter to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 × 256 in approximately 20 s in the MATLAB computing environment.
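
    GCV chooses the regularization weight by minimizing a predictive-risk estimate. Below is a minimal sketch for the Tikhonov case the paper builds on (the TV variant replaces the quadratic penalty; the matrix sizes and data here are synthetic, and the dense hat-matrix computation is for illustration only):

```python
import numpy as np

def gcv_score(A, b, lam):
    """GCV score for Tikhonov regularization x = argmin ||Ax-b||^2 + lam ||x||^2."""
    n = A.shape[0]
    # Influence (hat) matrix H = A (A^T A + lam I)^(-1) A^T
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
    residual = (np.eye(n) - H) @ b
    return n * (residual @ residual) / np.trace(np.eye(n) - H) ** 2

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 20))        # synthetic blur/forward operator
x_true = rng.normal(size=20)
b = A @ x_true + rng.normal(scale=0.5, size=40)  # noisy observations

# Pick the regularization parameter minimizing the GCV score over a grid
lams = np.logspace(-3, 2, 30)
scores = [gcv_score(A, b, lam) for lam in lams]
best_lam = lams[int(np.argmin(scores))]
```

    The appeal of GCV, as the abstract notes, is that no knowledge of the noise level is needed; the score balances residual size against the effective degrees of freedom trace(H).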

  15. Development and validation of a fast static headspace GC method for determination of residual solvents in permethrin.

    PubMed

    Tian, Jingzhi; Rustum, Abu

    2016-09-05

    A fast static headspace gas chromatography (HS-GC) method was developed to separate all residual solvents present in commercial active pharmaceutical ingredient (API) batches of permethrin. A total of six residual solvents, namely 2-methylpentane, 3-methylpentane, methylcyclopentane, n-hexane, cyclohexane, and toluene, were found in typical commercial batches of permethrin; three of them are not in the list of International Conference on Harmonisation (ICH) solvents. All six residual solvents were baseline separated in five minutes by the new method presented in this paper. The method was successfully validated as per ICH guidelines. The method was also evaluated for separating 26 solvents commonly used in the manufacturing of various APIs, key API intermediates, and pharmaceutical excipients. The results of the evaluation demonstrated that this method can also be used as a general method to determine residual solvents in various APIs, intermediates, and excipients used in pharmaceutical products. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Surface smoothness: cartilage biomarkers for knee OA beyond the radiologist

    NASA Astrophysics Data System (ADS)

    Tummala, Sudhakar; Dam, Erik B.

    2010-03-01

    Fully automatic imaging biomarkers may allow quantification of patho-physiological processes that a radiologist would not be able to assess reliably. This can introduce new insight but is problematic to validate due to lack of meaningful ground truth expert measurements. Rather than quantification accuracy, such novel markers must therefore be validated against clinically meaningful end-goals such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method used on tibial and femoral cartilage compartments resulting from an automatic segmentation scheme. These smoothness estimates are validated for their ability to diagnose osteoarthritis and compared to smoothness estimates based on manual expert segmentations and to conventional cartilage volume quantification. We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations, and in addition provide a diagnostic marker superior to the evaluated semi-manual markers.

  17. Validation and evaluation of the advanced aeronautical CFD system SAUNA: A method developer's view

    NASA Astrophysics Data System (ADS)

    Shaw, J. A.; Peace, A. J.; Georgala, J. M.; Childs, P. N.

    1993-09-01

    This paper is concerned with a detailed validation and evaluation of the SAUNA CFD system for complex aircraft configurations. The methodology of the complete system is described in brief, including its unique use of differing grid generation strategies (structured, unstructured, or both) depending on the geometric complexity of the configuration. A wide range of configurations and flow conditions is chosen for the validation and evaluation exercise to demonstrate the scope of SAUNA. A detailed description of the results from the method is preceded by a discussion of the philosophy behind the strategy followed in the exercise, in terms of quality assessment and the differing roles of the code developer and the code user. It is concluded that SAUNA has grown into a highly usable tool for the aircraft designer, combining flexibility and accuracy in an efficient manner.

  18. Validation of the self-assessment teamwork tool (SATT) in a cohort of nursing and medical students.

    PubMed

    Roper, Lucinda; Shulruf, Boaz; Jorm, Christine; Currie, Jane; Gordon, Christopher J

    2018-02-09

    Poor teamwork has been implicated in medical error, and teamwork training has been shown to improve patient care. Simulation is an effective educational method for teamwork training. Post-simulation reflection aims to promote learning, and we have previously developed a self-assessment teamwork tool (SATT) for health students to measure teamwork performance. This study aimed to evaluate the psychometric properties of a revised self-assessment teamwork tool. The tool was tested with 257 medical and nursing students after their participation in one of several mass casualty simulations. Using exploratory and confirmatory factor analysis, the revised self-assessment teamwork tool was shown to have strong construct validity and high reliability, and the construct demonstrated invariance across groups (medicine and nursing). The modified SATT was shown to be a reliable and valid student self-assessment tool. The SATT is a quick and practical method of guiding students' reflection on important teamwork skills.

  19. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. For this class of ill-posed inverse problems, we estimate the PSF parameters from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory that reduce the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
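
    For reference, the GCV criterion being minimized has the standard Golub-Heath-Wahba closed form; writing A(λ) for the linear restoration operator implied by a given choice of PSF and regularization parameters, and y for the raw data vector of length n:

```latex
\mathrm{GCV}(\lambda) \;=\;
\frac{\tfrac{1}{n}\,\bigl\lVert \bigl(I - A(\lambda)\bigr)\,y \bigr\rVert^{2}}
     {\Bigl[\tfrac{1}{n}\,\operatorname{tr}\bigl(I - A(\lambda)\bigr)\Bigr]^{2}}
```

    The trace in the denominator is the expensive part for large images, and it is this quantity that Lanczos/Gauss-quadrature machinery of the kind the authors describe is typically used to approximate.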

  20. Development of synthetic nuclear melt glass for forensic analysis.

    PubMed

    Molgaard, Joshua J; Auxier, John D; Giminaro, Andrew V; Oldham, C J; Cook, Matthew T; Young, Stephen A; Hall, Howard L

    A method for producing synthetic debris similar to the melt glass produced by nuclear surface testing is demonstrated. Melt glass from the first nuclear weapon test (commonly referred to as trinitite) is used as the benchmark for this study. These surrogates can be used to simulate a variety of scenarios and will serve as a tool for developing and validating forensic analysis methods.

  1. Reliability and validity of the Wolfram Unified Rating Scale (WURS)

    PubMed Central

    2012-01-01

    Background Wolfram syndrome (WFS) is a rare, neurodegenerative disease that typically presents with childhood onset insulin dependent diabetes mellitus, followed by optic atrophy, diabetes insipidus, deafness, and neurological and psychiatric dysfunction. There is no cure for the disease, but recent advances in research have improved understanding of the disease course. Measuring disease severity and progression with reliable and validated tools is a prerequisite for clinical trials of any new intervention for neurodegenerative conditions. To this end, we developed the Wolfram Unified Rating Scale (WURS) to measure the severity and individual variability of WFS symptoms. The aim of this study is to develop and test the reliability and validity of the Wolfram Unified Rating Scale (WURS). Methods A rating scale of disease severity in WFS was developed by modifying a standardized assessment for another neurodegenerative condition (Batten disease). WFS experts scored the representativeness of WURS items for the disease. The WURS was administered to 13 individuals with WFS (6-25 years of age). Motor, balance, mood and quality of life were also evaluated with standard instruments. Inter-rater reliability, internal consistency reliability, concurrent, predictive and content validity of the WURS were calculated. Results The WURS had high inter-rater reliability (ICCs>.93), moderate to high internal consistency reliability (Cronbach’s α = 0.78-0.91) and demonstrated good concurrent and predictive validity. There were significant correlations between the WURS Physical Assessment and motor and balance tests (rs>.67, p<.03), between the WURS Behavioral Scale and reports of mood and behavior (rs>.76, p<.04) and between WURS Total scores and quality of life (rs=-.86, p=.001). The WURS demonstrated acceptable content validity (Scale-Content Validity Index=0.83). 
Conclusions These preliminary findings demonstrate that the WURS has acceptable reliability and validity and captures individual differences in disease severity in children and young adults with WFS. PMID:23148655
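
    The internal-consistency figure reported above (Cronbach's α) follows a simple formula: α = k/(k−1) · (1 − Σ item variances / variance of the total score) for a k-item scale. A minimal Python sketch with hypothetical ratings (not the WURS data; the function name is illustrative):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per scale item; each list has one entry per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 3-item scale answered by five respondents (not the WURS data).
items = [
    [4, 3, 5, 2, 4],
    [4, 4, 5, 2, 3],
    [5, 3, 4, 1, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.9
```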

  2. Asthma Symptom Utility Index: Reliability, validity, responsiveness and the minimal important difference in adult asthma patients

    PubMed Central

    Bime, Christian; Wei, Christine Y.; Holbrook, Janet T.; Sockrider, Marianna M.; Revicki, Dennis A.; Wise, Robert A.

    2012-01-01

    Background The evaluation of asthma symptoms is a core outcome measure in asthma clinical research. The Asthma Symptom Utility Index (ASUI) was developed to assess frequency and severity of asthma symptoms. The psychometric properties of the ASUI are not well characterized and a minimal important difference (MID) is not established. Objectives We assessed the reliability, validity, and responsiveness to change of the ASUI in a population of adult asthma patients. We also sought to determine the MID for the ASUI. Methods Adult asthma patients (n = 1648) from two previously completed multicenter randomized trials were included. Demographic information, spirometry, ASUI scores, and other asthma questionnaire scores were obtained at baseline and during follow-up visits. Participants also kept a daily asthma diary. Results Internal consistency reliability of the ASUI was 0.74 (Cronbach’s alpha). Test-retest reliability was 0.76 (intra-class correlation). Construct validity was demonstrated by significant correlations between ASUI scores and Asthma Control Questionnaire (ACQ) scores (Spearman correlation r = −0.79, 95% CI [−0.85, −0.75], P<0.001) and Mini Asthma Quality of Life Questionnaire (Mini AQLQ) scores (r = 0.59, 95% CI [0.51, 0.61], P<0.001). Responsiveness to change was demonstrated, with significant differences between mean changes in ASUI score across groups of participants differing by 10% in the percent predicted FEV1 (P<0.001), and by 0.5 points in ACQ score (P < 0.001). Anchor-based methods and statistical methods support an MID for the ASUI of 0.09 points. Conclusions The ASUI is reliable, valid, and responsive to changes in asthma control over time. The MID of the ASUI (range of scores 0–1) is 0.09. PMID:23026499

  3. Validation of a method for quantitation of the clopidogrel active metabolite, clopidogrel, clopidogrel carboxylic acid, and 2-oxo-clopidogrel in feline plasma.

    PubMed

    Lyngby, Janne G; Court, Michael H; Lee, Pamela M

    2017-08-01

    The clopidogrel active metabolite (CAM) is unstable and challenging to quantitate. The objective was to validate a new method for stabilization and quantitation of CAM, clopidogrel, and the inactive metabolites clopidogrel carboxylic acid and 2-oxo-clopidogrel in feline plasma. Two healthy cats were administered clopidogrel to demonstrate the assay's in vivo utility. Stabilization of CAM was achieved by adding 2-bromo-3'-methoxyacetophenone to blood tubes to form a derivatized CAM (CAM-D). Method validation included evaluation of calibration curve linearity, accuracy, and precision; within and between assay precision and accuracy; and compound stability using spiked blank feline plasma. Analytes were measured by high performance liquid chromatography with tandem mass spectrometry. In vivo utility was demonstrated by a pharmacokinetic study of cats given a single oral dose of 18.75 mg clopidogrel. The 2-oxo-clopidogrel metabolite was unstable. Clopidogrel, CAM-D, and clopidogrel carboxylic acid appear stable for 1 week at room temperature and 9 months at -80°C. Standard curves showed linearity for CAM-D, clopidogrel, and clopidogrel carboxylic acid (r > 0.99). Between assay accuracy and precision was ≤2.6% and ≤7.1% for CAM-D and ≤17.9% and ≤11.3% for clopidogrel and clopidogrel carboxylic acid. Within assay precision for all three compounds was ≤7%. All three compounds were detected in plasma from healthy cats receiving clopidogrel. This methodology is accurate and precise for simultaneous quantitation of CAM-D, clopidogrel, and clopidogrel carboxylic acid in feline plasma but not 2-oxo-clopidogrel. Validation of this assay is the first step to more fully understanding the use of clopidogrel in cats. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Development and validation of a LC-MS/MS assay for quantitation of plasma citrulline for application to animal models of the acute radiation syndrome across multiple species.

    PubMed

    Jones, Jace W; Tudor, Gregory; Bennett, Alexander; Farese, Ann M; Moroni, Maria; Booth, Catherine; MacVittie, Thomas J; Kane, Maureen A

    2014-07-01

    The potential risk of a radiological catastrophe highlights the need for identifying and validating potential biomarkers that accurately predict radiation-induced organ damage. A key target organ that is acutely sensitive to the effects of irradiation is the gastrointestinal (GI) tract, referred to as the GI acute radiation syndrome (GI-ARS). Recently, citrulline has been identified as a potential circulating biomarker for radiation-induced GI damage. Prior to biologically validating citrulline as a biomarker for radiation-induced GI injury, there is the important task of developing and validating a quantitation assay for citrulline detection within the radiation animal models used for biomarker validation. Herein, we describe the analytical development and validation of citrulline detection using a liquid chromatography tandem mass spectrometry assay that incorporates stable-label isotope internal standards. Analytical validation for specificity, linearity, lower limit of quantitation, accuracy, intra- and interday precision, extraction recovery, matrix effects, and stability was performed under sample collection and storage conditions according to the Guidance for Industry, Bioanalytical Methods Validation issued by the US Food and Drug Administration. In addition, the method was biologically validated using plasma from well-characterized mouse, minipig, and nonhuman primate GI-ARS models. The results demonstrated that circulating citrulline can be confidently quantified from plasma. Additionally, circulating citrulline displayed a time-dependent response for radiological doses covering GI-ARS across multiple species.

  5. Acoustic Parametric Array for Identifying Standoff Targets

    NASA Astrophysics Data System (ADS)

    Hinders, M. K.; Rudd, K. E.

    2010-02-01

    An integrated simulation method for investigating nonlinear sound beams and 3D acoustic scattering from any combination of complicated objects is presented. A standard finite-difference simulation method is used to model pulsed nonlinear sound propagation from a source to a scattering target via the KZK equation. Then, a parallel 3D acoustic simulation method based on the finite integration technique is used to model the acoustic wave interaction with the target. Any combination of objects and material layers can be placed into the 3D simulation space to study the resulting interaction. Several example simulations are presented to demonstrate the simulation method and 3D visualization techniques. The combined simulation method is validated by comparing experimental and simulation data and a demonstration of how this combined simulation method assisted in the development of a nonlinear acoustic concealed weapons detector is also presented.
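
    For reference, the KZK (Khokhlov-Zabolotskaya-Kuznetsov) equation used for the nonlinear beam model has the standard form, with propagation along z, retarded time τ = t − z/c₀, small-signal sound speed c₀, sound diffusivity δ, coefficient of nonlinearity β, and ambient density ρ₀:

```latex
\frac{\partial^{2} p}{\partial z\,\partial\tau}
  \;=\; \frac{c_{0}}{2}\,\nabla_{\!\perp}^{2} p
  \;+\; \frac{\delta}{2 c_{0}^{3}}\,\frac{\partial^{3} p}{\partial\tau^{3}}
  \;+\; \frac{\beta}{2 \rho_{0} c_{0}^{3}}\,\frac{\partial^{2} p^{2}}{\partial\tau^{2}}
```

    The three right-hand terms account, respectively, for diffraction (transverse Laplacian), thermoviscous absorption, and nonlinear distortion of the propagating pressure field p.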

  6. Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.

    PubMed

    Li, Qiang; Doi, Kunio

    2006-04-01

    Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. Beyond development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. Encouragingly, more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross-validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
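
    The resubstitution pitfall the authors warn against is easy to reproduce: a 1-nearest-neighbour rule scored on its own training data looks perfect even when the labels are pure noise, while leave-one-out reports the honest chance level. A minimal stdlib sketch (synthetic data; not the paper's Monte Carlo setup):

```python
import random

random.seed(0)
n = 200
X = [(random.random(), random.random()) for _ in range(n)]  # random 2D features
y = [random.randint(0, 1) for _ in range(n)]                # random (uninformative) labels

def nn_label(query, train_idx):
    # Label of the training point nearest to `query` (squared Euclidean distance).
    best = min(train_idx, key=lambda j: (X[j][0] - query[0]) ** 2 + (X[j][1] - query[1]) ** 2)
    return y[best]

# Resubstitution: test on the very points the "model" memorized.
resub = sum(nn_label(X[i], range(n)) == y[i] for i in range(n)) / n
# Leave-one-out: hold each point out of its own neighbour search.
loo = sum(nn_label(X[i], [j for j in range(n) if j != i]) == y[i] for i in range(n)) / n
print(f"resubstitution accuracy: {resub:.2f}")  # 1.00 -- wildly optimistic
print(f"leave-one-out accuracy:  {loo:.2f}")    # near 0.50, the true chance level
```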

  7. Development and validation of a stability-indicating size-exclusion LC method for the determination of rhIFN-alpha2a in pharmaceutical formulations.

    PubMed

    Zimmermann, Estevan Sonego; da Silva, Lucélia Magalhães; Calegari, Guilherme Zanini; Stamm, Fernanda Pavani; Souto, Ricardo Bizogne; Dalmora, Sérgio Luiz

    2013-01-01

    A size-exclusion LC method was validated for the determination of interferon-alpha2a (rhIFN-alpha2a) in pharmaceutical formulations without interference from human serum albumin. Chromatographic separation was performed on a BioSep-SEC-S 2000 column (300 x 7.8 mm id). The mobile phase, consisting of 0.001 M monobasic potassium phosphate, 0.008 M dibasic sodium phosphate, and 0.2 M sodium chloride buffer (pH 7.4), was run at a gradient flow rate with photodiode array detection at 214 nm. Chromatographic separation was achieved with a retention time of 17.2 min, and the analysis was linear over the concentration range of 1.98 to 198 microg/mL (r2 = 0.9996). The accuracy was 101.39%, with bias lower than 1.67%. The LOD and LOQ were 0.87 and 1.98 microg/mL, respectively. Moreover, method validation demonstrated acceptable results for precision and robustness. The method was applied to the assessment of rhIFN-alpha2a and related proteins in biopharmaceutical dosage forms, and the content/potencies were correlated to those given by a validated RP-LC method and an in vitro bioassay. It was concluded that use of the methods in conjunction allows a great improvement in monitoring stability and QC, thereby ensuring the therapeutic efficacy of the biotechnology-derived medicine.

  8. The medline UK filter: development and validation of a geographic search filter to retrieve research about the UK from OVID medline.

    PubMed

    Ayiku, Lynda; Levay, Paul; Hudson, Tom; Craven, Jenny; Barrett, Elizabeth; Finnegan, Amy; Adams, Rachel

    2017-07-13

    A validated geographic search filter for the retrieval of research about the United Kingdom (UK) from bibliographic databases had not previously been published. To develop and validate a geographic search filter to retrieve research about the UK from OVID medline with high recall and precision. Three gold standard sets of references were generated using the relative recall method. The sets contained references to studies about the UK which had informed National Institute for Health and Care Excellence (NICE) guidance. The first and second sets were used to develop and refine the medline UK filter. The third set was used to validate the filter. Recall, precision and number-needed-to-read (NNR) were calculated using a case study. The validated medline UK filter demonstrated 87.6% relative recall against the third gold standard set. In the case study, the medline UK filter demonstrated 100% recall, 11.4% precision and a NNR of nine. A validated geographic search filter to retrieve research about the UK with high recall and precision has been developed. The medline UK filter can be applied to systematic literature searches in OVID medline for topics with a UK focus. © 2017 Crown copyright. Health Information and Libraries Journal © 2017 Health Libraries Group. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
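
    The filter performance measures used in the case study have simple definitions: recall = relevant records retrieved / all relevant records, precision = relevant records retrieved / all records retrieved, and number-needed-to-read NNR = 1/precision. A small sketch with hypothetical counts (not the study's figures):

```python
def filter_metrics(relevant_retrieved, total_retrieved, total_relevant):
    recall = relevant_retrieved / total_relevant      # share of relevant records the filter found
    precision = relevant_retrieved / total_retrieved  # share of retrieved records that are relevant
    nnr = 1 / precision  # records a reviewer must read per relevant record found
    return recall, precision, nnr

# Hypothetical counts for a filtered search (not the study's data).
recall, precision, nnr = filter_metrics(relevant_retrieved=40, total_retrieved=350, total_relevant=40)
print(f"recall={recall:.1%}  precision={precision:.1%}  NNR={nnr:.1f}")
```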

  9. Method development and validation for the simultaneous determination of organochlorine and organophosphorus pesticides in a complex sediment matrix.

    PubMed

    Alcántara-Concepción, Victor; Cram, Silke; Gibson, Richard; Ponce de León, Claudia; Mazari-Hiriart, Marisa

    2013-01-01

    The Xochimilco area in the southeastern part of Mexico City has a variety of socioeconomic activities, such as periurban agriculture, which is of great importance in the Mexico City metropolitan area. Pesticides are used extensively, some being legal, mostly chlorpyrifos and malathion, and some illegal, mostly DDT. Sediments are a common sink for pesticides in aquatic systems near agricultural areas, and Xochimilco sediments have a complex composition with high contents of organic matter and clay that are ideal adsorption sites for organochlorine (OC) and organophosphorus (OP) pesticides. Therefore, it is important to have a quick, affordable, and reliable method to determine these pesticides. Conventional methods for the determination of OC and OP pesticides are long, laborious, and costly owing to the high volume of solvents and adsorbents. The present study developed and validated a method for determining 18 OC and five OP pesticides in sediments with high organic and clay contents. In contrast with other methods described in the literature, this method allows isolation of the 23 pesticides with a 12 min microwave-assisted extraction (MAE) and one-step cleanup of pesticides. The method developed is a simpler, time-saving procedure that uses only 3.5 g of dry sediment. The use of MAE eliminates excessive handling and the possible loss of analytes. It was shown that the use of LC-Si cartridges with hexane-ethyl acetate (75+25, v/v) in the cleanup procedure recovered all pesticides with rates between 70 and 120%. The validation parameters demonstrated good performance of the method, with intermediate precision ranging from 7.3 to 17.0%, HorRat indexes all below 0.5, and tests of accuracy with the 23 pesticides at three concentration levels demonstrating recoveries ranging from 74 to 114% and RSDs from 3.3 to 12.7%.

  10. Determination of glomerular filtration rate (GFR) from fractional renal accumulation of iodinated contrast material: a convenient and rapid single-kidney CT-GFR technique.

    PubMed

    Yuan, XiaoDong; Tang, Wei; Shi, WenWei; Yu, Libao; Zhang, Jing; Yuan, Qing; You, Shan; Wu, Ning; Ao, Guokun; Ma, Tingting

    2018-07-01

    To develop a convenient and rapid single-kidney CT-GFR technique. One hundred and twelve patients referred for multiphasic renal CT and 99mTc-DTPA renal dynamic imaging Gates-GFR measurement were prospectively included and randomly divided into two groups of 56 patients each: the training group and the validation group. On the basis of the nephrographic phase images, the fractional renal accumulation (FRA) was calculated and correlated with the Gates-GFR in the training group. From this correlation a formula was derived for single-kidney CT-GFR calculation, which was validated by a paired t test and linear regression analysis with the single-kidney Gates-GFR in the validation group. In the training group, the FRA (x-axis) correlated well (r = 0.95, p < 0.001) with single-kidney Gates-GFR (y-axis), producing a regression equation of y = 1665x + 1.5 for single-kidney CT-GFR calculation. In the validation group, the difference between the methods of single-kidney GFR measurements was 0.38 ± 5.57 mL/min (p = 0.471); the regression line is identical to the diagonal (intercept = 0 and slope = 1) (p = 0.727 and p = 0.473, respectively), with a standard deviation of residuals of 5.56 mL/min. A convenient and rapid single-kidney CT-GFR technique was presented and validated in this investigation. • The new CT-GFR method takes about 2.5 min of patient time. • The CT-GFR method demonstrated identical results to the Gates-GFR method. • The CT-GFR method is based on the fractional renal accumulation of iodinated CM. • The CT-GFR method is achieved without additional radiation dose to the patient.
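
    The training-group regression above gives a direct recipe for the calculation: single-kidney GFR (mL/min) = 1665 × FRA + 1.5. A one-function sketch (the example FRA value and function name are hypothetical):

```python
def single_kidney_ct_gfr(fra):
    """fra: fractional renal accumulation of iodinated contrast by one kidney (dimensionless)."""
    return 1665 * fra + 1.5  # regression reported for the training group, in mL/min

# Hypothetical FRA value for illustration.
print(f"{single_kidney_ct_gfr(0.025):.1f} mL/min")  # 43.1 mL/min
```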

  11. Validation of a Plasma-Based Comprehensive Cancer Genotyping Assay Utilizing Orthogonal Tissue- and Plasma-Based Methodologies.

    PubMed

    Odegaard, Justin I; Vincent, John J; Mortimer, Stefanie; Vowles, James V; Ulrich, Bryan C; Banks, Kimberly C; Fairclough, Stephen R; Zill, Oliver A; Sikora, Marcin; Mokhtari, Reza; Abdueva, Diana; Nagy, Rebecca J; Lee, Christine E; Kiedrowski, Lesli A; Paweletz, Cloud P; Eltoukhy, Helmy; Lanman, Richard B; Chudova, Darya I; Talasaz, AmirAli

    2018-04-24

    Purpose: To analytically and clinically validate a circulating cell-free tumor DNA sequencing test for comprehensive tumor genotyping and demonstrate its clinical feasibility. Experimental Design: Analytic validation was conducted according to established principles and guidelines. Blood-to-blood clinical validation comprised blinded external comparison with clinical droplet digital PCR across 222 consecutive biomarker-positive clinical samples. Blood-to-tissue clinical validation comprised comparison of digital sequencing calls to those documented in the medical record of 543 consecutive lung cancer patients. Clinical experience was reported from 10,593 consecutive clinical samples. Results: Digital sequencing technology enabled variant detection down to 0.02% to 0.04% allelic fraction/2.12 copies with ≤0.3%/2.24-2.76 copies 95% limits of detection while maintaining high specificity [prevalence-adjusted positive predictive values (PPV) >98%]. Clinical validation using orthogonal plasma- and tissue-based clinical genotyping across >750 patients demonstrated high accuracy and specificity [positive percent agreement (PPAs) and negative percent agreement (NPAs) >99% and PPVs 92%-100%]. Clinical use in 10,593 advanced adult solid tumor patients demonstrated high feasibility (>99.6% technical success rate) and clinical sensitivity (85.9%), with high potential actionability (16.7% with FDA-approved on-label treatment options; 72.0% with treatment or trial recommendations), particularly in non-small cell lung cancer, where 34.5% of patient samples comprised a directly targetable standard-of-care biomarker. Conclusions: High concordance with orthogonal clinical plasma- and tissue-based genotyping methods supports the clinical accuracy of digital sequencing across all four types of targetable genomic alterations. Digital sequencing's clinical applicability is further supported by high rates of technical success and biomarker target discovery. Clin Cancer Res; 1-11. 
©2018 American Association for Cancer Research.

  12. The response dynamics of preferential choice.

    PubMed

    Koop, Gregory J; Johnson, Joseph G

    2013-12-01

    The ubiquity of psychological process models requires an increased degree of sophistication in the methods and metrics that we use to evaluate them. We contribute to this venture by capitalizing on recent work in cognitive science analyzing response dynamics, which shows that the bearing information processing dynamics have on intended action is also revealed in the motor system. This decidedly "embodied" view suggests that researchers are missing out on potential dependent variables with which to evaluate their models-those associated with the motor response that produces a choice. The current work develops a method for collecting and analyzing such data in the domain of decision making. We first validate this method using widely normed stimuli from the International Affective Picture System (Experiment 1), and demonstrate that curvature in response trajectories provides a metric of the competition between choice options. We next extend the method to risky decision making (Experiment 2) and develop predictions for three popular classes of process model. The data provided by response dynamics demonstrate that choices contrary to the maxim of risk seeking in losses and risk aversion in gains may be the product of at least one "online" preference reversal, and can thus begin to discriminate amongst the candidate models. Finally, we incorporate attentional data collected via eye-tracking (Experiment 3) to develop a formal computational model of joint information sampling and preference accumulation. In sum, we validate response dynamics for use in preferential choice tasks and demonstrate the unique conclusions afforded by response dynamics over and above traditional methods. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Validity and Reliability of Accelerometers in Patients With COPD: A SYSTEMATIC REVIEW.

    PubMed

    Gore, Shweta; Blackwood, Jennifer; Guyette, Mary; Alsalaheen, Bara

    2018-05-01

    Reduced physical activity is associated with poor prognosis in chronic obstructive pulmonary disease (COPD). Accelerometers have greatly improved quantification of physical activity by providing information on step counts, body positions, energy expenditure, and magnitude of force. The purpose of this systematic review was to compare the validity and reliability of accelerometers used in patients with COPD. An electronic database search of MEDLINE and CINAHL was performed. Study quality was assessed with the Strengthening the Reporting of Observational Studies in Epidemiology checklist while methodological quality was assessed using the modified Quality Appraisal Tool for Reliability Studies. The search yielded 5392 studies; 25 met inclusion criteria. The SenseWear Pro armband reported high criterion validity under controlled conditions (r = 0.75-0.93) and high reliability (ICC = 0.84-0.86) for step counts. The DynaPort MiniMod demonstrated highest concurrent validity for step count using both video and manual methods. Validity of the SenseWear Pro armband varied between studies especially in free-living conditions, slower walking speeds, and with addition of weights during gait. A high degree of variability was found in the outcomes used and statistical analyses performed between studies, indicating a need for further studies to measure reliability and validity of accelerometers in COPD. The SenseWear Pro armband is the most commonly used accelerometer in COPD, but measurement properties are limited by gait speed variability and assistive device use. DynaPort MiniMod and Stepwatch accelerometers demonstrated high validity in patients with COPD but lack reliability data.

  14. The Arthroscopic Surgical Skill Evaluation Tool (ASSET)

    PubMed Central

    Koehler, Ryan J.; Amsdell, Simon; Arendt, Elizabeth A; Bisson, Leslie J; Braman, Jonathan P; Butler, Aaron; Cosgarea, Andrew J; Harner, Christopher D; Garrett, William E; Olson, Tyson; Warme, Winston J.; Nicandri, Gregg T.

    2014-01-01

    Background Surgeries employing arthroscopic techniques are among the most commonly performed in orthopaedic clinical practice; however, valid and reliable methods of assessing the arthroscopic skill of orthopaedic surgeons are lacking. Hypothesis The Arthroscopic Surgery Skill Evaluation Tool (ASSET) will demonstrate content validity, concurrent criterion-oriented validity, and reliability when used to assess the technical ability of surgeons performing diagnostic knee arthroscopy on cadaveric specimens. Study Design Cross-sectional study; Level of evidence, 3. Methods Content validity was determined by a group of seven experts using a Delphi process. Intra-articular performance of a right and left diagnostic knee arthroscopy was recorded for twenty-eight residents and two sports medicine fellowship-trained attending surgeons. Subject performance was assessed by two blinded raters using the ASSET. Concurrent criterion-oriented validity, inter-rater reliability, and test-retest reliability were evaluated. Results Content validity: The content development group identified 8 arthroscopic skill domains to evaluate using the ASSET. Concurrent criterion-oriented validity: Significant differences in total ASSET score (p<0.05) between novice, intermediate, and advanced experience groups were identified. Inter-rater reliability: The ASSET scores assigned by each rater were strongly correlated (r=0.91, p<0.01) and the intra-class correlation coefficient between raters for the total ASSET score was 0.90. Test-retest reliability: There was a significant correlation between ASSET scores for both procedures attempted by each individual (r=0.79, p<0.01). Conclusion The ASSET appears to be a useful, valid, and reliable method for assessing surgeon performance of diagnostic knee arthroscopy in cadaveric specimens. Studies are ongoing to determine its generalizability to other procedures as well as to the live OR and other simulated environments. PMID:23548808

  15. Validated chromatographic and spectrophotometric methods for analysis of some amoebicide drugs in their combined pharmaceutical preparation.

    PubMed

    Abdelaleem, Eglal Adelhamid; Abdelwahab, Nada Sayed

    2013-01-01

    This work concerns the development and validation of chromatographic and spectrophotometric methods for the analysis of mebeverine HCl (MEH), diloxanide furoate (DF) and metronidazole (MET) in Dimetrol® tablets: spectrophotometric methods and an RP-HPLC method using UV detection. The developed spectrophotometric methods determine MEH and DF in the combined dosage form using the successive derivative ratio spectra method, which derivatizes the obtained ratio spectra in two steps using methanol as a solvent and measures MEH at 226.4-232.2 nm (peak to peak) and DF at 260.6-264.8 nm (peak to peak), while MET concentrations were determined using the first derivative (1D) at λ = 327 nm in the same solvent. The chromatographic method relies on HPLC separation on an ODS column, with elution by a mobile phase consisting of water: methanol: triethylamine (25:75:0.5, by volume; adjusted to pH 4 with orthophosphoric acid) pumped at 0.7 mL min-1 and UV detection at 230 nm. Factors affecting the developed methods were studied and optimized; moreover, the methods were validated as per the ICH guideline, and the results demonstrated that they are reproducible, reliable and suitable for routine use with a short analysis time. Statistical comparison of the two developed methods using F and Student's t-tests showed no significant difference.

  16. Factor complexity of crash occurrence: An empirical demonstration using boosted regression trees.

    PubMed

    Chung, Yi-Shih

    2013-12-01

    Factor complexity is a characteristic of traffic crashes. This paper proposes a novel method, namely boosted regression trees (BRT), to investigate the complex and nonlinear relationships in high-variance traffic crash data. The Taiwanese 2004-2005 single-vehicle motorcycle crash data are used to demonstrate the utility of BRT. Traditional logistic regression and classification and regression tree (CART) models are also used to compare their estimation results and external validities. Both the in-sample cross-validation and out-of-sample validation results show that an increase in tree complexity provides improved, although diminishing, classification performance, indicating a limited factor complexity of single-vehicle motorcycle crashes. The effects of crucial variables including geographical, time, and sociodemographic factors explain some fatal crashes. Relatively unique fatal crashes are better approximated by interactive terms, especially combinations of behavioral factors. BRT models generally provide better transferability than conventional logistic regression and CART models. This study also discusses the implications of the results for devising safety policies. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. 77 FR 72226 - Picoxystrobin; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ... one of the following methods: Federal eRulemaking Portal: http://www.regulations.gov . Follow the... validity, completeness, and reliability as well as the relationship of the results of the studies to human... as demonstrated by the severe eye irritation effect seen in the primary eye irritation study on...

  18. Psychosocial Acute Treatment in Early-Episode Schizophrenia Disorders

    ERIC Educational Resources Information Center

    Bola, John R.

    2006-01-01

    Objective: This article reviews evidence on the treatment of early episode schizophrenia spectrum disorders that contradicts, in some cases, the American Psychiatric Association's generic recommendation of antipsychotic medication treatment for at least a year. Method: Evidence on lack of diagnostic validity, absence of demonstrated long-term…

  19. Validation of the revised Mystical Experience Questionnaire in experimental sessions with psilocybin.

    PubMed

    Barrett, Frederick S; Johnson, Matthew W; Griffiths, Roland R

    2015-11-01

    The 30-item revised Mystical Experience Questionnaire (MEQ30) was previously developed within an online survey of mystical-type experiences occasioned by psilocybin-containing mushrooms. The rated experiences occurred on average eight years before completion of the questionnaire. The current paper validates the MEQ30 using data from experimental studies with controlled doses of psilocybin. Data were pooled and analyzed from five laboratory experiments in which participants (n=184) received a moderate to high oral dose of psilocybin (at least 20 mg/70 kg). Results of confirmatory factor analysis demonstrate the reliability and internal validity of the MEQ30. Structural equation models demonstrate the external and convergent validity of the MEQ30 by showing that latent variable scores on the MEQ30 positively predict persisting change in attitudes, behavior, and well-being attributed to experiences with psilocybin while controlling for the contribution of the participant-rated intensity of drug effects. These findings support the use of the MEQ30 as an efficient measure of individual mystical experiences. A method to score a "complete mystical experience" that was used in previous versions of the mystical experience questionnaire is validated in the MEQ30, and a stand-alone version of the MEQ30 is provided for use in future research. © The Author(s) 2015.

  20. Validation of the revised Mystical Experience Questionnaire in experimental sessions with psilocybin

    PubMed Central

    Barrett, Frederick S; Johnson, Matthew W; Griffiths, Roland R

    2016-01-01

    The 30-item revised Mystical Experience Questionnaire (MEQ30) was previously developed within an online survey of mystical-type experiences occasioned by psilocybin-containing mushrooms. The rated experiences occurred on average eight years before completion of the questionnaire. The current paper validates the MEQ30 using data from experimental studies with controlled doses of psilocybin. Data were pooled and analyzed from five laboratory experiments in which participants (n=184) received a moderate to high oral dose of psilocybin (at least 20 mg/70 kg). Results of confirmatory factor analysis demonstrate the reliability and internal validity of the MEQ30. Structural equation models demonstrate the external and convergent validity of the MEQ30 by showing that latent variable scores on the MEQ30 positively predict persisting change in attitudes, behavior, and well-being attributed to experiences with psilocybin while controlling for the contribution of the participant-rated intensity of drug effects. These findings support the use of the MEQ30 as an efficient measure of individual mystical experiences. A method to score a “complete mystical experience” that was used in previous versions of the mystical experience questionnaire is validated in the MEQ30, and a stand-alone version of the MEQ30 is provided for use in future research. PMID:26442957

  1. The "Case of Two Compounds with Similar Configuration but Nearly Mirror Image CD Spectra" Refuted. Reassignment of the Absolute Configuration of N-Formyl-3',4'-dihydrospiro[indan-1,2'(1'H)-pyridine].

    PubMed

    Padula, Daniele; Di Bari, Lorenzo; Pescitelli, Gennaro

    2016-09-02

    In 1997, Sandström and co-workers reported the case of two chiral spiro compounds with very similar skeletons but almost mirror-image electronic circular dichroism (ECD) spectra for the corresponding absolute configuration. The paper has often been cited as proof, and a good educational example, of the pronounced sensitivity of ECD toward molecular conformation, and as a clear warning against the use of ECD spectral correlations to assign absolute configurations. Although both concepts remain valid, they are not exemplified by the quoted paper. We demonstrate that the original configurational assignment of one compound was wrong and revise it by using TDDFT calculations. The main reason for the observed failure is the use of the matrix method, a popular approach to predict ECD spectra of compounds which can be treated with an independent system approximation (ISA), including proteins. Using a modern version of the matrix method, we demonstrate that the ISA is not valid for the title compound. Even in the absence of apparent conjugation between the component chromophores, the validity of the ISA should never be taken for granted and the effective extent of orbital overlap should always be verified.

  2. Design and Validation of an Augmented Reality System for Laparoscopic Surgery in a Real Environment

    PubMed Central

    López-Mir, F.; Naranjo, V.; Fuertes, J. J.; Alcañiz, M.; Bueno, J.; Pareja, E.

    2013-01-01

    Purpose. This work presents the protocol carried out in the development and validation of an augmented reality system which was installed in an operating theatre to help surgeons with trocar placement during laparoscopic surgery. The purpose of this validation is to demonstrate the improvements that this system can provide to the field of medicine, particularly surgery. Method. Two experiments that were noninvasive for both the patient and the surgeon were designed. In one of these experiments the augmented reality system was used; the other was a control experiment in which the system was not used. The type of operation selected for all cases was a cholecystectomy due to the low degree of complexity and complications before, during, and after the surgery. The technique used in the placement of trocars was the French technique, but the results can be extrapolated to any other technique and operation. Results and Conclusion. Four clinicians and ninety-six measurements obtained from twenty-four patients (randomly assigned in each experiment) were involved in these experiments. The final results show an improvement in accuracy and variability of 33% and 63%, respectively, in comparison to traditional methods, demonstrating that the use of an augmented reality system offers advantages for trocar placement in laparoscopic surgery. PMID:24236293

  3. Systematic review of methods for quantifying teamwork in the operating theatre

    PubMed Central

    Marshall, D.; Sykes, M.; McCulloch, P.; Shalhoub, J.; Maruthappu, M.

    2018-01-01

    Background Teamwork in the operating theatre is becoming increasingly recognized as a major factor in clinical outcomes. Many tools have been developed to measure teamwork. Most fall into two categories: self‐assessment by theatre staff and assessment by observers. A critical and comparative analysis of the validity and reliability of these tools is lacking. Methods MEDLINE and Embase databases were searched following PRISMA guidelines. Content validity was assessed using measurements of inter‐rater agreement, predictive validity and multisite reliability, and interobserver reliability using statistical measures of inter‐rater agreement and reliability. Quantitative meta‐analysis was deemed unsuitable. Results Forty‐eight articles were selected for final inclusion; self‐assessment tools were used in 18 and observational tools in 28, and there were two qualitative studies. Self‐assessment of teamwork by profession varied with the profession of the assessor. The most robust self‐assessment tool was the Safety Attitudes Questionnaire (SAQ), although this failed to demonstrate multisite reliability. The most robust observational tool was the Non‐Technical Skills (NOTECHS) system, which demonstrated both test–retest reliability (P > 0.09) and interobserver reliability (Rwg = 0.96). Conclusion Self‐assessment of teamwork by the theatre team was influenced by professional differences. Observational tools, when used by trained observers, circumvented this.

  4. Monte Carlo investigation of transient acoustic fields in partially or completely bounded medium. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Thanedar, B. D.

    1972-01-01

    A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The suggested Monte Carlo method responds to the need for a numerical method, for a bounded medium, to supplement analytical solutions, which are valid only when the boundaries have simple shapes. For the analysis, a suitable model was created from which was developed an algorithm for the estimation of acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a medium which is homogeneous and is enclosed by either rectangular or curved boundaries.

  5. Quantitative determination of additive Chlorantraniliprole in Abamectin preparation: Investigation of bootstrapping soft shrinkage approach by mid-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Yan, Hong; Song, Xiangzhong; Tian, Kuangda; Chen, Yilin; Xiong, Yanmei; Min, Shungeng

    2018-02-01

    A novel method, mid-infrared (MIR) spectroscopy, which enables the determination of Chlorantraniliprole in Abamectin within minutes, is proposed. We further evaluate the prediction ability of four wavelength selection methods: the bootstrapping soft shrinkage approach (BOSS), Monte Carlo uninformative variable elimination (MCUVE), genetic algorithm partial least squares (GA-PLS) and competitive adaptive reweighted sampling (CARS). The results showed that the BOSS method obtained the lowest root mean squared error of cross-validation (RMSECV) (0.0245) and root mean squared error of prediction (RMSEP) (0.0271), as well as the highest coefficient of determination of cross-validation (Q²cv) (0.9998) and coefficient of determination of the test set (Q²test) (0.9989), which demonstrated that mid-infrared spectroscopy can be used to detect Chlorantraniliprole in Abamectin conveniently. Meanwhile, a suitable wavelength selection method (BOSS) is essential to conducting a component spectral analysis.

  6. Integration of design and inspection

    NASA Astrophysics Data System (ADS)

    Simmonds, William H.

    1990-08-01

    Developments in advanced computer integrated manufacturing technology, coupled with the emphasis on Total Quality Management, are exposing needs for new techniques to integrate all functions from design through to support of the delivered product. One critical functional area that must be integrated into design is that embracing the measurement, inspection and test activities necessary for validation of the delivered product. This area is being tackled by a collaborative project supported by the UK Government Department of Trade and Industry. The project is aimed at developing techniques for analysing validation needs and for planning validation methods. Within the project an experimental Computer Aided Validation Expert system (CAVE) is being constructed. This operates with a generalised model of the validation process and helps with all design stages: specification of product requirements; analysis of the assurance provided by a proposed design and method of manufacture; development of the inspection and test strategy; and analysis of feedback data. The kernel of the system is a knowledge base containing knowledge of the manufacturing process capabilities and of the available inspection and test facilities. The CAVE system is being integrated into a real life advanced computer integrated manufacturing facility for demonstration and evaluation.

  7. Prediction of Protein-Protein Interaction Sites with Machine-Learning-Based Data-Cleaning and Post-Filtering Procedures.

    PubMed

    Liu, Guang-Hui; Shen, Hong-Bin; Yu, Dong-Jun

    2016-04-01

    Accurately predicting protein-protein interaction sites (PPIs) is currently a hot topic because it has been demonstrated to be very useful for understanding disease mechanisms and designing drugs. Machine-learning-based computational approaches have been broadly utilized and demonstrated to be useful for PPI prediction. However, directly applying traditional machine learning algorithms, which often assume that samples in different classes are balanced, often leads to poor performance because of the severe class imbalance that exists in the PPI prediction problem. In this study, we propose a novel method for improving PPI prediction performance by relieving the severity of class imbalance using a data-cleaning procedure and reducing predicted false positives with a post-filtering procedure: First, a machine-learning-based data-cleaning procedure is applied to remove those marginal targets, which may potentially have a negative effect on training a model with a clear classification boundary, from the majority samples to relieve the severity of class imbalance in the original training dataset; then, a prediction model is trained on the cleaned dataset; finally, an effective post-filtering procedure is further used to reduce potential false positive predictions. Stringent cross-validation and independent validation tests on benchmark datasets demonstrated the efficacy of the proposed method, which exhibits highly competitive performance compared with existing state-of-the-art sequence-based PPIs predictors and should supplement existing PPI prediction methods.
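A minimal sketch of the two-stage idea described above: (1) remove marginal majority-class samples before training, and (2) post-filter low-confidence positive predictions. The data are synthetic and the probability thresholds (0.35/0.65 for "marginal", 0.7 for the post-filter) are illustrative assumptions, not the paper's values:

```python
# Sketch only: data-cleaning plus post-filtering under class imbalance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Stage 0: preliminary model to score the training samples.
prelim = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p_tr = prelim.predict_proba(X_tr)[:, 1]

# Data cleaning: drop majority-class (negative) samples whose scores sit near
# the decision boundary, since they blur the classification boundary.
marginal = (y_tr == 0) & (p_tr > 0.35) & (p_tr < 0.65)
X_clean, y_clean = X_tr[~marginal], y_tr[~marginal]

model = LogisticRegression(max_iter=1000).fit(X_clean, y_clean)

# Post-filtering: keep only confident positives to reduce false positives.
p_te = model.predict_proba(X_te)[:, 1]
y_hat = (p_te >= 0.7).astype(int)
print("predicted positives:", y_hat.sum(), "of", len(y_hat))
```

The original method uses its own machine-learning criterion for identifying marginal targets; a fixed probability band is only the simplest stand-in.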

  8. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates.

    PubMed

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
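The influence-curve approach can be illustrated with a simplified computation: score every sample out-of-fold, compute the pooled cross-validated AUC, and estimate its variance from the empirical variance of the AUC influence function. This is a hedged illustration in the spirit of the approach (ties between scores are ignored), not the authors' exact estimator:

```python
# Sketch only: influence-curve variance for cross-validated AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=500, random_state=0)
# Out-of-fold scores: each sample is scored by a model not trained on it.
scores = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                           cv=10, method="predict_proba")[:, 1]
auc = roc_auc_score(y, scores)

pos, neg = scores[y == 1], scores[y == 0]
p1, p0 = y.mean(), 1 - y.mean()
n = len(y)

# Influence curve: a positive sample contributes via the fraction of
# negatives it outranks; a negative via the fraction of positives above it.
ic = np.empty(n)
for i in range(n):
    if y[i] == 1:
        ic[i] = ((scores[i] > neg).mean() - auc) / p1
    else:
        ic[i] = ((pos > scores[i]).mean() - auc) / p0

se = np.sqrt(ic.var() / n)
print(f"CV AUC = {auc:.3f} +/- {1.96 * se:.3f} (95% CI half-width)")
```

Unlike the bootstrap, this requires only one pass over the out-of-fold scores, which is the source of the computational savings the abstract describes.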

  9. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates

    PubMed Central

    Petersen, Maya; van der Laan, Mark

    2015-01-01

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737

  10. Assessing Performance in Shoulder Arthroscopy: The Imperial Global Arthroscopy Rating Scale (IGARS).

    PubMed

    Bayona, Sofia; Akhtar, Kash; Gupte, Chinmay; Emery, Roger J H; Dodds, Alexander L; Bello, Fernando

    2014-07-02

    Surgical training is undergoing major changes with reduced resident work hours and an increasing focus on patient safety and surgical aptitude. The aim of this study was to create a valid, reliable method for an assessment of arthroscopic skills that is independent of time and place and is designed for both real and simulated settings. The validity of the scale was tested using a virtual reality shoulder arthroscopy simulator. The study consisted of two parts. In the first part, an Imperial Global Arthroscopy Rating Scale for assessing technical performance was developed using a Delphi method. Application of this scale required installing a dual-camera system to synchronously record the simulator screen and body movements of trainees to allow an assessment that is independent of time and place. The scale includes aspects such as efficient portal positioning, angles of instrument insertion, proficiency in handling the arthroscope and adequately manipulating the camera, and triangulation skills. In the second part of the study, a validation study was conducted. Two experienced arthroscopic surgeons, blinded to the identities and experience of the participants, each assessed forty-nine subjects performing three different tests using the Imperial Global Arthroscopy Rating Scale. Results were analyzed using two-way analysis of variance with measures of absolute agreement. The intraclass correlation coefficient was calculated for each test to assess inter-rater reliability. The scale demonstrated high internal consistency (Cronbach alpha, 0.918). The intraclass correlation coefficient demonstrated high agreement between the assessors: 0.91 (p < 0.001). Construct validity was evaluated using Kruskal-Wallis one-way analysis of variance (chi-square test, 29.826; p < 0.001), demonstrating that the Imperial Global Arthroscopy Rating Scale distinguishes significantly between subjects with different levels of experience utilizing a virtual reality simulator. 
The Imperial Global Arthroscopy Rating Scale has a high internal consistency and excellent inter-rater reliability and offers an approach for assessing technical performance in basic arthroscopy on a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale provides detailed information on surgical skills. Although it requires further validation in the operating room, this scale, which is independent of time and place, offers a robust and reliable method for assessing arthroscopic technical skills. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
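The two reliability statistics reported above can be computed from first principles. The sketch below uses made-up ratings (49 subjects and two blinded raters, mirroring the study's design, plus a hypothetical six-item scale); the formulas are the standard Cronbach's alpha and the Shrout-Fleiss two-way random-effects ICC(2,1), not code from the paper:

```python
# Sketch only: internal consistency and inter-rater agreement statistics.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: array of shape (n_subjects, k_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def icc_2_1(ratings: np.ndarray) -> float:
    """ratings: (n_subjects, k_raters); two-way random, absolute agreement."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # subjects MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # raters MS
    sse = ((ratings - row_means[:, None]
            - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(2)
true_skill = rng.normal(size=(49, 1))                    # 49 subjects
ratings = true_skill + 0.3 * rng.normal(size=(49, 2))    # 2 blinded raters
items = true_skill + 0.5 * rng.normal(size=(49, 6))      # 6 scale items

print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")
```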

  11. Extension of the ratio method to low energy

    DOE PAGES

    Colomer, Frederic; Capel, Pierre; Nunes, F. M.; ...

    2016-05-25

    The ratio method has been proposed as a means to remove the reaction model dependence in the study of halo nuclei. Originally, it was developed for higher energies but given the potential interest in applying the method at lower energy, in this work we explore its validity at 20 MeV/nucleon. The ratio method takes the ratio of the breakup angular distribution and the summed angular distribution (which includes elastic, inelastic and breakup) and uses this observable to constrain the features of the original halo wave function. In this work we use the Continuum Discretized Coupled Channel method and the Coulomb-corrected Dynamical Eikonal Approximation for the study. We study the reactions of 11Be on 12C, 40Ca and 208Pb at 20 MeV/nucleon. We compare the various theoretical descriptions and explore the dependence of our result on the core-target interaction. Lastly, our study demonstrates that the ratio method is valid at these lower beam energies.

  12. The bottom-up approach to integrative validity: a new perspective for program evaluation.

    PubMed

    Chen, Huey T

    2010-08-01

    The Campbellian validity model and the traditional top-down approach to validity have had a profound influence on research and evaluation. That model includes the concepts of internal and external validity and within that model, the preeminence of internal validity as demonstrated in the top-down approach. Evaluators and researchers have, however, increasingly recognized that in an evaluation, the over-emphasis on internal validity reduces that evaluation's usefulness and contributes to the gulf between academic and practical communities regarding interventions. This article examines the limitations of the Campbellian validity model and the top-down approach and provides a comprehensive, alternative model, known as the integrative validity model for program evaluation. The integrative validity model includes the concept of viable validity, which is predicated on a bottom-up approach to validity. This approach better reflects stakeholders' evaluation views and concerns, makes external validity workable, and becomes therefore a preferable alternative for evaluation of health promotion/social betterment programs. The integrative validity model and the bottom-up approach enable evaluators to meet scientific and practical requirements, facilitate in advancing external validity, and gain a new perspective on methods. The new perspective also furnishes a balanced view of credible evidence, and offers an alternative perspective for funding. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  13. Development and Psychometric Evaluation of the HPV Clinical Trial Survey for Parents (CTSP‐HPV) Using Traditional Survey Development Methods and Community Engagement Principles

    PubMed Central

    Wallston, Kenneth A.; Wilkins, Consuelo H.; Hull, Pamela C.; Miller, Stephania T.

    2015-01-01

    Abstract Objective This study describes the development and psychometric evaluation of the HPV Clinical Trial Survey for Parents with Children Aged 9 to 15 (CTSP‐HPV) using traditional instrument development methods and community engagement principles. Methods An expert panel and parental input informed survey content and parents recommended study design changes (e.g., flyer wording). A convenience sample of 256 parents completed the final survey measuring parental willingness to consent to HPV clinical trial (CT) participation and other factors hypothesized to influence willingness (e.g., HPV vaccine benefits). Cronbach's α, Spearman correlations, and multiple linear regression were used to estimate internal consistency, convergent and discriminant validity, and predictive validity, respectively. Results Internal reliability was confirmed for all scales (α ≥ 0.70). Parental willingness was positively associated (p < 0.05) with trust in medical researchers, adolescent CT knowledge, HPV vaccine benefits, and advantages of adolescent CTs (r range 0.33–0.42), supporting convergent validity. Moderate discriminant construct validity was also demonstrated. Regression results indicate reasonable predictive validity with the six scales accounting for 31% of the variance in parents’ willingness. Conclusions This instrument can inform interventions based on factors that influence parental willingness, which may lead to the eventual increase in trial participation. Further psychometric testing is warranted. PMID:26530324

  14. Reliability and validity of the adolescent health profile-types.

    PubMed

    Riley, A W; Forrest, C B; Starfield, B; Green, B; Kang, M; Ensminger, M

    1998-08-01

    The purpose of this study was to demonstrate the preliminary reliability and validity of a set of 13 profiles of adolescent health that describe distinct patterns of health and health service requirements on four domains of health. Reliability and validity were tested in four ethnically diverse population samples of urban and rural youths aged 11 to 17 years in public schools (N = 4,066). The reliability of the classification procedure and construct validity were examined in terms of the predicted and actual distributions of age, gender, race, socioeconomic status, and family type. School achievement, medical conditions, and the proportion of youths with a psychiatric disorder also were examined as tests of construct validity. The classification method was shown to produce consistent results across the four populations in terms of proportions of youths assigned with specific sociodemographic characteristics. Variations in health described by specific profiles showed expected relations to sociodemographic characteristics, family structure, school achievement, medical disorders, and psychiatric disorders. This taxonomy of health profile-types appears to effectively describe a set of patterns that characterize adolescent health. The profile-types provide a unique and practical method for identifying subgroups having distinct needs for health services, with potential utility for health policy and planning. Such integrative reporting methods are critical for more effective utilization of health status instruments in health resource planning and policy development.

  15. Mixed-methods development of a new patient-reported outcome instrument for chronic low back pain: part 1-the Patient Assessment for Low Back Pain - Symptoms (PAL-S).

    PubMed

    Martin, Mona L; Blum, Steven I; Liedgens, Hiltrud; Bushnell, Donald M; McCarrier, Kelly P; Hatley, Noël V; Ramasamy, Abhilasha; Freynhagen, Rainer; Wallace, Mark; Argoff, Charles; Eerdekens, Mariëlle; Kok, Maurits; Patrick, Donald L

    2018-06-01

    We describe the mixed-methods (qualitative and quantitative) development and preliminary validation of the Patient Assessment for Low Back Pain-Symptoms (PAL-S), a patient-reported outcome measure for use in chronic low back pain (cLBP) clinical trials. Qualitative methods (concept elicitation and cognitive interviews) were used to identify and refine symptom concepts and quantitative methods (classical test theory and Rasch measurement theory) were used to evaluate item- and scale-level performance of the measure using an iterative approach. Patients with cLBP participated in concept elicitation interviews (N = 43), cognitive interviews (N = 38), and interview-based assessment of paper-to-electronic mode equivalence (N = 8). A web-based sample of patients with self-reported cLBP participated in quantitative studies to evaluate preliminary (N = 598) and revised (n = 401) drafts and a physician-diagnosed cohort of patients with cLBP (N = 45) participated in preliminary validation of the measure. The PAL-S contained 14 items describing symptoms (overall pain, sharp, prickling, sensitive, tender, radiating, shocking, shooting, burning, squeezing, muscle spasms, throbbing, aching, and stiffness). Item-level performance, scale structure, and scoring seemed to be appropriate. One-week test-retest reproducibility was acceptable (intraclass correlation coefficient 0.81 [95% confidence interval, 0.61-0.91]). Convergent validity was demonstrated with total score and MOS-36 Bodily Pain (Pearson correlation -0.79), Neuropathic Pain Symptom Inventory (0.73), Roland-Morris Disability Questionnaire (0.67), and MOS-36 Physical Functioning (-0.65). Individual item scores and total score discriminated between numeric rating scale tertile groups and painDETECT categories. Respondent interpretation of paper and electronic administration modes was equivalent. The PAL-S has demonstrated content validity and is potentially useful to assess treatment benefit in cLBP clinical trials.

  16. Evaluation of the Thermo Scientific™ SureTect™ Salmonella species Assay.

    PubMed

    Cloke, Jonathan; Clark, Dorn; Radcliff, Roy; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko

    2014-03-01

    The Thermo Scientific™ SureTect™ Salmonella species Assay is a new real-time PCR assay for the detection of Salmonellae in food and environmental samples. This validation study was conducted using the AOAC Research Institute (RI) Performance Tested Methods℠ program to validate the SureTect Salmonella species Assay in comparison to the reference method detailed in International Organization for Standardization 6579:2002 in a variety of food matrixes, namely, raw ground beef, raw chicken breast, raw ground pork, fresh bagged lettuce, pork frankfurters, nonfat dried milk powder, cooked peeled shrimp, pasteurized liquid whole egg, ready-to-eat meal containing beef, and stainless steel surface samples. With the exception of liquid whole egg and fresh bagged lettuce, which were tested in-house, all matrixes were tested by Marshfield Food Safety, Marshfield, WI, on behalf of Thermo Fisher Scientific. In addition, three matrixes (pork frankfurters, lettuce, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled laboratory study by the University of Guelph, Canada. No significant difference by probability of detection or McNemar's chi-squared statistical analysis was found between the candidate or reference methods for any of the food matrixes or environmental surface samples tested during the validation study. Inclusivity and exclusivity testing was conducted with 117 and 36 isolates, respectively, which demonstrated that the SureTect Salmonella species Assay was able to detect all the major groups of Salmonella enterica subspecies enterica (e.g., Typhimurium) and the less common subspecies of S. enterica (e.g., arizoniae) and the rarely encountered S. bongori. None of the exclusivity isolates analyzed were detected by the SureTect Salmonella species Assay. 
Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation (enrichment time and temperature, and lysis temperature), which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay shelf life.

  17. Content validation: clarity/relevance, reliability and internal consistency of enunciative signs of language acquisition.

    PubMed

    Crestani, Anelise Henrich; Moraes, Anaelena Bragança de; Souza, Ana Paula Ramos de

    2017-08-10

    To analyze the results of the validation of building enunciative signs of language acquisition for children aged 3 to 12 months. The signs were built based on mechanisms of language acquisition in an enunciative perspective and on clinical experience with language disorders. The signs were submitted to judgment of clarity and relevance by a sample of six experts: doctors in linguistics with knowledge of psycholinguistics and the language clinic. In the validation of reliability, two judges/evaluators helped to implement the instruments in videos of 20% of the total sample of mother-infant dyads using the inter-evaluator method. The method known as internal consistency was applied to the total sample, which consisted of 94 mother-infant dyads for the contents of Phase 1 (3-6 months) and 61 mother-infant dyads for the contents of Phase 2 (7-12 months). The data were collected through the analysis of mother-infant interaction based on filming of dyads and application of the parameters to be validated according to the child's age. Data were organized in a spreadsheet and then converted to computer applications for statistical analysis. The judgments of clarity/relevance indicated no modifications to be made in the instruments. The reliability test showed an almost perfect agreement between judges (0.8 ≤ kappa ≤ 1.0); only item 2 of Phase 1 showed substantial agreement (0.6 ≤ kappa ≤ 0.79). The internal consistency for Phase 1 had alpha = 0.84, and Phase 2, alpha = 0.74. This demonstrates the reliability of the instruments. The results suggest adequacy as to content validity of the instruments created for both age groups, demonstrating the relevance of the content of enunciative signs of language acquisition.
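The inter-evaluator agreement reported above is conventionally measured with Cohen's kappa. A hedged sketch on illustrative binary sign-present/sign-absent ratings (the actual judges' data are not available), using scikit-learn's `cohen_kappa_score`:

```python
# Sketch only: Cohen's kappa for two raters on 20 hypothetical judgments.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0]
rater_b = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b)
# Common interpretation bands: 0.61-0.80 substantial, 0.81-1.00 almost perfect.
print(f"kappa = {kappa:.2f}")  # prints "kappa = 0.79" (substantial agreement)
```

Here the raters agree on 18 of 20 judgments (observed agreement 0.90), but kappa discounts the 0.52 agreement expected by chance, giving (0.90 - 0.52) / (1 - 0.52) ≈ 0.79.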

  18. Targeted Mass Spectrometric Approach for Biomarker Discovery and Validation with Nonglycosylated Tryptic Peptides from N-linked Glycoproteins in Human Plasma*

    PubMed Central

    Lee, Ju Yeon; Kim, Jin Young; Park, Gun Wook; Cheon, Mi Hee; Kwon, Kyung-Hoon; Ahn, Yeong Hee; Moon, Myeong Hee; Lee, Hyoung–Joo; Paik, Young Ki; Yoo, Jong Shin

    2011-01-01

A simple mass spectrometric approach for the discovery and validation of biomarkers in human plasma was developed by targeting nonglycosylated tryptic peptides adjacent to glycosylation sites in an N-linked glycoprotein, one of the most important biomarkers for early detection, prognoses, and disease therapies. The discovery and validation of novel biomarkers require complex sample pretreatment steps, such as depletion of highly abundant proteins, enrichment of desired proteins, or the development of new antibodies. The current study exploited the steric hindrance of glycan units in N-linked glycoproteins, which significantly affects the efficiency of proteolytic digestion if an enzymatically active amino acid is adjacent to the N-linked glycosylation site. Proteolytic digestion then results in quantitatively different peptide products in accordance with the degree of glycosylation. The effect of glycan steric hindrance on tryptic digestion was first demonstrated using alpha-1-acid glycoprotein (AGP) as a model compound versus its deglycosylated form. Second, nonglycosylated tryptic peptide biomarkers, which generally show much higher sensitivity in mass spectrometric analyses than their glycosylated counterparts, were quantified in human hepatocellular carcinoma plasma using a label-free method with no need for N-linked glycoprotein enrichment. Finally, the method was validated using a multiple reaction monitoring analysis, demonstrating that the newly discovered nonglycosylated tryptic peptide targets were present at different levels in normal and hepatocellular carcinoma plasmas. The area under the receiver operating characteristic curve generated through analyses of nonglycosylated tryptic peptide from vitronectin precursor protein was 0.978, the highest observed in a group of patients with hepatocellular carcinoma. 
This work provides a targeted means of discovering and validating nonglycosylated tryptic peptides as biomarkers in human plasma, without the need for complex enrichment processes or expensive antibody preparations. PMID:21940909

  19. Incorporating High-Frequency Physiologic Data Using Computational Dictionary Learning Improves Prediction of Delayed Cerebral Ischemia Compared to Existing Methods.

    PubMed

    Megjhani, Murad; Terilli, Kalijah; Frey, Hans-Peter; Velazquez, Angela G; Doyle, Kevin William; Connolly, Edward Sander; Roh, David Jinou; Agarwal, Sachin; Claassen, Jan; Elhadad, Noemie; Park, Soojin

    2018-01-01

    Accurate prediction of delayed cerebral ischemia (DCI) after subarachnoid hemorrhage (SAH) can be critical for planning interventions to prevent poor neurological outcome. This paper presents a model using convolution dictionary learning to extract features from physiological data available from bedside monitors. We develop and validate a prediction model for DCI after SAH, demonstrating improved precision over standard methods alone. 488 consecutive SAH admissions from 2006 to 2014 to a tertiary care hospital were included. Models were trained on 80%, while 20% were set aside for validation testing. Modified Fisher Scale was considered the standard grading scale in clinical use; baseline features also analyzed included age, sex, Hunt-Hess, and Glasgow Coma Scales. An unsupervised approach using convolution dictionary learning was used to extract features from physiological time series (systolic blood pressure and diastolic blood pressure, heart rate, respiratory rate, and oxygen saturation). Classifiers (partial least squares and linear and kernel support vector machines) were trained on feature subsets of the derivation dataset. Models were applied to the validation dataset. The performances of the best classifiers on the validation dataset are reported by feature subset. Standard grading scale (mFS): AUC 0.54. Combined demographics and grading scales (baseline features): AUC 0.63. Kernel derived physiologic features: AUC 0.66. Combined baseline and physiologic features with redundant feature reduction: AUC 0.71 on derivation dataset and 0.78 on validation dataset. Current DCI prediction tools rely on admission imaging and are advantageously simple to employ. However, using an agnostic and computationally inexpensive learning approach for high-frequency physiologic time series data, we demonstrated that we could incorporate individual physiologic data to achieve higher classification accuracy.
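The AUC figures in this record come from standard ROC analysis. AUC has a simple rank interpretation, the probability that a randomly chosen positive case outscores a randomly chosen negative one, which can be computed directly; the classifier scores below are hypothetical, not from the study.

```python
def auc(scores_pos, scores_neg):
    """Probability a positive case outscores a negative one (ties count 0.5)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for DCI-positive and DCI-negative patients.
pos = [0.9, 0.8, 0.75, 0.6, 0.55]
neg = [0.7, 0.5, 0.4, 0.35, 0.3, 0.2]
print(round(auc(pos, neg), 2))
```

This pairwise form is equivalent to the Mann-Whitney U statistic divided by the number of positive-negative pairs, which is why an AUC of 0.5 corresponds to chance-level ranking.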

  20. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing

    PubMed Central

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-01-01

Aims A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
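The two comparison techniques named in this record can be sketched compactly: Bland-Altman summarizes paired differences as a mean bias with 95% limits of agreement, and Deming regression fits a line allowing measurement error in both assays. The paired values below are hypothetical, and the Deming fit assumes an error-variance ratio of 1 (orthogonal regression), not necessarily the settings the authors used.

```python
from math import sqrt
from statistics import mean, stdev

def bland_altman(x, y):
    """Mean bias and 95% limits of agreement between paired measurements."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = mean(diffs)
    half = 1.96 * stdev(diffs)
    return bias, (bias - half, bias + half)

def deming(x, y, lam=1.0):
    """Deming regression slope/intercept; lam = ratio of error variances."""
    mx, my = mean(x), mean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = (syy - lam * sxx
             + sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# Hypothetical variant allele fractions (%) from two NGS assays.
ref = [5.1, 10.2, 20.5, 40.0, 60.3, 80.1]
new = [5.4, 10.0, 21.3, 41.2, 61.0, 82.0]
bias, (lo, hi) = bland_altman(new, ref)
slope, intercept = deming(ref, new)
```

A nonzero bias (or intercept) indicates constant error, while a slope away from 1 indicates proportional error, which is exactly the decomposition R2 alone cannot provide.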

  1. Validate or falsify: Lessons learned from a microscopy method claimed to be useful for detecting Borrelia and Babesia organisms in human blood.

    PubMed

    Aase, Audun; Hajdusek, Ondrej; Øines, Øivind; Quarsten, Hanne; Wilhelmsson, Peter; Herstad, Tove K; Kjelland, Vivian; Sima, Radek; Jalovecka, Marie; Lindgren, Per-Eric; Aaberge, Ingeborg S

    2016-01-01

A modified microscopy protocol (the LM-method) was used to demonstrate what was interpreted as Borrelia spirochetes, and later also Babesia sp., in peripheral blood from patients. The method gained much publicity but was not validated prior to publication; validating it with appropriate scientific methodology, including a control group, became the purpose of this study. Blood from 21 patients previously interpreted as positive for Borrelia and/or Babesia infection by the LM-method and from 41 healthy controls without known history of tick bite was collected, blinded and analysed for these pathogens by microscopy in two laboratories (by the LM-method and the conventional method, respectively), by PCR methods in five laboratories and by serology in one laboratory. Microscopy by the LM-method identified structures claimed to be Borrelia and/or Babesia in 66% of the blood samples of the patient group and in 85% of the healthy control group. Microscopy by the conventional method, for Babesia only, did not identify Babesia in any samples. PCR analysis detected Borrelia DNA in one sample of the patient group and in eight samples of the control group, whereas Babesia DNA was not detected in any of the blood samples using molecular methods. The structures interpreted as Borrelia and Babesia by the LM-method could not be verified by PCR. The method was, thus, falsified. This study underlines the importance of doing proper test validation before new or modified assays are introduced.

  2. Can virtual reality simulation be used for advanced bariatric surgical training?

    PubMed

    Lewis, Trystan M; Aggarwal, Rajesh; Kwasnicki, Richard M; Rajaretnam, Niro; Moorthy, Krishna; Ahmed, Ahmed; Darzi, Ara

    2012-06-01

Laparoscopic bariatric surgery is a safe and effective way of treating morbid obesity. However, the operations are technically challenging and training opportunities for junior surgeons are limited. This study aims to assess whether virtual reality (VR) simulation is an effective adjunct for training and assessment of laparoscopic bariatric technical skills. Twenty bariatric surgeons of varying experience (five experienced, five intermediate, and ten novice) were recruited to perform a jejuno-jejunostomy on both cadaveric tissue and on the bariatric module of the Lapmentor VR simulator (Simbionix Corporation, Cleveland, OH). Surgical performance was assessed using validated global rating scales (GRS) and procedure specific video rating scales (PSRS). Subjects were also questioned about the appropriateness of VR as a training tool for surgeons. Construct validity of the VR bariatric module was demonstrated with a significant difference in performance between novice and experienced surgeons on the VR jejuno-jejunostomy module GRS (median 11-15.5; P = .017) and PSRS (median 11-13; P = .003). Content validity was demonstrated with surgeons describing the VR bariatric module as useful and appropriate for training (mean Likert score 4.45/7) and they would highly recommend VR simulation to others for bariatric training (mean Likert score 5/7). Face and concurrent validity were not established. This study shows that the bariatric module on a VR simulator demonstrates construct and content validity. VR simulation appears to be an effective method for training advanced bariatric technical skills for surgeons at the start of their bariatric training. However, assessment of technical skills should still take place on cadaveric tissue. Copyright © 2012. Published by Mosby, Inc.

  3. The Development of Accepted Performance Items to Demonstrate Braille Competence in the Nemeth Code for Mathematics and Science Notation

    ERIC Educational Resources Information Center

    Smith, Derrick; Rosenblum, L. Penny

    2013-01-01

    Introduction: The purpose of the study presented here was the initial validation of a comprehensive set of competencies focused solely on the Nemeth code. Methods: Using the Delphi method, 20 expert panelists were recruited to participate in the study on the basis of their past experience in teaching a university-level course in the Nemeth code.…

  4. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion systems components

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Summarized here is the technical effort and computer code developed during the five year duration of the program for probabilistic structural analysis methods. The summary includes a brief description of the computer code manuals and a detailed description of code validation demonstration cases for random vibrations of a discharge duct, probabilistic material nonlinearities of a liquid oxygen post, and probabilistic buckling of a transfer tube liner.

  5. Computation of Three-Dimensional Boundary Layers Including Separation

    DTIC Science & Technology

    1987-02-01

As demonstrated by the 1968 and 1980-1981 STANFORD Conferences, integral methods remain a valuable engineering tool to calculate the effects of... has been given by WHITFIELD, 1980, which is valid over the whole thickness of the boundary layer. Another method to generate a velocity profile... boundary layer equations and inviscid equations. A very clear presentation of the problem is given for example by VELDMAN, 1980. 6.3. Three-dimensional

  6. Laser light-scattering spectroscopy: a new application in the study of ciliary activity.

    PubMed Central

    Lee, W I; Verdugo, P

    1976-01-01

    A uniquely precise and simple method to study ciliary activity by laser light-scattering spectroscopy has been developed and validated. A concurrent study of the effect of Ca2+ on ciliary activity in vitro by laser scattering spectroscopy and high speed cinematography has demonstrated that this new method is simpler and as accurate and reproducible as the high speed film technique. PMID:963208

  7. A methodology for collecting valid software engineering data

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Weiss, David M.

    1983-01-01

An effective data collection method for evaluating software development methodologies and for studying the software development process is described. The method uses goal-directed data collection to evaluate methodologies with respect to the claims made for them. Such claims are used as a basis for defining the goals of the data collection, establishing a list of questions of interest to be answered by data analysis, defining a set of data categorization schemes, and designing a data collection form. The data to be collected are based on the changes made to the software during development, and are obtained when the changes are made. To ensure accuracy of the data, validation is performed concurrently with software development and data collection. Validation is based on interviews with those people supplying the data. Results from using the methodology show that data validation is a necessary part of change data collection. Without it, as much as 50% of the data may be erroneous. Feasibility of the data collection methodology was demonstrated by applying it to five different projects in two different environments. The application showed that the methodology was both feasible and useful.

  8. Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data

    NASA Astrophysics Data System (ADS)

    Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai

    2017-11-01

Accurate load models are critical for power system analysis and operation. A large amount of research work has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on developing formal load model verification and validation (V&V) methodologies or procedures. Most of the existing load model validation is based on qualitative rather than quantitative analysis. In addition, not all aspects of the model V&V problem have been addressed by the existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine a load model's effectiveness and accuracy. Statistical analysis, instead of visual checks, quantifies the load model's accuracy and provides a confidence level for the developed load model for model users. The analysis results can also be used to calibrate load models. The proposed framework can be used as guidance for utility engineers and researchers to systematically examine load models. The proposed method is demonstrated through analysis of field measurements collected from a utility system.
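The kind of statistical quantification this framework calls for can be illustrated generically: given field measurements and model outputs for the same events, compute an error metric and an approximate confidence interval for the mean error. The values below are hypothetical, and the interval uses a normal approximation rather than whatever specific tests the authors applied.

```python
from math import sqrt
from statistics import mean, stdev

def model_error_summary(measured, simulated):
    """RMSE plus an approximate 95% CI for the mean error (normal approx.)."""
    errors = [s - m for m, s in zip(measured, simulated)]
    rmse = sqrt(mean(e * e for e in errors))
    half = 1.96 * stdev(errors) / sqrt(len(errors))
    return rmse, (mean(errors) - half, mean(errors) + half)

# Hypothetical per-event active power (MW): field measurement vs. model output.
measured = [10.2, 11.5, 9.8, 12.1, 10.9, 11.2]
simulated = [10.5, 11.1, 10.0, 12.6, 11.0, 11.6]
rmse, ci = model_error_summary(measured, simulated)
```

A confidence interval that excludes zero indicates a systematic model bias that calibration should remove, which is the quantitative judgment a purely visual curve comparison cannot make.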

  9. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses-Isotopic Composition Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radulescu, Georgeta; Gauld, Ian C; Ilas, Germina

    2011-01-01

The expanded use of burnup credit in the United States (U.S.) for storage and transport casks, particularly in the acceptance of credit for fission products, has been constrained by the availability of experimental fission product data to support code validation. The U.S. Nuclear Regulatory Commission (NRC) staff has noted that the rationale for restricting the Interim Staff Guidance on burnup credit for storage and transportation casks (ISG-8) to actinide-only is based largely on the lack of clear, definitive experiments that can be used to estimate the bias and uncertainty for computational analyses associated with using burnup credit. To address the issues of burnup credit criticality validation, the NRC initiated a project with the Oak Ridge National Laboratory to (1) develop and establish a technically sound validation approach for commercial spent nuclear fuel (SNF) criticality safety evaluations based on best-available data and methods and (2) apply the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The purpose of this paper is to describe the isotopic composition (depletion) validation approach and resulting observations and recommendations. Validation of the criticality calculations is addressed in a companion paper at this conference. For isotopic composition validation, the approach is to determine burnup-dependent bias and uncertainty in the effective neutron multiplication factor (keff) due to bias and uncertainty in isotopic predictions, via comparisons of isotopic composition predictions (calculated) and measured isotopic compositions from destructive radiochemical assay utilizing as much assay data as is available, and a best-estimate Monte Carlo based method. This paper (1) provides a detailed description of the burnup credit isotopic validation approach and its technical bases, (2) describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models to demonstrate its usage and applicability, and (3) provides reference bias and uncertainty results based on a quality-assurance-controlled prerelease version of the Scale 6.1 code package and the ENDF/B-VII nuclear cross section data.

  10. Psychometric properties of the communication skills attitude scale (CSAS) measure in a sample of Iranian medical students

    PubMed Central

    YAKHFOROSHHA, AFSANEH; SHIRAZI, MANDANA; YOUSEFZADEH, NASER; GHANBARNEJAD, AMIN; CHERAGHI, MOHAMMADALI; MOJTAHEDZADEH, RITA; MAHMOODI-BAKHTIARI, BEHROOZ; EMAMI, SEYED AMIR HOSSEIN

    2018-01-01

Introduction: Communication skill (CS) has been regarded as one of the fundamental competencies for medical and other health care professionals. Students' attitude toward learning CS is a key factor in designing educational interventions. The original CSAS, with positive and negative subscales, was developed in the UK; however, there is no scale to measure these attitudes in Iran. The aim of this study was to assess the psychometric characteristics of the Communication Skills Attitude Scale (CSAS) in an Iranian context and to understand whether it is a valid tool to assess attitude toward learning communication skills among health care professionals. Methods: Psychometric characteristics of the CSAS were assessed using a cross-sectional design. In the current study, 410 medical students were selected using a stratified sampling framework. The face validity of the scale was estimated through students' and experts' opinions. Content validity of the CSAS was assessed qualitatively and quantitatively. Reliability was examined through two methods: Cronbach's alpha coefficient and the Intraclass Correlation Coefficient (ICC). Construct validity of the CSAS was assessed using confirmatory factor analysis (CFA) and exploratory factor analysis (principal component extraction followed by varimax rotation). Convergent and discriminant validity of the scale were measured through Spearman correlation. Statistical analysis was performed using SPSS 19 and EQS 6.1. Results: The internal consistency and reproducibility of the total CSAS score were 0.84 (Cronbach's alpha) and 0.81, which demonstrates an acceptable reliability of the questionnaire. The item-level content validity index (I-CVI) and the scale-level content validity index (S-CVI/Ave) demonstrated appropriate results: 0.97 and 0.94, respectively. An exploratory factor analysis (EFA) on the 25 items of the CSAS revealed a 4-factor structure that all together explained 55% of the variance. 
Results of the confirmatory factor analysis indicated an acceptable goodness-of-fit between the model and the observed data [χ2/df = 2.36, Comparative Fit Index (CFI) = 0.95, Goodness of Fit Index (GFI) = 0.96, Root Mean Square Error of Approximation (RMSEA) = 0.05]. Conclusion: The Persian version of the CSAS is a multidimensional, valid and reliable tool for assessing attitudes towards communication skills among medical students. PMID:29344525

  11. USE OF PHARMACOKINETIC MODELING TO DESIGN STUDIES FOR PATHWAY-SPECIFIC EXPOSURE MODEL EVALUATION

    EPA Science Inventory

    Validating an exposure pathway model is difficult because the biomarker, which is often used to evaluate the model prediction, is an integrated measure for exposures from all the exposure routes/pathways. The purpose of this paper is to demonstrate a method to use pharmacokeneti...

  12. 29 CFR 1910.1001 - Asbestos.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... The designation of a material as “PACM” may be rebutted pursuant to paragraph (j)(8) of this section... level as demonstrated by a statistically valid protocol; and (C) The equivalent method is documented and... PACM are in excess of the TWA and/or excursion limit prescribed in paragraph (c) of this section. (2...

  13. 29 CFR 1910.1001 - Asbestos.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... The designation of a material as “PACM” may be rebutted pursuant to paragraph (j)(8) of this section... level as demonstrated by a statistically valid protocol; and (C) The equivalent method is documented and... PACM are in excess of the TWA and/or excursion limit prescribed in paragraph (c) of this section. (2...

  14. Curing conditions to inactivate Trichinella spiralis muscle larvae in ready-to-eat pork sausage

    USDA-ARS?s Scientific Manuscript database

    Curing processes for ready to eat (RTE) pork products currently require individual validation of methods to demonstrate inactivation of Trichinella spiralis. This is a major undertaking for each process; currently no model of meat chemistry exists that can be correlated with inactivation of Trichin...

  15. 77 FR 67771 - Flonicamid; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-14

    ... one of the following methods: Federal eRulemaking Portal: http://www.regulations.gov . Follow the... validity, completeness, and reliability as well as the relationship of the results of the studies to human... TFNA-OH, also demonstrated low toxicity in acute oral toxicity studies. In the 28-day dermal study with...

  16. Translating expert system rules into Ada code with validation and verification

    NASA Technical Reports Server (NTRS)

    Becker, Lee; Duckworth, R. James; Green, Peter; Michalson, Bill; Gosselin, Dave; Nainani, Krishan; Pease, Adam

    1991-01-01

The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated which automatically translated rules from an Air Force expert system into Ada code, and prototype test software was demonstrated which detected errors in the execution of the resultant system. The method and prototype tools convert AI representations into Ada code by translating the rules into Ada code modules and then linking them with an Activation Framework based run-time environment to form an executable load module. This method is based upon the use of Evidence Flow Graphs, which are a data flow representation for intelligent systems. The development of prototype test generation and evaluation software which was used to test the resultant code is discussed. This testing was performed automatically using Monte-Carlo techniques based upon a constraint based description of the required performance for the system.

  17. Using a genetic algorithm to abbreviate the Psychopathic Personality Inventory-Revised (PPI-R).

    PubMed

    Eisenbarth, Hedwig; Lilienfeld, Scott O; Yarkoni, Tal

    2015-03-01

    Some self-report measures of personality and personality disorders, including the widely used Psychopathic Personality Inventory-Revised (PPI-R), are lengthy and time-intensive. In recent work, we introduced an automated genetic algorithm (GA)-based method for abbreviating psychometric measures. In Study 1, we used this approach to generate a short (40-item) version of the PPI-R using 3 large-N German student samples (total N = 1,590). The abbreviated measure displayed high convergent correlations with the original PPI-R, and outperformed an alternative measure constructed using a conventional approach. Study 2 tested the convergent and discriminant validity of this short version in a fourth student sample (N = 206) using sensation-seeking and sensitivity to reward and punishment scales, again demonstrating similar convergent and discriminant validity for the PPI-R-40 compared with the full version. In a fifth community sample of North American participants acquired using Amazon Mechanical Turk, the PPI-R-40 showed similarly high convergent correlations, demonstrating stability across language, culture, and data-collection method. Taken together, these studies suggest that the GA approach is a viable method for abbreviating measures of psychopathy, and perhaps personality measures in general. 2015 APA, all rights reserved
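The GA-based abbreviation strategy this record describes amounts to searching over item subsets for one whose subset total tracks the full-scale score while using few items. A toy sketch of that idea follows, with made-up item data (not the PPI-R items) and a generic fitness of correlation-with-full-scale minus a per-item penalty; the actual study's GA details differ.

```python
import random
from math import sqrt

def pearson(u, v):
    """Pearson correlation between two equal-length score lists."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sqrt(sum((a - mu) ** 2 for a in u))
    sv = sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def fitness(mask, items, full_totals, penalty=0.01):
    chosen = [it for it, keep in zip(items, mask) if keep]
    if not chosen:
        return -1.0
    subset_totals = [sum(s) for s in zip(*chosen)]
    # Reward agreement with the full scale, penalize each retained item.
    return pearson(subset_totals, full_totals) - penalty * sum(mask)

def abbreviate(items, generations=60, pop_size=30, seed=1):
    rng = random.Random(seed)
    n = len(items)
    full_totals = [sum(s) for s in zip(*items)]
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, items, full_totals), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # point mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(m, items, full_totals))

# Hypothetical data: 12 items scored 1-5 by 40 respondents.
rng = random.Random(0)
items = [[rng.randint(1, 5) for _ in range(40)] for _ in range(12)]
best = abbreviate(items)
```

Because the survivors are carried forward unchanged, the best fitness is non-decreasing across generations; raising the penalty trades convergent validity for a shorter instrument.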

  18. Structured assessment of microsurgery skills in the clinical setting.

    PubMed

    Chan, WoanYi; Niranjan, Niri; Ramakrishnan, Venkat

    2010-08-01

    Microsurgery is an essential component in plastic surgery training. Competence has become an important issue in current surgical practice and training. The complexity of microsurgery requires detailed assessment and feedback on skills components. This article proposes a method of Structured Assessment of Microsurgery Skills (SAMS) in a clinical setting. Three types of assessment (i.e., modified Global Rating Score, errors list and summative rating) were incorporated to develop the SAMS method. Clinical anastomoses were recorded on videos using a digital microscope system and were rated by three consultants independently and in a blinded fashion. Fifteen clinical cases of microvascular anastomoses performed by trainees and a consultant microsurgeon were assessed using SAMS. The consultant had consistently the highest scores. Construct validity was also demonstrated by improvement of SAMS scores of microsurgery trainees. The overall inter-rater reliability was strong (alpha=0.78). The SAMS method provides both formative and summative assessment of microsurgery skills. It is demonstrated to be a valid, reliable and feasible assessment tool of operating room performance to provide systematic and comprehensive feedback as part of the learning cycle. Copyright 2009 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  19. Time series modeling of human operator dynamics in manual control tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.

  20. Time Series Modeling of Human Operator Dynamics in Manual Control Tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency response of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that was previously modeled to demonstrate the strengths of the method.

  1. GW calculations using the spectral decomposition of the dielectric matrix: Verification, validation, and comparison of methods

    DOE PAGES

    Pham, T. Anh; Nguyen, Huy -Viet; Rocca, Dario; ...

    2013-04-26

In a recent paper we presented an approach to evaluate quasiparticle energies based on the spectral decomposition of the static dielectric matrix. This method does not require the calculation of unoccupied electronic states or the direct diagonalization of large dielectric matrices, and it avoids the use of plasmon-pole models. The numerical accuracy of the approach is controlled by a single parameter, i.e., the number of eigenvectors used in the spectral decomposition of the dielectric matrix. Here we present a comprehensive validation of the method, encompassing calculations of ionization potentials and electron affinities of various molecules and of band gaps for several crystalline and disordered semiconductors. Lastly, we demonstrate the efficiency of our approach by carrying out GW calculations for systems with several hundred valence electrons.

  2. Technique for Very High Order Nonlinear Simulation and Validation

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2001-01-01

Finding the sources of sound in large nonlinear fields via direct simulation currently requires excessive computational cost. This paper describes a simple technique for efficiently solving the multidimensional nonlinear Euler equations that significantly reduces this cost and demonstrates a useful approach for validating high order nonlinear methods. Methods of up to 15th-order accuracy in space and time were compared, and it is shown that an algorithm with a fixed design accuracy approaches its maximal utility and then its usefulness decays exponentially unless higher accuracy is used. It is concluded that at least a 7th-order method is required to efficiently propagate a harmonic wave using the nonlinear Euler equations to a distance of 5 wavelengths while maintaining an overall error tolerance that is low enough to capture both the mean flow and the acoustics.

  3. Assessment of MARMOT Grain Growth Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fromm, B.; Zhang, Y.; Schwen, D.

    2015-12-01

    This report assesses the MARMOT grain growth model by comparing modeling predictions with experimental results from thermal annealing. The purpose here is threefold: (1) to demonstrate the validation approach of using thermal annealing experiments with non-destructive characterization, (2) to test the reconstruction capability and computational efficiency in MOOSE, and (3) to validate the grain growth model and the associated parameters that are implemented in MARMOT for UO2. To ensure a rigorous comparison, the 2D and 3D initial experimental microstructures of UO2 samples were characterized using non-destructive synchrotron X-rays. The same samples were then annealed at 2273 K for grain growth, and their initial microstructures were used as initial conditions for simulated annealing at the same temperature using MARMOT. After annealing, the final experimental microstructures were characterized again to compare with the results from simulations. So far, comparison between modeling and experiments has been done for 2D microstructures, and 3D comparison is underway. The preliminary results demonstrated the usefulness of the non-destructive characterization method for MARMOT grain growth model validation. A detailed analysis of the 3D microstructures is in progress to fully validate the current model in MARMOT.

  4. Quality assessment of two- and three-dimensional unstructured meshes and validation of an upwind Euler flow solver

    NASA Technical Reports Server (NTRS)

    Woodard, Paul R.; Batina, John T.; Yang, Henry T. Y.

    1992-01-01

    Quality assessment procedures are described for two-dimensional unstructured meshes. The procedures include measurement of minimum angles, element aspect ratios, stretching, and element skewness. Meshes about the ONERA M6 wing and the Boeing 747 transport configuration are generated using an advancing front method grid generation package of programs. Solutions of Euler's equations for these meshes are obtained at low angle-of-attack, transonic conditions. Results for these cases, obtained as part of a validation study, demonstrate the accuracy of an implicit upwind Euler solution algorithm.
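The element-quality measures named above (minimum angle, aspect ratio, skewness) have several definitions in the literature; the sketch below uses common textbook definitions for a single 2-D triangle, not necessarily the exact formulas of this study:

```python
import math

def triangle_quality(p1, p2, p3):
    """Basic quality metrics for one triangular element."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    a, b, c = dist(p2, p3), dist(p1, p3), dist(p1, p2)
    # Interior angles from the law of cosines.
    A = math.acos((b * b + c * c - a * a) / (2 * b * c))
    B = math.acos((a * a + c * c - b * b) / (2 * a * c))
    C = math.pi - A - B
    min_angle = math.degrees(min(A, B, C))
    aspect_ratio = max(a, b, c) / min(a, b, c)   # one common definition
    # Skewness relative to the ideal 60-degree equilateral angle.
    skewness = max(abs(math.degrees(t) - 60.0) for t in (A, B, C)) / 60.0
    return min_angle, aspect_ratio, skewness

# An equilateral triangle is the ideal element: 60-degree angles, no skew.
ma, ar, sk = triangle_quality((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))
print(round(ma, 1), round(ar, 3), round(sk, 3))  # 60.0 1.0 0.0
```

Sweeping such metrics over every element of a mesh is what turns these per-element formulas into the mesh-level quality assessment the abstract describes.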

  5. Internal validation of the prognostic index for spine metastasis (PRISM) for stratifying survival in patients treated with spinal stereotactic radiosurgery.

    PubMed

    Jensen, Garrett; Tang, Chad; Hess, Kenneth R; Bishop, Andrew J; Pan, Hubert Y; Li, Jing; Yang, James N; Tannir, Nizar M; Amini, Behrang; Tatsui, Claudio; Rhines, Laurence; Brown, Paul D; Ghia, Amol J

    2017-01-01

    We sought to validate the Prognostic Index for Spinal Metastases (PRISM), a scoring system that stratifies patients into subgroups by overall survival. Methods and materials: The PRISM was previously created from multivariate Cox regression with patients enrolled in prospective single-institution trials of stereotactic spine radiosurgery (SSRS) for spinal metastasis. We assess model calibration and discrimination within a validation cohort of patients treated off-trial with SSRS for metastatic disease at the same institution. The training and validation cohorts consisted of 205 and 249 patients, respectively. Similar survival trends were observed across the 4 PRISM subgroups. Survival was significantly different between PRISM subgroups (P<0.0001). The C-index for the validation cohort was 0.68 after stratification into subgroups. We internally validated the PRISM with patients treated off-protocol, demonstrating that it can distinguish subgroups by survival, which will be useful for individualizing treatment of spinal metastases and stratifying patients for clinical trials.
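The reported C-index of 0.68 is Harrell's concordance index: the fraction of comparable patient pairs that the model ranks correctly under right censoring. A minimal sketch of the standard definition, with illustrative data rather than the study's:

```python
def c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    times: observed follow-up times; events: 1 = event observed, 0 = censored;
    risk_scores: higher score should mean shorter survival.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if subject i is known to fail before j.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5   # ties get half credit
    return concordant / comparable

times = [2, 4, 5, 7, 9]
events = [1, 1, 0, 1, 0]
scores = [0.9, 0.7, 0.6, 0.8, 0.2]
print(c_index(times, events, scores))  # 0.875 (7 of 8 comparable pairs)
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect discrimination, which puts the study's 0.68 in context.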

  6. Survival analysis with error-prone time-varying covariates: a risk set calibration approach

    PubMed Central

    Liao, Xiaomei; Zucker, David M.; Li, Yi; Spiegelman, Donna

    2010-01-01

    Occupational, environmental, and nutritional epidemiologists are often interested in estimating the prospective effect of time-varying exposure variables, such as cumulative exposure or cumulative updated average exposure, in relation to chronic disease endpoints such as cancer incidence and mortality. From exposure validation studies, it is apparent that many of the variables of interest are measured with moderate to substantial error. Although the ordinary regression calibration approach is approximately valid and efficient for measurement error correction of relative risk estimates from the Cox model with time-independent point exposures when the disease is rare, it is not adaptable for use with time-varying exposures. By re-calibrating the measurement error model within each risk set, a risk set regression calibration (RRC) method is proposed for this setting. An algorithm for a bias-corrected point estimate of the relative risk using the RRC approach is presented, followed by the derivation of an estimate of its variance, resulting in a sandwich estimator. Emphasis is on methods applicable to the main study/external validation study design, which arises in important applications. Simulation studies under several assumptions about the error model were carried out, which demonstrated the validity and efficiency of the method in finite samples. The method was applied to a study of diet and cancer from Harvard’s Health Professionals Follow-up Study (HPFS). PMID:20486928
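The ordinary regression calibration idea that the risk set method extends can be sketched in its simplest linear form: estimate the calibration slope from the validation study, then divide the naive coefficient by it. This is a generic illustration with made-up numbers, not the authors' algorithm:

```python
def ols_slope(x, y):
    """Least-squares slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Validation study: true exposure x measured alongside error-prone surrogate w.
w_val = [1.0, 2.0, 3.0, 4.0, 5.0]
x_val = [1.2, 1.9, 3.1, 3.8, 5.0]
attenuation = ols_slope(w_val, x_val)   # calibration slope (lambda)

# Main study: a naive coefficient estimated from the surrogate w is
# de-attenuated by dividing through by the calibration slope.
beta_naive = 0.30
beta_corrected = beta_naive / attenuation
print(round(attenuation, 3), round(beta_corrected, 3))  # 0.95 0.316
```

The risk set version in the paper re-fits this calibration model within each risk set of the Cox partial likelihood instead of once globally, which is what accommodates time-varying exposure.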

  7. Rapid prediction of ochratoxin A-producing strains of Penicillium on dry-cured meat by MOS-based electronic nose.

    PubMed

    Lippolis, Vincenzo; Ferrara, Massimo; Cervellieri, Salvatore; Damascelli, Anna; Epifani, Filomena; Pascale, Michelangelo; Perrone, Giancarlo

    2016-02-02

    The availability of rapid diagnostic methods for monitoring ochratoxigenic species during the seasoning processes for dry-cured meats is crucial and constitutes a key stage in preventing the risk of ochratoxin A (OTA) contamination. A rapid, easy-to-perform and non-invasive method using an electronic nose (e-nose) based on metal oxide semiconductors (MOS) was developed to discriminate dry-cured meat samples into two classes based on fungal contamination: class P (samples contaminated by OTA-producing Penicillium strains) and class NP (samples contaminated by OTA non-producing Penicillium strains). Two OTA-producing strains of Penicillium nordicum and two OTA non-producing strains of Penicillium nalgiovense and Penicillium salamii were tested. The feasibility of this approach was initially evaluated by e-nose analysis of 480 samples of both yeast extract sucrose (YES) and meat-based agar media inoculated with the tested Penicillium strains and incubated up to 14 days. The high recognition percentages (higher than 82%) obtained by Discriminant Function Analysis (DFA), both in calibration and in cross-validation (leave-more-out approach), for both YES and meat-based samples demonstrated the validity of the approach. The e-nose method was subsequently developed and validated for the analysis of dry-cured meat samples. A total of 240 e-nose analyses were carried out using inoculated sausages, seasoned by a laboratory-scale process and sampled at 5, 7, 10 and 14 days. DFA provided calibration models that permitted discrimination of dry-cured meat samples after only 5 days of seasoning, with mean recognition percentages in calibration and cross-validation of 98 and 88%, respectively. A further validation of the developed e-nose method was performed using 60 dry-cured meat samples produced by an industrial-scale seasoning process, showing a total recognition percentage of 73%.
The pattern of volatile compounds of dry-cured meat samples was identified and characterized by a newly developed HS-SPME/GC-MS method. Seven volatile compounds (2-methyl-1-butanol, octane, 1R-α-pinene, d-limonene, undecane, tetradecanal, 9-(Z)-octadecenoic acid methyl ester) allowed discrimination between dry-cured meat samples of classes P and NP. These results demonstrate that a MOS-based electronic nose can be a useful tool for rapid screening to prevent OTA contamination in the cured meat supply chain. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Solution of the Bagley Torvik equation by fractional DTM

    NASA Astrophysics Data System (ADS)

    Arora, Geeta; Pratiksha

    2017-07-01

    In this paper, the fractional differential transform method (DTM) is applied to the Bagley-Torvik equation. This equation models the viscoelastic behavior of geological strata, metals, glasses, etc., and describes the motion of a rigid plate immersed in a Newtonian fluid. DTM is a simple, reliable and efficient method that gives a series solution. The Caputo fractional derivative is considered throughout this work. Two examples are given to demonstrate the validity and applicability of the method, and a comparison is made with existing results.
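For reference, the Bagley-Torvik equation is commonly written with a half-integer Caputo derivative as follows (standard textbook form; the abstract does not state the equation explicitly):

```latex
% Bagley-Torvik equation: A, B, C are constants, f(t) a forcing term,
% and D^{3/2} the Caputo fractional derivative of order 3/2.
A\,y''(t) + B\,D^{3/2} y(t) + C\,y(t) = f(t),
\qquad y(0) = y_0, \quad y'(0) = y_1 .
```

The fractional DTM then seeks the solution as a power series whose coefficients are generated recursively from this equation.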

  9. Engineering topological edge states in two dimensional magnetic photonic crystal

    NASA Astrophysics Data System (ADS)

    Yang, Bing; Wu, Tong; Zhang, Xiangdong

    2017-01-01

    Based on a perturbative approach, we propose a simple and efficient method to engineer the topological edge states in two dimensional magnetic photonic crystals. The topological edge states in the microstructures can be constructed and varied by altering the parameters of the microstructure according to the field-energy distributions of the Bloch states at the related Bloch wave vectors. The validity of the proposed method has been demonstrated by exact numerical calculations through three concrete examples. Our method makes the topological edge states "designable."

  10. Hydrogen dissolution in palladium: A resistometric study under pressure

    NASA Astrophysics Data System (ADS)

    Magnouche, A.; Fromageau, R.

    1984-09-01

    The hydrogen solubility in palladium in equilibrium with H2 gas has been measured between room temperature and 540 °C, using a resistometric method, for pressures ranging between 0.01 and 10 MPa. Under these conditions, the experimentally determined values of the solubility and of the dissolution enthalpy agree very closely with those obtained by other methods (calorimetry, volumetry, etc.), or after electrolytic charging. This good agreement demonstrates the validity of the resistometric method for determining the solubility of hydrogen in metals.
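As background (not stated in the abstract), dilute hydrogen dissolution in metals such as palladium is usually described by Sieverts' law, which is what links solubility-versus-pressure measurements of this kind to a dissolution enthalpy:

```latex
% Sieverts' law for dilute H in a metal: solubility scales with the square
% root of the H2 pressure, with a van 't Hoff temperature dependence.
c = K \sqrt{p_{\mathrm{H_2}}},
\qquad
K = K_0 \exp\!\left(-\frac{\Delta H}{RT}\right),
```

where c is the dissolved hydrogen concentration, p the H2 pressure, ΔH the dissolution enthalpy, and R the gas constant.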

  11. Distinguishing Vaccinium Species by Chemical Fingerprinting Based on NMR Spectra, Validated with Spectra Collected in Different Laboratories

    PubMed Central

    Markus, Michelle A.; Ferrier, Jonathan; Luchsinger, Sarah M.; Yuk, Jimmy; Cuerrier, Alain; Balick, Michael J.; Hicks, Joshua M.; Killday, K. Brian; Kirby, Christopher W.; Berrue, Fabrice; Kerr, Russell G.; Knagge, Kevin; Gödecke, Tanja; Ramirez, Benjamin E.; Lankin, David C.; Pauli, Guido F.; Burton, Ian; Karakach, Tobias K.; Arnason, John T.; Colson, Kimberly L.

    2014-01-01

    A method was developed to distinguish Vaccinium species based on leaf extracts using nuclear magnetic resonance (NMR) spectroscopy. Reference spectra were measured on leaf extracts from several species, including lowbush blueberry (Vaccinium angustifolium), oval leaf huckleberry (Vaccinium ovalifolium), and cranberry (Vaccinium macrocarpon). Using principal component analysis, these leaf extracts were resolved in the scores plot. Analysis of variance statistical tests demonstrated that the three groups differ significantly on PC2, establishing that the three species can be distinguished by NMR. Soft independent modeling of class analogy models for each species also showed discrimination between species. To demonstrate the robustness of NMR spectroscopy for botanical identification, spectra of a sample of lowbush blueberry leaf extract were measured at five different sites, with different field strengths (600 versus 700 MHz), different probe types (cryogenic versus room-temperature probes), different sample diameters (1.7 mm versus 5 mm), and different consoles (Avance I versus Avance III). Each laboratory independently demonstrated the linearity of its NMR measurements by acquiring a standard curve for chlorogenic acid (R2 = 0.9782 to 0.9998). Spectra acquired on different spectrometers at different sites were classified into the expected groups for the Vaccinium spp., confirming the utility of the method to distinguish Vaccinium species and demonstrating NMR fingerprinting for material validation of a natural health product. PMID:24963620

  12. QbD-Based Development and Validation of a Stability-Indicating HPLC Method for Estimating Ketoprofen in Bulk Drug and Proniosomal Vesicular System.

    PubMed

    Yadav, Nand K; Raghuvanshi, Ashish; Sharma, Gajanand; Beg, Sarwar; Katare, Om P; Nanda, Sanju

    2016-03-01

    The current studies entail systematic quality by design (QbD)-based development of a simple, precise, cost-effective and stability-indicating high-performance liquid chromatography method for the estimation of ketoprofen. The analytical target profile was defined and critical analytical attributes (CAAs) were selected. Chromatographic separation was accomplished with isocratic, reversed-phase chromatography using a C-18 column, pH 6.8 phosphate buffer-methanol (50 : 50 v/v) as the mobile phase at a flow rate of 1.0 mL/min, and UV detection at 258 nm. Systematic optimization of the chromatographic method was performed using a central composite design, evaluating theoretical plates and peak tailing as the CAAs. The method was validated per International Conference on Harmonization guidelines, demonstrating high sensitivity and specificity, linearity between 0.05 and 250 µg/mL, a detection limit of 0.025 µg/mL and a quantification limit of 0.05 µg/mL. Precision was demonstrated by a relative standard deviation of 1.21%. Stress degradation studies performed using acid, base, peroxide, thermal and photolytic methods helped in identifying the degradation products in the proniosome delivery systems. The results successfully demonstrated the utility of QbD for optimizing the chromatographic conditions in developing a highly sensitive liquid chromatographic method for ketoprofen. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
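Detection and quantification limits like those quoted above are conventionally derived from the calibration curve via the ICH Q2(R1) formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation and S the slope. A sketch with hypothetical calibration data, not the paper's:

```python
import math

def calibration_limits(conc, response):
    """LOD and LOQ from a linear calibration curve (ICH Q2(R1) approach)."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(response) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, response)) / sxx
    intercept = my - slope * mx
    residuals = [y - (slope * x + intercept) for x, y in zip(conc, response)]
    sigma = math.sqrt(sum(r * r for r in residuals) / (n - 2))  # residual SD
    return 3.3 * sigma / slope, 10 * sigma / slope              # LOD, LOQ

# Hypothetical calibration data (concentration in ug/mL, detector response).
conc = [0.5, 1.0, 2.0, 5.0, 10.0]
resp = [10.2, 20.1, 40.5, 99.8, 200.3]
lod, loq = calibration_limits(conc, resp)
print(round(lod, 4), round(loq, 4))
```

By construction LOQ is always 10/3.3 times LOD, so only the residual scatter and the slope of the fitted line matter.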

  13. Exploring the construct validity of the social cognition and object relations scale in a clinical sample.

    PubMed

    Stein, Michelle B; Slavin-Mulford, Jenelle; Sinclair, S Justin; Siefert, Caleb J; Blais, Mark A

    2012-01-01

    The Social Cognition and Object Relations Scale-Global rating method (SCORS-G; Stein, Hilsenroth, Slavin-Mulford, & Pinsker, 2011; Westen, 1995) measures the quality of object relations in narrative material. This study employed a multimethod approach to explore the structure and construct validity of the SCORS-G. The Thematic Apperception Test (TAT; Murray, 1943) was administered to 59 patients referred for psychological assessment at a large Northeastern U.S. hospital. The resulting 301 TAT narratives were rated using the SCORS-G method. The 8 SCORS variables were found to have high interrater reliability and good internal consistency. Principal components analysis revealed a 3-component solution with components tapping emotions/affect regulation in relationships, self-image, and aspects of cognition. Next, the construct validity of the SCORS-G components was explored using measures of intellectual and executive functioning, psychopathology, and normal personality. The 3 SCORS-G components showed unique and theoretically meaningful relationships across these broad and diverse psychological measures. This study demonstrates the value of using a standardized scoring method, like the SCORS-G, to reveal the rich and complex nature of narrative material.

  14. Errors in reporting on dissolution research: methodological and statistical implications.

    PubMed

    Jasińska-Stroschein, Magdalena; Kurczewska, Urszula; Orszulak-Michalak, Daria

    2017-02-01

    In vitro dissolution testing provides useful information at the clinical and preclinical stages of the drug development process. This study surveys pharmaceutical papers on dissolution research published in Polish journals between 2010 and 2015. The papers were analyzed for the information authors provided about the chosen methods, the validation performed, statistical reporting, and the assumptions used to properly compare release profiles, in light of the current guideline documents on dissolution methodology and its validation. Of all the papers included in the study, 23.86% presented at least one set of validation parameters, 63.64% gave the results of the weight uniformity test, 55.68% reported content determination, 97.73% reported dissolution testing conditions, and 50% discussed a comparison of release profiles. The assumptions behind the methods used to compare dissolution profiles were discussed in 6.82% of papers. By means of example analyses, we demonstrate that the outcome can be influenced by the violation of several assumptions or by the selection of an improper method to compare dissolution profiles. A clearer description of the procedures would undoubtedly increase the quality of papers in this area.
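One of the standard profile-comparison tools whose assumptions such papers often leave unstated is the f2 similarity factor used in FDA and EMA guidance. A minimal sketch with hypothetical profiles:

```python
import math

def f2_similarity(reference, test):
    """FDA/EMA similarity factor f2 for two dissolution profiles.

    Profiles are percent dissolved at matching time points; f2 >= 50 is
    conventionally read as 'similar', but only under assumptions such as
    at most one point above 85% dissolution and low variability.
    """
    n = len(reference)
    mean_sq = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + mean_sq))

ref = [15.0, 35.0, 55.0, 75.0, 85.0]   # hypothetical reference profile
tst = [14.0, 33.0, 56.0, 73.0, 84.0]   # hypothetical test profile
print(round(f2_similarity(ref, tst), 1))  # 87.4
```

Identical profiles give f2 = 100, and larger point-by-point differences drive f2 down, which is why violating the underlying assumptions (few sampling points, high variability) can silently change the pass/fail outcome.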

  15. Determination of the absorption coefficient of chromophoric dissolved organic matter from underway spectrophotometry.

    PubMed

    Dall'Olmo, Giorgio; Brewin, Robert J W; Nencioli, Francesco; Organelli, Emanuele; Lefering, Ina; McKee, David; Röttgers, Rüdiger; Mitchell, Catherine; Boss, Emmanuel; Bricaud, Annick; Tilstone, Gavin

    2017-11-27

    Measurements of the absorption coefficient of chromophoric dissolved organic matter (ay) are needed to validate existing ocean-color algorithms. In the surface open ocean, these measurements are challenging because of low ay values. Yet, existing global datasets demonstrate that ay could contribute between 30% and 50% of the total absorption budget in the 400-450 nm spectral range, making accurate measurement of ay essential to constrain these uncertainties. In this study, we present a simple way of determining ay using a commercially-available in-situ spectrophotometer operated in underway mode. The obtained ay values were validated using independent collocated measurements. The method is simple to implement, can provide measurements with very high spatio-temporal resolution, and has an accuracy of about 0.0004 m-1 and a precision of about 0.0025 m-1 when compared to independent data (at 440 nm). The only limitation for using this method at sea is that it relies on the availability of relatively large volumes of ultrapure water. Despite this limitation, the method can deliver the ay data needed for validating and assessing uncertainties in ocean-colour algorithms.

  16. Validation of the Abdominal Pain Index Using a Revised Scoring Method

    PubMed Central

    Sherman, Amanda L.; Smith, Craig A.; Walker, Lynn S.

    2015-01-01

    Objective: Evaluate the psychometric properties of child- and parent-report versions of the four-item Abdominal Pain Index (API) in children with functional abdominal pain (FAP) and healthy controls, using a revised scoring method that facilitates comparisons of scores across samples and time. Methods: Pediatric patients aged 8–18 years with FAP and controls completed the API at baseline (N = 1,967); a subset of their parents (N = 290) completed the API regarding the child’s pain. Subsets of patients completed follow-up assessments at 2 weeks (N = 231), 3 months (N = 330), and 6 months (N = 107). Subsets of both patients (N = 389) and healthy controls (N = 172) completed a long-term follow-up assessment (mean age at follow-up = 20.21 years, SD = 3.75). Results: The API demonstrated good concurrent, discriminant, and construct validity, as well as good internal consistency. Conclusion: We conclude that the API, using the revised scoring method, is a useful, reliable, and valid measure of abdominal pain severity. PMID:25617048

  17. Construction concepts and validation of the 3D printed UST_2 modular stellarator

    NASA Astrophysics Data System (ADS)

    Queral, V.

    2015-03-01

    High accuracy, geometric complexity and thus high cost tend to hinder the advance of stellarator research. New manufacturing methods might now be developed for the production of small and middle-size stellarators. Such methods should demonstrate advantages with respect to common fabrication methods, like casting, cutting, forging and welding, for the construction of advanced, highly convoluted modular stellarators. UST_2 is a small modular three-period quasi-isodynamic stellarator of major radius 0.26 m and plasma volume 10 litres, currently being built to validate additive manufacturing (3D printing) for stellarator construction. The modular coils are wound in grooves defined on six 3D-printed half-period frames designed as light truss structures filled by a strong filler. A geometrically simple assembling configuration has been devised for UST_2 so as to lower the cost of the device while keeping the positioning accuracy of the different elements. The paper summarizes the construction and assembling concepts developed, the devised positioning methodology, the design of the coil frames and positioning elements, and an initial validation of the assembling of the components.

  18. Eye-motion-corrected optical coherence tomography angiography using Lissajous scanning.

    PubMed

    Chen, Yiwei; Hong, Young-Joo; Makita, Shuichi; Yasuno, Yoshiaki

    2018-03-01

    To correct eye motion artifacts in en face optical coherence tomography angiography (OCT-A) images, a Lissajous scanning method with subsequent software-based motion correction is proposed. The standard Lissajous scanning pattern is modified to be compatible with OCT-A and a corresponding motion correction algorithm is designed. The effectiveness of our method was demonstrated by comparing en face OCT-A images with and without motion correction. The method was further validated by comparing motion-corrected images with scanning laser ophthalmoscopy images, and the repeatability of the method was evaluated using a checkerboard image. A motion-corrected en face OCT-A image from a blinking case is presented to demonstrate the ability of the method to deal with eye blinking. Results show that the method can produce accurate motion-free en face OCT-A images of the posterior segment of the eye in vivo.
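A Lissajous scan drives the two scanner axes with sinusoids of different frequencies, so the trajectory repeatedly crosses itself and revisits the field, which is what retrospective software correction can exploit. The sketch below merely generates such a trajectory, with illustrative parameters rather than those of the paper:

```python
import math

def lissajous_trajectory(fx, fy, n_samples, phase=math.pi / 2):
    """Sample a unit-amplitude Lissajous scan pattern over one full cycle.

    fx, fy: integer frequencies of the two scan axes (their ratio sets the
    pattern); the trajectory closes on itself after one period.
    """
    pts = []
    for k in range(n_samples):
        t = 2.0 * math.pi * k / n_samples
        pts.append((math.sin(fx * t + phase), math.sin(fy * t)))
    return pts

# Illustrative 3:2 pattern sampled at 1000 points.
path = lissajous_trajectory(3, 2, 1000)
# The trajectory stays inside the unit square and starts at (1, 0).
print(len(path), max(abs(x) for x, _ in path) <= 1.0)
```

Because every region is visited at several distinct times within one frame, displacements between revisits can be estimated and undone, unlike with a raster scan that visits each line only once.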

  19. Three-parameter error analysis method based on rotating coordinates in rotating birefringent polarizer system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Junjie; Jia, Hongzhi, E-mail: hzjia@usst.edu.cn

    2015-11-15

    We propose error analysis using a rotating coordinate system with three parameters of linearly polarized light—incidence angle, azimuth angle on the front surface, and angle between the incidence and vibration planes—and demonstrate the method on a rotating birefringent prism system. The transmittance and angles are calculated plane-by-plane using a birefringence ellipsoid model and the final transmitted intensity equation is deduced. The effects of oblique incidence, light interference, beam convergence, and misalignment of the rotation and prism axes are discussed. We simulate the entire error model using MATLAB and conduct experiments based on a built polarimeter. The simulation and experimental results are consistent and demonstrate the rationality and validity of this method.

  20. A method for reducing the order of nonlinear dynamic systems

    NASA Astrophysics Data System (ADS)

    Masri, S. F.; Miller, R. K.; Sassi, H.; Caughey, T. K.

    1984-06-01

    An approximate method that uses conventional condensation techniques for linear systems together with the nonparametric identification of the reduced-order model generalized nonlinear restoring forces is presented for reducing the order of discrete multidegree-of-freedom dynamic systems that possess arbitrary nonlinear characteristics. The utility of the proposed method is demonstrated by considering a redundant three-dimensional finite-element model half of whose elements incorporate hysteretic properties. A nonlinear reduced-order model, of one-third the order of the original model, is developed on the basis of wideband stationary random excitation and the validity of the reduced-order model is subsequently demonstrated by its ability to predict with adequate accuracy the transient response of the original nonlinear model under a different nonstationary random excitation.
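The "conventional condensation techniques for linear systems" mentioned above are typically static (Guyan) condensation. In standard textbook form (assumed here; the abstract does not spell it out), the slave DoFs s are eliminated in favor of the retained master DoFs m:

```latex
% Static (Guyan) condensation: partition the stiffness matrix by master (m)
% and slave (s) degrees of freedom, then eliminate the slaves statically.
K = \begin{bmatrix} K_{mm} & K_{ms} \\ K_{sm} & K_{ss} \end{bmatrix},
\qquad
T = \begin{bmatrix} I \\ -K_{ss}^{-1} K_{sm} \end{bmatrix},
\qquad
K_{\mathrm{red}} = K_{mm} - K_{ms} K_{ss}^{-1} K_{sm},
\quad
M_{\mathrm{red}} = T^{\mathsf{T}} M T .
```

The paper's contribution is then to identify the reduced model's generalized nonlinear restoring forces nonparametrically on top of this linear reduction.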

  1. A weight modification sequential method for VSC-MTDC power system state estimation

    NASA Astrophysics Data System (ADS)

    Yang, Xiaonan; Zhang, Hao; Li, Qiang; Guo, Ziming; Zhao, Kun; Li, Xinpeng; Han, Feng

    2017-06-01

    This paper presents an effective sequential approach based on weight modification for VSC-MTDC power system state estimation, called the weight modification sequential method. The proposed approach simplifies the AC/DC system state estimation algorithm by modifying the weight of the state quantity to keep the matrix dimension constant. The weight modification sequential method also makes the VSC-MTDC system state estimation results more accurate and increases the speed of calculation. The effectiveness of the proposed method is demonstrated and validated on a modified IEEE 14-bus system.

  2. Learning by Demonstration for Motion Planning of Upper-Limb Exoskeletons

    PubMed Central

    Lauretti, Clemente; Cordella, Francesca; Ciancio, Anna Lisa; Trigili, Emilio; Catalan, Jose Maria; Badesa, Francisco Javier; Crea, Simona; Pagliara, Silvio Marcello; Sterzi, Silvia; Vitiello, Nicola; Garcia Aracil, Nicolas; Zollo, Loredana

    2018-01-01

    The reference joint position of upper-limb exoskeletons is typically obtained by means of Cartesian motion planners and inverse kinematics algorithms with the inverse Jacobian; this approach allows exploiting the available Degrees of Freedom (i.e. DoFs) of the robot kinematic chain to achieve the desired end-effector pose; however, if used to operate non-redundant exoskeletons, it does not ensure that anthropomorphic criteria are satisfied in the whole human-robot workspace. This paper proposes a motion planning system, based on Learning by Demonstration, for upper-limb exoskeletons that allows patients to be successfully assisted during Activities of Daily Living (ADLs) in unstructured environments, while ensuring that anthropomorphic criteria are satisfied in the whole human-robot workspace. The motion planning system combines Learning by Demonstration with the computation of Dynamic Motion Primitives and machine learning techniques to construct task- and patient-specific joint trajectories based on the learnt trajectories. System validation was carried out in simulation and in a real setting with a 4-DoF upper-limb exoskeleton, a 5-DoF wrist-hand exoskeleton and four patients with Limb Girdle Muscular Dystrophy. Validation was addressed to (i) compare the performance of the proposed motion planning with traditional methods; and (ii) assess the generalization capabilities of the proposed method with respect to environment variability. Three ADLs were chosen to validate the system: drinking, pouring and lifting a light sphere. The achieved results showed a 100% success rate in task fulfillment, with a high level of generalization with respect to environment variability. Moreover, an anthropomorphic configuration of the exoskeleton is always ensured. PMID:29527161

  4. Validation of a Measure of Normative Beliefs About Smokeless Tobacco Use

    PubMed Central

    O’Connor, Richard J.; Bansal-Travers, Maansi; Cummings, K. Michael; Rees, Vaughan W.; Hatsukami, Dorothy K.

    2016-01-01

    Introduction: Validated methods to evaluate consumer responses to modified risk tobacco products (MRTPs) are needed. Guided by existing literature that demonstrates a relationship between normative beliefs and future intentions to use tobacco, the current research sought to (1) develop a measure of normative beliefs about smokeless tobacco (ST) and establish the underlying factor structure, (2) evaluate the structure with confirmatory factor analysis utilizing an independent sample of youth, and (3) establish the measure’s concurrent validity. Methods: Respondents (smokers and nonsmokers aged 15–65; N = 2991) completed a web-based survey that included demographic characteristics, tobacco use history and dependence, and a measure of attitudes about ST adapted from the Normative Beliefs about Smoking scale. A second sample of youth (aged 14–17; N = 305) completed a similar questionnaire. Results: Exploratory factor analysis produced the anticipated three-factor solution and accounted for nearly three-quarters of the variance in the data, reflecting (1) perceived prevalence of ST use, (2) popularity of ST among the successful/elite, and (3) approval of ST use by parents/peers. Confirmatory factor analysis with data from the youth sample demonstrated good model fit. Logistic regression demonstrated that the scales effectively discriminate between ST users and nonusers and are associated with interest in trying snus. Conclusions: Assessment of MRTPs for regulatory purposes, which allows messages of reduced risk, should include measurement of social norms. Furthermore, surveillance efforts that track use of new MRTPs should include measures of social norms to determine how norms change with prevalence of use. PMID:26187390

  5. Structured surface reflector design for oblique incidence beam splitter at 610 GHz.

    PubMed

    Defrance, F; Casaletti, M; Sarrazin, J; Wiedner, M C; Gibson, H; Gay, G; Lefèvre, R; Delorme, Y

    2016-09-05

    An iterative alternate projection-based algorithm is developed to design structured surface reflectors to operate as beam splitters at GHz and THz frequencies. To validate the method, a surface profile is determined to achieve a reflector at 610 GHz that generates four equal-intensity beams towards desired directions of ±12.6° with respect to the specular reflection axis. A prototype is fabricated and the beam splitter behavior is experimentally demonstrated. Measurements confirm a good agreement (within 1%) with computer simulations using Feko, validating the method. The beam splitter at 610 GHz has a measured efficiency of 78% under oblique incidence illumination that ensures a similar intensity between the four reflected beams (variation of about 1%).
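
    The alternate-projection idea can be illustrated with a simplified scalar model in which the reflector aperture and the far field are linked by a Fourier transform: the algorithm alternately enforces the phase-only constraint in the aperture plane and the target beam pattern in the far-field plane. This is a minimal Gerchberg-Saxton-style sketch under that simplified model, not the authors' algorithm or their electromagnetic solver.

    ```python
    import numpy as np

    def design_phase_profile(target_amp, n_iter=200, seed=0):
        """Iterative alternate projections for a phase-only reflector:
        alternately impose unit amplitude in the aperture plane and the
        desired amplitude pattern in the far field (related by an FFT)."""
        rng = np.random.default_rng(seed)
        phase = rng.uniform(0, 2 * np.pi, target_amp.size)
        for _ in range(n_iter):
            far = np.fft.fft(np.exp(1j * phase))            # propagate to far field
            far = target_amp * np.exp(1j * np.angle(far))   # far-field constraint
            near = np.fft.ifft(far)                         # propagate back
            phase = np.angle(near)                          # phase-only constraint
        return phase
    ```

    For a four-beam splitter, `target_amp` would hold four equal-amplitude spikes at the desired far-field directions; the returned phase profile plays the role of the structured surface height map (scaled by the wavelength).
    
    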

  6. Cognitive assessment in mathematics with the least squares distance method.

    PubMed

    Ma, Lin; Çetin, Emre; Green, Kathy E

    2012-01-01

    This study investigated the validation of comprehensive cognitive attributes of an eighth-grade mathematics test using the least squares distance method and compared performance on attributes by gender and region. A sample of 5,000 students was randomly selected from the data of the 2005 Turkish national mathematics assessment of eighth-grade students. Twenty-five math items were assessed for the presence or absence of 20 cognitive attributes (content, cognitive processes, and skill). Four attributes were found to be misspecified or nonpredictive. However, results demonstrated the validity of the cognitive attributes in terms of the revised set of 17 attributes. Girls performed similarly to boys on the attributes. Students from the two eastern regions significantly underperformed on most attributes.

  7. A novel artificial neural network method for biomedical prediction based on matrix pseudo-inversion.

    PubMed

    Cai, Binghuang; Jiang, Xia

    2014-04-01

    Biomedical prediction based on clinical and genome-wide data has become increasingly important in disease diagnosis and classification. To solve the prediction problem in an effective manner for the improvement of clinical care, we develop a novel Artificial Neural Network (ANN) method based on Matrix Pseudo-Inversion (MPI) for use in biomedical applications. The MPI-ANN is constructed as a three-layer (i.e., input, hidden, and output layers) feed-forward neural network, and the weights connecting the hidden and output layers are directly determined based on MPI without a lengthy learning iteration. The LASSO (Least Absolute Shrinkage and Selection Operator) method is also presented for comparative purposes. Single Nucleotide Polymorphism (SNP) simulated data and real breast cancer data are employed to validate the performance of the MPI-ANN method via 5-fold cross validation. Experimental results demonstrate the efficacy of the developed MPI-ANN for disease classification and prediction, in view of the significantly superior accuracy (i.e., the rate of correct predictions), as compared with LASSO. The results based on the real breast cancer data also show that the MPI-ANN has better performance than other machine learning methods (including support vector machine (SVM), logistic regression (LR), and an iterative ANN). In addition, experiments demonstrate that our MPI-ANN could be used for bio-marker selection as well. Copyright © 2013 Elsevier Inc. All rights reserved.
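
    The paper's core idea, determining the hidden-to-output weights in closed form via the Moore-Penrose pseudoinverse rather than by iterative training, can be sketched generically. The function names, hidden-layer size and activation below are illustrative assumptions, not the authors' exact architecture.

    ```python
    import numpy as np

    def train_mpi_ann(X, y, n_hidden=50, seed=0):
        """Three-layer feed-forward network in the MPI-ANN spirit:
        random input-to-hidden weights; hidden-to-output weights solved
        in one step via the Moore-Penrose pseudoinverse (no iterative
        learning loop)."""
        rng = np.random.default_rng(seed)
        W_in = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W_in + b)         # hidden-layer activations
        W_out = np.linalg.pinv(H) @ y     # closed-form least-squares weights
        return W_in, b, W_out

    def predict(X, W_in, b, W_out):
        return np.tanh(X @ W_in + b) @ W_out
    ```

    Because `pinv` yields the minimum-norm least-squares solution, training amounts to a single linear solve, which is what removes the "lengthy learning iteration" the abstract contrasts against.
    
    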

  8. Determination of Yohimbine in Yohimbe Bark and Related Dietary Supplements Using UHPLC-UV/MS: Single-Laboratory Validation.

    PubMed

    Chen, Pei; Bryden, Noella

    2015-01-01

    A single-laboratory validation was performed on a practical ultra-HPLC (UHPLC)-diode array detector (DAD)/tandem MS method for the determination of yohimbine in yohimbe bark and related dietary supplements. Good separation was achieved using a Waters Acquity ethylene bridged hybrid C18 column with gradient elution using 0.1% (v/v) aqueous ammonium hydroxide and 0.1% ammonium hydroxide in methanol as the mobile phases. The method can separate corynanthine from yohimbine in yohimbe bark extract, which is critical for accurate quantitation of yohimbine in yohimbe bark and related dietary supplements. Accuracy of the method was demonstrated using standard addition methods. Both intraday and interday precisions of the method were good. The method can be used without MS, since yohimbine concentrations in yohimbe bark and related dietary supplements are usually high enough for DAD detection, making it an easy and economical method for routine analysis. On the other hand, the method can be used with MS if desired for more challenging work, such as biological and/or clinical studies.

  9. Developing 3D microscopy with CLARITY on human brain tissue: Towards a tool for informing and validating MRI-based histology.

    PubMed

    Morawski, Markus; Kirilina, Evgeniya; Scherf, Nico; Jäger, Carsten; Reimann, Katja; Trampel, Robert; Gavriilidis, Filippos; Geyer, Stefan; Biedermann, Bernd; Arendt, Thomas; Weiskopf, Nikolaus

    2017-11-28

    Recent breakthroughs in magnetic resonance imaging (MRI) enabled quantitative relaxometry and diffusion-weighted imaging with sub-millimeter resolution. Combined with biophysical models of MR contrast the emerging methods promise in vivo mapping of cyto- and myelo-architectonics, i.e., in vivo histology using MRI (hMRI) in humans. The hMRI methods require histological reference data for model building and validation. This is currently provided by MRI on post mortem human brain tissue in combination with classical histology on sections. However, this well established approach is limited to qualitative 2D information, while a systematic validation of hMRI requires quantitative 3D information on macroscopic voxels. We present a promising histological method based on optical 3D imaging combined with a tissue clearing method, Clear Lipid-exchanged Acrylamide-hybridized Rigid Imaging compatible Tissue hYdrogel (CLARITY), adapted for hMRI validation. Adapting CLARITY to the needs of hMRI is challenging due to poor antibody penetration into large sample volumes and high opacity of aged post mortem human brain tissue. In a pilot experiment we achieved transparency of up to 8 mm-thick and immunohistochemical staining of up to 5 mm-thick post mortem brain tissue by a combination of active and passive clearing, prolonged clearing and staining times. We combined 3D optical imaging of the cleared samples with tailored image processing methods. We demonstrated the feasibility for quantification of neuron density, fiber orientation distribution and cell type classification within a volume with size similar to a typical MRI voxel. The presented combination of MRI, 3D optical microscopy and image processing is a promising tool for validation of MRI-based microstructure estimates. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Robotic suturing on the FLS model possesses construct validity, is less physically demanding, and is favored by more surgeons compared with laparoscopy.

    PubMed

    Stefanidis, Dimitrios; Hope, William W; Scott, Daniel J

    2011-07-01

    The value of robotic assistance for intracorporeal suturing is not well defined. We compared robotic suturing with laparoscopic suturing on the FLS model with a large cohort of surgeons. Attendees (n=117) at the SAGES 2006 Learning Center robotic station placed intracorporeal sutures on the FLS box-trainer model using conventional laparoscopic instruments and the da Vinci® robot. Participant performance was recorded using a validated objective scoring system, and a questionnaire regarding demographics, task workload, and suturing modality preference was completed. Construct validity for both tasks was assessed by comparing the performance scores of subjects with various levels of experience. A validated questionnaire was used for workload measurement. Of the participants, 84% had prior laparoscopic and 10% prior robotic suturing experience. Within the allotted time, 83% of participants completed the suturing task laparoscopically and 72% with the robot. Construct validity was demonstrated for both simulated tasks according to the participants' advanced laparoscopic experience, laparoscopic suturing experience, and self-reported laparoscopic suturing ability (p<0.001 for all) and according to prior robotic experience, robotic suturing experience, and self-reported robotic suturing ability (p<0.001 for all), respectively. While participants achieved higher suturing scores with standard laparoscopy compared with the robot (84±75 vs. 56±63, respectively; p<0.001), they found the laparoscopic task more physically demanding (NASA score 13±5 vs. 10±5, respectively; p<0.001) and favored the robot as their method of choice for intracorporeal suturing (62 vs. 38%, respectively; p<0.01). Construct validity was demonstrated for robotic suturing on the FLS model. Suturing scores were higher using standard laparoscopy likely as a result of the participants' greater experience with laparoscopic suturing versus robotic suturing. 
Robotic assistance decreases the physical demand of intracorporeal suturing compared with conventional laparoscopy and, in this study, was the preferred suturing method by most surgeons. Curricula for robotic suturing training need to be developed.

  11. Development and validation of polar RP-HPLC method for screening for ectoine high-yield strains in marine bacteria with green chemistry.

    PubMed

    Chen, Jun; Chen, Jianwei; Wang, Sijia; Zhou, Guangmin; Chen, Danqing; Zhang, Huawei; Wang, Hong

    2018-04-02

    A novel, green, rapid, and precise polar RP-HPLC method has been successfully developed to screen for ectoine high-yield strains in marine bacteria. Ectoine is a polar and extremely useful solute that allows microorganisms to survive in extreme environmental salinity. This paper describes a polar HPLC method employing a polar RP-C18 column (5 μm, 250 × 4.6 mm) with pure water as the mobile phase, a column temperature of 30 °C, a flow rate of 1.0 mL/min, and UV detection at a wavelength of 210 nm. Method validation demonstrated excellent linearity (R² = 0.9993) and accuracy (100.55%), with an LOQ and LOD of 0.372 and 0.123 μg mL⁻¹, respectively. These results clearly indicate that the developed polar RP-HPLC method for the separation and determination of ectoine is superior to earlier protocols.
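
    For context, detection and quantification limits like those reported above are commonly derived from the calibration curve using the ICH Q2(R1) formulas LOD = 3.3σ/S and LOQ = 10σ/S, where S is the slope and σ the residual standard deviation of the regression. The sketch below implements that generic ICH approach; it is not necessarily the exact procedure the authors used.

    ```python
    import numpy as np

    def lod_loq_from_calibration(conc, response):
        """Estimate LOD and LOQ from a linear calibration curve using the
        ICH Q2(R1) formulas LOD = 3.3*sigma/S and LOQ = 10*sigma/S."""
        conc = np.asarray(conc, float)
        response = np.asarray(response, float)
        slope, intercept = np.polyfit(conc, response, 1)
        residuals = response - (slope * conc + intercept)
        # Residual standard deviation with n - 2 degrees of freedom
        sigma = np.sqrt(np.sum(residuals ** 2) / (len(conc) - 2))
        return 3.3 * sigma / slope, 10 * sigma / slope
    ```

    By construction the LOQ/LOD ratio is fixed at 10/3.3 ≈ 3.03, which matches the reported 0.372/0.123 values to within rounding.
    
    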

  12. Multiplex cDNA quantification method that facilitates the standardization of gene expression data

    PubMed Central

    Gotoh, Osamu; Murakami, Yasufumi; Suyama, Akira

    2011-01-01

    Microarray-based gene expression measurement is one of the major methods for transcriptome analysis. However, current microarray data are substantially affected by microarray platforms and RNA references because the microarray method provides merely the relative amounts of gene expression levels. Therefore, valid comparisons of microarray data require standardized platforms, internal and/or external controls and complicated normalizations. These requirements impose limitations on the extensive comparison of gene expression data. Here, we report an effective approach to removing these unfavorable limitations by measuring the absolute amounts of gene expression levels on common DNA microarrays. We have developed a multiplex cDNA quantification method called GEP-DEAN (Gene expression profiling by DCN-encoding-based analysis). The method was validated by using chemically synthesized DNA strands of known quantities and cDNA samples prepared from mouse liver, demonstrating that the absolute amounts of cDNA strands were successfully measured with a sensitivity of 18 zmol in a highly multiplexed manner in 7 h. PMID:21415008

  13. An IMU-to-Body Alignment Method Applied to Human Gait Analysis.

    PubMed

    Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo

    2016-12-10

    This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.

  14. Validation of a Smartphone Image-Based Dietary Assessment Method for Pregnant Women

    PubMed Central

    Ashman, Amy M.; Collins, Clare E.; Brown, Leanne J.; Rae, Kym M.; Rollo, Megan E.

    2017-01-01

    Image-based dietary records could lower participant burden associated with traditional prospective methods of dietary assessment. They have been used in children, adolescents and adults, but have not been evaluated in pregnant women. The current study evaluated relative validity of the DietBytes image-based dietary assessment method for assessing energy and nutrient intakes. Pregnant women collected image-based dietary records (via a smartphone application) of all food, drinks and supplements consumed over three non-consecutive days. Intakes from the image-based method were compared to intakes collected from three 24-h recalls, taken on random days; once per week, in the weeks following the image-based record. Data were analyzed using nutrient analysis software. Agreement between methods was ascertained using Pearson correlations and Bland-Altman plots. Twenty-five women (27 recruited, one withdrew, one incomplete), median age 29 years, 15 primiparas, eight Aboriginal Australians, completed image-based records for analysis. Significant correlations between the two methods were observed for energy, macronutrients and fiber (r = 0.58–0.84, all p < 0.05), and for micronutrients both including (r = 0.47–0.94, all p < 0.05) and excluding (r = 0.40–0.85, all p < 0.05) supplements in the analysis. Bland-Altman plots confirmed acceptable agreement with no systematic bias. The DietBytes method demonstrated acceptable relative validity for assessment of nutrient intakes of pregnant women. PMID:28106758
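
    The agreement analysis used in this kind of validation study, Pearson correlation plus Bland-Altman bias and 95% limits of agreement, can be sketched generically. This is illustrative code, not the study's analysis scripts.

    ```python
    import numpy as np

    def agreement_stats(method_a, method_b):
        """Pearson correlation plus Bland-Altman statistics between two
        measurement methods: mean bias of the pairwise differences and
        95% limits of agreement (bias +/- 1.96 SD of the differences)."""
        a = np.asarray(method_a, float)
        b = np.asarray(method_b, float)
        r = np.corrcoef(a, b)[0, 1]
        diff = a - b
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return r, bias, (bias - 1.96 * sd, bias + 1.96 * sd)
    ```

    A bias near zero with narrow limits of agreement (and no trend of the differences against the means) is what "acceptable agreement with no systematic bias" refers to.
    
    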

  15. Validation of a new ELISA method for in vitro potency testing of hepatitis A vaccines.

    PubMed

    Morgeaux, S; Variot, P; Daas, A; Costanzo, A

    2013-01-01

    The goal of the project was to standardise a new in vitro method in replacement of the existing standard method for the determination of hepatitis A virus antigen content in hepatitis A vaccines (HAV) marketed in Europe. This became necessary due to issues with the method used previously, requiring the use of commercial test kits. The selected candidate method, not based on commercial kits, had already been used for many years by an Official Medicines Control Laboratory (OMCL) for routine testing and batch release of HAV. After a pre-qualification phase (Phase 1) that showed the suitability of the commercially available critical ELISA reagents for the determination of antigen content in marketed HAV present on the European market, an international collaborative study (Phase 2) was carried out in order to fully validate the method. Eleven laboratories took part in the collaborative study. They performed assays with the candidate standard method and, in parallel, for comparison purposes, with their own in-house validated methods where these were available. The study demonstrated that the new assay provides a more reliable and reproducible method when compared to the existing standard method. A good correlation of the candidate standard method with the in vivo immunogenicity assay in mice was shown previously for both potent and sub-potent (stressed) vaccines. Thus, the new standard method validated during the collaborative study may be implemented readily by manufacturers and OMCLs for routine batch release but also for in-process control or consistency testing. The new method was approved in October 2012 by Group of Experts 15 of the European Pharmacopoeia (Ph. Eur.) as the standard method for in vitro potency testing of HAV. The relevant texts will be revised accordingly. Critical reagents such as coating reagent and detection antibodies have been adopted by the Ph. Eur. Commission and are available from the EDQM as Ph. Eur. Biological Reference Reagents (BRRs).

  16. Validated UV-spectrophotometric method for the evaluation of the efficacy of makeup remover.

    PubMed

    Charoennit, P; Lourith, N

    2012-04-01

    A UV-spectrophotometric method for the analysis of makeup remover was developed and validated according to ICH guidelines. Three makeup removers whose main ingredients consisted of vegetable oil (A), mineral oil and silicone (B), and mineral oil and water (C) were sampled in this study. Ethanol was the optimal solvent because it did not interfere with the maximum absorbance of the liquid foundation at 250 nm. Linearity was determined over a range of makeup concentrations from 0.540 to 1.412 mg mL⁻¹ (R² = 0.9977). The accuracy of this method was determined by analysing low, intermediate and high concentrations of the liquid foundation, giving 78.59-91.57% recoveries with a relative standard deviation of <2% (0.56-1.45%). This result demonstrates the validity and reliability of the method. The reproducibilities were 97.32 ± 1.79, 88.34 ± 2.69 and 95.63 ± 2.94 for preparations A, B and C respectively, within the acceptable limits set forth by the ASEAN analytical validation guidelines, ensuring the precision of the method under the same operating conditions over a short time interval as well as the inter-assay precision within the laboratory. The proposed method is therefore a simple, rapid, accurate, precise and inexpensive technique for the routine analysis of makeup remover efficacy. © 2011 The Authors. ICS © 2011 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  17. Performance analysis of a Principal Component Analysis ensemble classifier for Emotiv headset P300 spellers.

    PubMed

    Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M

    2014-01-01

    The current trend to use Brain-Computer Interfaces (BCIs) with mobile devices mandates the development of efficient EEG data processing methods. In this paper, we demonstrate the performance of a Principal Component Analysis (PCA) ensemble classifier for P300-based spellers. We recorded EEG data from multiple subjects using the Emotiv neuroheadset in the context of a classical oddball P300 speller paradigm. We compare the performance of the proposed ensemble classifier to the performance of traditional feature extraction and classifier methods. Our results demonstrate the capability of the PCA ensemble classifier to classify P300 data recorded using the Emotiv neuroheadset with an average accuracy of 86.29% on cross-validation data. In addition, offline testing of the recorded data reveals an average classification accuracy of 73.3% that is significantly higher than that achieved using traditional methods. Finally, we demonstrate the effect of the parameters of the P300 speller paradigm on the performance of the method.
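
    One building block of the approach, PCA-based dimensionality reduction followed by a classifier scored with cross-validation, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' ensemble or their Emotiv recordings; the feature dimensions and class shift are assumptions.

    ```python
    # Minimal sketch: PCA feature reduction + linear classifier with
    # k-fold cross-validation, one member of a PCA-based ensemble.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    # Synthetic stand-in for epoched EEG features: 200 trials x 64 features,
    # with a class-dependent mean shift mimicking target vs. non-target epochs.
    y = rng.integers(0, 2, 200)
    X = rng.standard_normal((200, 64)) + 1.0 * y[:, None]

    clf = make_pipeline(PCA(n_components=10),
                        LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5)
    print(scores.mean())
    ```

    An ensemble version would train several such pipelines (e.g. on different channel subsets or data folds) and combine their scores before deciding which stimulus row/column elicited the P300.
    
    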

  18. Patient-specific lean body mass can be estimated from limited-coverage computed tomography images.

    PubMed

    Devriese, Joke; Beels, Laurence; Maes, Alex; van de Wiele, Christophe; Pottel, Hans

    2018-06-01

    In PET/CT, quantitative evaluation of tumour metabolic activity is possible through standardized uptake values, usually normalized for body weight (BW) or lean body mass (LBM). Patient-specific LBM can be estimated from whole-body (WB) CT images. As most clinical indications only warrant PET/CT examinations covering head to midthigh, the aim of this study was to develop a simple and reliable method to estimate LBM from limited-coverage (LC) CT images and test its validity. Head-to-toe PET/CT examinations were retrospectively retrieved and semiautomatically segmented into tissue types based on thresholding of CT Hounsfield units. LC was obtained by omitting image slices. Image segmentation was validated on the WB CT examinations by comparing CT-estimated BW with actual BW, and LBM estimated from LC images was compared with LBM estimated from WB images. A direct method and an indirect method were developed and validated on an independent data set. Comparing LBM estimated from LC examinations with estimates from WB examinations (LBMWB) showed a significant but limited bias of 1.2 kg (direct method) and a nonsignificant bias of 0.05 kg (indirect method). This study demonstrates that LBM can be estimated from LC CT images with no significant difference from LBMWB.
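
    The segmentation step, classifying voxels into tissue types by thresholding Hounsfield units and accumulating tissue masses, can be sketched as below. The HU ranges and densities here are common textbook assumptions, not the paper's calibrated values.

    ```python
    import numpy as np

    # Illustrative HU thresholds and tissue densities (assumptions).
    HU_RANGES = {            # (low, high) in Hounsfield units
        "adipose": (-190, -30),
        "lean":    (-29, 150),
        "bone":    (151, 2000),
    }
    DENSITY = {"adipose": 0.92, "lean": 1.04, "bone": 1.85}  # g/cm^3

    def tissue_masses(ct_volume, voxel_volume_cm3):
        """Classify CT voxels by HU range and accumulate per-tissue mass
        (grams). LBM is then approximated from the non-adipose masses."""
        masses = {}
        for tissue, (lo, hi) in HU_RANGES.items():
            n_voxels = np.count_nonzero((ct_volume >= lo) & (ct_volume <= hi))
            masses[tissue] = n_voxels * voxel_volume_cm3 * DENSITY[tissue]
        return masses
    ```

    Estimating LBM from limited coverage then reduces to either scaling these masses to whole-body values (direct) or deriving LBM from a CT-estimated BW (indirect), per the two methods the abstract compares.
    
    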

  19. In vivo measurement of aerodynamic weight support in freely flying birds

    NASA Astrophysics Data System (ADS)

    Lentink, David; Haselsteiner, Andreas; Ingersoll, Rivers

    2014-11-01

    Birds dynamically change the shape of their wing during the stroke to support their body weight aerodynamically. The wing is partially folded during the upstroke, which suggests that the upstroke of birds might not actively contribute to aerodynamic force production. This hypothesis is supported by the significant mass difference between the large pectoralis muscle that powers the downstroke and the much smaller supracoracoideus that drives the upstroke. Previous work used indirect or incomplete techniques to measure the total force generated by bird wings, ranging from muscle force, airflow and wing surface pressure to detailed kinematics measurements coupled with bird mass-distribution models that derive net force through second derivatives. We have validated a new method that directly measures time-resolved aerodynamic forces in vivo on freely flying birds, which can resolve this question. Validation of the method, using independent force measurements on a quadcopter with pulsating thrust, shows that aerodynamic force and impulse are measured time-resolved and within 2% accuracy. We demonstrate results for quadcopters and birds of similar weight and size. The method is scalable and can be applied to both engineered and natural flyers across taxa. The first author invented the method; the second and third authors validated the method and present results for quadcopters and birds.

  20. Monitoring of platinum surface contamination in seven Dutch hospital pharmacies using inductively coupled plasma mass spectrometry

    PubMed Central

    Huitema, A. D. R.; Bakker, E. N.; Douma, J. W.; Schimmel, K. J. M.; van Weringh, G.; de Wolf, P. J.; Schellens, J. H. M.; Beijnen, J. H.

    2007-01-01

    Objective: To develop, validate, and apply a method for the determination of platinum contamination, originating from cisplatinum, oxaliplatinum, and carboplatinum. Methods: Inductively coupled plasma mass spectrometry (ICP-MS) was used to determine platinum in wipe samples. The sampling procedure and the analytical conditions were optimised and the assay was validated. The method was applied to measure surface contamination in seven Dutch hospital pharmacies. Results: The developed method allowed reproducible quantification of 0.50 ng l−1 platinum (5 pg/wipe sample). Recoveries for stainless steel and linoleum surfaces ranged between 50.4 and 81.4% for the different platinum compounds tested. Platinum contamination was reported in 88% of the wipe samples. Although a substantial variation in surface contamination of the pharmacies was noticed, in most pharmacies, the laminar-airflow (LAF) hoods, the floor in front of the LAF hoods, door handles, and handles of service hatches showed positive results. This demonstrates that contamination is spread throughout the preparation rooms. Conclusion: We developed and validated an ultra sensitive and reliable ICP-MS method for the determination of platinum in surface samples. Surface contamination with platinum was observed in all hospital pharmacies sampled. The interpretation of these results is, however, complicated. PMID:17377802

  1. Task-oriented evaluation of electronic medical records systems: development and validation of a questionnaire for physicians.

    PubMed

    Laerum, Hallvard; Faxvaag, Arild

    2004-02-09

    Evaluation is a challenging but necessary part of the development cycle of clinical information systems like the electronic medical records (EMR) system. It is believed that such evaluations should include multiple perspectives, be comparative and employ both qualitative and quantitative methods. Self-administered questionnaires are frequently used as a quantitative evaluation method in medical informatics, but very few validated questionnaires address clinical use of EMR systems. We have developed a task-oriented questionnaire for evaluating EMR systems from the clinician's perspective. The key feature of the questionnaire is a list of 24 general clinical tasks. It is applicable to physicians of most specialties and covers essential parts of their information-oriented work. The task list appears in two separate sections, about EMR use and task performance using the EMR, respectively. By combining these sections, the evaluator may estimate the potential impact of the EMR system on health care delivery. The results may also be compared across time, site or vendor. This paper describes the development, performance and validation of the questionnaire. Its performance is shown in two demonstration studies (n = 219 and 80). Its content is validated in an interview study (n = 10), and its reliability is investigated in a test-retest study (n = 37) and a scaling study (n = 31). In the interviews, the physicians found the general clinical tasks in the questionnaire relevant and comprehensible. The tasks were interpreted concordant to their definitions. However, the physicians found questions about tasks not explicitly or only partially supported by the EMR systems difficult to answer. The two demonstration studies provided unambiguous results and low percentages of missing responses. In addition, criterion validity was demonstrated for a majority of task-oriented questions. 
Their test-retest reliability was generally high, and the non-standard scale was found symmetric and ordinal. This questionnaire is relevant for clinical work and EMR systems, provides reliable and interpretable results, and may be used as part of any evaluation effort involving the clinician's perspective of an EMR system.

  2. Crack Detection in Fibre Reinforced Plastic Structures Using Embedded Fibre Bragg Grating Sensors: Theory, Model Development and Experimental Validation

    PubMed Central

    Pereira, G. F.; Mikkelsen, L. P.; McGugan, M.

    2015-01-01

    In a fibre-reinforced polymer (FRP) structure designed using the emerging damage tolerance and structural health monitoring philosophy, sensors and models that describe crack propagation will enable a structure to operate despite the presence of damage by fully exploiting the material’s mechanical properties. When applying this concept to different structures, sensor systems and damage types, a combination of damage mechanics, monitoring technology, and modelling is required. The primary objective of this article is to demonstrate such a combination. This article is divided into three main topics: the damage mechanism (delamination of FRP), the structural health monitoring technology (fibre Bragg gratings to detect delamination), and the finite element method model of the structure that incorporates these concepts into a final and integrated damage-monitoring concept. A novel method for assessing a crack growth/damage event in fibre-reinforced polymer or structural adhesive-bonded structures using embedded fibre Bragg grating (FBG) sensors is presented by combining conventional measured parameters, such as wavelength shift, with parameters associated with measurement errors, typically ignored by the end-user. Conjointly, a novel model for sensor output prediction (virtual sensor) was developed using this FBG sensor crack monitoring concept and implemented in a finite element method code. The monitoring method was demonstrated and validated using glass fibre double cantilever beam specimens instrumented with an array of FBG sensors embedded in the material and tested using an experimental fracture procedure. The digital image correlation technique was used to validate the model prediction by correlating the specific sensor response caused by the crack with the developed model. PMID:26513653

  3. Psychometric Arabic Sino-Nasal Outcome Test-22: validation and translation in chronic rhinosinusitis patients.

    PubMed

    Alanazy, Fatma; Dousary, Surayie Al; Albosaily, Ahmed; Aldriweesh, Turki; Alsaleh, Saad; Aldrees, Turki

    2018-01-01

    The Sino-Nasal Outcome Test (SNOT)-22 has multiple items that reflect how nasal disease affects quality of life. Currently, no validated Arabic version of the SNOT-22 is available. To develop an Arabic-validated version of the SNOT-22. Prospective. Tertiary care center. This single-center validation study was conducted between 2015 and 2017 at King Abdul-Aziz University Hospital, Riyadh, Saudi Arabia. The SNOT-22 English version was translated into Arabic by the forward and backward method. Test-retest reliability, internal consistency, responsiveness to surgical treatment, discriminant validity, sensitivity and specificity were all tested. Validated Arabic version of the SNOT-22. Of 265 individuals, 171 were healthy volunteers and 94 were chronic rhinosinusitis patients. The Arabic version showed high internal consistency (Cronbach's α of 0.94) and the ability to differentiate between diseased and healthy volunteers (P < .001). The translated version demonstrated the ability to detect change scores significantly in response to intervention (P < .001). This is the first validated Arabic version of the SNOT-22. The instrument can be used among the Arabic population. No subjects from other Arab countries were included.

  4. Multiyear Plan for Validation of EnergyPlus Multi-Zone HVAC System Modeling using ORNL's Flexible Research Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Im, Piljae; Bhandari, Mahabir S.; New, Joshua Ryan

    This document describes the Oak Ridge National Laboratory (ORNL) multiyear experimental plan for validation and uncertainty characterization of whole-building energy simulation for a multi-zone research facility using a traditional rooftop unit (RTU) as a baseline heating, ventilating, and air conditioning (HVAC) system. The project’s overarching objective is to increase the accuracy of energy simulation tools by enabling empirical validation of key inputs and algorithms. Doing so is required to inform the design of increasingly integrated building systems and to enable accountability for performance gaps between design and operation of a building. The project will produce documented data sets that can be used to validate key functionality in different energy simulation tools and to identify errors and inadequate assumptions in simulation engines so that developers can correct them. ASHRAE Standard 140, Method of Test for the Evaluation of Building Energy Analysis Computer Programs (ASHRAE 2004), currently consists primarily of tests to compare different simulation programs with one another. This project will generate sets of measured data to enable empirical validation, incorporate these test data sets in an extended version of Standard 140, and apply these tests to the Department of Energy’s (DOE) EnergyPlus software (EnergyPlus 2016) to initiate the correction of any significant deficiencies. The fitness-for-purpose of the key algorithms in EnergyPlus will be established and demonstrated, and vendors of other simulation programs will be able to demonstrate the validity of their products. The data set will be equally applicable to validation of other simulation engines as well.

  5. Couple of the Variational Iteration Method and Fractional-Order Legendre Functions Method for Fractional Differential Equations

    PubMed Central

    Song, Junqiang; Leng, Hongze; Lu, Fengshun

    2014-01-01

    We present a new numerical method to get the approximate solutions of fractional differential equations. A new operational matrix of integration for fractional-order Legendre functions (FLFs) is first derived. Then a modified variational iteration formula which can avoid “noise terms” is constructed. Finally a numerical method based on variational iteration method (VIM) and FLFs is developed for fractional differential equations (FDEs). Block-pulse functions (BPFs) are used to calculate the FLFs coefficient matrices of the nonlinear terms. Five examples are discussed to demonstrate the validity and applicability of the technique. PMID:24511303
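
    The key device in this method is the operational matrix, which reduces fractional integration to a matrix-vector product. A minimal sketch of the idea in assumed notation (the symbols Φ and P^(α) are our labels; the abstract does not fix any symbols):

```latex
% Riemann--Liouville fractional integral of order \alpha:
I^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-\tau)^{\alpha-1} f(\tau)\,\mathrm{d}\tau .
% With \Phi(t) = [FL_0(t), \dots, FL_m(t)]^{T} a vector of fractional-order
% Legendre functions, the operational matrix P^{(\alpha)} satisfies
I^{\alpha} \Phi(t) \approx P^{(\alpha)} \Phi(t),
% so applying the fractional integral to a truncated expansion
% f(t) \approx c^{T}\Phi(t) reduces to the product c^{T} P^{(\alpha)} \Phi(t).
```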

  6. Evaluation of the Thermo Scientific SureTect Listeria monocytogenes Assay.

    PubMed

    Cloke, Jonathan; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron; Evans, Katharine; Crabtree, David; Hughes, Annette; Hopper, Craig; Simpson, Helen; Withey, Sophie; Oleksiuk, Milena; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko

    2014-01-01

    The Thermo Scientific SureTect Listeria monocytogenes Assay is a new real-time PCR assay for the detection of Listeria monocytogenes in food and environmental samples. This assay was validated using the AOAC Research Institute (AOAC-RI) Performance Tested Methods program in comparison to the reference method detailed in International Organization for Standardization 11290-1:1996, including Amendment 1:2004 with the following foods and food contact surfaces: smoked salmon, processed cheese, fresh bagged spinach, fresh cantaloupe, cooked prawns (chilled product), cooked sliced turkey meat (chilled product), ice cream, pork frankfurters, salami, ground raw beef meat (12% fat), plastic, and stainless steel. All matrixes were tested by Thermo Fisher Scientific, Microbiology Division, Basingstoke, UK. In addition, three matrixes (pork frankfurters, bagged lettuce, and stainless steel) were analyzed independently as part of the AOAC-RI controlled laboratory study by the University of Guelph, Canada. Using probability of detection (POD) statistical analysis, a significant difference was demonstrated between the candidate and reference methods for salami, cooked sliced turkey and ice cream in favor of the SureTect assay. For all other matrixes, no significant difference by POD was seen between the two methods during the study. Inclusivity and exclusivity testing was also conducted with 53 and 30 isolates, respectively, which demonstrated that the SureTect assay was able to detect all serotypes of L. monocytogenes. None of the exclusivity isolates analyzed were detected by the SureTect assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside the recommended parameters open to variation, i.e., enrichment time and temperature and lysis temperature, which demonstrated that the assay gave reliable performance. Accelerated stability testing was also conducted, validating the assay shelf life.

  7. A Comparison of Reliability and Construct Validity between the Original and Revised Versions of the Rosenberg Self-Esteem Scale

    PubMed Central

    Nahathai, Wongpakaran

    2012-01-01

    Objective The Rosenberg Self-Esteem Scale (RSES) is a widely used instrument that has been tested for reliability and validity in many settings; however, some negatively worded items appear to have caused it to show low reliability in a number of studies. In this study, we revised the one negative item that had produced the worst outcome for the structure of the scale in previous studies, then re-analyzed the new version for its reliability and construct validity, comparing it to the original version with respect to fit indices. Methods In total, 851 students from Chiang Mai University (mean age: 19.51±1.7, 57% of whom were female) participated in this study. Of these, 664 students completed the Thai version of the original RSES, containing five positively worded and five negatively worded items, while 187 students used the revised version containing six positively worded and four negatively worded items. Confirmatory factor analysis was applied, using a uni-dimensional model with method effects and a correlated uniqueness approach. Results The revised version showed the same level of reliability (good) as the original, but yielded a better model fit. The revised RSES demonstrated excellent fit statistics, with χ2=29.19 (df=19, n=187, p=0.063), GFI=0.970, TFI=0.969, NFI=0.964, CFI=0.987, SRMR=0.040 and RMSEA=0.054. Conclusion The revised version of the Thai RSES demonstrated an equivalent level of reliability but better construct validity when compared to the original. PMID:22396685
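
    The reliability comparison in this record rests on Cronbach's alpha. As a minimal illustration of how that coefficient is computed (the function and the toy data below are ours, not the study's):

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Two perfectly consistent items give the maximum alpha of 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```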

  8. Updated Systematic Review and Meta-Analysis of the Performance of Risk Prediction Rules in Children and Young People with Febrile Neutropenia

    PubMed Central

    Phillips, Robert S.; Lehrnbecher, Thomas; Alexander, Sarah; Sung, Lillian

    2012-01-01

    Introduction Febrile neutropenia is a common and potentially life-threatening complication of treatment for childhood cancer, which has increasingly been subject to targeted treatment based on clinical risk stratification. Our previous meta-analysis demonstrated 16 rules had been described and 2 of them subject to validation in more than one study. We aimed to advance our knowledge of evidence on the discriminatory ability and predictive accuracy of such risk stratification clinical decision rules (CDR) for children and young people with cancer by updating our systematic review. Methods The review was conducted in accordance with Centre for Reviews and Dissemination methods, searching multiple electronic databases, using two independent reviewers, formal critical appraisal with QUADAS and meta-analysis with random effects models where appropriate. It was registered with PROSPERO: CRD42011001685. Results We found 9 new publications describing a further 7 new CDR, and validations of 7 rules. Six CDR have now been subject to testing across more than two data sets. Most validations demonstrated the rule to be less efficient than when initially proposed; geographical differences appeared to be one explanation for this. Conclusion The use of clinical decision rules will require local validation before widespread use. Considerable uncertainty remains over the most effective rule to use in each population, and an ongoing individual-patient-data meta-analysis should develop and test a more reliable CDR to improve stratification and optimise therapy. Despite current challenges, we believe it will be possible to define an internationally effective CDR to harmonise the treatment of children with febrile neutropenia. PMID:22693615

  9. Measuring the statistical validity of summary meta-analysis and meta-regression results for use in clinical practice.

    PubMed

    Willis, Brian H; Riley, Richard D

    2017-09-20

    An important question for clinicians appraising a meta-analysis is: are the findings likely to be valid in their own practice-does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity-where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple ('leave-one-out') cross-validation technique, we demonstrate how we may test meta-analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta-analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta-analysis and a tailored meta-regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within-study variance, between-study variance, study sample size, and the number of studies in the meta-analysis. Finally, we apply Vn to two published meta-analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta-analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
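
    The abstract does not give the derivation of Vn, but the leave-one-out idea it builds on can be sketched generically: pool all studies but one by inverse-variance weighting, then standardize the held-out study's deviation from that pooled estimate. A hypothetical fixed-effect illustration (not the authors' statistic):

```python
import numpy as np

def loo_residuals(effects, variances):
    """Leave-one-out cross-validation for a fixed-effect meta-analysis:
    for each study, pool the remaining studies by inverse-variance
    weighting and standardize the held-out study's deviation."""
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    z = []
    for i in range(len(effects)):
        mask = np.arange(len(effects)) != i
        w = 1.0 / variances[mask]
        pooled = np.sum(w * effects[mask]) / np.sum(w)
        pooled_var = 1.0 / np.sum(w)
        # deviation of the held-out study, scaled by its predictive spread
        z.append((effects[i] - pooled) / np.sqrt(variances[i] + pooled_var))
    return np.array(z)

# Homogeneous studies yield small standardized residuals
print(np.round(loo_residuals([0.5, 0.52, 0.48, 0.5], [0.01] * 4), 2))
```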

  10. LC-MS/MS determination of 2-(4-((2-(2S,5R)-2-Cyano-5-ethynyl-1-pyrrolidinyl)-2-oxoethylamino)-4-methyl-1-piperidinyl)-4-pyridinecarboxylic acid (ABT-279) in dog plasma with high-throughput protein precipitation sample preparation.

    PubMed

    Kim, Joseph; Flick, Jeanette; Reimer, Michael T; Rodila, Ramona; Wang, Perry G; Zhang, Jun; Ji, Qin C; El-Shourbagy, Tawakol A

    2007-11-01

    As an effective DPP-IV inhibitor, 2-(4-((2-(2S,5R)-2-Cyano-5-ethynyl-1-pyrrolidinyl)-2-oxoethylamino)-4-methyl-1-piperidinyl)-4-pyridinecarboxylic acid (ABT-279), is an investigational drug candidate under development at Abbott Laboratories for potential treatment of type 2 diabetes. In order to support the development of ABT-279, multiple analytical methods for an accurate, precise and selective concentration determination of ABT-279 in different matrices were developed and validated in accordance with the US Food and Drug Administration Guidance on Bioanalytical Method Validation. The analytical method for ABT-279 in dog plasma was validated in parallel to other validations for ABT-279 determination in different matrices. In order to shorten the sample preparation time and increase method precision, an automated multi-channel liquid handler was used to perform high-throughput protein precipitation and all other liquid transfers. The separation was performed through a Waters YMC ODS-AQ column (2.0 x 150 mm, 5 microm, 120 Å) with a mobile phase of 20 mM ammonium acetate in 20% acetonitrile at a flow rate of 0.3 mL/min. Data collection started at 2.2 min and continued for 2.0 min. The validated linear dynamic range in dog plasma was between 3.05 and 2033.64 ng/mL using a 50 microL sample volume. The achieved r(2) coefficient of determination from three consecutive runs was between 0.998625 and 0.999085. The mean bias was between -4.1 and 4.3% for all calibration standards including lower limit of quantitation. The mean bias was between -8.0 and 0.4% for the quality control samples. The precision, expressed as a coefficient of variation (CV), was < or =4.1% for all levels of quality control samples. The validation results demonstrated that the high-throughput method was accurate, precise and selective for the determination of ABT-279 in dog plasma. The validated method was also employed to support two toxicology studies.
The passing rate was 100% for all 49 runs from one validation study and two toxicology studies. Copyright (c) 2007 John Wiley & Sons, Ltd.
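
    The bias figures quoted above come from back-calculating each calibration standard against the fitted line. A schematic illustration with invented numbers (the response function and concentrations here are assumed for the sketch, not taken from the paper):

```python
import numpy as np

# Hypothetical calibration standards (ng/mL) spanning the validated range
nominal = np.array([3.05, 10.0, 50.0, 250.0, 1000.0, 2033.64])
response = 0.002 * nominal + 0.001            # assumed linear detector response

slope, intercept = np.polyfit(nominal, response, 1)
back_calc = (response - intercept) / slope    # back-calculated concentrations

# %bias of each back-calculated standard versus its nominal concentration
bias_pct = 100.0 * (back_calc - nominal) / nominal
print(np.round(bias_pct, 1))                  # near-zero bias for a perfect fit
```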

  11. Electron Beam-Cure Polymer Matrix Composites: Processing and Properties

    NASA Technical Reports Server (NTRS)

    Wrenn, G.; Frame, B.; Jensen, B.; Nettles, A.

    2001-01-01

    Researchers from NASA and Oak Ridge National Laboratory are evaluating a series of electron beam curable composites for application in reusable launch vehicle airframe and propulsion systems. Objectives are to develop electron beam curable composites that are useful at cryogenic to elevated temperatures (-217 C to 200 C), validate key mechanical properties of these composites, and demonstrate cost-saving fabrication methods at the subcomponent level. Electron beam curing of polymer matrix composites is an enabling capability for production of aerospace structures in a non-autoclave process. Payoffs of this technology will be fabrication of composite structures at room temperature, reduced tooling cost and cure time, and improvements in component durability. This presentation covers the results of material property evaluations for electron beam-cured composites made with either unidirectional tape or woven fabric architectures. Resin systems have been evaluated for performance in ambient, cryogenic, and elevated temperature conditions. Results for electron beam composites and similar composites cured in conventional processes are reviewed for comparison. Fabrication demonstrations were also performed for electron beam-cured composite airframe and propulsion piping subcomponents. These parts have been built to validate manufacturing methods with electron beam composite materials, to evaluate electron beam curing processing parameters, and to demonstrate lightweight, low-cost tooling options.

  12. A Huygens immersed-finite-element particle-in-cell method for modeling plasma-surface interactions with moving interface

    NASA Astrophysics Data System (ADS)

    Cao, Huijun; Cao, Yong; Chu, Yuchuan; He, Xiaoming; Lin, Tao

    2018-06-01

    Surface evolution is an unavoidable issue in engineering plasma applications. In this article an iterative method for modeling plasma-surface interactions with moving interface is proposed and validated. In this method, the plasma dynamics is simulated by an immersed finite element particle-in-cell (IFE-PIC) method, and the surface evolution is modeled by the Huygens wavelet method which is coupled with the iteration of the IFE-PIC method. Numerical experiments, including prototypical engineering applications, such as the erosion of Hall thruster channel wall, are presented to demonstrate features of this Huygens IFE-PIC method for simulating the dynamic plasma-surface interactions.

  13. Evaluating Mixed Research Studies: A Mixed Methods Approach

    ERIC Educational Resources Information Center

    Leech, Nancy L.; Dellinger, Amy B.; Brannagan, Kim B.; Tanaka, Hideyuki

    2010-01-01

    The purpose of this article is to demonstrate application of a new framework, the validation framework (VF), to assist researchers in evaluating mixed research studies. Based on an earlier work by Dellinger and Leech, a description of the VF is delineated. Using the VF, three studies from education, health care, and counseling fields are…

  14. The Development of Accepted Performance Items to Demonstrate Competence in Literary Braille

    ERIC Educational Resources Information Center

    Lewis, Sandra; D'Andrea, Frances Mary; Rosenblum, L. Penny

    2012-01-01

    Introduction: This research attempted to establish the content validity of several performance statements that are associated with basic knowledge, production, and reading of braille by beginning teachers. Methods: University instructors (n = 21) and new teachers of students with visual impairments (n = 20) who had taught at least 2 braille…

  15. [Closed needle-biopsy in the diagnosis of neoplasms].

    PubMed

    Sforza, M; Perelli Ercolini, M; Beani, G

    1979-04-01

    The authors use this communication to demonstrate the validity of needle biopsy for the diagnosis of neoplasms. They have used it for the breast, thyroid, lung and some other superficial tumefactions. In mass screening for female neoplasms, clinical examination combined with needle biopsy is a very good method for an accurate diagnosis.

  16. Full immersion simulation: validation of a distributed simulation environment for technical and non-technical skills training in Urology.

    PubMed

    Brewin, James; Tang, Jessica; Dasgupta, Prokar; Khan, Muhammad S; Ahmed, Kamran; Bello, Fernando; Kneebone, Roger; Jaye, Peter

    2015-07-01

    To evaluate the face, content and construct validity of the distributed simulation (DS) environment for technical and non-technical skills training in endourology. To evaluate the educational impact of DS for urology training. DS offers a portable, low-cost simulated operating room environment that can be set up in any open space. A prospective mixed methods design using established validation methodology was conducted in this simulated environment with 10 experienced and 10 trainee urologists. All participants performed a simulated prostate resection in the DS environment. Outcome measures included surveys to evaluate the DS, as well as comparative analyses of experienced and trainee urologist's performance using real-time and 'blinded' video analysis and validated performance metrics. Non-parametric statistical methods were used to compare differences between groups. The DS environment demonstrated face, content and construct validity for both non-technical and technical skills. Kirkpatrick level 1 evidence for the educational impact of the DS environment was shown. Further studies are needed to evaluate the effect of simulated operating room training on real operating room performance. This study has shown the validity of the DS environment for non-technical, as well as technical skills training. DS-based simulation appears to be a valuable addition to traditional classroom-based simulation training. © 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.

  17. A Computational Framework for High-Throughput Isotopic Natural Abundance Correction of Omics-Level Ultra-High Resolution FT-MS Datasets

    PubMed Central

    Carreer, William J.; Flight, Robert M.; Moseley, Hunter N. B.

    2013-01-01

    New metabolomics applications of ultra-high resolution and accuracy mass spectrometry can provide thousands of detectable isotopologues, with the number of potentially detectable isotopologues increasing exponentially with the number of stable isotopes used in newer isotope tracing methods like stable isotope-resolved metabolomics (SIRM) experiments. This huge increase in usable data requires software capable of correcting the large number of isotopologue peaks resulting from SIRM experiments in a timely manner. We describe the design of a new algorithm and software system capable of handling these high volumes of data, while including quality control methods for maintaining data quality. We validate this new algorithm against a previous single isotope correction algorithm in a two-step cross-validation. Next, we demonstrate the algorithm and correct for the effects of natural abundance for both 13C and 15N isotopes on a set of raw isotopologue intensities of UDP-N-acetyl-D-glucosamine derived from a 13C/15N-tracing experiment. Finally, we demonstrate the algorithm on a full omics-level dataset. PMID:24404440
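
    Single-isotope natural abundance correction of the kind being cross-validated here is commonly framed as inverting a lower-triangular binomial mixing matrix. A simplified sketch for one tracer isotope (13C only; the multi-isotope algorithm described in the paper generalizes this, and the helper names are ours):

```python
import numpy as np
from math import comb

def correction_matrix(n_atoms: int, p: float = 0.0107) -> np.ndarray:
    """Lower-triangular matrix M with observed = M @ corrected, where
    M[j + k, j] is the probability that a species with j labeled carbons
    picks up k extra heavy isotopes from natural 13C abundance p acting
    on its remaining n_atoms - j positions."""
    M = np.zeros((n_atoms + 1, n_atoms + 1))
    for j in range(n_atoms + 1):
        for k in range(n_atoms - j + 1):
            M[j + k, j] = comb(n_atoms - j, k) * p**k * (1 - p)**(n_atoms - j - k)
    return M

n = 6                                          # e.g. a 6-carbon metabolite
true = np.array([0.7, 0, 0, 0.3, 0, 0, 0.0])   # corrected isotopologue fractions
observed = correction_matrix(n) @ true         # what the spectrometer would see
recovered = np.linalg.solve(correction_matrix(n), observed)
print(np.allclose(recovered, true))            # True
```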

  18. Comparison between Inbreeding Analyses Methodologies.

    PubMed

    Esparza, Mireia; Martínez-Abadías, Neus; Sjøvold, Torstein; González-José, Rolando; Hernández, Miquel

    2015-12-01

    Surnames are widely used in inbreeding analysis, but the validity of the results has often been questioned because studies fail to comply with the prerequisites of the method. Here we analyze inbreeding in Hallstatt (Austria) between the 17th and the 19th centuries using both genealogies and surnames. The high and significant correlation between the results obtained by the two methods demonstrates the validity of using surnames in this kind of study. On the other hand, the inbreeding values obtained (0.24 x 10⁻³ in the genealogical analysis and 2.66 x 10⁻³ in the surname analysis) are lower than those observed in Europe for this period and for this kind of population, showing that Hallstatt's population was not as isolated as it appeared. The temporal trend of inbreeding in both analyses does not follow the general European pattern, but shows a maximum in 1850 with a later decrease over the second half of the 19th century. This is probably due to the high migration rate implied by the construction of transport infrastructure around the 1870s.
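
    The surname method referenced here is usually the Crow-Mange isonymy estimator, in which the random component of inbreeding is one quarter of the random isonymy. A toy sketch (surnames invented for illustration; this may not match the exact estimator the authors used):

```python
from collections import Counter

def random_inbreeding(husband_surnames, wife_surnames) -> float:
    """Crow-Mange style estimate: random isonymy I = sum_i p_i * q_i over
    surname relative frequencies among husbands (p) and wives (q),
    with the random inbreeding coefficient F_r = I / 4."""
    n_h, n_w = len(husband_surnames), len(wife_surnames)
    p = Counter(husband_surnames)
    q = Counter(wife_surnames)
    isonymy = sum((p[s] / n_h) * (q[s] / n_w) for s in set(p) & set(q))
    return isonymy / 4.0

# One surname shared at frequency 0.5 in both sexes -> F_r = 0.5*0.5/4 = 0.0625
husbands = ["Mayr", "Mayr", "Huber", "Steiner"]
wives = ["Mayr", "Mayr", "Gruber", "Wallner"]
print(random_inbreeding(husbands, wives))  # 0.0625
```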

  19. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses--Criticality (keff) Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scaglione, John M; Mueller, Don; Wagner, John C

    2011-01-01

    One of the most significant remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation - in particular, the availability and use of applicable measured data to support validation, especially for fission products. Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of clear technical basis or approach for use of the data. U.S. Nuclear Regulatory Commission (NRC) staff have noted that the rationale for restricting their Interim Staff Guidance on burnup credit (ISG-8) to actinide-only is based largely on the lack of clear, definitive experiments that can be used to estimate the bias and uncertainty for computational analyses associated with using burnup credit. To address the issue of validation, the NRC initiated a project with the Oak Ridge National Laboratory to (1) develop and establish a technically sound validation approach (both depletion and criticality) for commercial spent nuclear fuel (SNF) criticality safety evaluations based on best-available data and methods and (2) apply the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The purpose of this paper is to describe the criticality (k{sub eff}) validation approach, and resulting observations and recommendations. Validation of the isotopic composition (depletion) calculations is addressed in a companion paper at this conference.
For criticality validation, the approach is to utilize (1) available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion (HTC) program to support validation of the principal actinides and (2) calculated sensitivities, nuclear data uncertainties, and the limited available fission product LCE data to predict and verify individual biases for relevant minor actinides and fission products. This paper (1) provides a detailed description of the approach and its technical bases, (2) describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models to demonstrate its usage and applicability, (3) provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data, and (4) provides recommendations for application of the results and methods to other code and data packages.

  20. Quality measures in applications of image restoration.

    PubMed

    Kriete, A; Naim, M; Schafer, L

    2001-01-01

    We describe a new method for the estimation of image quality in image restoration applications. We demonstrate this technique on a simulated data set of fluorescent beads, in comparison with restoration by three different deconvolution methods. Both the number of iterations and a regularisation factor are varied to enforce changes in the resulting image quality. First, the data sets are directly compared by an accuracy measure. These values serve to validate the image quality descriptor, which is developed on the basis of optical information theory. This most general measure takes into account the spectral energies and the noise, weighted in a logarithmic fashion. It is demonstrated that this method is particularly helpful as a user-oriented method to control the output of iterative image restorations and to eliminate the guesswork in choosing a suitable number of iterations.
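
    The abstract characterizes the quality descriptor only loosely (spectral energies and noise, weighted logarithmically), so the following is a guess at its general shape, in the spirit of a Shannon channel-capacity sum over spatial frequencies rather than the authors' exact formula:

```python
import numpy as np

def spectral_quality(image: np.ndarray, noise_power: float) -> float:
    """Illustrative information-theoretic quality score (assumed form):
    sum over spatial frequencies of log2(1 + signal power / noise power)."""
    spectrum = np.abs(np.fft.fft2(image)) ** 2 / image.size
    return float(np.sum(np.log2(1.0 + spectrum / noise_power)))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
# Doubling the image quadruples the power at every frequency,
# so this monotone measure must increase.
print(spectral_quality(2 * img, 1.0) > spectral_quality(img, 1.0))  # True
```

    A score of this shape rises with restored spectral energy but saturates logarithmically, which matches the described use: watching the measure over iterations to decide when further deconvolution stops paying off.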

  1. Ecological Validity and Clinical Utility of Patient-Reported Outcomes Measurement Information System (PROMIS®) instruments for detecting premenstrual symptoms of depression, anger, and fatigue

    PubMed Central

    Junghaenel, Doerte U.; Schneider, Stefan; Stone, Arthur A.; Christodoulou, Christopher; Broderick, Joan E.

    2014-01-01

    Objective This study examined the ecological validity and clinical utility of NIH Patient Reported-Outcomes Measurement Information System (PROMIS®) instruments for anger, depression, and fatigue in women with premenstrual symptoms. Methods One-hundred women completed daily diaries and weekly PROMIS assessments over 4 weeks. Weekly assessments were administered through Computerized Adaptive Testing (CAT). Weekly CATs and corresponding daily scores were compared to evaluate ecological validity. To test clinical utility, we examined if CATs could detect changes in symptom levels, if these changes mirrored those obtained from daily scores, and if CATs could identify clinically meaningful premenstrual symptom change. Results PROMIS CAT scores were higher in the pre-menstrual than the baseline (ps < .0001) and post-menstrual (ps < .0001) weeks. The correlations between CATs and aggregated daily scores ranged from .73 to .88 supporting ecological validity. Mean CAT scores showed systematic changes in accordance with the menstrual cycle and the magnitudes of the changes were similar to those obtained from the daily scores. Finally, Receiver Operating Characteristic (ROC) analyses demonstrated the ability of the CATs to discriminate between women with and without clinically meaningful premenstrual symptom change. Conclusions PROMIS CAT instruments for anger, depression, and fatigue demonstrated validity and utility in premenstrual symptom assessment. The results provide encouraging initial evidence of the utility of PROMIS instruments for the measurement of affective premenstrual symptoms. PMID:24630180

  2. Developing and validating the Youth Conduct Problems Scale-Rwanda: a mixed methods approach.

    PubMed

    Ng, Lauren C; Kanyanganzi, Frederick; Munyanah, Morris; Mushashi, Christine; Betancourt, Theresa S

    2014-01-01

    This study developed and validated the Youth Conduct Problems Scale-Rwanda (YCPS-R). Qualitative free listing (n = 74) and key informant interviews (n = 47) identified local conduct problems, which were compared to existing standardized conduct problem scales and used to develop the YCPS-R. The YCPS-R was cognitive tested by 12 youth and caregiver participants, and assessed for test-retest and inter-rater reliability in a sample of 64 youth. Finally, a purposive sample of 389 youth and their caregivers were enrolled in a validity study. Validity was assessed by comparing YCPS-R scores to conduct disorder, which was diagnosed with the Mini International Neuropsychiatric Interview for Children, and functional impairment scores on the World Health Organization Disability Assessment Schedule Child Version. ROC analyses assessed the YCPS-R's ability to discriminate between youth with and without conduct disorder. Qualitative data identified a local presentation of youth conduct problems that did not match previously standardized measures. Therefore, the YCPS-R was developed solely from local conduct problems. Cognitive testing indicated that the YCPS-R was understandable and required little modification. The YCPS-R demonstrated good reliability, construct, criterion, and discriminant validity, and fair classification accuracy. The YCPS-R is a locally-derived measure of Rwandan youth conduct problems that demonstrated good psychometric properties and could be used for further research.
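
    The ROC-based classification accuracy reported for the YCPS-R reduces to the area under the curve, which can be computed directly from case and control scores via the Mann-Whitney relation. A minimal sketch (scores invented for illustration):

```python
def roc_auc(case_scores, control_scores) -> float:
    """AUC via the Mann-Whitney relation: the probability that a randomly
    chosen case scores higher than a randomly chosen control
    (ties count one half)."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            wins += 1.0 if c > k else (0.5 if c == k else 0.0)
    return wins / (len(case_scores) * len(control_scores))

# Perfect separation gives AUC = 1.0; complete overlap gives 0.5
print(roc_auc([8, 9, 10], [1, 2, 3]))  # 1.0
```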

  3. Development and Validation of a Fatigue Assessment Scale for U.S. Construction Workers

    PubMed Central

    Zhang, Mingzong; Sparer, Emily H.; Murphy, Lauren A.; Dennerlein, Jack T.; Fang, Dongping; Katz, Jeffrey N.; Caban-Martinez, Alberto J.

    2015-01-01

    Objective To develop a fatigue assessment scale and test its reliability and validity for commercial construction workers. Methods Using a two-phased approach, we first identified items for the development of a Fatigue Assessment Scale for Construction Workers (FASCW) through review of existing scales in the scientific literature, key informant interviews (n=11) and focus groups (3 groups with 6 workers each) with construction workers. The second phase included assessment for the reliability, validity and sensitivity of the new scale using a repeated-measures study design with a convenience sample of construction workers (n=144). Results Phase one resulted in a 16-item preliminary scale that after factor analysis yielded a final 10-item scale with two sub-scales (“Lethargy” and “Bodily Ailment”). During phase two, the FASCW and its subscales demonstrated satisfactory internal consistency (alpha coefficients were FASCW (0.91), Lethargy (0.86) and Bodily Ailment (0.84)) and acceptable test-retest reliability (Pearson Correlation Coefficients: 0.59–0.68; Intraclass Correlation Coefficients: 0.74–0.80). Correlation analysis substantiated concurrent and convergent validity. A discriminant analysis demonstrated that the FASCW differentiated between groups with arthritis status and different work hours. Conclusions The 10-item FASCW with good reliability and validity is an effective tool for assessing the severity of fatigue among construction workers. PMID:25603944

  4. Development and validation of an HPLC-DAD method for simultaneous determination of cocaine, benzoic acid, benzoylecgonine and the main adulterants found in products based on cocaine.

    PubMed

    Floriani, Gisele; Gasparetto, João Cleverson; Pontarolo, Roberto; Gonçalves, Alan Guilherme

    2014-02-01

    Here, an HPLC-DAD method was developed and validated for simultaneous determination of cocaine, two cocaine degradation products (benzoylecgonine and benzoic acid), and the main adulterants found in products based on cocaine (caffeine, lidocaine, phenacetin, benzocaine and diltiazem). The new method was developed and validated using an XBridge C18 4.6mm×250mm, 5μm particle size column maintained at 60°C. The mobile phase consisted of a gradient of acetonitrile and ammonium formate 0.05M - pH 3.1, eluted at 1.0mL/min. The volume of injection was 10μL and the DAD detector was set at 274nm. Method validation assays demonstrated suitable sensitivity, selectivity, linearity, precision and accuracy. For selectivity assay, a MS detection system could be directly adapted to the method without the need of any change in the chromatographic conditions. The robustness study indicated that the flow rate, temperature and pH of the mobile phase are critical parameters and should not be changed considering the conditions herein determined. The new method was then successfully applied for determining cocaine, benzoylecgonine, benzoic acid, caffeine, lidocaine, phenacetin, benzocaine and diltiazem in 115 samples, seized in Brazil (2007-2012), which consisted of cocaine paste, cocaine base and salt cocaine samples. This study revealed cocaine contents that ranged from undetectable to 97.2%, with 97 samples presenting at least one of the degradation products or adulterants here evaluated. All of the studied degradation products and adulterants were observed among the seized samples, justifying the application of the method, which can be used as a screening and quantification tool in forensic analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. The PedsQL in pediatric cancer: reliability and validity of the Pediatric Quality of Life Inventory Generic Core Scales, Multidimensional Fatigue Scale, and Cancer Module.

    PubMed

    Varni, James W; Burwinkle, Tasha M; Katz, Ernest R; Meeske, Kathy; Dickinson, Paige

    2002-04-01

    The Pediatric Quality of Life Inventory (PedsQL) is a modular instrument designed to measure health-related quality of life (HRQOL) in children and adolescents ages 2-18 years. The PedsQL 4.0 Generic Core Scales are multidimensional child self-report and parent proxy-report scales developed as the generic core measure to be integrated with the PedsQL disease specific modules. The PedsQL Multidimensional Fatigue Scale was designed to measure fatigue in pediatric patients. The PedsQL 3.0 Cancer Module was designed to measure pediatric cancer specific HRQOL. The PedsQL Generic Core Scales, Multidimensional Fatigue Scale, and Cancer Module were administered to 339 families (220 child self-reports; 337 parent proxy-reports). Internal consistency reliability for the PedsQL Generic Core Total Scale Score (alpha = 0.88 child, 0.93 parent report), Multidimensional Fatigue Total Scale Score (alpha = 0.89 child, 0.92 parent report) and most Cancer Module Scales (average alpha = 0.72 child, 0.87 parent report) demonstrated reliability acceptable for group comparisons. Validity was demonstrated using the known-groups method. The PedsQL distinguished between healthy children and children with cancer as a group, and among children on-treatment versus off-treatment. The validity of the PedsQL Multidimensional Fatigue Scale was further demonstrated through hypothesized intercorrelations with dimensions of generic and cancer specific HRQOL. The results demonstrate the reliability and validity of the PedsQL Generic Core Scales, Multidimensional Fatigue Scale, and Cancer Module in pediatric cancer. The PedsQL may be utilized as an outcome measure in clinical trials, research, and clinical practice. Copyright 2002 American Cancer Society.

  6. A Serious Game for Clinical Assessment of Cognitive Status: Validation Study.

    PubMed

    Tong, Tiffany; Chignell, Mark; Tierney, Mary C; Lee, Jacques

    2016-05-27

    We propose the use of serious games to screen for abnormal cognitive status in situations where it may be too costly or impractical to use standard cognitive assessments (eg, emergency departments). If validated, serious games in health care could enable broader availability of efficient and engaging cognitive screening. The objective of this work is to demonstrate the feasibility of a game-based cognitive assessment delivered on tablet technology to a clinical sample and to conduct preliminary validation against standard mental status tools commonly used in elderly populations. We carried out a feasibility study in a hospital emergency department to evaluate the use of a serious game by elderly adults (N=146; age: mean 80.59, SD 6.00, range 70-94 years). We correlated game performance against a number of standard assessments, including the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and the Confusion Assessment Method (CAM). After a series of modifications, the game could be used by a wide range of elderly patients in the emergency department, demonstrating its feasibility for use with this population. Of 146 patients, 141 (96.6%) consented to participate and played our serious game. Refusals to play the game were typically due to concerns of family members rather than unwillingness of the patient to play the game. Performance on the serious game correlated significantly with the MoCA (r=-.339, P <.001) and MMSE (r=-.558, P <.001), and correlated (point-biserial correlation) with the CAM (r=.565, P <.001) and with other cognitive assessments. This research demonstrates the feasibility of using serious games in a clinical setting. Further research is required to demonstrate the validity and reliability of game-based assessments for clinical decision making.
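    The two correlation types above differ only in the scale of one variable: Pearson's r for two continuous measures, point-biserial r when one measure is binary (as with the CAM screen). A sketch with synthetic, illustrative data (the variable names and the threshold are assumptions, not the study's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical data: longer game completion time tracks lower cognitive score
game_time = rng.normal(60.0, 10.0, size=150)
moca = 24.0 - 0.3 * (game_time - 60.0) + rng.normal(0.0, 2.0, size=150)
# Hypothetical binary screen result (a CAM-like positive/negative flag)
positive_screen = (game_time > 65.0).astype(int)

r_moca, p_moca = stats.pearsonr(game_time, moca)               # continuous-continuous
r_pb, p_pb = stats.pointbiserialr(positive_screen, game_time)  # binary-continuous
```

    As in the study, the continuous assessment correlates negatively with game time, while the binary screen correlates positively.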

  7. Predicting discharge mortality after acute ischemic stroke using balanced data.

    PubMed

    Ho, King Chung; Speier, William; El-Saden, Suzie; Liebeskind, David S; Saver, Jeffery L; Bui, Alex A T; Arnold, Corey W

    2014-01-01

    Several models have been developed to predict stroke outcomes (e.g., stroke mortality, patient dependence) in recent decades. However, there is little discussion regarding the problem of between-class imbalance in stroke datasets, which leads to prediction bias and decreased performance. In this paper, we demonstrate the use of the Synthetic Minority Over-sampling Technique (SMOTE) to overcome such problems. We also compare state-of-the-art machine learning methods and construct a six-variable support vector machine (SVM) model to predict stroke mortality at discharge. Finally, we discuss how the identification of a reduced feature set allowed us to identify additional cases in our research database for validation testing. Our classifier achieved a c-statistic of 0.865 on the cross-validated dataset, demonstrating good classification performance using a reduced set of variables.
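    SMOTE balances classes by interpolating between a minority sample and one of its nearest minority-class neighbours. A from-scratch sketch (not the authors' pipeline; the toy dataset and 9:1 ratio are assumptions) paired with an SVM scored by AUC, the c-statistic reported above:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def smote(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic samples by interpolating minority points."""
    rng = rng or np.random.default_rng(0)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]             # k nearest minority neighbours
    base = rng.integers(0, len(X_min), n_new)     # random seed points
    nb = nn[base, rng.integers(0, k, n_new)]      # one random neighbour each
    gamma = rng.random((n_new, 1))                # interpolation factors in [0, 1)
    return X_min[base] + gamma * (X_min[nb] - X_min[base])

# Imbalanced toy problem (9:1) standing in for the stroke dataset
X, y = make_classification(n_samples=1000, n_features=6, weights=[0.9, 0.1],
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
X_min = Xtr[ytr == 1]
n_new = np.sum(ytr == 0) - np.sum(ytr == 1)       # oversample to a 1:1 ratio
X_syn = smote(X_min, n_new)
Xb = np.vstack([Xtr, X_syn])
yb = np.concatenate([ytr, np.ones(n_new, dtype=int)])

scaler = StandardScaler().fit(Xb)
clf = SVC(random_state=0).fit(scaler.transform(Xb), yb)
auc = roc_auc_score(yte, clf.decision_function(scaler.transform(Xte)))
```

    Note that the synthetic samples are added only to the training split; the test split keeps its natural imbalance so the AUC is an honest estimate.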

  8. A validation framework for brain tumor segmentation.

    PubMed

    Archip, Neculai; Jolesz, Ferenc A; Warfield, Simon K

    2007-10-01

    We introduce a validation framework for the segmentation of brain tumors from magnetic resonance (MR) images. A novel unsupervised semiautomatic brain tumor segmentation algorithm is also presented. The proposed framework consists of 1) T1-weighted MR images of patients with brain tumors, 2) segmentation of brain tumors performed by four independent experts, 3) segmentation of brain tumors generated by a semiautomatic algorithm, and 4) a software tool that estimates the performance of segmentation algorithms. We demonstrate the validation of the novel segmentation algorithm within the proposed framework. We show its performance and compare it with existing segmentation methods. The image datasets and software are available at http://www.brain-tumor-repository.org/. We present an Internet resource that provides access to MR brain tumor image data and segmentation that can be openly used by the research community. Its purpose is to encourage the development and evaluation of segmentation methods by providing raw test and image data, human expert segmentation results, and methods for comparing segmentation results.

  9. Design and landing dynamic analysis of reusable landing leg for a near-space manned capsule

    NASA Astrophysics Data System (ADS)

    Yue, Shuai; Nie, Hong; Zhang, Ming; Wei, Xiaohui; Gan, Shengyong

    2018-06-01

    To improve the landing performance of a near-space manned capsule under various landing conditions, a novel landing system is designed that employs double-chamber and single-chamber dampers in the primary and auxiliary struts, respectively. A dynamic model of the landing system is established, and the damper parameters are determined by employing the design method. A single-leg drop test with different initial pitch angles is then conducted to compare and validate the simulation model. Based on the validated simulation model, seven critical landing conditions regarding nine crucial landing responses are found by combining the radial basis function (RBF) surrogate model and the adaptive simulated annealing (ASA) optimization method. Subsequently, the adaptability of the landing system under critical landing conditions is analyzed. The results show that the simulation results closely match the test results, which validates the accuracy of the dynamic model. In addition, all of the crucial responses under their corresponding critical landing conditions satisfy the design specifications, demonstrating the feasibility of the landing system.

  10. Development and Psychometric Evaluation of the HPV Clinical Trial Survey for Parents (CTSP-HPV) Using Traditional Survey Development Methods and Community Engagement Principles.

    PubMed

    Cunningham, Jennifer; Wallston, Kenneth A; Wilkins, Consuelo H; Hull, Pamela C; Miller, Stephania T

    2015-12-01

    This study describes the development and psychometric evaluation of the HPV Clinical Trial Survey for Parents with Children Aged 9 to 15 (CTSP-HPV) using traditional instrument development methods and community engagement principles. An expert panel and parental input informed survey content, and parents recommended study design changes (e.g., flyer wording). A convenience sample of 256 parents completed the final survey measuring parental willingness to consent to HPV clinical trial (CT) participation and other factors hypothesized to influence willingness (e.g., HPV vaccine benefits). Cronbach's α, Spearman correlations, and multiple linear regression were used to estimate internal consistency, convergent and discriminant validity, and predictive validity, respectively. Internal reliability was confirmed for all scales (α ≥ 0.70). Parental willingness was positively associated (p < 0.05) with trust in medical researchers, adolescent CT knowledge, HPV vaccine benefits, and advantages of adolescent CTs (r range 0.33-0.42), supporting convergent validity. Moderate discriminant construct validity was also demonstrated. Regression results indicate reasonable predictive validity, with the six scales accounting for 31% of the variance in parents' willingness. This instrument can inform interventions based on factors that influence parental willingness, which may lead to the eventual increase in trial participation. Further psychometric testing is warranted. © 2015 Wiley Periodicals, Inc.
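    Cronbach's α, the internal-consistency statistic used above, can be computed directly from a respondents-by-items matrix: α = k/(k−1) · (1 − Σ item variances / variance of total score). A sketch with synthetic data (the loadings and sample size are assumptions), contrasting a coherent scale against unrelated items:

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(0.0, 1.0, size=300)                 # latent construct
# five items all loading on the same trait, plus item-specific noise
consistent = trait[:, None] + 0.5 * rng.normal(size=(300, 5))
unrelated = rng.normal(size=(300, 5))                  # independent items
```

    The coherent scale comes out well above the 0.70 rule of thumb cited in the abstract, while the unrelated items fall near zero.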

  11. Numerical and experimental validation of a particle Galerkin method for metal grinding simulation

    NASA Astrophysics Data System (ADS)

    Wu, C. T.; Bui, Tinh Quoc; Wu, Youcai; Luo, Tzui-Liang; Wang, Morris; Liao, Chien-Chih; Chen, Pei-Yin; Lai, Yu-Sheng

    2018-03-01

    In this paper, a numerical approach with an experimental validation is introduced for modelling high-speed metal grinding processes in 6061-T6 aluminum alloys. The derivation of the present numerical method starts with an establishment of a stabilized particle Galerkin approximation. A non-residual penalty term from strain smoothing is introduced as a means of stabilizing the particle Galerkin method. Additionally, second-order strain gradients are introduced to the penalized functional for the regularization of damage-induced strain localization problem. To handle the severe deformation in metal grinding simulation, an adaptive anisotropic Lagrangian kernel is employed. Finally, the formulation incorporates a bond-based failure criterion to bypass the prospective spurious damage growth issues in material failure and cutting debris simulation. A three-dimensional metal grinding problem is analyzed and compared with the experimental results to demonstrate the effectiveness and accuracy of the proposed numerical approach.

  12. Using in Vitro Evolution and Whole Genome Analysis To Discover Next Generation Targets for Antimalarial Drug Discovery

    PubMed Central

    2018-01-01

    Although many new anti-infectives have been discovered and developed solely using phenotypic cellular screening and assay optimization, most researchers recognize that structure-guided drug design is more practical and less costly. In addition, a greater chemical space can be interrogated with structure-guided drug design. The practicality of structure-guided drug design has launched a search for the targets of compounds discovered in phenotypic screens. One method that has been used extensively in malaria parasites for target discovery and chemical validation is in vitro evolution and whole genome analysis (IVIEWGA). Here, small molecules from phenotypic screens with demonstrated antiparasitic activity are used in genome-based target discovery methods. In this Review, we discuss the newest, most promising druggable targets discovered or further validated by evolution-based methods, as well as some exceptions. PMID:29451780

  13. Validating Coherence Measurements Using Aligned and Unaligned Coherence Functions

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2006-01-01

    This paper describes a novel approach based on the use of coherence functions and statistical theory for sensor validation in a harsh environment. By the use of aligned and unaligned coherence functions and statistical theory, one can test for sensor degradation, total sensor failure or changes in the signal. This advanced diagnostic approach and the novel data processing methodology discussed provide a single number that conveys this information. This number, as calculated with standard statistical procedures for comparing the means of two distributions, is compared with results obtained using Yuen's robust statistical method to create confidence intervals. Examination of experimental data from Kulite pressure transducers mounted in a Pratt & Whitney PW4098 combustor using spectrum analysis methods on aligned and unaligned time histories has verified the effectiveness of the proposed method. All the procedures produce good results, demonstrating the robustness of the technique.
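    The aligned/unaligned contrast can be reproduced with Welch-averaged magnitude-squared coherence: shifting one record by more than a segment length destroys the shared broadband content within each segment pair, so the unaligned coherence collapses to its bias floor while the aligned coherence stays high. A sketch with synthetic signals (not the combustor data; signal-to-noise levels are assumptions):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs, n = 1000, 20000
common = rng.standard_normal(n)                  # shared broadband signal
x = common + 0.5 * rng.standard_normal(n)        # sensor 1: signal + noise
y = common + 0.5 * rng.standard_normal(n)        # sensor 2: signal + noise

nperseg = 1024
_, c_aligned = coherence(x, y, fs=fs, nperseg=nperseg)
# "Unaligned": shift y by several segment lengths so paired segments share nothing
_, c_unaligned = coherence(x, np.roll(y, 4 * nperseg), fs=fs, nperseg=nperseg)
```

    A healthy sensor pair shows a large gap between the two averages; a failed or degraded sensor drives the aligned coherence down toward the unaligned floor.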

  14. An interference-based optical authentication scheme using two phase-only masks with different diffraction distances

    NASA Astrophysics Data System (ADS)

    Lu, Dajiang; He, Wenqi; Liao, Meihua; Peng, Xiang

    2017-02-01

    A new method to eliminate the security risk of the well-known interference-based optical cryptosystem is proposed. In this method, which is suitable for security authentication application, two phase-only masks are separately placed at different distances from the output plane, where a certification image (public image) can be obtained. To further increase the security and flexibility of this authentication system, we employ one more validation image (secret image), which can be observed at another output plane, for confirming the identity of the user. Only if the two correct masks are properly placed at their respective positions can one obtain the two significant images. Besides, even if the legal users exchange their masks (keys), the authentication process will fail and the authentication results will not reveal any information. Numerical simulations are performed to demonstrate the validity and security of the proposed method.

  15. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.

  16. The validity and reliability of Systemic Lupus Erythematosus Quality of Life Questionnaire (L-QoL) in a Turkish population.

    PubMed

    Duruöz, M T; Unal, C; Toprak, C Sanal; Sezer, I; Yilmaz, F; Ulutatar, F; Atagündüz, P; Baklacioglu, H S

    2017-12-01

    Background Systemic lupus erythematosus (SLE) may have a profound impact on quality of life. There is increasing interest in measuring quality of life in lupus patients. The purpose of this study was to investigate the validity and reliability of the SLE Quality of Life Questionnaire (L-QoL) in Turkish SLE patients. Methods Patients with SLE according to the 2012 Systemic Lupus International Collaborating Clinics Classification Criteria were recruited into the study. Demographic data, clinical parameters and disease activity measured with the Systemic Lupus Erythematosus Disease Activity Index-2000 (SLEDAI-2K) were noted. The Nottingham Health Profile and Health Assessment Questionnaire were filled out in addition to the Turkish L-QoL (LQoL-TR). Internal consistency, test-retest reliability, and convergent and discriminant validity were evaluated. Results The mean age of participants was 43.55 ± 14.33 years and the mean disease duration was 89.8 ± 92.1 months. The patients filled out the LQoL-TR in 2.5 min. Strong correlations of the LQoL-TR with all subgroups of the Nottingham Health Profile and with the Health Assessment Questionnaire were established, showing convergent validity. The highest correlations were demonstrated with the emotional reactions (rho = 0.72) and sleep (rho = 0.65) components of the Nottingham Health Profile scale ( p < 0.0001). Its weak, nonsignificant correlations with nonfunctional parameters (age, disease duration, perceived general health, SLEDAI-2K) showed its discriminative properties. The LQoL-TR demonstrated good internal reliability with a Cronbach's α of 0.93 and test-retest reliability with an intraclass correlation coefficient of 0.87. Conclusion The LQoL-TR is a practical and useful tool which demonstrates good validity and reliability.

  17. Evaluation and validation of social and psychological markers in randomised trials of complex interventions in mental health: a methodological research programme.

    PubMed

    Dunn, Graham; Emsley, Richard; Liu, Hanhua; Landau, Sabine; Green, Jonathan; White, Ian; Pickles, Andrew

    2015-11-01

    The development of the capability and capacity to evaluate the outcomes of trials of complex interventions is a key priority of the National Institute for Health Research (NIHR) and the Medical Research Council (MRC). The evaluation of complex treatment programmes for mental illness (e.g. cognitive-behavioural therapy for depression or psychosis) not only is a vital component of this research in its own right but also provides a well-established model for the evaluation of complex interventions in other clinical areas. In the context of efficacy and mechanism evaluation (EME) there is a particular need for robust methods for making valid causal inference in explanatory analyses of the mechanisms of treatment-induced change in clinical outcomes in randomised clinical trials. The key objective was to produce statistical methods to enable trial investigators to make valid causal inferences about the mechanisms of treatment-induced change in these clinical outcomes. The primary objective of this report is to disseminate this methodology, aiming specifically at trial practitioners. The three components of the research were (1) the extension of instrumental variable (IV) methods to latent growth curve models and growth mixture models for repeated-measures data; (2) the development of designs and regression methods for parallel trials; and (3) the evaluation of the sensitivity/robustness of findings to the assumptions necessary for model identifiability. We illustrate our methods with applications from psychological and psychosocial intervention trials, keeping the technical details to a minimum, leaving the reporting of the more theoretical and mathematically demanding results for publication in appropriate specialist journals. We show how to estimate treatment effects and introduce methods for EME. We explain the use of IV methods and principal stratification to evaluate the role of putative treatment effect mediators and therapeutic process measures. 
These results are extended to the analysis of longitudinal data structures. We consider the design of EME trials. We focus on designs to create convincing IVs, bearing in mind assumptions needed to attain model identifiability. A key area of application that has become apparent during this work is the potential role of treatment moderators (predictive markers) in the evaluation of treatment effect mechanisms for personalised therapies (stratified medicine). We consider the role of targeted therapies and multiarm trials and the use of parallel trials to help elucidate the evaluation of mediators working in parallel. In order to demonstrate both efficacy and mechanism, it is necessary to (1) demonstrate a treatment effect on the primary (clinical) outcome, (2) demonstrate a treatment effect on the putative mediator (mechanism) and (3) demonstrate a causal effect from the mediator to the outcome. Appropriate regression models should be applied for (3) or alternative IV procedures, which account for unmeasured confounding, provided that a valid instrument can be identified. Stratified medicine may provide a setting where such instruments can be designed into the trial. This work could be extended by considering improved trial designs, sample size considerations and measurement properties. The project presents independent research funded under the MRC-NIHR Methodology Research Programme (grant reference G0900678).
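    The core IV logic described in this record can be illustrated with a simulated trial: randomisation serves as the instrument for a mediator that shares an unmeasured confounder with the outcome, so the naive regression slope is biased while the Wald/IV ratio recovers the causal effect. All data below are synthetic, with the true effect fixed at 2 by construction; this is a sketch of the estimator, not the report's models:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
z = rng.integers(0, 2, n).astype(float)          # randomised arm = instrument
u = rng.standard_normal(n)                       # unmeasured confounder
d = 0.5 * z + 0.8 * u + rng.standard_normal(n)   # mediator (process measure)
y = 2.0 * d + u + rng.standard_normal(n)         # outcome; true effect of d is 2

naive = np.cov(d, y)[0, 1] / np.var(d, ddof=1)   # confounded OLS slope
wald = np.cov(z, y)[0, 1] / np.cov(z, d)[0, 1]   # IV (Wald) estimator
```

    The IV estimate is valid only under the usual assumptions: the instrument affects the outcome solely through the mediator (exclusion restriction) and is independent of the confounder, which randomisation supplies by design.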

  18. Improved quality control of [18F]fluoromethylcholine.

    PubMed

    Nader, Michael; Reindl, Dietmar; Eichinger, Reinhard; Beheshti, Mohsen; Langsteger, Werner

    2011-11-01

    With respect to the broad application of [(18)F-methyl]fluorocholine (FCH), there is a need for a safe, but also efficient and convenient, way to perform routine quality control of FCH. Therefore, a GC method was developed and validated that allows the simultaneous quantitation of all chemical impurities and residual solvents such as acetonitrile, ethanol, dibromomethane and N,N-dimethylaminoethanol. Analytical GC was performed with a GC-capillary column Optima 1701 (50 m×0.32 mm), and a pre-column deactivated capillary column phenyl-Sil (10 m×0.32 mm) in line with a flame ionization detector (FID) was used. The validation includes the following tests: specificity, range, accuracy, linearity, precision, limit of detection (LOD) and limit of quantitation (LOQ) for all listed substances. The described GC method has been successfully used for the quantitation of the listed chemical impurities. The specificity of the GC separation has been proven by demonstrating that the appearing peaks are completely separated from each other and that a resolution R≥1.5 for the separation of the peaks could be achieved. The specified range confirmed that the analytical procedure provides an acceptable degree of linearity, accuracy and precision. For each substance, a range from 2% to 120% of the specification limit could be demonstrated. The corresponding LOD values were determined and were much lower than the specification limits. An efficient and convenient GC method for the quality control of FCH has been developed and validated which meets all acceptance criteria in terms of linearity, specificity, precision, accuracy, LOD and LOQ. Copyright © 2011 Elsevier Inc. All rights reserved.
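    Linearity, LOD and LOQ in a validation like this are typically derived from a calibration line; a sketch using the standard ICH Q2 residual-based formulas (LOD = 3.3 σ/slope, LOQ = 10 σ/slope) on hypothetical calibration data spanning the 2-120% range mentioned above (the slope and responses are invented for illustration):

```python
import numpy as np

# Hypothetical calibration levels as % of the specification limit
conc = np.array([2.0, 10.0, 25.0, 50.0, 75.0, 100.0, 120.0])
# Hypothetical detector responses: slope 1.8 plus small residuals
resp = 1.8 * conc + 0.5 + np.array([0.2, -0.3, 0.1, -0.2, 0.3, -0.1, 0.1])

slope, intercept = np.polyfit(conc, resp, 1)
pred = slope * conc + intercept
sigma = np.sqrt(np.sum((resp - pred) ** 2) / (len(conc) - 2))  # residual SD
lod = 3.3 * sigma / slope     # ICH Q2 detection limit
loq = 10.0 * sigma / slope    # ICH Q2 quantitation limit
r = np.corrcoef(conc, resp)[0, 1]
```

    A correlation coefficient near 1 supports linearity over the range, and an LOD well below the lowest calibration level is consistent with the abstract's claim that the LODs sit far under the specification limits.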

  19. Convergent validity of actigraphy with polysomnography and parent reports when measuring sleep in children with Down syndrome.

    PubMed

    Esbensen, A J; Hoffman, E K; Stansberry, E; Shaffer, R

    2018-04-01

    There is a need for rigorous measures of sleep in children with Down syndrome as sleep is a substantial problem in this population and there are barriers to obtaining the gold standard polysomnography (PSG). PSG is cost-prohibitive when measuring treatment effects in some clinical trials, and children with Down syndrome may not cooperate with undergoing a PSG. Minimal information is available on the validity of alternative methods of assessing sleep in children with Down syndrome, such as actigraphy and parent ratings. Our study examined the concurrent and convergent validity of different measures of sleep, including PSG, actigraphy and parent reports of sleep among children with Down syndrome. A clinic (n = 27) and a community (n = 47) sample of children with Down syndrome were examined. In clinic, children with Down syndrome wore an actigraph watch during a routine PSG. In the community, children with Down syndrome wore an actigraph watch for a week at home at night as part of a larger study on sleep and behaviour. Their parent completed ratings of the child's sleep during that same week. Actigraph watches demonstrated convergent validity with PSG when measuring a child with Down syndrome's total amount of sleep time, total wake time after sleep onset and sleep period efficiency. In contrast, actigraph watches demonstrated poor correlations with parent reports of sleep, and with PSG when measuring the total time in bed and total wake episodes. Actigraphy, PSG and parent ratings of sleep demonstrated poor concurrent validity with clinical diagnosis of obstructive sleep apnoea. Our current data suggest that actigraph watches demonstrate convergent validity and are sensitive to measuring certain sleep constructs (duration, efficiency) in children with Down syndrome. However, parent reports, such as the Children's Sleep Habits Questionnaire, may be measuring other sleep constructs. 
These findings highlight the importance of selecting measures of sleep related to target concerns. © 2018 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.

  20. Computer simulation of Cerebral Arteriovenous Malformation-validation analysis of hemodynamics parameters.

    PubMed

    Kumar, Y Kiran; Mehta, Shashi Bhushan; Ramachandra, Manjunath

    2017-01-01

    The purpose of this work is to provide validation methods for evaluating the hemodynamic assessment of Cerebral Arteriovenous Malformation (CAVM). This article emphasizes the importance of validating noninvasive measurements for CAVM patients, which are designed using lumped models for complex vessel structure. The validation of the hemodynamic assessment is based on invasive clinical measurements and cross-validation techniques with Philips' proprietary validated software packages Qflow and 2D Perfusion. The modeling results are validated for 30 CAVM patients at 150 vessel locations. Mean flow, diameter, and pressure were compared between modeling results and clinical/cross-validation measurements, using an independent two-tailed Student t test. Exponential regression analysis was used to assess the relationships among blood flow, vessel diameter, and pressure. Univariate analyses of the relationships among vessel diameter, vessel cross-sectional area, AVM volume, AVM pressure, and AVM flow were performed with linear or exponential regression. Modeling results were compared with clinical measurements from vessel locations of cerebral regions, and the model was cross-validated with Philips' proprietary validated software packages Qflow and 2D Perfusion. Our results show that the modeling results closely match the clinical measurements, with only small deviations. In this article, we have validated our modeling results with clinical measurements. A new approach for cross-validation is proposed by demonstrating the accuracy of our results against a validated product in a clinical environment.

  1. A simple mass-conserved level set method for simulation of multiphase flows

    NASA Astrophysics Data System (ADS)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for the mass loss or offset the mass increase. The source or sink term is derived analytically by applying the mass conservation principle with the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it can guarantee overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method is capable of accurately capturing the interface and keeping the mass conservation. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that mass is well conserved by the present method.
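    The paper derives its correction term analytically from the continuity equation. As a simpler illustration of the same conservation idea (a sketch, not the paper's scheme), one can measure the area enclosed by the zero level set and solve for a constant shift of the field that restores the mass lost to numerical drift:

```python
import numpy as np

def enclosed_area(phi, h):
    """Area of the region {phi < 0} on a uniform grid with spacing h."""
    return np.count_nonzero(phi < 0.0) * h * h

n = 400
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 0.5          # level set of a circle, radius 0.5
target = enclosed_area(phi, h)

phi_drifted = phi + 0.02                  # drift shrinks the bubble (mass loss)
lo, hi = -0.1, 0.1                        # bracket for the corrective shift c
for _ in range(50):                       # bisection: area is decreasing in c
    c = 0.5 * (lo + hi)
    if enclosed_area(phi_drifted + c, h) > target:
        lo = c                            # too much area: move interface inward
    else:
        hi = c
corrected = enclosed_area(phi_drifted + 0.5 * (lo + hi), h)
```

    A uniform shift is the crudest possible corrector; the paper's source term instead distributes the compensation through the transport equation itself, so the interface shape is preserved while the global mass balance holds.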

  2. Bioluminescence Tomography–Guided Radiation Therapy for Preclinical Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Bin; Wang, Ken Kang-Hsin, E-mail: kwang27@jhmi.edu; Yu, Jingjing

    Purpose: In preclinical radiation research, it is challenging to localize soft tissue targets based on cone beam computed tomography (CBCT) guidance. As a more effective method to localize soft tissue targets, we developed an online bioluminescence tomography (BLT) system for the small-animal radiation research platform (SARRP). We demonstrated BLT-guided radiation therapy and validated targeting accuracy based on a newly developed reconstruction algorithm. Methods and Materials: The BLT system was designed to dock with the SARRP for image acquisition and to be detached before radiation delivery. A 3-mirror system was devised to reflect the bioluminescence emitted from the subject to a stationary charge-coupled device (CCD) camera. Multispectral BLT and the incomplete variables truncated conjugate gradient method with a permissible region shrinking strategy were used as the optimization scheme to reconstruct bioluminescent source distributions. To validate BLT targeting accuracy, a small cylindrical light source with high CBCT contrast was placed in a phantom and also in the abdomen of a mouse carcass. The center of mass (CoM) of the source was recovered from BLT and used to guide radiation delivery. The accuracy of the BLT-guided targeting was validated with films and compared with the CBCT-guided delivery. In vivo experiments were conducted to demonstrate BLT localization capability for various source geometries. Results: Online BLT was able to recover the CoM of the embedded light source with an average accuracy of 1 mm compared to that with CBCT localization. Differences between BLT- and CBCT-guided irradiation shown on the films were consistent with the source localization revealed in the BLT and CBCT images. In vivo results demonstrated that our BLT system could potentially be applied for multiple targets and tumors.
    Conclusions: The online BLT/CBCT/SARRP system provides an effective solution for soft tissue targeting, particularly for small, nonpalpable, or orthotopic tumor models.

  3. Development and validation of TOF-SIMS and CLSM imaging method for cytotoxicity study of ZnO nanoparticles in HaCaT cells.

    PubMed

    Lee, Pei-Ling; Chen, Bo-Chia; Gollavelli, Ganesh; Shen, Sin-Yu; Yin, Yu-Sheng; Lei, Shiu-Ling; Jhang, Cian-Ling; Lee, Woan-Ruoh; Ling, Yong-Chien

    2014-07-30

    Zinc oxide nanoparticles (ZnO NPs) exhibit novel physiochemical properties and have found increasing use in sunscreen products and cosmetics. Their potential toxicity is of increasing concern due to their close association with human skin. A time-of-flight secondary ion mass spectrometry (TOF-SIMS) and confocal laser scanning microscopy (CLSM) imaging method was developed and validated for rapid and sensitive cytotoxicity study of ZnO NPs using human skin equivalent HaCaT cells as a model system. Assorted material, chemical, and toxicological analysis methods were used to confirm their shape, size, crystalline structure, and aggregation properties as well as their dissolution behavior and effect on HaCaT cell viability in the presence of various concentrations of ZnO NPs in aqueous media. Comparative and correlative analyses of the aforementioned results with TOF-SIMS and CLSM imaging results exhibit reasonable and acceptable outcomes. A marked drop in survival rate was observed with 50 μg/ml ZnO NPs. The CLSM images reveal the absorption and localization of ZnO NPs in cytoplasm and nuclei. The TOF-SIMS images demonstrate elevated levels of intracellular ZnO concentration and an associated Zn concentration-dependent (40)Ca/(39)K ratio, presumably caused by the dissolution behavior of ZnO NPs. Additional validation using stable isotope-labeled (68)ZnO NPs as tracers under the same experimental conditions yields a similar cytotoxicity effect. The imaging results demonstrate a spatially resolved cytotoxicity relationship among intracellular ZnO NPs, the (40)Ca/(39)K ratio, phosphocholine fragments, and glutathione fragments. The trend of change in TOF-SIMS spectra and images of ZnO NP-treated HaCaT cells demonstrates the possible mode of action of ZnO NPs, which involves cell membrane disruption, cytotoxic response, and ROS-mediated apoptosis. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Dose mapping: validation in 4D dosimetry with measurements and application in radiotherapy follow-up evaluation.

    PubMed

    Zhang, Geoffrey G; Huang, Tzung-Chi; Forster, Ken M; Lin, Kang-Ping; Stevens, Craig; Harris, Eleanor; Guerrero, Thomas

    2008-04-01

    The purpose of this paper is to validate a dose mapping program using an optical flow method (OFM), and to demonstrate application of the program in radiotherapy follow-up evaluation. For the purpose of validation, the deformation matrices between four-dimensional (4D) CT data of different simulated respiration phases of a phantom were calculated using OFM. The matrices were then used to map doses of all phases to a single-phase image, and summed in equal time weighting. The calculated dose should closely represent the dose delivered to the moving phantom if the deformation matrices are accurately calculated. The measured point doses agreed with the OFM calculations to better than 2% at isocenters, and dose distributions agreed to better than 1 mm for the 50% isodose line. To demonstrate proof-of-concept for the use of deformable image registration in dose mapping for treatment evaluation, the treatment-planning CT was registered with the post-treatment CT image from the positron emission tomography (PET)/CT, resulting in a deformation matrix. The dose distribution from the treatment plan was then mapped onto the restaging PET/CT using the deformation matrix. Two cases in which patients had thoracic malignancies are presented. Each patient had CT-based treatment planning for radiotherapy and restaging fluorodeoxyglucose (FDG)-PET/CT imaging 4-6 weeks after completion of treatments. Areas of pneumonitis and recurrence were identified radiographically on both PET and CT restaging images. Local dose and standard uptake values for pneumonitis and recurrence were studied as a demonstration of this method. By comparing the deformably mapped dose to measurement, the treatment evaluation method introduced in this manuscript proved to be accurate. It thus provides a more accurate analysis than rigid or linear dose-image registration when used in studying treatment outcome versus dose.
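    Mapping a planned dose through a deformation field amounts to resampling the dose grid at deformed coordinates. A minimal 2-D sketch (not the authors' OFM pipeline) where a uniform 5-voxel shift stands in for a real deformable registration, using scipy.ndimage.map_coordinates for the interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Planned dose on the reference grid: a 2 Gy block
dose = np.zeros((64, 64))
dose[20:40, 20:40] = 2.0

# Deformation field: for each follow-up voxel, the reference coordinate it
# maps back to (here a uniform 5-voxel shift along the first axis)
iy, ix = np.mgrid[0:64, 0:64].astype(float)
src = [iy - 5.0, ix]

# Linearly interpolate the dose at the deformed coordinates
mapped = map_coordinates(dose, src, order=1, mode="nearest")
```

    A real deformation field replaces the constant shift with a per-voxel displacement produced by the registration; the resampling step is unchanged.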

  5. Developing a Validity Argument through Abductive Reasoning with an Empirical Demonstration of the Latent Class Analysis

    ERIC Educational Resources Information Center

    Wu, Amery D.; Stone, Jake E.; Liu, Yan

    2016-01-01

    This article proposes and demonstrates a methodology for test score validation through abductive reasoning. It describes how abductive reasoning can be utilized in support of the claims made about test score validity. This methodology is demonstrated with a real data example of the Canadian English Language Proficiency Index Program…

  6. The time has come for new models in febrile neutropenia: a practical demonstration of the inadequacy of the MASCC score.

    PubMed

    Carmona-Bayonas, A; Jiménez-Fonseca, P; Virizuela Echaburu, J; Sánchez Cánovas, M; Ayala de la Peña, F

    2017-09-01

    Since its publication more than 15 years ago, the MASCC score has been internationally validated any number of times and recommended by most clinical practice guidelines for the management of febrile neutropenia (FN) around the world. We have used an empirical data-supported simulated scenario to demonstrate that, despite everything, the MASCC score is impractical as a basis for decision-making. A detailed analysis of reasons supporting the clinical irrelevance of this model is performed. First, seven of its eight variables are "innocent bystanders" that contribute little to selecting low-risk candidates for ambulatory management. Secondly, the training series was hardly representative of outpatients with solid tumors and low-risk FN. Finally, the simultaneous inclusion of key variables both in the model and in the outcome explains its successful validation in various series of patients. Alternative methods of prognostic classification, such as the Clinical Index of Stable Febrile Neutropenia, have been specifically validated for patients with solid tumors and should replace the MASCC model in situations of clinical uncertainty.

  7. Long-term predictions using natural analogues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewing, R.C.

    1995-09-01

    One of the unique and scientifically most challenging aspects of nuclear waste isolation is the extrapolation of short-term laboratory data (hours to years) to the long time periods (10³ to 10⁵ years) required by regulatory agencies for performance assessment. The direct validation of these extrapolations is not possible, but methods must be developed to demonstrate compliance with government regulations and to satisfy the lay public that there is a demonstrable and reasonable basis for accepting the long-term extrapolations. Natural systems (e.g., "natural analogues") provide perhaps the only means of partial "validation," as well as data that may be used directly in the models that are used in the extrapolation. Natural systems provide data on very large spatial (nm to km) and temporal (10³ to 10⁸ years) scales and in highly complex terranes in which unknown synergisms may affect radionuclide migration. This paper reviews the application (and, most importantly, the limitations) of data from natural analogue systems to the "validation" of performance assessments.

  8. Prediction Interval Development for Wind-Tunnel Balance Check-Loading

    NASA Technical Reports Server (NTRS)

    Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.

    2014-01-01

    Results from the Facility Analysis Verification and Operational Reliability project revealed a critical capability gap in ground-based aeronautics research applications: without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides expected upper and lower bounds on the balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval calculation method and a case study demonstrating its use are provided. Validation of the method is demonstrated for the case study based on the probability of capture of confirmation points.
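
    The prediction interval idea above can be illustrated with the textbook OLS formula ŷ ± t·s·√(1 + x'(XᵀX)⁻¹x). This sketch assumes a simple linear calibration model and a caller-supplied Student-t quantile; it is not the project's exact formulation:

```python
import numpy as np

def prediction_interval(X, y, x_new, t_crit):
    """Two-sided OLS prediction interval at a new check-load point.

    t_crit is the Student-t quantile for the desired confidence level
    and n - p degrees of freedom.
    """
    n_obs = len(X)
    X = np.column_stack([np.ones(n_obs), np.asarray(X, float)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, p = X.shape
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)                 # residual variance
    xv = np.concatenate([[1.0], np.atleast_1d(x_new)])
    half = t_crit * np.sqrt(s2 * (1.0 + xv @ np.linalg.solve(X.T @ X, xv)))
    y_hat = float(xv @ beta)
    return y_hat - half, y_hat + half
```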

  9. Determination of free sulphydryl groups in wheat gluten under the influence of different time and temperature of incubation: method validation.

    PubMed

    Rakita, Slađana; Pojić, Milica; Tomić, Jelena; Torbica, Aleksandra

    2014-05-01

    The aim of the present study was to determine the characteristics of an analytical method for the determination of free sulphydryl (SH) groups in wheat gluten, performed with prior gluten incubation for variable times (45, 90 and 135 min) at variable temperatures (30 and 37°C), in order to determine its fitness-for-purpose. It was observed that increases in temperature and gluten incubation time increased the amount of free SH groups, with more dynamic changes at 37°C. The method characteristics identified as relevant were linearity, limit of detection, limit of quantification, precision (repeatability and reproducibility) and measurement uncertainty, which were checked within the validation protocol, while method performance was monitored by X- and R-control charts. The identified method characteristics demonstrated acceptable fitness-for-purpose when the assay included prior gluten incubation at 30°C. Although the method repeatability at 37°C was acceptable, the corresponding reproducibility did not meet the performance criterion based on the HORRAT value (HORRAT < 2). Copyright © 2013 Elsevier Ltd. All rights reserved.
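
    The HORRAT criterion mentioned above compares the observed reproducibility RSD with the Horwitz prediction PRSD_R = 2·C^(−0.1505), where C is the analyte concentration as a mass fraction; values at or below 2 are conventionally acceptable. A minimal sketch (hypothetical function name):

```python
def horrat(rsd_r_percent, mass_fraction):
    """HORRAT ratio: observed reproducibility RSD (%) over the
    Horwitz-predicted PRSD_R = 2 * C**(-0.1505), C a mass fraction.
    Values <= 2 are conventionally taken as acceptable."""
    predicted = 2.0 * mass_fraction ** (-0.1505)
    return rsd_r_percent / predicted
```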

  10. Fractal propagation method enables realistic optical microscopy simulations in biological tissues

    PubMed Central

    Glaser, Adam K.; Chen, Ye; Liu, Jonathan T.C.

    2017-01-01

    Current simulation methods for light transport in biological media have limited efficiency and realism when applied to three-dimensional microscopic light transport in biological tissues with refractive heterogeneities. We describe here a technique which combines a beam propagation method valid for modeling light transport in media with weak variations in refractive index, with a fractal model of refractive index turbulence. In contrast to standard simulation methods, this fractal propagation method (FPM) is able to accurately and efficiently simulate the diffraction effects of focused beams, as well as the microscopic heterogeneities present in tissue that result in scattering, refractive beam steering, and the aberration of beam foci. We validate the technique and the relationship between the FPM model parameters and conventional optical parameters used to describe tissues, and also demonstrate the method’s flexibility and robustness by examining the steering and distortion of Gaussian and Bessel beams in tissue with comparison to experimental data. We show that the FPM has utility for the accurate investigation and optimization of optical microscopy methods such as light-sheet, confocal, and nonlinear microscopy. PMID:28983499

  11. A computational framework for automation of point defect calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei

    We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.
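
    The correction schemes listed above enter the standard supercell defect formation energy. The sketch below shows the generic textbook expression, not this package's actual API (all names are illustrative):

```python
def defect_formation_energy(e_defect, e_host, dn_mu, q, e_fermi,
                            e_vbm, dV=0.0, e_corr=0.0):
    """Standard supercell defect formation energy:
        Ef = E_def - E_host - sum(n_i * mu_i)
             + q * (E_F + E_VBM + dV) + E_corr
    dn_mu  : list of (n_atoms_added, chemical_potential) pairs.
    dV     : potential-alignment correction.
    e_corr : image-charge and band-filling corrections, i.e. the
             finite-size schemes the framework computes.
    """
    exchange = sum(n * mu for n, mu in dn_mu)
    return e_defect - e_host - exchange + q * (e_fermi + e_vbm + dV) + e_corr
```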

  12. Photogrammetric Technique for Center of Gravity Determination

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Johnson, Thomas H.; Shemwell, Dave; Shreves, Christopher M.

    2012-01-01

    A new measurement technique for determination of the center of gravity (CG) for large scale objects has been demonstrated. The experimental method was conducted as part of an LS-DYNA model validation program for the Max Launch Abort System (MLAS) crew module. The test was conducted on the full scale crew module concept at NASA Langley Research Center. Multi-camera photogrammetry was used to measure the test article in several asymmetric configurations. The objective of these measurements was to provide validation of the CG as computed from the original mechanical design. The methodology, measurement technique, and measurement results are presented.

  13. Development, Verification and Validation of Enclosure Radiation Capabilities in the CHarring Ablator Response (CHAR) Code

    NASA Technical Reports Server (NTRS)

    Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.

    2016-01-01

    With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for radiative heating has become a requirement. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute view factors for radiation problems involving multiple surfaces. Furthermore, verification and validation of the code's radiation capabilities are demonstrated by comparing solutions to analytical results, to other codes, and to radiant test data.

  14. A computational framework for automation of point defect calculations

    DOE PAGES

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei; ...

    2017-01-13

    We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.

  15. Quality assessment of two- and three-dimensional unstructured meshes and validation of an upwind Euler flow solver

    NASA Technical Reports Server (NTRS)

    Woodard, Paul R.; Yang, Henry T. Y.; Batina, John T.

    1992-01-01

    Quality assessment procedures are described for two-dimensional and three-dimensional unstructured meshes. The procedures include measurement of minimum angles, element aspect ratios, stretching, and element skewness. Meshes about the ONERA M6 wing and the Boeing 747 transport configuration are generated using an advancing front method grid generation package of programs. Solutions of Euler's equations for these meshes are obtained at low angle-of-attack, transonic conditions. Results for these cases, obtained as part of a validation study, demonstrate the accuracy of an implicit upwind Euler solution algorithm.

  16. Real-time motion artifacts compensation of ToF sensors data on GPU

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Hoegg, Thomas; Kolb, Andreas

    2013-05-01

    Over the last decade, ToF sensors have attracted many computer vision and graphics researchers. Nevertheless, ToF devices suffer from severe motion artifacts in dynamic scenes as well as low-resolution depth data, which makes a valid correction essential. To counterbalance these effects, a pre-processing approach is introduced that greatly improves range image data on dynamic scenes. We first demonstrate the robustness of our approach using simulated data and then validate the method using real sensor range data. Our GPU-based processing pipeline enhances range data reliability in real time.

  17. Compression of Born ratio for fluorescence molecular tomography/x-ray computed tomography hybrid imaging: methodology and in vivo validation.

    PubMed

    Mohajerani, Pouyan; Ntziachristos, Vasilis

    2013-07-01

    The 360° rotation geometry of the hybrid fluorescence molecular tomography/x-ray computed tomography modality allows for acquisition of very large datasets, which pose numerical limitations on the reconstruction. We propose a compression method that takes advantage of the correlation of the Born-normalized signal among sources in spatially formed clusters to reduce the size of system model. The proposed method has been validated using an ex vivo study and an in vivo study of a nude mouse with a subcutaneous 4T1 tumor, with and without inclusion of a priori anatomical information. Compression rates of up to two orders of magnitude with minimum distortion of reconstruction have been demonstrated, resulting in large reduction in weight matrix size and reconstruction time.

  18. Translation, Cultural Adaptation and Validation of the Simple Shoulder Test to Spanish

    PubMed Central

    Arcuri, Francisco; Barclay, Fernando; Nacul, Ivan

    2015-01-01

    Background: The validation of widely used scales facilitates the comparison across international patient samples. Objective: The objective was to translate, culturally adapt and validate the Simple Shoulder Test into Argentinian Spanish. Methods: The Simple Shoulder Test was translated from English into Argentinian Spanish by two independent translators, translated back into English and evaluated for accuracy by an expert committee to correct possible discrepancies. It was then administered to 50 patients with different shoulder conditions. Psychometric properties were analyzed, including internal consistency, measured with Cronbach's alpha, and test-retest reliability at 15 days, measured with the intraclass correlation coefficient. Results: Internal consistency was good (Cronbach's alpha = 0.808). The test-retest reliability index, as measured by the intraclass correlation coefficient (ICC), was 0.835, evaluated as excellent. Conclusion: The Simple Shoulder Test translation and its cultural adaptation to Argentinian Spanish demonstrated adequate internal reliability and validity, ultimately allowing for its use in the comparison with international patient samples.
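
    The internal-consistency statistic reported above, Cronbach's alpha, can be computed from an item-score matrix as follows (a generic sketch, independent of this study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of sum score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total score
    return k / (k - 1) * (1 - item_var / total_var)
```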

  19. Development and Validation of a Measure of Quality of Life for the Young Elderly in Sri Lanka.

    PubMed

    de Silva, Sudirikku Hennadige Padmal; Jayasuriya, Anura Rohan; Rajapaksa, Lalini Chandika; de Silva, Ambepitiyawaduge Pubudu; Barraclough, Simon

    2016-01-01

    Sri Lanka has one of the fastest aging populations in the world. Measurement of quality of life (QoL) in the elderly needs instruments developed that encompass the sociocultural settings. An instrument was developed to measure QoL in the young elderly in Sri Lanka (QLI-YES), using accepted methods to generate and reduce items. The measure was validated using a community sample. Construct, criterion and predictive validity and reliability were tested. A first-order model of 24 items with 6 domains was found to have good fit indices (CMIN/df = 1.567, RMR = 0.05, CFI = 0.95, and RMSEA = 0.053). Both criterion and predictive validity were demonstrated. Good internal consistency reliability (Cronbach's α = 0.93) was shown. The development of the QLI-YES using a societal perspective relevant to the social and cultural beliefs has resulted in a robust and valid instrument to measure QoL for the young elderly in Sri Lanka. © 2015 APJPH.

  20. A semi-automatic method for left ventricle volume estimate: an in vivo validation study

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.

    2001-01-01

    This study aims to the validation of the left ventricular (LV) volume estimates obtained by processing volumetric data utilizing a segmentation model based on level set technique. The validation has been performed by comparing real-time volumetric echo data (RT3DE) and magnetic resonance (MRI) data. A validation protocol has been defined. The validation protocol was applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects, which underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters as stroke volume (SV) and ejection fraction (EF). Assuming MRI estimates (x) as a reference, an excellent correlation was found with volume measured by utilizing the segmentation procedure (y) (y=0.89x + 13.78, r=0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrated that the segmentation technique is reliably applicable on human hearts in clinical practice.

  1. Reliability and validity of the Korean version of the Short Musculoskeletal Function Assessment questionnaire for patients with musculoskeletal disorder.

    PubMed

    Jung, Kyoung-Sim; Jung, Jin-Hwa; In, Tae-Sung; Cho, Hwi-Young

    2016-09-01

    [Purpose] The purpose of this study was to establish the reliability and validity of the Short Musculoskeletal Function Assessment questionnaire, translated into Korean, for patients with musculoskeletal disorders. [Subjects and Methods] Fifty-five subjects (26 males and 29 females) with musculoskeletal diseases participated in the study. The Short Musculoskeletal Function Assessment questionnaire focuses on a limited range of physical functions and includes a dysfunction index and a bother index. Reliability was determined using the intraclass correlation coefficient, and validity was examined by correlating Short Musculoskeletal Function Assessment scores with the 36-item Short-Form Health Survey (SF-36) score. [Results] The reliability was 0.97 for the dysfunction index and 0.94 for the bother index. Validity was established by comparison with the Korean version of the SF-36. [Conclusion] This study demonstrated that the Korean version of the Short Musculoskeletal Function Assessment questionnaire is a reliable and valid instrument for the assessment of musculoskeletal disorders.

  2. Validation of a French adaptation of the Harvard Trauma Questionnaire among torture survivors from sub-Saharan African countries

    PubMed Central

    de Fouchier, Capucine; Blanchet, Alain; Hopkins, William; Bui, Eric; Ait-Aoudia, Malik; Jehel, Louis

    2012-01-01

    Background To date no validated instrument in the French language exists to screen for posttraumatic stress disorder (PTSD) in survivors of torture and organized violence. Objective The aim of this study is to adapt and validate the Harvard Trauma Questionnaire (HTQ) to this population. Method The adapted version was administered to 52 French-speaking torture survivors, originally from sub-Saharan African countries, receiving psychological treatment in specialized treatment centers. A structured clinical interview for DSM was also conducted in order to assess if they met criteria for PTSD. Results Cronbach's alpha coefficient for the HTQ Part 4 was adequate (0.95). Criterion validity was evaluated using receiver operating characteristic curve analysis that generated good classification accuracy for PTSD (0.83). At the original cut-off score of 2.5, the HTQ demonstrated high sensitivity and specificity (0.87 and 0.73, respectively). Conclusion Results support the reliability and validity of the French version of the HTQ. PMID:23233870
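
    The sensitivity and specificity at the 2.5 cut-off reported above can be computed directly from screening scores and clinical diagnoses. A minimal sketch (hypothetical function name, toy data):

```python
def sens_spec(scores, has_ptsd, cutoff=2.5):
    """Sensitivity and specificity of a symptom-score screen at a cutoff
    (2.5 is the original HTQ threshold), against a clinical diagnosis."""
    tp = sum(s >= cutoff and d for s, d in zip(scores, has_ptsd))
    fn = sum(s < cutoff and d for s, d in zip(scores, has_ptsd))
    tn = sum(s < cutoff and not d for s, d in zip(scores, has_ptsd))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, has_ptsd))
    return tp / (tp + fn), tn / (tn + fp)
```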

  3. Development and Validation of a Measure of Quality of Life for the Young Elderly in Sri Lanka

    PubMed Central

    de Silva, Sudirikku Hennadige Padmal; Jayasuriya, Anura Rohan; Rajapaksa, Lalini Chandika; de Silva, Ambepitiyawaduge Pubudu; Barraclough, Simon

    2016-01-01

    Sri Lanka has one of the fastest aging populations in the world. Measurement of quality of life (QoL) in the elderly needs instruments developed that encompass the sociocultural settings. An instrument was developed to measure QoL in the young elderly in Sri Lanka (QLI-YES), using accepted methods to generate and reduce items. The measure was validated using a community sample. Construct, criterion and predictive validity and reliability were tested. A first-order model of 24 items with 6 domains was found to have good fit indices (CMIN/df = 1.567, RMR = 0.05, CFI = 0.95, and RMSEA = 0.053). Both criterion and predictive validity were demonstrated. Good internal consistency reliability (Cronbach’s α = 0.93) was shown. The development of the QLI-YES using a societal perspective relevant to the social and cultural beliefs has resulted in a robust and valid instrument to measure QoL for the young elderly in Sri Lanka. PMID:26712893

  4. Automated Gait Analysis Through Hues and Areas (AGATHA): a method to characterize the spatiotemporal pattern of rat gait

    PubMed Central

    Kloefkorn, Heidi E.; Pettengill, Travis R.; Turner, Sara M. F.; Streeter, Kristi A.; Gonzalez-Rothi, Elisa J.; Fuller, David D.; Allen, Kyle D.

    2016-01-01

    While rodent gait analysis can quantify the behavioral consequences of disease, significant methodological differences exist between analysis platforms and little validation has been performed to understand or mitigate these sources of variance. By providing the algorithms used to quantify gait, open-source gait analysis software can be validated and used to explore methodological differences. Our group is introducing, for the first time, a fully-automated, open-source method for the characterization of rodent spatiotemporal gait patterns, termed Automated Gait Analysis Through Hues and Areas (AGATHA). This study describes how AGATHA identifies gait events, validates AGATHA relative to manual digitization methods, and utilizes AGATHA to detect gait compensations in orthopaedic and spinal cord injury models. To validate AGATHA against manual digitization, results from videos of rodent gait, recorded at 1000 frames per second (fps), were compared. To assess one common source of variance (the effects of video frame rate), these 1000 fps videos were re-sampled to mimic several lower fps and compared again. While spatial variables were indistinguishable between AGATHA and manual digitization, low video frame rates resulted in temporal errors for both methods. At frame rates over 125 fps, AGATHA achieved a comparable accuracy and precision to manual digitization for all gait variables. Moreover, AGATHA detected unique gait changes in each injury model. These data demonstrate AGATHA is an accurate and precise platform for the analysis of rodent spatiotemporal gait patterns. PMID:27554674

  5. Validation of Diagnostic Groups Based on Health Care Utilization Data Should Adjust for Sampling Strategy.

    PubMed

    Cadieux, Geneviève; Tamblyn, Robyn; Buckeridge, David L; Dendukuri, Nandini

    2017-08-01

    Valid measurement of outcomes such as disease prevalence using health care utilization data is fundamental to the implementation of a "learning health system." Definitions of such outcomes can be complex, based on multiple diagnostic codes. The literature on validating such data demonstrates a lack of awareness of the need for a stratified sampling design and corresponding statistical methods. We propose a method for validating the measurement of diagnostic groups that have: (1) different prevalences of diagnostic codes within the group; and (2) low prevalence. We describe an estimation method whereby: (1) low-prevalence diagnostic codes are oversampled, and the positive predictive value (PPV) of the diagnostic group is estimated as a weighted average of the PPV of each diagnostic code; and (2) claims that fall within a low-prevalence diagnostic group are oversampled relative to claims that are not, and bias-adjusted estimators of sensitivity and specificity are generated. We illustrate our proposed method using an example from population health surveillance in which diagnostic groups are applied to physician claims to identify cases of acute respiratory illness. Failure to account for the prevalence of each diagnostic code within a diagnostic group leads to the underestimation of the PPV, because low-prevalence diagnostic codes are more likely to be false positives. Failure to adjust for oversampling of claims that fall within the low-prevalence diagnostic group relative to those that do not leads to the overestimation of sensitivity and underestimation of specificity.
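
    The first part of the proposed estimator, the PPV of a diagnostic group as a prevalence-weighted average of per-code PPVs, can be sketched as follows (hypothetical inputs; the bias-adjusted sensitivity/specificity estimators are not shown):

```python
def group_ppv(code_stats):
    """Positive predictive value of a diagnostic group as a
    prevalence-weighted average of per-code PPVs.

    code_stats : list of (n_claims_with_code, ppv_of_code) pairs, where
    the counts come from the full claims population and the PPVs from a
    stratified validation sample in which rare codes were oversampled.
    """
    total = sum(n for n, _ in code_stats)
    return sum(n * ppv for n, ppv in code_stats) / total
```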

  6. Automated Gait Analysis Through Hues and Areas (AGATHA): A Method to Characterize the Spatiotemporal Pattern of Rat Gait.

    PubMed

    Kloefkorn, Heidi E; Pettengill, Travis R; Turner, Sara M F; Streeter, Kristi A; Gonzalez-Rothi, Elisa J; Fuller, David D; Allen, Kyle D

    2017-03-01

    While rodent gait analysis can quantify the behavioral consequences of disease, significant methodological differences exist between analysis platforms and little validation has been performed to understand or mitigate these sources of variance. By providing the algorithms used to quantify gait, open-source gait analysis software can be validated and used to explore methodological differences. Our group is introducing, for the first time, a fully-automated, open-source method for the characterization of rodent spatiotemporal gait patterns, termed Automated Gait Analysis Through Hues and Areas (AGATHA). This study describes how AGATHA identifies gait events, validates AGATHA relative to manual digitization methods, and utilizes AGATHA to detect gait compensations in orthopaedic and spinal cord injury models. To validate AGATHA against manual digitization, results from videos of rodent gait, recorded at 1000 frames per second (fps), were compared. To assess one common source of variance (the effects of video frame rate), these 1000 fps videos were re-sampled to mimic several lower fps and compared again. While spatial variables were indistinguishable between AGATHA and manual digitization, low video frame rates resulted in temporal errors for both methods. At frame rates over 125 fps, AGATHA achieved a comparable accuracy and precision to manual digitization for all gait variables. Moreover, AGATHA detected unique gait changes in each injury model. These data demonstrate AGATHA is an accurate and precise platform for the analysis of rodent spatiotemporal gait patterns.

  7. False Positive Probabilities for all Kepler Objects of Interest: 1284 Newly Validated Planets and 428 Likely False Positives

    NASA Astrophysics Data System (ADS)

    Morton, Timothy D.; Bryson, Stephen T.; Coughlin, Jeffrey L.; Rowe, Jason F.; Ravichandran, Ganesh; Petigura, Erik A.; Haas, Michael R.; Batalha, Natalie M.

    2016-05-01

    We present astrophysical false positive probability calculations for every Kepler Object of Interest (KOI)—the first large-scale demonstration of a fully automated transiting planet validation procedure. Out of 7056 KOIs, we determine that 1935 have probabilities <1% of being astrophysical false positives, and thus may be considered validated planets. Of these, 1284 have not yet been validated or confirmed by other methods. In addition, we identify 428 KOIs that are likely to be false positives but have not yet been identified as such, though some of these may be a result of unidentified transit timing variations. A side product of these calculations is a full stellar property posterior sampling for every host star, modeled as single, binary, and triple systems. These calculations use vespa, a publicly available Python package that can easily be applied to any transiting exoplanet candidate.
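
    The probabilistic validation idea, comparing prior-weighted likelihoods of planet and false-positive scenarios, can be sketched as follows. This is a toy illustration of the posterior arithmetic only, not vespa's actual interface:

```python
def false_positive_probability(scenarios, planet_key="pl"):
    """Astrophysical FPP from per-scenario priors and likelihoods
    (hypothetical inputs): posterior P(i) is proportional to
    prior_i * likelihood_i, and FPP = 1 - P(planet).

    scenarios : dict mapping scenario name -> (prior, likelihood),
    e.g. "pl" for planet, "eb" for eclipsing binary.
    """
    weights = {k: p * l for k, (p, l) in scenarios.items()}
    total = sum(weights.values())
    return 1.0 - weights[planet_key] / total
```

    A KOI would count as validated under the paper's criterion when this value falls below 0.01.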

  8. On the validity of the use of a localized approximation for helical beams. I. Formal aspects

    NASA Astrophysics Data System (ADS)

    Gouesbet, Gérard; André Ambrosio, Leonardo

    2018-03-01

    The description of an electromagnetic beam for use in light scattering theories may be carried out by using an expansion over vector spherical wave functions, with expansion coefficients expressed in terms of Beam Shape Coefficients (BSCs). A celebrated method for evaluating these BSCs has been the use of localized approximations (with several existing variants). We recently established that the use of any existing localized approximation is of limited validity in the case of Bessel and Mathieu beams. In the present paper, we issue a warning against the use of any existing localized approximation in the case of helical beams. More specifically, we demonstrate that a procedure used to validate existing localized approximations fails in the case of helical beams. Numerical computations in a companion paper will confirm that existing localized approximations are of limited validity in the case of helical beams.

  9. Measurement properties of existing clinical assessment methods evaluating scapular positioning and function. A systematic review.

    PubMed

    Larsen, Camilla Marie; Juul-Kristensen, Birgit; Lund, Hans; Søgaard, Karen

    2014-10-01

    The aims were to compile a schematic overview of clinical scapular assessment methods and critically appraise the methodological quality of the involved studies. A systematic, computer-assisted literature search using Medline, CINAHL, SportDiscus and EMBASE was performed from inception to October 2013. Reference lists in articles were also screened for publications. From 50 articles, 54 method names were identified and categorized into three groups: (1) Static positioning assessment (n = 19); (2) Semi-dynamic (n = 13); and (3) Dynamic functional assessment (n = 22). Fifteen studies were excluded for evaluation due to no/few clinimetric results, leaving 35 studies for evaluation. Graded according to the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN checklist), the methodological quality in the reliability and validity domains was "fair" (57%) to "poor" (43%), with only one study rated as "good". The reliability domain was most often investigated. Few of the assessment methods in the included studies that had "fair" or "good" measurement property ratings demonstrated acceptable results for both reliability and validity. We found a substantially larger number of clinical scapular assessment methods than previously reported. Using the COSMIN checklist the methodological quality of the included measurement properties in the reliability and validity domains were in general "fair" to "poor". None were examined for all three domains: (1) reliability; (2) validity; and (3) responsiveness. Observational evaluation systems and assessment of scapular upward rotation seem suitably evidence-based for clinical use. Future studies should test and improve the clinimetric properties, and especially diagnostic accuracy and responsiveness, to increase utility for clinical practice.

  10. Development of an objective assessment tool for total laparoscopic hysterectomy: A Delphi method among experts and evaluation on a virtual reality simulator

    PubMed Central

    Knight, Sophie; Aggarwal, Rajesh; Agostini, Aubert; Loundou, Anderson; Berdah, Stéphane

    2018-01-01

    Introduction Total laparoscopic hysterectomy (LH) requires an advanced level of operative skills and training. The aim of this study was to develop an objective scale specific to the assessment of technical skills for LH (H-OSATS) and to demonstrate feasibility of use and validity in a virtual reality setting. Material and methods The scale was developed using a hierarchical task analysis and a panel of international experts. A Delphi method obtained consensus among experts on the relevant steps that should be included in the H-OSATS scale for the assessment of operative performance. Feasibility of use and validity of the scale were evaluated by reviewing video recordings of LH performed on a virtual reality laparoscopic simulator. Three groups of operators of different levels of experience were assessed in a Marseille teaching hospital (10 novices, 8 intermediates and 8 experienced surgeons). Correlations with scores obtained using a recognised generic global rating tool (OSATS) were calculated. Results A total of 76 discrete steps were identified by the hierarchical task analysis. Fourteen experts completed the two rounds of the Delphi questionnaire; 64 steps reached consensus and were integrated into the scale. During the validation process, the median time to rate each video recording was 25 minutes. There was a significant difference between the novice, intermediate and experienced groups for total H-OSATS scores (133, 155.9 and 178.25, respectively; p = 0.002). The H-OSATS scale demonstrated high inter-rater reliability (intraclass correlation coefficient [ICC] = 0.930; p<0.001) and test-retest reliability (ICC = 0.877; p<0.001). High correlations were found between total H-OSATS scores and OSATS scores (rho = 0.928; p<0.001). Conclusion The H-OSATS scale displayed evidence of validity for the assessment of technical performance for LH performed on a virtual reality simulator. The implementation of this scale is expected to facilitate deliberate practice. Next steps should focus on evaluating the validity of the scale in the operating room. PMID:29293635

  11. A novel multi-target regression framework for time-series prediction of drug efficacy.

    PubMed

    Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin

    2017-01-18

    Excavating from small samples is a challenging pharmacokinetic problem, where statistical methods can be applied. Pharmacokinetic data is special due to the small samples of high dimensionality, which makes it difficult to adopt conventional methods to predict the efficacy of traditional Chinese medicine (TCM) prescription. The main purpose of our study is to obtain some knowledge of the correlation in TCM prescription. Here, a novel method named Multi-target Regression Framework to deal with the problem of efficacy prediction is proposed. We employ the correlation between the values of different time sequences and add predictive targets of previous time as features to predict the value of current time. Several experiments are conducted to test the validity of our method and the results of leave-one-out cross-validation clearly manifest the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework demonstrates the best performance, and appears to be more suitable for this task.
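    The chaining idea described in this abstract, feeding earlier time points' targets back in as extra features and evaluating with leave-one-out cross-validation, can be sketched in a few lines. The data, dimensions, and plain least-squares regressor below are invented stand-ins for illustration; they are not the paper's TCM dataset or its SVR setup.

```python
import numpy as np

# Synthetic stand-in data: 12 samples, 3 baseline features, 4 time points.
rng = np.random.default_rng(0)
n_samples, n_features, n_times = 12, 3, 4
X = rng.normal(size=(n_samples, n_features))                   # baseline features
Y = np.cumsum(rng.normal(size=(n_samples, n_times)), axis=1)   # efficacy over time

def fit_predict_chain(X_train, Y_train, x_new):
    """One least-squares model per time point; each model sees the baseline
    features plus all earlier targets (true values for training, the chain's
    own predictions for the new sample)."""
    preds = []
    for t in range(Y_train.shape[1]):
        Xt = np.hstack([X_train, Y_train[:, :t],
                        np.ones((X_train.shape[0], 1))])        # bias column
        w, *_ = np.linalg.lstsq(Xt, Y_train[:, t], rcond=None)
        x_aug = np.concatenate([x_new, preds, [1.0]])
        preds.append(float(x_aug @ w))
    return np.array(preds)

# Leave-one-out cross-validation, the evaluation scheme named in the abstract.
errors = []
for i in range(n_samples):
    mask = np.arange(n_samples) != i
    yhat = fit_predict_chain(X[mask], Y[mask], X[i])
    errors.append(np.mean((yhat - Y[i]) ** 2))
```

    In the paper's framework the base regressor would be swapped for SVR or another learner; the chaining of previous targets into the feature vector is the part being illustrated.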

  12. A novel multi-target regression framework for time-series prediction of drug efficacy

    PubMed Central

    Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin

    2017-01-01

    Excavating from small samples is a challenging pharmacokinetic problem, where statistical methods can be applied. Pharmacokinetic data is special due to the small samples of high dimensionality, which makes it difficult to adopt conventional methods to predict the efficacy of traditional Chinese medicine (TCM) prescription. The main purpose of our study is to obtain some knowledge of the correlation in TCM prescription. Here, a novel method named Multi-target Regression Framework to deal with the problem of efficacy prediction is proposed. We employ the correlation between the values of different time sequences and add predictive targets of previous time as features to predict the value of current time. Several experiments are conducted to test the validity of our method and the results of leave-one-out cross-validation clearly manifest the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework demonstrates the best performance, and appears to be more suitable for this task. PMID:28098186

  13. Autonomous Landmark Calibration Method for Indoor Localization

    PubMed Central

    Kim, Jae-Hoon; Kim, Byoung-Seop

    2017-01-01

    Machine-generated data expansion is a global phenomenon in recent Internet services. The proliferation of mobile communication and smart devices has increased the utilization of machine-generated data significantly. One of the most promising applications of machine-generated data is the estimation of the location of smart devices. The motion sensors integrated into smart devices generate continuous data that can be used to estimate the location of pedestrians in an indoor environment. We focus on the estimation of the accurate location of smart devices by determining the landmarks appropriately for location error calibration. In the motion sensor-based location estimation, the proposed threshold control method determines valid landmarks in real time to avoid the accumulation of errors. A statistical method analyzes the acquired motion sensor data and proposes a valid landmark for every movement of the smart devices. Motion sensor data used in the testbed are collected from the actual measurements taken throughout a commercial building to demonstrate the practical usefulness of the proposed method. PMID:28837071

  14. Rapid and sensitive method for determination of withaferin-A in human plasma by HPLC.

    PubMed

    Patial, Pankaj; Gota, Vikram

    2011-02-01

    To develop and validate a rapid and sensitive high-performance liquid chromatographic method for determination of withaferin-A in human plasma. Withaferin-A, the active molecule of a traditional Indian herb, has demonstrated several biological activities in preclinical models. A validated bioassay is not available for its pharmacokinetic evaluation. The chromatographic system used a reverse-phase C18 column with UV-visible detection at 225 nm. The mobile phase consisted of water and acetonitrile applied in a gradient flow. Withaferin-A was extracted by a simple protein-precipitation technique. The calibration curve was linear in the concentration range of 0.05-1.6 µg/ml. The method has the desired sensitivity to detect the plasma concentration range of withaferin-A that is likely to show biological activity based on in vitro data. This is the first HPLC method described for the estimation of withaferin-A in human plasma, and it could be applied for pharmacokinetic studies.

  15. Reversed phase HPLC for strontium ranelate: Method development and validation applying experimental design.

    PubMed

    Kovács, Béla; Kántor, Lajos Kristóf; Croitoru, Mircea Dumitru; Kelemen, Éva Katalin; Obreja, Mona; Nagy, Előd Ernő; Székely-Szentmiklósi, Blanka; Gyéresi, Árpád

    2018-06-01

    A reverse-phase HPLC (RP-HPLC) method was developed for strontium ranelate using a full factorial, screening experimental design. The analytical procedure was validated according to international guidelines for linearity, selectivity, sensitivity, accuracy and precision. A separate experimental design was used to demonstrate the robustness of the method. Strontium ranelate was eluted at 4.4 minutes and showed no interference with the excipients used in the formulation, at 321 nm. The method is linear in the range of 20-320 μg mL-1 (R2 = 0.99998). Recovery, tested in the range of 40-120 μg mL-1, was found to be 96.1-102.1 %. Intra-day and intermediate precision RSDs ranged from 1.0-1.4 and 1.2-1.4 %, resp. The limit of detection and limit of quantitation were 0.06 and 0.20 μg mL-1, resp. The proposed technique is fast, cost-effective, reliable and reproducible, and is proposed for the routine analysis of strontium ranelate.
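    The linearity and sensitivity figures quoted above (R², LOD, LOQ) follow from standard calibration arithmetic. A minimal sketch, assuming the common ICH Q2 convention LOD = 3.3 σ/S and LOQ = 10 σ/S, where σ is the residual standard deviation of the calibration fit and S its slope; the calibration data below are invented, merely spanning the reported 20-320 μg/mL range.

```python
import numpy as np

# Invented calibration data (concentration in ug/mL vs. detector response);
# not the paper's measurements, just a plausible straight-line data set.
conc = np.array([20, 40, 80, 160, 240, 320], dtype=float)
resp = 1.52 * conc + 0.8 + np.random.default_rng(1).normal(0, 0.5, conc.size)

# Ordinary least-squares fit and coefficient of determination
slope, intercept = np.polyfit(conc, resp, 1)
pred = slope * conc + intercept
ss_res = np.sum((resp - pred) ** 2)
ss_tot = np.sum((resp - resp.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# ICH Q2-style sensitivity estimates from the calibration residuals
sigma = np.sqrt(ss_res / (conc.size - 2))   # residual standard deviation
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"R2={r2:.5f}  LOD={lod:.2f}  LOQ={loq:.2f} ug/mL")
```

    The same residual-based route is one of several LOD/LOQ estimators allowed by the guideline; signal-to-noise or blank-based estimates are alternatives.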

  16. Time-Accurate, Unstructured-Mesh Navier-Stokes Computations with the Space-Time CESE Method

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2006-01-01

    Application of the newly emerged space-time conservation element solution element (CESE) method to compressible Navier-Stokes equations is studied. In contrast to Euler equations solvers, several issues such as boundary conditions, numerical dissipation, and grid stiffness warrant systematic investigations and validations. Non-reflecting boundary conditions applied at the truncated boundary are also investigated from the standpoint of acoustic wave propagation. Validations of the numerical solutions are performed by comparing with exact solutions for steady-state as well as time-accurate viscous flow problems. The test cases cover a broad speed regime for problems ranging from acoustic wave propagation to 3D hypersonic configurations. Model problems pertinent to hypersonic configurations demonstrate the effectiveness of the CESE method in treating flows with shocks, unsteady waves, and separations. Good agreement with exact solutions suggests that the space-time CESE method provides a viable alternative for time-accurate Navier-Stokes calculations of a broad range of problems.

  17. The use of virtual reality and physical tools in the development and validation of ease of entry and exit in passenger vehicles.

    PubMed

    Lawson, Glyn; Herriotts, Paul; Malcolm, Louise; Gabrecht, Katharina; Hermawati, Setia

    2015-05-01

    Ease of entry and exit is important for creating a positive first impression of a car and increasing customer satisfaction. Several methods are used within vehicle development to optimise ease of entry and exit, including CAD reviews, benchmarking and buck trials. However, there is an industry trend towards digital methods to reduce the costs and time associated with developing physical prototypes. This paper reports on a study of entry strategy in three properties (buck, car, CAVE) in which inconsistencies were demonstrated by people entering a vehicle representation in the CAVE. In a second study industry practitioners rated the CAVE as worse than physical methods for identifying entry and exit issues, and having lower perceived validity and reliability. However, the resource issues associated with building bucks were recognised. Recommendations are made for developing the CAVE and for combinations of methods for use at different stages of a vehicle's development. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  18. Meshless collocation methods for the numerical solution of elliptic boundary value problems and the rotational shallow water equations on the sphere

    NASA Astrophysics Data System (ADS)

    Blakely, Christopher D.

    This dissertation has three main goals: (1) to explore the anatomy of meshless collocation approximation methods, which have recently gained attention in the numerical analysis community; (2) to demonstrate numerically why the meshless collocation method should become an attractive alternative to standard finite-element methods, owing to the simplicity of its implementation and its high-order convergence properties; and (3) to propose a meshless collocation method for large-scale computational geophysical fluid dynamics models. We provide numerical verification and validation of the meshless collocation scheme applied to the rotational shallow-water equations on the sphere and demonstrate computationally that the proposed model can compete with existing high-performance methods for approximating the shallow-water equations, such as the SEAM (spectral-element atmospheric model) developed at NCAR. A detailed analysis of the parallel implementation of the model, along with the introduction of parallel algorithmic routines for its high-performance simulation, is given. We analyze the programming and computational aspects of the model using Fortran 90 and the Message Passing Interface (MPI) library, along with software and hardware specifications and performance tests. Details of many aspects of the implementation with regard to performance, optimization, and stabilization are given. To verify the mathematical correctness of the algorithms presented and to validate the performance of the meshless collocation shallow-water model, the thesis concludes with numerical experiments on standardized test cases for the shallow-water equations on the sphere using the proposed method.

  19. Compressed domain ECG biometric with two-lead features

    NASA Astrophysics Data System (ADS)

    Lee, Wan-Jou; Chang, Wen-Whei

    2016-07-01

    This study presents a new method to combine ECG biometrics with data compression within a common JPEG2000 framework. We target the two-lead ECG configuration that is routinely used in long-term heart monitoring. Incorporation of compressed-domain biometric techniques enables faster person identification, as it bypasses full decompression. Experiments on public ECG databases demonstrate the validity of the proposed method for biometric identification with high accuracies on both healthy and diseased subjects.

  20. Analytical properties of some commercially available nitrate reductase enzymes evaluated as replacements for cadmium in automated, semiautomated, and manual colorimetric methods for determination of nitrate plus nitrite in water

    USGS Publications Warehouse

    Patton, Charles J.; Kryskalla, Jennifer R.

    2013-01-01

    A multiyear research effort at the U.S. Geological Survey (USGS) National Water Quality Laboratory (NWQL) evaluated several commercially available nitrate reductase (NaR) enzymes as replacements for toxic cadmium in longstanding automated colorimetric air-segmented continuous-flow analyzer (CFA) methods for determining nitrate plus nitrite (NOx) in water. This research culminated in USGS approved standard- and low-level enzymatic reduction, colorimetric automated discrete analyzer NOx methods that have been in routine operation at the NWQL since October 2011. The enzyme used in these methods (AtNaR2) is a product of recombinant expression of NaR from Arabidopsis thaliana (L.) Heynh. (mouseear cress) in the yeast Pichia pastoris. Because the scope of the validation report for these new automated discrete analyzer methods, published as U.S. Geological Survey Techniques and Methods 5–B8, was limited to performance benchmarks and operational details, extensive foundational research with different enzymes—primarily YNaR1, a product of recombinant expression of NaR from Pichia angusta in the yeast Pichia pastoris—remained unpublished until now. This report documents research and development at the NWQL that was foundational to development and validation of the discrete analyzer methods. It includes: (1) details of instrumentation used to acquire kinetics data for several NaR enzymes in the presence and absence of known or suspected inhibitors in relation to reaction temperature and reaction pH; and (2) validation results—method detection limits, precision and bias estimates, spike recoveries, and interference studies—for standard- and low-level automated colorimetric CFA-YNaR1 reduction NOx methods in relation to corresponding USGS approved CFA cadmium-reduction (CdR) NOx methods. 
The cornerstone of this validation is paired sample statistical and graphical analysis of NOx concentrations from more than 3,800 geographically and seasonally diverse surface-water and groundwater samples that were analyzed in parallel by CFA-CdR and CFA enzyme-reduction methods. Finally, (3) demonstration of a semiautomated batch procedure in which 2-milliliter analyzer cups or disposable spectrophotometer cuvettes serve as reaction vessels for enzymatic reduction of nitrate to nitrite prior to analytical determinations. After the reduction step, analyzer cups are loaded onto CFA, flow injection, or discrete analyzers for simple, rapid, automatic nitrite determinations. In the case of manual determinations, analysts dispense colorimetric reagents into cuvettes containing post-reduction samples, allow time for color to develop, insert cuvettes individually into a spectrophotometer, and record percent transmittance or absorbance in relation to a reagent blank. Data presented here demonstrate equivalent analytical performance of enzymatic reduction NOx methods in these various formats to that of benchmark CFA-CdR NOx methods.

  1. Numerical method of carbon-based material ablation effects on aero-heating for half-sphere

    NASA Astrophysics Data System (ADS)

    Wang, Jiang-Feng; Li, Jia-Wei; Zhao, Fa-Ming; Fan, Xiao-Feng

    2018-05-01

    A numerical method for aerodynamic heating with material thermal ablation effects on a hypersonic half-sphere is presented. A surface material ablation model is provided to analyze the ablation effects on aero-thermal properties and structural heat conduction for the thermal protection system (TPS) of hypersonic vehicles. To demonstrate its capability, applications to the thermal analysis of hypersonic vehicles using carbonaceous ceramic ablators are performed and discussed. The numerical results demonstrate the efficiency and validity of the developed method for analyzing the thermal characteristics of hypersonic aerodynamic heating.

  2. Introduction of Total Variation Regularization into Filtered Backprojection Algorithm

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    In this paper we extend the state-of-the-art filtered backprojection (FBP) method with the concept of Total Variation regularization. We compare the performance of the new algorithm with the most common form of regularization in FBP image reconstruction, apodizing functions. The methods are validated in terms of the cross-correlation coefficient between the reconstructed and the true image of the radioactive tracer distribution, using a standard Derenzo-type phantom. We demonstrate that the proposed approach results in higher cross-correlation values with respect to the standard FBP method.
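    The validation metric named in this abstract, the cross-correlation coefficient between the reconstructed and true tracer images, can be sketched as a zero-lag normalized (Pearson-style) correlation of pixel intensities; that interpretation, and the toy phantom below, are assumptions for illustration only and only loosely mimic a Derenzo pattern.

```python
import numpy as np

def cross_correlation(img_a, img_b):
    """Zero-lag normalized cross-correlation between two images:
    the Pearson correlation of their flattened pixel intensities."""
    a = np.asarray(img_a, float).ravel()
    b = np.asarray(img_b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Toy check: a sparse binary "phantom" vs. a noisy reconstruction of it.
rng = np.random.default_rng(0)
phantom = (rng.random((64, 64)) > 0.9).astype(float)   # invented phantom
recon = phantom + rng.normal(0, 0.2, phantom.shape)    # invented reconstruction
print(round(cross_correlation(phantom, recon), 3))
```

    A perfect reconstruction scores 1.0; heavier regularization or noise lowers the coefficient, which is how the two FBP variants are ranked.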

  3. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    The land surface evapotranspiration plays an important role in the surface energy balance and the water cycle. There have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted the widespread attention of researchers and managers. However, remote sensing technology still has many uncertainties, arising from model mechanisms, model inputs, parameterization schemes, and scaling issues in the regional estimation. Obtaining remotely sensed evapotranspiration (RS_ET) estimates with quantified confidence is necessary but difficult. As a result, it is indispensable to develop validation methods to quantitatively assess the accuracy and error sources of the regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including the accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and is appropriate for validating RS_ET at diverse resolutions and time-scales. An independent RS_ET validation using this method was presented over the Hai River Basin, China in 2002-2009 as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and the validation data, such as the water-balanced evapotranspiration, MODIS evapotranspiration products, precipitation, and land-use types. Validation at the local scale also showed good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared to multi-scale evapotranspiration measurements from the EC and LAS, respectively, using the footprint model over three typical landscapes.
    Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years. Thus, we also pay attention to the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out at the middle reaches of the Heihe River Basin, China in 2012. Flux measurements from an observation matrix composed of 22 EC and 4 LAS were acquired to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including the empirical statistical model, the one-source and two-source models, the Penman-Monteith equation based model, the Priestley-Taylor equation based model, and the complementary relationship based model, were used to perform an intercomparison. All the results from the two cases of RS_ET validation showed that the proposed validation methods are reasonable and feasible.

  4. Simultaneous determination of ten Aconitum alkaloids in rat tissues by UHPLC-MS/MS and its application to a tissue distribution study on the compatibility of Heishunpian and Fritillariae thunbergii Bulbus.

    PubMed

    Yang, Bin; Xu, Yanyan; Wu, Yuanyuan; Wu, Huanyu; Wang, Yuan; Yuan, Lei; Xie, Jiabin; Li, Yubo; Zhang, Yanjun

    2016-10-15

    A rapid, sensitive and selective ultra-high performance liquid chromatography with tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for simultaneous determination of ten Aconitum alkaloids in rat tissues. The tissue samples were prepared by a simple protein-precipitation procedure with acetonitrile containing 0.1% acetic acid and separated on an Agilent XDB C18 column (4.6 mm × 50 mm, 1.8 μm) using gradient elution with a mobile phase consisting of water and acetonitrile (both containing 0.1% formic acid) at a flow rate of 0.3 mL/min. The quantitative determination was performed on an electrospray ionization (ESI) triple quadrupole tandem mass spectrometer using selective reaction monitoring (SRM) in positive ionization mode. The established method was fully validated according to the USA Food and Drug Administration (FDA) bioanalytical method validation guidance, and the results demonstrated that the method was sensitive and selective, with the lowest limit of quantification (LLOQ) at 0.025 ng/mL in rat tissue homogenates. Meanwhile, the linearity, precision, accuracy, extraction recovery, matrix effect and stability were all within the required limits of biological sample analysis. The validated method was then successfully applied to the tissue distribution study on the compatibility of Heishunpian (HSP, the processed product of Aconitum carmichaelii Debx) and Fritillariae thunbergii Bulbus (Zhebeimu, ZBM). The results indicated that the distribution features of monoester diterpenoid aconitines (MDAs), diester diterpenoid aconitines (DDAs) and non-ester alkaloids (NEAs) were inconsistent, and that the compatibility of HSP and ZBM resulted in an increased distribution of DDAs in tissues. Moreover, the results could provide a reliable basis for systematic research on the substance foundation of the compatibility of the herbal pair. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  6. Exact solutions for STO and (3+1)-dimensional KdV-ZK equations using (G‧/G2) -expansion method

    NASA Astrophysics Data System (ADS)

    Bibi, Sadaf; Mohyud-Din, Syed Tauseef; Ullah, Rahmat; Ahmed, Naveed; Khan, Umar

    This article deals with finding exact solutions of nonlinear fractional differential equations (NLFDEs) by applying a relatively new method known as the (G‧/G2)-expansion method. Solutions of the space-time fractional Sharma-Tasso-Olver (STO) equation and the (3+1)-dimensional KdV-Zakharov-Kuznetsov (KdV-ZK) equation of fractional order are computed to demonstrate the validity of this method. The modified Riemann-Liouville fractional derivative, combined with the fractional complex transform, is employed to transform the fractional differential equations into the corresponding ordinary differential equations.

  7. A simplified method for extracting androgens from avian egg yolks

    USGS Publications Warehouse

    Kozlowski, C.P.; Bauman, J.E.; Hahn, D.C.

    2009-01-01

    Female birds deposit significant amounts of steroid hormones into the yolks of their eggs. Studies have demonstrated that these hormones, particularly androgens, affect nestling growth and development. In order to measure androgen concentrations in avian egg yolks, most authors follow the extraction methods outlined by Schwabl (1993. Proc. Nat. Acad. Sci. USA 90:11446-11450). We describe a simplified method for extracting androgens from avian egg yolks. Our method, which has been validated through recovery and linearity experiments, consists of a single ethanol precipitation that produces substantially higher recoveries than those reported by Schwabl.

  8. Validation of a Five Plate Test, the STAR protocol, for the screening of antibiotic residues in muscle from different animal species according to European Decision 2002/657/EC.

    PubMed

    Gaudin, V; Hedou, C; Rault, A; Verdon, E

    2010-07-01

    The STAR protocol is a Five Plate Test (FPT) developed several years ago at the Community Reference Laboratory (CRL) for the screening of antimicrobial residues in milk and muscle. This paper presents the validation of this method according to European Decision 2002/657/EC and to an internal guideline for validation. A validation protocol based on 'simulated tissues' and on a list of 16 representative antimicrobials to be validated was implemented in our laboratory during several months for the STAR protocol. The performance characteristics of the method were determined (specificity, detection capabilities CCbeta, applicability, ruggedness). In conclusion, the STAR protocol is applicable to the broad-spectrum detection of antibiotic residues in muscles of different animal species (pig, cattle, sheep, poultry). The method has good specificity (false-positive rate = 4%). The detection capabilities were determined for 16 antibiotics from different families in relation to their respective maximum residue limit (MRL): beta-lactams (penicillins and cephalosporins < or = MRL), tetracyclines (< or = MRL and < or = 2.5 MRL), macrolides (2 MRL), quinolones (< or = 2 MRL), some sulphonamides (< or = 3 MRL), and trimethoprim (2 MRL). However, the sensitivity of the STAR protocol towards aminoglycosides (> 8 MRL) and florfenicol (< or = 10 MRL) was unsatisfactory (>MRL). The two objectives of this study were met: firstly, to validate the STAR protocol according to European Decision 2002/657/EC, then to demonstrate that the validation guideline developed to implement this decision is applicable to microbiological plate tests even for muscle. The use of simulated tissue appeared a good compromise between spiked discs with antibiotic solutions and incurred tissues. In addition, the choice of a list of representative antibiotics allowed the reduction of the scope of the validation, which was already costly in time and effort.

  9. Validation of the content of the prevention protocol for early sepsis caused by Streptococcus agalactiae in newborns

    PubMed Central

    da Silva, Fabiana Alves; Vidal, Cláudia Fernanda de Lacerda; de Araújo, Ednaldo Cavalcante

    2015-01-01

    Abstract Objective: to validate the content of the prevention protocol for early sepsis caused by Streptococcus agalactiae in newborns. Method: a cross-sectional, descriptive and methodological study with a quantitative approach. The sample was composed of 15 judges: 8 obstetricians and 7 pediatricians. The validation occurred through assessment of the protocol's content by the judges, who received the data-collection instrument - a checklist - containing 7 items that represent the requisites to be met by the protocol. Content validation was achieved by applying the Content Validity Index. Result: in the judging process, all the items representing requirements considered by the protocol obtained concordance within the established level (Content Validity Index > 0.75). Of the 7 items, 6 obtained full concordance (Content Validity Index 1.0) and the feasibility item obtained a Content Validity Index of 0.93. The global assessment of the instrument obtained a Content Validity Index of 0.99. Conclusion: the content validation performed was an efficient tool for adjusting the protocol according to the judgment of experienced professionals, which demonstrates the importance of conducting a prior validation of such instruments. It is expected that this study will serve as an incentive for the adoption of universal screening by other institutions through validated protocols. PMID:26444165
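    The Content Validity Index used in this study is simply the proportion of judges endorsing an item, averaged across items for the scale-level figure. A minimal sketch with invented ratings chosen to mirror the reported values (item CVIs of 1.0 and 0.93 against the 0.75 cutoff); the raw votes are hypothetical.

```python
# Hypothetical judge ratings (1 = judge rated the item relevant, 0 = not),
# 15 judges each, for two representative items; votes are invented to
# reproduce the CVIs reported in the abstract.
ratings = [
    [1] * 15,        # an item with full agreement        -> CVI = 1.00
    [1] * 14 + [0],  # the feasibility item, 14/15 judges -> CVI ~ 0.93
]

def content_validity_index(item_ratings):
    """Item-level CVI: share of judges endorsing the item."""
    return sum(item_ratings) / len(item_ratings)

cvis = [content_validity_index(r) for r in ratings]
scale_cvi = sum(cvis) / len(cvis)        # average across items
valid = all(cvi > 0.75 for cvi in cvis)  # cutoff used in the study
print([round(c, 2) for c in cvis], round(scale_cvi, 2), valid)
```

    With all seven real items included, the same averaging yields the global index of 0.99 reported above.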

  10. Adolescent Substance Treatment Engagement Questionnaire for Incarcerated Teens

    PubMed Central

    Martin, Rosemarie A.; Stein, Lynda A.R.; Clair, Mary; Cancilliere, Mary Kathryn; Hurlbut, Warren; Rohsenow, Damaris J.

    2016-01-01

    Background Treatment engagement is often measured in terms of treatment retention and dropout, resource utilization, and missed appointments. Since persons may regularly attend treatment sessions but not pay close attention, actively participate, or comply with the program, attendance may not reflect the level of effort put into treatment. Teens in correctional settings may feel coerced to attend treatment, making it necessary to develop measures of treatment involvement beyond attendance. This study describes the development and validation of the Adolescent Substance Treatment Engagement Questionnaire (ASTEQ), Teen and Counselor versions. Methods The psychometric properties of the ASTEQ were examined in a sample of incarcerated teens (N = 205) and their counselors. Principal component analysis was conducted on teen and counselor versions of the questionnaire. Results Scales of positive and negative treatment engagement were found, reflecting both overt behaviors (joking around, talking to others) and attitudes (interest in change). Significant correlations with constructs related to treatment attitudes and behaviors, and misbehaviors (including substance use) demonstrate good concurrent and predictive validity. Teen and counselor ratings of engagement produced validity correlations in the medium effect size range. Conclusions These measures comprise a valid and reliable method for measuring treatment engagement for incarcerated teens. PMID:26021405

  11. Distinguishing Vaccinium species by chemical fingerprinting based on NMR spectra, validated with spectra collected in different laboratories.

    PubMed

    Markus, Michelle A; Ferrier, Jonathan; Luchsinger, Sarah M; Yuk, Jimmy; Cuerrier, Alain; Balick, Michael J; Hicks, Joshua M; Killday, K Brian; Kirby, Christopher W; Berrue, Fabrice; Kerr, Russell G; Knagge, Kevin; Gödecke, Tanja; Ramirez, Benjamin E; Lankin, David C; Pauli, Guido F; Burton, Ian; Karakach, Tobias K; Arnason, John T; Colson, Kimberly L

    2014-06-01

    A method was developed to distinguish Vaccinium species based on leaf extracts using nuclear magnetic resonance spectroscopy. Reference spectra were measured on leaf extracts from several species, including lowbush blueberry (Vaccinium angustifolium), oval leaf huckleberry (Vaccinium ovalifolium), and cranberry (Vaccinium macrocarpon). Using principal component analysis, these leaf extracts were resolved in the scores plot. Analysis of variance statistical tests demonstrated that the three groups differ significantly on PC2, establishing that the three species can be distinguished by nuclear magnetic resonance. Soft independent modeling of class analogies models for each species also showed discrimination between species. To demonstrate the robustness of nuclear magnetic resonance spectroscopy for botanical identification, spectra of a sample of lowbush blueberry leaf extract were measured at five different sites, with different field strengths (600 versus 700 MHz), different probe types (cryogenic versus room temperature probes), different sample diameters (1.7 mm versus 5 mm), and different consoles (Avance I versus Avance III). Each laboratory independently demonstrated the linearity of their NMR measurements by acquiring a standard curve for chlorogenic acid (R(2) = 0.9782 to 0.9998). Spectra acquired on different spectrometers at different sites were classified into the expected group for the Vaccinium spp., confirming the utility of the method to distinguish Vaccinium species and demonstrating nuclear magnetic resonance fingerprinting for material validation of a natural health product. Georg Thieme Verlag KG Stuttgart · New York.

  12. Development and Validation of the Survey of Organizational Research Climate (SORC)

    PubMed Central

    Martinson, Brian C.; Thrush, Carol R.; Crain, A. Lauren

    2012-01-01

    Background Development and targeting efforts by academic organizations to effectively promote research integrity can be enhanced if they are able to collect reliable data to benchmark baseline conditions, to assess areas needing improvement, and to subsequently assess the impact of specific initiatives. To date, no standardized and validated tool has existed to serve this need. Methods A web- and mail-based survey was administered in the second half of 2009 to 2,837 randomly selected biomedical and social science faculty and postdoctoral fellows at 40 academic health centers in top-tier research universities in the United States. Measures included the Survey of Organizational Research Climate (SORC) as well as measures of perceptions of organizational justice. Results Exploratory and confirmatory factor analyses yielded seven subscales of organizational research climate, all of which demonstrated acceptable internal consistency (Cronbach’s α ranging from 0.81 to 0.87) and adequate test-retest reliability (Pearson r ranging from 0.72 to 0.83). A broad range of correlations between the seven subscales and five measures of organizational justice (unadjusted regression coefficients ranging from .13 to .95) document both construct and discriminant validity of the instrument. Conclusions The SORC demonstrates good internal (alpha) and external reliability (test-retest) as well as both construct and discriminant validity. PMID:23096775
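Cronbach's α, the internal-consistency statistic reported for the seven SORC subscales, is computed from the item variances and the variance of the summed scale. A minimal sketch, with a hypothetical item-score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Perfectly consistent toy items yield alpha = 1.0
scores = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
alpha = cronbach_alpha(scores)
```

Values in the 0.81 to 0.87 range reported here indicate that the items within each subscale move together closely across respondents.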

  13. Internal validation of two new retrotransposons-based kits (InnoQuant® HY and InnoTyper® 21) at a forensic lab.

    PubMed

    Martins, Cátia; Ferreira, Paulo Miguel; Carvalho, Raquel; Costa, Sandra Cristina; Farinha, Carlos; Azevedo, Luísa; Amorim, António; Oliveira, Manuela

    2018-02-01

    Obtaining a genetic profile from pieces of evidence collected at a crime scene is the primary objective of forensic laboratories. New procedures, methods, kits, software or equipment must be carefully evaluated and validated before their implementation. The constant development of new methodologies for DNA testing leads to a steady process of validation, which consists of demonstrating that the technology is robust, reproducible, and reliable throughout a defined range of conditions. The present work aims to internally validate two new retrotransposon-based kits (InnoQuant® HY and InnoTyper® 21) under the working conditions of the Laboratório de Polícia Científica da Polícia Judiciária (LPC-PJ). For the internal validation of InnoQuant® HY and InnoTyper® 21, sensitivity, repeatability, reproducibility, and mixture tests and a concordance study between these new kits and those currently in use at LPC-PJ (Quantifiler® Duo and GlobalFiler™) were performed. The results obtained for the sensitivity, repeatability, and reproducibility tests demonstrated that both InnoQuant® HY and InnoTyper® 21 are robust, reproducible, and reliable. The results of the concordance studies demonstrate that InnoQuant® HY produced quantification results in nearly 29% more samples than Quantifiler® Duo (indicating that this new kit is more effective with challenging samples), while the differences observed between InnoTyper® 21 and GlobalFiler™ are not significant. Therefore, the utility of InnoTyper® 21 has been proven, especially by the successful amplification of a greater number of complete genetic profiles (27 vs. 21). The results herein presented allowed the internal validation of both InnoQuant® HY and InnoTyper® 21, and their implementation in the LPC-PJ laboratory routine for the treatment of challenging samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Development and Validation of a Monte Carlo Simulation Tool for Multi-Pinhole SPECT

    PubMed Central

    Mok, Greta S. P.; Du, Yong; Wang, Yuchuan; Frey, Eric C.; Tsui, Benjamin M. W.

    2011-01-01

    Purpose In this work, we developed and validated a Monte Carlo simulation (MCS) tool for investigation and evaluation of multi-pinhole (MPH) SPECT imaging. Procedures This tool was based on a combination of the SimSET and MCNP codes. Photon attenuation and scatter in the object, as well as penetration and scatter through the collimator detector, are modeled in this tool. It allows accurate and efficient simulation of MPH SPECT with focused pinhole apertures and user-specified photon energy, aperture material, and imaging geometry. The MCS method was validated by comparing the point response function (PRF), detection efficiency (DE), and image profiles obtained from point sources and phantom experiments. A prototype single-pinhole collimator and focused four- and five-pinhole collimators fitted on a small animal imager were used for the experimental validations. We have also compared computational speed among various simulation tools for MPH SPECT, including SimSET-MCNP, MCNP, SimSET-GATE, and GATE for simulating projections of a hot sphere phantom. Results We found good agreement between the MCS and experimental results for PRF, DE, and image profiles, indicating the validity of the simulation method. The relative computational speeds for SimSET-MCNP, MCNP, SimSET-GATE, and GATE are 1: 2.73: 3.54: 7.34, respectively, for 120-view simulations. We also demonstrated the application of this MCS tool in small animal imaging by generating a set of low-noise MPH projection data of a 3D digital mouse whole body phantom. Conclusions The new method is useful for studying MPH collimator designs, data acquisition protocols, image reconstructions, and compensation techniques. It also has great potential to be applied for modeling the collimator-detector response with penetration and scatter effects for MPH in the quantitative reconstruction method. PMID:19779896

  15. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
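The likelihood ratio test used here compares the nested MNL/MDCEV model against the more general semi-nonparametric model in the standard way: twice the log-likelihood gap, referred to a chi-square distribution with degrees of freedom equal to the number of extra parameters. A minimal sketch (the log-likelihood values and degrees of freedom are invented for illustration):

```python
from scipy.stats import chi2

def likelihood_ratio_test(ll_restricted: float, ll_general: float, df: int):
    """LR statistic and p-value for a restricted model nested within a general one."""
    stat = 2.0 * (ll_general - ll_restricted)   # twice the log-likelihood gap
    return stat, chi2.sf(stat, df)              # p-value from the chi-square upper tail

# Hypothetical fit: the general model adds 2 parameters and gains 5 log-likelihood units
stat, p = likelihood_ratio_test(-105.0, -100.0, df=2)
```

A small p-value rejects the restricted model, which in this paper's setting means rejecting the standard Gumbel assumption for that utility function.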

  16. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  17. Development and validation of a food frequency questionnaire for dietary intake assessment among multi-ethnic primary school-aged children

    PubMed Central

    Fatihah, Fadil; Ng, Boon Koon; Hazwanie, Husin; Norimah, A Karim; Shanita, Safii Nik; Ruzita, Abd Talib; Poh, Bee Koon

    2015-01-01

    INTRODUCTION This study aimed to develop and validate a food frequency questionnaire (FFQ) to assess habitual diets of multi-ethnic Malaysian children aged 7–12 years. METHODS A total of 236 primary school children participated in the development of the FFQ and 209 subjects participated in the validation study, with a subsample of 30 subjects participating in the reproducibility study. The FFQ, consisting of 94 food items from 12 food groups, was compared with a three-day dietary record (3DR) as the reference method. The reproducibility of the FFQ was assessed through repeat administration (FFQ2), seven days after the first administration (FFQ1). RESULTS The results of the validation study demonstrated good acceptance of the FFQ. Mean intake of macronutrients in FFQ1 and 3DR correlated well, although the FFQ intake data tended to be higher. Cross-classification of nutrient intake between the two methods showed that < 7% of subjects were grossly misclassified. Moderate correlations noted between the two methods ranged from r = 0.310 (p < 0.001) for fat to r = 0.497 (p < 0.001) for energy. The reproducibility of the FFQ, as assessed by Cronbach’s alpha, ranged from 0.61 (protein) to 0.70 (energy, carbohydrates and fat). Spearman’s correlations between FFQ1 and FFQ2 ranged from rho = 0.333 (p = 0.072) for protein to rho = 0.479 (p < 0.01) for fat. CONCLUSION These findings indicate that the FFQ is valid and reliable for measuring the average intake of energy and macronutrients in a population of multi-ethnic children aged 7–12 years in Malaysia. PMID:26702165

  18. Archeointensity estimates of a tenth-century kiln: first application of the Tsunakawa-Shaw paleointensity method to archeological relics

    NASA Astrophysics Data System (ADS)

    Kitahara, Yu; Yamamoto, Yuhji; Ohno, Masao; Kuwahara, Yoshihiro; Kameda, Shuichi; Hatakeyama, Tadahiro

    2018-05-01

    Paleomagnetic information reconstructed from archeological materials can be utilized to estimate the archeological age of excavated relics, in addition to revealing the geomagnetic secular variation and core dynamics. The direction and intensity of the Earth's magnetic field (archeodirection and archeointensity) can be ascertained using different methods, many of which have been proposed over the past decade. Among the new experimental techniques for archeointensity estimates is the Tsunakawa-Shaw method. This study demonstrates the validity of the Tsunakawa-Shaw method to reconstruct archeointensity from samples of baked clay from archeological relics. The validity of the approach was tested by comparison with the IZZI-Thellier method. The intensity values obtained coincided at the standard deviation (1 σ) level. A total of 8 specimens for the Tsunakawa-Shaw method and 16 specimens for the IZZI-Thellier method, from 8 baked clay blocks collected from the surface of the kiln, were used in these experiments. Among them, 8 specimens (for the Tsunakawa-Shaw method) and 3 specimens (for the IZZI-Thellier method) passed a set of strict selection criteria used in the final evaluation of validity. Additionally, we performed rock magnetic experiments, mineral analysis, and paleodirection measurement to evaluate the suitability of the baked clay samples for paleointensity experiments, and hence confirmed that the sample properties were ideal for performing paleointensity experiments. It is notable that the newly estimated archeomagnetic intensity values are lower than those in previous studies that used other paleointensity methods for the tenth century in Japan.

  19. Anisotropic Multishell Analytical Modeling of an Intervertebral Disk Subjected to Axial Compression.

    PubMed

    Demers, Sébastien; Nadeau, Sylvie; Bouzid, Abdel-Hakim

    2016-04-01

    Studies on intervertebral disk (IVD) response to various loads and postures are essential to understand the disk's mechanical functions and to suggest preventive and corrective actions in the workplace. The experimental and finite-element (FE) approaches are well-suited for these studies, but validating their findings is difficult, partly due to the lack of alternative methods. Analytical modeling could allow methodological triangulation and help validation of FE models. This paper presents an analytical method based on thin-shell, beam-on-elastic-foundation and composite materials theories to evaluate the stresses in the anulus fibrosus (AF) of an axisymmetric disk composed of multiple thin lamellae. Large deformations of the soft tissues are accounted for using an iterative method, and the anisotropic material properties are derived from a published biaxial experiment. The results are compared to those obtained by FE modeling. They demonstrate the capability of the analytical model to evaluate the stresses at any location of the simplified AF, and show that anisotropy reduces stresses in the lamellae. This novel model is a preliminary step in developing valuable analytical models of IVDs and represents a distinctive groundwork able to sustain future refinements.

  20. Tissue distribution study of periplocin and its two metabolites in rats by a validated LC-MS/MS method.

    PubMed

    Liu, Huaming; Zhang, Dandan; Tang, Zhidan; Sun, Mengjie; Azietaku, John Teye; Ouyang, Huizi; Chang, Yanxu; Wang, Meng; He, Jun; Gao, Xiumei

    2018-05-29

    Periplocin is a cardiac glycoside that has been used widely in the clinic for its cardiotonic, anti-inflammatory and anti-tumor effects. Although it is frequently taken orally, no reports have demonstrated that periplocin can be detected in vivo after oral administration, so its in vivo behavior after oral dosing urgently needs investigation. In this study, a sensitive and reliable liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed and validated to identify and quantify periplocin and its two metabolites in rat tissue after a single dose of periplocin at 50 mg/kg. The results demonstrated that periplocin and its two metabolites were detected in all of the selected tissues. Periplocin reached peak concentration quickly after administration, while periplocymarin and periplogenin reached maximum concentration more than 4.83 h after administration. The analytes were distributed mostly to the liver, with high concentrations also found in the heart, spleen, lung and kidney, while only small amounts reached the brain. The results obtained using this method may provide meaningful insight for clinical investigations and applications. This article is protected by copyright. All rights reserved.

  1. Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces.

    PubMed

    Dethier, Julie; Nuyujukian, Paul; Ryu, Stephen I; Shenoy, Krishna V; Boahen, Kwabena

    2013-06-01

    Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system's robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.
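The decoder mapped onto the SNN in this record is a Kalman filter. For orientation, one predict/update cycle of a scalar Kalman filter looks like the sketch below; the noise parameters and values are placeholders, not those of the BMI decoder.

```python
def kalman_step(x, P, z, A=1.0, H=1.0, Q=1e-3, R=1e-2):
    """One predict/update cycle of a scalar Kalman filter.

    x, P: prior state estimate and its variance; z: new measurement.
    A, H, Q, R: state transition, observation, process noise, measurement noise.
    """
    x_pred = A * x                            # predict the state forward
    P_pred = A * P * A + Q                    # propagate the uncertainty
    K = P_pred * H / (H * P_pred * H + R)     # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)     # correct with the measurement
    P_new = (1.0 - K * H) * P_pred            # updated uncertainty
    return x_new, P_new

# With a confident measurement, the estimate moves strongly toward z = 1.0
x, P = kalman_step(0.0, 1.0, 1.0)
```

The NEF's contribution, per the abstract, is to approximate exactly this kind of linear state-update computation with populations of spiking neurons.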

  2. The Validity of the Comparative Interrupted Time Series Design for Evaluating the Effect of School-Level Interventions.

    PubMed

    Jacob, Robin; Somers, Marie-Andree; Zhu, Pei; Bloom, Howard

    2016-06-01

    In this article, we examine whether a well-executed comparative interrupted time series (CITS) design can produce valid inferences about the effectiveness of a school-level intervention. This article also explores the trade-off between bias reduction and precision loss across different methods of selecting comparison groups for the CITS design and assesses whether choosing matched comparison schools based only on preintervention test scores is sufficient to produce internally valid impact estimates. We conduct a validation study of the CITS design based on the federal Reading First program as implemented in one state using results from a regression discontinuity design as a causal benchmark. Our results contribute to the growing base of evidence regarding the validity of nonexperimental designs. We demonstrate that the CITS design can, in our example, produce internally valid estimates of program impacts when multiple years of preintervention outcome data (test scores in the present case) are available and when a set of reasonable criteria are used to select comparison organizations (schools in the present case). © The Author(s) 2016.

  3. Face, content, and construct validity of human placenta as a haptic training tool in neurointerventional surgery.

    PubMed

    Ribeiro de Oliveira, Marcelo Magaldi; Nicolato, Arthur; Santos, Marcilea; Godinho, Joao Victor; Brito, Rafael; Alvarenga, Alexandre; Martins, Ana Luiza Valle; Prosdocimi, André; Trivelato, Felipe Padovani; Sabbagh, Abdulrahman J; Reis, Augusto Barbosa; Maestro, Rolando Del

    2016-05-01

    OBJECT The development of neurointerventional treatments of central nervous system disorders has resulted in the need for adequate training environments for novice interventionalists. Virtual simulators offer anatomical definition but lack adequate tactile feedback. Animal models, which provide more lifelike training, require an appropriate infrastructure base. The authors describe a training model for neurointerventional procedures using the human placenta (HP), which affords haptic training with significantly fewer resource requirements, and discuss its validation. METHODS Twelve HPs were prepared for simulated endovascular procedures. Training exercises performed by interventional neuroradiologists and novice fellows were placental angiography, stent placement, aneurysm coiling, and intravascular liquid embolic agent injection. RESULTS The endovascular training exercises proposed can be easily reproduced in the HP. Face, content, and construct validity were assessed by 6 neurointerventional radiologists and 6 novice fellows in interventional radiology. CONCLUSIONS The use of HP provides an inexpensive training model for the training of neurointerventionalists. Preliminary validation results show that this simulation model has face and content validity and has demonstrated construct validity for the interventions assessed in this study.

  4. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    PubMed

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R 2 ), using R 2 as the primary metric of assay agreement. However, the use of R 2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
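A Bland-Altman comparison reduces to the bias (mean difference between the two assays) and the 95% limits of agreement around it. A minimal sketch, with hypothetical paired assay values:

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two assays."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()                 # constant error between methods
    sd = diff.std(ddof=1)              # spread of the disagreement
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Toy paired measurements with a pure constant offset of -2
m, lo, hi = bland_altman([1.0, 2.0, 3.0, 4.0], [3.0, 4.0, 5.0, 6.0])
```

A nonzero bias with narrow limits signals constant error; limits that widen with the measurement magnitude suggest proportional error, which the expanded regression analyses in this paper quantify.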

  5. Validation of a Scalable Solar Sailcraft

    NASA Technical Reports Server (NTRS)

    Murphy, D. M.

    2006-01-01

    The NASA In-Space Propulsion (ISP) program sponsored intensive solar sail technology and systems design, development, and hardware demonstration activities over the past 3 years. Efforts to validate a scalable solar sail system by functional demonstration in relevant environments, together with test-analysis correlation activities, have recently been completed successfully. A review of the program is presented, with descriptions of the design, results of testing, and analytical model validations of component and assembly functional, strength, stiffness, shape, and dynamic behavior. The scaled performance of the validated system is projected to demonstrate its applicability to flight demonstrations and to important NASA roadmap missions.

  6. Validation of an association rule mining-based method to infer associations between medications and problems.

    PubMed

    Wright, A; McCoy, A; Henkin, S; Flaherty, M; Sittig, D

    2013-01-01

    In a prior study, we developed methods for automatically identifying associations between medications and problems using association rule mining on a large clinical data warehouse, and validated these methods at a single site that used a self-developed electronic health record. The objective of the present study was to demonstrate the generalizability of these methods by validating them at an external site. We received data on medications and problems for 263,597 patients from the University of Texas Health Science Center at Houston Faculty Practice, an ambulatory practice that uses the Allscripts Enterprise commercial electronic health record product. We then conducted association rule mining to identify associated pairs of medications and problems and characterized these associations with five measures of interestingness: support, confidence, chi-square, interest and conviction, and compared the top-ranked pairs to a gold standard. 25,088 medication-problem pairs were identified that exceeded our confidence and support thresholds. An analysis of the top 500 pairs according to each measure of interestingness showed a high degree of accuracy for highly-ranked pairs. The same technique was successfully employed at the University of Texas and accuracy was comparable to our previous results. Top associations included many medications that are highly specific for a particular problem, as well as a large number of common, accurate medication-problem pairs that reflect practice patterns.
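Most of the interestingness measures named in this record can be computed from four co-occurrence counts. A sketch with hypothetical counts (chi-square, which comes from the full 2x2 contingency table, is omitted for brevity):

```python
def rule_metrics(n, n_a, n_b, n_ab):
    """Interestingness of rule A -> B from co-occurrence counts.

    n: total patients; n_a: with medication A; n_b: with problem B; n_ab: with both.
    """
    support = n_ab / n                     # how common the pair is overall
    confidence = n_ab / n_a                # P(B | A)
    p_b = n_b / n
    interest = confidence / p_b            # a.k.a. lift: >1 means positive association
    conviction = (1 - p_b) / (1 - confidence) if confidence < 1 else float("inf")
    return support, confidence, interest, conviction

# Hypothetical counts: 1000 patients, 100 on drug A, 80 with problem B, 70 with both
s, c, i, v = rule_metrics(1000, 100, 80, 70)
```

Ranking candidate medication-problem pairs by each measure and inspecting the top of each list is exactly the comparison-to-gold-standard step described above.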

  7. Impact of pH on the stability and the cross-reactivity of ochratoxin A and citrinin.

    PubMed

    Bazin, Ingrid; Faucet-Marquis, Virginie; Monje, Marie-Carmen; El Khoury, Micheline; Marty, Jean-Louis; Pfohl-Leszkowicz, Annie

    2013-11-28

    Mycotoxins are secondary metabolites produced by several fungi contaminating crops. Several countries have set maximum permitted levels for mycotoxins in foodstuffs and feedstuffs. The common strategy of mycotoxin analysis involves extraction, clean-up and quantification by chromatography. In this paper, we analyzed the reasons for underestimation of ochratoxin A (OTA) content in wine, and overestimation of OTA in wheat, depending on the pH of the clean-up step and the simultaneous presence of citrinin (CIT). We demonstrated that the increase of pH caused by adding polyethylene glycol (PEG) to wine led to an underestimation of OTA through conversion of OTA into open-ring ochratoxin A (OP-OA). In comparing three methods of extraction and clean-up for the determination of OTA and CIT in wheat--(i) an inter-laboratory validated method for OTA in cereals using immunoaffinity column clean-up (IAC) and extraction by acetonitrile/water; (ii) a validated method using IAC and extraction with 1% sodium bicarbonate; and (iii) an in-house validated method based on acid liquid/liquid extraction--we observed an overestimation of OTA after immunoaffinity clean-up when CIT was also present in the sample, whereas an underestimation was observed when OTA was alone. Under neutral and alkaline conditions, CIT was partially recognized by OTA antibodies.

  8. Evaluation of a validated food frequency questionnaire for self-defined vegans in the United States.

    PubMed

    Dyett, Patricia; Rajaram, Sujatha; Haddad, Ella H; Sabate, Joan

    2014-07-08

    This study aimed to develop and validate a de novo food frequency questionnaire for self-defined vegans in the United States. Diet histories from pilot samples of vegans and a modified 'Block Method' using seven selected nutrients of concern in vegan diet patterns, were employed to generate the questionnaire food list. Food frequency responses of 100 vegans from 19 different U.S. states were obtained via completed mailed questionnaires and compared to multiple telephone-conducted diet recall interviews. Computerized diet analyses were performed. Correlation coefficients, t-tests, rank, cross-tabulations, and probability tests were used to validate and compare intake estimates and dietary reference intake (DRI) assessment trends between the two methods. A 369-item vegan-specific questionnaire was developed with 252 listed food frequency items. Calorie-adjusted correlation coefficients ranged from r = 0.374 to 0.600 (p < 0.001) for all analyzed nutrients except calcium. Estimates, ranks, trends and higher-level participant percentile placements for Vitamin B12 were similar with both methods. Questionnaire intakes were higher than recalls for most other nutrients. Both methods demonstrated similar trends in DRI adequacy assessment (e.g., significantly inadequate vitamin D intake among vegans). This vegan-specific questionnaire can be a useful assessment tool for health screening initiatives in U.S. vegan communities.

  9. A Stability-Indicating HPLC-DAD Method for Determination of Stiripentol: Development, Validation, Kinetics, Structure Elucidation and Application to Commercial Dosage Form

    PubMed Central

    Darwish, Hany W.; Abdelhameed, Ali S.; Bakheit, Ahmed H.; Khalil, Nasr Y.; Al-Majed, Abdulrahman A.

    2014-01-01

    A rapid, simple, sensitive, and accurate isocratic reversed-phase stability-indicating high performance liquid chromatography method has been developed and validated for the determination of stiripentol and its degradation product in its bulk form and pharmaceutical dosage form. Chromatographic separation was achieved on a Symmetry C18 column and quantification was achieved using photodiode array detector (DAD). The method was validated in accordance with the ICH requirements showing specificity, linearity (r 2 = 0.9996, range of 1–25 μg/mL), precision (relative standard deviation lower than 2%), accuracy (mean recovery 100.08 ± 1.73), limits of detection and quantitation (LOD = 0.024 and LOQ = 0.081 μg/mL), and robustness. Stiripentol was subjected to various stress conditions and it has shown marked stability under alkaline hydrolytic stress conditions, thermal, oxidative, and photolytic conditions. Stiripentol degraded only under acidic conditions, forming a single degradation product which was well resolved from the pure drug with significantly different retention time values. This degradation product was characterized by 1H-NMR and 13C-NMR spectroscopy as well as ion trap mass spectrometry. The results demonstrated that the method would have a great value when applied in quality control and stability studies for stiripentol. PMID:25371844

  10. Impact of pH on the Stability and the Cross-Reactivity of Ochratoxin A and Citrinin

    PubMed Central

    Bazin, Ingrid; Faucet-Marquis, Virginie; Monje, Marie-Carmen; El Khoury, Micheline; Marty, Jean-Louis; Pfohl-Leszkowicz, Annie

    2013-01-01

    Mycotoxins are secondary metabolites produced by several fungi that contaminate crops. Several countries have set maximum permitted levels for mycotoxins in foodstuffs and feedstuffs. The common strategy of mycotoxin analysis involves extraction, clean-up and quantification by chromatography. In this paper, we analyzed the reasons for the underestimation of ochratoxin A (OTA) content in wine, and the overestimation of OTA in wheat, depending on the pH of the clean-up step and the simultaneous presence of citrinin (CIT). We demonstrated that raising the pH by adding polyethylene glycol (PEG) to wine led to an underestimation of OTA through conversion of OTA into open-ring ochratoxin A (OP-OA). In comparing three methods of extraction and clean-up for the determination of OTA and CIT in wheat—(i) an inter-laboratory validated method for OTA in cereals using immunoaffinity column clean-up (IAC) and extraction by acetonitrile/water; (ii) a validated method using IAC and extraction with 1% sodium bicarbonate; and (iii) an in-house validated method based on acid liquid/liquid extraction—we observed an overestimation of OTA after immunoaffinity clean-up when CIT was also present in the sample, whereas an underestimation was observed when OTA was alone. Under neutral and alkaline conditions, CIT was partially recognized by OTA antibodies. PMID:24287570

  11. Validating internal controls for quantitative plant gene expression studies.

    PubMed

    Brunner, Amy M; Yakovlev, Igor A; Strauss, Steven H

    2004-08-18

    Real-time reverse transcription PCR (RT-PCR) has greatly improved the ease and sensitivity of quantitative gene expression studies. However, accurate measurement of gene expression with this method relies on the choice of a valid reference for data normalization. Studies rarely verify that gene expression levels for reference genes are adequately consistent among the samples used, nor compare alternative genes to assess which are most reliable for the experimental conditions analyzed. Using real-time RT-PCR to study the expression of 10 poplar (genus Populus) housekeeping genes, we demonstrate a simple method for determining the degree of stability of gene expression over a set of experimental conditions. Based on a traditional method for analyzing the stability of varieties in plant breeding, it defines measures of gene expression stability from analysis of variance (ANOVA) and linear regression. We found that the potential internal control genes differed widely in their expression stability over the different tissues, developmental stages and environmental conditions studied. Our results indicate that quantitative comparisons of candidate reference genes are an important part of real-time RT-PCR studies that seek to precisely evaluate variation in gene expression. The method we demonstrated facilitates statistical and graphical evaluation of gene expression stability. Selection of the best reference gene for a given set of experimental conditions should enable detection of biologically significant changes in gene expression that are too small to be revealed by less precise methods, or when highly variable reference genes are unknowingly used in real-time RT-PCR experiments.
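The varieties-stability analogy can be sketched numerically. The snippet below is a simplified stand-in for the regression part only (our construction, not the authors' exact ANOVA and regression measures): each candidate reference gene is regressed on the per-condition mean of all genes, and a small residual spread marks stable expression.

```python
import numpy as np

def stability(expr):
    """expr: (genes, conditions) matrix of log-expression values.
    Following the varieties-stability analogy, regress each gene on the
    per-condition mean over all genes (the 'environmental index' of the
    plant-breeding method) and report (slope, residual standard deviation)
    per gene; a stable reference gene tracks the index predictably with
    small residuals."""
    expr = np.asarray(expr, float)
    index = expr.mean(axis=0)            # per-condition mean over all genes
    out = []
    for row in expr:
        slope, intercept = np.polyfit(index, row, 1)
        resid = row - (slope * index + intercept)
        out.append((slope, resid.std()))
    return out
```

A gene whose residual spread is large relative to the others would be a poor normalization reference under the conditions sampled.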

  12. Development and validation of a reversed phase liquid chromatographic method for analysis of oxytetracycline and related impurities.

    PubMed

    Kahsay, Getu; Shraim, Fairouz; Villatte, Philippe; Rotger, Jacques; Cassus-Coussère, Céline; Van Schepdael, Ann; Hoogmartens, Jos; Adams, Erwin

    2013-03-05

    A simple, robust and fast high-performance liquid chromatographic method is described for the analysis of oxytetracycline and its related impurities. The principal peak and impurities are all baseline separated in 20 min using an Inertsil C₈ (150 mm × 4.6 mm, 5 μm) column kept at 50 °C. The mobile phase consists of a gradient mixture of mobile phases A (0.05% trifluoroacetic acid in water) and B (acetonitrile-methanol-tetrahydrofuran, 80:15:5, v/v/v) pumped at a flow rate of 1.3 ml/min. UV detection was performed at 254 nm. The developed method was validated for its robustness, sensitivity, precision and linearity in the range from limit of quantification (LOQ) to 120%. The limits of detection (LOD) and LOQ were found to be 0.08 μg/ml and 0.32 μg/ml, respectively. This method allows the separation of oxytetracycline from all known and 5 unknown impurities, which is better than previously reported in the literature. Moreover, the simple mobile phase composition devoid of non-volatile buffers made the method suitable to interface with mass spectrometry for further characterization of unknown impurities. The developed method has been applied for determination of related substances in oxytetracycline bulk samples available from four manufacturers. The validation results demonstrate that the method is reliable for quantification of oxytetracycline and its impurities. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Validation of a Rapid Bacteria Endospore Enumeration System for Planetary Protection Application

    NASA Astrophysics Data System (ADS)

    Chen, Fei; Kern, Roger; Kazarians, Gayane; Venkateswaran, Kasthuri

    NASA monitors spacecraft surfaces to assure that the presence of bacterial endospores meets strict criteria at launch, to minimize the risk of inadvertent contamination of the surface of Mars. Currently, the only approved method for enumerating the spores is a culture-based assay that requires three days to produce results. In order to meet the demanding schedules of spacecraft assembly, a more rapid spore detection assay is being considered as an alternate method to the NASA standard culture-based assay. The Millipore Rapid Microbiology Detection System (RMDS) has been used successfully for rapid bioburden enumeration in the pharmaceutical and food industries. The RMDS is rapid and simple, shows high sensitivity (to 1 colony forming unit [CFU]/sample), and correlates well with traditional culture-based methods. It combines membrane filtration, adenosine triphosphate (ATP) bioluminescence chemistry, and image analysis based on photon detection with a Charge Coupled Device (CCD) camera. In this study, we have optimized the assay conditions and evaluated the use of the RMDS as a rapid spore detection tool for NASA applications. In order to select for spores, the samples were subjected to a heat shock step before proceeding with the RMDS incubation protocol. Seven species of Bacillus (nine strains) that have been repeatedly isolated from clean room environments were assayed. All strains were detected by the RMDS in 5 hours and these assay times were repeatedly demonstrated along with low image background noise. Validation experiments to compare the Rapid Spore Assay (RSA) and the NASA standard assay (NSA) were also performed. The evaluation criteria were modeled after the FDA guidelines on process validation and analytical test methods. This body of research demonstrates that the Rapid Spore Assay (RSA) is quick, and of equivalent sensitivity to the NASA standard assay, potentially reducing the assay time for bacterial endospores from over 72 hours to less than 8 hours. Accordingly, JPL has produced a report recommending that NASA adopt the RSA method as a suitable alternative to the NASA standard assay.

  14. Finite element solution of lubrication problems

    NASA Technical Reports Server (NTRS)

    Reddi, M. M.

    1971-01-01

    A variational formulation of the transient lubrication problem is presented and the corresponding finite element equations are derived for three- and six-point triangles and for four- and eight-point quadrilaterals. Test solutions for a one-dimensional slider bearing used in validating the computer program are given. Utility of the method is demonstrated by a solution of the shrouded step bearing.

  15. Making Meaningful Measurement in Survey Research: A Demonstration of the Utility of the Rasch Model. IR Applications. Volume 28

    ERIC Educational Resources Information Center

    Royal, Kenneth D.

    2010-01-01

    Quality measurement is essential in every form of research, including institutional research and assessment. This paper addresses the erroneous assumptions institutional researchers often make with regard to survey research and provides an alternative method to producing more valid and reliable measures. Rasch measurement models are discussed and…

  16. Spotting Incorrect Rules in Signed-Number Arithmetic by the Individual Consistency Index.

    ERIC Educational Resources Information Center

    Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.

    Criterion-referenced testing is an important area in the theory and practice of educational measurement. This study demonstrated that even these tests must be closely examined for construct validity. The dimensionality of a dataset will be affected by the examinee's cognitive processes as well as by the nature of the content domain. The methods of…

  17. Experience Documentation in Assessing Professional Practice or Work Experience: Lessons from Granting Advanced Certification to Health Education Specialists

    ERIC Educational Resources Information Center

    Gambescia, Stephen F.; Lysoby, Linda; Perko, Michael; Sheu, Jiunn-Jye

    2016-01-01

    The purpose of this article is to demonstrate how one profession used an "experience documentation process" to grant advanced certification to qualified certified health education specialists. The competency validation process approved by the certifying organization serves as an example of an additional method, aside from traditional…

  18. Development, Construction, and Content Validation of a Questionnaire to Test Mobile Shower Commode Usability

    PubMed Central

    Theodoros, Deborah G.; Russell, Trevor G.

    2015-01-01

    Background: Usability is an emerging domain of outcomes measurement in assistive technology provision. Currently, no questionnaires exist to test the usability of mobile shower commodes (MSCs) used by adults with spinal cord injury (SCI). Objective: To describe the development, construction, and initial content validation of an electronic questionnaire to test mobile shower commode usability for this population. Methods: The questionnaire was constructed using a mixed-methods approach in 5 phases: determining user preferences for the questionnaire’s format, developing an item bank of usability indicators from the literature and judgement of experts, constructing a preliminary questionnaire, assessing content validity with a panel of experts, and constructing the final questionnaire. Results: The electronic Mobile Shower Commode Assessment Tool Version 1.0 (eMAST 1.0) questionnaire tests MSC features and performance during activities identified using a mixed-methods approach and in consultation with users. It confirms that usability is complex and multidimensional. The final questionnaire contains 25 questions in 3 sections. The eMAST 1.0 demonstrates excellent content validity as determined by a small sample of expert clinicians. Conclusion: The eMAST 1.0 tests usability of MSCs from the perspective of adults with SCI and may be used to solicit feedback during MSC design, assessment, prescription, and ongoing use. Further studies assessing the eMAST’s psychometric properties, including studies with users of MSCs, are needed. PMID:25762862

  19. Quantitative impurity analysis of monoclonal antibody size heterogeneity by CE-LIF: example of development and validation through a quality-by-design framework.

    PubMed

    Michels, David A; Parker, Monica; Salas-Solano, Oscar

    2012-03-01

    This paper describes the framework of quality by design applied to the development, optimization and validation of a sensitive capillary electrophoresis-sodium dodecyl sulfate (CE-SDS) assay for monitoring impurities, produced in the manufacture of therapeutic MAb products, that potentially impact drug efficacy or patient safety. Drug substance or drug product samples are derivatized with fluorogenic 3-(2-furoyl)quinoline-2-carboxaldehyde and nucleophilic cyanide before separation by CE-SDS coupled to LIF detection. Three design-of-experiments studies enabled critical labeling parameters to meet method requirements for detecting minor impurities while building precision and robustness into the assay during development. The screening design predicted optimal conditions to control labeling artifacts while two full factorial designs demonstrated method robustness through control of temperature and cyanide parameters within the normal operating range. Subsequent validation according to the guidelines of the International Conference on Harmonisation showed the CE-SDS/LIF assay was specific, accurate, and precise (RSD ≤ 0.8%) for relative peak distribution and linear (R > 0.997) over the range of 0.5-1.5 mg/mL with LOD and LOQ of 10 ng/mL and 35 ng/mL, respectively. Validation confirmed the system suitability criteria used as a level of control to ensure reliable method performance. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Fast Vessel Detection in Gaofen-3 SAR Images with Ultrafine Strip-Map Mode

    PubMed Central

    Liu, Lei; Qiu, Xiaolan; Lei, Bin

    2017-01-01

    This study aims to detect vessels with lengths ranging from about 70 to 300 m, in Gaofen-3 (GF-3) SAR images with ultrafine strip-map (UFS) mode as fast as possible. Based on the analysis of the characteristics of vessels in GF-3 SAR imagery, an effective vessel detection method is proposed in this paper. Firstly, the iterative constant false alarm rate (CFAR) method is employed to detect the potential ship pixels. Secondly, the mean-shift operation is applied on each potential ship pixel to identify the candidate target region. During the mean-shift process, we maintain a selection matrix recording which pixels can be taken, and these pixels are called the valid points of the candidate target. l1-norm regression is used to extract the principal axis and detect the valid points. Finally, two kinds of false alarms, the bright line and the azimuth ambiguity, are removed by comparing the valid area of the candidate target with a pre-defined value and by computing the displacement between the true target and the corresponding replicas, respectively. Experimental results on three GF-3 SAR images with UFS mode demonstrate the effectiveness and efficiency of the proposed method. PMID:28678197
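The CFAR stage of the pipeline can be illustrated in one dimension. This is a minimal cell-averaging CFAR sketch, not the paper's iterative two-dimensional detector, and the mean-shift and l1-regression stages are omitted; parameter names are illustrative:

```python
import numpy as np

def ca_cfar_1d(x, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR along a 1-D intensity profile.
    For each cell under test, estimate clutter power from 'train' cells
    on each side (skipping 'guard' cells adjacent to the test cell) and
    flag the cell if it exceeds alpha * estimate."""
    n = len(x)
    ntrain = 2 * train
    # Threshold factor for exponentially distributed clutter power
    alpha = ntrain * (pfa ** (-1.0 / ntrain) - 1.0)
    hits = np.zeros(n, dtype=bool)
    for i in range(guard + train, n - guard - train):
        left = x[i - guard - train:i - guard]
        right = x[i + guard + 1:i + guard + 1 + train]
        noise = (left.sum() + right.sum()) / ntrain
        hits[i] = x[i] > alpha * noise
    return hits
```

Because the threshold scales with the locally estimated clutter, the false-alarm rate stays (approximately) constant across regions of different sea-clutter intensity, which is the property the abstract's iterative CFAR relies on.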

  1. A Retrospective Performance Assessment of the Developmental Neurotoxicity Study in Support of OECD Test Guideline 426

    PubMed Central

    Makris, Susan L.; Raffaele, Kathleen; Allen, Sandra; Bowers, Wayne J.; Hass, Ulla; Alleva, Enrico; Calamandrei, Gemma; Sheets, Larry; Amcoff, Patric; Delrue, Nathalie; Crofton, Kevin M.

    2009-01-01

    Objective We conducted a review of the history and performance of developmental neurotoxicity (DNT) testing in support of the finalization and implementation of Organisation for Economic Co-operation and Development (OECD) DNT test guideline 426 (TG 426). Information sources and analysis In this review we summarize extensive scientific efforts that form the foundation for this testing paradigm, including basic neurotoxicology research, interlaboratory collaborative studies, expert workshops, and validation studies, and we address the relevance, applicability, and use of the DNT study in risk assessment. Conclusions The OECD DNT guideline represents the best available science for assessing the potential for DNT in human health risk assessment, and data generated with this protocol are relevant and reliable for the assessment of these end points. The test methods used have been subjected to an extensive history of international validation, peer review, and evaluation, which is contained in the public record. The reproducibility, reliability, and sensitivity of these methods have been demonstrated, using a wide variety of test substances, in accordance with OECD guidance on the validation and international acceptance of new or updated test methods for hazard characterization. Multiple independent, expert scientific peer reviews affirm these conclusions. PMID:19165382

  2. Quantitative Acylcarnitine Determination by UHPLC-MS/MS – Going Beyond Tandem MS Acylcarnitine “Profiles”

    PubMed Central

    Minkler, Paul E.; Stoll, Maria S.K.; Ingalls, Stephen T.; Kerner, Janos; Hoppel, Charles L.

    2016-01-01

    Tandem MS “profiling” of acylcarnitines and amino acids was conceived as a first-tier screening method, and its application to expanded newborn screening has been enormously successful. However, unlike amino acid screening (which uses amino acid analysis as its second-tier validation of screening results), acylcarnitine “profiling” also assumed the role of second-tier validation, due to the lack of a generally accepted second-tier acylcarnitine determination method. In this report, we present results from the application of our validated UHPLC-MS/MS second-tier method for the quantification of total carnitine, free carnitine, butyrobetaine, and acylcarnitines to patient samples with known diagnoses: malonic acidemia, short-chain acyl-CoA dehydrogenase deficiency (SCADD) or isobutyryl-CoA dehydrogenase deficiency (IBD), 3-methylcrotonyl-CoA carboxylase deficiency (3-MCC) or β-ketothiolase deficiency (BKT), and methylmalonic acidemia (MMA). We demonstrate the assay’s ability to separate constitutional isomers and diastereomeric acylcarnitines and to generate values with a high level of accuracy and precision. These capabilities are unavailable when using tandem MS “profiles”. We also show examples of research interest, where separation of acylcarnitine species and accurate and precise acylcarnitine quantification is necessary. PMID:26458767

  3. Simultaneous quantification of nicotine, opioids, cocaine, and metabolites in human fetal postmortem brain by liquid chromatography tandem mass spectrometry

    PubMed Central

    Shakleya, Diaa M.

    2011-01-01

    A method for simultaneous LC-MS/MS quantification of nicotine, cocaine, 6-acetylmorphine (6AM), codeine, and metabolites in 100 mg of human fetal brain was developed and validated. After homogenization and solid-phase extraction, analytes were resolved on a Hydro-RP analytical column with gradient elution. Empirically determined linearity was from 5–5,000 pg/mg for cocaine and benzoylecgonine (BE), 25–5,000 pg/mg for cotinine, ecgonine methyl ester (EME) and 6AM, 50–5,000 pg/mg for trans-3-hydroxycotinine (OH-cotinine) and codeine, and 250–5,000 pg/mg for nicotine. Potential endogenous and exogenous interferences were resolved. Intra- and inter-assay analytical recoveries were ≥92%, intra- and inter-day and total assay imprecision were ≤14% RSD, and extraction efficiencies were ≥67.2% with ≤83% matrix effect. Method applicability was demonstrated with a postmortem fetal brain containing 40 pg/mg cotinine, 65 pg/mg OH-cotinine, 13 pg/mg cocaine, 34 pg/mg EME, and 525 pg/mg BE. This validated method is useful for determination of nicotine, opioid, and cocaine biomarkers in brain. PMID:19229524

  4. Validation of the Abdominal Pain Index using a revised scoring method.

    PubMed

    Laird, Kelsey T; Sherman, Amanda L; Smith, Craig A; Walker, Lynn S

    2015-06-01

    Evaluate the psychometric properties of child- and parent-report versions of the four-item Abdominal Pain Index (API) in children with functional abdominal pain (FAP) and healthy controls, using a revised scoring method that facilitates comparisons of scores across samples and time. Pediatric patients aged 8-18 years with FAP and controls completed the API at baseline (N = 1,967); a subset of their parents (N = 290) completed the API regarding the child's pain. Subsets of patients completed follow-up assessments at 2 weeks (N = 231), 3 months (N = 330), and 6 months (N = 107). Subsets of both patients (N = 389) and healthy controls (N = 172) completed a long-term follow-up assessment (mean age at follow-up = 20.21 years, SD = 3.75). The API demonstrated good concurrent, discriminant, and construct validity, as well as good internal consistency. We conclude that the API, using the revised scoring method, is a useful, reliable, and valid measure of abdominal pain severity. © The Author 2015. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Reliability and concurrent validity of a Smartphone, bubble inclinometer and motion analysis system for measurement of hip joint range of motion.

    PubMed

    Charlton, Paula C; Mentiplay, Benjamin F; Pua, Yong-Hao; Clark, Ross A

    2015-05-01

    Traditional methods of assessing joint range of motion (ROM) involve specialized tools that may not be widely available to clinicians. This study assesses the reliability and validity of a custom Smartphone application for assessing hip joint range of motion. Intra-tester reliability with concurrent validity. Passive hip joint range of motion was recorded for seven different movements in 20 males on two separate occasions. Data from a Smartphone, bubble inclinometer and a three dimensional motion analysis (3DMA) system were collected simultaneously. Intraclass correlation coefficients (ICCs), coefficients of variation (CV) and standard error of measurement (SEM) were used to assess reliability. To assess validity of the Smartphone application and the bubble inclinometer against the three dimensional motion analysis system, intraclass correlation coefficients and fixed and proportional biases were used. The Smartphone demonstrated good to excellent reliability (ICCs>0.75) for four out of the seven movements, and moderate to good reliability for the remaining three movements (ICC=0.63-0.68). Additionally, the Smartphone application displayed comparable reliability to the bubble inclinometer. The Smartphone application displayed excellent validity when compared to the three dimensional motion analysis system for all movements (ICCs>0.88) except one, which displayed moderate to good validity (ICC=0.71). Smartphones are portable and widely available tools that are mostly reliable and valid for assessing passive hip range of motion, with potential for large-scale use when a bubble inclinometer is not available. However, caution must be taken in its implementation as some movement axes demonstrated only moderate reliability. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  6. FUNCTIONAL PERFORMANCE TESTING OF THE HIP IN ATHLETES: A SYSTEMATIC REVIEW FOR RELIABILITY AND VALIDITY

    PubMed Central

    Martin, RobRoy L.

    2012-01-01

    Purpose/Background: The purpose of this study was to systematically review the literature for functional performance tests with evidence of reliability and validity that could be used for a young, athletic population with hip dysfunction. Methods: A search of the PubMed and SPORTDiscus databases was performed to identify movement, balance, hop/jump, or agility functional performance tests from the current peer-reviewed literature used to assess function of the hip in young, athletic subjects. Results: The single-leg stance, deep squat, single-leg squat, and star excursion balance tests (SEBT) demonstrated evidence of validity and normative data for score interpretation. The single-leg stance test and SEBT have evidence of validity with association to hip abductor function. The deep squat test demonstrated evidence as a functional performance test for evaluating femoroacetabular impingement. Hop/jump tests and agility tests have no reported evidence of reliability or validity in a population of subjects with hip pathology. Conclusions: Use of functional performance tests in the assessment of hip dysfunction has not been well established in the current literature. Diminished squat depth and provocation of pain during the single-leg balance test have been associated with patients diagnosed with FAI and gluteal tendinopathy, respectively. The SEBT and single-leg squat tests provided evidence of convergent validity through an analysis of kinematics and muscle function in normal subjects. Reliability of functional performance tests has not been established in patients with hip dysfunction. Further study is needed to establish reliability and validity of functional performance tests that can be used in a young, athletic population with hip dysfunction. Level of Evidence: 2b (Systematic Review of Literature) PMID:22893860

  7. Validation of a single summary score for the Prolapse/Incontinence Sexual Questionnaire-IUGA revised (PISQ-IR).

    PubMed

    Constantine, Melissa L; Pauls, Rachel N; Rogers, Rebecca R; Rockwood, Todd H

    2017-12-01

    The Prolapse/Incontinence Sexual Questionnaire-International Urogynecology Association (IUGA) Revised (PISQ-IR) measures sexual function in women with pelvic floor disorders (PFDs) yet is unwieldy, with six individual subscale scores for sexually active women and four for women who are not. We hypothesized that a valid and responsive summary score could be created for the PISQ-IR. Item response data from participating women who completed a revised version of the PISQ-IR at three clinical sites were used to generate item weights using magnitude estimation (ME) and Q-sort (Q) approaches. Item weights were applied to data from the original PISQ-IR validation to generate summary scores. Correlation and factor analysis methods were used to evaluate validity and responsiveness of summary scores. Weighted and nonweighted summary scores for the sexually active PISQ-IR demonstrated good criterion validity with condition-specific measures: Incontinence Severity Index = 0.12, 0.11, 0.11; Pelvic Floor Distress Inventory-20 = 0.39, 0.39, 0.12; Epidemiology of Prolapse and Incontinence Questionnaire-Q35 = 0.26, 0.25, 0.40; Female Sexual Functioning Index subscale total score = 0.72, 0.75, 0.72 for nonweighted, ME, and Q summary scores, respectively. Responsiveness evaluation showed that weighted and nonweighted summary scores detected moderate effect sizes (Cohen's d > 0.5). Weighted items for women who are not sexually active (NSA) demonstrated significant floor effects and did not meet criterion validity. A PISQ-IR summary score for use with sexually active women, nonweighted or calculated with ME or Q item weights, is a valid and reliable measure for clinical use. The summary scores provide value for assessing clinical treatment of pelvic floor disorders.

  8. Measuring the statistical validity of summary meta‐analysis and meta‐regression results for use in clinical practice

    PubMed Central

    Riley, Richard D.

    2017-01-01

    An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945
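The leave-one-out idea can be sketched generically for an inverse-variance (fixed-effect) meta-analysis. This is not the paper's Vn statistic, whose exact definition and null distribution are derived there; it simply tests each study's estimate against the pooled estimate of the remaining studies:

```python
import numpy as np

def loo_validation(y, v):
    """Leave-one-out check of a fixed-effect meta-analysis.
    y: study estimates; v: their within-study variances. For each study,
    pool the remaining studies by inverse-variance weighting and
    standardize the left-out estimate against that pooled value."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    z = np.empty_like(y)
    for i in range(len(y)):
        keep = np.ones(len(y), bool)
        keep[i] = False
        w = 1.0 / v[keep]
        pooled = np.sum(w * y[keep]) / np.sum(w)
        var_pooled = 1.0 / np.sum(w)
        z[i] = (y[i] - pooled) / np.sqrt(v[i] + var_pooled)
    return z   # z-scores; sum(z**2) gives a chi-square-like global statistic
```

Large standardized deviations indicate studies whose results the pooled estimate would have predicted poorly, which is the intuition behind statistical validity as described above.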

  9. Numerical solutions to the time-dependent Bloch equations revisited.

    PubMed

    Murase, Kenya; Tanki, Nobuyoshi

    2011-01-01

    The purpose of this study was to demonstrate a simple and fast method for solving the time-dependent Bloch equations. First, the time-dependent Bloch equations were reduced to a homogeneous linear differential equation, and then a simple equation was derived to solve it using a matrix operation. The validity of this method was investigated by comparing with the analytical solutions in the case of constant radiofrequency irradiation. There was a good agreement between them, indicating the validity of this method. As a further example, this method was applied to the time-dependent Bloch equations in the two-pool exchange model for chemical exchange saturation transfer (CEST) or amide proton transfer (APT) magnetic resonance imaging (MRI), and the Z-spectra and asymmetry spectra were calculated from their solutions. They were also calculated using the fourth/fifth-order Runge-Kutta-Fehlberg (RKF) method for comparison. There was also a good agreement between them, and this method was much faster than the RKF method. In conclusion, this method will be useful for analyzing the complex CEST or APT contrast mechanism and/or investigating the optimal conditions for CEST or APT MRI. Copyright © 2011 Elsevier Inc. All rights reserved.
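The reduction described above (augment the magnetization vector so the constant relaxation term folds into a homogeneous linear system, then propagate with a single matrix exponential) can be sketched in Python. The single-pool rotating-frame Bloch matrix below is a standard textbook form used for illustration; the paper's two-pool CEST model simply enlarges the matrix:

```python
import numpy as np

def expm(A, order=20, squarings=10):
    """Matrix exponential by scaling-and-squaring with a truncated Taylor series."""
    A = np.asarray(A, float) / (2.0 ** squarings)
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, order + 1):
        term = term @ A / k
        E = E + term
    for _ in range(squarings):
        E = E @ E
    return E

def bloch_matrix(w1, dw, R1, R2, M0=1.0):
    """Homogeneous 4x4 Bloch matrix in the rotating frame.
    The state is [Mx, My, Mz, 1]; the constant term R1*M0 is folded
    into the last column so that dM/dt = A @ M."""
    return np.array([
        [-R2,   dw,  0.0, 0.0],
        [-dw,  -R2,   w1, 0.0],
        [0.0,  -w1,  -R1, R1 * M0],
        [0.0,  0.0,  0.0, 0.0],
    ])

# Propagate for t = 0.5 with the RF off: pure relaxation.
A = bloch_matrix(w1=0.0, dw=0.0, R1=1.0, R2=2.0)
M = expm(0.5 * A) @ np.array([1.0, 0.0, 0.0, 1.0])
```

With the RF off, the solution reduces to mono-exponential T2 decay of Mx and T1 recovery of Mz, which is the kind of analytical check the abstract describes for constant irradiation.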

  10. A multi-frequency inverse-phase error compensation method for projector nonlinearity in 3D shape measurement

    NASA Astrophysics Data System (ADS)

    Mao, Cuili; Lu, Rongsheng; Liu, Zhijian

    2018-07-01

    In fringe projection profilometry, the phase errors caused by the nonlinear intensity response of digital projectors need to be correctly compensated. In this paper, a multi-frequency inverse-phase method is proposed. The theoretical model of the periodical phase errors is analyzed. The periodical phase errors can be adaptively compensated in the wrapped maps by using a set of fringe patterns. The compensated phase is then unwrapped with the multi-frequency method. Compared with conventional methods, the proposed method can greatly reduce the periodical phase error without calibrating the measurement system. Some simulation and experimental results are presented to demonstrate the validity of the proposed approach.
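The cancellation mechanism can be demonstrated with a single-frequency simulation: retrieving the phase once from a normal N-step fringe set and once from a set whose fringes are shifted by pi/N flips the sign of the gamma-induced periodic error, so averaging the two wrapped phases suppresses it. A sketch under simplifying assumptions (pure gamma nonlinearity, no noise, no unwrapping; all names are illustrative):

```python
import numpy as np

def capture(phi, n_steps, shift=0.0, gamma=2.2):
    """Simulate camera images of projected sinusoidal fringes whose
    intensity passes through a projector gamma nonlinearity."""
    imgs = []
    for k in range(n_steps):
        ideal = 0.5 + 0.5 * np.cos(phi + shift + 2 * np.pi * k / n_steps)
        imgs.append(ideal ** gamma)   # nonlinear response distorts the fringe
    return np.array(imgs)

def nstep_phase(imgs):
    """Standard N-step phase-shifting retrieval."""
    n = imgs.shape[0]
    k = np.arange(n).reshape(-1, 1)
    s = np.sum(imgs * np.sin(2 * np.pi * k / n), axis=0)
    c = np.sum(imgs * np.cos(2 * np.pi * k / n), axis=0)
    return np.arctan2(-s, c)

phi = np.linspace(-1.0, 1.0, 1001)   # true phase, kept away from the wrap
n = 3
p_normal = nstep_phase(capture(phi, n))
p_inverse = nstep_phase(capture(phi, n, shift=np.pi / n)) - np.pi / n
p_avg = 0.5 * (p_normal + p_inverse)  # dominant periodic error cancels
err_single = np.max(np.abs(p_normal - phi))
err_avg = np.max(np.abs(p_avg - phi))
```

The residual after averaging comes only from higher-order harmonics and is much smaller in this simulation; the paper's multi-frequency scheme additionally handles unwrapping, which is omitted here.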

  11. Robust Measurements of Phase Response Curves Realized via Multicycle Weighted Spike-Triggered Averages

    NASA Astrophysics Data System (ADS)

    Imai, Takashi; Ota, Kaiichiro; Aoyagi, Toshio

    2017-02-01

    Phase reduction has been extensively used to study rhythmic phenomena. As a result of phase reduction, the rhythm dynamics of a given system can be described using the phase response curve. Measuring this characteristic curve is an important step toward understanding a system's behavior. Recently, a basic idea for a new measurement method (called the multicycle weighted spike-triggered average method) was proposed. This paper confirms the validity of this method by providing an analytical proof and demonstrates its effectiveness in actual experimental systems by applying the method to an oscillating electric circuit. Some practical tips to use the method are also presented.

  12. Accurate quantification of fluorescent targets within turbid media based on a decoupled fluorescence Monte Carlo model.

    PubMed

    Deng, Yong; Luo, Zhaoyang; Jiang, Xu; Xie, Wenhao; Luo, Qingming

    2015-07-01

    We propose a method based on a decoupled fluorescence Monte Carlo model for constructing fluorescence Jacobians to enable accurate quantification of fluorescence targets within turbid media. The effectiveness of the proposed method is validated using two cylindrical phantoms enclosing fluorescent targets within homogeneous and heterogeneous background media. The results demonstrate that our method can recover relative concentrations of the fluorescent targets with higher accuracy than the perturbation fluorescence Monte Carlo method. This suggests that our method is suitable for quantitative fluorescence diffuse optical tomography, especially for in vivo imaging of fluorophore targets for diagnosis of different diseases and abnormalities.

  13. Validation of the ArthroS virtual reality simulator for arthroscopic skills.

    PubMed

    Stunt, J J; Kerkhoffs, G M M J; van Dijk, C N; Tuijthof, G J M

    2015-11-01

    Virtual reality simulator training has become important for acquiring arthroscopic skills. A new simulator for knee arthroscopy, ArthroS™, has been developed. The purpose of this study was to demonstrate face and construct validity, executed according to a protocol used previously to validate arthroscopic simulators. Twenty-seven participants were divided into three groups having different levels of arthroscopic experience. Participants answered questions regarding general information and the outer appearance of the simulator for face validity. Construct validity was assessed with one standardized navigation task. Face validity, educational value and user friendliness were further determined by giving participants three exercises and by asking them to fill out the questionnaire. Construct validity was demonstrated between experts and beginners. Median task times were not significantly different across repetitions between novices and intermediates, or between intermediates and experts. Median face validity was 8.3 for the outer appearance, 6.5 for the intra-articular joint and 4.7 for the surgical instruments. Educational value and user friendliness were perceived as unsatisfactory, especially because of the lack of tactile feedback. The ArthroS™ demonstrated construct validity between novices and experts, but did not demonstrate full face validity. Future improvements should focus mainly on the development of tactile feedback. It is necessary that a newly presented simulator be validated to prove that it actually contributes to proficiency of skills.

  14. SLDAssay: A software package and web tool for analyzing limiting dilution assays.

    PubMed

    Trumble, Ilana M; Allmon, Andrew G; Archin, Nancie M; Rigdon, Joseph; Francis, Owen; Baldoni, Pedro L; Hudgens, Michael G

    2017-11-01

    Serial limiting dilution (SLD) assays are used in many areas of infectious disease related research. This paper presents SLDAssay, a free and publicly available R software package and web tool for analyzing data from SLD assays. SLDAssay computes the maximum likelihood estimate (MLE) for the concentration of target cells, with corresponding exact and asymptotic confidence intervals. Exact and asymptotic goodness-of-fit p-values and a bias-corrected (BC) MLE are also provided. No other publicly available software currently implements the BC MLE or the exact methods. For validation of SLDAssay, results from Myers et al. (1994) are replicated. Simulations demonstrate that the BC MLE is less biased than the MLE. Additionally, simulations demonstrate that the exact methods tend to give better confidence interval coverage and goodness-of-fit tests with lower type I error than the asymptotic methods. Additional advantages of using exact methods are also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
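    The MLE that such software computes can be illustrated with a short numpy/scipy sketch under the standard single-hit Poisson model, in which a well seeded with u input cells is positive with probability 1 - exp(-c*u). This is a generic sketch, not SLDAssay's R code:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sld_mle(cells_per_well, n_wells, n_positive):
    # Single-hit Poisson model: P(well positive) = 1 - exp(-c * u).
    u = np.asarray(cells_per_well, float)
    n = np.asarray(n_wells, float)
    x = np.asarray(n_positive, float)
    def nll(log_c):  # negative binomial log-likelihood, parameterized in log(c)
        p = np.clip(1.0 - np.exp(-np.exp(log_c) * u), 1e-12, 1 - 1e-12)
        return -np.sum(x * np.log(p) + (n - x) * np.log(1.0 - p))
    res = minimize_scalar(nll, bounds=(-15.0, 15.0), method="bounded")
    return float(np.exp(res.x))

# one dilution, 5 of 10 wells positive at 1 cell per well;
# the closed form for a single dilution is c = -ln(1 - 5/10) = ln 2
c_hat = sld_mle([1.0], [10], [5])
```

    Optimizing in log(c) keeps the concentration positive without constrained optimization; the bias correction and exact intervals the paper describes are additional layers on top of this likelihood.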

  15. Development and validation of LC-MS/MS methods for the determination of mirabegron and its metabolites in human plasma and their application to a clinical pharmacokinetic study.

    PubMed

    Teijlingen, Raymond van; Meijer, John; Takusagawa, Shin; Gelderen, Marcel van; Beld, Cas van den; Usui, Takashi

    2012-03-01

    Mirabegron is being developed for the treatment of overactive bladder. To support the development of mirabegron, including pharmacokinetic studies, liquid chromatography/tandem mass spectrometry methods for mirabegron and eight metabolites (M5, M8, M11-M16) were developed and validated for heparinized human plasma containing sodium fluoride. Four separate bioanalytical methods were developed for the analysis of: (1) mirabegron; (2) M5 and M16; (3) M8; and (4) M11-M15. Either solid-phase extraction or liquid-liquid extraction was used to extract the analytes of interest from matrix constituents. For mirabegron, an Inertsil C₈-3 analytical column was used and detection was performed using a triple-quad mass spectrometer equipped with an atmospheric pressure chemical ionization interface. For the metabolite assays, chromatographic separation was performed through a Phenomenex Synergi Fusion-RP C₁₈ analytical column and detection was performed using a triple-quad mass spectrometer equipped with a Heated Electrospray Ionization interface. The validation results demonstrated that the developed liquid chromatography/tandem mass spectrometry methods were precise, accurate, and selective for the determination of mirabegron and its metabolites in human plasma. All methods were successfully applied in evaluating the pharmacokinetic parameters of mirabegron and metabolites in human plasma. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. The validation & verification of an LC/MS method for the determination of total docosahexaenoic acid concentrations in canine blood serum.

    PubMed

    Dillon, Gerald Patrick; Keegan, Jason D; Wallace, Geoff; Yiannikouris, Alexandros; Moran, Colm Anthony

    2018-06-01

    Docosahexaenoic acid (DHA) is an omega-3 fatty acid (n-3 FA) that has been shown to play a role in canine growth and physiological integrity and to improve skin and coat condition. However, potential adverse effects of n-3 FAs, specifically impaired cellular immunity, have been observed in dogs fed diets with elevated levels of n-3 FAs. As such, a safe upper limit (SUL) for total n-3 FAs (DHA and EPA) in dogs has been established. Considering this SUL, sensitive methods for detecting DHA in blood serum as a biomarker are required when conducting n-3 FA supplementation trials involving dogs. In this study, an LC-ESI-MS/MS method for DHA detection in dog serum was validated and verified. Recovery of DHA was optimized, and parallelism tests conducted with spiked samples demonstrated that the serum matrix did not interfere with quantitation. The stability of DHA in serum was also investigated, with storage at -80 °C considered suitable for up to six months. The method was linear over a calibration range of 1-500 μg/mL, and precision and accuracy were found to meet the requirements for validation. The method was verified in an alternative laboratory using a different analytical system and operator, with the results meeting the criteria for verification. Copyright © 2018. Published by Elsevier Inc.

  17. Convergent validity between a discrete choice experiment and a direct, open-ended method: comparison of preferred attribute levels and willingness to pay estimates.

    PubMed

    van der Pol, Marjon; Shiell, Alan; Au, Flora; Johnston, David; Tough, Suzanne

    2008-12-01

    The Discrete Choice Experiment (DCE) has become increasingly popular as a method for eliciting patient or population preferences. If DCE estimates are to inform health policy, it is crucial that the answers they provide are valid. Convergent validity is tested in this paper by comparing the results of a DCE exercise with the answers obtained from direct, open-ended questions. The two methods are compared in terms of preferred attribute levels and willingness to pay (WTP) values. Face-to-face interviews were held with 292 women in Calgary, Canada. Similar values were found between the two methods with respect to preferred levels for two of the three attributes examined. The DCE predicted less well for levels outside the range than for levels inside the range, reaffirming the importance of extensive piloting to ensure an appropriate level range in DCEs. The mean WTP derived from the open-ended question was substantially lower than the mean derived from the DCE. However, the two sets of willingness-to-pay estimates were consistent with each other in that individuals who were willing to pay more in the open-ended question were also willing to pay more in the DCE. The difference in mean WTP values between the two approaches (direct versus DCE) demonstrates the importance of continuing research into the different biases present across elicitation methods.

  18. A physics based method for combining multiple anatomy models with application to medical simulation.

    PubMed

    Zhu, Yanong; Magee, Derek; Ratnalingam, Rishya; Kessel, David

    2009-01-01

    We present a physics based approach to the construction of anatomy models by combining components from different sources; different image modalities, protocols, and patients. Given an initial anatomy, a mass-spring model is generated which mimics the physical properties of the solid anatomy components. This helps maintain valid spatial relationships between the components, as well as the validity of their shapes. Combination can be either replacing/modifying an existing component, or inserting a new component. The external forces that deform the model components to fit the new shape are estimated from Gradient Vector Flow and Distance Transform maps. We demonstrate the applicability and validity of the described approach in the area of medical simulation, by showing the processes of non-rigid surface alignment, component replacement, and component insertion.
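    The mass-spring idea can be sketched in a few lines: internal springs hold a component's shape while an external force field (standing in here for the Gradient Vector Flow or distance-transform forces) pulls it toward new data. This is a toy overdamped relaxation, not the authors' implementation:

```python
import numpy as np

def relax(points, springs, rest_len, ext_force, k=1.0, dt=0.05, steps=500):
    # Damped mass-spring relaxation: spring forces preserve the model's
    # shape while an external force field deforms it toward a target.
    p = points.copy()
    for _ in range(steps):
        f = ext_force(p)
        for (i, j), L in zip(springs, rest_len):
            d = p[j] - p[i]
            r = np.linalg.norm(d)
            fs = k * (r - L) * d / r   # Hooke force along the spring
            f[i] += fs
            f[j] -= fs
        p += dt * f                    # overdamped update (no inertia)
    return p

# toy example: a 3-point chain attracted toward the line y = 1
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
springs = [(0, 1), (1, 2)]
rest = [1.0, 1.0]
pull = lambda p: np.column_stack([np.zeros(len(p)), 1.0 - p[:, 1]])
out = relax(pts, springs, rest, pull)
```

    The chain settles onto the attractor while the springs keep the inter-point spacing, which is the "valid spatial relationships" property the abstract emphasizes.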

  19. A conflict management scale for pharmacy.

    PubMed

    Austin, Zubin; Gregory, Paul A; Martin, Craig

    2009-11-12

    To develop and establish the validity and reliability of a conflict management scale specific to pharmacy practice and education. A multistage inventory-item development process was undertaken involving 93 pharmacists and using a previously described explanatory model for conflict in pharmacy practice. A 19-item inventory was developed, field tested, and validated. The conflict management scale (CMS) demonstrated an acceptable degree of reliability and validity for use in educational or practice settings to promote self-reflection and self-awareness regarding individuals' conflict management styles. The CMS provides a unique, pharmacy-specific method for individuals to determine and reflect upon their own conflict management styles. As part of an educational program to facilitate self-reflection and heighten self-awareness, the CMS may be a useful tool to promote discussions related to an important part of pharmacy practice.

  20. Quantitative phase microscopy via optimized inversion of the phase optical transfer function.

    PubMed

    Jenkins, Micah H; Gaylord, Thomas K

    2015-10-01

    Although the field of quantitative phase imaging (QPI) has wide-ranging biomedical applicability, many QPI methods are not well-suited for such applications due to their reliance on coherent illumination and specialized hardware. By contrast, methods utilizing partially coherent illumination have the potential to promote the widespread adoption of QPI due to their compatibility with microscopy, which is ubiquitous in the biomedical community. Described herein is a new defocus-based reconstruction method that utilizes a small number of efficiently sampled micrographs to optimally invert the partially coherent phase optical transfer function under assumptions of weak absorption and slowly varying phase. Simulation results are provided that compare the performance of this method with similar algorithms and demonstrate compatibility with large phase objects. The accuracy of the method is validated experimentally using a microlens array as a test phase object. Lastly, time-lapse images of live adherent cells are obtained with an off-the-shelf microscope, thus demonstrating the new method's potential for extending QPI capability widely in the biomedical community.
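    The inversion step can be illustrated with a Tikhonov-regularized deconvolution of a simulated weak-phase measurement. The sin-of-quadratic transfer-function shape and all constants below are illustrative placeholders, not the paper's optimized sampling scheme:

```python
import numpy as np

n = 128
d = 2.0 / n                                   # sample spacing (arb. units)
x = np.arange(n) * d - 1.0
X, Y = np.meshgrid(x, x)
phase = 0.3 * np.cos(2 * np.pi * 3.0 * X)     # weak, slowly varying test phase

fx = np.fft.fftfreq(n, d=d)
FX, FY = np.meshgrid(fx, fx)
# illustrative weak-object phase transfer function of a defocused image
H = np.sin(np.pi * 0.05 * (FX**2 + FY**2))

meas = np.fft.ifft2(H * np.fft.fft2(phase)).real   # simulated phase contrast
alpha = 1e-3                                       # Tikhonov regularizer
rec = np.fft.ifft2(np.conj(H) * np.fft.fft2(meas) /
                   (np.abs(H)**2 + alpha)).real
```

    The regularizer keeps the inversion stable near the zeros of the transfer function (including DC, where defocus contrast vanishes); combining several defocus distances, as the paper does, further conditions the inverse.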

  1. Automated determination of fibrillar structures by simultaneous model building and fiber diffraction refinement.

    PubMed

    Potrzebowski, Wojciech; André, Ingemar

    2015-07-01

    For highly oriented fibrillar molecules, three-dimensional structures can often be determined from X-ray fiber diffraction data. However, because of limited information content, structure determination and validation can be challenging. We demonstrate that automated structure determination of protein fibers can be achieved by guiding the building of macromolecular models with fiber diffraction data. We illustrate the power of our approach by determining the structures of six bacteriophage viruses de novo using fiber diffraction data alone and together with solid-state NMR data. Furthermore, we demonstrate the feasibility of molecular replacement from monomeric and fibrillar templates by solving the structure of a plant virus using homology modeling and protein-protein docking. The generated models explain the experimental data to the same degree as deposited reference structures but with improved structural quality. We also developed a cross-validation method for model selection. The results highlight the power of fiber diffraction data as structural constraints.

  2. On the short circuit resilience of organic solar cells: prediction and validation.

    PubMed

    Oostra, A Jolt; Smits, Edsger C P; de Leeuw, Dago M; Blom, Paul W M; Michels, Jasper J

    2015-09-07

    The operational characteristics of organic solar cells manufactured with large-area processing methods suffer from the occurrence of short circuits due to defects in the photoactive thin-film stack. In this work we study the effect of a shunt resistance on an organic solar cell and demonstrate that device performance is not affected negatively as long as the shunt resistance is higher than approximately 1000 Ω. By studying charge transport across PSS-lithium fluoride/aluminum (LiF/Al) shunting junctions, we show that this prerequisite is already met by applying a sufficiently thick (>1.5 nm) LiF layer. We demonstrate that this remarkable shunt resilience stems from the formation of a significant charge-transport barrier at the PSS-LiF/Al interface. We validate our predictions by fabricating devices with deliberately severed photoactive layers and find excellent agreement between the calculated and experimental current-voltage characteristics.
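    The shunt-resistance threshold can be explored with a simple single-diode equivalent-circuit model (series resistance neglected); all device parameters below are illustrative assumptions, not fitted values from the paper:

```python
import numpy as np

def output_power(v, i_ph=0.010, i_0=1e-10, n_id=1.5, v_t=0.02585, r_sh=np.inf):
    # Single-diode solar cell model with a parallel shunt resistance r_sh:
    # I(V) = I_ph - I_0 * (exp(V / (n * kT/q)) - 1) - V / r_sh
    i = i_ph - i_0 * (np.exp(v / (n_id * v_t)) - 1.0) - v / r_sh
    return v * i

v = np.linspace(0.0, 0.8, 2001)
p_ideal = output_power(v).max()                # no shunt
p_1k = output_power(v, r_sh=1000.0).max()      # shunt near the ~1 kOhm threshold
p_50 = output_power(v, r_sh=50.0).max()        # severe shunt
```

    With these parameters a ~1 kΩ shunt costs only a few percent of the maximum power, while a low-resistance shunt collapses it, consistent with the resilience criterion stated in the abstract.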

  3. A discrete event simulation tool to support and predict hospital and clinic staffing.

    PubMed

    DeRienzo, Christopher M; Shaw, Ryan J; Meanor, Phillip; Lada, Emily; Ferranti, Jeffrey; Tanaka, David

    2017-06-01

    We demonstrate how to develop a simulation tool to help healthcare managers and administrators predict and plan for staffing needs in a hospital neonatal intensive care unit using administrative data. We developed a discrete event simulation model of nursing staff needed in a neonatal intensive care unit and then validated the model against historical data. The process flow was translated into a discrete event simulation model. Results demonstrated that the model can be used to give a respectable estimate of annual admissions, transfers, and deaths based upon two different staffing levels. The discrete event simulation tool model can provide healthcare managers and administrators with (1) a valid method of modeling patient mix, patient acuity, staffing needs, and costs in the present state and (2) a forecast of how changes in a unit's staffing, referral patterns, or patient mix would affect a unit in a future state.
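    A minimal discrete event simulation of the same flavor can be written with a priority queue of timestamped events; the arrival rate, length of stay, and nurse-to-patient ratio below are invented for illustration and are not the unit's actual figures:

```python
import heapq
import math
import random

def simulate_census(days=5000.0, arrival_rate=1.0, mean_los=4.0, seed=1):
    # Minimal discrete event simulation: Poisson admissions, exponential
    # length of stay; returns the time-weighted average census.
    rng = random.Random(seed)
    t, census, area = 0.0, 0, 0.0
    events = [(rng.expovariate(arrival_rate), "admit")]
    while events:
        et, kind = heapq.heappop(events)
        if et > days:
            break
        area += census * (et - t)
        t = et
        if kind == "admit":
            census += 1
            heapq.heappush(events, (t + rng.expovariate(1.0 / mean_los), "discharge"))
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "admit"))
        else:
            census -= 1
    area += census * (days - t)
    return area / days

avg = simulate_census()
nurses = math.ceil(avg / 2.0)   # assumed 1:2 nurse-to-patient ratio
```

    By Little's law the long-run average census approaches arrival_rate x mean_los (here 4 patients), which gives a quick sanity check on the event loop before layering on acuity and cost models.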

  4. Confirmatory Factor Analysis Alternative: Free, Accessible CBID Software.

    PubMed

    Bott, Marjorie; Karanevich, Alex G; Garrard, Lili; Price, Larry R; Mudaranthakam, Dinesh Pal; Gajewski, Byron

    2018-02-01

    New software that performs Classical and Bayesian Instrument Development (CBID) is reported that seamlessly integrates expert (content validity) and participant data (construct validity) to produce entire reliability estimates with smaller sample requirements. The free CBID software can be accessed through a website and used by clinical investigators in new instrument development. Three approaches using the CBID software are demonstrated: (a) traditional confirmatory factor analysis (CFA), (b) Bayesian CFA using a flat uninformative prior, and (c) Bayesian CFA using content expert data (informative prior). Outcomes of usability testing demonstrate the need to make the user-friendly, free CBID software available to interdisciplinary researchers. CBID has the potential to be a new and expeditious method for instrument development, adding to our current measurement toolbox. This allows for the development of new instruments for measuring determinants of health in smaller diverse populations or populations of rare diseases.

  5. On computing stress in polymer systems involving multi-body potentials from molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Fu, Yao; Song, Jeong-Hoon

    2014-08-01

    The Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials because of the basic assumptions in the derivation of a symmetric microscopic stress tensor. The force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials, including up to four-atom interactions, by applying central force decomposition of the atomic force. The balance of momentum is demonstrated to hold theoretically and is tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress is observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
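    As a baseline, the pair-potential special case (to which the central-force decomposition reduces) can be sketched: the potential part of the virial stress assembled from central pair forces is symmetric by construction. The Lennard-Jones parameters and geometry are illustrative, not from the paper's polymer model:

```python
import numpy as np

def lj_force(r_vec, eps=1.0, sigma=1.0):
    # Force on atom i due to atom j for the Lennard-Jones potential,
    # with r_vec = r_i - r_j (positive magnitude means repulsion).
    r = np.linalg.norm(r_vec)
    mag = 24.0 * eps * (2.0 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
    return mag * r_vec / r

def virial_stress(positions, volume, eps=1.0, sigma=1.0):
    # Potential part of the virial stress from central pair forces:
    # stress = (1/V) * sum_{i<j} outer(f_ij, r_ij), symmetric by construction
    # because each central force is parallel to its separation vector.
    s = np.zeros((3, 3))
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r_ij = positions[i] - positions[j]
            s += np.outer(lj_force(r_ij, eps, sigma), r_ij)
    return s / volume

pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 1.1, 0.0],
                [0.0, 0.0, 1.1], [1.1, 1.1, 1.1]])
stress = virial_stress(pos, volume=27.0)
```

    The paper's contribution is showing that a central force decomposition lets multi-body (up to four-atom) terms be cast into this same symmetric pairwise form.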

  6. L(sub 1) Adaptive Flight Control System: Flight Evaluation and Technology Transition

    NASA Technical Reports Server (NTRS)

    Xargay, Enric; Hovakimyan, Naira; Dobrokhodov, Vladimir; Kaminer, Isaac; Gregory, Irene M.; Cao, Chengyu

    2010-01-01

    Certification of adaptive control technologies for both manned and unmanned aircraft represents a major challenge for current Verification and Validation techniques. A key step that is still missing toward flight certification of adaptive flight control systems is the definition and development of analysis tools and methods to support Verification and Validation for nonlinear systems, similar to the procedures currently used for linear systems. In this paper, we describe and demonstrate the advantages of L(sub 1) adaptive control architectures for closing some of the gaps in certification of adaptive flight control systems, which may facilitate the transition of adaptive control into military and commercial aerospace applications. As illustrative examples, we present the results of a piloted simulation evaluation on the NASA AirSTAR flight test vehicle, and results of an extensive flight test program conducted by the Naval Postgraduate School to demonstrate the advantages of L(sub 1) adaptive control as a verifiable robust adaptive flight control system.

  7. The Development of Methodologies and Solvent Systems to Replace CFC-113 in the Validation of Large-Scale Spacecraft Hardware

    NASA Technical Reports Server (NTRS)

    Clausen, Christian A., III

    1996-01-01

    Liquid oxygen is used as the oxidizer for the liquid fueled main engines during the launch of the space shuttle. Any hardware that comes into contact with pure oxygen, either during servicing of the shuttle or in the operation of the shuttle, must be validated as being free of nonvolatile residue (NVR). This is a safety requirement to prevent spontaneous combustion of carbonaceous NVR if it were to come into contact with pure oxygen. Previous NVR validation testing of space hardware used Freon (CFC-113) as the test solvent. Because CFC-113 can no longer be used, a program was conducted to develop an NVR test procedure that uses a safe, environmentally friendly solvent. The solvent used in the new NVR test procedure is water. Work conducted over the past three years has demonstrated that when small parts are subjected to ultrasound in a water bath and NVR is present, a sufficient quantity is dispersed into the water for its concentration to be analyzed by the total organic carbon (TOC) method. The work described in this report extends the water-wash NVR validation test to large-scale parts, that is, parts too large to be subjected to ultrasound. The method consists of concentrating the NVR in the water wash onto a bed of silica gel. The total adsorbent bed is then analyzed for TOC content by using a solid sample probe. Work completed thus far has demonstrated that hydrocarbon-based NVRs can be detected at levels of less than 0.1 mg per square foot of a part's surface area by using a simple water wash.

  8. [Multicenter validation of an evaluation tool for clinical training activities (SVAT) of the nursing students].

    PubMed

    Finotto, Sergio; Gradellini, Cinzia; Bandini, Stefania; Burrai, Francesco; Lucchi Casadei, Sandra; Villani, Carolina; Vincenzi, Simone; Mecugni, Daniela

    2017-01-01

    To evaluate the psychometric characteristics of the Scheda di Valutazione delle Attività di Tirocinio (SVAT). The degree courses in Nursing of the University of Modena and Reggio Emilia (Reggio Emilia site), of the University of Bologna (formative section BO1, Imola, and the Cesena training center), and of the University of Ferrara (Ferrara and Codigoro training centers) were all enrolled in the research. For content validation, the reactive Delphi method was chosen. The panel of experts expressed a qualitative-intuitive judgment on the adequacy of the language and of the stimulus material (SVAT). For internal consistency, Cronbach's alpha was calculated. The test-retest method was used for the reliability of stability. All indicators of the SVAT achieved a degree of consensus of not less than 80%, demonstrating its content validity. Face validity is demonstrated by an average score equal to or greater than 7 obtained by all indicators. The reliability of internal consistency of the SVAT was appraised by Cronbach's alpha, which was 0.987 for the entire instrument. The reliability of stability was calculated through the correlation coefficient expressed by Pearson's r, which was 0.983 (p = 1.3E-198). In Italy there is no "gold standard" tool to evaluate the clinical performance of nursing students during and at the end of their clinical training. The SVAT proves to be a valuable and reliable tool; furthermore, it could stimulate discussion and debate among educators and nurses, so that in our country, too, it may be possible to develop and refine tools that support the evaluation of the clinical skills of nursing students.
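    For reference, the two reliability statistics used above can be computed in a few lines; the simulated item data below are synthetic, not SVAT responses:

```python
import numpy as np

def cronbach_alpha(scores):
    # scores: respondents (rows) x items (columns)
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=200)
items = latent[:, None] + 0.3 * rng.normal(size=(200, 8))  # 8 parallel items
alpha = cronbach_alpha(items)

# test-retest stability summarised, as above, with Pearson's r
total = items.sum(axis=1)
retest = total + 0.1 * rng.normal(size=200)
r = np.corrcoef(total, retest)[0, 1]
```

    Items driven by a common latent trait with modest noise give an alpha near the high value the study reports; the test-retest correlation plays the role of the stability coefficient.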

  9. Computational discovery and in vivo validation of hnf4 as a regulatory gene in planarian regeneration.

    PubMed

    Lobo, Daniel; Morokuma, Junji; Levin, Michael

    2016-09-01

    Automated computational methods can infer dynamic regulatory network models directly from temporal and spatial experimental data, such as genetic perturbations and their resultant morphologies. Recently, a computational method was able to reverse-engineer the first mechanistic model of planarian regeneration that can recapitulate the main anterior-posterior patterning experiments published in the literature. Validating this comprehensive regulatory model via novel experiments that had not yet been performed would add to our understanding of the remarkable regeneration capacity of planarian worms and demonstrate the power of this automated methodology. Using the Michigan Molecular Interactions and STRING databases and the MoCha software tool, we characterized as hnf4 an unknown regulatory gene predicted to exist by the reverse-engineered dynamic model of planarian regeneration. Then, we used the dynamic model to predict the morphological outcomes under different single and multiple knock-downs (RNA interference) of hnf4 and its predicted gene pathway interactors β-catenin and hh. Interestingly, the model predicted that RNAi of hnf4 would rescue the abnormal regenerated phenotype (tailless) of RNAi of hh in amputated trunk fragments. Finally, we validated these predictions in vivo by performing the same surgical and genetic experiments with planarian worms, obtaining the same phenotypic outcomes predicted by the reverse-engineered model. These results suggest that hnf4 is a regulatory gene in planarian regeneration, validate the computational predictions of the reverse-engineered dynamic model, and demonstrate the automated methodology for the discovery of novel genes, pathways and experimental phenotypes. michael.levin@tufts.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Godfrey, Andrew T; Gehin, Jess C; Bekar, Kursat B

    2014-01-01

    The Consortium for Advanced Simulation of Light Water Reactors* is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominately as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.

  11. Validity as a social imperative for assessment in health professions education: a concept analysis.

    PubMed

    Marceau, Mélanie; Gallagher, Frances; Young, Meredith; St-Onge, Christina

    2018-06-01

    Assessment can have far-reaching consequences for future health care professionals and for society. Thus, it is essential to establish the quality of assessment. Few modern approaches to validity are well situated to ensure the quality of complex assessment approaches, such as authentic and programmatic assessments. Here, we explore and delineate the concept of validity as a social imperative in the context of assessment in health professions education (HPE) as a potential framework for examining the quality of complex and programmatic assessment approaches. We conducted a concept analysis using Rodgers' evolutionary method to describe the concept of validity as a social imperative in the context of assessment in HPE. Supported by an academic librarian, we developed and executed a search strategy across several databases for literature published between 1995 and 2016. From a total of 321 citations, we identified 67 articles that met our inclusion criteria. Two team members analysed the texts using a specified approach to qualitative data analysis. Consensus was achieved through full team discussions. Attributes that characterise the concept were: (i) demonstration of the use of evidence considered credible by society to document the quality of assessment; (ii) validation embedded through the assessment process and score interpretation; (iii) documented validity evidence supporting the interpretation of the combination of assessment findings, and (iv) demonstration of a justified use of a variety of evidence (quantitative and qualitative) to document the quality of assessment strategies. The emerging concept of validity as a social imperative highlights some areas of focus in traditional validation frameworks, whereas some characteristics appear unique to HPE and move beyond traditional frameworks. The study reflects the importance of embedding consideration for society and societal concerns throughout the assessment and validation process, and may represent a potential lens through which to examine the quality of complex and programmatic assessment approaches. © 2018 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  12. Monitoring sedation status over time in ICU patients: reliability and validity of the Richmond Agitation-Sedation Scale (RASS).

    PubMed

    Ely, E Wesley; Truman, Brenda; Shintani, Ayumi; Thomason, Jason W W; Wheeler, Arthur P; Gordon, Sharon; Francis, Joseph; Speroff, Theodore; Gautam, Shiva; Margolin, Richard; Sessler, Curtis N; Dittus, Robert S; Bernard, Gordon R

    2003-06-11

    Goal-directed delivery of sedative and analgesic medications is recommended as standard care in intensive care units (ICUs) because of the impact these medications have on ventilator weaning and ICU length of stay, but few of the available sedation scales have been appropriately tested for reliability and validity. To test the reliability and validity of the Richmond Agitation-Sedation Scale (RASS). Prospective cohort study. Adult medical and coronary ICUs of a university-based medical center. Thirty-eight medical ICU patients were enrolled for reliability testing (46% receiving mechanical ventilation) from July 21, 1999, to September 7, 1999, and an independent cohort of 275 patients receiving mechanical ventilation was enrolled for validity testing from February 1, 2000, to May 3, 2001. Interrater reliability of the RASS, Glasgow Coma Scale (GCS), and Ramsay Scale (RS); validity of the RASS correlated with reference standard ratings, assessments of content of consciousness, GCS scores, doses of sedatives and analgesics, and bispectral electroencephalography. In 290 paired observations by nurses, both the RASS and RS demonstrated excellent interrater reliability (weighted kappa, 0.91 and 0.94, respectively), and both were superior to the GCS (weighted kappa, 0.64; P<.001 for both comparisons). Criterion validity was tested in 411 paired observations in the first 96 patients of the validation cohort, in whom the RASS showed significant differences between levels of consciousness (P<.001 for all) and correctly identified fluctuations within patients over time (P<.001). In addition, 5 methods were used to test the construct validity of the RASS, including correlation with an attention screening examination (r = 0.78, P<.001), GCS scores (r = 0.91, P<.001), quantity of different psychoactive medication dosages 8 hours prior to assessment (eg, lorazepam: r = -0.31, P<.001), successful extubation (P = .07), and bispectral electroencephalography (r = 0.63, P<.001). Face validity was demonstrated via a survey of 26 critical care nurses, in which 92% agreed or strongly agreed with the RASS scoring scheme, and 81% agreed or strongly agreed that the instrument provided a consensus for goal-directed delivery of medications. The RASS demonstrated excellent interrater reliability and criterion, construct, and face validity. This is the first sedation scale to be validated for its ability to detect changes in sedation status over consecutive days of ICU care, against constructs of level of consciousness and delirium, and correlated with the administered dose of sedative and analgesic medications.
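    The interrater statistic reported above (weighted kappa) can be computed directly from paired ratings. This generic sketch assumes ordinal ratings coded 0..n_levels-1 and quadratic disagreement weights, which may differ from the weighting the study used:

```python
import numpy as np

def weighted_kappa(r1, r2, n_levels, weights="quadratic"):
    # Weighted kappa: 1 - (observed weighted disagreement) /
    #                     (chance-expected weighted disagreement)
    r1, r2 = np.asarray(r1), np.asarray(r2)
    O = np.zeros((n_levels, n_levels))
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()
    i, j = np.indices((n_levels, n_levels))
    d = np.abs(i - j) / (n_levels - 1)
    w = d if weights == "linear" else d ** 2          # disagreement weights
    E = np.outer(O.sum(axis=1), O.sum(axis=0))        # chance agreement
    return 1.0 - (w * O).sum() / (w * E).sum()
```

    Perfect agreement gives kappa = 1, while systematic disagreement drives it to or below zero, which is why values near 0.9 indicate excellent interrater reliability.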

  13. Application of Multivariable Analysis and FTIR-ATR Spectroscopy to the Prediction of Properties in Campeche Honey

    PubMed Central

    Pat, Lucio; Ali, Bassam; Guerrero, Armando; Córdova, Atl V.; Garduza, José P.

    2016-01-01

    Attenuated total reflectance-Fourier transform infrared (ATR-FTIR) spectrometry combined with chemometric modelling was used for the determination of physicochemical properties (pH, redox potential, free acidity, electrical conductivity, moisture, total soluble solids (TSS), ash, and HMF) in honey samples. The reference values of 189 honey samples of different botanical origin were determined using the methods of the Association of Official Analytical Chemists (AOAC, 1990), the Codex Alimentarius (2001), and the International Honey Commission (2002). Multivariate calibration models were built using partial least squares (PLS) regression for the measurands studied. The developed models were validated using cross-validation and external validation; several statistical parameters were obtained to determine the robustness of the calibration models: the optimum number of principal components (PCs), the standard error of cross-validation (SECV), the coefficient of determination of cross-validation (R²cal), the standard error of external validation (SEP), the coefficient of determination for external validation (R²val), and the coefficient of variation (CV). The prediction accuracy for pH, redox potential, electrical conductivity, moisture, TSS, and ash was good, while for free acidity and HMF it was poor. The results demonstrate that ATR-FTIR spectrometry is a valuable, rapid, and nondestructive tool for the quantification of physicochemical properties of honey. PMID:28070445
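
    Validation figures of merit such as SEP, R² and CV can be computed from paired reference and predicted values. The sketch below uses one common set of conventions (SEP as root-mean-square error of prediction); the paper may use slightly different definitions, and the numbers in the example are invented.

```python
import math

def validation_stats(y_ref, y_pred):
    """Return (SEP, R2, CV) for predicted vs. reference values.

    SEP: root-mean-square error of prediction.
    R2 : coefficient of determination.
    CV : SEP expressed as a percentage of the reference mean.
    """
    n = len(y_ref)
    mean_ref = sum(y_ref) / n
    ss_res = sum((p - r) ** 2 for r, p in zip(y_ref, y_pred))
    ss_tot = sum((r - mean_ref) ** 2 for r in y_ref)
    sep = math.sqrt(ss_res / n)
    r2 = 1.0 - ss_res / ss_tot
    cv = 100.0 * sep / mean_ref
    return sep, r2, cv
```

    A well-performing calibration shows R² close to 1 and a small CV; poor predictions, as reported here for free acidity and HMF, show the opposite pattern.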

  14. Validation of a 30-year-old process for the manufacture of L-asparaginase from Erwinia chrysanthemi.

    PubMed

    Gervais, David; Allison, Nigel; Jennings, Alan; Jones, Shane; Marks, Trevor

    2013-04-01

    A 30-year-old manufacturing process for the biologic product L-asparaginase from the plant pathogen Erwinia chrysanthemi was rigorously qualified and validated, with a high level of agreement between validation data and the 6-year process database. L-Asparaginase exists in its native state as a tetrameric protein and is used as a chemotherapeutic agent in the treatment regimen for acute lymphoblastic leukaemia (ALL). The manufacturing process involves fermentation of the production organism, extraction and purification of the L-asparaginase to make drug substance (DS), and finally formulation and lyophilisation to generate drug product (DP). The extensive manufacturing experience with the product was used to establish ranges for all process parameters and product quality attributes. The product and in-process intermediates were rigorously characterised, and new assays, such as size-exclusion and reversed-phase UPLC, were developed, validated, and used to analyse several pre-validation batches. Finally, three prospective process validation batches were manufactured and product quality data generated using both the existing and the new analytical methods. These data demonstrated the process to be robust, highly reproducible and consistent, and the validation was successful, contributing to the granting of an FDA product license in November 2011.

  15. Validity and reliability of a scale to measure genital body image.

    PubMed

    Zielinski, Ruth E; Kane-Low, Lisa; Miller, Janis M; Sampselle, Carolyn

    2012-01-01

    Women's body image dissatisfaction extends to body parts usually hidden from view: their genitals. Ability to measure genital body image is limited by a lack of valid and reliable questionnaires. We subjected a previously developed questionnaire, the Genital Self Image Scale (GSIS), to psychometric testing using a variety of methods. Five experts determined the content validity of the scale. Then, using four participant groups, factor analysis was performed to determine construct validity and to identify factors. Further construct validity was established using the contrasting groups approach. Internal consistency and test-retest reliability were determined. Twenty-one of 29 items were considered content valid. Two items were added based on expert suggestions. Factor analysis resulted in four factors, identified as Genital Confidence, Appeal, Function, and Comfort. The revised scale (GSIS-20) included 20 items explaining 59.4% of the variance. Women indicating an interest in genital cosmetic surgery exhibited significantly lower scores on the GSIS-20 than those who did not. The final 20-item scale exhibited internal reliability across all sample groups as well as test-retest reliability. The GSIS-20 provides a measure of genital body image demonstrating reliability and validity across several populations of women.
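
    Internal consistency of a multi-item scale such as the GSIS-20 is typically summarised with Cronbach's alpha. A minimal sketch, with invented data, follows; it is illustrative only, not the authors' analysis code.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a table of scale scores.

    `items` is a list of rows, one per respondent; each row holds
    that respondent's score on every item of the scale.
    """
    n = len(items)     # respondents
    k = len(items[0])  # items

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[j] for row in items]) for j in range(k)]
    total_var = variance([sum(row) for row in items])
    return k / (k - 1) * (1.0 - sum(item_vars) / total_var)
```

    Alpha approaches 1 when items covary strongly (respondents who score high on one item score high on the others); values of roughly 0.7 and above are conventionally taken as acceptable internal consistency.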

  16. Development of an objective assessment tool for total laparoscopic hysterectomy: A Delphi method among experts and evaluation on a virtual reality simulator.

    PubMed

    Knight, Sophie; Aggarwal, Rajesh; Agostini, Aubert; Loundou, Anderson; Berdah, Stéphane; Crochet, Patrice

    2018-01-01

    Total laparoscopic hysterectomy (LH) requires an advanced level of operative skills and training. The aim of this study was to develop an objective scale specific to the assessment of technical skills for LH (H-OSATS) and to demonstrate feasibility of use and validity in a virtual reality setting. The scale was developed using a hierarchical task analysis and a panel of international experts. A Delphi method obtained consensus among experts on the relevant steps that should be included in the H-OSATS scale for the assessment of operative performance. Feasibility of use and validity of the scale were evaluated by reviewing video recordings of LH performed on a virtual reality laparoscopic simulator. Three groups of operators of different levels of experience were assessed in a Marseille teaching hospital (10 novices, 8 intermediates and 8 experienced surgeons). Correlations with scores obtained using a recognised generic global rating tool (OSATS) were calculated. A total of 76 discrete steps were identified by the hierarchical task analysis. Fourteen experts completed the two rounds of the Delphi questionnaire; 64 steps reached consensus and were integrated into the scale. During the validation process, the median time to rate each video recording was 25 minutes. There was a significant difference between the novice, intermediate and experienced groups in total H-OSATS scores (133, 155.9 and 178.25, respectively; p = 0.002). The H-OSATS scale demonstrated high inter-rater reliability (intraclass correlation coefficient [ICC] = 0.930; p<0.001) and test-retest reliability (ICC = 0.877; p<0.001). High correlations were found between total H-OSATS scores and OSATS scores (rho = 0.928; p<0.001). The H-OSATS scale displayed evidence of validity for the assessment of technical performance for LH performed on a virtual reality simulator. The implementation of this scale is expected to facilitate deliberate practice. Next steps should focus on evaluating the validity of the scale in the operating room.
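
    Intraclass correlation coefficients like those quoted for inter-rater and test-retest reliability come in several forms; one widely used variant is the two-way random-effects, absolute-agreement, single-rater form, often written ICC(2,1). The sketch below is an illustrative implementation of that form (the abstract does not state which ICC model was used), with invented data.

```python
def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is a list of rows, one per subject; each row holds the
    ratings given to that subject by every rater.
    """
    n = len(scores)     # subjects
    k = len(scores[0])  # raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)             # between-subjects mean square
    msc = ss_cols / (k - 1)             # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

    Raters who agree perfectly give an ICC of 1; values near the 0.930 reported here indicate that almost all score variance is between subjects rather than between raters.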

  17. Psychometric Properties of Translation of the Child Perception Questionnaire (CPQ11-14) in Telugu Speaking Indian Children.

    PubMed

    Kumar, Santhosh; Kroon, Jeroen; Lalloo, Ratilal; Johnson, Newell W

    2016-01-01

    Oral health related quality of life research among children in India is still nascent and no measures have been validated to date. Although the CPQ11-14 has been previously used in studies from the Indian sub-continent, the instrument has never been tested for cross-cultural adaptability. This study aimed to assess the validity and reliability of the CPQ11-14 in Telugu speaking Indian school children. Primary school children of Medak district, Telangana State, India, were recruited by a multi-stage probability sampling method. The translated questionnaire was initially pilot tested on a small subset of children (n = 40). Children with informed consent from parents (N = 1342) were then provided with questionnaires containing the Telugu translation of the CPQ11-14, followed by a clinical examination conducted by a single examiner, using basic WHO survey methods for dental caries, malocclusion, and Dean's fluorosis index. Children (n = 161) in randomly chosen schools were re-administered the same questionnaire after a two-week interval to test the reliability of the CPQ11-14 on repeated administrations. Internal consistency and test-retest reliability, as determined by Cronbach's alpha and the intraclass correlation coefficient for the overall CPQ11-14 scale, were 0.925 and 0.923, respectively. The CPQ11-14 discriminated between the categories of fluorosis and malocclusion, while its discriminant validity with respect to dental caries was limited. The CPQ11-14 also demonstrated good construct validity, with both the overall CPQ11-14 and its subscales having significant positive correlations with global ratings of oral health and overall wellbeing, even after adjusting for confounding variables. The CPQ11-14 had a correlation of 0.405 with self-evaluated oral health and 0.407 with self-evaluated impact of oral health on overall wellbeing. In conclusion, the Telugu translation of the CPQ11-14 demonstrated good internal consistency and excellent reliability on repeated administrations after two weeks. It also exhibited good discriminant and construct validity.

  18. Development and validation of a sensitive thermal desorption-gas chromatography-mass spectrometry (TD-GC-MS) method for the determination of phosgene in air samples.

    PubMed

    Juillet, Y; Dubois, C; Bintein, F; Dissard, J; Bossée, A

    2014-08-01

    A new rapid, sensitive and reliable method was developed for the determination of phosgene in air samples using thermal desorption (TD) followed by gas chromatography-mass spectrometry (GC-MS). The method is based on a fast (10 min) active sampling of only 1 L of air onto a Tenax® GR tube doped with 0.5 mL of a derivatizing mixture containing dimercaptotoluene and triethylamine in hexane solution. Validation of the TD-GC-MS method showed a low limit of detection (40 ppbv), acceptable repeatability, intermediate precision (relative standard deviation within 12%) and excellent accuracy (>95%). Linearity was demonstrated for two concentration ranges (0.04 to 2.5 ppmv and 2.5 to 10 ppmv) owing to variation of the derivatization recovery between low and high concentration levels. Due to its simple on-site implementation and its close similarity to the recommended operating procedure (ROP) for chemical warfare agent vapour sampling, the method is particularly useful in the process of verification of the Chemical Weapons Convention.
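
    A detection limit such as the 40 ppbv quoted above is commonly estimated from the calibration line, e.g. via the ICH-style convention LOD = 3.3·σ/S, where S is the calibration slope and σ a standard deviation of the response (here taken as the residual standard deviation of the fit). The sketch below uses invented numbers and is not the paper's actual procedure.

```python
import math

def fit_line(x, y):
    """Ordinary least-squares fit; returns (slope, intercept, s_res)."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    sxx = sum((xi - xm) ** 2 for xi in x)
    sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ym - slope * xm
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    s_res = math.sqrt(ss_res / (n - 2))  # residual standard deviation
    return slope, intercept, s_res

def detection_limits(slope, sigma):
    """ICH-style limits: LOD = 3.3*sigma/slope, LOQ = 10*sigma/slope."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope
```

    Fitting a calibration series and feeding its slope and residual scatter into `detection_limits` gives paired LOD/LOQ estimates in the concentration units of the calibration standards.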

  19. An IMU-to-Body Alignment Method Applied to Human Gait Analysis

    PubMed Central

    Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo

    2016-01-01

    This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis. PMID:27973406

  20. Validation and application of an improved method for the rapid determination of proline in grape berries.

    PubMed

    Rienth, Markus; Romieu, Charles; Gregan, Rebecca; Walsh, Caroline; Torregrosa, Laurent; Kelly, Mary T

    2014-04-16

    A rapid and sensitive method is presented for the determination of proline in grape berries. Following acidification with formic acid, proline is derivatized by heating at 100 °C for 15 min with 3% ninhydrin in dimethyl sulfoxide, and the absorbance, which is stable for at least 60 min, is read at 520 nm. The method was statistically validated in the concentration range from 2.5 to 15 mg/L, giving a repeatability and intermediate precision of generally <3%; linearity was determined using the lack of fit test. Results obtained with this method concurred (r = 0.99) with those obtained for the same samples on an amino acid analyzer. In terms of sample preparation, a simple dilution (5-20-fold) is required, and sugars, primary amino acids, and anthocyanins were demonstrated not to interfere, as the latter are bleached by ninhydrin under the experimental conditions. The method was applied to the study of proline accumulation in the fruits of microvines grown in phytotrons, and it was established that proline accumulation and concentrations closely resemble those of field-grown macrovines.
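
    The lack-of-fit test mentioned for assessing linearity splits the residual scatter about the calibration line into pure error (replicates disagreeing with each other) and lack of fit (group means straying from the line). A minimal sketch of the F statistic, with invented data, follows; it is illustrative, not the authors' code.

```python
from collections import defaultdict

def lack_of_fit_F(x, y):
    """Lack-of-fit F statistic for a straight-line calibration.

    Requires replicate y readings at (some of) the x levels.
    Returns F = (SS_lof / (m - 2)) / (SS_pe / (n - m)), where m is
    the number of distinct x levels and n the number of points.
    """
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    sxx = sum((xi - xm) ** 2 for xi in x)
    sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ym - slope * xm

    groups = defaultdict(list)
    for xi, yi in zip(x, y):
        groups[xi].append(yi)
    m = len(groups)

    # Pure error: scatter of replicates about their own group mean.
    ss_pe = sum(sum((yi - sum(g) / len(g)) ** 2 for yi in g)
                for g in groups.values())
    # Total residual scatter about the fitted line.
    ss_res = sum((yi - (intercept + slope * xi)) ** 2
                 for xi, yi in zip(x, y))
    ss_lof = ss_res - ss_pe
    return (ss_lof / (m - 2)) / (ss_pe / (n - m))
```

    An F value that is small relative to the F distribution on (m - 2, n - m) degrees of freedom gives no evidence of non-linearity; a large F indicates the straight-line model does not fit.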
