How to test validity in orthodontic research: a mixed dentition analysis example.
Donatelli, Richard E; Lee, Shin-Jae
2015-02-01
The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
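As a rough illustration of the comparison this abstract describes, the sketch below contrasts a traditional single hold-out split with leave-one-out cross-validation for a linear regression model using scikit-learn. The simulated predictors standing in for mixed dentition measurements, the sample size, and all variable names are assumptions, not data from the study.

```python
# Sketch: compare leave-one-out cross-validation with a single train/test split
# for a linear regression prediction model on a small, simulated data set.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, train_test_split, cross_val_score

rng = np.random.default_rng(0)
n = 60                                    # small sample, where the choice of validation method matters most
X = rng.normal(size=(n, 3))               # e.g. mesiodistal widths of erupted teeth (stand-in)
y = X @ [1.5, 0.8, 0.4] + rng.normal(scale=1.0, size=n)  # e.g. unerupted segment width (stand-in)

model = LinearRegression()

# Traditional simple validation: one random hold-out split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
simple_mse = np.mean((model.fit(X_tr, y_tr).predict(X_te) - y_te) ** 2)

# Leave-one-out cross-validation: every observation serves once as the test case
loo_mse = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()

print(f"simple validation MSE: {simple_mse:.3f}")
print(f"leave-one-out CV MSE:  {loo_mse:.3f}")
```

With small samples, the single split wastes a third of the data and its error estimate is highly split-dependent, which is consistent with the abstract's finding that leave-one-out gave the smallest validation errors.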
Bibliometrics for Social Validation.
Hicks, Daniel J
2016-01-01
This paper introduces a bibliometric, citation network-based method for assessing the social validation of novel research, and applies this method to the development of high-throughput toxicology research at the US Environmental Protection Agency. Social validation refers to the acceptance of novel research methods by a relevant scientific community; it is formally independent of the technical validation of methods, and is frequently studied in history, philosophy, and social studies of science using qualitative methods. The quantitative methods introduced here find that high-throughput toxicology methods are spread throughout a large and well-connected research community, which suggests high social validation. Further assessment of social validation involving mixed qualitative and quantitative methods is discussed in the conclusion. PMID:28005974
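A minimal sketch, in the spirit of the citation-network approach described above, of how one might measure the size and connectedness of the community citing a set of "seed" method papers. The edge list, node names, and thresholds are invented placeholders; a real analysis would load citation records from a bibliographic database.

```python
# Sketch: connectedness of the community citing a set of seed papers (networkx).
import networkx as nx

# directed edges (citing_paper -> cited_paper); illustrative assumption
edges = [("p3", "p1"), ("p4", "p1"), ("p4", "p2"), ("p5", "p3"),
         ("p6", "p4"), ("p7", "p4"), ("p7", "p5")]
G = nx.DiGraph(edges)

seeds = {"p1", "p2"}                        # the novel-method papers of interest
citers = {u for u, v in G.edges if v in seeds}

community = G.subgraph(citers | seeds).to_undirected()
print("community size:", community.number_of_nodes())
print("connected:", nx.is_connected(community))
print("density:", round(nx.density(community), 3))
```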
ERIC Educational Resources Information Center
Nicklas, Theresa A.; O'Neil, Carol E.; Stuff, Janice; Goodell, Lora Suzanne; Liu, Yan; Martin, Corby K.
2012-01-01
Objective: The goal of the study was to assess the validity and feasibility of a digital diet estimation method for use with preschool children in "Head Start." Methods: Preschool children and their caregivers participated in validation (n = 22) and feasibility (n = 24) pilot studies. Validity was determined in the metabolic research unit using…
Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Burlinson, Brian; Escobar, Patricia A; Kraynak, Andrew R; Nakagawa, Yuzuki; Nakajima, Madoka; Pant, Kamala; Asano, Norihide; Lovell, David; Morita, Takeshi; Ohno, Yasuo; Hayashi, Makoto
2015-07-01
The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this validation effort was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The purpose of the pre-validation studies (i.e., Phase 1 through 3), conducted in four or five laboratories with extensive comet assay experience, was to optimize the protocol to be used during the definitive validation study. Copyright © 2015 Elsevier B.V. All rights reserved.
Barrett, Eva; McCreesh, Karen; Lewis, Jeremy
2014-02-01
A wide array of instruments are available for non-invasive thoracic kyphosis measurement. Guidelines for selecting outcome measures for use in clinical and research practice recommend that properties such as validity and reliability are considered. This systematic review reports on the reliability and validity of non-invasive methods for measuring thoracic kyphosis. A systematic search of 11 electronic databases located studies assessing reliability and/or validity of non-invasive thoracic kyphosis measurement techniques. Two independent reviewers used a critical appraisal tool to assess the quality of retrieved studies. Data were extracted by the primary reviewer. The results were synthesized qualitatively using a level of evidence approach. 27 studies satisfied the eligibility criteria and were included in the review. Reliability alone, validity alone, and both reliability and validity were investigated by sixteen, two, and nine studies, respectively. 17/27 studies were deemed to be of high quality. In total, 15 methods of thoracic kyphosis measurement were evaluated in the retrieved studies. All investigated methods showed high (ICC ≥ .7) to very high (ICC ≥ .9) levels of reliability. The validity of the methods ranged from low to very high. The strongest level of evidence for reliability exists in support of the Debrunner kyphometer, Spinal Mouse and Flexicurve index, and for validity in support of the arcometer and Flexicurve index. Further reliability and validity studies are required to strengthen the level of evidence for the remaining methods of measurement. This should be addressed by future research. Copyright © 2013 Elsevier Ltd. All rights reserved.
Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L
2013-08-01
Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.
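A greatly simplified sketch of the imputation variant of the validation-sampling idea described above (the reweighting variant and proper multiple-imputation uncertainty propagation are omitted): a detailed confounder observed only in a validation subsample is modelled from covariates available in the full sample, imputed for everyone, and then included in the outcome model. All data, variable names, and effect sizes are invented assumptions.

```python
# Sketch: validation-sample imputation of a confounder missing in the full sample.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(70, 8, n),
    "chronic_dx": rng.integers(0, 2, n),
    "vaccinated": rng.integers(0, 2, n),
})
# detailed confounder, truly related to age and comorbidity (unobserved in the full sample)
df["frailty"] = 0.05 * df["age"] + 0.8 * df["chronic_dx"] + rng.normal(0, 1, n)
df["outcome"] = rng.binomial(1, 1 / (1 + np.exp(-(-3 + 0.5 * df["frailty"] - 0.3 * df["vaccinated"]))))

validation = df.sample(800, random_state=1)                  # subsample with frailty measured
imp_model = sm.OLS(validation["frailty"],
                   sm.add_constant(validation[["age", "chronic_dx"]])).fit()
df["frailty_imputed"] = imp_model.predict(sm.add_constant(df[["age", "chronic_dx"]]))

adj = sm.Logit(df["outcome"],
               sm.add_constant(df[["vaccinated", "age", "chronic_dx", "frailty_imputed"]])).fit(disp=0)
print("adjusted log-odds ratio for vaccination:", round(adj.params["vaccinated"], 3))
```

A full analysis would propagate the imputation uncertainty (e.g. via multiple imputation) rather than using a single plug-in prediction, which is one of the "many factors" the authors note the method's success depends on.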
Validation studies and proficiency testing.
Ankilam, Elke; Heinze, Petra; Kay, Simon; Van den Eede, Guy; Popping, Bert
2002-01-01
Genetically modified organisms (GMOs) entered the European food market in 1996. Current legislation demands the labeling of food products if they contain more than 1% GMO, as assessed for each ingredient of the product. To create confidence in the testing methods and to complement enforcement requirements, there is an urgent need for internationally validated methods, which could serve as reference methods. To date, several methods have been submitted to validation trials at an international level; approaches now exist that can be used in different circumstances and for different food matrixes. Moreover, the requirement for the formal validation of methods is clearly accepted; several national and international bodies are active in organizing studies. Further validation studies, especially on the quantitative polymerase chain reaction methods, need to be performed to cover the rising demand for new extraction methods and other background matrixes, as well as for novel GMO constructs.
Field validation of the dnph method for aldehydes and ketones. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Workman, G.S.; Steger, J.L.
1996-04-01
A stationary source emission test method for selected aldehydes and ketones has been validated. The method employs a sampling train with impingers containing 2,4-dinitrophenylhydrazine (DNPH) to derivatize the analytes. The resulting hydrazones are recovered and analyzed by high performance liquid chromatography. Nine analytes were studied; the method was validated for formaldehyde, acetaldehyde, propionaldehyde, acetophenone and isophorone. Acrolein, methyl ethyl ketone, methyl isobutyl ketone, and quinone did not meet the validation criteria. The study employed the validation techniques described in EPA Method 301, which uses train spiking to determine bias, and collocated sampling trains to determine precision. The studies were carried out at a plywood veneer dryer and a polyester manufacturing plant.
Task Validation for the AN/TPQ-36 Radar System
1978-09-01
This report presents the method and results of a study to validate personnel task descriptions for the new AN/TPQ-36 radar system. Contents: Introduction; Method; Results, Conclusions, and Recommendations; Task Validation; 26B MOS. The report describes the method, results, conclusions, and recommendations of the validation study. The appendixes contain the following: 1. Appendix A contains
Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael
2014-01-01
Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144
Dubreil, Estelle; Gautier, Sophie; Fourmond, Marie-Pierre; Bessiral, Mélaine; Gaugain, Murielle; Verdon, Eric; Pessel, Dominique
2017-04-01
An approach is described to validate a fast and simple targeted screening method for antibiotic analysis in meat and aquaculture products by LC-MS/MS. The strategy of validation was applied for a panel of 75 antibiotics belonging to different families, i.e., penicillins, cephalosporins, sulfonamides, macrolides, quinolones and phenicols. The samples were extracted once with acetonitrile, concentrated by evaporation and injected into the LC-MS/MS system. The approach chosen for the validation was based on the Community Reference Laboratory (CRL) guidelines for the validation of screening qualitative methods. The aim of the validation was to prove sufficient sensitivity of the method to detect all the targeted antibiotics at the level of interest, generally the maximum residue limit (MRL). A robustness study was also performed to test the influence of different factors. The validation showed that the method is valid to detect and identify 73 antibiotics of the 75 antibiotics studied in meat and aquaculture products at the validation levels.
Haddad, Monoem; Stylianides, Georgios; Djaoui, Leo; Dellal, Alexandre; Chamari, Karim
2017-01-01
Purpose: The aim of this review is to (1) retrieve all data validating the session rating of perceived exertion (session-RPE) method using various criteria, (2) highlight the rationale of this method and its ecological usefulness, and (3) describe factors that can alter RPE and that users of this method should take into consideration. Method: Search engines such as SPORTDiscus, PubMed, and Google Scholar databases in the English language between 2001 and 2016 were consulted for the validity and usefulness of the session-RPE method. Studies were considered for further analysis when they used the session-RPE method proposed by Foster et al. in 2001. Participants were athletes of any gender, age, or level of competition. Studies using languages other than English were excluded from the analysis of the validity and reliability of the session-RPE method. Other studies were examined to explain the rationale of the session-RPE method and the origin of RPE. Results: A total of 950 studies cited the Foster et al. study that proposed the session-RPE method. Thirty-six studies have examined the validity and reliability of this proposed method using the modified CR-10 scale. Conclusion: These studies confirmed the validity, good reliability, and internal consistency of the session-RPE method in several sports and physical activities with men and women of different age categories (children, adolescents, and adults) and various expertise levels. This method could be used as a stand-alone method for training load (TL) monitoring purposes, though some recommend combining it with other physiological parameters such as heart rate. PMID:29163016
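A short sketch of the session-RPE calculation popularised by Foster et al.: training load (in arbitrary units) is the session RPE on the CR-10 scale multiplied by the session duration in minutes. The sample sessions below are invented, and the monotony index is computed over training days only for brevity (Foster's original uses all days of the week, including rest days).

```python
# Sketch: session-RPE training load (TL = CR-10 RPE x duration in minutes).
import statistics

sessions = [
    {"day": "Mon", "rpe_cr10": 4, "duration_min": 60},
    {"day": "Wed", "rpe_cr10": 7, "duration_min": 45},
    {"day": "Fri", "rpe_cr10": 5, "duration_min": 90},
]

loads = [s["rpe_cr10"] * s["duration_min"] for s in sessions]
weekly_load = sum(loads)
monotony = statistics.mean(loads) / statistics.stdev(loads)  # mean / SD of daily load

for s, tl in zip(sessions, loads):
    print(s["day"], tl, "AU")
print("weekly load:", weekly_load, "AU", "| monotony:", round(monotony, 2))
```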
Feldsine, Philip; Kaur, Mandeep; Shah, Khyati; Immerman, Amy; Jucker, Markus; Lienau, Andrew
2015-01-01
Assurance GDS™ for Salmonella Tq has been validated according to the AOAC INTERNATIONAL Methods Committee Guidelines for Validation of Microbiological Methods for Food and Environmental Surfaces for the detection of Salmonella in selected foods and on environmental surfaces (Official Method of Analysis℠ 2009.03, Performance Tested Method℠ No. 050602). The method also completed AFNOR validation (following the ISO 16140 standard) compared to the reference method EN ISO 6579. For AFNOR, GDS was given a scope covering all human food, animal feed stuff, and environmental surfaces (Certificate No. TRA02/12-01/09). Results showed that Assurance GDS for Salmonella (GDS) has high sensitivity and is equivalent to the reference culture methods for the detection of motile and non-motile Salmonella. As part of the aforementioned validations, inclusivity and exclusivity, stability, and ruggedness studies were also conducted. Assurance GDS has 100% inclusivity and exclusivity among the 100 Salmonella serovars and 35 non-Salmonella organisms analyzed. To add to the scope of the Assurance GDS for Salmonella method, a matrix extension study was conducted, following the AOAC guidelines, to validate the application of the method to selected spices, specifically curry powder, cumin powder, and chili powder, for the detection of Salmonella.
Validation of alternative methods for toxicity testing.
Bruner, L H; Carr, G J; Curren, R D; Chamberlain, M
1998-01-01
Before nonanimal toxicity tests may be officially accepted by regulatory agencies, it is generally agreed that the validity of the new methods must be demonstrated in an independent, scientifically sound validation program. Validation has been defined as the demonstration of the reliability and relevance of a test method for a particular purpose. This paper provides a brief review of the development of the theoretical aspects of the validation process and updates current thinking about objectively testing the performance of an alternative method in a validation study. Validation of alternative methods for eye irritation testing is a specific example illustrating important concepts. Although discussion focuses on the validation of alternative methods intended to replace current in vivo toxicity tests, the procedures can be used to assess the performance of alternative methods intended for other uses. PMID:9599695
Presgrave, Octavio; Moura, Wlamir; Caldeira, Cristiane; Pereira, Elisabete; Bôas, Maria H Villas; Eskes, Chantra
2016-03-01
The need for the creation of a Brazilian centre for the validation of alternative methods was recognised in 2008, and members of academia, industry and existing international validation centres immediately engaged with the idea. In 2012, co-operation between the Oswaldo Cruz Foundation (FIOCRUZ) and the Brazilian Health Surveillance Agency (ANVISA) instigated the establishment of the Brazilian Center for the Validation of Alternative Methods (BraCVAM), which was officially launched in 2013. The Brazilian validation process follows OECD Guidance Document No. 34, where BraCVAM functions as the focal point to identify and/or receive requests from parties interested in submitting tests for validation. BraCVAM then informs the Brazilian National Network on Alternative Methods (RENaMA) of promising assays, which helps with prioritisation and contributes to the validation studies of selected assays. A Validation Management Group supervises the validation study, and the results obtained are peer-reviewed by an ad hoc Scientific Review Committee, organised under the auspices of BraCVAM. Based on the peer-review outcome, BraCVAM will prepare recommendations on the validated test method, which will be sent to the National Council for the Control of Animal Experimentation (CONCEA). CONCEA is in charge of the regulatory adoption of all validated test methods in Brazil, following an open public consultation. 2016 FRAME.
VALUE - A Framework to Validate Downscaling Approaches for Climate Change Studies
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Widmann, Martin; Gutiérrez, José M.; Kotlarski, Sven; Chandler, Richard E.; Hertig, Elke; Wibig, Joanna; Huth, Radan; Wilcke, Renate A. I.
2015-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research. VALUE aims to foster collaboration and knowledge exchange between climatologists, impact modellers, statisticians, and stakeholders to establish an interdisciplinary downscaling community. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. Here, we present the key ingredients of this framework. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur: what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Do methods fail in representing regional climate change? How is the overall representation of regional climate, including errors inherited from global climate models? The framework will be the basis for a comprehensive community-open downscaling intercomparison study, but is intended also to provide general guidance for other validation studies.
VALUE: A framework to validate downscaling approaches for climate change studies
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Widmann, Martin; Gutiérrez, José M.; Kotlarski, Sven; Chandler, Richard E.; Hertig, Elke; Wibig, Joanna; Huth, Radan; Wilcke, Renate A. I.
2015-01-01
VALUE is an open European network to validate and compare downscaling methods for climate change research. VALUE aims to foster collaboration and knowledge exchange between climatologists, impact modellers, statisticians, and stakeholders to establish an interdisciplinary downscaling community. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. In this paper, we present the key ingredients of this framework. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur: what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Do methods fail in representing regional climate change? How is the overall representation of regional climate, including errors inherited from global climate models? The framework will be the basis for a comprehensive community-open downscaling intercomparison study, but is intended also to provide general guidance for other validation studies.
Probability of Detection (POD) as a statistical model for the validation of qualitative methods.
Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T
2011-01-01
A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
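A hedged sketch of the core of a POD analysis as the abstract describes it: the probability that a qualitative method reports "detected" is modelled as a continuous function of (log) concentration. The detection counts, spike levels, and replicate numbers below are invented, not data from the paper.

```python
# Sketch: fit a Probability of Detection (POD) curve by binomial regression.
import numpy as np
import statsmodels.api as sm

conc = np.array([0.5, 1, 2, 5, 10, 20])          # spike level, e.g. CFU/25 g (assumed)
n_trials = np.full(6, 20)                        # replicates tested per level
n_detected = np.array([2, 6, 12, 18, 20, 20])    # positives reported per level

X = sm.add_constant(np.log10(conc))
pod_model = sm.GLM(np.column_stack([n_detected, n_trials - n_detected]),
                   X, family=sm.families.Binomial()).fit()

b0, b1 = pod_model.params
pod_at_5 = pod_model.predict(np.array([[1.0, np.log10(5.0)]]))[0]
lod50 = 10 ** (-b0 / b1)                         # concentration where estimated POD = 0.5
print(f"POD at level 5: {pod_at_5:.2f}; POD = 0.5 at concentration {lod50:.2f}")
```

Comparing candidate and reference methods then amounts to comparing their fitted POD curves (e.g. the difference in POD, dPOD, with confidence intervals) across the concentration range.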
Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio
2007-12-01
Owing to fast or stepwise cuff deflation, or measuring at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, that is, two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods might not be sufficient to compensate for the suspicious accuracy in lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing immunity to LD and BPV with those of the current validation methods (methods 1 and 2). The validation accuracy of the above three methods was assessed in human participants [N=120, 45+/-15.3 years (mean+/-SD)]. An oscillometric automated monitor, Omron HEM-762, was used as the tested device. When compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting their higher reproducibility of validation. The SD1 by method 2 (P=0.004) significantly correlated with the participant's BP, supporting our hypothesis that the increased SD of device error by method 2 is at least partially caused by essential BPV. Method 3 showed a significantly (P=0.0044) smaller interparticipant SD of device error (SD2), suggesting its higher interparticipant consistency of validation. Among the methods of validation of the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and highest interparticipant consistency, can be proposed as being the most appropriate.
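A sketch of one plausible reading of the two summary statistics discussed above: SD1 as the intra-individual standard deviation of device error (device minus reference blood pressure) and SD2 as the between-participant spread of mean device error. The readings, sample sizes, and error model are simulated assumptions, not data from the study.

```python
# Sketch: intra-individual (SD1) and inter-participant (SD2) SD of device error.
import numpy as np

rng = np.random.default_rng(4)
n_subjects, n_pairs = 120, 3
reference = rng.normal(130, 15, (n_subjects, n_pairs))        # auscultatory reference, mmHg
device = reference + rng.normal(2, 4, (n_subjects, n_pairs))  # device reads ~2 mmHg high (assumed)

error = device - reference
sd1 = error.std(axis=1, ddof=1).mean()    # mean within-participant SD of device error
sd2 = error.mean(axis=1).std(ddof=1)      # SD of per-participant mean error across participants

print(f"SD1 (reproducibility of validation):  {sd1:.2f} mmHg")
print(f"SD2 (inter-participant consistency): {sd2:.2f} mmHg")
```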
Validation of an Instrument to Measure High School Students' Attitudes toward Fitness Testing
ERIC Educational Resources Information Center
Mercier, Kevin; Silverman, Stephen
2014-01-01
Purpose: The purpose of this investigation was to develop an instrument that has scores that are valid and reliable for measuring students' attitudes toward fitness testing. Method: The method involved the following steps: (a) an elicitation study, (b) item development, (c) a pilot study, and (d) a validation study. The pilot study included 427…
Gourmelon, Anne; Delrue, Nathalie
Ten years have elapsed since the OECD published the Guidance Document on the Validation and International Regulatory Acceptance of Test Methods for Hazard Assessment. Much experience has been gained since then in validation centres, in countries and at the OECD on a variety of test methods that were subjected to validation studies. This chapter reviews validation principles and highlights common features that appear to be important for further regulatory acceptance across studies. Existing OECD-agreed validation principles will most likely remain generally relevant and applicable to address challenges associated with the validation of future test methods. Some adaptations may be needed to take into account the level of technology introduced in test systems, but demonstration of relevance and reliability will continue to play a central role as a prerequisite for regulatory acceptance. Demonstration of relevance will become more challenging for test methods that form part of a set of predictive tools and methods, and that do not stand alone. The OECD is keen on ensuring that while these concepts evolve, countries can continue to rely on valid methods and harmonised approaches for efficient testing and assessment of chemicals.
Yousuf, Naveed; Violato, Claudio; Zuberi, Rukhsana W
2015-01-01
CONSTRUCT: Authentic standard setting methods will demonstrate high convergent validity evidence of their outcomes, that is, cutoff scores and pass/fail decisions, with most other methods when compared with each other. The objective structured clinical examination (OSCE) was established for valid, reliable, and objective assessment of clinical skills in health professions education. Various standard setting methods have been proposed to identify objective, reliable, and valid cutoff scores on OSCEs. These methods may identify different cutoff scores for the same examinations. Identification of valid and reliable cutoff scores for OSCEs remains an important issue and a challenge. Thirty OSCE stations administered at least twice in the years 2010-2012 to 393 medical students in Years 2 and 3 at Aga Khan University are included. Psychometric properties of the scores are determined. Cutoff scores and pass/fail decisions of the Wijnen, Cohen, Mean-1.5SD, Mean-1SD, Angoff, borderline group, and borderline regression (BL-R) methods are compared with each other and with three variants of cluster analysis using repeated measures analysis of variance and Cohen's kappa. The mean psychometric indices on the 30 OSCE stations are reliability coefficient = 0.76 (SD = 0.12); standard error of measurement = 5.66 (SD = 1.38); coefficient of determination = 0.47 (SD = 0.19); and intergrade discrimination = 7.19 (SD = 1.89). The BL-R and Wijnen methods show the highest convergent validity evidence on the defined criteria among the compared methods. The Angoff and Mean-1.5SD methods demonstrated the least convergent validity evidence. The three cluster variants showed substantial convergent validity with the borderline methods. Although there was a high level of convergent validity for the Wijnen method, it lacks the theoretical strength to be used for competency-based assessments. The BL-R method showed the highest convergent validity evidence with the other standard setting methods used in the present study. We also found that cluster analysis using the mean method can be used for quality assurance of the borderline methods. These findings should be further confirmed by studies in other settings.
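A minimal sketch of the borderline regression (BL-R) idea referred to above: station checklist scores are regressed on examiners' global grades, and the predicted score at the "borderline" grade becomes the cut-off. The grades, scores, and grade coding are invented assumptions, not data from the study.

```python
# Sketch: borderline regression standard setting for one OSCE station.
import numpy as np

# global grade per examinee: 0 = fail, 1 = borderline, 2 = pass, 3 = good (assumed coding)
global_grade = np.array([0, 1, 1, 2, 2, 2, 3, 3, 1, 2, 0, 3])
checklist = np.array([35, 48, 52, 60, 63, 58, 72, 75, 50, 65, 40, 78])  # % checklist score

slope, intercept = np.polyfit(global_grade, checklist, 1)
cutoff = intercept + slope * 1          # predicted checklist score at the borderline grade
pass_fail = checklist >= cutoff

print(f"BL-R cut-off score: {cutoff:.1f}%")
print("pass rate:", round(pass_fail.mean(), 2))
```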
Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra
2018-02-01
The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances will be discussed which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), and matrix effects and recovery have to be approached differently. Particular attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.
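One concrete reason conventional validation instructions break down for endogenous analytes is that no analyte-free blank matrix exists for calibration. A common workaround, shown in the hedged sketch below, is standard-addition calibration, where the endogenous concentration is estimated from the x-intercept of responses measured after spiking known amounts into the sample itself. The spike levels and instrument responses are invented.

```python
# Sketch: standard-addition calibration for an analyte with an endogenous background.
import numpy as np

added = np.array([0.0, 5.0, 10.0, 20.0, 40.0])               # spiked concentration, ng/ml (assumed)
response = np.array([410.0, 615.0, 810.0, 1220.0, 2010.0])   # peak-area ratio vs internal standard (assumed)

slope, intercept = np.polyfit(added, response, 1)
endogenous = intercept / slope        # |x-intercept| = estimated endogenous concentration
print(f"estimated endogenous concentration: {endogenous:.1f} ng/ml")
```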
Reliability and validity of the AutoCAD software method in lumbar lordosis measurement
Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe
2011-01-01
Objective The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Methods Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken on all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via AutoCAD software and flexible ruler methods. The current study is accomplished in 2 parts: intratester and intertester evaluations of reliability as well as the validity of the flexible ruler and software methods. Results Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. Conclusions AutoCAD showed to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis. PMID:22654681
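As a hedged illustration of the reliability and validity statistics reported above, the sketch below computes a two-way random-effects ICC(2,1) from repeated measurements and a simple correlation against a "Cobb angle" reference. The simulated lordosis values and error magnitudes are assumptions; the ICC formula follows the standard Shrout and Fleiss definition.

```python
# Sketch: ICC(2,1) reliability and criterion validity for a lordosis measurement method.
import numpy as np

rng = np.random.default_rng(3)
true_lordosis = rng.normal(40, 8, 50)                 # radiographic Cobb angle stand-in
autocad_1 = true_lordosis + rng.normal(0, 1.5, 50)    # first AutoCAD reading (simulated)
autocad_2 = true_lordosis + rng.normal(0, 1.5, 50)    # repeated AutoCAD reading (simulated)

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement."""
    ratings = np.asarray(ratings, dtype=float)        # shape (subjects, raters/trials)
    n, k = ratings.shape
    grand = ratings.mean()
    ms_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2) / (k - 1)
    sse = np.sum((ratings - ratings.mean(axis=1, keepdims=True)
                  - ratings.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

print("intra-rater ICC:", round(icc_2_1(np.column_stack([autocad_1, autocad_2])), 3))
print("validity vs Cobb (Pearson r):", round(np.corrcoef(autocad_1, true_lordosis)[0, 1], 3))
```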
Practical Aspects of Designing and Conducting Validation Studies Involving Multi-study Trials.
Coecke, Sandra; Bernasconi, Camilla; Bowe, Gerard; Bostroem, Ann-Charlotte; Burton, Julien; Cole, Thomas; Fortaner, Salvador; Gouliarmou, Varvara; Gray, Andrew; Griesinger, Claudius; Louhimies, Susanna; Gyves, Emilio Mendoza-de; Joossens, Elisabeth; Prinz, Maurits-Jan; Milcamps, Anne; Parissis, Nicholaos; Wilk-Zasadna, Iwona; Barroso, João; Desprez, Bertrand; Langezaal, Ingrid; Liska, Roman; Morath, Siegfried; Reina, Vittorio; Zorzoli, Chiara; Zuang, Valérie
This chapter focuses on practical aspects of conducting prospective in vitro validation studies, and in particular, by laboratories that are members of the European Union Network of Laboratories for the Validation of Alternative Methods (EU-NETVAL) that is coordinated by the EU Reference Laboratory for Alternatives to Animal Testing (EURL ECVAM). Prospective validation studies involving EU-NETVAL, comprising a multi-study trial involving several laboratories or "test facilities", typically consist of two main steps: (1) the design of the validation study by EURL ECVAM and (2) the execution of the multi-study trial by a number of qualified laboratories within EU-NETVAL, coordinated and supported by EURL ECVAM. The approach adopted in the conduct of these validation studies adheres to the principles described in the OECD Guidance Document on the Validation and International Acceptance of new or updated test methods for Hazard Assessment No. 34 (OECD 2005). The context and scope of conducting prospective in vitro validation studies is dealt with in Chap. 4 . Here we focus mainly on the processes followed to carry out a prospective validation of in vitro methods involving different laboratories with the ultimate aim of generating a dataset that can support a decision in relation to the possible development of an international test guideline (e.g. by the OECD) or the establishment of performance standards.
Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie
This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems, software up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods, non-testing approaches such as predictive computer models up to entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised on international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities. Validation of alternative methods is conducted through scientific studies assessing two key hypotheses, reliability and relevance of the test method for a given purpose. Relevance encapsulates the scientific basis of the test method, its capacity to predict adverse effects in the "target system" (i.e. human health or the environment) as well as its applicability for the intended purpose. In this chapter we focus on the validation of non-animal in vitro alternative testing methods and review the concepts, challenges, processes and tools fundamental to the validation of in vitro methods intended for hazard testing of chemicals. We explore major challenges and peculiarities of validation in this area. Based on the notion that validation per se is a scientific endeavour that needs to adhere to key scientific principles, namely objectivity and appropriate choice of methodology, we examine basic aspects of study design and management, and provide illustrations of statistical approaches to describe predictive performance of validated test methods as well as their reliability.
Johnston, Marie; Dixon, Diane; Hart, Jo; Glidewell, Liz; Schröder, Carin; Pollard, Beth
2014-05-01
In studies involving theoretical constructs, it is important that measures have good content validity and that there is not contamination of measures by content from other constructs. While reliability and construct validity are routinely reported, to date, there has not been a satisfactory, transparent, and systematic method of assessing and reporting content validity. In this paper, we describe a methodology of discriminant content validity (DCV) and illustrate its application in three studies. Discriminant content validity involves six steps: construct definition, item selection, judge identification, judgement format, single-sample test of content validity, and assessment of discriminant items. In three studies, these steps were applied to a measure of illness perceptions (IPQ-R) and control cognitions. The IPQ-R performed well with most items being purely related to their target construct, although timeline and consequences had small problems. By contrast, the study of control cognitions identified problems in measuring constructs independently. In the final study, direct estimation response formats for theory of planned behaviour constructs were found to have as good DCV as Likert format. The DCV method allowed quantitative assessment of each item and can therefore inform the content validity of the measures assessed. The methods can be applied to assess content validity before or after collecting data to select the appropriate items to measure theoretical constructs. Further, the data reported for each item in Appendix S1 can be used in item or measure selection. Statement of contribution What is already known on this subject? There are agreed methods of assessing and reporting construct validity of measures of theoretical constructs, but not their content validity. Content validity is rarely reported in a systematic and transparent manner. What does this study add? The paper proposes discriminant content validity (DCV), a systematic and transparent method of assessing and reporting whether items assess the intended theoretical construct and only that construct. In three studies, DCV was applied to measures of illness perceptions, control cognitions, and theory of planned behaviour response formats. Appendix S1 gives content validity indices for each item of each questionnaire investigated. Discriminant content validity is ideally applied while the measure is being developed, before using to measure the construct(s), but can also be applied after using a measure. © 2014 The British Psychological Society.
ERIC Educational Resources Information Center
Rahman, Nurulhuda Abd; Masuwai, Azwani; Tajudin, Nor'ain Mohd; Tek, Ong Eng; Adnan, Mazlini
2016-01-01
Purpose: This study was aimed at establishing, through the validation of the "Teaching and Learning Guiding Principles Instrument" (TLGPI), the validity and reliability of the underlying factor structure of the Teaching and Learning Guiding Principles (TLGP) generated by a previous study. Method: A survey method was used to collect data…
Reliability and validity of the AutoCAD software method in lumbar lordosis measurement.
Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe
2011-12-01
The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken on all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via AutoCAD software and flexible ruler methods. The current study is accomplished in 2 parts: intratester and intertester evaluations of reliability as well as the validity of the flexible ruler and software methods. Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. AutoCAD showed to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis.
Pereira, Taísa Sabrina Silva; Cade, Nágela Valadão; Mill, José Geraldo; Sichieri, Rosely; Molina, Maria del Carmen Bisi
2016-01-01
Introduction Biomarkers are a good choice for use in the validation of food frequency questionnaires because of the independence of their random errors. Objective To assess the validity of the potassium and sodium intake estimated using the Food Frequency Questionnaire ELSA-Brasil. Subjects/Methods A subsample of participants in the ELSA-Brasil cohort was included in this study in 2009. Sodium and potassium intake were estimated using three methods: a semi-quantitative food frequency questionnaire, 12-hour nocturnal urinary excretion, and three 24-hour food records. Correlation coefficients were calculated between the methods, and the validity coefficients were calculated using the method of triads. The 95% confidence intervals for the validity coefficients were estimated using bootstrap sampling. Exact and adjacent agreement and disagreement of the estimated sodium and potassium intake quintiles were compared among the three methods. Results The sample consisted of 246 participants, aged 53 ± 8 years, 52% women. Validity coefficients for sodium were weak (ρ(food frequency questionnaire, actual intake) = 0.37; ρ(biomarker, actual intake) = 0.21) to moderate (ρ(food records, actual intake) = 0.56). The validity coefficients were higher for potassium (ρ(food frequency questionnaire, actual intake) = 0.60; ρ(biomarker, actual intake) = 0.42; ρ(food records, actual intake) = 0.79). Conclusions The Food Frequency Questionnaire ELSA-Brasil showed good validity in estimating potassium intake in epidemiological studies. For sodium, validity was weak, likely due to the non-quantification of salt added to prepared food. PMID:28030625
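A hedged sketch of the method of triads with bootstrap confidence intervals, as used above: the validity coefficient of each instrument against unobservable "true" intake is derived from the three pairwise correlations between questionnaire, food records, and biomarker. The simulated intakes, error magnitudes, and sample size are assumptions, not the study's data.

```python
# Sketch: method-of-triads validity coefficient with a bootstrap 95% CI.
import numpy as np

rng = np.random.default_rng(7)
n = 246
true_k = rng.normal(3000, 600, n)                     # "true" potassium intake, mg/day (assumed)
ffq     = true_k * 0.6 + rng.normal(0, 700, n)        # food frequency questionnaire (assumed)
records = true_k * 0.8 + rng.normal(0, 400, n)        # 24-h food records (assumed)
urine   = true_k * 0.4 + rng.normal(0, 500, n)        # urinary biomarker (assumed)

def triad_validity(q, f, b):
    r_qf = np.corrcoef(q, f)[0, 1]
    r_qb = np.corrcoef(q, b)[0, 1]
    r_fb = np.corrcoef(f, b)[0, 1]
    return np.sqrt(r_qf * r_qb / r_fb)                # rho(questionnaire, true intake)

point = triad_validity(ffq, records, urine)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                       # resample participants with replacement
    boot.append(triad_validity(ffq[idx], records[idx], urine[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"rho_FFQ,true = {point:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```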
VALIDATION OF A METHOD FOR ESTIMATING LONG-TERM EXPOSURES BASED ON SHORT-TERM MEASUREMENTS
A method for estimating long-term exposures from short-term measurements is validated using data from a recent EPA study of exposure to fine particles. The method was developed a decade ago but long-term exposure data to validate it did not exist until recently. In this paper, data from repeated ...
Discriminant Validity Assessment: Use of Fornell & Larcker criterion versus HTMT Criterion
NASA Astrophysics Data System (ADS)
Hamid, M. R. Ab; Sami, W.; Mohmad Sidek, M. H.
2017-09-01
Assessment of discriminant validity is a must in any research that involves latent variables, for the prevention of multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method has emerged for assessing discriminant validity: the heterotrait-monotrait (HTMT) ratio of correlations. Therefore, this article presents the results of discriminant validity assessment using both methods. Data from a previous study, involving 429 respondents, were used for the empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia. From the analysis, convergent, divergent and discriminant validity were established and admissible using the Fornell and Larcker criterion. However, discriminant validity was an issue when the HTMT criterion was employed. This shows that the latent variables under study face the issue of multicollinearity and should be examined in further detail. It also implies that the HTMT criterion is a stringent measure that can detect a possible lack of discrimination among the latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in terms of discriminant validity and should be explored further.
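A hedged sketch of the two discriminant-validity checks named above, applied to two constructs measured by three items each: the Fornell-Larcker comparison of average variance extracted (AVE) against the squared inter-construct correlation, and the HTMT ratio of heterotrait to monotrait correlations. Item scores, loadings, and latent variables are simulated stand-ins; in practice these quantities come from the fitted measurement model of a PLS or CFA analysis.

```python
# Sketch: Fornell-Larcker criterion and HTMT ratio for two constructs (A and B).
import numpy as np

rng = np.random.default_rng(5)
n = 429
f1 = rng.normal(size=n)
f2 = 0.5 * f1 + rng.normal(size=n)                                      # correlated latent scores
A = np.column_stack([0.8 * f1 + rng.normal(0, 0.5, n) for _ in range(3)])  # items of construct A
B = np.column_stack([0.8 * f2 + rng.normal(0, 0.5, n) for _ in range(3)])  # items of construct B

# --- Fornell-Larcker: AVE of each construct vs squared inter-construct correlation
def ave(items, latent):
    loadings = [np.corrcoef(items[:, j], latent)[0, 1] for j in range(items.shape[1])]
    return np.mean(np.square(loadings))

r_ab = np.corrcoef(A.mean(axis=1), B.mean(axis=1))[0, 1]
print("Fornell-Larcker satisfied:", ave(A, f1) > r_ab**2 and ave(B, f2) > r_ab**2)

# --- HTMT: mean heterotrait correlation / geometric mean of monotrait correlations
def mean_offdiag(corr):
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu].mean()

hetero = np.corrcoef(A.T, B.T)[:3, 3:].mean()
htmt = hetero / np.sqrt(mean_offdiag(np.corrcoef(A.T)) * mean_offdiag(np.corrcoef(B.T)))
print("HTMT =", round(htmt, 3), "(common thresholds: < 0.85 or < 0.90)")
```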
Evaluating the Social Validity of the Early Start Denver Model: A Convergent Mixed Methods Study
ERIC Educational Resources Information Center
Ogilvie, Emily; McCrudden, Matthew T.
2017-01-01
An intervention has social validity to the extent that it is socially acceptable to participants and stakeholders. This pilot convergent mixed methods study evaluated parents' perceptions of the social validity of the Early Start Denver Model (ESDM), a naturalistic behavioral intervention for children with autism. It focused on whether the parents…
Prediction of adult height in girls: the Beunen-Malina-Freitas method.
Beunen, Gaston P; Malina, Robert M; Freitas, Duarte L; Thomis, Martine A; Maia, José A; Claessens, Albrecht L; Gouveia, Elvio R; Maes, Hermine H; Lefevre, Johan
2011-12-01
The purpose of this study was to validate and cross-validate the Beunen-Malina-Freitas method for non-invasive prediction of adult height in girls. A sample of 420 girls aged 10-15 years from the Madeira Growth Study were measured at yearly intervals and then 8 years later. Anthropometric dimensions (lengths, breadths, circumferences, and skinfolds) were measured; skeletal age was assessed using the Tanner-Whitehouse 3 method and menarcheal status (present or absent) was recorded. Adult height was measured and predicted using stepwise, forward, and maximum R² regression techniques. Multiple correlations, mean differences, standard errors of prediction, and error boundaries were calculated. A sample of the Leuven Longitudinal Twin Study was used to cross-validate the regressions. Age-specific coefficients of determination (R²) between predicted and measured adult height varied between 0.57 and 0.96, while standard errors of prediction varied between 1.1 and 3.9 cm. The cross-validation confirmed the validity of the Beunen-Malina-Freitas method in girls aged 12-15 years, but at lower ages the cross-validation was less consistent. We conclude that the Beunen-Malina-Freitas method is valid for the prediction of adult height in girls aged 12-15 years. It is applicable to European populations or populations of European ancestry.
An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle
NASA Astrophysics Data System (ADS)
Wang, Yue; Gao, Dan; Mao, Xuming
2018-03-01
A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that "requirement is the origin, design is the basis". So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method for the requirements are introduced in detail in this study, with the hope of providing experience for other civil jet product designs.
Miciak, Jeremy; Fletcher, Jack M.; Stuebing, Karla; Vaughn, Sharon; Tolar, Tammy D.
2014-01-01
Purpose Few empirical investigations have evaluated LD identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability and validity of two proposed PSW methods: the concordance/discordance method (C/DM) and cross battery assessment (XBA) method. Methods Cognitive assessment data for 139 adolescents demonstrating inadequate response to intervention was utilized to empirically classify participants as meeting or not meeting PSW LD identification criteria using the two approaches, permitting an analysis of: (1) LD identification rates; (2) agreement between methods; and (3) external validity. Results LD identification rates varied between the two methods depending upon the cut point for low achievement, with low agreement for LD identification decisions. Comparisons of groups that met and did not meet LD identification criteria on external academic variables were largely null, raising questions of external validity. Conclusions This study found low agreement and little evidence of validity for LD identification decisions based on PSW methods. An alternative may be to use multiple measures of academic achievement to guide intervention. PMID:24274155
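A small sketch of the agreement analysis described above: two PSW-based identification methods classify the same students as meeting or not meeting LD criteria, and agreement is summarised with Cohen's kappa alongside the raw identification rates. The simulated decisions and the assumed agreement rate are illustrative, not the study's data.

```python
# Sketch: agreement between two LD identification methods via Cohen's kappa.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(11)
cdm_decision = rng.integers(0, 2, 139)                        # concordance/discordance method (simulated)
xba_decision = np.where(rng.random(139) < 0.7, cdm_decision,  # agrees ~70% of the time (assumed)
                        1 - cdm_decision)

print("identification rate C/DM:", round(cdm_decision.mean(), 2))
print("identification rate XBA: ", round(xba_decision.mean(), 2))
print("Cohen's kappa:", round(cohen_kappa_score(cdm_decision, xba_decision), 2))
```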
Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Beevers, Carol; De Boeck, Marlies; Burlinson, Brian; Hobbs, Cheryl A; Kitamoto, Sachiko; Kraynak, Andrew R; McNamee, James; Nakagawa, Yuzuki; Pant, Kamala; Plappert-Helbig, Ulla; Priestley, Catherine; Takasawa, Hironao; Wada, Kunio; Wirnitzer, Uta; Asano, Norihide; Escobar, Patricia A; Lovell, David; Morita, Takeshi; Nakajima, Madoka; Ohno, Yasuo; Hayashi, Makoto
2015-07-01
The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this exercise was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The study protocol was optimized in the pre-validation studies, and then the definitive (4th phase) validation study was conducted in two steps. In the 1st step, assay reproducibility was confirmed among laboratories using four coded reference chemicals and the positive control ethyl methanesulfonate. In the 2nd step, the predictive capability was investigated using 40 coded chemicals with known genotoxic and carcinogenic activity (i.e., genotoxic carcinogens, genotoxic non-carcinogens, non-genotoxic carcinogens, and non-genotoxic non-carcinogens). Based on the results obtained, the in vivo comet assay is concluded to be highly capable of identifying genotoxic chemicals and therefore can serve as a reliable predictor of rodent carcinogenicity. Copyright © 2015 Elsevier B.V. All rights reserved.
[Selection of risk and diagnosis in diabetic polyneuropathy. Validation of method of new systems].
Jurado, Jerónimo; Caula, Jacinto; Pou i Torelló, Josep Maria
2006-06-30
In a previous study we developed a specific algorithm, the polyneuropathy selection method (PSM), with 4 parameters (age, HDL-C, HbA1c, and retinopathy), to select patients at risk of diabetic polyneuropathy (DPN). We also developed a simplified method for DPN diagnosis: outpatient polyneuropathy diagnosis (OPD), with 4 variables (symptoms and 3 objective tests). To confirm the validity of conventional tests for DPN diagnosis; to validate the discriminatory power of the PSM and the diagnostic value of OPD by evaluating their relationship to electrodiagnosis studies and objective clinical neurological assessment; and to evaluate the correlation of DPN and pro-inflammatory status. Cross-sectional, crossed association for PSM validation. Paired samples for OPD validation. Primary care in 3 counties. Random sample of 75 subjects from the type-2 diabetes census for PSM evaluation. Thirty DPN patients and 30 non-DPN patients (from 2 DM2 sub-groups in our earlier study) for OPD evaluation. The gold standard for DPN diagnosis will be studied by means of a clinical neurological study (symptoms, physical examination, and sensitivity tests) and electrodiagnosis studies (sensory and motor EMG). Risks of neuropathy, macroangiopathy and pro-inflammatory status (CRP, TNF soluble fraction and total TGF-beta1) will be studied in every subject. Electrodiagnosis studies should confirm the validity of conventional tests for DPN diagnosis. PSM and OPD will be valid methods for selecting patients at risk and diagnosing DPN. There will be a significant relationship between DPN and pro-inflammatory tests.
The Value of Qualitative Methods in Social Validity Research
ERIC Educational Resources Information Center
Leko, Melinda M.
2014-01-01
One quality indicator of intervention research is the extent to which the intervention has a high degree of social validity, or practicality. In this study, I drew on Wolf's framework for social validity and used qualitative methods to ascertain five middle schoolteachers' perceptions of the social validity of System 44®--a phonics-based reading…
ERIC Educational Resources Information Center
Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying
2011-01-01
Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
In 't Veld, P H; van der Laak, L F J; van Zon, M; Biesta-Peters, E G
2018-04-12
A method for the quantification of the Bacillus cereus emetic toxin (cereulide) was developed and validated. The method principle is based on LC-MS, as this is the most sensitive and specific method for cereulide; therefore, the study design is different from that of the microbiological methods validated under this mandate. As the method had to be developed, a two-stage validation study approach was used. The first stage (pre-study) focussed on the method applicability and the experience of the laboratories with the method. Based on the outcome of the pre-study and comments received during voting at CEN and ISO level, a final method was agreed to be used for the second stage, the (final) validation of the method. In the final (validation) study, samples of cooked rice (either artificially contaminated with cereulide or contaminated with B. cereus for production of cereulide in the rice) and 6 other food matrices (fried rice dish, cream pastry with chocolate, hotdog sausage, mini pancakes, vanilla custard and infant formula) were used. All these samples were spiked by the participating laboratories using standard solutions of cereulide supplied by the organising laboratory. The results of the study indicate that the method is fit for purpose. Repeatability values of 0.6 μg/kg were obtained at the low spike level (ca. 5 μg/kg) and 7 to 9.6 μg/kg at the high spike level (ca. 75 μg/kg). Reproducibility ranged from 0.6 to 0.9 μg/kg at the low spike level and from 8.7 to 14.5 μg/kg at the high spike level. Recovery from the spiked samples ranged from 96.5% for mini pancakes to 99.3% for the fried rice dish. Copyright © 2018. Published by Elsevier B.V.
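A hedged sketch of how repeatability (s_r) and reproducibility (s_R) standard deviations of the kind quoted above are typically derived from collaborative-trial data via one-way ANOVA (ISO 5725 style). The cereulide results (µg/kg, duplicate analyses per laboratory at one spike level) are invented, as are the laboratory labels.

```python
# Sketch: repeatability and reproducibility SDs from interlaboratory duplicate data.
import numpy as np

results = {                      # lab -> duplicate results at one spike level (assumed values)
    "lab1": [74.1, 76.0], "lab2": [70.5, 72.2], "lab3": [78.9, 80.3],
    "lab4": [73.0, 71.8], "lab5": [76.5, 75.1], "lab6": [69.8, 71.0],
}
data = np.array(list(results.values()), dtype=float)    # shape (p labs, n replicates)
p, n = data.shape

ms_within = np.mean(data.var(axis=1, ddof=1))            # pooled within-lab variance
ms_between = n * data.mean(axis=1).var(ddof=1)           # between-lab mean square
s_r = np.sqrt(ms_within)                                 # repeatability SD
s_L2 = max((ms_between - ms_within) / n, 0.0)            # between-lab variance component
s_R = np.sqrt(s_r**2 + s_L2)                             # reproducibility SD

print(f"s_r = {s_r:.2f} ug/kg, s_R = {s_R:.2f} ug/kg")
```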
Patterson, Fiona; Lopes, Safiatu; Harding, Stephen; Vaux, Emma; Berkin, Liz; Black, David
2017-02-01
The aim of this study was to follow up a sample of physicians who began core medical training (CMT) in 2009. This paper examines the long-term validity of CMT and GP selection methods in predicting performance in the Membership of Royal College of Physicians (MRCP(UK)) examinations. We performed a longitudinal study, examining the extent to which the GP and CMT selection methods (T1) predict performance in the MRCP(UK) examinations (T2). A total of 2,569 applicants from 2008-09 who completed CMT and GP selection methods were included in the study. Looking at MRCP(UK) part 1, part 2 written and PACES scores, both CMT and GP selection methods show evidence of predictive validity for the outcome variables, and hierarchical regressions show the GP methods add significant value to the CMT selection process. CMT selection methods predict performance in important outcomes and have good evidence of validity; the GP methods may have an additional role alongside the CMT selection methods. © Royal College of Physicians 2017. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-10
... validation studies. NICEATM and ICCVAM work collaboratively to evaluate new and improved test methods... for nomination of test methods for validation studies, and guidelines for submission of test methods... for human and veterinary vaccine post-licensing potency and safety testing. Plenary and breakout...
Rosing, H.; Hillebrand, M. J. X.; Blesson, S.; Mengesha, B.; Diro, E.; Hailu, A.; Schellens, J. H. M.; Beijnen, J. H.
2016-01-01
To facilitate future pharmacokinetic studies of combination treatments against leishmaniasis in remote regions in which the disease is endemic, a simple cheap sampling method is required for miltefosine quantification. The aims of this study were to validate a liquid chromatography-tandem mass spectrometry method to quantify miltefosine in dried blood spot (DBS) samples and to validate its use with Ethiopian patients with visceral leishmaniasis (VL). Since hematocrit (Ht) levels are typically severely decreased in VL patients, returning to normal during treatment, the method was evaluated over a range of clinically relevant Ht values. Miltefosine was extracted from DBS samples using a simple method of pretreatment with methanol, resulting in >97% recovery. The method was validated over a calibration range of 10 to 2,000 ng/ml, and accuracy and precision were within ±11.2% and ≤7.0% (≤19.1% at the lower limit of quantification), respectively. The method was accurate and precise for blood spot volumes between 10 and 30 μl and for Ht levels of 20 to 35%, although a linear effect of Ht levels on miltefosine quantification was observed in the bioanalytical validation. DBS samples were stable for at least 162 days at 37°C. Clinical validation of the method using paired DBS and plasma samples from 16 VL patients showed a median observed DBS/plasma miltefosine concentration ratio of 0.99, with good correlation (Pearson's r = 0.946). Correcting for patient-specific Ht levels did not further improve the concordance between the sampling methods. This successfully validated method to quantify miltefosine in DBS samples was demonstrated to be a valid and practical alternative to venous blood sampling that can be applied in future miltefosine pharmacokinetic studies with leishmaniasis patients, without Ht correction. PMID:26787691
Validation of asthma recording in electronic health records: a systematic review
Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J
2017-01-01
Objective To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion Attaining high PPVs (>80%) is possible using each of the discussed validation methods. Identifying asthma cases in electronic health records is possible with high sensitivity, specificity or PPV, by combining multiple data sources, or by focusing on specific test measures. Studies testing a range of case definitions show wide variation in the validity of each definition, suggesting this may be important for obtaining asthma definitions with optimal validity. PMID:29238227
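As a hedged illustration of the summary statistics reported in reviews like this one, the short Python sketch below derives sensitivity, specificity, PPV and NPV from a 2x2 comparison of an EHR case definition against a reference standard. The counts and the function name are invented for the example and are not taken from the review.

```python
def validation_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV for an EHR case definition
    compared against a reference standard (hypothetical counts)."""
    return {
        "sensitivity": tp / (tp + fn),  # reference-positive patients the definition flags
        "specificity": tn / (tn + fp),  # reference-negative patients it correctly ignores
        "ppv": tp / (tp + fp),          # flagged patients confirmed by the reference standard
        "npv": tn / (tn + fn),          # unflagged patients confirmed negative
    }

# Hypothetical 2x2 counts: 178 true positives, 22 false positives,
# 35 false negatives, 765 true negatives.
print(validation_stats(tp=178, fp=22, fn=35, tn=765))
```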
The construct validity of session RPE during an intensive camp in young male Karate athletes.
Padulo, Johnny; Chaabène, Helmi; Tabben, Montassar; Haddad, Monoem; Gevat, Cecilia; Vando, Stefano; Maurino, Lucio; Chaouachi, Anis; Chamari, Karim
2014-04-01
The aim of this study was to assess the validity of the session rating of perceived exertion (RPE) method and two objective HR-based methods for quantifying karate training load (TL) in young karatekas. Eleven athletes (age 12.50±1.84 years) participated in this study. The training period/camp was performed on 5 consecutive days with two training sessions (s) per day (d). Construct validity of the RPE method in young karate athletes was studied by correlation analysis between the session-RPE training load and both the Edwards and Banister training impulse scores. A significant relationship was found between the inter-day (n: 11 athletes × 5 days × 2 sessions = 110 sessions) session-RPE and both the Edwards (r values from 0.84 to 0.92, p < 0.001) and Banister (r values from 0.84 to 0.97, p < 0.001) scores. This study showed that session-RPE can be considered a valid method for quantifying training load in young karate athletes.
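For readers unfamiliar with the training-load measures being correlated, the sketch below computes Foster's session-RPE load together with commonly used forms of the Edwards and Banister heart-rate-based loads, then correlates them across a few invented sessions. The session data, resting and maximal heart rates and zone times are hypothetical, and the zone weightings and TRIMP constants are the commonly published ones, not values taken from this paper.

```python
import math
import numpy as np

def session_rpe_load(rpe_cr10, duration_min):
    # Foster's session-RPE load: CR-10 rating x session duration (minutes)
    return rpe_cr10 * duration_min

def edwards_load(minutes_in_zone):
    # Edwards' method: minutes in the five HR zones (50-60% ... 90-100% HRmax)
    # weighted 1..5 and summed
    return sum(m * w for m, w in zip(minutes_in_zone, (1, 2, 3, 4, 5)))

def banister_trimp(duration_min, hr_avg, hr_rest, hr_max, male=True):
    # Banister's training impulse using the commonly published constants
    delta = (hr_avg - hr_rest) / (hr_max - hr_rest)
    return duration_min * delta * 0.64 * math.exp((1.92 if male else 1.67) * delta)

# Hypothetical sessions: (CR-10 RPE, duration in min, minutes per HR zone, mean HR)
sessions = [(6, 90, (10, 20, 30, 20, 10), 165),
            (4, 75, (20, 25, 20, 8, 2), 150),
            (8, 95, (5, 10, 25, 35, 20), 175)]
rpe_tl = [session_rpe_load(r, d) for r, d, _, _ in sessions]
edw_tl = [edwards_load(z) for _, _, z, _ in sessions]
ban_tl = [banister_trimp(d, hr, hr_rest=60, hr_max=200) for _, d, _, hr in sessions]
print(np.corrcoef(rpe_tl, edw_tl)[0, 1], np.corrcoef(rpe_tl, ban_tl)[0, 1])
```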
Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph
2015-05-22
When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by the means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results of all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space rather than a given condition for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved. Copyright © 2015 Elsevier B.V. All rights reserved.
Fachi, Mariana Millan; Leonart, Letícia Paula; Cerqueira, Letícia Bonancio; Pontes, Flavia Lada Degaut; de Campos, Michel Leandro; Pontarolo, Roberto
2017-06-15
A systematic and critical review was conducted on bioanalytical methods validated to quantify combinations of antidiabetic agents in human blood. The aim of this article was to verify how the validation process of bioanalytical methods is performed and the quality of the published records. The validation assays were evaluated according to international guidelines. The main problems in the validation process are pointed out and discussed to help researchers to choose methods that are truly reliable and can be successfully applied for their intended use. The combination of oral antidiabetic agents was chosen as these are some of the most studied drugs and several methods are present in the literature. Moreover, this article may be applied to the validation process of all bioanalytical methods. Copyright © 2017 Elsevier B.V. All rights reserved.
Pereira, Taísa Sabrina Silva; Cade, Nágela Valadão; Mill, José Geraldo; Sichieri, Rosely; Molina, Maria Del Carmen Bisi
2016-01-01
Biomarkers are a good choice for use in the validation of food frequency questionnaires because of the independence of their random errors. The objective was to assess the validity of the potassium and sodium intake estimated using the Food Frequency Questionnaire ELSA-Brasil. A subsample of participants in the ELSA-Brasil cohort was included in this study in 2009. Sodium and potassium intake were estimated using three methods: a semi-quantitative food frequency questionnaire, 12-hour nocturnal urinary excretion, and three 24-hour food records. Correlation coefficients were calculated between the methods, and the validity coefficients were calculated using the method of triads. The 95% confidence intervals for the validity coefficients were estimated using bootstrap sampling. Exact and adjacent agreement and disagreement of the estimated sodium and potassium intake quintiles were compared among the three methods. The sample consisted of 246 participants, aged 53±8 years, 52% women. The validity coefficients for sodium were weak for the food frequency questionnaire (ρ(FFQ, actual intake) = 0.37) and the biomarker (ρ(biomarker, actual intake) = 0.21) and moderate for the food records (ρ(food records, actual intake) = 0.56). The validity coefficients were higher for potassium (ρ(FFQ, actual intake) = 0.60; ρ(biomarker, actual intake) = 0.42; ρ(food records, actual intake) = 0.79). Conclusions: The Food Frequency Questionnaire ELSA-Brasil showed good validity for estimating potassium intake in epidemiological studies. For sodium, validity was weak, likely due to the non-quantification of salt added to prepared food.
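A minimal sketch of the method of triads used here, assuming `q`, `r` and `m` are NumPy arrays holding matched individual-level estimates from the questionnaire, the food records and the biomarker. The function names and the bootstrap routine are illustrative, not code from the ELSA-Brasil study.

```python
import numpy as np

def triads_validity(q, r, m):
    """Method of triads: validity coefficients of each instrument against
    the (unobserved) true intake, from the three pairwise correlations."""
    r_qr = np.corrcoef(q, r)[0, 1]   # questionnaire vs food records
    r_qm = np.corrcoef(q, m)[0, 1]   # questionnaire vs biomarker
    r_rm = np.corrcoef(r, m)[0, 1]   # food records vs biomarker
    rho_q = np.sqrt(r_qr * r_qm / r_rm)   # FFQ vs true intake
    rho_r = np.sqrt(r_qr * r_rm / r_qm)   # food records vs true intake
    rho_m = np.sqrt(r_qm * r_rm / r_qr)   # biomarker vs true intake
    return rho_q, rho_r, rho_m

def bootstrap_ci(q, r, m, n_boot=2000, seed=0):
    # Percentile bootstrap 95% CI for the three validity coefficients
    rng = np.random.default_rng(seed)
    est = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(q), len(q))
        est.append(triads_validity(q[idx], r[idx], m[idx]))
    return np.percentile(est, [2.5, 97.5], axis=0)
```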
Henry, Teresa R; Penn, Lara D; Conerty, Jason R; Wright, Francesca E; Gorman, Gregory; Pack, Brian W
2016-11-01
Non-clinical dose formulations (also known as pre-clinical or GLP formulations) play a key role in early drug development. These formulations are used to introduce active pharmaceutical ingredients (APIs) into test organisms for both pharmacokinetic and toxicological studies. Since these studies are ultimately used to support dose and safety ranges in human studies, it is important not only to understand the concentration and PK/PD of the active ingredient but also to generate safety data for likely process impurities and degradation products of the active ingredient. As such, many in the industry have chosen to develop and validate methods which can accurately detect and quantify the active ingredient along with impurities and degradation products. Such methods often provide trendable results which are predictive of stability, thus leading to the name: stability-indicating methods. This document provides an overview of best practices for those choosing to include development and validation of such methods as part of their non-clinical drug development program. It is intended to support teams who are either new to stability-indicating method development and validation or who are less familiar with the requirements of validation due to their position within the product development life cycle.
Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.
Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan
2011-11-01
When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
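The instability described above can be reproduced with a hedged sketch: scikit-learn provides no SCAD or Adaptive Lasso, so a plain cross-validated Lasso stands in, but the point — that the number of selected variables swings with the random 10-fold split when signals are weak — is the same. The simulated data, effect sizes and seeds are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n, p, n_causal = 200, 500, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:n_causal] = 0.2                      # sparse but weak, SNP-like signals
y = X @ beta + rng.standard_normal(n)

selected = []
for seed in range(20):                     # repeat 10-fold CV with different random splits
    cv = KFold(n_splits=10, shuffle=True, random_state=seed)
    fit = LassoCV(cv=cv, max_iter=20000).fit(X, y)
    selected.append(int(np.sum(fit.coef_ != 0)))
print("variables selected across CV repeats:", selected)
```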
The construct validity of session RPE during an intensive camp in young male Karate athletes
Padulo, Johnny; Chaabène, Helmi; Tabben, Montassar; Haddad, Monoem; Gevat, Cecilia; Vando, Stefano; Maurino, Lucio; Chaouachi, Anis; Chamari, Karim
2014-01-01
Summary Background: The aim of this study was to assess the validity of the session rating of perceived exertion (RPE) method and two objective HR-based methods for quantifying karate training load (TL) in young karatekas. Methods: Eleven athletes (age 12.50±1.84 years) participated in this study. The training period/camp was performed on 5 consecutive days with two training sessions (s) per day (d). Construct validity of the RPE method in young karate athletes was studied by correlation analysis between the session-RPE training load and both the Edwards and Banister training impulse scores. Results: A significant relationship was found between the inter-day (n: 11 athletes × 5 days × 2 sessions = 110 sessions) session-RPE and both the Edwards (r values from 0.84 to 0.92, p < 0.001) and Banister (r values from 0.84 to 0.97, p < 0.001) scores. Conclusion: This study showed that session-RPE can be considered a valid method for quantifying training load in young karate athletes. PMID:25332921
A long-term validation of the modernised DC-ARC-OES solid-sample method.
Flórián, K; Hassler, J; Förster, O
2001-12-01
The validation procedure based on the ISO 17025 standard has been used to study and illustrate both the long-term stability of the calibration process of the DC-ARC solid-sample spectrometric method and the main validation criteria of the method. In the calculation of the validation characteristics that depend on the linearity (calibration), the fulfilment of prerequisite criteria such as normality and homoscedasticity was also checked. To decide whether there are any trends in the time-variation of the analytical signal, the Neumann trend test was also applied and evaluated. Finally, a comparison with similar validation data of the ETV-ICP-OES method was carried out.
Barrero, Lope H; Katz, Jeffrey N; Dennerlein, Jack T
2012-01-01
Objectives To describe the relation of the measured validity of self-reported mechanical demands (self-reports) to the quality of validity assessments and the variability of the assessed exposure in the study population. Methods We searched for original articles, published between 1990 and 2008, reporting the validity of self-reports in three major databases: EBSCOhost, Web of Science, and PubMed. Identified assessments were classified by methodological characteristics (eg, type of self-report and reference method) and by the exposure dimension measured. We also classified assessments by the degree of comparability between the self-report and the employed reference method, and by the variability of the assessed exposure in the study population. Finally, we examined the association of the published validity (r) with this degree of comparability, as well as with the variability of the exposure variable in the study population. Results Of the 490 assessments identified, 75% used observation-based reference measures and 55% tested self-reports of posture duration and movement frequency. Frequently, validity studies did not report demographic information (eg, education, age, and gender distribution). Among assessments reporting correlations as a measure of validity, studies with a better match between the self-report and the reference method, and studies conducted in more heterogeneous populations, tended to report higher correlations [odds ratio (OR) 2.03, 95% confidence interval (95% CI) 0.89–4.65 and OR 1.60, 95% CI 0.96–2.61, respectively]. Conclusions The reported data support the hypothesis that validity depends on study-specific factors often not examined. Experimentally manipulating the testing setting could lead to a better understanding of the capabilities and limitations of self-reported information. PMID:19562235
Ruuska, Salla; Hämäläinen, Wilhelmiina; Kajava, Sari; Mughal, Mikaela; Matilainen, Pekka; Mononen, Jaakko
2018-03-01
The aim of the present study was to evaluate empirically confusion matrices in device validation. We compared the confusion matrix method to linear regression and error indices in the validation of a device measuring feeding behaviour of dairy cattle. In addition, we studied how to extract additional information on classification errors with confusion probabilities. The data consisted of 12 h behaviour measurements from five dairy cows; feeding and other behaviour were detected simultaneously with a device and from video recordings. The resulting 216 000 pairs of classifications were used to construct confusion matrices and calculate performance measures. In addition, hourly durations of each behaviour were calculated and the accuracy of measurements was evaluated with linear regression and error indices. All three validation methods agreed when the behaviour was detected very accurately or inaccurately. Otherwise, in the intermediate cases, the confusion matrix method and error indices produced relatively concordant results, but the linear regression method often disagreed with them. Our study supports the use of confusion matrix analysis in validation since it is robust to any data distribution and type of relationship, it makes a stringent evaluation of validity, and it offers extra information on the type and sources of errors. Copyright © 2018 Elsevier B.V. All rights reserved.
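A small sketch of the confusion-matrix approach, assuming paired per-second behaviour labels from the device and from the video reference. The example labels are invented, and the derived measures (sensitivity, specificity, accuracy and one confusion probability) are standard ones rather than the exact set reported in the paper.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical paired per-second labels: 1 = feeding, 0 = other behaviour
video  = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 0])   # reference (video recordings)
device = np.array([1, 0, 0, 0, 1, 1, 1, 1, 0, 0])   # device classification

tn, fp, fn, tp = confusion_matrix(video, device, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)            # feeding the device detected
specificity = tn / (tn + fp)            # other behaviour correctly rejected
accuracy = (tp + tn) / len(video)
miss_rate = fn / (tp + fn)              # confusion probability: feeding classed as 'other'
print(sensitivity, specificity, accuracy, miss_rate)
```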
An empirical assessment of validation practices for molecular classifiers
Castaldi, Peter J.; Dahabreh, Issa J.
2011-01-01
Proposed molecular classifiers may be overfit to idiosyncrasies of noisy genomic and proteomic data. Cross-validation methods are often used to obtain estimates of classification accuracy, but both simulations and case studies suggest that, when inappropriate methods are used, bias may ensue. Bias can be bypassed and generalizability can be tested by external (independent) validation. We evaluated 35 studies that have reported on external validation of a molecular classifier. We extracted information on study design and methodological features, and compared the performance of molecular classifiers in internal cross-validation versus external validation for 28 studies where both had been performed. We demonstrate that the majority of studies pursued cross-validation practices that are likely to overestimate classifier performance. Most studies were markedly underpowered to detect a 20% decrease in sensitivity or specificity between internal cross-validation and external validation [median power was 36% (IQR, 21–61%) and 29% (IQR, 15–65%), respectively]. The median reported classification performance for sensitivity and specificity was 94% and 98%, respectively, in cross-validation and 88% and 81% for independent validation. The relative diagnostic odds ratio was 3.26 (95% CI 2.04–5.21) for cross-validation versus independent validation. Finally, we reviewed all studies (n = 758) which cited those in our study sample, and identified only one instance of additional subsequent independent validation of these classifiers. In conclusion, these results document that many cross-validation practices employed in the literature are potentially biased and genuine progress in this field will require adoption of routine external validation of molecular classifiers, preferably in much larger studies than in current practice. PMID:21300697
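The gap between internal cross-validation and external validation can be illustrated with a hedged sketch: a penalized classifier is tuned and scored by cross-validation on a "development" cohort and then scored once on a separately simulated "external" cohort. The data generator, classifier and settings are arbitrary stand-ins, not the classifiers reviewed above.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical 'development' and 'external' cohorts simulated with different
# seeds so the external data do not share the development set's idiosyncrasies.
X_dev, y_dev = make_classification(n_samples=150, n_features=200,
                                   n_informative=10, random_state=0)
X_ext, y_ext = make_classification(n_samples=100, n_features=200,
                                   n_informative=10, random_state=1)

clf = LogisticRegression(C=0.1, max_iter=5000)                # penalized stand-in classifier
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
internal = cross_val_score(clf, X_dev, y_dev, cv=cv).mean()   # internal CV accuracy
external = clf.fit(X_dev, y_dev).score(X_ext, y_ext)          # accuracy on the external cohort
print(f"internal CV: {internal:.2f}   external: {external:.2f}")
```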
Content Validity of National Post Marriage Educational Program Using Mixed Methods
MOHAJER RAHBARI, Masoumeh; SHARIATI, Mohammad; KERAMAT, Afsaneh; YUNESIAN, Masoud; ESLAMI, Mohammad; MOUSAVI, Seyed Abbas; MONTAZERI, Ali
2015-01-01
Background: Although validation of program content is mostly conducted with qualitative methods, this study used both qualitative and quantitative methods to validate the content of the post marriage training program provided for newly married couples. Content validation is a preliminary step toward obtaining the authorization required to install the program in the country's health care system. Methods: This mixed-methods content validation study was carried out in four steps, forming three expert panels. Altogether 24 expert panelists were involved in the three qualitative and quantitative panels: 6 in the first, item-development panel; 12 in the item-reduction panel, 4 of whom also served on the first panel; and 10 executive experts in the last panel, organized to evaluate the psychometric properties (content validity ratio [CVR], content validity index [CVI] and face validity) of 57 educational objectives. Results: The raw content of the post marriage program had been written by professional experts of the Ministry of Health; using the qualitative expert panel, the content was further developed by generating 3 topics and refining one topic and its respective content. In the second panel, a total of six objectives were deleted: three for falling below the agreement cut-off point and three by experts' consensus. In the quantitative assessment, the validity of all items was above 0.8 and their content validity indices (0.8–1) were completely appropriate. Conclusion: This study provided good evidence for validation and accreditation of the national post marriage program planned for newly married couples in health centers of the country in the near future. PMID:26056672
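For reference, the two quantitative indices named above are simple to compute. The sketch below implements Lawshe's content validity ratio and an item-level content validity index with hypothetical panel ratings; the cut-offs follow common practice and are not necessarily the exact rules used in this study.

```python
def cvr(n_essential, n_experts):
    # Lawshe's content validity ratio, ranging from -1 to +1
    return (n_essential - n_experts / 2) / (n_experts / 2)

def i_cvi(relevance_ratings, cutoff=3):
    # Item-level content validity index on a 4-point relevance scale:
    # proportion of experts rating the item 3 ('relevant') or 4 ('highly relevant')
    return sum(r >= cutoff for r in relevance_ratings) / len(relevance_ratings)

# Hypothetical ratings from a 10-member panel for one educational objective
print(cvr(n_essential=9, n_experts=10))        # 0.8
print(i_cvi([4, 4, 3, 4, 3, 4, 4, 3, 4, 2]))   # 0.9
```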
ERIC Educational Resources Information Center
Luyt, Russell
2012-01-01
A framework for quantitative measurement development, validation, and revision that incorporates both qualitative and quantitative methods is introduced. It extends and adapts Adcock and Collier's work, and thus, facilitates understanding of quantitative measurement development, validation, and revision as an integrated and cyclical set of…
Aandstad, Anders; Holtberget, Kristian; Hageberg, Rune; Holme, Ingar; Anderssen, Sigmund A
2014-02-01
Previous studies show that body composition is related to injury risk and physical performance in soldiers. Thus, valid methods for measuring body composition in military personnel are needed. The frequently used body mass index method is not a valid measure of body composition in soldiers, but reliability and validity of alternative field methods are less investigated in military personnel. Thus, we carried out test and retest of skinfold (SKF), single frequency bioelectrical impedance analysis (SF-BIA), and multifrequency bioelectrical impedance analysis measurements in 65 male and female soldiers. Several validated equations were used to predict percent body fat from these methods. Dual-energy X-ray absorptiometry was also measured, and acted as the criterion method. Results showed that SF-BIA was the most reliable method in both genders. In women, SF-BIA was also the most valid method, whereas SKF or a combination of SKF and SF-BIA produced the highest validity in men. Reliability and validity varied substantially among the equations examined. The best methods and equations produced test-retest 95% limits of agreement below ±1% points, whereas the corresponding validity figures were ±3.5% points. Each investigator and practitioner must consider whether such measurement errors are acceptable for its specific use. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
Patel, Jayshree; Mulhall, Brian; Wolf, Heinz; Klohr, Steven; Guazzo, Dana Morton
2011-01-01
A leak test performed according to ASTM F2338-09 Standard Test Method for Nondestructive Detection of Leaks in Packages by Vacuum Decay Method was developed and validated for container-closure integrity verification of a lyophilized product in a parenteral vial package system. This nondestructive leak test method is intended for use in manufacturing as an in-process package integrity check, and for testing product stored on stability in lieu of sterility tests. Method development and optimization challenge studies incorporated artificially defective packages representing a range of glass vial wall and sealing surface defects, as well as various elastomeric stopper defects. Method validation required 3 days of random-order replicate testing of a test sample population of negative-control, no-defect packages and positive-control, with-defect packages. Positive-control packages were prepared using vials each with a single hole laser-drilled through the glass vial wall. Hole creation and hole size certification was performed by Lenox Laser. Validation study results successfully demonstrated the vacuum decay leak test method's ability to accurately and reliably detect packages with laser-drilled holes greater than or equal to approximately 5 μm in nominal diameter. Total test time is less than 1 min per package. All development and validation studies were performed at Whitehouse Analytical Laboratories in Whitehouse, NJ, under the direction of consultant Dana Guazzo of RxPax, LLC, using a VeriPac 455 Micro Leak Test System by Packaging Technologies & Inspection (Tuckahoe, NY). Bristol Myers Squibb (New Brunswick, NJ) fully subsidized all work.
Dehouck, P; Vander Heyden, Y; Smeyers-Verbeke, J; Massart, D L; Marini, R D; Chiap, P; Hubert, Ph; Crommen, J; Van de Wauw, W; De Beer, J; Cox, R; Mathieu, G; Reepmeyer, J C; Voigt, B; Estevenon, O; Nicolas, A; Van Schepdael, A; Adams, E; Hoogmartens, J
2003-08-22
Erythromycin is a mixture of macrolide antibiotics produced by Saccharopolyspora erythraea during fermentation. A new method for the analysis of erythromycin by liquid chromatography has previously been developed. It makes use of an Astec C18 polymeric column. After validation in one laboratory, the method was now validated in an interlaboratory study. Validation studies are commonly used to test the fitness of an analytical method prior to its use for routine quality testing. The data derived in the interlaboratory study can be used to make an uncertainty statement as well. The relationship between validation and the uncertainty statement is not clear to many analysts, and there is a need to show how the existing data, derived during validation, can be used in practice. Eight laboratories participated in this interlaboratory study. The set-up allowed the determination of the repeatability variance, s²r, and the between-laboratory variance, s²L. Combination of s²r and s²L results in the reproducibility variance, s²R. It has been shown how these data can be used in the future by a single laboratory that wants to make an uncertainty statement concerning the same analysis.
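A minimal sketch of how the repeatability, between-laboratory and reproducibility variances are obtained from a balanced interlaboratory design via one-way ANOVA. The laboratory results below are invented and the function is illustrative, not the computation reported in the paper.

```python
import numpy as np

def interlab_variances(results):
    """Balanced one-way ANOVA estimates of the repeatability variance (s2r),
    the between-laboratory variance (s2L) and the reproducibility variance
    (s2R = s2r + s2L). `results` maps laboratory -> list of replicate values."""
    labs = list(results.values())
    n = len(labs[0])                                   # replicates per lab (assumed equal)
    lab_means = np.array([np.mean(v) for v in labs])
    ms_within = np.mean([np.var(v, ddof=1) for v in labs])
    ms_between = n * np.sum((lab_means - lab_means.mean()) ** 2) / (len(labs) - 1)
    s2r = ms_within
    s2L = max((ms_between - ms_within) / n, 0.0)
    return s2r, s2L, s2r + s2L

# Hypothetical duplicate assay results (%) from four laboratories
data = {"lab1": [78.1, 78.4], "lab2": [77.6, 77.9],
        "lab3": [78.8, 78.5], "lab4": [77.9, 78.2]}
print(interlab_variances(data))
```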
ERIC Educational Resources Information Center
Dirlikov, Benjamin; Younes, Laurent; Nebel, Mary Beth; Martinelli, Mary Katherine; Tiedemann, Alyssa Nicole; Koch, Carolyn A.; Fiorilli, Diana; Bastian, Amy J.; Denckla, Martha Bridge; Miller, Michael I.; Mostofsky, Stewart H.
2017-01-01
This study presents construct validity for a novel automated morphometric and kinematic handwriting assessment, including (1) convergent validity, establishing reliability of automated measures with traditional manual-derived Minnesota Handwriting Assessment (MHA), and (2) discriminant validity, establishing that the automated methods distinguish…
Chang, Yuanhan; Tambe, Abhijit Anil; Maeda, Yoshinobu; Wada, Masahiro; Gonda, Tomoya
2018-03-08
A literature review of finite element analysis (FEA) studies of dental implants with their model validation process was performed to establish the criteria for evaluating validation methods with respect to their similarity to biological behavior. An electronic literature search of PubMed was conducted up to January 2017 using the Medical Subject Headings "dental implants" and "finite element analysis." After accessing the full texts, the context of each article was searched using the words "valid" and "validation" and articles in which these words appeared were read to determine whether they met the inclusion criteria for the review. Of 601 articles published from 1997 to 2016, 48 that met the eligibility criteria were selected. The articles were categorized according to their validation method as follows: in vivo experiments in humans (n = 1) and other animals (n = 3), model experiments (n = 32), others' clinical data and past literature (n = 9), and other software (n = 2). Validation techniques with a high level of sufficiency and efficiency are still rare in FEA studies of dental implants. High-level validation, especially using in vivo experiments tied to an accurate finite element method, needs to become an established part of FEA studies. The recognition of a validation process should be considered when judging the practicality of an FEA study.
Increased efficacy for in-house validation of real-time PCR GMO detection methods.
Scholtens, I M J; Kok, E J; Hougs, L; Molenaar, B; Thissen, J T N M; van der Voet, H
2010-03-01
To improve the efficacy of the in-house validation of GMO detection methods (DNA isolation and real-time PCR, polymerase chain reaction), a study was performed to gain insight into the contribution of the different steps of the GMO detection method to the repeatability and in-house reproducibility. In the present study, 19 methods for (GM) soy, maize, canola and potato were validated in-house, 14 of them on the basis of an 8-day validation scheme using eight different samples and five on the basis of a more concise validation protocol. In this way, data were obtained with respect to the detection limit, accuracy and precision. Also, decision limits were calculated for declaring non-conformance (>0.9%) with 95% reliability. In order to estimate the contribution of the different steps in the GMO analysis to the total variation, variance components were estimated using REML (residual maximum likelihood method). From these components, relative standard deviations for repeatability and reproducibility (RSD(r) and RSD(R)) were calculated. The results showed that not only the PCR reaction but also the factors 'DNA isolation' and 'PCR day' are important contributors to the total variance and should therefore be included in the in-house validation. It is proposed to use a statistical model to estimate these factors from a large dataset of initial validations so that, for similar GMO methods in the future, only the PCR step needs to be validated. The resulting data are discussed in the light of agreed European criteria for qualified GMO detection methods.
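As a hedged illustration of estimating variance components by REML, the sketch below fits a random-intercept model with statsmodels for a single grouping factor ('PCR day') and converts the components to relative standard deviations. The measurements are invented, and the real study modelled several factors (DNA isolation, PCR day) rather than this simplified single factor.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical GM content measurements (%) for one method: three replicates
# on each of five PCR days
df = pd.DataFrame({
    "gm_pct": [0.92, 0.95, 0.90, 0.88, 1.01, 0.97, 0.99, 0.93, 0.96,
               0.85, 0.89, 0.87, 0.94, 0.98, 0.91],
    "pcr_day": ["d1"] * 3 + ["d2"] * 3 + ["d3"] * 3 + ["d4"] * 3 + ["d5"] * 3,
})

# Random-intercept model fitted by REML: between-day and residual components
fit = smf.mixedlm("gm_pct ~ 1", df, groups=df["pcr_day"]).fit(reml=True)
between_day_var = float(fit.cov_re.iloc[0, 0])
within_var = float(fit.scale)
mean_level = float(fit.params["Intercept"])

rsd_r = 100 * within_var ** 0.5 / mean_level                       # repeatability RSD(r)
rsd_R = 100 * (within_var + between_day_var) ** 0.5 / mean_level   # reproducibility RSD(R)
print(round(rsd_r, 1), round(rsd_R, 1))
```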
Nonclinical dose formulation analysis method validation and sample analysis.
Whitmire, Monica Lee; Bryan, Peter; Henry, Teresa R; Holbrook, John; Lehmann, Paul; Mollitor, Thomas; Ohorodnik, Susan; Reed, David; Wietgrefe, Holly D
2010-12-01
Nonclinical dose formulation analysis methods are used to confirm test article concentration and homogeneity in formulations and determine formulation stability in support of regulated nonclinical studies. There is currently no regulatory guidance for nonclinical dose formulation analysis method validation or sample analysis. Regulatory guidance for the validation of analytical procedures has been developed for drug product/formulation testing; however, verification of the formulation concentrations falls under the framework of GLP regulations (not GMP). The only current related regulatory guidance is the bioanalytical guidance for method validation. The fundamental parameters that overlap between bioanalysis and formulation analysis validations include: recovery, accuracy, precision, specificity, selectivity, carryover, sensitivity, and stability. Divergence between bioanalytical and drug product validations typically centers on the acceptance criteria used. Because dose formulation samples are not true "unknowns", the concept of quality control samples covering the entire range of the standard curve, which serve as the indication of confidence in the data generated from the "unknown" study samples, may not always be necessary. Also, the standard bioanalytical acceptance criteria may not be directly applicable, especially when the determined concentration does not match the target concentration. This paper attempts to reconcile the different practices being performed in the community and to provide recommendations of best practices and proposed acceptance criteria for nonclinical dose formulation method validation and sample analysis.
Getts, Katherine M; Quinn, Emilee L; Johnson, Donna B; Otten, Jennifer J
2017-11-01
Measuring food waste (ie, plate waste) in school cafeterias is an important tool to evaluate the effectiveness of school nutrition policies and interventions aimed at increasing consumption of healthier meals. Visual assessment methods are frequently applied in plate waste studies because they are more convenient than weighing. The visual quarter-waste method has become a common tool in studies of school meal waste and consumption, but previous studies of its validity and reliability have used correlation coefficients, which measure association but not necessarily agreement. The aims of this study were to determine, using a statistic measuring interrater agreement, whether the visual quarter-waste method is valid and reliable for assessing food waste in a school cafeteria setting when compared with the gold standard of weighed plate waste. To evaluate validity, researchers used the visual quarter-waste method and weighed food waste from 748 trays at four middle schools and five high schools in one school district in Washington State during May 2014. To assess interrater reliability, researcher pairs independently assessed 59 of the same trays using the visual quarter-waste method. Both validity and reliability were assessed using a weighted κ coefficient. For validity, as compared with the measured weight, 45% of foods assessed using the visual quarter-waste method were in almost perfect agreement, 42% of foods were in substantial agreement, 10% were in moderate agreement, and 3% were in slight agreement. For interrater reliability between pairs of visual assessors, 46% of foods were in perfect agreement, 31% were in almost perfect agreement, 15% were in substantial agreement, and 8% were in moderate agreement. These results suggest that the visual quarter-waste method is a valid and reliable tool for measuring plate waste in school cafeteria settings. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
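A short sketch of the agreement statistic used here: a linearly weighted kappa between visual quarter-waste scores and scores derived from weighed waste, computed with scikit-learn on invented tray data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical quarter-waste scores for 12 trays (0 = no waste ... 4 = all wasted):
# one set from the visual assessor, one derived from the weighed waste
visual  = [0, 1, 2, 4, 3, 0, 1, 2, 2, 4, 3, 1]
weighed = [0, 1, 2, 4, 2, 0, 1, 3, 2, 4, 3, 1]

# Linearly weighted kappa credits near-misses more than gross disagreements
print(round(cohen_kappa_score(visual, weighed, weights="linear"), 2))
```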
Do placebo based validation standards mimic real batch products behaviour? Case studies.
Bouabidi, A; Talbi, M; Bouklouze, A; El Karbane, M; Bourichi, H; El Guezzar, M; Ziemons, E; Hubert, Ph; Rozet, E
2011-06-01
Analytical method validation is a mandatory step to evaluate the ability of developed methods to provide accurate results for their routine application. Validation usually involves validation standards or quality control samples that are prepared in placebo or reconstituted matrix made of a mixture of all the ingredients composing the drug product except the active substance or the analyte under investigation. However, one of the main concerns with this approach is that it may lack an important source of variability that comes from the manufacturing process. The question that remains at the end of the validation step is about the transferability of the quantitative performance from validation standards to real, authentic drug product samples. In this work, this topic is investigated through three case studies. Three analytical methods were validated using the commonly spiked placebo validation standards at several concentration levels as well as using samples coming from authentic batches (tablets and syrups). The results showed that, depending on the type of response function used as the calibration curve, there were various degrees of difference in the accuracy of the results obtained with the two types of samples. Nonetheless, the use of spiked placebo validation standards was shown to mimic relatively well the quantitative behaviour of the analytical methods with authentic batch samples. Adding these authentic batch samples into the validation design may help the analyst to select and confirm the most fit-for-purpose calibration curve and thus increase the accuracy and reliability of the results generated by the method in routine application. Copyright © 2011 Elsevier B.V. All rights reserved.
Hickey, Graeme L; Blackstone, Eugene H
2016-08-01
Clinical risk-prediction models serve an important role in healthcare. They are used for clinical decision-making and measuring the performance of healthcare providers. To establish confidence in a model, external model validation is imperative. When designing such an external model validation study, thought must be given to patient selection, risk factor and outcome definitions, missing data, and the transparent reporting of the analysis. In addition, there are a number of statistical methods available for external model validation. Execution of a rigorous external validation study rests in proper study design, application of suitable statistical methods, and transparent reporting. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
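As a hedged example of the statistical checks typically run in an external validation study, the sketch below computes a c-statistic (discrimination) and a calibration slope on a hypothetical external cohort, given the linear predictor of a previously published risk model. The numbers and variable names are invented and the approach is a common one, not a method prescribed by this paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical external cohort: linear predictor from the published model (lp)
# and the observed binary outcomes (y)
lp = np.array([-2.1, -0.4, 0.3, 1.8, -1.0, 0.9, 2.2, -1.7, 0.1, 1.1])
y  = np.array([0, 0, 1, 1, 0, 1, 1, 0, 0, 1])

pred_risk = 1 / (1 + np.exp(-lp))        # risk predicted by the original model
c_stat = roc_auc_score(y, pred_risk)     # discrimination in the external data

# Calibration slope: refit the outcome on the linear predictor (large C ~ no penalty);
# a slope near 1 suggests the model transports without over- or under-fitting
slope = LogisticRegression(C=1e6).fit(lp.reshape(-1, 1), y).coef_[0, 0]
print(round(c_stat, 2), round(slope, 2))
```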
Analysis of Ethanolamines: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS888
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owens, J; Vu, A; Koester, C
The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method titled 'Analysis of Diethanolamine, Triethanolamine, n-Methyldiethanolamine, and n-Ethyldiethanolamine in Water by Single Reaction Monitoring Liquid Chromatography/Tandem Mass Spectrometry (LC/MS/MS): EPA Method MS888'. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in 'EPA Method MS888' for analysis of the listed ethanolamines in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of 'EPA Method MS888' can be determined.
Wang, Zhenlei; Jiang, Ji; Hu, Pei; Zhao, Qian
2017-02-01
Fotagliptin is a novel dipeptidyl peptidase IV inhibitor under clinical development for the treatment of Type II diabetes mellitus. The objective of this study was to develop and validate a specific and sensitive ultra-performance liquid chromatography (UPLC)-MS/MS method for simultaneous determination of fotagliptin and its two major metabolites in human plasma and urine. Methodology & results: After being pretreated using an automated procedure, the plasma and urine samples were separated and detected using a UPLC-ESI-MS/MS method, which was validated following international guidelines. A selective and sensitive UPLC-MS/MS method was thus developed and validated for the first time for quantifying fotagliptin and its metabolites in human plasma and urine. The method was successfully applied to support the clinical study of fotagliptin in healthy Chinese subjects.
Majumdar, Subhabrata; Basak, Subhash C
2018-04-26
Proper validation is an important aspect of QSAR modelling. External validation is one of the widely used validation methods in QSAR where the model is built on a subset of the data and validated on the rest of the samples. However, its effectiveness for datasets with a small number of samples but large number of predictors remains suspect. Calculating hundreds or thousands of molecular descriptors using currently available software has become the norm in QSAR research, owing to computational advances in the past few decades. Thus, for n chemical compounds and p descriptors calculated for each molecule, the typical chemometric dataset today has high value of p but small n (i.e. n < p). Motivated by the evidence of inadequacies of external validation in estimating the true predictive capability of a statistical model in recent literature, this paper performs an extensive and comparative study of this method with several other validation techniques. We compared four validation methods: leave-one-out, K-fold, external and multi-split validation, using statistical models built using the LASSO regression, which simultaneously performs variable selection and modelling. We used 300 simulated datasets and one real dataset of 95 congeneric amine mutagens for this evaluation. External validation metrics have high variation among different random splits of the data, hence are not recommended for predictive QSAR models. LOO has the overall best performance among all validation methods applied in our scenario. Results from external validation are too unstable for the datasets we analyzed. Based on our findings, we recommend using the LOO procedure for validating QSAR predictive models built on high-dimensional small-sample data. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
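The comparison described above can be sketched with scikit-learn: a cross-validated LASSO is assessed by leave-one-out, by 5-fold cross-validation, and by repeated random external splits whose spread illustrates how unstable a single hold-out estimate is on small-n, large-p data. The simulated dataset and settings are arbitrary, and the LOO loop is slow because the model is refitted once per compound.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import (KFold, LeaveOneOut, cross_val_score,
                                     train_test_split)

rng = np.random.default_rng(7)
n, p = 95, 300                              # small n, large p, as in typical QSAR sets
X = rng.standard_normal((n, p))
y = X[:, :5] @ np.array([1.0, -0.8, 0.6, 0.5, -0.4]) + rng.standard_normal(n)

model = LassoCV(cv=5, max_iter=50000)       # LASSO selects variables and fits in one step

loo_mse = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
kfold_r2 = cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0),
                           scoring="r2").mean()

# 'External' validation repeated over random splits: the spread across seeds shows
# how much a single hold-out estimate depends on which compounds were held out
ext_r2 = []
for seed in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=seed)
    ext_r2.append(model.fit(X_tr, y_tr).score(X_te, y_te))
print(loo_mse, kfold_r2, np.mean(ext_r2), np.std(ext_r2))
```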
Validity of a digital diet estimation method for use with preschool children
USDA-ARS?s Scientific Manuscript database
The validity of using the Remote Food Photography Method (RFPM) for measuring food intake of minority preschool children is not well documented. The aim of the study was to determine the validity of intake estimations made by human raters using the RFPM compared with those obtained by weigh...
Validating Accelerometry and Skinfold Measures in Youth with Down Syndrome
ERIC Educational Resources Information Center
Esposito, Phil Michael
2012-01-01
Current methods for measuring quantity and intensity of physical activity based on accelerometer output have been studied and validated in youth. These methods have been applied to youth with Down syndrome (DS) with no empirical research done to validate these measures. Similarly, individuals with DS have unique body proportions not represented by…
Analyzing the Validity of the Adult-Adolescent Parenting Inventory for Low-Income Populations
ERIC Educational Resources Information Center
Lawson, Michael A.; Alameda-Lawson, Tania; Byrnes, Edward
2017-01-01
Objectives: The purpose of this study was to examine the construct and predictive validity of the Adult-Adolescent Parenting Inventory (AAPI-2). Methods: The validity of the AAPI-2 was evaluated using multiple statistical methods, including exploratory factor analysis, confirmatory factor analysis, and latent class analysis. These analyses were…
History and development of the Schmidt-Hunter meta-analysis methods.
Schmidt, Frank L
2015-09-01
In this article, I provide answers to the questions posed by Will Shadish about the history and development of the Schmidt-Hunter methods of meta-analysis. In the 1970s, I headed a research program on personnel selection at the US Office of Personnel Management (OPM). After our research showed that validity studies have low statistical power, OPM felt a need for a better way to demonstrate test validity, especially in light of court cases challenging selection methods. In response, we created our method of meta-analysis (initially called validity generalization). Results showed that most of the variability of validity estimates from study to study was because of sampling error and other research artifacts such as variations in range restriction and measurement error. Corrections for these artifacts in our research and in replications by others showed that the predictive validity of most tests was high and generalizable. This conclusion challenged long-standing beliefs and so provoked resistance, which over time was overcome. The 1982 book that we published extending these methods to research areas beyond personnel selection was positively received and was followed by expanded books in 1990, 2004, and 2014. Today, these methods are being applied in a wide variety of areas. Copyright © 2015 John Wiley & Sons, Ltd.
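A minimal sketch of the "bare-bones" validity-generalization computation described above: the observed variance of study validities is compared with the variance expected from sampling error alone. The coefficients and sample sizes are invented, and the artifact corrections for range restriction and measurement error mentioned in the article are omitted.

```python
import numpy as np

def bare_bones_meta(rs, ns):
    """Hunter-Schmidt 'bare-bones' validity generalization: how much of the
    observed variation in validity coefficients is plain sampling error."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    r_bar = np.sum(ns * rs) / np.sum(ns)                    # N-weighted mean validity
    var_obs = np.sum(ns * (rs - r_bar) ** 2) / np.sum(ns)   # observed variance of r
    var_err = (1 - r_bar ** 2) ** 2 / (ns.mean() - 1)       # expected sampling-error variance
    pct_artifact = 100 * min(var_err / var_obs, 1.0)
    return r_bar, var_obs, var_err, pct_artifact

# Hypothetical validity coefficients and sample sizes from ten small local studies
print(bare_bones_meta([0.10, 0.35, 0.22, 0.05, 0.41, 0.18, 0.30, 0.12, 0.25, 0.08],
                      [68, 90, 75, 60, 110, 85, 95, 70, 80, 65]))
```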
In-vitro Equilibrium Phosphate Binding Study of Sevelamer Carbonate by UV-Vis Spectrophotometry.
Prasaja, Budi; Syabani, M Maulana; Sari, Endah; Chilmi, Uci; Cahyaningsih, Prawitasari; Kosasih, Theresia Weliana
2018-06-12
Sevelamer carbonate is a cross-linked polymeric amine; it is the active ingredient in Renvela® tablets. The US FDA recommends demonstrating bioequivalence for the development of a generic sevelamer carbonate product using an in-vitro equilibrium binding study. A simple UV-vis spectrophotometric method was developed and validated for quantification of free phosphate to determine the binding parameter constants of sevelamer. The method validation demonstrated the specificity, limit of quantification, accuracy and precision of the measurements. The validated method has been successfully used to analyze samples in an in-vitro equilibrium binding study for demonstrating bioequivalence. © Georg Thieme Verlag KG Stuttgart · New York.
Janssen, Ellen M; Marshall, Deborah A; Hauber, A Brett; Bridges, John F P
2017-12-01
The recent endorsement of discrete-choice experiments (DCEs) and other stated-preference methods by regulatory and health technology assessment (HTA) agencies has placed a greater focus on demonstrating the validity and reliability of preference results. Areas covered: We present a practical overview of tests of validity and reliability that have been applied in the health DCE literature and explore other study qualities of DCEs. From the published literature, we identify a variety of methods to assess the validity and reliability of DCEs. We conceptualize these methods to create a conceptual model with four domains: measurement validity, measurement reliability, choice validity, and choice reliability. Each domain consists of three categories that can be assessed using one to four procedures (for a total of 24 tests). We present how these tests have been applied in the literature and direct readers to applications of these tests in the health DCE literature. Based on a stakeholder engagement exercise, we consider the importance of study characteristics beyond traditional concepts of validity and reliability. Expert commentary: We discuss study design considerations to assess the validity and reliability of a DCE, consider limitations to the current application of tests, and discuss future work to consider the quality of DCEs in healthcare.
Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context
Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan
2012-01-01
When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720
An integrated bioanalytical method development and validation approach: case studies.
Xue, Y-J; Melo, Brian; Vallejo, Martha; Zhao, Yuwen; Tang, Lina; Chen, Yuan-Shek; Keller, Karin M
2012-10-01
We proposed an integrated bioanalytical method development and validation approach: (1) method screening based on the analyte's physicochemical properties and metabolism information to determine the most appropriate extraction/analysis conditions; (2) preliminary stability evaluation using both quality control and incurred samples to establish sample collection, storage and processing conditions; (3) mock validation to examine method accuracy and precision and incurred sample reproducibility; and (4) method validation to confirm the results obtained during method development. This integrated approach was applied to the determination of compound I in rat plasma and compound II in rat and dog plasma. The effectiveness of the approach was demonstrated by the superior quality of three method validations: (1) a zero run failure rate; (2) >93% of quality control results within 10% of nominal values; and (3) 99% of incurred samples within 9.2% of the original values. In addition, the rat and dog plasma methods for compound II were successfully applied to analyze more than 900 plasma samples obtained from Investigational New Drug (IND) toxicology studies in rats and dogs with near perfect results: (1) a zero run failure rate; (2) excellent accuracy and precision for standards and quality controls; and (3) 98% of incurred samples within 15% of the original values. Copyright © 2011 John Wiley & Sons, Ltd.
Liebers, Falk; Brandstädt, Felix; Schust, Marianne; Serafin, Patrick; Schäfer, Andreas; Gebhardt, Hansjürgen; Hartmann, Bernd; Steinberg, Ulf
2017-01-01
Introduction The impact of work-related musculoskeletal disorders is considerable. The assessment of work tasks with physical workloads is crucial to estimate the work-related health risks of exposed employees. Three key indicator methods are available for risk assessment regarding manual lifting, holding and carrying of loads; manual pulling and pushing of loads; and manual handling operations. Three further KIMs for risk assessment regarding whole-body forces, awkward body postures and body movement have been developed de novo. In addition, the development of a newly drafted combined method for mixed exposures is planned. All methods will be validated regarding face validity, reliability, convergent validity, criterion validity and further aspects of utility under practical conditions. Methods and analysis As part of the joint project MEGAPHYS (multilevel risk assessment of physical workloads), a mixed-methods study is being designed for the validation of KIMs and conducted in companies of different sizes and branches in Germany. Workplaces are documented and analysed by observations, applying KIMs, interviews and assessment of environmental conditions. Furthermore, a survey among the employees at the respective workplaces takes place with standardised questionnaires, interviews and physical examinations. It is intended to include 1200 employees at 120 different workplaces. For analysis of the quality criteria, recommendations of the COSMIN checklist (COnsensus-based Standards for the selection of health Measurement INstruments) will be taken into account. Ethics and dissemination The study was planned and conducted in accordance with the German Medical Professional Code and the Declaration of Helsinki as well as the German Federal Data Protection Act. The design of the study was approved by ethics committees. We intend to publish the validated KIMs in 2018. Results will be published in peer-reviewed journals, presented at international meetings and disseminated to actual users for practical application. PMID:28827239
ERIC Educational Resources Information Center
Miciak, Jeremy; Fletcher, Jack M.; Stuebing, Karla K.; Vaughn, Sharon; Tolar, Tammy D.
2014-01-01
Few empirical investigations have evaluated learning disabilities (LD) identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability and validity of two proposed PSW methods: the concordance/discordance method (C/DM) and cross battery assessment (XBA) method. Cognitive assessment…
Content validity across methods of malnutrition assessment in patients with cancer is limited.
Sealy, Martine J; Nijholt, Willemke; Stuiver, Martijn M; van der Berg, Marit M; Roodenburg, Jan L N; van der Schans, Cees P; Ottery, Faith D; Jager-Wittenaar, Harriët
2016-08-01
To identify malnutrition assessment methods in cancer patients and assess their content validity based on internationally accepted definitions for malnutrition. Systematic review of studies in cancer patients that operationalized malnutrition as a variable, published since 1998. Eleven key concepts, within the three domains reflected by the malnutrition definitions acknowledged by European Society for Clinical Nutrition and Metabolism (ESPEN) and the American Society for Parenteral and Enteral Nutrition (ASPEN): A: nutrient balance; B: changes in body shape, body area and body composition; and C: function, were used to classify content validity of methods to assess malnutrition. Content validity indices (M-CVIA-C) were calculated per assessment method. Acceptable content validity was defined as M-CVIA-C ≥ 0.80. Thirty-seven assessment methods were identified in the 160 included articles. Mini Nutritional Assessment (M-CVIA-C = 0.72), Scored Patient-Generated Subjective Global Assessment (M-CVIA-C = 0.61), and Subjective Global Assessment (M-CVIA-C = 0.53) scored highest M-CVIA-C. A large number of malnutrition assessment methods are used in cancer research. Content validity of these methods varies widely. None of these assessment methods has acceptable content validity, when compared against a construct based on ESPEN and ASPEN definitions of malnutrition. Copyright © 2016 Elsevier Inc. All rights reserved.
Schiffman, Eric L.; Truelove, Edmond L.; Ohrbach, Richard; Anderson, Gary C.; John, Mike T.; List, Thomas; Look, John O.
2011-01-01
AIMS The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. An overview is presented, including Axis I and II methodology and descriptive statistics for the study participant sample. This paper details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. Validity testing for the Axis II biobehavioral instruments was based on previously validated reference standards. METHODS The Axis I reference standards were based on the consensus of 2 criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion exam reliability was also assessed within study sites. RESULTS Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, κ = 0.53). Intrasite criterion exam agreement with reference standards was excellent (κ ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (κ = 0.71 and 0.84, respectively). CONCLUSION The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods. PMID:20213028
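The agreement statistics quoted above are Cohen's kappa values. As a minimal, self-contained illustration (with invented diagnoses, not the project's data), kappa for one examiner against a reference standard can be computed as:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical Axis I diagnoses (1 = disorder present) from one criterion examiner
# and the consensus reference standard for 15 participants.
examiner  = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
reference = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1]

kappa = cohen_kappa_score(examiner, reference)
print(f"Cohen's kappa = {kappa:.2f}")  # values >= 0.81 are conventionally read as excellent
```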
Validation of Skills, Knowledge and Experience in Lifelong Learning in Europe
ERIC Educational Resources Information Center
Ogunleye, James
2012-01-01
The paper examines systems of validation of skills and experience as well as the main methods/tools currently used for validating skills and knowledge in lifelong learning. The paper uses mixed methods--a case study research and content analysis of European Union policy documents and frameworks--as a basis for this research. The selection of the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owens, J; Koester, C
The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method for analysis of aldicarb, bromadiolone, carbofuran, oxamyl, and methomyl in water by high performance liquid chromatography tandem mass spectrometry (HPLC-MS/MS), titled Method EPA MS666. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in MS666 for analysis of carbamate pesticides in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of Method EPA MS666 can be determined.
Dynamic Time Warping compared to established methods for validation of musculoskeletal models.
Gaspar, Martin; Welke, Bastian; Seehaus, Frank; Hurschler, Christof; Schwarze, Michael
2017-04-11
By means of Multi-Body musculoskeletal simulation, important variables such as internal joint forces and moments can be estimated which cannot be measured directly. Validation can be performed by qualitative or by quantitative methods. Especially when comparing time-dependent signals, many methods do not perform well and validation is often limited to qualitative approaches. The aim of the present study was to investigate the capabilities of the Dynamic Time Warping (DTW) algorithm for comparing time series, which can quantify phase as well as amplitude errors. We contrast the sensitivity of DTW with other established metrics: the Pearson correlation coefficient, cross-correlation, the metric according to Geers, RMSE and normalized RMSE. This study is based on two data sets, where one data set represents direct validation and the other represents indirect validation. Direct validation was performed in the context of clinical gait-analysis on trans-femoral amputees fitted with a 6 component force-moment sensor. Measured forces and moments from amputees' socket-prosthesis are compared to simulated forces and moments. Indirect validation was performed in the context of surface EMG measurements on a cohort of healthy subjects with measurements taken of seven muscles of the leg, which were compared to simulated muscle activations. Regarding direct validation, a positive linear relation between results of RMSE and nRMSE to DTW can be seen. For indirect validation, a negative linear relation exists between results of Pearson correlation and cross-correlation to DTW. We propose the DTW algorithm for use in both direct and indirect quantitative validation as it correlates well with methods that are most suitable for one of the tasks. However, in direct validation it should be used together with methods that yield a dimensional error value, in order to make the results easier to interpret. Copyright © 2017 Elsevier Ltd. All rights reserved.
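To make the comparison above concrete, the following sketch implements the classic dynamic-programming form of DTW for two one-dimensional signals and contrasts it with RMSE and the Pearson correlation on a synthetic phase-shifted sine wave; the signals and the absolute-difference cost are illustrative choices, not the study's gait or EMG data.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-programming DTW distance between two 1-D signals."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Synthetic example: the "simulated" signal is a phase-shifted copy of the "measured" one.
t = np.linspace(0, 2 * np.pi, 200)
measured = np.sin(t)
simulated = np.sin(t - 0.3)          # same shape, small phase error

rmse = np.sqrt(np.mean((measured - simulated) ** 2))
r = np.corrcoef(measured, simulated)[0, 1]
print(f"RMSE = {rmse:.3f}, Pearson r = {r:.3f}, DTW = {dtw_distance(measured, simulated):.3f}")
```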
A Complex Systems Approach to Causal Discovery in Psychiatry.
Saxe, Glenn N; Statnikov, Alexander; Fenyo, David; Ren, Jiwen; Li, Zhiguo; Prasad, Meera; Wall, Dennis; Bergman, Nora; Briggs, Ernestine C; Aliferis, Constantin
2016-01-01
Conventional research methodologies and data analytic approaches in psychiatric research are unable to reliably infer causal relations without experimental designs, or to make inferences about the functional properties of the complex systems in which psychiatric disorders are embedded. This article describes a series of studies to validate a novel hybrid computational approach--the Complex Systems-Causal Network (CS-CN) method--designed to integrate causal discovery within a complex systems framework for psychiatric research. The CS-CN method was first applied to an existing dataset on psychopathology in 163 children hospitalized with injuries (validation study). Next, it was applied to a much larger dataset of traumatized children (replication study). Finally, the CS-CN method was applied in a controlled experiment using a 'gold standard' dataset for causal discovery and compared with other methods for accurately detecting causal variables (resimulation controlled experiment). The CS-CN method successfully detected a causal network of 111 variables and 167 bivariate relations in the initial validation study. This causal network had well-defined adaptive properties and a set of variables was found that disproportionately contributed to these properties. Modeling the removal of these variables resulted in significant loss of adaptive properties. The CS-CN method was successfully applied in the replication study and performed better than traditional statistical methods, and similarly to state-of-the-art causal discovery algorithms in the causal detection experiment. The CS-CN method was validated, replicated, and yielded both novel and previously validated findings related to risk factors and potential treatments of psychiatric disorders. The novel approach yields both fine-grain (micro) and high-level (macro) insights and thus represents a promising approach for complex systems-oriented research in psychiatry.
Mousazadeh, Somayeh; Rakhshan, Mahnaz; Mohammadi, Fateme
2017-01-01
Objective: This study aimed to determine the psychometric properties of the sociocultural attitude towards appearance questionnaire in female adolescents. Method: This was a methodological study. The English version of the questionnaire was translated into Persian, using the forward-backward method. Then the face validity, content validity and reliability were checked. To ensure face validity, the questionnaire was given to 25 female adolescents, a psychologist and three nurses, who were required to evaluate the items with respect to problems, ambiguity, relativity, proper terms and grammar, and understandability. For content validity, 15 experts in psychology and nursing, who met the inclusion criteria, were required. They were asked to assess content validity qualitatively. To determine the quantitative content validity, the content validity index and content validity ratio were calculated. At the end, internal consistency of the items was assessed, using Cronbach's alpha. Results: According to the expert judgments, the content validity ratio was 0.81 and the content validity index was 0.91. Besides, the reliability of the questionnaire was confirmed with Cronbach's alpha = 0.91, and the physical and developmental areas showed the highest reliability indices. Conclusion: The aforementioned questionnaire could be used in research to assess female adolescents' self-concept. This can be a stepping-stone towards identification of problems and improvement of adolescents' body image. PMID:28496497
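The content validity ratio and Cronbach's alpha reported above follow standard formulas (Lawshe's CVR and the usual alpha from item variances). A minimal sketch with invented expert counts and Likert responses, not the study's data:

```python
import numpy as np

def lawshe_cvr(n_essential, n_experts):
    """Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2)."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical data: 15 experts, 13 of whom rate an item "essential",
# and Likert responses from 6 adolescents on 4 questionnaire items.
print(f"CVR = {lawshe_cvr(13, 15):.2f}")

responses = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 3, 3, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```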
Munkácsy, Gyöngyi; Sztupinszki, Zsófia; Herman, Péter; Bán, Bence; Pénzváltó, Zsófia; Szarvas, Nóra; Győrffy, Balázs
2016-09-27
No independent cross-validation of success rate for studies utilizing small interfering RNA (siRNA) for gene silencing has been completed before. To assess the influence of experimental parameters like cell line, transfection technique, validation method, and type of control, these parameters have to be evaluated across a large set of studies. We utilized gene chip data published for siRNA experiments to assess success rate and to compare methods used in these experiments. We searched NCBI GEO for samples with whole transcriptome analysis before and after gene silencing and evaluated the efficiency for the target and off-target genes using the array-based expression data. The Wilcoxon signed-rank test was used to assess silencing efficacy, and Kruskal-Wallis tests and Spearman rank correlation were used to evaluate study parameters. Altogether 1,643 samples representing 429 experiments published in 207 studies were evaluated. The fold change (FC) of down-regulation of the target gene was above 0.7 in 18.5% and above 0.5 in 38.7% of experiments. Silencing efficiency was lowest in MCF7 and highest in SW480 cells (FC = 0.59 and FC = 0.30, respectively, P = 9.3E-06). Studies utilizing Western blot for validation performed better than those with quantitative polymerase chain reaction (qPCR) or microarray (FC = 0.43, FC = 0.47, and FC = 0.55, respectively, P = 2.8E-04). There was no correlation between type of control, transfection method, publication year, and silencing efficiency. Although gene silencing is a robust feature successfully cross-validated in the majority of experiments, efficiency remained insufficient in a significant proportion of studies. Selection of the cell line model and validation method had the highest influence on silencing proficiency.
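A rough sketch of the evaluation pipeline described above, using synthetic paired expression values rather than the GEO data: fold change per experiment, a Wilcoxon signed-rank test of silencing efficacy, and a Kruskal-Wallis comparison across hypothetical method groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical log2 expression of a target gene before/after silencing
# in 20 paired array experiments (values are illustrative).
before = rng.normal(10.0, 0.5, size=20)
after = before + rng.normal(-1.2, 0.4, size=20)   # knock-down lowers expression

# Fold change expressed as after/before on the linear scale (FC < 1 = down-regulation).
fc = 2.0 ** (after - before)
print(f"median FC = {np.median(fc):.2f}")

# Paired nonparametric test of silencing efficacy.
w, p = stats.wilcoxon(after, before)
print(f"Wilcoxon signed-rank: W = {w:.1f}, p = {p:.2e}")

# Comparing FC across (hypothetical) validation-method groups with Kruskal-Wallis.
groups = [fc[:7], fc[7:14], fc[14:]]
h, p_kw = stats.kruskal(*groups)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.3f}")
```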
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
Tozzoli, Rosangela; Maugliani, Antonella; Michelacci, Valeria; Minelli, Fabio; Caprioli, Alfredo; Morabito, Stefano
2018-05-08
In 2006, the European Committee for Standardisation (CEN)/Technical Committee 275 - Food analysis - Horizontal methods/Working Group 6 - Microbiology of the food chain (TC275/WG6) launched the project of validating the method ISO 16654:2001 for the detection of Escherichia coli O157 in foodstuffs by evaluating its performance, in terms of sensitivity and specificity, through collaborative studies. Previously, a validation study had been conducted to assess the performance of Method No 164 developed by the Nordic Committee for Food Analysis (NMKL), which also aims at detecting E. coli O157 in food and is based on a procedure equivalent to that of the ISO 16654:2001 standard. Therefore, CEN established that the validation data obtained for the NMKL Method 164 could be exploited for the ISO 16654:2001 validation project, integrated with new data obtained through two additional interlaboratory studies on milk and sprouts, run in the framework of the CEN mandate No. M381. The ISO 16654:2001 validation project was led by the European Union Reference Laboratory for Escherichia coli including VTEC (EURL-VTEC), which organized the collaborative validation study on milk in 2012 with 15 participating laboratories and that on sprouts in 2014, with 14 participating laboratories. In both studies, a total of 24 samples were tested by each laboratory. Test materials were spiked with different concentrations of E. coli O157, and the 24 samples corresponded to eight replicates of three levels of contamination: zero, low and high spiking level. The results submitted by the participating laboratories were analyzed to evaluate the sensitivity and specificity of the ISO 16654:2001 method when applied to milk and sprouts. The performance characteristics calculated on the data of the collaborative validation studies run under the CEN mandate No. M381 returned a sensitivity of 100% and a specificity of 94.4% for the milk study. For the sprouts matrix, the sensitivity was 75.9% in the samples with a low level of contamination and 96.4% in samples spiked with a high level of E. coli O157, and the specificity was 99.1%. Copyright © 2018 Elsevier B.V. All rights reserved.
Bellur Atici, Esen; Yazar, Yücel; Ağtaş, Çağan; Ridvanoğlu, Nurten; Karlığa, Bekir
2017-03-20
Antibacterial combinations consisting of the semisynthetic antibiotic amoxicillin (amox) and the β-lactamase inhibitor potassium clavulanate (clav) are commonly used, and several chromatographic methods have been reported for their quantification in mixtures. In the present work, a single HPLC method for related substances analyses of amoxicillin and potassium clavulanate mixtures was developed and validated according to International Conference on Harmonization (ICH) guidelines. Eighteen amoxicillin and six potassium clavulanate impurities were successfully separated from each other by triple gradient elution using a Thermo Hypersil Zorbax BDS C18 (250 mm × 4.6 mm, 3 μm) column with 50 μL injection volumes at a wavelength of 215 nm. Commercially unavailable impurities were formed by degradation of amoxicillin and potassium clavulanate, identified by LC-MS studies and used during analytical method development and validation studies. Also, the process-related amoxicillin impurity-P was synthesized and characterized by using nuclear magnetic resonance (NMR) and mass spectroscopy (MS) for the first time. To complement this work, an assay method for amoxicillin and potassium clavulanate mixtures was developed and validated; stress-testing and stability studies of amox/clav mixtures were carried out under specified conditions according to ICH and analyzed using the validated stability-indicating assay and related substances methods. Copyright © 2016 Elsevier B.V. All rights reserved.
Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.
2017-01-01
Objective Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I2 statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I2 statistics and prediction intervals for c-statistics. Conclusion This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237
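A simplified sketch of the leave-one-center-out validation with random-effects pooling described above. It uses synthetic multicenter data and a plain logistic model instead of the study's heart-failure models, the Hanley-McNeil approximation for the variance of each hospital-specific c-statistic, and DerSimonian-Laird pooling; all of these are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic multicenter data: 20 "hospitals", two predictors, binary outcome.
def make_hospital(n, shift):
    x = rng.normal(0, 1, size=(n, 2))
    logit = -1.0 + 1.2 * x[:, 0] - 0.8 * x[:, 1] + shift
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    return x, y

hospitals = [make_hospital(rng.integers(150, 300), rng.normal(0, 0.3)) for _ in range(20)]

def hanley_mcneil_se(auc, n_pos, n_neg):
    """Approximate standard error of an AUC / c-statistic."""
    q1, q2 = auc / (2 - auc), 2 * auc**2 / (1 + auc)
    var = (auc * (1 - auc) + (n_pos - 1) * (q1 - auc**2)
           + (n_neg - 1) * (q2 - auc**2)) / (n_pos * n_neg)
    return np.sqrt(var)

aucs, ses = [], []
for k in range(len(hospitals)):
    x_val, y_val = hospitals[k]
    x_dev = np.vstack([h[0] for i, h in enumerate(hospitals) if i != k])
    y_dev = np.concatenate([h[1] for i, h in enumerate(hospitals) if i != k])
    model = LogisticRegression(max_iter=1000).fit(x_dev, y_dev)
    auc = roc_auc_score(y_val, model.predict_proba(x_val)[:, 1])
    aucs.append(auc)
    ses.append(hanley_mcneil_se(auc, y_val.sum(), len(y_val) - y_val.sum()))

aucs, ses = np.array(aucs), np.array(ses)

# DerSimonian-Laird random-effects pooling of the hospital-specific c-statistics.
w = 1 / ses**2
theta_fixed = np.sum(w * aucs) / np.sum(w)
q = np.sum(w * (aucs - theta_fixed) ** 2)
df = len(aucs) - 1
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (ses**2 + tau2)
pooled = np.sum(w_re * aucs) / np.sum(w_re)
i2 = max(0.0, (q - df) / q) * 100

print(f"pooled c-statistic = {pooled:.3f}, tau^2 = {tau2:.4f}, I^2 = {i2:.1f}%")
```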
Klussmann, Andre; Liebers, Falk; Brandstädt, Felix; Schust, Marianne; Serafin, Patrick; Schäfer, Andreas; Gebhardt, Hansjürgen; Hartmann, Bernd; Steinberg, Ulf
2017-08-21
The impact of work-related musculoskeletal disorders is considerable. The assessment of work tasks with physical workloads is crucial to estimate the work-related health risks of exposed employees. Three key indicator methods (KIMs) are available for risk assessment regarding manual lifting, holding and carrying of loads; manual pulling and pushing of loads; and manual handling operations. Three further KIMs for risk assessment regarding whole-body forces, awkward body postures and body movement have been developed de novo. In addition, the development of a newly drafted combined method for mixed exposures is planned. All methods will be validated regarding face validity, reliability, convergent validity, criterion validity and further aspects of utility under practical conditions. As part of the joint project MEGAPHYS (multilevel risk assessment of physical workloads), a mixed-methods study is being designed for the validation of KIMs and conducted in companies of different sizes and sectors in Germany. Workplaces are documented and analysed by observations, applying KIMs, interviews and assessment of environmental conditions. Furthermore, a survey among the employees at the respective workplaces takes place with standardised questionnaires, interviews and physical examinations. It is intended to include 1200 employees at 120 different workplaces. For analysis of the quality criteria, recommendations of the COSMIN checklist (COnsensus-based Standards for the selection of health Measurement INstruments) will be taken into account. The study was planned and conducted in accordance with the German Medical Professional Code and the Declaration of Helsinki as well as the German Federal Data Protection Act. The design of the study was approved by ethics committees. We intend to publish the validated KIMs in 2018. Results will be published in peer-reviewed journals, presented at international meetings and disseminated to actual users for practical application. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Rey-Abella, Ferran; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam
2016-05-01
People with Down syndrome present skeletal abnormalities in their feet that can be analyzed by commonly used gold standard indices (the Hernández-Corvo index, the Chippaux-Smirak index, the Staheli arch index, and the Clarke angle) based on footprint measurements. The use of Photoshop CS5 software (Adobe Systems Software Ireland Ltd, Dublin, Ireland) to measure footprints has been validated in the general population. The present study aimed to assess the reliability and validity of this footprint assessment technique in the population with Down syndrome. Using optical podography and photography, 44 footprints from 22 patients with Down syndrome (11 men [mean ± SD age, 23.82 ± 3.12 years] and 11 women [mean ± SD age, 24.82 ± 6.81 years]) were recorded in a static bipedal standing position. A blinded observer performed the measurements using a validated manual method three times during the 4-month study, with 2 months between measurements. Test-retest was used to check the reliability of the Photoshop CS5 software measurements. Validity and reliability were obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed very good values for the Photoshop CS5 method (ICC, 0.982-0.995). Validity testing also found no differences between the techniques (ICC, 0.988-0.999). The Photoshop CS5 software method is reliable and valid for the study of footprints in young people with Down syndrome.
Hanskamp-Sebregts, Mirelle; Zegers, Marieke; Vincent, Charles; van Gurp, Petra J; de Vet, Henrica C W; Wollersheim, Hub
2016-01-01
Objectives Record review is the most used method to quantify patient safety. We systematically reviewed the reliability and validity of adverse event detection with record review. Design A systematic review of the literature. Methods We searched PubMed, EMBASE, CINAHL, PsycINFO and the Cochrane Library from their inception through February 2015. We included all studies that aimed to describe the reliability and/or validity of record review. Two reviewers conducted data extraction. We pooled kappa values (κ) and analysed the differences in subgroups according to number of reviewers, reviewer experience and training level, adjusted for the prevalence of adverse events. Results In 25 studies, the psychometric data of the Global Trigger Tool (GTT) and the Harvard Medical Practice Study (HMPS) were reported, and 24 studies were included for statistical pooling. The inter-rater reliability of the GTT and HMPS showed a pooled κ of 0.65 and 0.55, respectively. The inter-rater agreement was statistically significantly higher when the group of reviewers within a study consisted of a maximum of five reviewers. We found no studies reporting on the validity of the GTT and HMPS. Conclusions The reliability of record review is moderate to substantial and improved when a small group of reviewers carried out record review. The validity of the record review method has never been evaluated, while clinical data registries, autopsy or direct observations of patient care are potential reference methods that can be used to test concurrent validity. PMID:27550650
Rezende, Vinícius Marcondes; Rivellis, Ariane Julio; Gomes, Melissa Medrano; Dörr, Felipe Augusto; Novaes, Mafalda Megumi Yoshinaga; Nardinelli, Luciana; Costa, Ariel Lais de Lima; Chamone, Dalton de Alencar Fisher; Bendit, Israel
2013-01-01
Objective The goal of this study was to monitor imatinib mesylate therapeutically in the Tumor Biology Laboratory, Department of Hematology and Hemotherapy, Hospital das Clínicas, Faculdade de Medicina, Universidade de São Paulo (USP). A simple and sensitive method to quantify imatinib and its metabolite (CGP74588) in human serum was developed and fully validated in order to monitor treatment compliance. Methods The method used to quantify these compounds in serum included protein precipitation extraction followed by instrumental analysis using high performance liquid chromatography coupled with mass spectrometry. The method was validated for several parameters, including selectivity, precision, accuracy, recovery and linearity. Results The parameters evaluated during the validation stage exhibited satisfactory results based on the Food and Drug Administration and the Brazilian Health Surveillance Agency (ANVISA) guidelines for validating bioanalytical methods. These parameters also showed a linear correlation greater than 0.99 for the concentration range between 0.500 µg/mL and 10.0 µg/mL and a total analysis time of 13 minutes per sample. This study includes results (imatinib serum concentrations) for 308 samples from patients being treated with imatinib mesylate. Conclusion The method developed in this study was successfully validated and is being efficiently used to measure imatinib concentrations in samples from chronic myeloid leukemia patients to check treatment compliance. The imatinib serum levels of patients achieving a major molecular response were significantly higher than those of patients who did not achieve this result. These results are thus consistent with published reports concerning other populations. PMID:23741187
A Critical Review of Methods to Evaluate the Impact of FDA Regulatory Actions
Briesacher, Becky A.; Soumerai, Stephen B.; Zhang, Fang; Toh, Sengwee; Andrade, Susan E.; Wagner, Joann L.; Shoaibi, Azadeh; Gurwitz, Jerry H.
2013-01-01
Purpose To conduct a synthesis of the literature on methods to evaluate the impacts of FDA regulatory actions, and identify best practices for future evaluations. Methods We searched MEDLINE for manuscripts published between January 1948 and August 2011 that included terms related to FDA, regulatory actions, and empirical evaluation; the review additionally included FDA-identified literature. We used a modified Delphi method to identify preferred methodologies. We included studies with explicit methods to address threats to validity, and identified designs and analytic methods with strong internal validity that have been applied to other policy evaluations. Results We included 18 studies out of 243 abstracts and papers screened. Overall, analytic rigor in prior evaluations of FDA regulatory actions varied considerably; less than a quarter of studies (22%) included control groups. Only 56% assessed changes in the use of substitute products/services, and 11% examined patient health outcomes. Among studies meeting minimal criteria of rigor, 50% found no impact or weak/modest impacts of FDA actions and 33% detected unintended consequences. Among those studies finding significant intended effects of FDA actions, all cited the importance of intensive communication efforts. There are preferred methods with strong internal validity that have yet to be applied to evaluations of FDA regulatory actions. Conclusions Rigorous evaluations of the impact of FDA regulatory actions have been limited and infrequent. Several methods with strong internal validity are available to improve trustworthiness of future evaluations of FDA policies. PMID:23847020
Consensus methods: review of original methods and their main alternatives used in public health
Bourrée, Fanny; Michel, Philippe; Salmi, Louis Rachid
2008-01-01
Summary Background Consensus-based studies are increasingly used as decision-making methods, for they have lower production cost than other methods (observation, experimentation, modelling) and provide results more rapidly. The objective of this paper is to describe the principles and methods of the four main approaches, Delphi, nominal group, consensus development conference and RAND/UCLA, their use as it appears in peer-reviewed publications, and validation studies published in the healthcare literature. Methods A bibliographic search was performed in PubMed/MEDLINE, Banque de Données Santé Publique (BDSP), The Cochrane Library, Pascal and Francis. Keywords, headings and qualifiers corresponding to a list of terms and expressions related to the consensus methods were searched in the thesauri, and used in the literature search. A search with the same terms and expressions was performed on the Internet using the website Google Scholar. Results All methods, precisely described in the literature, are based on common basic principles such as definition of the subject, selection of experts, and direct or remote interaction processes. They sometimes use quantitative assessment for ranking items. Numerous variants of these methods have been described. Few validation studies have been implemented. Not implementing these basic principles and failing to describe the methods used to reach the consensus were both frequent shortcomings that raise suspicion regarding the validity of consensus methods. Conclusion When it is applied to a new domain with important consequences in terms of decision making, a consensus method should first be validated. PMID:19013039
NASA Technical Reports Server (NTRS)
Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.
1975-01-01
Techniques and support software for the efficient performance of simulation validation are discussed. Overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check case formulation, methods for real-time acquisition and formatting of data from an all-up operational simulator, and methods and criteria for comparison and evaluation of simulation data are included. Vehicle subsystems modules, module integration, special test requirements, and reference data formats are also described.
Integrating cell biology and proteomic approaches in plants.
Takáč, Tomáš; Šamajová, Olga; Šamaj, Jozef
2017-10-03
Significant improvements of protein extraction, separation, mass spectrometry and bioinformatics nurtured advancements of proteomics during the past years. The usefulness of proteomics in the investigation of biological problems can be enhanced by integration with other experimental methods from cell biology, genetics, biochemistry, pharmacology, molecular biology and other omics approaches including transcriptomics and metabolomics. This review aims to summarize current trends integrating cell biology and proteomics in plant science. Cell biology approaches are most frequently used in proteomic studies investigating subcellular and developmental proteomes; however, they have also been employed in proteomic studies exploring abiotic and biotic stress responses, vesicular transport, cytoskeleton and protein posttranslational modifications. They are used either for detailed cellular or ultrastructural characterization of the object subjected to proteomic study, validation of proteomic results or to expand proteomic data. In this respect, a broad spectrum of methods is employed to support proteomic studies including ultrastructural electron microscopy studies, histochemical staining, immunochemical localization, in vivo imaging of fluorescently tagged proteins and visualization of protein-protein interactions. Thus, cell biological observations on fixed or living cell compartments, cells, tissues and organs are feasible, and in some cases fundamental for the validation and complementation of proteomic data. Validation of proteomic data by independent experimental methods requires the development of new complementary approaches. The benefits of cell biology methods and techniques are not sufficiently highlighted in current proteomic studies. This encouraged us to review the most popular cell biology methods used in proteomic studies and to evaluate their relevance and potential for proteomic data validation and enrichment of purely proteomic analyses. We also provide examples of representative studies combining proteomic and cell biology methods for various purposes. Integrating cell biology approaches with proteomic ones allows validation and better interpretation of proteomic data. Moreover, cell biology methods remarkably extend the knowledge provided by proteomic studies and might be fundamental for the functional complementation of proteomic data. This review article summarizes current literature linking proteomics with cell biology. Copyright © 2017 Elsevier B.V. All rights reserved.
English, Sangeeta B.; Shih, Shou-Ching; Ramoni, Marco F.; Smith, Lois E.; Butte, Atul J.
2014-01-01
Though genome-wide technologies, such as microarrays, are widely used, data from these methods are considered noisy; there is still varied success in downstream biological validation. We report a method that increases the likelihood of successfully validating microarray findings using real time RT-PCR, including genes at low expression levels and with small differences. We use a Bayesian network to identify the most relevant sources of noise based on the successes and failures in validation for an initial set of selected genes, and then improve our subsequent selection of genes for validation based on eliminating these sources of noise. The network displays the significant sources of noise in an experiment, and scores the likelihood of validation for every gene. We show how the method can significantly increase validation success rates. In conclusion, in this study, we have successfully added a new automated step to determine the contributory sources of noise that determine successful or unsuccessful downstream biological validation. PMID:18790084
Sánchez-Margallo, Juan A; Sánchez-Margallo, Francisco M; Oropesa, Ignacio; Enciso, Silvia; Gómez, Enrique J
2017-02-01
The aim of this study is to present the construct and concurrent validity of a motion-tracking method of laparoscopic instruments based on an optical pose tracker and determine its feasibility as an objective assessment tool of psychomotor skills during laparoscopic suturing. A group of novice ([Formula: see text] laparoscopic procedures), intermediate (11-100 laparoscopic procedures) and experienced ([Formula: see text] laparoscopic procedures) surgeons performed three intracorporeal sutures on an ex vivo porcine stomach. Motion analysis metrics were recorded using the proposed tracking method, which employs an optical pose tracker to determine the laparoscopic instruments' position. Construct validation was measured for all 10 metrics across the three groups and between pairs of groups. Concurrent validation was measured against a previously validated suturing checklist. Checklists were completed by two independent surgeons over blinded video recordings of the task. Eighteen novices, 15 intermediates and 11 experienced surgeons took part in this study. Execution time and path length travelled by the laparoscopic dissector presented construct validity. Experienced surgeons required significantly less time ([Formula: see text]), travelled less distance using both laparoscopic instruments ([Formula: see text]) and made more efficient use of the work space ([Formula: see text]) compared with novice and intermediate surgeons. Concurrent validation showed strong correlation between both the execution time and path length and the checklist score ([Formula: see text] and [Formula: see text], [Formula: see text]). The suturing performance was successfully assessed by the motion analysis method. Construct and concurrent validity of the motion-based assessment method has been demonstrated for the execution time and path length metrics. This study demonstrates the efficacy of the presented method for objective evaluation of psychomotor skills in laparoscopic suturing. However, this method does not take into account the quality of the suture. Thus, future works will focus on developing new methods combining motion analysis and qualitative outcome evaluation to provide a complete performance assessment to trainees.
Cross-validation of the Beunen-Malina method to predict adult height.
Beunen, Gaston P; Malina, Robert M; Freitas, Duarte I; Maia, José A; Claessens, Albrecht L; Gouveia, Elvio R; Lefevre, Johan
2010-08-01
The purpose of this study was to cross-validate the Beunen-Malina method for non-invasive prediction of adult height. Three hundred and eight boys aged 13, 14, 15 and 16 years from the Madeira Growth Study were observed at annual intervals in 1996, 1997 and 1998 and re-measured 7-8 years later. Height, sitting height and the triceps and subscapular skinfolds were measured; skeletal age was assessed using the Tanner-Whitehouse 2 method. Adult height was measured and predicted using the Beunen-Malina method. Maturity groups were classified using relative skeletal age (skeletal age minus chronological age). Pearson correlations, mean differences and standard errors of estimate (SEE) were calculated. Age-specific correlations between predicted and measured adult height vary between 0.70 and 0.85, while age-specific SEE varies between 3.3 and 4.7 cm. The correlations and SEE are similar to those obtained in the development of the original Beunen-Malina method. The Beunen-Malina method is a valid method to predict adult height in adolescent boys and can be used in European populations or populations from European ancestry. Percentage of predicted adult height is a non-invasive valid method to assess biological maturity.
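The validation statistics named above (Pearson correlation, mean difference between predicted and measured adult height, and the SEE) can be reproduced on any paired data set. A sketch with simulated heights, taking SEE as the residual standard error from regressing measured on predicted height (one common definition, since the abstract does not spell it out):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical predicted vs. measured adult heights (cm) for a validation sample.
measured = rng.normal(176, 7, size=80)
predicted = measured + rng.normal(0, 4, size=80)   # ~4 cm prediction error

r = np.corrcoef(predicted, measured)[0, 1]
mean_diff = np.mean(predicted - measured)

# SEE from the regression of measured on predicted height.
slope, intercept = np.polyfit(predicted, measured, 1)
residuals = measured - (slope * predicted + intercept)
see = np.sqrt(np.sum(residuals ** 2) / (len(measured) - 2))

print(f"r = {r:.2f}, mean difference = {mean_diff:.2f} cm, SEE = {see:.2f} cm")
```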
USDA-ARS's Scientific Manuscript database
The aim of this study was to determine the validity of energy intake (EI) estimations made using the remote food photography method (RFPM) compared to the doubly labeled water (DLW) method in minority preschool children in a free-living environment. Seven days of food intake and spot urine samples...
Illustrating a Mixed-Method Approach for Validating Culturally Specific Constructs
ERIC Educational Resources Information Center
Hitchcock, J.H.; Nastasi, B.K.; Dai, D.Y.; Newman, J.; Jayasena, A.; Bernstein-Moore, R.; Sarkar, S.; Varjas, K.
2005-01-01
The purpose of this article is to illustrate a mixed-method approach (i.e., combining qualitative and quantitative methods) for advancing the study of construct validation in cross-cultural research. The article offers a detailed illustration of the approach using the responses 612 Sri Lankan adolescents provided to an ethnographic survey. Such…
Analysis of Phosphonic Acids: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS999
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owens, J; Vu, A; Koester, C
The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method titled Analysis of Diisopropyl Methylphosphonate, Ethyl Hydrogen Dimethylamidophosphate, Isopropyl Methylphosphonic Acid, Methylphosphonic Acid, and Pinacolyl Methylphosphonic Acid in Water by Multiple Reaction Monitoring Liquid Chromatography/Tandem Mass Spectrometry: EPA Version MS999. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in EPA Method MS999 for analysis of the listed phosphonic acids and surrogates in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of EPA Method MS999 can be determined.
Martins, Danielly da Fonte Carvalho; Florindo, Lorena Coimbra; Machado, Anna Karolina Mouzer da Silva; Todeschini, Vítor; Sangoi, Maximiliano da Silva
2017-11-01
This study presents the development and validation of UV spectrophotometric methods for the determination of pinaverium bromide (PB) in tablet assay and dissolution studies. The methods were satisfactorily validated according to International Conference on Harmonization guidelines. The response was linear (r2 > 0.99) in the concentration ranges of 2-14 μg/mL at 213 nm and 10-70 μg/mL at 243 nm. The LOD and LOQ were 0.39 and 1.31 μg/mL, respectively, at 213 nm. For the 243 nm method, the LOD and LOQ were 2.93 and 9.77 μg/mL, respectively. Precision was evaluated by RSD, and the obtained results were lower than 2%. Adequate accuracy was also obtained. The methods proved to be robust using a full factorial design evaluation. For PB dissolution studies, the best conditions were achieved using a United States Pharmacopeia Dissolution Apparatus 2 (paddle) at 50 rpm and with 900 mL 0.1 M hydrochloric acid as the dissolution medium, presenting satisfactory results during the validation tests. In addition, the kinetic parameters of drug release were investigated using model-dependent methods, and the dissolution profiles were best described by the first-order model. Therefore, the proposed methods were successfully applied for the assay and dissolution analysis of PB in commercial tablets.
DBS-LC-MS/MS assay for caffeine: validation and neonatal application.
Bruschettini, Matteo; Barco, Sebastiano; Romantsik, Olga; Risso, Francesco; Gennai, Iulian; Chinea, Benito; Ramenghi, Luca A; Tripodi, Gino; Cangemi, Giuliana
2016-09-01
DBS might be an appropriate microsampling technique for therapeutic drug monitoring of caffeine in infants. Nevertheless, its application presents several issues that still limit its use. This paper describes a validated DBS-LC-MS/MS method for caffeine. The results of the method validation showed a hematocrit dependence. In the analysis of 96 paired plasma and DBS clinical samples, caffeine levels measured in DBS were statistically significantly lower than in plasma, but the observed differences were independent of hematocrit. These results clearly show the need for extensive validation with real-life samples for DBS-based methods. DBS-LC-MS/MS can be considered a good alternative to traditional methods for therapeutic drug monitoring or PK studies in preterm infants.
Validation of verbal autopsy methods using hospital medical records: a case study in Vietnam.
Tran, Hong Thi; Nguyen, Hoa Phuong; Walker, Sue M; Hill, Peter S; Rao, Chalapati
2018-05-18
Information on causes of death (COD) is crucial for measuring the health outcomes of populations and progress towards the Sustainable Development Goals. In many countries such as Vietnam, where the civil registration and vital statistics (CRVS) system is dysfunctional, information on vital events will continue to rely on verbal autopsy (VA) methods. This study assesses the validity of VA methods used in Vietnam, and provides recommendations on methods for implementing VA validation studies in Vietnam. This validation study was conducted on a sample of 670 deaths from a recent VA study in Quang Ninh province. The study covered 116 cases from this sample, which met three inclusion criteria: a) the death occurred within 30 days of discharge after the last hospitalisation; b) medical records (MRs) for the deceased were available from the respective hospitals; and c) the medical record mentioned that the patient was terminally ill at discharge. For each death, the underlying cause of death (UCOD) identified from MRs was compared to the UCOD from VA. The validity of VA diagnoses for major causes of death was measured using sensitivity, specificity and positive predictive value (PPV). The sensitivity of VA was at least 75% in identifying some leading CODs such as stroke, road traffic accidents and several site-specific cancers. However, sensitivity was less than 50% for other important causes including ischemic heart disease, chronic obstructive pulmonary disease, and diabetes. Overall, there was 57% agreement between UCOD from VA and MR, which increased to 76% when multiple causes from VA were compared to UCOD from MR. Our findings suggest that VA is a valid method to ascertain UCOD in contexts such as Vietnam. Furthermore, within cultural contexts in which patients prefer to die at home instead of a healthcare facility, using the available MRs as the gold standard may be meaningful to the extent that recall bias from the interval between last hospital discharge and death can be minimized. Therefore, future studies should evaluate the validity of MRs as a gold standard for VA studies in contexts similar to the Vietnamese context.
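Sensitivity, specificity and PPV in a validation study of this kind come from a simple cause-specific confusion matrix of VA diagnoses against the medical-record reference. A minimal sketch with invented classifications for a single cause category:

```python
import numpy as np

# Hypothetical comparison of the underlying cause of death from verbal autopsy (VA)
# against the medical-record (MR) reference for one cause category (e.g. stroke).
# Each position is one death: 1 = this cause assigned, 0 = another cause assigned.
mr = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0])
va = np.array([1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0])

tp = np.sum((mr == 1) & (va == 1))
fn = np.sum((mr == 1) & (va == 0))
fp = np.sum((mr == 0) & (va == 1))
tn = np.sum((mr == 0) & (va == 0))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}, PPV = {ppv:.2f}")
```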
Display format, highlight validity, and highlight method: Their effects on search performance
NASA Technical Reports Server (NTRS)
Donner, Kimberly A.; Mckay, Tim D.; Obrien, Kevin M.; Rudisill, Marianne
1991-01-01
Display format and highlight validity were shown to affect visual display search performance; however, these studies were conducted on small, artificial displays of alphanumeric stimuli. A study manipulating these variables was conducted using realistic, complex Space Shuttle information displays. A 2 × 2 × 3 within-subjects analysis of variance found that search times were faster for items in reformatted displays than for current displays. Responses to valid applications of highlighting were significantly faster than responses to non-highlighted or invalidly highlighted applications. The significant format by highlight validity interaction showed that there was little difference in response time between current and reformatted displays when highlighting was validly applied; however, under the non-highlighted or invalid highlight conditions, search times were faster with reformatted displays. A separate within-subjects analysis of variance of display format, highlight validity, and several highlight methods did not reveal a main effect of highlight method. In addition, observed display search times were compared to search times predicted by Tullis' Display Analysis Program. Benefits of highlighting and reformatting displays to enhance search and the necessity to consider highlight validity and format characteristics in tandem for predicting search performance are discussed.
Orsi, Rebecca
2017-02-01
Concept mapping is now a commonly-used technique for articulating and evaluating programmatic outcomes. However, research regarding validity of knowledge and outcomes produced with concept mapping is sparse. The current study describes quantitative validity analyses using a concept mapping dataset. We sought to increase the validity of concept mapping evaluation results by running multiple cluster analysis methods and then using several metrics to choose from among solutions. We present four different clustering methods based on analyses using the R statistical software package: partitioning around medoids (PAM), fuzzy analysis (FANNY), agglomerative nesting (AGNES) and divisive analysis (DIANA). We then used the Dunn and Davies-Bouldin indices to assist in choosing a valid cluster solution for a concept mapping outcomes evaluation. We conclude that the validity of the outcomes map is high, based on the analyses described. Finally, we discuss areas for further concept mapping methods research. Copyright © 2016 Elsevier Ltd. All rights reserved.
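The original analyses were run in R (PAM, FANNY, AGNES and DIANA from the cluster package). As a rough Python stand-in, the sketch below clusters synthetic two-dimensional "concept" coordinates hierarchically and scores candidate solutions with a Dunn index (higher is better) and the Davies-Bouldin index (lower is better); the data and the choice of hierarchical clustering are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

def dunn_index(X, labels):
    """Dunn index: smallest between-cluster distance / largest within-cluster diameter."""
    d = squareform(pdist(X))
    clusters = np.unique(labels)
    min_between, max_within = np.inf, 0.0
    for a in clusters:
        ia = labels == a
        if ia.sum() > 1:
            max_within = max(max_within, d[np.ix_(ia, ia)].max())
        for b in clusters:
            if b > a:
                min_between = min(min_between, d[np.ix_(ia, labels == b)].min())
    return min_between / max_within if max_within > 0 else np.inf

# Synthetic "concept mapping" point data standing in for MDS coordinates of statements.
X, _ = make_blobs(n_samples=120, centers=5, cluster_std=1.2, random_state=0)

for k in range(3, 9):
    labels = AgglomerativeClustering(n_clusters=k).fit_predict(X)
    print(f"k={k}: Dunn = {dunn_index(X, labels):.3f}, "
          f"Davies-Bouldin = {davies_bouldin_score(X, labels):.3f}")
```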
ERIC Educational Resources Information Center
Kimball, Steven M.; Milanowski, Anthony
2009-01-01
Purpose: The article reports on a study of school leader decision making that examined variation in the validity of teacher evaluation ratings in a school district that has implemented a standards-based teacher evaluation system. Research Methods: Applying mixed methods, the study used teacher evaluation ratings and value-added student achievement…
NASA Astrophysics Data System (ADS)
Wang, Zengwei; Zhu, Ping; Liu, Zhao
2018-01-01
A generalized method for predicting the decoupled transfer functions based on in-situ transfer functions is proposed. The method allows predicting the decoupled transfer functions using coupled transfer functions, without disassembling the system. Two ways to derive relationships between the decoupled and coupled transfer functions are presented. Issues related to immeasurability of coupled transfer functions are also discussed. The proposed method is validated by numerical and experimental case studies.
External validation of a Cox prognostic model: principles and methods
2013-01-01
Background A prognostic model should not enter clinical practice unless it has been demonstrated that it performs a useful role. External validation denotes evaluation of model performance in a sample independent of that used to develop the model. Unlike for logistic regression models, external validation of Cox models is sparsely treated in the literature. Successful validation of a model means achieving satisfactory discrimination and calibration (prediction accuracy) in the validation sample. Validating Cox models is not straightforward because event probabilities are estimated relative to an unspecified baseline function. Methods We describe statistical approaches to external validation of a published Cox model according to the level of published information, specifically (1) the prognostic index only, (2) the prognostic index together with Kaplan-Meier curves for risk groups, and (3) the first two plus the baseline survival curve (the estimated survival function at the mean prognostic index across the sample). The most challenging task, requiring level 3 information, is assessing calibration, for which we suggest a method of approximating the baseline survival function. Results We apply the methods to two comparable datasets in primary breast cancer, treating one as derivation and the other as validation sample. Results are presented for discrimination and calibration. We demonstrate plots of survival probabilities that can assist model evaluation. Conclusions Our validation methods are applicable to a wide range of prognostic studies and provide researchers with a toolkit for external validation of a published Cox model. PMID:23496923
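A compact sketch of the ideas above using the lifelines package and its bundled recidivism data as a stand-in for the breast-cancer cohorts: the prognostic index from a model fitted on a derivation half is carried to a validation half, discrimination is summarized by Harrell's c-index, and the calibration slope is estimated by refitting a Cox model with the prognostic index as the sole covariate. The data split and model covariates are illustrative assumptions, not the paper's procedure in full.

```python
import numpy as np
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.utils import concordance_index

# Example data standing in for the derivation and validation cohorts.
df = load_rossi()
rng = np.random.default_rng(3)
val_mask = rng.random(len(df)) < 0.5
derivation, validation = df[~val_mask].copy(), df[val_mask].copy()

# Fit the "published" model on the derivation sample.
cph = CoxPHFitter()
cph.fit(derivation, duration_col="week", event_col="arrest")

# Level-1 information: the prognostic index (linear predictor) in the validation sample.
pi = cph.predict_log_partial_hazard(validation).to_numpy().ravel()

# Discrimination: Harrell's c-index (higher PI = higher risk, hence the minus sign).
c = concordance_index(validation["week"], -pi, validation["arrest"])

# Calibration slope: coefficient of PI when refitted as the sole covariate (ideal = 1).
validation["pi"] = pi
slope_model = CoxPHFitter().fit(validation[["week", "arrest", "pi"]],
                                duration_col="week", event_col="arrest")
slope = slope_model.params_["pi"]

print(f"validation c-index = {c:.3f}, calibration slope = {slope:.2f}")
```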
Correcting for Optimistic Prediction in Small Data Sets
Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.
2014-01-01
The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
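The bootstrap approach favoured above (Harrell-style optimism correction) is easy to sketch: refit the model on each bootstrap resample, measure how much the C statistic shrinks when the resample model is evaluated on the original data, and subtract the average shrinkage from the apparent C. The logistic model and synthetic data below are stand-ins, since the screening data sets are not public.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

# Small synthetic screening data set.
n, p = 200, 5
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))

def fit_auc(X_fit, y_fit, X_eval, y_eval):
    model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
    return roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])

apparent = fit_auc(X, y, X, y)

# Optimism = mean over bootstraps of (AUC on bootstrap sample - AUC on original data).
B, optimism = 200, []
for _ in range(B):
    idx = rng.integers(0, n, size=n)
    boot_auc = fit_auc(X[idx], y[idx], X[idx], y[idx])
    test_auc = fit_auc(X[idx], y[idx], X, y)
    optimism.append(boot_auc - test_auc)

corrected = apparent - np.mean(optimism)
print(f"apparent C = {apparent:.3f}, optimism-corrected C = {corrected:.3f}")
```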
Validation of GC and HPLC systems for residue studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, M.
1995-12-01
For residue studies, GC and HPLC system performance must be validated prior to and during use. One excellent measure of system performance is the standard curve and the associated chromatograms used to construct that curve. The standard curve is a model of system response to an analyte over a specific time period, and is prima facie evidence of system performance beginning at the autosampler and proceeding through the injector, column, detector, electronics, data-capture device, and printer/plotter. This tool measures the performance of the entire chromatographic system; its power negates most of the benefits associated with costly and time-consuming validation of individual system components. Other measures of instrument and method validation will be discussed, including quality control charts and experimental designs for method validation.
Bell, Cheryl; Johnston, Derek; Allan, Julia; Pollard, Beth; Johnston, Marie
2017-05-01
The Demand-Control (DC) and Effort-Reward Imbalance (ERI) models predict health in a work context. Self-report measures of the four key constructs (demand, control, effort, and reward) have been developed and it is important that these measures have good content validity uncontaminated by content from other constructs. We assessed relevance (whether items reflect the constructs) and representativeness (whether all aspects of the construct are assessed, and all items contribute to that assessment) across the instruments and items. Two studies examined fourteen demand/control items from the Job Content Questionnaire and seventeen effort/reward items from the Effort-Reward Imbalance measure using discriminant content validation and a third study developed new methods to assess instrument representativeness. Both methods use judges' ratings and construct definitions to get transparent quantitative estimates of construct validity. Study 1 used dictionary definitions while studies 2 and 3 used published phrases to define constructs. Overall, 3/5 demand items, 4/9 control items, 1/6 effort items, and 7/11 reward items were uniquely classified to the appropriate theoretical construct and were therefore 'pure' items with discriminant content validity (DCV). All pure items measured a defining phrase. However, both the DC and ERI assessment instruments failed to assess all defining aspects. Finding good discriminant content validity for demand and reward measures means these measures are usable and our quantitative results can guide item selection. By contrast, effort and control measures had limitations (in relevance and representativeness) presenting a challenge to the implementation of the theories. Statement of contribution What is already known on this subject? While the reliability and construct validity of Demand-Control and Effort-Reward-Imbalance (DC and ERI) work stress measures are routinely reported, there has not been adequate investigation of their content validity. This paper investigates their content validity in terms of both relevance and representativeness and provides a model for the investigation of content validity of measures in health psychology more generally. What does this study add? A new application of an existing method, discriminant content validity, and a new method of assessing instrument representativeness. 'Pure' DC and ERI items are identified, as are constructs that are not fully represented by their assessment instruments. The findings are important for studies attempting to distinguish between the main DC and ERI work stress constructs. The quantitative results can be used to guide item selection for future studies. © 2017 The British Psychological Society.
Validating for Use and Interpretation: A Mixed Methods Contribution Illustrated
ERIC Educational Resources Information Center
Morell, Linda; Tan, Rachael Jin Bee
2009-01-01
Researchers in the areas of psychology and education strive to understand the intersections among validity, educational measurement, and cognitive theory. Guided by a mixed model conceptual framework, this study investigates how respondents' opinions inform the validation argument. Validity evidence for a science assessment was collected through…
Adolescent Domain Screening Inventory-Short Form: Development and Initial Validation
ERIC Educational Resources Information Center
Corrigan, Matthew J.
2017-01-01
This study sought to develop a short version of the ADSI and investigate its psychometric properties. Methods: This is a secondary analysis. Analyses to determine Cronbach's alpha, correlations to determine concurrent criterion validity and known-instrument validity, and a logistic regression to determine predictive validity were conducted.…
Initial Reliability and Validity of the Perceived Social Competence Scale
ERIC Educational Resources Information Center
Anderson-Butcher, Dawn; Iachini, Aidyn L.; Amorose, Anthony J.
2008-01-01
Objective: This study describes the development and validation of a perceived social competence scale that social workers can easily use to assess children's and youth's social competence. Method: Exploratory and confirmatory factor analyses were conducted on a calibration and a cross-validation sample of youth. Predictive validity was also…
NASA Astrophysics Data System (ADS)
Dörr, Dominik; Joppich, Tobias; Schirmaier, Fabian; Mosthaf, Tobias; Kärger, Luise; Henning, Frank
2016-10-01
Thermoforming of continuously fiber reinforced thermoplastics (CFRTP) is ideally suited to thin-walled and complex-shaped products. By means of forming simulation, an initial validation of the producibility of a specific geometry, an optimization of the forming process, and the prediction of fiber reorientation due to forming are possible. Nevertheless, the applied methods need to be validated. Therefore, a method is presented that enables the calculation of error measures for the mismatch between simulation results and experimental tests, based on measurements with a conventional coordinate measuring device. Since a quantitative measure describing the curvature is provided, the presented method is also suitable for numerical or experimental sensitivity studies on wrinkling behavior. The applied methods for forming simulation, implemented in Abaqus explicit, are presented and applied to a generic geometry. The same geometry is tested experimentally, and simulation and test results are compared by means of the proposed validation method.
Validity of a Measure of Assertiveness
ERIC Educational Resources Information Center
Galassi, John P.; Galassi, Merna D.
1974-01-01
This study was concerned with further validation of a measure of assertiveness. Concurrent validity was established for the College Self-Expression Scale using the method of contrasted groups and through correlations of self-and judges' ratings of assertiveness. (Author)
NASA Astrophysics Data System (ADS)
Ivanova, V.; Surleva, A.; Koleva, B.
2018-06-01
An ion chromatographic method for the determination of fluoride, chloride, nitrate and sulphate in untreated and treated drinking waters is described. An automated Metrohm 850 IC Professional system equipped with a conductivity detector and a Metrosep A Supp 7-250 (250 x 4 mm) column was used. The validation of the method was performed for the simultaneous determination of all studied analytes, and the results showed that the validated method meets the requirements of current water legislation. The main analytical characteristics were estimated for each of the studied analytes: limits of detection, limits of quantification, working and linear ranges, repeatability and intermediate precision, and recovery. The trueness of the method was estimated by analysis of a certified reference material for soft drinking water. A recovery test was performed on spiked drinking water samples. The measurement uncertainty was estimated. The method was applied to the analysis of drinking waters before and after chlorination.
Validation of the ULCEAT methodology by applying it in retrospect to the Roboticbed.
Nakamura, Mio; Suzurikawa, Jun; Tsukada, Shohei; Kume, Yohei; Kawakami, Hideo; Inoue, Kaoru; Inoue, Takenobu
2015-01-01
In response to the increasing demand for care among the oldest segment of the Japanese population, an extensive programme of life support robots is under development, advocated by the Japanese government. Roboticbed® (RB) was developed to help patients in their daily life make independent transfers from and to the bed. The bed is intended both for elderly persons and for persons with a disability. The purpose of this study is to examine the validity of the user and user's life centred clinical evaluation of assistive technology (ULCEAT) methodology. The ULCEAT method was developed to support user-centred development of life support robots. By means of the ULCEAT method, the target users and the use environment were re-established in an earlier study. The validity of the method is tested by re-evaluating the development of RB in retrospect. Six participants used the first prototype of RB (RB1) and eight participants used the second prototype of RB (RB2). The results indicated that the functionality was improved owing to the end-user evaluations. Therefore, we confirmed the content validity of the proposed ULCEAT method. In this study we confirmed the validity of the ULCEAT methodology by applying it in retrospect to the RB development process. This method will be used for the development of life support robots and prototype assistive technologies.
Ocque, Andrew J; Hagler, Colleen E; Difrancesco, Robin; Woolwine-Cunningham, Yvonne; Bednasz, Cindy J; Morse, Gene D; Talal, Andrew H
2016-07-01
The aim was the determination of paritaprevir and ritonavir in rat liver tissue samples. We successfully validated a UPLC-MS/MS method to measure paritaprevir and ritonavir in rat liver using deuterated internal standards (d8-paritaprevir and d6-ritonavir). The method is linear from 20 to 20,000 and 5 to 10,000 pg on column for paritaprevir and ritonavir, respectively, and is normalized per milligram of tissue. Interday and intraday variability ranged from 0.591 to 5.33%, and accuracy ranged from -6.68 to 10.1% for quality control samples. The method was then applied to the measurement of paritaprevir and ritonavir in rat liver tissue samples from a pilot study. The validated method is suitable for the measurement of paritaprevir and ritonavir within rat liver tissue samples for PK studies.
Rajan, Sekar; Colaco, Socorrina; Ramesh, N; Meyyanathan, Subramania Nainar; Elango, K
2014-02-01
This study describes the development and validation of dissolution tests for sustained-release dextromethorphan hydrobromide tablets using an HPLC method. Chromatographic separation was achieved on a C18 column utilizing 0.5% triethylamine (pH 7.5) and acetonitrile in the ratio of 50:50. The detection wavelength was 280 nm. The method was validated, and the response was found to be linear in the drug concentration range of 10-80 microg mL(-1). Suitable conditions were decided after testing sink conditions, dissolution medium and agitation intensity. The best dissolution conditions tested for dextromethorphan hydrobromide were applied to appraise the dissolution profiles. The method was established to have sufficient intermediate precision, as similar separation was achieved on another instrument handled by different operators. Mean recovery was 101.82%. Intra-run precisions (% RSD) for three different concentrations were 1.23, 1.10, 0.72 and 1.57, 1.69, 0.95, and inter-run precisions were 0.83, 1.36 and 1.57%, respectively. The method was successfully applied to a dissolution study of the developed dextromethorphan hydrobromide tablets.
Verification and Validation Studies for the LAVA CFD Solver
NASA Technical Reports Server (NTRS)
Moini-Yekta, Shayan; Barad, Michael F; Sozer, Emre; Brehm, Christoph; Housman, Jeffrey A.; Kiris, Cetin C.
2013-01-01
The verification and validation of the Launch Ascent and Vehicle Aerodynamics (LAVA) computational fluid dynamics (CFD) solver is presented. A modern strategy for verification and validation is described incorporating verification tests, validation benchmarks, continuous integration and version control methods for automated testing in a collaborative development environment. The purpose of the approach is to integrate the verification and validation process into the development of the solver and improve productivity. This paper uses the Method of Manufactured Solutions (MMS) for the verification of 2D Euler equations, 3D Navier-Stokes equations as well as turbulence models. A method for systematic refinement of unstructured grids is also presented. Verification using inviscid vortex propagation and flow over a flat plate is highlighted. Simulation results using laminar and turbulent flow past a NACA 0012 airfoil and ONERA M6 wing are validated against experimental and numerical data.
ERIC Educational Resources Information Center
Zhang, Yi
2016-01-01
Objective: Guided by validation theory, this study aims to better understand the role that academic advising plays in international community college students' adjustment. More specifically, this study investigated how academic advising validates or invalidates their academic and social experiences in a community college context. Method: This…
ERIC Educational Resources Information Center
Demiray, Esra; Isiksal Bostan, Mine
2017-01-01
The purposes of this study are to investigate Turkish pre-service middle school mathematics teachers' ability in conducting valid proofs for statements regarding numbers and algebra in terms of their year of enrollment in a teacher education program, to determine the proof methods used in their valid proofs, and to examine the reasons for their…
ERIC Educational Resources Information Center
Lung, For-Wey; Chiang, Tung-Liang; Lin, Shio-Jean; Feng, Jui-Ying; Chen, Po-Fei; Shu, Bih-Ching
2011-01-01
The parental report instrument is the most efficient developmental detection method and has shown high validity with professional assessment instruments. The reliability and validity of the Taiwan Birth Cohort Study (TBCS) 6-, 18- and 36-month scales have already been established. In this study, the reliability and validity of the 60-month scale…
Development of Creative Behavior Observation Form: A Study on Validity and Reliability
ERIC Educational Resources Information Center
Dere, Zeynep; Ömeroglu, Esra
2018-01-01
In this study, the Creative Behavior Observation Form was developed to assess the creativity of children. For the reliability and validity study of the Creative Behavior Observation Form, a total of 257 children aged 5-6 were sampled using a stratified sampling method. Content Validity Index (CVI) and…
Validity and Reliability of the Turkish Chronic Pain Acceptance Questionnaire
Akmaz, Hazel Ekin; Uyar, Meltem; Kuzeyli Yıldırım, Yasemin; Akın Korhan, Esra
2018-01-01
Background: Pain acceptance is the process of giving up the struggle with pain and learning to live a worthwhile life despite it. In assessing patients with chronic pain in Turkey, making a diagnosis and tracking the effectiveness of treatment is done with scales that have been translated into Turkish. However, there is as yet no valid and reliable scale in Turkish to assess the acceptance of pain. Aims: To validate a Turkish version of the Chronic Pain Acceptance Questionnaire developed by McCracken and colleagues. Study Design: Methodological and cross sectional study. Methods: A simple randomized sampling method was used in selecting the study sample. The sample was composed of 201 patients, more than 10 times the number of items examined for validity and reliability in the study, which totaled 20. A patient identification form, the Chronic Pain Acceptance Questionnaire, and the Brief Pain Inventory were used to collect data. Data were collected by face-to-face interviews. In the validity testing, the content validity index was used to evaluate linguistic equivalence, content validity, construct validity, and expert views. In reliability testing of the scale, Cronbach’s α coefficient was calculated, and item analysis and split-test reliability methods were used. Principal component analysis and varimax rotation were used in factor analysis and to examine factor structure for construct concept validity. Results: The item analysis established that the scale, all items, and item-total correlations were satisfactory. The mean total score of the scale was 21.78. The internal consistency coefficient was 0.94, and the correlation between the two halves of the scale was 0.89. Conclusion: The Chronic Pain Acceptance Questionnaire, which is intended to be used in Turkey upon confirmation of its validity and reliability, is an evaluation instrument with sufficient validity and reliability, and it can be reliably used to examine patients’ acceptance of chronic pain. PMID:29843496
Vasak, Christoph; Strbac, Georg D; Huber, Christian D; Lettner, Stefan; Gahleitner, André; Zechner, Werner
2015-02-01
The study aims to evaluate the accuracy of the NobelGuide™ (Medicim/Nobel Biocare, Göteborg, Sweden) concept while maximally reducing the influence of clinical and surgical parameters. Moreover, the study compared and validated two validation procedures against a reference method. Overall, 60 implants were placed in 10 artificial edentulous mandibles according to the NobelGuide™ protocol. For merging the pre- and postoperative DICOM data sets, three different fusion methods (Triple Scan Technique, NobelGuide™ Validation software, and AMIRA® software [VSG - Visualization Sciences Group, Burlington, MA, USA] as reference) were applied. Discrepancies between the virtual and the actual implant positions were measured. The mean deviations measured with AMIRA® were 0.49 mm (implant shoulder), 0.69 mm (implant apex), and 1.98° (implant axis). The Triple Scan Technique as well as the NobelGuide™ Validation software revealed similar deviations compared with the reference method. A significant correlation between angular and apical deviations was seen (r = 0.53; p < .001). A greater implant diameter was associated with greater deviations (p = .03). The Triple Scan Technique as a system-independent validation procedure, as well as the NobelGuide™ Validation software, is in accordance with the AMIRA® software. The NobelGuide™ system showed similar or smaller spatial and angular deviations compared with others. © 2013 Wiley Periodicals, Inc.
2017-01-01
Background: The Information Assessment Method (IAM) allows clinicians to report the cognitive impact, clinical relevance, intention to use, and expected patient health benefits associated with clinical information received by email. More than 15,000 Canadian physicians and pharmacists use the IAM in continuing education programs. In addition, information providers can use IAM ratings and feedback comments from clinicians to improve their products. Objective: Our general objective was to validate the IAM questionnaire for the delivery of educational material (ecological and logical content validity). Our specific objectives were to measure the relevance and evaluate the representativeness of IAM items for assessing information received by email. Methods: A 3-part mixed methods study was conducted (convergent design). In part 1 (quantitative longitudinal study), the relevance of IAM items was measured. Participants were 5596 physician members of the Canadian Medical Association who used the IAM. A total of 234,196 ratings were collected in 2012. The relevance of IAM items with respect to their main construct was calculated using descriptive statistics (relevance ratio R). In part 2 (qualitative descriptive study), the representativeness of IAM items was evaluated. A total of 15 family physicians completed semistructured face-to-face interviews. For each construct, we evaluated the representativeness of IAM items using a deductive-inductive thematic qualitative data analysis. In part 3 (mixing quantitative and qualitative parts), results from quantitative and qualitative analyses were reviewed, juxtaposed in a table, discussed with experts, and integrated. Thus, our final results are derived from the views of users (ecological content validation) and experts (logical content validation). Results: Of the 23 IAM items, 21 were validated for content, while 2 were removed. In part 1 (quantitative results), 21 items were deemed relevant, while 2 items were deemed not relevant (R=4.86% [N=234,196] and R=3.04% [n=45,394], respectively). In part 2 (qualitative results), 22 items were deemed representative, while 1 item was not representative. In part 3 (mixing quantitative and qualitative results), the content validity of 21 items was confirmed, and the 2 nonrelevant items were excluded. A fully validated version was generated (IAM-v2014). Conclusions: This study produced a content validated IAM questionnaire that is used by clinicians and information providers to assess the clinical information delivered in continuing education programs. PMID:28292738
Leporace, Gustavo; Batista, Luiz Alberto; Serra Cruz, Raphael; Zeitoune, Gabriel; Cavalin, Gabriel Armondi; Metsavaht, Leonardo
2018-03-01
The purpose of this study was to test the validity of dynamic leg length discrepancy (DLLD) during gait as a radiation-free screening method for measuring anatomic leg length discrepancy (ALLD). Thirty-three subjects with mild leg length discrepancy walked along a walkway, and DLLD was calculated using a motion analysis system. Pearson correlation and paired Student t-tests were applied to calculate the correlation and compare the differences between DLLD and ALLD (α = 0.05). The results of our study showed that DLLD is not a valid method to predict ALLD in subjects with mild limb discrepancy.
NASA Technical Reports Server (NTRS)
Price J. M.; Ortega, R.
1998-01-01
The probabilistic method is not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool to estimate structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure mode level.
Sharifi, Mona; Krishanswami, Shanthi; McPheeters, Melissa L
2013-12-30
To identify and assess billing, procedural, or diagnosis code, or pharmacy claim-based algorithms used to identify acute bronchospasm in administrative and claims databases. We searched the MEDLINE database from 1991 to September 2012 using controlled vocabulary and key terms related to bronchospasm, wheeze and acute asthma. We also searched the reference lists of included studies. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria. Two reviewers independently extracted data regarding participant and algorithm characteristics. Our searches identified 677 citations of which 38 met our inclusion criteria. In these 38 studies, the most commonly used ICD-9 code was 493.x. Only 3 studies reported any validation methods for the identification of bronchospasm, wheeze or acute asthma in administrative and claims databases; all were among pediatric populations and only 2 offered any validation statistics. Some of the outcome definitions utilized were heterogeneous and included other disease based diagnoses, such as bronchiolitis and pneumonia, which are typically of an infectious etiology. One study offered the validation of algorithms utilizing Emergency Department triage chief complaint codes to diagnose acute asthma exacerbations with ICD-9 786.07 (wheezing) revealing the highest sensitivity (56%), specificity (97%), PPV (93.5%) and NPV (76%). There is a paucity of studies reporting rigorous methods to validate algorithms for the identification of bronchospasm in administrative data. The scant validated data available are limited in their generalizability to broad-based populations. Copyright © 2013 Elsevier Ltd. All rights reserved.
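The validation statistics quoted above for the ICD-9 786.07 algorithm (sensitivity, specificity, PPV, NPV) all derive from a 2x2 table of algorithm-positive/negative classifications against a reference diagnosis. A minimal sketch of how the four measures are computed; the counts are hypothetical, not from any study in this review.

def validation_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # true cases the algorithm flags
        "specificity": tn / (tn + fp),  # non-cases it correctly ignores
        "ppv": tp / (tp + fp),          # flagged records that are true cases
        "npv": tn / (tn + fn),          # unflagged records that are true non-cases
    }

# Hypothetical counts from a chart-review validation sample
print(validation_stats(tp=56, fp=4, fn=44, tn=96))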
Takahashi, Renata Ferreira; Gryschek, Anna Luíza F P L; Izumi Nichiata, Lúcia Yasuko; Lacerda, Rúbia Aparecida; Ciosak, Suely Itsuko; Gir, Elucir; Padoveze, Maria Clara
2010-05-01
There is growing demand for the adoption of qualification systems for health care practices. This study is aimed at describing the development and validation of indicators for evaluation of biologic occupational risk control programs. The study involved 3 stages: (1) setting up a research team, (2) development of indicators, and (3) validation of the indicators by a team of specialists recruited to validate each attribute of the developed indicators. The content validation method was used for the validation, and a psychometric scale was developed for the specialists' assessment. A consensus technique was used, and every attribute that obtained a Content Validity Index of at least 0.75 was approved. Eight indicators were developed for the evaluation of the biologic occupational risk prevention program, with emphasis on accidents caused by sharp instruments and occupational tuberculosis prevention. The indicators included evaluation of the structure, process, and results at the prevention and biologic risk control levels. The majority of indicators achieved a favorable consensus regarding all validated attributes. The developed indicators were considered validated, and the method used for construction and validation proved to be effective. Copyright (c) 2010 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
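The Content Validity Index used above is commonly computed per attribute as the proportion of specialists rating the item as relevant (for example, 3 or 4 on a 4-point psychometric scale), with 0.75 as the approval cut-off adopted in this study. A minimal sketch under that common convention; the panel ratings themselves are hypothetical.

def content_validity_index(ratings, relevant=(3, 4)):
    """Proportion of judges rating an attribute as relevant on a 4-point scale."""
    return sum(r in relevant for r in ratings) / len(ratings)

# Hypothetical panel of 8 specialists rating one indicator attribute
ratings = [4, 3, 4, 4, 2, 3, 4, 3]
cvi = content_validity_index(ratings)
print(f"CVI = {cvi:.2f}, approved: {cvi >= 0.75}")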
Proposal of a method for the evaluation of inaccuracy of home sphygmomanometers.
Akpolat, Tekin
2009-10-01
There is no formal protocol for evaluating the individual accuracy of home sphygmomanometers. The aims of this study were to propose a method for achieving accuracy in automated home sphygmomanometers and to test the applicability of the defined method. The purposes of this method were to avoid major inaccuracies and to estimate the optimal circumstance for individual accuracy. The method has three stages and sequential measurement of blood pressure is used. The tested devices were categorized into four groups: accurate, acceptable, inaccurate and very inaccurate (major inaccuracy). The defined method takes approximately 10 min (excluding relaxation time) and was tested on three different occasions. The application of the method has shown that inaccuracy is a common problem among non-tested devices, that validated devices are superior to those that are non-validated or whose validation status is unknown, that major inaccuracy is common, especially in non-tested devices and that validation does not guarantee individual accuracy. A protocol addressing the accuracy of a particular sphygmomanometer in an individual patient is required, and a practical method has been suggested to achieve this. This method can be modified, but the main idea and approach should be preserved unless a better method is proposed. The purchase of validated devices and evaluation of accuracy for the purchased device in an individual patient will improve the monitoring of self-measurement of blood pressure at home. This study addresses device inaccuracy, but errors related to the patient, observer or blood pressure measurement technique should not be underestimated, and strict adherence to the manufacturer's instructions is essential.
Determination of Ethanol in Kombucha Products: Single-Laboratory Validation, First Action 2016.12.
Ebersole, Blake; Liu, Ying; Schmidt, Rich; Eckert, Matt; Brown, Paula N
2017-05-01
Kombucha is a fermented nonalcoholic beverage that has drawn government attention due to the possible presence of excess ethanol (≥0.5% alcohol by volume; ABV). A validated method that provides better precision and accuracy for measuring ethanol levels in kombucha is urgently needed by the kombucha industry. The current study validated a method for determining ethanol content in commercial kombucha products. The ethanol content in kombucha was measured using headspace GC with flame ionization detection. An ethanol standard curve ranging from 0.05 to 5.09% ABV was used, with correlation coefficients greater than 99.9%. The method detection limit was 0.003% ABV and the LOQ was 0.01% ABV. The RSDr ranged from 1.62 to 2.21% and the Horwitz ratio ranged from 0.4 to 0.6. The average accuracy of the method was 98.2%. This method was validated following the guidelines for single-laboratory validation by AOAC INTERNATIONAL and meets the requirements set by AOAC SMPR 2016.001, "Standard Method Performance Requirements for Determination of Ethanol in Kombucha."
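The Horwitz ratio (HorRat) reported above is conventionally the observed repeatability RSD divided by the RSD predicted from the Horwitz equation, PRSD(%) = 2^(1 - 0.5 log10 C), where C is the analyte mass fraction. The sketch below follows that general convention; it is not specific to this kombucha method, and the example concentration is illustrative.

import math

def horwitz_prsd(mass_fraction):
    """Predicted RSD (%) from the Horwitz equation.

    mass_fraction : analyte level as a mass fraction, e.g. 0.005 for 0.5 g/100 g.
    """
    return 2 ** (1 - 0.5 * math.log10(mass_fraction))

def horrat(observed_rsd_percent, mass_fraction):
    """HorRat = observed RSD / Horwitz-predicted RSD (both in %)."""
    return observed_rsd_percent / horwitz_prsd(mass_fraction)

# Illustrative only: 2.0% observed RSDr at roughly a 0.5% analyte level
print(round(horrat(2.0, 0.005), 2))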
Validation of a Russian Language Oswestry Disability Index Questionnaire.
Yu, Elizabeth M; Nosova, Emily V; Falkenstein, Yuri; Prasad, Priya; Leasure, Jeremi M; Kondrashov, Dimitriy G
2016-11-01
Study Design: Retrospective reliability and validity study. Objective: To validate a recently translated Russian language version of the Oswestry Disability Index (R-ODI) using standardized methods detailed from previous validations in other languages. Methods: We included all subjects who were seen in our spine surgery clinic, over the age of 18, and fluent in the Russian language. R-ODI was translated by six bilingual people and combined into a consensus version. R-ODI and visual analog scale (VAS) questionnaires for leg and back pain were distributed to subjects during both their initial and follow-up visits. Test validity, stability, and internal consistency were measured using standardized psychometric methods. Results: Ninety-seven subjects participated in the study. No change in the meaning of the questions on R-ODI was noted with translation from English to Russian. There was a significant positive correlation between R-ODI and VAS scores for both the leg and back during both the initial and follow-up visits (p < 0.01 for all). The instrument was shown to have high internal consistency (Cronbach α = 0.82) and moderate test-retest stability (intraclass correlation coefficient = 0.70). Conclusions: The R-ODI is both valid and reliable for use among the Russian-speaking population in the United States.
Chen, Po-Yu
2014-01-01
The validity of the expiration dates (validity periods) that manufacturers provide on food product labels is a crucial food safety issue. Governments must study how to use their authority, through fair rewards and punishments, to prompt manufacturers into adopting rigorous considerations, such as the effect on expected costs of adopting new storage methods to extend product validity periods. Assuming that a manufacturer sells fresh food or drugs, this manufacturer must respond to current stochastic demands at each unit of time to determine the purchase amount of products for sale. If this decision maker is capable and an opportunity arises, new packaging methods (e.g., aluminum foil packaging, vacuum packaging, high-temperature sterilization after glass packaging, or packaging with various degrees of dryness) or storage methods (i.e., adding desiccants or various antioxidants) can be chosen to extend the validity periods of products. To minimize expected costs, this decision maker must be aware of the processing costs of new storage methods, inventory standards, inventory cycle lengths, and changes in the relationships between factors such as the stochastic demand functions within a cycle. Based on these changes in relationships, this study established a mathematical model as a basis for discussing the aforementioned topics.
Validation of Screening Assays for Developmental Toxicity: An Exposure-Based Approach
There continue to be widespread efforts to develop assay methods for developmental toxicity that are shorter than the traditional Segment 2 study and use fewer or no animals. As with any alternative test method, novel developmental toxicity assays must be validated by evaluating ...
Imputation of missing data in time series for air pollutants
NASA Astrophysics Data System (ADS)
Junger, W. L.; Ponce de Leon, A.
2015-02-01
Missing data are a major concern in epidemiological studies of the health effects of environmental air pollutants. This article presents an imputation-based method that is suitable for multivariate time series data, which uses the EM algorithm under the assumption of a normal distribution. Different approaches are considered for filtering the temporal component. A simulation study was performed to assess the validity and performance of the proposed method in comparison with some frequently used methods. Simulations showed that when the amount of missing data was as low as 5%, the complete data analysis yielded satisfactory results regardless of the generating mechanism of the missing data, whereas the validity began to degenerate when the proportion of missing values exceeded 10%. The proposed imputation method exhibited good accuracy and precision in different settings with respect to the patterns of missing observations. Most of the imputations obtained valid results, even under missing not at random. The methods proposed in this study are implemented as a package called mtsdi for the statistical software system R.
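The cited implementation is the R package mtsdi; as a rough Python analogue only, the sketch below removes values from a simulated multivariate series and fills them with a model-based iterative imputer. It is not the authors' EM-with-temporal-filtering method, and the simulated pollutant series and missingness rate are assumptions for illustration.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 365
# Hypothetical correlated daily series for three pollutants
base = rng.normal(size=n)
data = np.column_stack([base + rng.normal(scale=0.3, size=n) for _ in range(3)])

# Remove about 10% of the values completely at random
mask = rng.random(data.shape) < 0.10
observed = data.copy()
observed[mask] = np.nan

imputed = IterativeImputer(random_state=0).fit_transform(observed)
rmse = np.sqrt(np.mean((imputed[mask] - data[mask]) ** 2))
print(f"RMSE on the artificially removed values: {rmse:.3f}")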
[Data validation methods and discussion on Chinese materia medica resource survey].
Zhang, Yue; Ma, Wei-Feng; Zhang, Xiao-Bo; Zhu, Shou-Dong; Guo, Lan-Ping; Wang, Xing-Xing
2013-07-01
Since the beginning of the fourth national survey of Chinese materia medica resources, 22 provinces have conducted pilot surveys. The survey teams have reported an immense amount of data, which places very high demands on the construction of the database system. In order to ensure data quality, it is necessary to check and validate the data in the database system. Data validation is an important method for ensuring the validity, integrity and accuracy of census data. This paper comprehensively introduces the data validation system of the database system of the fourth national survey of Chinese materia medica resources, and further improves the design ideas and programs for data validation. The purpose of this study is to help the survey work proceed smoothly.
USDA-ARS?s Scientific Manuscript database
A collaborative validation study was performed to evaluate the performance of a new U.S. Food and Drug Administration method developed for detection of the protozoan parasite, Cyclospora cayetanensis, on cilantro and raspberries. The method includes a sample preparation step in which oocysts are re...
Determination of vitamin C in foods: current state of method validation.
Spínola, Vítor; Llorent-Martínez, Eulogio J; Castilho, Paula C
2014-11-21
Vitamin C is one of the most important vitamins, so reliable information about its content in foodstuffs is a concern to both consumers and quality control agencies. However, the heterogeneity of food matrixes and the potential degradation of this vitamin during its analysis create enormous challenges. This review addresses the development and validation of high-performance liquid chromatography methods for vitamin C analysis in food commodities, during the period 2000-2014. The main characteristics of vitamin C are mentioned, along with the strategies adopted by most authors during sample preparation (freezing and acidification) to avoid vitamin oxidation. After that, the advantages and handicaps of different analytical methods are discussed. Finally, the main aspects concerning method validation for vitamin C analysis are critically discussed. Parameters such as selectivity, linearity, limit of quantification, and accuracy were studied by most authors. Recovery experiments during accuracy evaluation were in general satisfactory, with usual values between 81 and 109%. However, few methods considered vitamin C stability during the analytical process, and the study of the precision was not always clear or complete. Potential future improvements regarding proper method validation are indicated to conclude this review. Copyright © 2014. Published by Elsevier B.V.
Mudge, Elizabeth; Paley, Lori; Schieber, Andreas; Brown, Paula N
2015-10-01
Seeds of milk thistle, Silybum marianum (L.) Gaertn., are used for treatment and prevention of liver disorders and were identified as a high priority ingredient requiring a validated analytical method. An AOAC International expert panel reviewed existing methods and made recommendations concerning method optimization prior to validation. A series of extraction and separation studies were undertaken on the selected method for determining flavonolignans from milk thistle seeds and finished products to address the review panel recommendations. Once optimized, a single-laboratory validation study was conducted. The method was assessed for repeatability, accuracy, selectivity, LOD, LOQ, analyte stability, and linearity. Flavonolignan content ranged from 1.40 to 52.86% in raw materials and dry finished products and ranged from 36.16 to 1570.7 μg/mL in liquid tinctures. Repeatability for the individual flavonolignans in raw materials and finished products ranged from 1.03 to 9.88% RSDr, with HorRat values between 0.21 and 1.55. Calibration curves for all flavonolignan concentrations had correlation coefficients of >99.8%. The LODs for the flavonolignans ranged from 0.20 to 0.48 μg/mL at 288 nm. Based on the results of this single-laboratory validation, this method is suitable for the quantitation of the six major flavonolignans in milk thistle raw materials and finished products, as well as multicomponent products containing dandelion, schizandra berry, and artichoke extracts. It is recommended that this method be adopted as First Action Official Method status by AOAC International.
Nse, Odunaiya; Quinette, Louw; Okechukwu, Ogah
2015-09-01
Well-developed and validated lifestyle cardiovascular disease (CVD) risk factor questionnaires are the key to obtaining accurate information for planning CVD prevention programs, which is a necessity in developing countries. We conducted this review to assess the methods and processes used for the development and content validation of lifestyle CVD risk factor questionnaires and, possibly, to develop an evidence-based guideline for the development and content validation of such questionnaires. Relevant databases at the Stellenbosch University library were searched for studies conducted between 2008 and 2012, in English and among humans, using the following databases: PubMed, CINAHL, PsycINFO and ProQuest. Search terms used were CVD risk factors, questionnaires, smoking, alcohol, physical activity and diet. Methods identified for the development of lifestyle CVD risk factor questionnaires were: review of the literature, either systematic or traditional; involvement of experts and/or the target population using focus group discussions/interviews; the clinical experience of the authors; and the authors' deductive reasoning. For validation, the methods used were: involvement of an expert panel, use of the target population, and factor analysis. Combining methods produces questionnaires with good content validity and other psychometric properties that we consider good.
Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David
2015-01-01
New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.
Lopez-Moreno, Cristina; Perez, Isabel Viera; Urbano, Ana M
2016-03-01
The purpose of this study is to validate a method for the analysis of certain preservatives in meat and to obtain a suitable Certified Reference Material (CRM) for this task. The preservatives studied were NO3(-), NO2(-) and Cl(-), as they serve as important antimicrobial agents in meat that inhibit the growth of spoilage bacteria. The meat samples were prepared using a treatment that allowed the production of a CRM with a known concentration that is highly homogeneous and stable over time. Matrix effects were also studied to evaluate their influence on the analytical signal for the ions of interest, showing that the matrix does not affect the final result. An assessment of the signal variation over time was carried out for the ions. In this regard, although the chloride and nitrate signals remained stable for the duration of the study, the nitrite signal decreased appreciably with time. A mathematical treatment of the data gave a stable nitrite signal, yielding a method suitable for the validation of these anions in meat. A statistical study was needed for the validation of the method, in which the precision, accuracy, uncertainty and other mathematical parameters were evaluated, with satisfactory results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Białek, Michał; Markiewicz, Łukasz; Sawicki, Przemysław
2015-01-01
Delayed lotteries are much more common in everyday life than pure lotteries. Usually, we need to wait to find out the outcome of a risky decision (e.g., investing in a stock market, engaging in a relationship). However, most research has studied time discounting and probability discounting in isolation, using methodologies designed specifically to track changes in one parameter. The most commonly used method is adjusting, but its reported validity and time stability in research on discounting are suboptimal. The goal of this study was to introduce a novel method for analyzing delayed lotteries, conjoint analysis, which is hypothetically more suitable for analyzing individual preferences in this area. A set of two studies compared conjoint analysis with adjusting. The results suggest that individual parameters of discounting strength estimated with conjoint analysis have higher predictive value (Studies 1 and 2) and are more stable over time (Study 2) compared to adjusting. We discuss these findings, despite the exploratory character of the reported studies, by suggesting that future research on delayed lotteries should be cross-validated using both methods.
Li, Xue; Ahmad, Imad A Haidar; Tam, James; Wang, Yan; Dao, Gina; Blasko, Andrei
2018-02-05
A Total Organic Carbon (TOC) based analytical method to quantitate trace residues of clean-in-place (CIP) detergents CIP100® and CIP200® on the surfaces of pharmaceutical manufacturing equipment was developed and validated. Five factors affecting the development and validation of the method were identified: diluent composition, diluent volume, extraction method, location for TOC sample preparation, and oxidant flow rate. Key experimental parameters were optimized to minimize contamination and to improve the sensitivity, recovery, and reliability of the method. The optimized concentration of the phosphoric acid in the swabbing solution was 0.05 M, and the optimal volume of the sample solution was 30 mL. The swab extraction method was 1 min of sonication. The use of a clean room, as compared to an isolated lab environment, was not required for method validation. The method was demonstrated to be linear with a correlation coefficient (R) of 0.9999. The average recoveries from stainless steel surfaces at multiple spike levels were >90%. The repeatability and intermediate precision results were ≤5% across the 2.2-6.6 ppm range (50-150% of the target maximum carry over, MACO, limit). The method was also shown to be sensitive, with a detection limit (DL) of 38 ppb and a quantitation limit (QL) of 114 ppb. The method validation demonstrated that the developed method is suitable for its intended use. The methodology developed in this study is generally applicable to the cleaning verification of any organic detergents used for the cleaning of pharmaceutical manufacturing equipment made of electropolished stainless steel material. Copyright © 2017 Elsevier B.V. All rights reserved.
International Harmonization and Cooperation in the Validation of Alternative Methods.
Barroso, João; Ahn, Il Young; Caldeira, Cristiane; Carmichael, Paul L; Casey, Warren; Coecke, Sandra; Curren, Rodger; Desprez, Bertrand; Eskes, Chantra; Griesinger, Claudius; Guo, Jiabin; Hill, Erin; Roi, Annett Janusch; Kojima, Hajime; Li, Jin; Lim, Chae Hyung; Moura, Wlamir; Nishikawa, Akiyoshi; Park, HyeKyung; Peng, Shuangqing; Presgrave, Octavio; Singer, Tim; Sohn, Soo Jung; Westmoreland, Carl; Whelan, Maurice; Yang, Xingfen; Yang, Ying; Zuang, Valérie
The development and validation of scientific alternatives to animal testing is important not only from an ethical perspective (implementation of the 3Rs), but also to improve safety assessment decision making through the use of mechanistic information of higher relevance to humans. To be effective in these efforts, it is imperative that validation centres, industry, regulatory bodies, academia and other interested parties ensure strong international cooperation, cross-sector collaboration and intense communication in the design, execution, and peer review of validation studies. Such an approach is critical to achieving harmonized and more transparent approaches to method validation, peer review and recommendation, which will ultimately expedite the international acceptance of valid alternative methods or strategies by regulatory authorities and their implementation and use by stakeholders. It also allows greater efficiency and effectiveness to be achieved by avoiding duplication of effort and leveraging limited resources. In view of achieving these goals, the International Cooperation on Alternative Test Methods (ICATM) was established in 2009 by validation centres from Europe, the USA, Canada and Japan. ICATM was later joined by Korea in 2011 and currently also counts Brazil and China as observers. This chapter describes the existing differences across world regions and the major efforts carried out to achieve consistent international cooperation and harmonization in the validation and adoption of alternative approaches to animal testing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekechukwu, A
Method validation is the process of evaluating whether an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP), the International Conference on Harmonisation (ICH), and the United States Food and Drug Administration (USFDA) provide a framework for performing such validations. In general, methods for regulatory compliance must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Elements of these guidelines are readily adapted to the issue of validation for beryllium sampling and analysis. This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers and books reviewed is given in the Appendix. Available validation documents and guides are listed therein; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of the validation process at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all referenced documents were published in English.
NASA Astrophysics Data System (ADS)
Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi
2017-01-01
A workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. The workflow abstracts away the low-level implementation details of the Grid and hence enables users to focus on higher levels of the application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale, complicated applications of remote sensing science. The validation of workflows is important in order to support large-scale, sophisticated scientific computation processes with enhanced performance and to minimize potential waste of time and resources. To investigate the semantic correctness of user-defined workflows, in this paper, we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with an ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameter matching error validation.
Comparative study between EDXRF and ASTM E572 methods using two-way ANOVA
NASA Astrophysics Data System (ADS)
Krummenauer, A.; Veit, H. M.; Zoppas-Ferreira, J.
2018-03-01
Comparison with a reference method is one of the necessary requirements for the validation of non-standard methods. This comparison was made using the experiment planning technique with two-way ANOVA. In the ANOVA, the results obtained using the EDXRF method, which is to be validated, were compared with the results obtained using the ASTM E572-13 standard test method. Fisher's tests (F-tests) were used for the comparative study of the elements molybdenum, niobium, copper, nickel, manganese, chromium and vanadium. All F-tests for the elements indicate that the null hypothesis (Ho) was not rejected. As a result, there is no significant difference between the compared methods. Therefore, according to this study, it is concluded that the EDXRF method passed this method comparison requirement.
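A minimal sketch of a two-way ANOVA comparison in the spirit of the EDXRF versus ASTM E572-13 study described above; the data frame, element values, and factor layout are entirely hypothetical, and the only claim is that the F-test on the method factor addresses the null hypothesis of no difference between methods.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "method":  ["EDXRF", "ASTM"] * 6,
    "element": ["Mo", "Mo", "Nb", "Nb", "Cu", "Cu",
                "Ni", "Ni", "Mn", "Mn", "Cr", "Cr"],
    "value":   [0.021, 0.022, 0.034, 0.033, 0.150, 0.152,
                0.300, 0.298, 1.210, 1.205, 17.10, 17.20],
})

# Two-way ANOVA without interaction: method and element as factors
model = ols("value ~ C(method) + C(element)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # the C(method) row's F-test addresses Ho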
The Bland-Altman Method Should Not Be Used in Regression Cross-Validation Studies
ERIC Educational Resources Information Center
O'Connor, Daniel P.; Mahar, Matthew T.; Laughlin, Mitzi S.; Jackson, Andrew S.
2011-01-01
The purpose of this study was to demonstrate the bias in the Bland-Altman (BA) limits of agreement method when it is used to validate regression models. Data from 1,158 men were used to develop three regression equations to estimate maximum oxygen uptake (R[superscript 2] = 0.40, 0.61, and 0.82, respectively). The equations were evaluated in a…
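For reference, the Bland-Altman limits of agreement that this paper argues should not be used for regression cross-validation are computed from the paired differences between measured and predicted values. A minimal sketch with hypothetical paired measurements; the 1.96 multiplier is the conventional 95% limit.

import numpy as np

def bland_altman_limits(measured, predicted):
    """Mean bias and 95% limits of agreement for paired measurements."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    diff = measured - predicted
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical measured vs. regression-predicted VO2max values (ml/kg/min)
measured  = [42.1, 38.5, 51.0, 45.3, 48.7, 36.9, 40.2, 55.4]
predicted = [40.8, 39.9, 49.5, 46.1, 47.0, 38.2, 41.5, 52.8]
print(bland_altman_limits(measured, predicted))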
Estimation of low back moments from video analysis: a validation study.
Coenen, Pieter; Kingma, Idsart; Boot, Cécile R L; Faber, Gert S; Xu, Xu; Bongers, Paulien M; van Dieën, Jaap H
2011-09-02
This study aimed to develop, compare and validate two versions of a video analysis method for assessment of low back moments during occupational lifting tasks since for epidemiological studies and ergonomic practice relatively cheap and easily applicable methods to assess low back loads are needed. Ten healthy subjects participated in a protocol comprising 12 lifting conditions. Low back moments were assessed using two variants of a video analysis method and a lab-based reference method. Repeated measures ANOVAs showed no overall differences in peak moments between the two versions of the video analysis method and the reference method. However, two conditions showed a minor overestimation of one of the video analysis method moments. Standard deviations were considerable suggesting that errors in the video analysis were random. Furthermore, there was a small underestimation of dynamic components and overestimation of the static components of the moments. Intraclass correlations coefficients for peak moments showed high correspondence (>0.85) of the video analyses with the reference method. It is concluded that, when a sufficient number of measurements can be taken, the video analysis method for assessment of low back loads during lifting tasks provides valid estimates of low back moments in ergonomic practice and epidemiological studies for lifts up to a moderate level of asymmetry. Copyright © 2011 Elsevier Ltd. All rights reserved.
Validity of Bioelectrical Impedance Analysis to Estimation Fat-Free Mass in the Army Cadets.
Langer, Raquel D; Borges, Juliano H; Pascoa, Mauro A; Cirolini, Vagner X; Guerra-Júnior, Gil; Gonçalves, Ezequiel M
2016-03-11
Bioelectrical Impedance Analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate predictive BIA equations for FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. A total of 396 male Brazilian Army cadets, aged 17-24 years, were included. The study used eight published predictive BIA equations and a specific equation for FFM estimation, with dual-energy X-ray absorptiometry (DXA) as the reference method. Student's t-test (for paired samples), linear regression analysis, and the Bland-Altman method were used to test the validity of the BIA equations. The predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05) and large limits of agreement by Bland-Altman. The predictive BIA equations explained 68% to 88% of the FFM variance. The specific BIA equation showed no significant differences in FFM compared to DXA values. The published BIA predictive equations showed poor accuracy in this sample. The specific BIA equation developed in this study demonstrated validity for this sample, although it should be used with caution in samples with a large range of FFM.
Reproducibility and validity of a semi-quantitative FFQ for trace elements.
Lee, Yujin; Park, Kyong
2016-09-01
The aim of this study was to test the reproducibility and validity of a self-administered FFQ for the Trace Element Study of Korean Adults in the Yeungnam area (SELEN). Study subjects were recruited from the SELEN cohort selected from rural and urban areas in Yeungnam, Korea. A semi-quantitative FFQ with 146 items was developed considering the dietary characteristics of cohorts in the study area. In a validation study, seventeen men and forty-eight women aged 38-62 years completed 3-d dietary records (DR) and two FFQ over a 3-month period. The validity was examined with the FFQ and DR, and the reproducibility was estimated using partial correlation coefficients, the Bland-Altman method and cross-classification. There were no significant differences between the mean intakes of selected nutrients as estimated from FFQ1, FFQ2 and DR. The median correlation coefficients for all nutrients were 0·47 and 0·56 in the reproducibility and validity tests, respectively. Bland-Altman's index and cross-classification showed acceptable agreement between FFQ1 and FFQ2 and between FFQ2 and DR. Ultimately, 78 % of the subjects were classified into the same and adjacent quartiles for most nutrients. In addition, the weighted κ value indicated that the two methods agreed fairly. In conclusion, this newly developed FFQ was a suitable dietary assessment method for the SELEN cohort study.
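The cross-classification and weighted kappa agreement reported above can be reproduced, in outline, from paired quartile assignments. The sketch below is illustrative: the FFQ and DR quartile vectors are hypothetical, and the linearly weighted kappa is one common weighting choice rather than necessarily the one used in this study.

import pandas as pd
from sklearn.metrics import cohen_kappa_score

ffq_quartile = [1, 2, 2, 3, 4, 1, 3, 4, 2, 1, 4, 3]
dr_quartile  = [1, 2, 3, 3, 4, 2, 3, 4, 2, 1, 3, 3]

# Cross-classification table: same, adjacent, and opposite quartiles
print(pd.crosstab(pd.Series(ffq_quartile, name="FFQ"),
                  pd.Series(dr_quartile, name="DR")))

# Weighted kappa credits near-misses between adjacent quartiles
print(cohen_kappa_score(ffq_quartile, dr_quartile, weights="linear"))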
NASA Astrophysics Data System (ADS)
Susanti, L. B.; Poedjiastoeti, S.; Taufikurohmah, T.
2018-04-01
The purpose of this study is to explain the validity of the guided inquiry and mind mapping-based worksheet that was developed in this study. The worksheet implemented the phases of the guided inquiry teaching model in order to train students' creative thinking skills. The creative thinking skills trained in this study included fluency, flexibility, originality and elaboration. The types of validity used in this study included content and construct validity. This is development research using the Research and Development (R & D) method. The data of this study were collected using review and validation sheets. The sources of the data were a chemistry lecturer and a chemistry teacher. The data were then analyzed descriptively. The results showed that the worksheet is very valid and can be used as a learning medium, with the percentage of validity ranging from 82.5% to 92.5%.
Dong, Ren G; Welcome, Daniel E; McDowell, Thomas W; Wu, John Z
2013-11-25
The relationship between the vibration transmissibility and driving-point response functions (DPRFs) of the human body is important for understanding vibration exposures of the system and for developing valid models. This study identified their theoretical relationship and demonstrated that the sum of the DPRFs can be expressed as a linear combination of the transmissibility functions of the individual mass elements distributed throughout the system. The relationship is verified using several human vibration models. This study also clarified the requirements for reliably quantifying transmissibility values used as references for calibrating the system models. As an example application, this study used the developed theory to perform a preliminary analysis of the method for calibrating models using both vibration transmissibility and DPRFs. The results of the analysis show that the combined method can theoretically result in a unique and valid solution of the model parameters, at least for linear systems. However, the validation of the method itself does not guarantee the validation of the calibrated model, because the validation of the calibration also depends on the model structure and the reliability and appropriate representation of the reference functions. The basic theory developed in this study is also applicable to the vibration analyses of other structures.
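As a hedged illustration of the relationship stated above, consider a linear lumped-parameter model driven at a single point with acceleration a_0(ω), and let T_i(ω) = a_i(ω)/a_0(ω) denote the transmissibility of mass element m_i. The driving-point apparent mass (one common DPRF) then follows as a mass-weighted combination of the transmissibilities; this is a sketch consistent with the abstract, not the paper's exact notation.

% Sketch: driving-point apparent mass of a linear lumped-parameter model
% written through the transmissibilities of its distributed mass elements.
\[
  M(\omega) \;=\; \frac{F(\omega)}{a_0(\omega)}
            \;=\; \sum_{i=1}^{N} m_i\,\frac{a_i(\omega)}{a_0(\omega)}
            \;=\; \sum_{i=1}^{N} m_i\,T_i(\omega),
  \qquad
  T_i(\omega) \;=\; \frac{a_i(\omega)}{a_0(\omega)}.
\]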
[Validation Study for Analytical Method of Diarrhetic Shellfish Poisons in 9 Kinds of Shellfish].
Yamaguchi, Mizuka; Yamaguchi, Takahiro; Kakimoto, Kensaku; Nagayoshi, Haruna; Okihashi, Masahiro; Kajimura, Keiji
2016-01-01
A method was developed for the simultaneous determination of okadaic acid, dinophysistoxin-1 and dinophysistoxin-2 in shellfish using ultra-performance liquid chromatography with tandem mass spectrometry. Shellfish poisons in spiked samples were extracted with methanol and 90% methanol, and were hydrolyzed with 2.5 mol/L sodium hydroxide solution. Purification was done on an HLB solid-phase extraction column. This method was validated in accordance with the notification of the Ministry of Health, Labour and Welfare of Japan. As a result of the validation study in nine kinds of shellfish, the trueness was 79-101%, and the repeatability and within-laboratory reproducibility were less than 12% and 16%, respectively. The trueness and precision met the target values of the notification.
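Trueness, repeatability, and within-laboratory reproducibility of the kind reported here are typically derived from a one-way ANOVA over spiked replicates analysed on several days. The sketch below illustrates that calculation on simulated recovery data; the spiking level, number of days, and replicate counts are assumptions, not the values from this validation study.

```python
import numpy as np

rng = np.random.default_rng(1)
spiked = 0.16          # hypothetical spiking level (mg/kg)
days, reps = 5, 2      # assumed duplicate analyses on five days

# Simulated measured concentrations with day-to-day and within-day variation
day_effect = rng.normal(0.0, 0.006, size=days)
data = spiked * 0.9 + day_effect[:, None] + rng.normal(0.0, 0.008, size=(days, reps))

grand_mean = data.mean()
trueness = 100.0 * grand_mean / spiked

# One-way ANOVA components (day as the grouping factor)
ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (days * (reps - 1))
ms_between = reps * ((data.mean(axis=1) - grand_mean) ** 2).sum() / (days - 1)
s2_r = ms_within                                   # repeatability variance
s2_L = max((ms_between - ms_within) / reps, 0.0)   # between-day variance component
rsd_r = 100.0 * np.sqrt(s2_r) / grand_mean
rsd_wR = 100.0 * np.sqrt(s2_r + s2_L) / grand_mean

print(f"trueness = {trueness:.1f}%, RSDr = {rsd_r:.1f}%, RSDwR = {rsd_wR:.1f}%")
```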
Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems
NASA Astrophysics Data System (ADS)
Nieciąg, Halina
2015-10-01
Software is used in order to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses the method for conducting validation studies of a fragment of software to calculate the values of measurands. Due to the number and nature of the variables affecting the coordinate measurement results and the complex character and multi-dimensionality of measurands, the study used the Monte Carlo method of numerical simulation. The article presents an attempt of possible improvement of results obtained by classic Monte Carlo tools. The algorithm LHS (Latin Hypercube Sampling) was implemented as alternative to the simple sampling schema of classic algorithm.
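Latin Hypercube Sampling stratifies each input dimension so that every equal-probability interval is sampled exactly once, which usually reduces the variance of Monte Carlo estimates relative to the simple sampling scheme. The sketch below is a generic illustration of that idea; the test function and sample sizes are arbitrary assumptions, not the measurand model used in the paper.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """One sample per equal-probability stratum in each of d dimensions, randomly paired."""
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n   # jittered stratum points
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])                  # break the pairing between dimensions
    return u

def test_function(x):
    # arbitrary smooth two-input measurand model, for illustration only
    return np.sin(2 * np.pi * x[:, 0]) + x[:, 1] ** 2

rng = np.random.default_rng(42)
n, runs = 100, 500
est_srs = [test_function(rng.random((n, 2))).mean() for _ in range(runs)]
est_lhs = [test_function(latin_hypercube(n, 2, rng)).mean() for _ in range(runs)]
print(f"std of estimate, simple sampling: {np.std(est_srs):.5f}")
print(f"std of estimate, LHS:             {np.std(est_lhs):.5f}")
```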
Alternative methods to evaluate trial level surrogacy.
Abrahantes, Josè Cortiñas; Shkedy, Ziv; Molenberghs, Geert
2008-01-01
The evaluation and validation of surrogate endpoints have been extensively studied in the last decade. Prentice [1] and Freedman, Graubard and Schatzkin [2] laid the foundations for the evaluation of surrogate endpoints in randomized clinical trials. Later, Buyse et al. [5] proposed a meta-analytic methodology, producing different methods for different settings, which was further studied by Alonso and Molenberghs [9] in their unifying approach based on information theory. In this article, we focus our attention on trial-level surrogacy and propose alternative procedures to evaluate this surrogacy measure that do not pre-specify the type of association. A promising correction based on cross-validation is investigated, as well as the construction of confidence intervals for this measure. To avoid making assumptions about the type of relationship between the treatment effects and its distribution, a collection of alternative methods based on regression trees, bagging, random forests, and support vector machines, combined with bootstrap-based confidence intervals and, should one wish, a cross-validation-based correction, is proposed and applied. We apply the various strategies to data from three clinical studies: in ophthalmology, in advanced colorectal cancer, and in schizophrenia. The results obtained for the three case studies are compared; they indicate that random forest and bagging models produce larger estimated values for the surrogacy measure, which are in general more stable and have narrower confidence intervals than those from linear regression and support vector regression. For the advanced colorectal cancer studies, we even found that the trial-level surrogacy is considerably different from what has been reported. In general, the alternative methods are more computationally demanding; in particular, the calculation of the confidence intervals requires more computational time than the delta-method counterpart. First, more flexible modeling techniques can be used, allowing for other types of association. Second, when no cross-validation-based correction is applied, overly optimistic trial-level surrogacy estimates will be found; thus cross-validation is highly recommended. Third, the use of the delta method to calculate confidence intervals is not recommended, since it makes assumptions valid only in very large samples and may produce range-violating limits. We therefore recommend alternatives: bootstrap methods in general. Also, the information-theoretic approach produces results comparable with the bagging and random forest approaches when cross-validation correction is applied. It is also important to observe that, even in cases where the linear model might be a good option, bagging methods perform well and their confidence intervals are narrower.
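To make the idea concrete, the sketch below illustrates one way, under assumptions and not the authors' exact implementation, to estimate trial-level surrogacy flexibly: trial-specific treatment effects on the surrogate (alpha) and the true endpoint (beta) are simulated, a random forest predicts beta from alpha with leave-one-trial-out cross-validation, the surrogacy measure is taken as the cross-validated R², and a bootstrap over trials gives a confidence interval.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n_trials = 30
alpha = rng.normal(0.0, 1.0, n_trials)                                   # effects on the surrogate
beta = 0.8 * alpha + 0.3 * alpha ** 2 + rng.normal(0.0, 0.4, n_trials)   # effects on the true endpoint

def loo_r2(a, b):
    """Leave-one-trial-out cross-validated R^2 of predicting b from a with a random forest."""
    preds = np.empty_like(b)
    for i in range(len(a)):
        train = np.delete(np.arange(len(a)), i)
        rf = RandomForestRegressor(n_estimators=50, random_state=0)
        rf.fit(a[train].reshape(-1, 1), b[train])
        preds[i] = rf.predict(a[i:i + 1].reshape(-1, 1))[0]
    return 1.0 - np.mean((b - preds) ** 2) / np.var(b)

r2_hat = loo_r2(alpha, beta)

# Percentile bootstrap over trials for a confidence interval
boot = [loo_r2(alpha[idx], beta[idx])
        for idx in (rng.integers(0, n_trials, n_trials) for _ in range(100))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"trial-level surrogacy R^2 = {r2_hat:.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```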
Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods
NASA Astrophysics Data System (ADS)
Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan
2017-03-01
Two variants of the Muskingum flood routing method formulated to account for the nonlinearity of the channel routing process are investigated in this study. These variant methods are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require the rigorous calibration and validation procedures needed for the NLM method, because its parameters are related to flow and channel characteristics through established relationships based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), the Differential Evolution (DE), the Particle Swarm Optimization (PSO) and the Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph in a set of 40 km long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same set of channel reaches used for the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on some pertinent evaluation measures. The results of the study reveal that the physically based VPMM method is capable of accounting for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires the use of tedious calibration and validation procedures.
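For reference, the three-parameter nonlinear Muskingum storage relation is S = K[xI + (1 − x)O]^m, and routing proceeds by updating storage with the continuity equation and recovering the outflow from the storage relation. The following is a minimal sketch of that procedure on a synthetic inflow hydrograph; the parameter values and hydrograph are illustrative assumptions, and the AIA calibration step is not shown.

```python
import numpy as np

# Assumed (illustrative) nonlinear Muskingum parameters and time step
K, x, m = 0.1, 0.25, 2.0        # storage coefficient, weighting factor, exponent
dt = 0.5                        # h

# Synthetic inflow hydrograph (m^3/s): a smooth single-peaked flood wave
t = np.arange(0, 48 + dt, dt)
inflow = 20 + 80 * np.exp(-((t - 12) / 5.0) ** 2)

outflow = np.zeros_like(inflow)
outflow[0] = inflow[0]                                  # initial steady state
S = K * (x * inflow[0] + (1 - x) * outflow[0]) ** m     # initial storage from the NLM relation

for i in range(len(t) - 1):
    S += dt * (inflow[i] - outflow[i])                  # continuity: dS/dt = I - O (Euler step)
    # invert S = K[xI + (1 - x)O]^m for the next outflow
    outflow[i + 1] = ((S / K) ** (1.0 / m) - x * inflow[i + 1]) / (1.0 - x)

print(f"peak inflow  = {inflow.max():.1f} m^3/s at t = {t[np.argmax(inflow)]:.1f} h")
print(f"peak outflow = {outflow.max():.1f} m^3/s at t = {t[np.argmax(outflow)]:.1f} h")
```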
Gowda, Nagaraj; Kumar, Pradeep; Panghal, Surender; Rajshree, Mashru
2010-02-01
This study presents the development and validation of a reversed-phase liquid chromatographic method for the determination of mangiferin (MGN) in alcoholic extracts of Mangifera indica. A LiChrospher 100 C18-ODS (250 x 4.6 mm, 5 μm) prepacked column (Merck, Whitehouse Station, NJ) and a mobile phase of potassium dihydrogen orthophosphate (0.01 M, pH 2.7 +/- 0.2)-acetonitrile (15:85, v/v) at a flow rate of 1 mL/min were used. MGN detection was achieved at a wavelength of 254 nm with an SPD-M10A vp PDA detector or an SPD-10AD vp UV detector in combination with Class LC-10A software. The proposed method was validated as prescribed by the International Conference on Harmonization (ICH) with respect to linearity, specificity, accuracy, precision, stability, and quantification. The method validation was carried out using alcoholic extracts and raw materials of leaves and barks. All the validation parameters were within the acceptable limits, and the developed analytical method can successfully be applied for MGN determination.
Sato, Tamaki; Miyamoto, Iori; Uemura, Masako; Nakatani, Tadashi; Kakutani, Naoya; Yamano, Tetsuo
2016-01-01
A validation study was carried out on a rapid method for the simultaneous determination of pesticide residues in vegetables and fruits by LC-MS/MS. Preparation of the test solution was performed by a solid-phase extraction technique with QuEChERS (STQ method). Pesticide residues were extracted with acetonitrile using a homogenizer, followed by simultaneous salting-out and dehydration. The acetonitrile layer was purified with C18 and PSA mini-columns. The method was assessed for 130 pesticide residues in 14 kinds of vegetables and fruits at a concentration level of 0.01 μg/g according to the method validation guideline of the Ministry of Health, Labour and Welfare of Japan. As a result, 75 to 120 of the 130 pesticide residues were determined satisfactorily in the tested samples. Thus, this method could be useful for the rapid and simultaneous determination of multi-class pesticide residues in various vegetables and fruits.
Garcia-Perez, Isabel; Angulo, Santiago; Utzinger, Jürg; Holmes, Elaine; Legido-Quigley, Cristina; Barbas, Coral
2010-07-01
Metabonomic and metabolomic studies are increasingly utilized for biomarker identification in different fields, including biology of infection. The confluence of improved analytical platforms and the availability of powerful multivariate analysis software have rendered the multiparameter profiles generated by these omics platforms a user-friendly alternative to the established analysis methods where the quality and practice of a procedure is well defined. However, unlike traditional assays, validation methods for these new multivariate profiling tools have yet to be established. We propose a validation for models obtained by CE fingerprinting of urine from mice infected with the blood fluke Schistosoma mansoni. We have analysed urine samples from two sets of mice infected in an inter-laboratory experiment where different infection methods and animal husbandry procedures were employed in order to establish the core biological response to a S. mansoni infection. CE data were analysed using principal component analysis. Validation of the scores consisted of permutation scrambling (100 repetitions) and a manual validation method, using a third of the samples (not included in the model) as a test or prediction set. The validation yielded 100% specificity and 100% sensitivity, demonstrating the robustness of these models with respect to deciphering metabolic perturbations in the mouse due to a S. mansoni infection. A total of 20 metabolites across the two experiments were identified that significantly discriminated between S. mansoni-infected and noninfected control samples. Only one of these metabolites, allantoin, was identified as manifesting different behaviour in the two experiments. This study shows the reproducibility of CE-based metabolic profiling methods for disease characterization and screening and highlights the importance of much needed validation strategies in the emerging field of metabolomics.
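The validation steps described, a held-out test set for the PCA-based model and permutation scrambling of the class labels, can be expressed compactly. The sketch below is a generic illustration on simulated two-group data, assuming a nearest-class-centroid rule in PCA score space; it is not the authors' CE data or exact model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_per_group, n_features = 30, 50
infected = rng.normal(0.5, 1.0, (n_per_group, n_features))   # simulated urinary CE profiles
control = rng.normal(0.0, 1.0, (n_per_group, n_features))
X = np.vstack([infected, control])
y = np.array([1] * n_per_group + [0] * n_per_group)

def holdout_accuracy(X, y, seed=0):
    """Fit PCA on two thirds of the samples; classify the held-out third by nearest class centroid in score space."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, stratify=y, random_state=seed)
    pca = PCA(n_components=2).fit(X_tr)
    T_tr, T_te = pca.transform(X_tr), pca.transform(X_te)
    centroids = np.array([T_tr[y_tr == g].mean(axis=0) for g in (0, 1)])
    pred = np.argmin(((T_te[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1), axis=1)
    return float(np.mean(pred == y_te))

acc = holdout_accuracy(X, y)

# Permutation scrambling: repeat the whole procedure with randomly permuted class labels
null_acc = np.array([holdout_accuracy(X, rng.permutation(y), seed=i) for i in range(100)])
print(f"test-set accuracy = {acc:.2f}")
print(f"mean permuted-label accuracy = {null_acc.mean():.2f}, p = {np.mean(null_acc >= acc):.2f}")
```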
Validity of Dietary Assessment in Athletes: A Systematic Review
Beck, Kathryn L.; Gifford, Janelle A.; Slater, Gary; Flood, Victoria M.; O’Connor, Helen
2017-01-01
Dietary assessment methods that are recognized as appropriate for the general population are usually applied in a similar manner to athletes, despite the knowledge that sport-specific factors can complicate assessment and impact accuracy in unique ways. As dietary assessment methods are used extensively within the field of sports nutrition, there is concern that the validity of these methodologies has not undergone more rigorous evaluation in this unique population sub-group. The purpose of this systematic review was to compare two or more methods of dietary assessment, including dietary intake measured against biomarkers or reference measures of energy expenditure, in athletes. Six electronic databases were searched for English-language, full-text articles published from January 1980 until June 2016. The search strategy combined the following keywords: diet, nutrition assessment, athlete, and validity; where the following outcomes are reported but not limited to: energy intake, macro- and/or micronutrient intake, food intake, nutritional adequacy, diet quality, or nutritional status. Meta-analysis was performed on studies with sufficient methodological similarity, with between-group standardized mean differences (or effect sizes) and 95% confidence intervals (CI) being calculated. Of the 1624 studies identified, 18 were eligible for inclusion. Studies comparing self-reported energy intake (EI) to energy expenditure assessed via doubly labelled water were grouped for comparison (n = 11) and demonstrated that mean EI was underestimated by 19% (−2793 ± 1134 kJ/day). Meta-analysis revealed a large pooled effect size of −1.006 (95% CI: −1.3 to −0.7; p < 0.001). The remaining studies (n = 7) compared a new dietary tool or instrument to a reference method(s) (e.g., food record, 24-h dietary recall, biomarker) as part of a validation study. This systematic review revealed there are limited robust studies evaluating dietary assessment methods in athletes. Existing literature demonstrates substantial variability between methods, with under- and misreporting of intake being frequently observed. There is a clear need for careful validation of dietary assessment methods, including emerging technical innovations, among athlete populations. PMID:29207495
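The pooled effect size reported here is an inverse-variance combination of per-study standardized mean differences. As a reference for how such a figure is computed, the sketch below pools hypothetical per-study effect sizes and standard errors (the numbers are invented, not the review's data) with a DerSimonian-Laird random-effects model.

```python
import numpy as np

# Hypothetical per-study standardized mean differences and their standard errors (illustrative only)
d = np.array([-1.2, -0.8, -1.5, -0.9, -1.1, -0.7])
se = np.array([0.30, 0.25, 0.40, 0.35, 0.28, 0.33])

w = 1.0 / se**2                                   # fixed-effect (inverse-variance) weights
d_fe = np.sum(w * d) / np.sum(w)

# DerSimonian-Laird between-study variance, then random-effects pooling
Q = np.sum(w * (d - d_fe) ** 2)
tau2 = max(0.0, (Q - (len(d) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (se**2 + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled SMD (random effects) = {d_re:.3f}, 95% CI = "
      f"[{d_re - 1.96 * se_re:.3f}, {d_re + 1.96 * se_re:.3f}]")
```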
Liu, Boquan; Polce, Evan; Sprott, Julien C; Jiang, Jack J
2018-05-17
The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100 Monte Carlo experiments were applied to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos. A diffusive behavior detection-based chaos level test was used to investigate the performances of different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions. Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study. The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.
Survival analysis with error-prone time-varying covariates: a risk set calibration approach
Liao, Xiaomei; Zucker, David M.; Li, Yi; Spiegelman, Donna
2010-01-01
Occupational, environmental, and nutritional epidemiologists are often interested in estimating the prospective effect of time-varying exposure variables such as cumulative exposure or cumulative updated average exposure, in relation to chronic disease endpoints such as cancer incidence and mortality. From exposure validation studies, it is apparent that many of the variables of interest are measured with moderate to substantial error. Although the ordinary regression calibration approach is approximately valid and efficient for measurement error correction of relative risk estimates from the Cox model with time-independent point exposures when the disease is rare, it is not adaptable for use with time-varying exposures. By re-calibrating the measurement error model within each risk set, a risk set regression calibration (RRC) method is proposed for this setting. An algorithm for a bias-corrected point estimate of the relative risk using the RRC approach is presented, followed by the derivation of an estimate of its variance, resulting in a sandwich estimator. Emphasis is on methods applicable to the main study/external validation study design, which arises in important applications. Simulation studies under several assumptions about the error model were carried out, and demonstrated the validity and efficiency of the method in finite samples. The method was applied to a study of diet and cancer from Harvard's Health Professionals Follow-up Study (HPFS). PMID:20486928
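As a simplified illustration of the regression-calibration idea that the risk set approach extends (this is ordinary regression calibration with a linear outcome, not the authors' RRC estimator for the Cox model), the sketch below fits the measurement-error model E[X|W] in an external validation sample and substitutes the calibrated exposure into the main-study regression, largely removing the attenuation seen in the naive slope.

```python
import numpy as np

rng = np.random.default_rng(11)
beta_true, sigma_x, sigma_u = 0.5, 1.0, 0.8

# Main study: only the error-prone exposure W = X + U is observed
n_main = 5000
x_main = rng.normal(0.0, sigma_x, n_main)
w_main = x_main + rng.normal(0.0, sigma_u, n_main)
y_main = beta_true * x_main + rng.normal(0.0, 1.0, n_main)

# External validation study: both the true exposure X and W are observed; fit E[X | W]
n_val = 500
x_val = rng.normal(0.0, sigma_x, n_val)
w_val = x_val + rng.normal(0.0, sigma_u, n_val)
gamma1, gamma0 = np.polyfit(w_val, x_val, 1)           # calibration model: X ~ gamma0 + gamma1 * W

def slope(x, y):
    c = np.cov(x, y)
    return c[0, 1] / c[0, 0]

naive = slope(w_main, y_main)                          # attenuated towards zero
x_hat = gamma0 + gamma1 * w_main                       # calibrated exposure substituted for W
corrected = slope(x_hat, y_main)                       # approximately recovers the true effect
print(f"true slope = {beta_true}, naive = {naive:.3f}, regression-calibrated = {corrected:.3f}")
```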
Mulder, Leontine; van der Molen, Renate; Koelman, Carin; van Leeuwen, Ester; Roos, Anja; Damoiseaux, Jan
2018-05-01
ISO 15189:2012 requires validation of methods used in the medical laboratory and lists a series of performance parameters to consider for inclusion. Although these performance parameters are feasible for clinical chemistry analytes, their application in the validation of autoimmunity tests is a challenge. The lack of gold standards or reference methods, in combination with the scarcity of well-defined diagnostic samples from patients with rare diseases, makes validation of new assays difficult. The present manuscript describes the initiative of Dutch medical immunology laboratory specialists to combine efforts and perform multi-center validation studies of new assays in the field of autoimmunity. Validation data and reports are made available to interested Dutch laboratory specialists. Copyright © 2018 Elsevier B.V. All rights reserved.
Testing and Validation of Computational Methods for Mass Spectrometry.
Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas
2016-03-04
High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.
Azari, Nadia; Soleimani, Farin; Vameghi, Roshanak; Sajedi, Firoozeh; Shahshahani, Soheila; Karimi, Hossein; Kraskian, Adis; Shahrokhi, Amin; Teymouri, Robab; Gharib, Masoud
2017-01-01
Bayley Scales of Infant & Toddler Development is a well-known diagnostic developmental assessment tool for children aged 1-42 months. Our aim was to investigate the validity and reliability of this scale in Persian-speaking children. The method was descriptive-analytic. Translation, back-translation and cultural adaptation were done. Content and face validity of the translated scale were determined by experts' opinions. Overall, 403 children aged 1 to 42 months were recruited from health centers of Tehran during the years 2013-2014 for developmental assessment in the cognitive, communicative (receptive and expressive) and motor (fine and gross) domains. Reliability of the scale was calculated through three methods: internal consistency using Cronbach's alpha coefficient, test-retest and inter-rater methods. Construct validity was calculated using factor analysis and comparison of mean scores. Cultural and linguistic changes were made in items of all domains, especially on the communication subscale. Content and face validity of the test were approved by experts' opinions. Cronbach's alpha coefficient was above 0.74 in all domains. Pearson correlation coefficients in the various domains were ≥0.982 for the test-retest method and ≥0.993 for the inter-rater method. Construct validity of the test was approved by factor analysis. Moreover, the mean scores for the different age groups were compared, and statistically significant differences were observed between the mean scores of different age groups, which confirms the validity of the test. The Bayley Scales of Infant and Toddler Development is a valid and reliable tool for child developmental assessment in Persian-language children.
Aguiar, Lorena Andrade de; Melo, Lauro; de Lacerda de Oliveira, Lívia
2018-04-03
A major drawback of the conventional descriptive profile (CDP) in sensory evaluation is the long time spent in panel training. The use of rapid descriptive methods (RDM) has therefore increased significantly. Some of them have been compared with CDP for validation. In the health sciences, systematic reviews (SR) are performed to evaluate the validation of diagnostic tests in relation to a gold-standard method. SR follow a well-defined protocol to summarize research evidence and to evaluate the quality of the studies against determined criteria. We adapted the SR protocol to evaluate the validation of RDM against CDP as satisfactory procedures to obtain food characterization. We used the "Population, Intervention, Comparison, Outcome, Study (PICOS)" framework to design the research, in which "Population" was food/beverages, "Intervention" was RDM, "Comparison" was CDP as the gold standard, "Outcome" was the ability of RDM to generate descriptive profiles similar to those of CDP, and "Studies" was sensory descriptive analyses. The proportion of studies concluding for similarity of the RDM with CDP ranged from 0% to 100%. Low and moderate risk of bias was found in 87% and 13% of the studies, respectively, supporting the conclusions of the SR. RDM with semi-trained assessors and evaluation of individual attributes presented higher percentages of concordance with CDP.
Validation of a new ELISA method for in vitro potency testing of hepatitis A vaccines.
Morgeaux, S; Variot, P; Daas, A; Costanzo, A
2013-01-01
The goal of the project was to standardise a new in vitro method in replacement of the existing standard method for the determination of hepatitis A virus antigen content in hepatitis A vaccines (HAV) marketed in Europe. This became necessary due to issues with the method used previously, requiring the use of commercial test kits. The selected candidate method, not based on commercial kits, had already been used for many years by an Official Medicines Control Laboratory (OMCL) for routine testing and batch release of HAV. After a pre-qualification phase (Phase 1) that showed the suitability of the commercially available critical ELISA reagents for the determination of antigen content in marketed HAV present on the European market, an international collaborative study (Phase 2) was carried out in order to fully validate the method. Eleven laboratories took part in the collaborative study. They performed assays with the candidate standard method and, in parallel, for comparison purposes, with their own in-house validated methods where these were available. The study demonstrated that the new assay provides a more reliable and reproducible method when compared to the existing standard method. A good correlation of the candidate standard method with the in vivo immunogenicity assay in mice was shown previously for both potent and sub-potent (stressed) vaccines. Thus, the new standard method validated during the collaborative study may be implemented readily by manufacturers and OMCLs for routine batch release but also for in-process control or consistency testing. The new method was approved in October 2012 by Group of Experts 15 of the European Pharmacopoeia (Ph. Eur.) as the standard method for in vitro potency testing of HAV. The relevant texts will be revised accordingly. Critical reagents such as coating reagent and detection antibodies have been adopted by the Ph. Eur. Commission and are available from the EDQM as Ph. Eur. Biological Reference Reagents (BRRs).
Validity Argument for Assessing L2 Pragmatics in Interaction Using Mixed Methods
ERIC Educational Resources Information Center
Youn, Soo Jung
2015-01-01
This study investigates the validity of assessing L2 pragmatics in interaction using mixed methods, focusing on the evaluation inference. Open role-plays that are meaningful and relevant to the stakeholders in an English for Academic Purposes context were developed for classroom assessment. For meaningful score interpretations and accurate…
USDA-ARS?s Scientific Manuscript database
Current methods for assessing children's dietary intake, such as interviewer-administered 24-h dietary recall (24-h DR), are time consuming and resource intensive. Self-administered instruments offer a low-cost diet assessment method for use with children. The present study assessed the validity of ...
The Role of Simulation in Microsurgical Training.
Evgeniou, Evgenios; Walker, Harriet; Gujral, Sameer
Simulation has been established as an integral part of microsurgical training. The aim of this study was to assess and categorize the various simulation models in relation to the complexity of the microsurgical skill being taught and analyze the assessment methods commonly employed in microsurgical simulation training. Numerous courses have been established using simulation models. These models can be categorized, according to the level of complexity of the skill being taught, into basic, intermediate, and advanced. Microsurgical simulation training should be assessed using validated assessment methods. Assessment methods vary significantly from subjective expert opinions to self-assessment questionnaires and validated global rating scales. The appropriate assessment method should carefully be chosen based on the simulation modality. Simulation models should be validated, and a model with appropriate fidelity should be chosen according to the microsurgical skill being taught. Assessment should move from traditional simple subjective evaluations of trainee performance to validated tools. Future studies should assess the transferability of skills gained during simulation training to the real-life setting. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
AZARI, Nadia; SOLEIMANI, Farin; VAMEGHI, Roshanak; SAJEDI, Firoozeh; SHAHSHAHANI, Soheila; KARIMI, Hossein; KRASKIAN, Adis; SHAHROKHI, Amin; TEYMOURI, Robab; GHARIB, Masoud
2017-01-01
Objective Bayley Scales of Infant & Toddler Development is a well-known diagnostic developmental assessment tool for children aged 1–42 months. Our aim was to investigate the validity and reliability of this scale in Persian-speaking children. Materials & Methods The method was descriptive-analytic. Translation, back-translation and cultural adaptation were done. Content and face validity of the translated scale were determined by experts' opinions. Overall, 403 children aged 1 to 42 months were recruited from health centers of Tehran during the years 2013-2014 for developmental assessment in the cognitive, communicative (receptive and expressive) and motor (fine and gross) domains. Reliability of the scale was calculated through three methods: internal consistency using Cronbach's alpha coefficient, test-retest and inter-rater methods. Construct validity was calculated using factor analysis and comparison of mean scores. Results Cultural and linguistic changes were made in items of all domains, especially on the communication subscale. Content and face validity of the test were approved by experts' opinions. Cronbach's alpha coefficient was above 0.74 in all domains. Pearson correlation coefficients in the various domains were ≥0.982 for the test-retest method and ≥0.993 for the inter-rater method. Construct validity of the test was approved by factor analysis. Moreover, the mean scores for the different age groups were compared, and statistically significant differences were observed between the mean scores of different age groups, which confirms the validity of the test. Conclusion The Bayley Scales of Infant and Toddler Development is a valid and reliable tool for child developmental assessment in Persian-language children. PMID:28277556
Validity of three clinical performance assessments of internal medicine clerks.
Hull, A L; Hodder, S; Berger, B; Ginsberg, D; Lindheim, N; Quan, J; Kleinhenz, M E
1995-06-01
To analyze the construct validity of three methods to assess the clinical performances of internal medicine clerks. A multitrait-multimethod (MTMM) study was conducted at the Case Western Reserve University School of Medicine to determine the convergent and divergent validity of a clinical evaluation form (CEF) completed by faculty and residents, an objective structured clinical examination (OSCE), and the medicine subject test of the National Board of Medical Examiners. Three traits were involved in the analysis: clinical skills, knowledge, and personal characteristics. A correlation matrix was computed for 410 third-year students who completed the clerkship between August 1988 and July 1991. There was a significant (p < .01) convergence of the four correlations that assessed the same traits by using different methods. However, the four convergent correlations were of moderate magnitude (ranging from .29 to .47). Divergent validity was assessed by comparing the magnitudes of the convergence correlations with the magnitudes of correlations among unrelated assessments (i.e., different traits by different methods). Seven of nine possible coefficients were smaller than the convergent coefficients, suggesting evidence of divergent validity. A significant CEF method effect was identified. There was convergent validity and some evidence of divergent validity with a significant method effect. The findings were similar for correlations corrected for attenuation. Four conclusions were reached: (1) the reliability of the OSCE must be improved, (2) the CEF ratings must be redesigned to further discriminate among the specific traits assessed, (3) additional methods to assess personal characteristics must be instituted, and (4) several assessment methods should be used to evaluate individual student performances.
Konda, Ravi Kumar; Chandu, Babu Rao; Challa, B.R.; Kothapalli, Chandrasekhar B.
2012-01-01
The most suitable bio-analytical method based on liquid–liquid extraction has been developed and validated for quantification of Rasagiline in human plasma. Rasagiline-13C3 mesylate was used as an internal standard for Rasagiline. Zorbax Eclipse Plus C18 (2.1 mm×50 mm, 3.5 μm) column provided chromatographic separation of analyte followed by detection with mass spectrometry. The method involved simple isocratic chromatographic condition and mass spectrometric detection in the positive ionization mode using an API-4000 system. The total run time was 3.0 min. The proposed method has been validated with the linear range of 5–12000 pg/mL for Rasagiline. The intra-run and inter-run precision values were within 1.3%–2.9% and 1.6%–2.2% respectively for Rasagiline. The overall recovery for Rasagiline and Rasagiline-13C3 mesylate analog was 96.9% and 96.7% respectively. This validated method was successfully applied to the bioequivalence and pharmacokinetic study of human volunteers under fasting condition. PMID:29403764
Saloheimo, T; González, S A; Erkkola, M; Milauskas, D M; Meisel, J D; Champagne, C M; Tudor-Locke, C; Sarmiento, O; Katzmarzyk, P T; Fogelholm, M
2015-01-01
Objective: The main aim of this study was to assess the reliability and validity of a food frequency questionnaire with 23 food groups (I-FFQ) among a sample of 9–11-year-old children from three countries that differ in economic development and income distribution, and to assess differences between country sites. Furthermore, we assessed factors associated with the I-FFQ's performance. Methods: This was an ancillary study of the International Study of Childhood Obesity, Lifestyle and the Environment. The reliability (n=321) and validity (n=282) components of this study had the same participants. Participation rates were 95% and 70%, respectively. Participants completed two I-FFQs with a mean interval of 4.9 weeks to assess reliability. A 3-day pre-coded food diary (PFD) was used as the reference method in the validity analyses. Wilcoxon signed-rank tests, intraclass correlation coefficients and cross-classifications were used to assess the reliability of the I-FFQ. Spearman correlation coefficients, percentage differences and cross-classifications were used to assess the validity of the I-FFQ. A logistic regression model was used to assess the relation of selected variables with the estimate of validity. Analyses based on information in the PFDs were performed to assess how participants interpreted the food groups. Results: Reliability correlation coefficients ranged from 0.37 to 0.78, and gross misclassification for all food groups was <5%. Validity correlation coefficients were below 0.5 for 22/23 food groups, and they differed among country sites. For validity, gross misclassification was <5% for 22/23 food groups. No systematic over- or underestimation appeared for 19/23 food groups. Logistic regression showed that country of participation and parental education were associated (P⩽0.05) with the validity of the I-FFQ. Analyses of children's interpretation of food groups suggested that the meaning of most food groups was understood by the children. Conclusion: The I-FFQ is a moderately reliable method, and its validity ranged from low to moderate depending on food group and country site. PMID:27152180
ERIC Educational Resources Information Center
Aebi, Marcel; Plattner, Belinda; Metzke, Christa Winkler; Bessler, Cornelia; Steinhausen, Hans-Christoph
2013-01-01
Background: Different dimensions of oppositional defiant disorder (ODD) have been found as valid predictors of further mental health problems and antisocial behaviors in youth. The present study aimed at testing the construct, concurrent, and predictive validity of ODD dimensions derived from parent- and self-report measures. Method: Confirmatory…
Evidence-based dentistry: analysis of dental anxiety scales for children.
Al-Namankany, A; de Souza, M; Ashley, P
2012-03-09
To review paediatric dental anxiety measures (DAMs) and assess the statistical methods used for validation and their clinical implications. Four computerised databases were searched for DAMs published between 1960 and January 2011, using pre-specified search terms, to assess the method of validation, including reliability (intra-observer agreement, i.e. 'repeatability or stability', and inter-observer agreement, i.e. 'reproducibility') and all types of validity. Fourteen paediatric DAMs were validated predominantly in schools rather than in the clinical setting, while five of the DAMs were not validated at all. The DAMs that were validated were tested against other paediatric DAMs which may not themselves have been validated previously. Reliability was not assessed in four of the DAMs. However, in the validation studies that did assess reliability, it was usually 'good' or 'acceptable'. None of the current DAMs used a formal sample size technique. Diversity was seen between the studies, ranging from a few simple pictograms to lists of questions reported by either the individual or an observer. To date there is no scale that can be considered a gold standard, and there is a need to further develop an anxiety scale with a cognitive component for children and adolescents.
Large scale study of multiple-molecule queries
2009-01-01
Background In ligand-based screening, as well as in other chemoinformatics applications, one seeks to effectively search large repositories of molecules in order to retrieve molecules that are similar, typically to a single molecule lead. However, in some cases, multiple molecules from the same family are available to seed the query and search for other members of the same family. Multiple-molecule query methods have been less studied than single-molecule query methods. Furthermore, previous studies have relied on proprietary data and sometimes have not used proper cross-validation methods to assess the results. In contrast, here we develop and compare multiple-molecule query methods using several large publicly available data sets and background data sets. We also create a framework based on a strict cross-validation protocol to allow unbiased benchmarking for direct comparison in future studies across several performance metrics. Results Fourteen different multiple-molecule query methods were defined and benchmarked using: (1) 41 publicly available data sets of related molecules with similar biological activity; and (2) publicly available background data sets consisting of up to 175,000 molecules randomly extracted from the ChemDB database and other sources. Eight of the fourteen methods were parameter free, and six of them fit one or two free parameters to the data using a careful cross-validation protocol. All the methods were assessed and compared for their ability to retrieve members of the same family against the background data set using several performance metrics, including the Area Under the Accumulation Curve (AUAC), Area Under the Curve (AUC), F1-measure, and BEDROC metrics. Consistent with the previous literature, the best parameter-free methods are the MAX-SIM and MIN-RANK methods, which score a molecule against a family by the maximum similarity, or minimum rank, obtained across the family. One new parameterized method introduced in this study and two previously defined methods, the Exponential Tanimoto Discriminant (ETD), the Tanimoto Power Discriminant (TPD), and the Binary Kernel Discriminant (BKD), outperform most other methods but are more complex, requiring one or two parameters to be fit to the data. Conclusion Fourteen methods for multiple-molecule querying of chemical databases, including the novel methods ETD and TPD, are validated using publicly available data sets, standard cross-validation protocols, and established metrics. The best results are obtained with ETD, TPD, BKD, MAX-SIM, and MIN-RANK. These results can be replicated and compared with the results of future studies using data freely downloadable from http://cdb.ics.uci.edu/. PMID:20298525
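The two best parameter-free schemes are simple to state: MAX-SIM scores a candidate by its maximum Tanimoto similarity to any query molecule in the family, and MIN-RANK by the best (smallest) rank it attains across the single-query similarity rankings. The sketch below implements both on toy binary fingerprints; the fingerprints and family are invented for illustration and are not the ChemDB data.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprint vectors."""
    both = np.sum(a & b)
    either = np.sum(a | b)
    return both / either if either else 0.0

rng = np.random.default_rng(5)
n_bits = 64
family = rng.integers(0, 2, (4, n_bits))       # multiple-molecule query (one known family)
database = rng.integers(0, 2, (1000, n_bits))  # background molecules to be ranked

# Similarity of every database molecule to every query molecule
sims = np.array([[tanimoto(q, m) for m in database] for q in family])

# MAX-SIM: best similarity across the family; MIN-RANK: best rank across single-query rankings
max_sim = sims.max(axis=0)
ranks = np.argsort(np.argsort(-sims, axis=1), axis=1) + 1   # rank 1 = most similar, per query
min_rank = ranks.min(axis=0)

print("top 5 by MAX-SIM :", np.argsort(-max_sim)[:5])
print("top 5 by MIN-RANK:", np.argsort(min_rank)[:5])
```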
Misu, Shogo; Asai, Tsuyoshi; Ono, Rei; Sawa, Ryuichi; Tsutsumimoto, Kota; Ando, Hiroshi; Doi, Takehiko
2017-09-01
The heel is likely a suitable location to which inertial sensors are attached for the detection of gait events. However, there are few studies to detect gait events and determine temporal gait parameters using sensors attached to the heels. We developed two methods to determine temporal gait parameters: detecting heel-contact using acceleration and detecting toe-off using angular velocity data (acceleration-angular velocity method; A-V method), and detecting both heel-contact and toe-off using angular velocity data (angular velocity-angular velocity method; V-V method). The aim of this study was to examine the concurrent validity of the A-V and V-V methods against the standard method, and to compare their accuracy. Temporal gait parameters were measured in 10 younger and 10 older adults. The intra-class correlation coefficients were excellent in both methods compared with the standard method (0.80 to 1.00). The root mean square errors of stance and swing time in the A-V method were smaller than the V-V method in older adults, although there were no significant discrepancies in the other comparisons. Our study suggests that inertial sensors attached to the heels, using the A-V method in particular, provide a valid measurement of temporal gait parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
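The exact detection rules of the A-V and V-V methods are not reproduced here, but the general approach (event detection from peaks in heel-mounted acceleration and angular-velocity signals, then stance time as the interval from heel-contact to the next toe-off) can be sketched as below on synthetic signals. The peak-picking thresholds and the synthetic waveforms are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
stride = 1.1                                 # assumed stride duration (s)

# Synthetic heel signals: sharp acceleration bursts at heel-contact,
# angular-velocity peaks near toe-off (about 60% of the stride later)
hc_times = np.arange(0.5, 10, stride)
to_times = hc_times + 0.6 * stride
acc = sum(np.exp(-((t - hc) / 0.02) ** 2) for hc in hc_times) + 0.05 * np.random.default_rng(0).normal(size=t.size)
gyro = sum(np.exp(-((t - to) / 0.03) ** 2) for to in to_times) + 0.05 * np.random.default_rng(1).normal(size=t.size)

# A-V style detection: heel-contact from acceleration peaks, toe-off from angular-velocity peaks
hc_idx, _ = find_peaks(acc, height=0.5, distance=int(0.5 * fs))
to_idx, _ = find_peaks(gyro, height=0.5, distance=int(0.5 * fs))

# Temporal parameters: stance = heel-contact to next toe-off, stride = heel-contact to heel-contact
stance = [t[to_idx][t[to_idx] > hc][0] - hc for hc in t[hc_idx] if np.any(t[to_idx] > hc)]
strides = np.diff(t[hc_idx])
print(f"mean stance time = {np.mean(stance):.2f} s, mean stride time = {np.mean(strides):.2f} s")
```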
Schärer, Lars O; Krienke, Ute J; Graf, Sandra-Mareike; Meltzer, Katharina; Langosch, Jens M
2015-03-14
Long-term monitoring in bipolar affective disorders constitutes an important therapeutic and preventive method. The present study examines the validity of the Personal Life-Chart App (PLC App), in both German and English. This App is based on the National Institute of Mental Health's Life-Chart Method, the de facto standard for long-term monitoring in the treatment of bipolar disorders. Methods were largely replicated from two previous Life-Chart studies. The participants documented Life-Charts with the PLC App on a daily basis. Clinicians assessed manic and depressive symptoms in clinical interviews using the Inventory of Depressive Symptomatology, clinician-rated (IDS-C) and the Young Mania Rating Scale (YMRS), on average once a month. Spearman correlations of the total scores of the IDS-C and YMRS were calculated with both the Life-Chart functional impairment rating and the mood rating documented with the PLC App. 44 subjects used the PLC App in German and 10 subjects used the PLC App in English. 118 clinical interviews from the German sub-sample and 97 from the English sub-sample were analysed separately. The results in both sub-samples are similar to those of previous Life-Chart validation studies. Again, statistically significant high correlations were found between the Life-Chart function rating assigned through the PLC App and well-established observer-rated methods. Correlations were again weaker for the Life-Chart mood rating than for the Life-Chart functional impairment rating. No relevant correlation was found between the Life-Chart mood rating and the YMRS in the German sub-sample. This study provides further evidence that the Life-Chart method is a valid tool for the recognition of both manic and depressive episodes. Documenting Life-Charts with the PLC App (English and German) does not seem to impair the validity of patient ratings.
Experiences Using Formal Methods for Requirements Modeling
NASA Technical Reports Server (NTRS)
Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David
1996-01-01
This paper describes three cases studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, the formal modeling provided a cost effective enhancement of the existing verification and validation processes. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.
Pérez-Lozano, P; García-Montoya, E; Orriols, A; Miñarro, M; Ticó, J R; Suñé-Negre, J M
2005-10-04
A new RP-HPLC method has been developed and validated for the simultaneous determination of benzocaine, two preservatives (propylparaben (nipasol) and benzyl alcohol) and degradation products of benzocaine in a semisolid pharmaceutical dosage form (benzocaine gel). The method uses a Nucleosil 120 C18 column and gradient elution. The mobile phase consisted of a mixture of methanol and glacial acetic acid (10%, v/v) in different proportions according to a time-scheduled programme, pumped at a flow rate of 2.0 ml min(-1). The DAD detector was set at 258 nm. The validation study was carried out following the ICH guidelines and demonstrated that the new analytical method meets the required reliability characteristics and maintains, over time, the fundamental validation criteria: selectivity, linearity, precision, accuracy and sensitivity. The method was applied during the quality control of benzocaine gel in order to quantify the drug (benzocaine), preservatives and degradation products, and proved to be a suitable, rapid and reliable quality control method.
Alternative methods to determine headwater benefits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Y.S.; Perlack, R.D.; Sale, M.J.
1997-11-10
In 1992, the Federal Energy Regulatory Commission (FERC) began using a Flow Duration Analysis (FDA) methodology to assess headwater benefits in river basins where use of the Headwater Benefits Energy Gains (HWBEG) model may not result in significant improvements in modeling accuracy. The purpose of this study is to validate the accuracy and appropriateness of the FDA method for determining energy gains in less complex basins. This report presents the results of Oak Ridge National Laboratory's (ORNL's) validation of the FDA method. The validation is based on a comparison of energy gains using the FDA method with energy gains calculated using the HWBEG model. Comparisons of energy gains are made on a daily and monthly basis for a complex river basin (the Alabama River Basin) and a basin that is considered relatively simple hydrologically (the Stanislaus River Basin). In addition to validating the FDA method, ORNL was asked to suggest refinements and improvements to the FDA method. Refinements and improvements to the FDA method were carried out using the James River Basin as a test case.
Katagiri, Ryoko; Muto, Go; Sasaki, Satoshi
2018-06-01
A validated questionnaire is not typically used for dietary assessment in the health check-up counseling provided by occupational health nurses in Japan. We conducted a qualitative study to investigate the barriers to, and factors promoting, the use of validated questionnaires. Ten occupational health nurses and three registered dietitians working at a health insurance society were recruited for this study, which used an open-ended, free-description questionnaire. The inhibiting factors noted were "Feeling of satisfaction with the current method," "Recognition of importance," and "Sense of burden from the questionnaire"; the promoting factors were "Feeling the current method is insufficient," "Recognition of importance," "Reduction in the feeling of burden after the answer," "Expectation of and reaction to the result," and "Expectation for the effect of the counseling." Since a standardized dietary assessment method in health counseling might be desirable for harmonizing work with disease prevention in the occupational field, the findings of this study could suggest appropriate targets for reducing health professionals' confusion concerning the use of validated questionnaires.
Schmidt, Kathrin S; Mankertz, Joachim
2018-06-01
A sensitive and robust LC-MS/MS method allowing the rapid screening and confirmation of selective androgen receptor modulators in bovine urine was developed and successfully validated according to Commission Decision 2002/657/EC, chapter 3.1.3 'alternative validation', by applying a matrix-comprehensive in-house validation concept. The confirmation of the analytes in the validation samples was achieved both on the basis of the MRM ion ratios, as laid down in Commission Decision 2002/657/EC, and by comparison of their enhanced product ion (EPI) spectra with a reference mass spectral library, making use of the QTRAP technology. Here, in addition to the MRM survey scan, EPI spectra were generated in a data-dependent way according to an information-dependent acquisition criterion. Moreover, stability studies according to an isochronous approach proved the stability of the analytes in solution and in matrix for at least the duration of the validation study. To identify factors that have a significant influence on the test method in routine analysis, a factorial effect analysis was performed. To this end, factors considered relevant for the method in routine analysis (e.g. operator, storage duration of the extracts before measurement, different cartridge lots and different hydrolysis conditions) were systematically varied on two levels. The examination of the extent to which these factors influence the measurement results of the individual analytes showed that none of the validation factors exerts a significant influence on the measurement results.
Sadeghi, Fahimeh; Navidpour, Latifeh; Bayat, Sima; Afshar, Minoo
2013-01-01
A green, simple, and stability-indicating RP-HPLC method was developed for the determination of diltiazem in topical preparations. The separation was performed on a C18 analytical column using a mobile phase consisting of ethanol and phosphoric acid solution (pH 2.5) (35:65, v/v). The column temperature was set at 50°C, and quantitation was achieved with UV detection at 240 nm. In forced degradation studies, the drug was subjected to oxidation, hydrolysis, photolysis, and heat. The method was validated for specificity, selectivity, linearity, precision, accuracy, and robustness. The applied procedure was found to be linear over the diltiazem concentration range of 0.5–50 μg/mL (r² = 0.9996). Precision was evaluated by replicate analysis, in which the relative standard deviation (RSD) values for peak areas were below 2.0%. The recoveries obtained (99.25%–101.66%) ensured the accuracy of the developed method. The degradation products as well as the pharmaceutical excipients were well resolved from the pure drug. The expanded uncertainty of the method (5.63%) was also estimated from the method validation data. Accordingly, the proposed validated and sustainable procedure proved to be suitable for routine analysis and stability studies of diltiazem in pharmaceutical preparations. PMID:24163778
A study in the founding of applied behavior analysis through its publications.
Morris, Edward K; Altus, Deborah E; Smith, Nathaniel G
2013-01-01
This article reports a study of the founding of applied behavior analysis through its publications. Our methods included hand searches of sources (e.g., journals, reference lists), search terms (i.e., early, applied, behavioral, research, literature), inclusion criteria (e.g., the field's applied dimension), and challenges to their face and content validity. Our results were 36 articles published between 1959 and 1967 that we organized into 4 groups: 12 in 3 programs of research and 24 others. Our discussion addresses (a) limitations in our method (e.g., the completeness of our search), (b) challenges to the validity of our methods and results (e.g., convergent validity), and (c) priority claims about the field's founding. We conclude that the claims are irresolvable because identification of the founding publications depends significantly on methods and because the field's founding was an evolutionary process. We close with suggestions for future research.
A Study in the Founding of Applied Behavior Analysis Through Its Publications
Morris, Edward K.; Altus, Deborah E.; Smith, Nathaniel G.
2013-01-01
This article reports a study of the founding of applied behavior analysis through its publications. Our methods included hand searches of sources (e.g., journals, reference lists), search terms (i.e., early, applied, behavioral, research, literature), inclusion criteria (e.g., the field's applied dimension), and challenges to their face and content validity. Our results were 36 articles published between 1959 and 1967 that we organized into 4 groups: 12 in 3 programs of research and 24 others. Our discussion addresses (a) limitations in our method (e.g., the completeness of our search), (b) challenges to the validity of our methods and results (e.g., convergent validity), and (c) priority claims about the field's founding. We conclude that the claims are irresolvable because identification of the founding publications depends significantly on methods and because the field's founding was an evolutionary process. We close with suggestions for future research. PMID:25729133
Experiences Using Lightweight Formal Methods for Requirements Modeling
NASA Technical Reports Server (NTRS)
Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David
1997-01-01
This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.
ERIC Educational Resources Information Center
Pogrund, Rona L.; Darst, Shannon; Munro, Michael P.
2015-01-01
Introduction: The purpose of this study was to begin validation of a scale that will be used by teachers of students with visual impairments to determine appropriate recommended type and frequency of services for their students based on identified student need. Methods: Validity and reliability of the Visual Impairment Scale of Service Intensity…
McLean, Rachael M; Farmer, Victoria L; Nettleton, Alice; Cameron, Claire M; Cook, Nancy R; Campbell, Norman R C
2017-12-01
Food frequency questionnaires (FFQs) are often used to assess dietary sodium intake, although 24-hour urinary excretion is the most accurate measure of intake. The authors conducted a systematic review to investigate whether FFQs are a reliable and valid way of measuring usual dietary sodium intake. Results from 18 studies are described in this review, including 16 validation studies. The methods of study design and analysis varied widely with respect to FFQ instrument, number of 24-hour urine collections collected per participant, methods used to assess completeness of urine collections, and statistical analysis. Overall, there was poor agreement between estimates from FFQ and 24-hour urine. The authors suggest a framework for validation and reporting based on a consensus statement (2004), and recommend that all FFQs used to estimate dietary sodium intake undergo validation against multiple 24-hour urine collections. ©2017 Wiley Periodicals, Inc.
Iordanova, B.; Rosenbaum, D.; Norman, D.; Weiner, M.; Studholme, C.
2007-01-01
BACKGROUND AND PURPOSE Brain volumetry is widely used for evaluating tissue degeneration; however, the parcellation methods are rarely validated and use arbitrary planes to mark boundaries of brain regions. The goal of this study was to develop, validate, and apply an MR imaging tracing method for the parcellation of 3 major gyri of the frontal lobe, which uses only local landmarks intrinsic to the structures of interest, without the need for global reorientation or the use of dividing planes or lines. METHODS Studies were performed on 25 subjects—healthy controls and subjects diagnosed with Lewy body dementia and Alzheimer disease—with significant variation in the underlying gyral anatomy and state of atrophy. The protocol was evaluated by using multiple observers tracing scans of subjects diagnosed with neurodegenerative disease and those aging normally, and the results were compared by spatial overlap agreement. To confirm the results, observers marked the same locations in different brains. We illustrated the variabilities of the key boundaries that pose the greatest challenge to defining consistent parcellations across subjects. RESULTS The resulting gyral volumes were evaluated, and their consistency across raters was used as an additional assessment of the validity of our marking method. The agreement on a scale of 0–1 was found to be 0.83 spatial and 0.90 volumetric for the same rater and 0.85 spatial and 0.90 volumetric for 2 different raters. The results revealed that the protocol remained consistent across different neurodegenerative conditions. CONCLUSION Our method provides a simple and reliable way for the volumetric evaluation of frontal lobe neurodegeneration and can be used as a resource for larger comparative studies as well as a validation procedure of automated algorithms. PMID:16971629
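The abstract does not name the specific overlap index; a Dice-style spatial overlap and a simple volumetric agreement ratio, as sketched below on toy label masks, are the usual choices for this kind of inter-rater comparison and reproduce the 0-1 scale described. The masks here are synthetic placeholders, not the study's tracings.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Spatial overlap agreement between two binary parcellation masks (0-1 scale)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def volume_agreement(mask_a, mask_b):
    """Volumetric agreement: ratio of the smaller to the larger labelled volume."""
    va, vb = mask_a.sum(), mask_b.sum()
    return min(va, vb) / max(va, vb)

# Toy example: two raters' tracings of the same gyrus on a 3-D label grid
rater1 = np.zeros((64, 64, 64), dtype=bool)
rater1[20:44, 20:44, 20:44] = True
rater2 = np.roll(rater1, shift=2, axis=0)      # second tracing, slightly shifted

print(f"spatial overlap (Dice) = {dice(rater1, rater2):.2f}")
print(f"volumetric agreement   = {volume_agreement(rater1, rater2):.2f}")
```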
Adolescent Personality: A Five-Factor Model Construct Validation
ERIC Educational Resources Information Center
Baker, Spencer T.; Victor, James B.; Chambers, Anthony L.; Halverson, Jr., Charles F.
2004-01-01
The purpose of this study was to investigate convergent and discriminant validity of the five-factor model of adolescent personality in a school setting using three different raters (methods): self-ratings, peer ratings, and teacher ratings. The authors investigated validity through a multitrait-multimethod matrix and a confirmatory factor…
DOT National Transportation Integrated Search
1979-03-01
There are several conditions that can influence the calculation of the statistical validity of a test battery such as that used to select Air Traffic Control Specialists. Two conditions of prime importance to statistical validity are recruitment pr...
A Delphi Study and Initial Validation of Counselor Supervision Competencies
ERIC Educational Resources Information Center
Neuer Colburn, Anita A.; Grothaus, Tim; Hays, Danica G.; Milliken, Tammi
2016-01-01
The authors addressed the lack of supervision training standards for doctoral counseling graduates by developing and validating an initial list of supervision competencies. They used content analysis, Delphi polling, and content validity methods to generate a list, vetted by 2 different panels of supervision experts, of 33 competencies grouped…
Najimi, Arash; Mostafavi, Firoozeh; Sharifirad, Gholamreza; Golshiri, Parastoo
2017-01-01
BACKGROUND: This study aimed to develop and evaluate a scale of self-efficacy in adherence to treatment in Iranian patients with hypertension. METHODS: A mixed-methods study was conducted in two stages: in the first phase, a qualitative study was done using content analysis of in-depth, semi-structured interviews. After data analysis, a draft of the tool was prepared, with items selected based on the extracted concepts. In the second phase, the validity and reliability of the instrument were assessed in a quantitative study, in which the prepared instrument was administered to 612 participants. To test construct validity and internal consistency, exploratory factor analysis and Cronbach's alpha were used, respectively. To study the validity of the final scale, the average self-efficacy score of patients with controlled hypertension was compared with that of patients with uncontrolled hypertension. RESULTS: Overall, 16 patients were interviewed. Twenty-six items were developed to assess different concepts of self-efficacy. Concept-related items were extracted from the interviews to study the face validity of the tool from the patients' point of view. Four items were deleted because they scored 0.79 in content validity; the mean content validity of the questionnaire was 0.85. Items grouped into two factors with an eigenvalue >1, and four items with factor loadings <0.4 were deleted. Reliability was 0.84 for the entire instrument. CONCLUSION: The self-efficacy scale for patients with hypertension is a valid and reliable instrument that can effectively evaluate self-efficacy in medication adherence in the management of hypertension. PMID:29114551
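Internal consistency in scale validation studies like this one is usually summarized with Cronbach's alpha over a respondent-by-item score matrix. A minimal Python sketch of that statistic follows; the item scores are invented for illustration and are not the study data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 5-item Likert data for 8 respondents
scores = np.array([
    [4, 4, 5, 3, 4],
    [2, 3, 2, 2, 3],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [4, 5, 4, 4, 4],
    [1, 2, 1, 2, 2],
    [3, 4, 3, 3, 3],
    [5, 4, 5, 5, 4],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```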
Benni, Paul B; MacLeod, David; Ikeda, Keita; Lin, Hung-Mo
2018-04-01
We describe the validation methodology for the NIRS-based FORE-SIGHT ELITE® (CAS Medical Systems, Inc., Branford, CT, USA) tissue oximeter for cerebral and somatic tissue oxygen saturation (StO2) measurements for adult subjects, submitted to the United States Food and Drug Administration (FDA) to obtain clearance for clinical use. This validation methodology evolved from a history of NIRS validations in the literature and the FDA-recommended use of Deming regression and bootstrapping statistical validation methods. For cerebral validation, forehead cerebral StO2 measurements were compared to a weighted 70:30 reference (REFCXB) of co-oximeter internal jugular venous and arterial blood saturation of healthy adult subjects during a controlled hypoxia sequence, with a sensor placed on the forehead. For somatic validation, somatic StO2 measurements were compared to a weighted 70:30 reference (REFCXS) of co-oximetry central venous and arterial saturation values following a similar protocol, with sensors placed on the flank, quadriceps muscle, and calf muscle. With informed consent, 25 subjects successfully completed the cerebral validation study. The bias and precision (1 SD) of cerebral StO2 compared to REFCXB was -0.14 ± 3.07%. With informed consent, 24 subjects successfully completed the somatic validation study. The bias and precision of somatic StO2 compared to REFCXS was 0.04 ± 4.22%, using the average of the flank, quadriceps, and calf StO2 measurements to best represent the global whole-body REFCXS. The NIRS validation methods presented potentially provide a reliable means to test NIRS monitors and qualify them for clinical use.
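Bias and precision figures reported in the form "-0.14 ± 3.07%" are typically the mean and standard deviation of the device-minus-reference differences, and the FDA-recommended bootstrapping can attach a confidence interval to the bias. The sketch below illustrates that calculation on simulated paired readings; it is not the manufacturer's validation analysis, and the Deming regression step described in the abstract is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical paired readings: device StO2 vs weighted co-oximetry reference
reference = rng.uniform(50, 90, size=25)
device = reference + rng.normal(-0.14, 3.0, size=25)

diff = device - reference
bias, precision = diff.mean(), diff.std(ddof=1)
print(f"bias = {bias:.2f}%, precision (1 SD) = {precision:.2f}%")

# Nonparametric bootstrap confidence interval for the bias
boot = [rng.choice(diff, size=diff.size, replace=True).mean() for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for bias: [{lo:.2f}, {hi:.2f}]%")
```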
The Arthroscopic Surgical Skill Evaluation Tool (ASSET)
Koehler, Ryan J.; Amsdell, Simon; Arendt, Elizabeth A; Bisson, Leslie J; Braman, Jonathan P; Butler, Aaron; Cosgarea, Andrew J; Harner, Christopher D; Garrett, William E; Olson, Tyson; Warme, Winston J.; Nicandri, Gregg T.
2014-01-01
Background: Surgeries employing arthroscopic techniques are among the most commonly performed in orthopaedic clinical practice; however, valid and reliable methods of assessing the arthroscopic skill of orthopaedic surgeons are lacking. Hypothesis: The Arthroscopic Surgery Skill Evaluation Tool (ASSET) will demonstrate content validity, concurrent criterion-oriented validity, and reliability when used to assess the technical ability of surgeons performing diagnostic knee arthroscopy on cadaveric specimens. Study Design: Cross-sectional study; Level of evidence, 3. Methods: Content validity was determined by a group of seven experts using a Delphi process. Intra-articular performance of a right and left diagnostic knee arthroscopy was recorded for twenty-eight residents and two sports medicine fellowship-trained attending surgeons. Subject performance was assessed by two blinded raters using the ASSET. Concurrent criterion-oriented validity, inter-rater reliability, and test-retest reliability were evaluated. Results: Content validity: the content development group identified 8 arthroscopic skill domains to evaluate using the ASSET. Concurrent criterion-oriented validity: significant differences in total ASSET score (p<0.05) between novice, intermediate, and advanced experience groups were identified. Inter-rater reliability: the ASSET scores assigned by each rater were strongly correlated (r=0.91, p<0.01) and the intra-class correlation coefficient between raters for the total ASSET score was 0.90. Test-retest reliability: there was a significant correlation between ASSET scores for both procedures attempted by each individual (r=0.79, p<0.01). Conclusion: The ASSET appears to be a useful, valid, and reliable method for assessing surgeon performance of diagnostic knee arthroscopy in cadaveric specimens. Studies are ongoing to determine its generalizability to other procedures as well as to the live OR and other simulated environments. PMID:23548808
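The intra-class correlation coefficient quoted for the total ASSET score can be computed from an ANOVA decomposition of a subjects-by-raters score matrix. The sketch below uses the one-way random-effects form ICC(1,1) on invented scores; the abstract does not state which ICC variant was used, so treat both the model choice and the data as assumptions.

```python
import numpy as np

def icc_1_1(ratings):
    """One-way random-effects ICC(1,1) for an (n_subjects x k_raters) matrix."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    subj_means = x.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)          # between-subject mean square
    msw = ((x - subj_means[:, None]) ** 2).sum() / (n * (k - 1))   # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical total ASSET scores from two blinded raters for 10 subjects
scores = np.array([
    [32, 34], [28, 27], [40, 41], [22, 25], [36, 35],
    [30, 31], [26, 24], [38, 39], [20, 21], [34, 33],
])
print(f"ICC(1,1) = {icc_1_1(scores):.2f}")
```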
Castro-Vega, Iciar; Veses Martín, Silvia; Cantero Llorca, Juana; Barrios Marta, Cristina; Bañuls, Celia; Hernández-Mijares, Antonio
2018-03-09
Nutritional screening allows for the detection of nutritional risk. Validated tools should be implemented, and their usefulness should be contrasted with a gold standard. The aim of this study was to determine the validity, efficacy and reliability of 3 nutritional screening tools in relation to complete nutritional assessment. This was a sub-analysis of a cross-sectional, descriptive study on the prevalence of disease-related malnutrition. The sample was selected from outpatients, hospitalized and institutionalized patients. The MUST, MNAsf and MST screening tools were employed. A nutritional assessment of all the patients was undertaken. The SENPE-SEDOM consensus was used for the diagnosis. In the outpatients, MUST and MNAsf had similar validity in relation to the nutritional assessment (AUC 0.871 and 0.883, respectively). In the institutionalized patients, the MUST screening method showed the greatest validity (AUC 0.815), whereas in the hospitalized patients, the most valid methods were MUST and MST (AUC 0.868 and 0.853, respectively). It is essential to use nutritional screening to invest the available resources wisely. Based on our results, MUST is the most suitable screening method in hospitalized and institutionalized patients. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
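The AUC values used to compare each screening tool against the full nutritional assessment can be obtained from the rank-sum (Mann-Whitney) formulation of the area under the ROC curve. The Python sketch below illustrates that computation on invented screening scores and diagnoses; it is not the study data nor necessarily the software the authors used.

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) formulation, with tie-averaged ranks."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    order = scores.argsort()
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):          # average ranks for tied scores
        tie = scores == s
        ranks[tie] = ranks[tie].mean()
    n_pos, n_neg = labels.sum(), (1 - labels).sum()
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical screening scores; label 1 = malnourished on full nutritional assessment
must_scores = [0, 2, 1, 0, 3, 2, 0, 1, 2, 0, 3, 1]
diagnosis   = [0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1]
print(f"AUC = {roc_auc(must_scores, diagnosis):.3f}")
```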
Rollier, Patricia; Lombard, Bertrand; Guillier, Laurent; François, Danièle; Romero, Karol; Pierru, Sylvie; Bouhier, Laurence; Gnanou Besse, Nathalie
2018-05-01
The reference methods for the detection and enumeration of L. monocytogenes in food (Standards EN ISO 11290-1 and -2) have been validated by inter-laboratory studies in the framework of Mandate M381 from the European Commission to CEN. In this paper, the inter-laboratory studies conducted in 2013 on 5 matrices (cold-smoked salmon, powdered milk infant formula, vegetables, environment, and cheese) to validate Standard EN ISO 11290-2 are reported. According to the results obtained, the method of the revised Standard EN ISO 11290-2 can be considered a good method for the enumeration of L. monocytogenes in foods and in the food processing environment, in particular for the matrices included in the study. Values of repeatability and reproducibility standard deviations can be considered satisfactory for this type of method with a confirmation stage, since most of them were below 0.3 log10, including at low levels close to the regulatory limit of 100 CFU/g. Copyright © 2018 Elsevier B.V. All rights reserved.
Graf, Tyler N; Cech, Nadja B; Polyak, Stephen J; Oberlies, Nicholas H
2016-07-15
Validated methods are needed for the analysis of natural product secondary metabolites. These methods are particularly important to translate in vitro observations to in vivo studies. Herein, a method is reported for the analysis of the key secondary metabolites, a series of flavonolignans and a flavonoid, from an extract prepared from the seeds of milk thistle [Silybum marianum (L.) Gaertn. (Asteraceae)]. This report represents the first UHPLC-MS/MS method validated for quantitative analysis of these compounds. The method takes advantage of the excellent resolution achievable with UHPLC to provide a complete analysis in less than 7 min. The method is validated using both UV and MS detectors, making it applicable in laboratories with different types of analytical instrumentation available. Lower limits of quantitation achieved with this method range from 0.0400 μM to 0.160 μM with UV and from 0.0800 μM to 0.160 μM with MS. The new method is employed to evaluate variability in constituent composition in various commercial S. marianum extracts, and to show that storage of the milk thistle compounds in DMSO leads to degradation. Copyright © 2016 Elsevier B.V. All rights reserved.
The Predictive Validity of Dynamic Assessment: A Review
ERIC Educational Resources Information Center
Caffrey, Erin; Fuchs, Douglas; Fuchs, Lynn S.
2008-01-01
The authors report on a mixed-methods review of 24 studies that explores the predictive validity of dynamic assessment (DA). For 15 of the studies, they conducted quantitative analyses using Pearson's correlation coefficients. They descriptively examined the remaining studies to determine if their results were consistent with findings from the…
NASA Technical Reports Server (NTRS)
Ray, Ronald J.
1994-01-01
New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency domain analysis method, were also developed and evaluated. They provide quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed the test methods as valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.
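A frequency-domain comparison of a calculated-thrust signal against a reference during a throttle frequency sweep can be approximated by estimating an empirical frequency response from cross- and auto-spectra. The sketch below is only a schematic illustration on a synthetic chirp; the sample rate, signal model, and analysis band are assumptions and do not reproduce the NASA analysis techniques described above.

```python
import numpy as np

# Hypothetical throttle frequency-sweep data: compare a calculated-thrust model
# against a reference signal in the frequency domain (gain and phase).
fs = 50.0                                # sample rate, Hz (assumed)
t = np.arange(0, 60, 1 / fs)
sweep = np.sin(2 * np.pi * (0.05 + 0.02 * t) * t)     # slow chirp standing in for the sweep
reference = sweep                                     # idealized "true" thrust response
model = 0.9 * np.roll(sweep, 5) + 0.02 * np.random.default_rng(1).normal(size=t.size)

# Cross- and auto-spectra give an empirical frequency response H(f) = Sxy / Sxx
X = np.fft.rfft(reference)
Y = np.fft.rfft(model)
H = (np.conj(X) * Y) / (np.conj(X) * X + 1e-12)
freqs = np.fft.rfftfreq(t.size, 1 / fs)

band = (freqs > 0.02) & (freqs < 2.0)    # band of interest (assumed)
print("mean gain in band:", np.abs(H[band]).mean())
print("mean phase lag (deg):", np.degrees(np.angle(H[band])).mean())
```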
Validity of body composition methods across ethnic population groups.
Deurenberg, P; Deurenberg-Yap, M
2003-10-01
Most in vivo body composition methods rely on assumptions that may vary among different population groups as well as within the same population group. The assumptions are based on in vitro body composition (carcass) analyses. The majority of body composition studies were performed on Caucasians, and much of the information on the validity of methods and assumptions was available only for this ethnic group. It is assumed that these assumptions are also valid for other ethnic groups. However, if apparent differences across ethnic groups in body composition 'constants' and body composition 'rules' are not taken into account, biased information on body composition will be the result. This in turn may lead to misclassification of obesity or underweight at an individual as well as a population level. There is a need for more cross-ethnic population studies on body composition. Those studies should be carried out carefully, with adequate methodology and standardization, for the obtained information to be valuable.
Concept analysis and validation of the nursing diagnosis, delayed surgical recovery.
Appoloni, Aline Helena; Herdman, T Heather; Napoleão, Anamaria Alves; Campos de Carvalho, Emilia; Hortense, Priscilla
2013-10-01
To analyze the human response of delayed surgical recovery, approved by NANDA-I, and to validate its defining characteristics (DCs) and related factors (RFs). This was a two-part study using a concept analysis based on the method of Walker and Avant, and diagnostic content validation based on Fehring's model. Three of the original DCs, and three proposed DCs identified from the concept analysis, were validated in this study; five of the original RFs and four proposed RFs were validated. A revision of the concept studied is suggested, incorporating the validation of some of the DCs and RFs presented by NANDA-I, and the insertion of new, validated DCs and RFs. This study may enable the extension of the use of this diagnosis and contribute to quality surgical care of clients. © 2013, The Authors. International Journal of Nursing Knowledge © 2013, NANDA International.
Iridology: A systematic review.
Ernst, E
1999-02-01
Iridologists claim to be able to diagnose medical conditions through abnormalities of pigmentation in the iris. This technique is popular in many countries. Therefore it is relevant to ask whether it is valid. To systematically review all interpretable tests of the validity of iridology as a diagnostic tool. DATA SOURCE AND EXTRACTION: Three independent literature searches were performed to identify all blinded tests. Data were extracted in a predefined, standardized fashion. Four case control studies were found. The majority of these investigations suggests that iridology is not a valid diagnostic method. The validity of iridology as a diagnostic tool is not supported by scientific evaluations. Patients and therapists should be discouraged from using this method.
Białek, Michał; Markiewicz, Łukasz; Sawicki, Przemysław
2015-01-01
Delayed lotteries are much more common in everyday life than pure lotteries. Usually, we need to wait to find out the outcome of a risky decision (e.g., investing in a stock market, engaging in a relationship). However, most research has studied time discounting and probability discounting in isolation, using methodologies designed specifically to track changes in one parameter. The most commonly used method is adjusting, but its reported validity and temporal stability in research on discounting are suboptimal. The goal of this study was to introduce a novel method for analyzing delayed lotteries, conjoint analysis, which is hypothesized to be more suitable for analyzing individual preferences in this area. A set of two studies compared conjoint analysis with adjusting. The results suggest that individual discounting-strength parameters estimated with conjoint analysis have higher predictive value (Studies 1 and 2) and are more stable over time (Study 2) than those estimated with adjusting. Although the reported studies are exploratory, we suggest that future research on delayed lotteries should be cross-validated using both methods. PMID:25674069
Kim, Joseph; Flick, Jeanette; Reimer, Michael T; Rodila, Ramona; Wang, Perry G; Zhang, Jun; Ji, Qin C; El-Shourbagy, Tawakol A
2007-11-01
As an effective DPP-IV inhibitor, 2-(4-((2-(2S,5R)-2-Cyano-5-ethynyl-1-pyrrolidinyl)-2-oxoethylamino)-4-methyl-1-piperidinyl)-4-pyridinecarboxylic acid (ABT-279) is an investigational drug candidate under development at Abbott Laboratories for potential treatment of type 2 diabetes. In order to support the development of ABT-279, multiple analytical methods for an accurate, precise and selective concentration determination of ABT-279 in different matrices were developed and validated in accordance with the US Food and Drug Administration Guidance on Bioanalytical Method Validation. The analytical method for ABT-279 in dog plasma was validated in parallel with other validations for ABT-279 determination in different matrices. In order to shorten the sample preparation time and increase method precision, an automated multi-channel liquid handler was used to perform high-throughput protein precipitation and all other liquid transfers. The separation was performed on a Waters YMC ODS-AQ column (2.0 × 150 mm, 5 μm, 120 Å) with a mobile phase of 20 mM ammonium acetate in 20% acetonitrile at a flow rate of 0.3 mL/min. Data collection started at 2.2 min and continued for 2.0 min. The validated linear dynamic range in dog plasma was between 3.05 and 2033.64 ng/mL using a 50 μL sample volume. The coefficient of determination (r²) from three consecutive runs was between 0.998625 and 0.999085. The mean bias was between -4.1 and 4.3% for all calibration standards including the lower limit of quantitation. The mean bias was between -8.0 and 0.4% for the quality control samples. The precision, expressed as a coefficient of variation (CV), was ≤4.1% for all levels of quality control samples. The validation results demonstrated that the high-throughput method was accurate, precise and selective for the determination of ABT-279 in dog plasma. The validated method was also employed to support two toxicology studies. The passing rate was 100% for all 49 runs from one validation study and two toxicology studies. Copyright © 2007 John Wiley & Sons, Ltd.
Larsen, Camilla Marie; Juul-Kristensen, Birgit; Lund, Hans; Søgaard, Karen
2014-10-01
The aims were to compile a schematic overview of clinical scapular assessment methods and critically appraise the methodological quality of the involved studies. A systematic, computer-assisted literature search using Medline, CINAHL, SportDiscus and EMBASE was performed from inception to October 2013. Reference lists in articles were also screened for publications. From 50 articles, 54 method names were identified and categorized into three groups: (1) static positioning assessment (n = 19); (2) semi-dynamic assessment (n = 13); and (3) dynamic functional assessment (n = 22). Fifteen studies were excluded from evaluation because they reported no or few clinimetric results, leaving 35 studies for evaluation. Graded according to the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist, the methodological quality in the reliability and validity domains was "fair" (57%) to "poor" (43%), with only one study rated as "good". The reliability domain was most often investigated. Few of the assessment methods in the included studies that had "fair" or "good" measurement property ratings demonstrated acceptable results for both reliability and validity. We found a substantially larger number of clinical scapular assessment methods than previously reported. Using the COSMIN checklist, the methodological quality of the included measurement properties in the reliability and validity domains was in general "fair" to "poor". None were examined for all three domains: (1) reliability; (2) validity; and (3) responsiveness. Observational evaluation systems and assessment of scapular upward rotation seem suitably evidence-based for clinical use. Future studies should test and improve the clinimetric properties, and especially diagnostic accuracy and responsiveness, to increase utility for clinical practice.
Cheng, Shu-Fen; Rose, Susan
2009-01-01
This study investigated the technical adequacy of curriculum-based measures of written expression (CBM-W) in terms of writing prompts and scoring methods for deaf and hard-of-hearing students. Twenty-two students at the secondary school level completed 3-min essays within two weeks, which were scored with nine existing and alternative curriculum-based measurement (CBM) scoring methods. The technical features of the nine scoring methods were examined for interrater reliability, alternate-form reliability, and criterion-related validity. The existing CBM scoring method, number of correct minus incorrect word sequences, yielded the highest reliability and validity coefficients. The findings from this study support the use of the CBM-W as a reliable and valid tool for assessing general writing proficiency with secondary students who are deaf or hard of hearing. The CBM alternative scoring methods that may serve as additional indicators of written expression include correct subject-verb agreements, correct clauses, and correct morphemes.
LaBudde, Robert A; Harnly, James M
2012-01-01
A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. The report describes the development and validation of studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single collaborator and multicollaborative study examples are given.
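The probability of identification is simply the proportion of replicates returning "Identified", and a confidence interval can accompany it at each concentration or blend level; a Wilson score interval is one common choice. The sketch below illustrates this on invented replicate counts and is not the specific statistical procedure prescribed in the report.

```python
import math

def poi_with_wilson_ci(identified, replicates, z=1.96):
    """Probability of identification (POI) and a Wilson 95% CI for binary BIM results."""
    p = identified / replicates
    denom = 1 + z**2 / replicates
    centre = (p + z**2 / (2 * replicates)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / replicates + z**2 / (4 * replicates**2))
    return p, max(0.0, centre - half), min(1.0, centre + half)

# Hypothetical single-collaborator data: target material at several blend levels
levels = {"100% target": (12, 12), "50% blend": (9, 12), "10% blend": (2, 12)}
for level, (hits, n) in levels.items():
    poi, lo, hi = poi_with_wilson_ci(hits, n)
    print(f"{level}: POI = {poi:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```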
A systematic review of the quality of homeopathic clinical trials
Jonas, Wayne B; Anderson, Rachel L; Crawford, Cindy C; Lyons, John S
2001-01-01
Background: While a number of reviews of homeopathic clinical trials have been done, all have used methods dependent on allopathic diagnostic classifications foreign to homeopathic practice. In addition, no review has used established and validated quality criteria allowing direct comparison of the allopathic and homeopathic literature. Methods: In a systematic review, we compared the quality of clinical-trial research in homeopathy to a sample of research on conventional therapies using a validated and system-neutral approach. All clinical trials on homeopathic treatments with parallel treatment groups published between 1945–1995 in English were selected. All were evaluated with an established set of 33 validity criteria previously validated on a broad range of health interventions across differing medical systems. Criteria covered statistical conclusion, internal, construct and external validity. Reliability of criteria application is greater than 0.95. Results: 59 studies met the inclusion criteria. Of these, 79% were from peer-reviewed journals, 29% used a placebo control, 51% used random assignment, and 86% failed to consider potentially confounding variables. The main validity problems were in measurement, where 96% did not report the proportion of subjects screened, and 64% did not report attrition rate. 17% of subjects dropped out in studies where this was reported. There was practically no replication of or overlap in the conditions studied, and most studies were relatively small and done at a single site. Compared to research on conventional therapies, the overall quality of studies in homeopathy was worse and only slightly improved in more recent years. Conclusions: Clinical homeopathic research is clearly in its infancy, with most studies using poor sampling and measurement techniques, few subjects, single sites and no replication. Many of these problems are correctable even within a "holistic" paradigm given sufficient research expertise, support and methods. PMID:11801202
USDA-ARS?s Scientific Manuscript database
In this study, optimization, extension, and validation of a streamlined, qualitative and quantitative multiclass, multiresidue method were conducted to monitor greater than 100 veterinary drug residues in meat using ultrahigh-performance liquid chromatography – tandem mass spectrometry (UHPLC-MS/MS). I...
Validity of a Simulation Game as a Method for History Teaching
ERIC Educational Resources Information Center
Corbeil, Pierre; Laveault, Dany
2011-01-01
The aim of this research is, first, to determine the validity of a simulation game as a method of teaching and an instrument for the development of reasoning and, second, to study the relationship between learning and students' behavior toward games. The participants were college students in a History of International Relations course, with two…
Cumulative query method for influenza surveillance using search engine data.
Seo, Dong-Woo; Jo, Min-Woo; Sohn, Chang Hwan; Shin, Soo-Yong; Lee, JaeHo; Yu, Maengsoo; Kim, Won Young; Lim, Kyoung Soo; Lee, Sang-Il
2014-12-16
Internet search queries have become an important data source for syndromic surveillance systems. However, there is currently no syndromic surveillance system using Internet search query data in South Korea. The objective of this study was to examine correlations between our cumulative query method and national influenza surveillance data. Our study was based on the local search engine, Daum (approximately 25% market share), and influenza-like illness (ILI) data from the Korea Centers for Disease Control and Prevention. A quota sampling survey was conducted with 200 participants to obtain popular queries. We divided the study period into two sets: Set 1 (the 2009/10 epidemiological year for development set 1 and 2010/11 for validation set 1) and Set 2 (2010/11 for development set 2 and 2011/12 for validation set 2). Pearson's correlation coefficients were calculated between the Daum data and the ILI data for the development set. We selected the combined queries for which the correlation coefficients were .7 or higher and listed them in descending order. Then, we created cumulative query methods, with n representing the number of cumulative combined queries taken in descending order of the correlation coefficient. In validation set 1, 13 cumulative query methods were applied, and 8 had higher correlation coefficients (min=.916, max=.943) than that of the highest single combined query. Further, 11 of 13 cumulative query methods had an r value of ≥.7, whereas 4 of 13 combined queries had an r value of ≥.7. In validation set 2, 8 of 15 cumulative query methods showed higher correlation coefficients (min=.975, max=.987) than that of the highest single combined query. All 15 cumulative query methods had an r value of ≥.7, whereas 6 of 15 combined queries had an r value of ≥.7. The cumulative query method showed relatively higher correlation with national influenza surveillance data than the combined queries in both the development and validation sets.
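The cumulative query construction described above can be sketched as: correlate each candidate query series with the ILI series, keep queries with r ≥ .7 in descending order of correlation, then sum the top-n series and re-correlate. The Python sketch below uses simulated weekly volumes, not the Daum or KCDC data.

```python
import numpy as np

rng = np.random.default_rng(7)
weeks = 52
ili = np.abs(np.sin(np.linspace(0, 2 * np.pi, weeks))) * 10 + rng.normal(0, 0.5, weeks)

# Hypothetical weekly volumes for 20 candidate queries, some tracking ILI, some noisy
queries = np.array([ili * rng.uniform(0.5, 1.5) + rng.normal(0, s, weeks)
                    for s in rng.uniform(0.5, 8.0, 20)])

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

# Keep queries with r >= 0.7, ordered by decreasing correlation with ILI
r = np.array([pearson(q, ili) for q in queries])
order = np.argsort(-r)
keep = order[r[order] >= 0.7]

# Cumulative query method n: sum the top-n query series and re-correlate with ILI
for n in range(1, len(keep) + 1):
    cumulative = queries[keep[:n]].sum(axis=0)
    print(f"n = {n:2d}  r(cumulative, ILI) = {pearson(cumulative, ili):.3f}")
```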
Whelan, Maurice; Eskes, Chantra
Validation is essential for the translation of newly developed alternative approaches to animal testing into tools and solutions suitable for regulatory applications. Formal approaches to validation have emerged over the past 20 years or so, and although they have helped greatly to progress the field, it is essential that the principles and practice underpinning validation continue to evolve to keep pace with scientific progress. The modular approach to validation should be exploited to encourage more innovation and flexibility in study design and to increase efficiency in filling data gaps. With the focus now on integrated approaches to testing and assessment that are based on toxicological knowledge captured as adverse outcome pathways, and which incorporate the latest in vitro and computational methods, validation needs to adapt to ensure it adds value rather than hinders progress. Validation needs to be pursued both at the method level, to characterise the performance of in vitro methods in relation to their ability to detect any association of a chemical with a particular pathway or key toxicological event, and at the methodological level, to assess how integrated approaches can predict toxicological endpoints relevant for regulatory decision making. To facilitate this, more emphasis needs to be given to the development of performance standards that can be applied to classes of methods and integrated approaches that provide similar information. Moreover, the challenge of selecting the right reference chemicals to support validation needs to be addressed more systematically, consistently and in a manner that better reflects the state of the science. Above all, however, validation requires true partnership between the development and user communities of alternative methods and the appropriate investment of resources.
Sperandio, Naiara; Morais, Dayane de Castro; Priore, Silvia Eloiza
2018-02-01
The scope of this systematic review was to compare the food insecurity scales validated and used in the countries of Latin America and the Caribbean, and to analyze the methods used in the validation studies. A search was conducted in the Lilacs, SciELO and Medline electronic databases. The publications were pre-selected by titles and abstracts, and subsequently by a full reading. Of the 16,325 studies reviewed, 14 were selected. Twelve validated scales were identified for the following countries: Venezuela, Brazil, Colombia, Bolivia, Ecuador, Costa Rica, Mexico, Haiti, the Dominican Republic, Argentina and Guatemala. Besides these, there is the Latin American and Caribbean scale, the scope of which is regional. The scales differed in the reference standard used, the number of questions and the diagnosis of food insecurity. The methods used by the studies for internal validation were calculation of Cronbach's alpha and the Rasch model; for external validation, the authors calculated association and/or correlation with socioeconomic and food consumption variables. The successful experience of Latin America and the Caribbean in the development of national and regional scales can be an example for other countries that do not have this important indicator capable of measuring the phenomenon of food insecurity.
Agogo, George O; van der Voet, Hilko; van 't Veer, Pieter; Ferrari, Pietro; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek C
2016-10-13
Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest, such as risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for the bias in the association when there are no internal validation data. We proposed a method to adjust for the bias in the diet-disease association (hereafter, association), due to measurement error in dietary intake and a mismeasured confounder, when there are no internal validation data. The method combines prior information on the validity of the self-report instrument with the observed data to adjust for the bias in the association. We compared the proposed method with the method that ignores the confounder effect, and with the method that ignores measurement errors completely. We assessed the sensitivity of the estimates to various magnitudes of measurement error, error correlations and uncertainty in the literature-reported validation data. We applied the methods to fruit and vegetable (FV) intake, cigarette smoking (confounder) and all-cause mortality data from the European Prospective Investigation into Cancer and Nutrition study. Using the proposed method resulted in about a fourfold increase in the strength of the association between FV intake and mortality. For weakly correlated errors, measurement error in the confounder minimally affected the hazard ratio estimate for FV intake. The effect was more pronounced for strong error correlations. The proposed method permits sensitivity analysis on measurement error structures and accounts for uncertainties in the reported validity coefficients. The method is useful in assessing the direction and quantifying the magnitude of bias in the association due to measurement errors in the confounders.
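A highly simplified, univariate version of the underlying idea, correcting an observed association for regression dilution using a literature-reported attenuation (validity) factor and propagating the uncertainty in that factor, is sketched below. All numbers are hypothetical, and the authors' full method additionally handles a mismeasured confounder and correlated errors.

```python
import numpy as np

# Simplified sketch of regression-dilution correction when no internal
# validation data exist: divide the observed coefficient by an attenuation
# factor taken from literature-reported validation studies. This shows only
# the core idea behind the paper's approach, not its full multivariate method.

beta_observed = np.log(0.95)        # hypothetical observed log hazard ratio per FV serving
lambda_literature = 0.25            # hypothetical attenuation factor from external validation
lambda_se = 0.05                    # hypothetical uncertainty in that factor

beta_corrected = beta_observed / lambda_literature
print(f"observed HR  = {np.exp(beta_observed):.3f}")
print(f"corrected HR = {np.exp(beta_corrected):.3f}")

# Propagate uncertainty in the attenuation factor with a simple Monte Carlo draw
rng = np.random.default_rng(3)
draws = beta_observed / rng.normal(lambda_literature, lambda_se, 10000)
lo, hi = np.exp(np.percentile(draws, [2.5, 97.5]))
print(f"corrected HR interval from lambda uncertainty alone: [{lo:.3f}, {hi:.3f}]")
```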
Jasiecka-Mikołajczyk, A; Jaroszewski, J J
2017-03-01
Tigecycline (TIG), a novel glycylcycline antibiotic, plays an important role in the management of complicated skin and intra-abdominal infections. The available data lack any description of a method for the determination of TIG in avian plasma. In our study, a selective and accurate reversed-phase high-performance liquid chromatography-tandem mass spectrometry method was developed for the determination of TIG in turkey plasma. Sample preparation was based on protein precipitation and liquid-liquid extraction using 1,2-dichloroethane. Chromatographic separation of TIG and minocycline (internal standard, IS) was achieved on an Atlantis T3 column (150 mm × 3.0 mm, 3.0 μm) using gradient elution. The selected reaction monitoring transitions were performed at 293.60 m/z → 257.10 m/z for TIG and 458.00 m/z → 441.20 m/z for IS. The developed method was validated in terms of specificity, selectivity, linearity, lowest limit of quantification, limit of detection, precision, accuracy, matrix effect, carry-over effect, extraction recovery and stability. All parameters of the method submitted to validation met the acceptance criteria. The assay was linear over the concentration range of 0.01-100 μg/ml. This validated method was successfully applied to a TIG pharmacokinetic study in turkeys after intravenous and oral administration at a dose of 10 mg/kg at various time-points.
Developing and validating risk prediction models in an individual participant data meta-analysis
2014-01-01
Background Risk prediction models estimate the risk of developing future outcomes for individuals based on one or more underlying characteristics (predictors). We review how researchers develop and validate risk prediction models within an individual participant data (IPD) meta-analysis, in order to assess the feasibility and conduct of the approach. Methods A qualitative review of the aims, methodology, and reporting in 15 articles that developed a risk prediction model using IPD from multiple studies. Results The IPD approach offers many opportunities but methodological challenges exist, including: unavailability of requested IPD, missing patient data and predictors, and between-study heterogeneity in methods of measurement, outcome definitions and predictor effects. Most articles develop their model using IPD from all available studies and perform only an internal validation (on the same set of data). Ten of the 15 articles did not allow for any study differences in baseline risk (intercepts), potentially limiting their model’s applicability and performance in some populations. Only two articles used external validation (on different data), including a novel method which develops the model on all but one of the IPD studies, tests performance in the excluded study, and repeats by rotating the omitted study. Conclusions An IPD meta-analysis offers unique opportunities for risk prediction research. Researchers can make more of this by allowing separate model intercept terms for each study (population) to improve generalisability, and by using ‘internal-external cross-validation’ to simultaneously develop and validate their model. Methodological challenges can be reduced by prospectively planned collaborations that share IPD for risk prediction. PMID:24397587
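The 'internal-external cross-validation' idea, develop the risk model on all but one IPD study, test it in the omitted study, and rotate, can be sketched as a simple loop. The example below uses simulated study-level data and a logistic model as a stand-in for a real risk prediction model; the study sizes, predictors, and performance metric are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulate five studies, each with its own baseline risk (intercept)
rng = np.random.default_rng(0)
studies = []
for s in range(5):
    n = 300
    x = rng.normal(size=(n, 3))
    intercept = rng.normal(-1.0, 0.3)                      # study-specific baseline risk
    logit = intercept + x @ np.array([0.8, -0.5, 0.3])
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    studies.append((x, y))

# Internal-external cross-validation: rotate the held-out study
for held_out in range(len(studies)):
    x_train = np.vstack([x for i, (x, y) in enumerate(studies) if i != held_out])
    y_train = np.concatenate([y for i, (x, y) in enumerate(studies) if i != held_out])
    x_test, y_test = studies[held_out]
    model = LogisticRegression().fit(x_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(x_test)[:, 1])
    print(f"held-out study {held_out}: c-statistic = {auc:.3f}")
```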
Simulation verification techniques study
NASA Technical Reports Server (NTRS)
Schoonmaker, P. B.; Wenglinski, T. H.
1975-01-01
Results are summarized of the simulation verification techniques study which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.
Heinig, Katja; Miya, Kazuhiro; Kamei, Tomonori; Guerini, Elena; Fraier, Daniela; Yu, Li; Bansal, Surendra; Morcos, Peter N
2016-07-01
Alectinib is a novel anaplastic lymphoma kinase (ALK) inhibitor for treatment of patients with ALK-positive non-small-cell lung cancer who have progressed on or are intolerant to crizotinib. To support clinical development, concentrations of alectinib and metabolite M4 were determined in plasma from patients and healthy subjects. LC-MS/MS methods were developed and validated in two different laboratories: Chugai used separate assays for alectinib and M4 in a pivotal Phase I/II study while Roche established a simultaneous assay for both analytes for another pivotal study and all other studies. Cross-validation assessment revealed a bias between the two bioanalytical laboratories, which was confirmed with the clinical PK data between both pivotal studies using the different bioanalytical methods.
Semi-automating the manual literature search for systematic reviews increases efficiency.
Chapman, Andrea L; Morgan, Laura C; Gartlehner, Gerald
2010-03-01
To minimise retrieval bias, manual literature searches are a key part of the search process of any systematic review. Considering the need for accurate information, valid results of the manual literature search are essential to ensure scientific standards; likewise, efficient approaches that minimise the amount of personnel time required to conduct a manual literature search are of great interest. The objective of this project was to determine the validity and efficiency of a new manual search method that utilises the Scopus database. We used the traditional manual search approach as the gold standard to determine the validity and efficiency of the proposed Scopus method. Outcome measures included completeness of article detection and personnel time involved. Using both methods independently, we compared the results in terms of accuracy of article detection (validity) and time spent conducting the search (efficiency). Regarding accuracy, the Scopus method identified the same studies as the traditional approach, indicating its validity. In terms of efficiency, using Scopus led to a time saving of 62.5% compared with the traditional approach (3 h versus 8 h). The Scopus method can significantly improve the efficiency of manual searches and thus of systematic reviews.
Analytical difficulties facing today's regulatory laboratories: issues in method validation.
MacNeil, James D
2012-08-01
The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.
A Comparison of Three Methods for the Analysis of Skin Flap Viability: Reliability and Validity.
Tim, Carla Roberta; Martignago, Cintia Cristina Santi; da Silva, Viviane Ribeiro; Dos Santos, Estefany Camila Bonfim; Vieira, Fabiana Nascimento; Parizotto, Nivaldo Antonio; Liebano, Richard Eloin
2018-05-01
Objective: Technological advances have provided new alternatives to the analysis of skin flap viability in animal models; however, the interrater validity and reliability of these techniques have yet to be analyzed. The present study aimed to evaluate the interrater validity and reliability of three different methods: weight of paper template (WPT), paper template area (PTA), and photographic analysis. Approach: Sixteen male Wistar rats had their cranially based dorsal skin flap elevated. On the seventh postoperative day, the viable tissue area and the necrotic area of the skin flap were recorded using the paper template method and photo image. The evaluation of the percentage of viable tissue was performed using the three methods, simultaneously and independently, by two raters. The analysis of interrater reliability and viability was performed using the intraclass correlation coefficient, and Bland-Altman plot analysis was used to visualize the presence or absence of systematic bias in the evaluations of data validity. Results: The results showed that interrater reliability for WPT, measurement of PTA, and photographic analysis was 0.995, 0.990, and 0.982, respectively. For data validity, a correlation >0.90 was observed for all comparisons made between the three methods. In addition, Bland-Altman plot analysis showed agreement between the comparisons of the methods, and the presence of systematic bias was not observed. Innovation: Digital methods are an excellent choice for assessing skin flap viability; moreover, they make data use and storage easier. Conclusion: Independently of the method used, the interrater reliability and validity proved to be excellent for the analysis of skin flaps' viability.
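The Bland-Altman analysis used to check for systematic bias between methods reduces to the mean difference and 95% limits of agreement of paired measurements. A minimal sketch follows; the viability percentages are invented and are not the study's measurements.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman statistics: mean difference (bias) and 95% limits of agreement."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical % viable flap tissue from two of the methods in 16 animals
weight_template = np.array([62, 55, 70, 48, 66, 59, 73, 51, 64, 58, 69, 47, 61, 54, 68, 50])
photo_analysis  = weight_template + np.random.default_rng(5).normal(0, 2, 16)

bias, lo, hi = bland_altman(weight_template, photo_analysis)
print(f"bias = {bias:.2f}%, 95% limits of agreement = [{lo:.2f}, {hi:.2f}]%")
```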
Netchacovitch, L; Thiry, J; De Bleye, C; Dumont, E; Cailletaud, J; Sacré, P-Y; Evrard, B; Hubert, Ph; Ziemons, E
2017-08-15
Since the Food and Drug Administration (FDA) published a guidance based on the Process Analytical Technology (PAT) approach, real-time analyses during manufacturing processes have expanded rapidly. In this study, in-line Raman spectroscopic analyses were performed during a Hot-Melt Extrusion (HME) process to determine the Active Pharmaceutical Ingredient (API) content in real time. The method was validated based on a univariate and a multivariate approach, and the analytical performances of the obtained models were compared. Moreover, on the one hand, in-line data were correlated with the real API concentration present in the sample, quantified by a previously validated off-line confocal Raman microspectroscopic method. On the other hand, in-line data were also treated as a function of the concentration based on the weighed amounts of the components in the prepared mixture. The importance of developing quantitative methods based on the use of a reference method was thus highlighted. The method was validated according to the total error approach, fixing the acceptance limits at ±15% and the α risk at 5%. This method meets the requirements of the European Pharmacopeia norms for the uniformity of content of single-dose preparations. The validation proves that future results will be within the acceptance limits with a previously defined probability. Finally, the in-line validated method was compared with the off-line one to demonstrate its ability to be used in routine analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
Mlangeni, Angstone Thembachako; Vecchi, Valeria; Norton, Gareth J; Raab, Andrea; Krupp, Eva M; Feldmann, Joerg
2018-10-15
A commercial arsenic field kit designed to measure inorganic arsenic (iAs) in water was modified into a field-deployable method (FDM) to measure iAs in rice. While the method has been validated to give precise and accurate results in the laboratory, its on-site field performance has not been evaluated. This study was designed to test the method on-site in Malawi in order to evaluate its accuracy and precision in the determination of iAs by comparing it with a validated reference method, and to provide original data on inorganic arsenic in Malawian rice and rice-based products. The method was validated against the established laboratory-based HPLC-ICP-MS. Statistical tests indicated there were no significant differences between on-site and laboratory iAs measurements determined using the FDM (p = 0.263, α = 0.05) or between on-site measurements and measurements determined using HPLC-ICP-MS (p = 0.299, α = 0.05). This method allows quick (within 1 h) and efficient on-site screening of iAs concentrations in rice. Copyright © 2018 Elsevier Ltd. All rights reserved.
Rebouças, Camila Tavares; Kogawa, Ana Carolina; Salgado, Hérida Regina Nunes
2018-05-18
Background: A green analytical chemistry method was developed for quantification of enrofloxacin in tablets. The drug, a second-generation fluoroquinolone, was first introduced in veterinary medicine for the treatment of various bacterial species. Objective: This study proposed to develop, validate, and apply a reliable, low-cost, fast, and simple IR spectroscopy method for quantitative routine determination of enrofloxacin in tablets. Methods: The method was completely validated according to the International Conference on Harmonisation guidelines, showing accuracy, precision, selectivity, robustness, and linearity. Results: It was linear over the concentration range of 1.0-3.0 mg with correlation coefficients >0.9999 and LOD and LOQ of 0.12 and 0.36 mg, respectively. Conclusions: Now that this IR method has met performance qualifications, it can be adopted and applied for the analysis of enrofloxacin tablets for production process control. The validated method can also be utilized to quantify enrofloxacin in tablets and thus is an environmentally friendly alternative for the routine analysis of enrofloxacin in quality control. Highlights: A new green method for the quantitative analysis of enrofloxacin by Fourier-Transform Infrared spectroscopy was validated. It is a fast, clean and low-cost alternative for the evaluation of enrofloxacin tablets.
Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech
2012-12-01
Our aim was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We conducted a comprehensive survey of purity methods and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results, utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.
Subramanian, Venkatesan; Nagappan, Kannappan; Sandeep Mannemala, Sai
2015-01-01
A sensitive, accurate, precise and rapid HPLC-PDA method was developed and validated for the simultaneous determination of torasemide and spironolactone in human plasma using design of experiments. A central composite design was used to optimize the method, with the content of acetonitrile, the concentration of buffer and the pH of the mobile phase as independent variables, while the retention factor of spironolactone, the resolution between torasemide and phenobarbitone, and the retention time of phenobarbitone were chosen as dependent variables. The chromatographic separation was achieved on a Phenomenex C18 column with a mobile phase comprising 20 mM potassium dihydrogen orthophosphate buffer (pH 3.2) and acetonitrile in 82.5:17.5 v/v, pumped at a flow rate of 1.0 mL min(-1). The method was validated according to USFDA guidelines in terms of selectivity, linearity, accuracy, precision, recovery and stability. The limit of quantitation values were 80 and 50 ng mL(-1) for torasemide and spironolactone, respectively. Furthermore, the sensitivity and simplicity of the method suggest its validity for routine clinical studies.
Overall uncertainty measurement for near infrared analysis of cryptotanshinone in tanshinone extract
NASA Astrophysics Data System (ADS)
Xue, Zhong; Xu, Bing; Shi, Xinyuan; Yang, Chan; Cui, Xianglong; Luo, Gan; Qiao, Yanjiang
2017-01-01
This study presented a new strategy for overall uncertainty measurement for near infrared (NIR) quantitative analysis of cryptotanshinone in tanshinone extract powders. The overall uncertainty of the NIR analysis was fully investigated and discussed using validation data from precision, trueness and robustness studies. Quality by design (QbD) elements, such as risk assessment and design of experiments (DOE), were utilized to organize the validation data. An "I × J × K" (number of series I, number of repetitions J and number of concentration levels K) full factorial design was used to calculate uncertainty from the precision and trueness data, and a 2^(7-4) Plackett-Burman matrix with four different influence factors resulting from the failure mode and effect analysis (FMEA) was adopted for the robustness study. The overall uncertainty profile was introduced as a graphical decision-making tool to evaluate the validity of the NIR method over the predefined concentration range. In comparison with T. Saffaj's method (Analyst, 2013, 138, 4677) for overall uncertainty assessment, the proposed approach gave almost the same results, demonstrating that the proposed method was reasonable and valid. Moreover, the proposed method can help identify critical factors that influence the NIR prediction performance, which could be used for further optimization of the NIR analytical procedures in routine use.
2010-01-01
Background The purpose of this study was to reduce the number of items, create a scoring method and assess the psychometric properties of the Freedom from Glasses Value Scale (FGVS), which measures benefits of freedom from glasses perceived by cataract and presbyopic patients after multifocal intraocular lens (IOL) surgery. Methods The 21-item FGVS, developed simultaneously in French and Spanish, was administered by phone during an observational study to 152 French and 152 Spanish patients who had undergone cataract or presbyopia surgery at least 1 year before the study. Reduction of items and creation of the scoring method employed statistical methods (principal component analysis, multitrait analysis) and content analysis. Psychometric properties (validation of the structure, internal consistency reliability, and known-group validity) of the resulting version were assessed in the pooled population and per country. Results One item was deleted and 3 were kept but not aggregated in a dimension. The other 17 items were grouped into 2 dimensions ('global evaluation', 9 items; 'advantages', 8 items) and divided into 5 sub-dimensions, with higher scores indicating higher benefit of surgery. The structure was validated (good item convergent and discriminant validity). Internal consistency reliability was good for all dimensions and sub-dimensions (Cronbach's alphas above 0.70). The FGVS was able to discriminate between patients wearing glasses or not after surgery (higher scores for patients not wearing glasses). FGVS scores were significantly higher in Spain than France; however, the measure had similar psychometric performances in both countries. Conclusions The FGVS is a valid and reliable instrument measuring benefits of freedom from glasses perceived by cataract and presbyopic patients after multifocal IOL surgery. PMID:20497555
NASA Astrophysics Data System (ADS)
Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa
2018-03-01
In the recent decade, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, in which supervised image classification techniques play a central role. Hence, using a high-resolution WorldView-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, bagged CART, stochastic gradient boosting and a neural network with feature extraction, were tested and compared with two prevalent methods: random forest and support vector machine with a linear kernel. To do so, each method was run ten times, and three validation techniques were used to estimate the accuracy statistics: cross-validation, independent validation and validation with the full set of training data. Moreover, using ANOVA and Tukey's test, the statistical significance of the differences between the classification methods was assessed. In general, the results showed that random forest, with a marginal difference compared to bagged CART and stochastic gradient boosting, was the best-performing method, whilst based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that the neural network with feature extraction and the linear support vector machine had better processing speed than the others.
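The comparison protocol, repeated runs of each classifier under cross-validation followed by an ANOVA across methods, can be sketched as below. This is a reduced illustration on synthetic data with three of the classifiers (scikit-learn's GradientBoostingClassifier standing in for stochastic gradient boosting); it is not the WorldView-3 experiment, and the Tukey post-hoc step is omitted.

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

# Synthetic multi-class "land cover" data standing in for the image samples
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

classifiers = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "linear SVM": SVC(kernel="linear"),
}

# Ten repeated runs per classifier, each with a different 5-fold split
accuracies = {}
for name, clf in classifiers.items():
    runs = [cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=r)).mean()
            for r in range(10)]
    accuracies[name] = runs
    print(f"{name}: mean accuracy = {np.mean(runs):.3f}")

# One-way ANOVA across the three sets of run accuracies
f_stat, p_value = f_oneway(*accuracies.values())
print(f"ANOVA across methods: F = {f_stat:.2f}, p = {p_value:.4f}")
```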
Machado, J C; Lange, A D; Todeschini, V; Volpato, N M
2014-02-01
A dissolution method to analyze atorvastatin tablets, using in vivo data for the reference product (RP) and a pilot batch (PB), was developed and validated. The appropriate conditions were determined after solubility tests using different media, and sink conditions were established. The conditions used were a paddle apparatus at 50 rpm and 900 mL of potassium phosphate buffer pH 6.0 as the dissolution medium. In vivo release profiles were obtained from the bioequivalence study of the RP and the generic candidate PB. The fraction of dose absorbed was calculated using the Loo-Riegelman method. A time scale factor of approximately 6.0 was necessary to associate the absorbed and dissolved fractions, yielding a level A in vitro-in vivo correlation. The method used to quantify the amount of drug dissolved employed high-performance liquid chromatography and ultraviolet spectrophotometry and was validated according to the USP protocol. The discriminative power of the dissolution conditions was assessed using two different pilot batches of atorvastatin tablets (PA and PB) and the RP. The dissolution test was validated and may be used as a discriminating method in quality control and in the development of new formulations.
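A level A correlation of this type is essentially a straight-line fit between the in vitro fraction dissolved and the in vivo fraction absorbed at time-scaled points. The sketch below uses invented fractions only to show the computation, not the study's data.

```python
import numpy as np

# Hypothetical matched fractions (%) after applying the time scale factor (~6)
frac_dissolved = np.array([12, 28, 45, 63, 78, 92], float)   # in vitro
frac_absorbed  = np.array([10, 25, 47, 60, 80, 90], float)   # in vivo (Loo-Riegelman)

slope, intercept = np.polyfit(frac_dissolved, frac_absorbed, 1)
r = np.corrcoef(frac_dissolved, frac_absorbed)[0, 1]
print(f"Fabs = {slope:.2f} * Fdiss + {intercept:.2f}, r^2 = {r**2:.3f}")  # level A IVIVC line
```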
Schiffman, Eric L; Truelove, Edmond L; Ohrbach, Richard; Anderson, Gary C; John, Mike T; List, Thomas; Look, John O
2010-01-01
The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. The aim of this article is to provide an overview of the project's methodology, descriptive statistics, and data for the study participant sample. This article also details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. The Axis I reference standards were based on the consensus of two criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion examination reliability was also assessed within study sites. Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, κ = 0.53). Intrasite criterion examiner agreement with reference standards was excellent (κ ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (κ = 0.71 and 0.84, respectively). The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xiaolin; Ye, Li; Wang, Xiaoxiang
2012-12-15
Several recent reports suggested that hydroxylated polybrominated diphenyl ethers (HO-PBDEs) may disturb thyroid hormone homeostasis. To illuminate the structural features for thyroid hormone activity of HO-PBDEs and the binding mode between HO-PBDEs and the thyroid hormone receptor (TR), the hormone activity of a series of HO-PBDEs toward thyroid receptor β was studied based on the combination of 3D-QSAR, molecular docking, and molecular dynamics (MD) methods. The ligand- and receptor-based 3D-QSAR models were obtained using the Comparative Molecular Similarity Index Analysis (CoMSIA) method. The optimum CoMSIA model with region focusing yielded satisfactory statistical results: the leave-one-out cross-validation correlation coefficient (q²) was 0.571 and the non-cross-validation correlation coefficient (r²) was 0.951. Furthermore, the results of internal validation such as bootstrapping, leave-many-out cross-validation, and progressive scrambling, as well as external validation, indicated the rationality and good predictive ability of the best model. In addition, molecular docking elucidated the conformations of compounds and key amino acid residues at the docking pocket, and MD simulation further determined the binding process and validated the rationality of the docking results. Highlights: The thyroid hormone activities of HO-PBDEs were studied by 3D-QSAR. The binding modes between HO-PBDEs and TRβ were explored. 3D-QSAR, molecular docking, and molecular dynamics (MD) methods were performed.
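The leave-one-out q² reported for the CoMSIA model is the cross-validated analogue of r². A minimal sketch of its computation, with ordinary linear regression standing in for the PLS/CoMSIA model used in the paper:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def loo_q2(X, y):
    """Leave-one-out cross-validated q2 for a regression model (a placeholder
    linear model here; a PLS/CoMSIA model would be swapped in for real 3D-QSAR)."""
    y = np.asarray(y, dtype=float)
    y_pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    press = np.sum((y - y_pred) ** 2)   # predictive residual sum of squares
    ss = np.sum((y - y.mean()) ** 2)    # total sum of squares about the mean
    return 1.0 - press / ss             # analogous to r2 but cross-validated
```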
Development and Validation of the Sorokin Psychosocial Love Inventory for Divorced Individuals
ERIC Educational Resources Information Center
D'Ambrosio, Joseph G.; Faul, Anna C.
2013-01-01
Objective: This study describes the development and validation of the Sorokin Psychosocial Love Inventory (SPSLI) measuring love actions toward a former spouse. Method: Classical measurement theory and confirmatory factor analysis (CFA) were utilized with an a priori theory and factor model to validate the SPSLI. Results: A 15-item scale…
Peng, Yaguang; Li, Wei; Wang, Yang; Chen, Hui; Bo, Jian; Wang, Xingyu; Liu, Lisheng
2016-01-01
24-h urinary sodium excretion is the gold standard for evaluating dietary sodium intake, but it is often not feasible in large epidemiological studies due to high participant burden and cost. Three methods—Kawasaki, INTERSALT, and Tanaka—have been proposed to estimate 24-h urinary sodium excretion from a spot urine sample, but these methods have not been validated in the general Chinese population. The aim of this study was to assess the validity of three methods for estimating 24-h urinary sodium excretion using spot urine samples against measured 24-h urinary sodium excretion in a Chinese sample population. Data are from a substudy of the Prospective Urban Rural Epidemiology (PURE) study that enrolled 120 participants aged 35 to 70 years and collected their morning fasting urine and 24-h urine specimens. Bias calculations (estimated values minus measured values) and Bland-Altman plots were used to assess the validity of the three estimation methods. 116 participants were included in the final analysis. Mean bias for the Kawasaki method was -740 mg/day (95% CI: -1219, -262 mg/day), and was the lowest among the three methods. Mean bias for the Tanaka method was -2305 mg/day (95% CI: -2735, -1875 mg/day). Mean bias for the INTERSALT method was -2797 mg/day (95% CI: -3245, -2349 mg/day), and was the highest of the three methods. Bland-Altman plots indicated that all three methods underestimated 24-h urinary sodium excretion. The Kawasaki, INTERSALT and Tanaka methods for estimation of 24-h urinary sodium excretion using spot urines all underestimated true 24-h urinary sodium excretion in this sample of Chinese adults. Among the three methods, the Kawasaki method was the least biased, but was still relatively inaccurate. A more accurate method is needed to estimate 24-h urinary sodium excretion from spot urine for assessment of dietary sodium intake in China. PMID:26895296
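The bias and Bland-Altman statistics used above reduce to the mean and spread of the estimated-minus-measured differences. A minimal sketch, with the 95% limits of agreement added for completeness:

```python
import numpy as np

def bland_altman(estimated, measured):
    """Mean bias (estimated minus measured) and 95% limits of agreement,
    as used to compare spot-urine estimates against 24-h urinary sodium."""
    estimated = np.asarray(estimated, float)
    measured = np.asarray(measured, float)
    diff = estimated - measured
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # limits of agreement
    return bias, loa
```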
Climate change vulnerability for species-Assessing the assessments.
Wheatley, Christopher J; Beale, Colin M; Bradbury, Richard B; Pearce-Higgins, James W; Critchlow, Rob; Thomas, Chris D
2017-09-01
Climate change vulnerability assessments are commonly used to identify species at risk from global climate change, but the wide range of methodologies available makes it difficult for end users, such as conservation practitioners or policymakers, to decide which method to use as a basis for decision-making. In this study, we evaluate whether different assessments consistently assign species to the same risk categories and whether any of the existing methodologies perform well at identifying climate-threatened species. We compare the outputs of 12 climate change vulnerability assessment methodologies, using both real and simulated species, and validate the methods using historic data for British birds and butterflies (i.e. using historical data to assign risks and more recent data for validation). Our results show that the different vulnerability assessment methods are not consistent with one another; different risk categories are assigned for both the real and simulated sets of species. Validation of the different vulnerability assessments suggests that methods incorporating historic trend data into the assessment perform best at predicting distribution trends in subsequent time periods. This study demonstrates that climate change vulnerability assessments should not be used interchangeably due to the poor overall agreement between methods when considering the same species. The results of our validation provide more support for the use of trend-based rather than purely trait-based approaches, although further validation will be required as data become available. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
A hydrostatic weighing method using total lung capacity and a small tank.
Warner, J G; Yeater, R; Sherwood, L; Weber, K
1986-01-01
The purpose of this study was to establish the validity and reliability of a hydrostatic weighing method using total lung capacity (measuring vital capacity with a respirometer at the time of weighing), the prone position, and a small oblong tank. The validity of the method was established by comparing the TLC prone (tank) method against three hydrostatic weighing methods administered in a pool. The three methods included residual volume seated, TLC seated and TLC prone. Eighty male and female subjects were underwater weighed using each of the four methods. Validity coefficients for per cent body fat between the TLC prone (tank) method and the RV seated (pool), TLC seated (pool) and TLC prone (pool) methods were .98, .99 and .99, respectively. A randomised complete block ANOVA found significant differences between the RV seated (pool) method and each of the three TLC methods with respect to both body density and per cent body fat. The differences were negligible with respect to HW error. Reliability of the TLC prone (tank) method was established by weighing twenty subjects three different times with ten-minute time intervals between testing. Multiple correlations yielded reliability coefficients for body density and per cent body fat values of .99 and .99, respectively. It was concluded that the TLC prone (tank) method is valid, reliable and a favourable method of hydrostatic weighing. PMID:3697596
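For reference, the underlying computation in hydrostatic weighing is body density from the air and underwater masses corrected for the lung gas volume, converted to per cent fat with an equation such as Siri's. A sketch under those standard formulas (the study's exact corrections, e.g. for gastrointestinal gas, are not shown; the example values are invented and use a residual-volume style measurement):

```python
def body_density(mass_air_kg, mass_water_kg, water_density_kg_per_l, lung_volume_l):
    """Hydrostatic weighing: Db = Ma / ((Ma - Mw)/Dw - V_lung), where V_lung is the
    lung gas volume at the time of weighing (residual volume in the RV method,
    total lung capacity in the TLC methods)."""
    body_volume_l = (mass_air_kg - mass_water_kg) / water_density_kg_per_l - lung_volume_l
    return mass_air_kg / body_volume_l  # kg/L, numerically equal to g/cm3

def siri_percent_fat(density_g_per_cm3):
    """Siri's two-compartment equation for per cent body fat."""
    return 495.0 / density_g_per_cm3 - 450.0

# Invented RV-style example: 70 kg in air, 3.05 kg underwater, water 0.9957 kg/L, RV 1.2 L
print(siri_percent_fat(body_density(70.0, 3.05, 0.9957, 1.2)))  # roughly 17% body fat
```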
The Arthroscopic Surgical Skill Evaluation Tool (ASSET).
Koehler, Ryan J; Amsdell, Simon; Arendt, Elizabeth A; Bisson, Leslie J; Braman, Jonathan P; Bramen, Jonathan P; Butler, Aaron; Cosgarea, Andrew J; Harner, Christopher D; Garrett, William E; Olson, Tyson; Warme, Winston J; Nicandri, Gregg T
2013-06-01
Surgeries employing arthroscopic techniques are among the most commonly performed in orthopaedic clinical practice; however, valid and reliable methods of assessing the arthroscopic skill of orthopaedic surgeons are lacking. The Arthroscopic Surgery Skill Evaluation Tool (ASSET) will demonstrate content validity, concurrent criterion-oriented validity, and reliability when used to assess the technical ability of surgeons performing diagnostic knee arthroscopic surgery on cadaveric specimens. Cross-sectional study; Level of evidence, 3. Content validity was determined by a group of 7 experts using the Delphi method. Intra-articular performance of a right and left diagnostic knee arthroscopic procedure was recorded for 28 residents and 2 sports medicine fellowship-trained attending surgeons. Surgeon performance was assessed by 2 blinded raters using the ASSET. Concurrent criterion-oriented validity, interrater reliability, and test-retest reliability were evaluated. Content validity: The content development group identified 8 arthroscopic skill domains to evaluate using the ASSET. Concurrent criterion-oriented validity: Significant differences in the total ASSET score (P < .05) between novice, intermediate, and advanced experience groups were identified. Interrater reliability: The ASSET scores assigned by each rater were strongly correlated (r = 0.91, P < .01), and the intraclass correlation coefficient between raters for the total ASSET score was 0.90. Test-retest reliability: There was a significant correlation between ASSET scores for both procedures attempted by each surgeon (r = 0.79, P < .01). The ASSET appears to be a useful, valid, and reliable method for assessing surgeon performance of diagnostic knee arthroscopic surgery in cadaveric specimens. Studies are ongoing to determine its generalizability to other procedures as well as to the live operating room and other simulated environments.
DOT National Transportation Integrated Search
2018-01-11
Background: This study sought to systematically search the literature to identify reliable and valid survey instruments for fatigue measurement in the Emergency Medical Services (EMS) occupational setting. Methods: A systematic review study design wa...
Validating Performance Level Descriptors (PLDs) for the AP® Environmental Science Exam
ERIC Educational Resources Information Center
Reshetar, Rosemary; Kaliski, Pamela; Chajewski, Michael; Lionberger, Karen
2012-01-01
This presentation summarizes a pilot study conducted after the May 2011 administration of the AP Environmental Science Exam. The study used analytical methods based on scaled anchoring as input to a Performance Level Descriptor validation process that solicited systematic input from subject matter experts.
Psychometric and cognitive validation of a social capital measurement tool in Peru and Vietnam.
De Silva, Mary J; Harpham, Trudy; Tuan, Tran; Bartolini, Rosario; Penny, Mary E; Huttly, Sharon R
2006-02-01
Social capital is a relatively new concept which has attracted significant attention in recent years. No consensus has yet been reached on how to measure social capital, resulting in a large number of different tools available. While psychometric validation methods such as factor analysis have been used by a few studies to assess the internal validity of some tools, these techniques rely on data already collected by the tool and are therefore not capable of eliciting what the questions are actually measuring. The Young Lives (YL) study includes quantitative measures of caregiver's social capital in four countries (Vietnam, Peru, Ethiopia, and India) using a short version of the Adapted Social Capital Assessment Tool (SASCAT). A range of different psychometric methods including factor analysis were used to evaluate the construct validity of SASCAT in Peru and Vietnam. In addition, qualitative cognitive interviews with 20 respondents from Peru and 24 respondents from Vietnam were conducted to explore what each question is actually measuring. We argue that psychometric validation techniques alone are not sufficient to adequately validate multi-faceted social capital tools for use in different cultural settings. Psychometric techniques show SASCAT to be a valid tool reflecting known constructs and displaying postulated links with other variables. However, results from the cognitive interviews present a more mixed picture with some questions being appropriately interpreted by respondents, and others displaying significant differences between what the researchers intended them to measure and what they actually do. Using evidence from a range of methods of assessing validity has enabled the modification of an existing instrument into a valid and low cost tool designed to measure social capital within larger surveys in Peru and Vietnam, with the potential for use in other developing countries following local piloting and cultural adaptation of the tool.
Kim, SungHwan; Lin, Chien-Wei; Tseng, George C
2016-07-01
Supervised machine learning is widely applied to transcriptomic data to predict disease diagnosis, prognosis or survival. Robust and interpretable classifiers with high accuracy are usually favored for their clinical and translational potential. The top scoring pair (TSP) algorithm is an example that applies a simple rank-based algorithm to identify rank-altered gene pairs for classifier construction. Although many classification methods perform well in cross-validation of a single expression profile, the performance usually decreases greatly in cross-study validation (i.e. the prediction model is established in the training study and applied to an independent test study) for all machine learning methods, including TSP. The failure of cross-study validation has largely diminished the potential translational and clinical values of the models. The purpose of this article is to develop a meta-analytic top scoring pair (MetaKTSP) framework that combines multiple transcriptomic studies and generates a robust prediction model applicable to independent test studies. We proposed two frameworks, by averaging TSP scores or by combining P-values from individual studies, to select the top gene pairs for model construction. We applied the proposed methods to simulated data sets and three large-scale real applications in breast cancer, idiopathic pulmonary fibrosis and pan-cancer methylation. The results showed superior cross-study validation accuracy and biomarker selection for the new meta-analytic framework. In conclusion, combining multiple omics data sets in the public domain increases the robustness and accuracy of the classification model, which will ultimately improve disease understanding and clinical treatment decisions to benefit patients. An R package, MetaKTSP, is available online (http://tsenglab.biostat.pitt.edu/software.htm). Contact: ctseng@pitt.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
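The TSP idea mentioned above can be written in a few lines: for every gene pair, compare the probability of a rank inversion between the two classes and keep the pair with the largest difference. A brute-force sketch (quadratic in the number of genes, so feature pre-filtering would be needed in practice; this is not the MetaKTSP meta-analytic variant):

```python
import numpy as np
from itertools import combinations

def top_scoring_pair(X, y):
    """Rank-based TSP score for two-class labels coded 0/1: for genes (i, j),
    score = |P(X_i < X_j | class 0) - P(X_i < X_j | class 1)|; the pair with the
    largest score defines the classifier."""
    X = np.asarray(X, float)   # samples x genes
    y = np.asarray(y)
    best, best_pair = -1.0, None
    for i, j in combinations(range(X.shape[1]), 2):
        p0 = np.mean(X[y == 0, i] < X[y == 0, j])
        p1 = np.mean(X[y == 1, i] < X[y == 1, j])
        score = abs(p0 - p1)
        if score > best:
            best, best_pair = score, (i, j)
    return best_pair, best
```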
Validity and inter-observer reliability of subjective hand-arm vibration assessments.
Coenen, Pieter; Formanoy, Margriet; Douwes, Marjolein; Bosch, Tim; de Kraker, Heleen
2014-07-01
Exposure to mechanical vibrations at work (e.g., due to handling powered tools) is a potential occupational risk as it may cause upper extremity complaints. However, reliable and valid assessment methods for vibration exposure at work are lacking. Measuring hand-arm vibration objectively is often difficult and expensive, while the manufacturer-provided information that is often used instead lacks detail. Therefore, a subjective hand-arm vibration assessment method was tested for validity and inter-observer reliability. In an experimental protocol, sixteen tasks handling powered tools were executed by two workers. Hand-arm vibration was assessed subjectively by 16 observers according to the proposed subjective assessment method. As a gold-standard reference, hand-arm vibration was measured objectively using a vibration measurement device. Weighted κ's were calculated to assess validity, and intra-class correlation coefficients (ICCs) were calculated to assess inter-observer reliability. Inter-observer reliability of the subjective assessments, depicting the agreement among observers, can be expressed by an ICC of 0.708 (0.511-0.873). The validity of the subjective assessments compared with the gold-standard reference can be expressed by a weighted κ of 0.535 (0.285-0.785). In addition, the percentage of exact agreement between the subjective assessment and the objective measurement was relatively low (i.e., 52% of all tasks). This study shows that subjectively assessed hand-arm vibrations are fairly reliable among observers and moderately valid. This assessment method is a first attempt to use subjective risk assessments of hand-arm vibration. Although this assessment method could benefit from future improvement, it can be of use in future studies and in field-based ergonomic assessments. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
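For readers unfamiliar with the statistics used here, the sketch below computes a weighted kappa and the percentage of exact agreement from hypothetical ordinal ratings; the study's category definitions and weighting scheme may differ.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal vibration ratings (0 = low, 1 = medium, 2 = high)
observer = [2, 1, 0, 2, 1, 1, 0, 2]
reference = [2, 2, 0, 1, 1, 1, 0, 2]  # objective measurement binned to the same scale

kappa_w = cohen_kappa_score(observer, reference, weights="linear")      # weighted kappa
exact_agreement = np.mean(np.array(observer) == np.array(reference))    # % exact agreement
print(f"weighted kappa = {kappa_w:.2f}, exact agreement = {exact_agreement:.0%}")
```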
Tan, Zhirong; Ouyang, Dongsheng; Chen, Yao; Zhou, Gan; Cao, Shan; Wang, Yicheng; Peng, Xiujuan; Zhou, Honghao
2010-08-01
A sensitive and specific liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS/MS) method has been developed and validated for the identification and quantification of clebopride in human plasma using itopride as an internal standard. The method involves a simple liquid-liquid extraction. The analytes were separated by isocratic elution on a CAPCELL MG-III C(18) (5 μm, 150 mm x 2.1 mm i.d.) column and analyzed in multiple reaction monitoring (MRM) mode with a positive electrospray ionization (ESI) interface using the respective [M+H](+) ions, m/z 373.9 → m/z 184.0 for clebopride and m/z 359.9 → m/z 71.5 for itopride. The method was validated over the concentration range of 69.530-4450.0 pg/ml for clebopride. Within- and between-batch precision (RSD%) was within 6.83%, and accuracy ranged from -8.16 to 1.88%. The LLOQ was 69.530 pg/ml. The extraction recovery was on average 77% for clebopride. The validated method was used to study the pharmacokinetic profile of clebopride in human plasma after oral administration of clebopride. Copyright 2010. Published by Elsevier B.V.
Winzer, Eva; Luger, Maria; Schindler, Karin
2018-06-01
Regular monitoring of food intake is hardly integrated into clinical routine. Therefore, the aim was to examine the validity, accuracy, and applicability of an appropriate, quick and easy-to-use tool for recording food intake in a clinical setting. Two digital photography methods, the postMeal method with a picture after the meal and the pre-postMeal method with a picture before and after the meal, and the visual estimation method (plate diagram; PD) were compared against the reference method (weighed food records; WFR). A total of 420 dishes from lunch (7 weeks) were estimated with both photography methods and the visual method. Validity, applicability, accuracy, and precision of the estimation methods, and additionally food waste, macronutrient composition, and energy content, were examined. Tests of validity revealed stronger correlations for the photography methods (postMeal: r = 0.971, p < 0.001; pre-postMeal: r = 0.995, p < 0.001) compared with the visual estimation method (r = 0.810; p < 0.001). The pre-postMeal method showed smaller variability (bias < 1 g) and also smaller overestimation and underestimation. This method accurately and precisely estimated portion sizes in all food items. Furthermore, the total food waste was 22% for lunch over the study period. The highest food waste was observed in salads and the lowest in desserts. The pre-postMeal digital photography method is valid, accurate, and applicable for monitoring food intake in a clinical setting, enabling quantitative and qualitative dietary assessment. Thus, nutritional care might be initiated earlier. This method might also be advantageous for quantitative and qualitative evaluation of food waste, with a resulting reduction in costs.
Assessing Discriminative Performance at External Validation of Clinical Prediction Models
Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.
2016-01-01
Introduction External validation studies are essential to study the generalizability of prediction models. Recently, a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753
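The quantity being compared between settings is the c-statistic. A minimal sketch of its computation at development and at external validation for a logistic regression model, using scikit-learn and assuming the data splits are supplied by the user:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def external_c_statistics(X_dev, y_dev, X_val, y_val):
    """Fit a prediction model on the development set and report the c-statistic
    (ROC AUC) apparent on development and observed at external validation."""
    model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    c_dev = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
    c_val = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    return c_dev, c_val
```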
USDA-ARS's Scientific Manuscript database
The purpose of this study was to develop a Single-Lab Validated Method using high-performance liquid chromatography (HPLC) with different detectors (diode array detector - DAD, fluorescence detector - FLD, and mass spectrometer - MS) for determination of seven B-complex vitamins (B1 - thiamin, B2 – ...
A Model-Based Method for Content Validation of Automatically Generated Test Items
ERIC Educational Resources Information Center
Zhang, Xinxin; Gierl, Mark
2016-01-01
The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…
Causal inference with measurement error in outcomes: Bias analysis and estimation methods.
Shu, Di; Yi, Grace Y
2017-01-01
Inverse probability weighting estimation has been widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
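For context, the estimator whose behaviour is being studied is the inverse probability weighting (IPW) estimator of the average treatment effect. A minimal, normalized-weight sketch without the paper's measurement-error corrections:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, treatment, outcome):
    """IPW estimate of the average treatment effect E[Y(1)] - E[Y(0)] using
    weights 1/e(X) and 1/(1 - e(X)), where e(X) is a logistic-regression
    propensity score (normalized/Hajek form; no error correction shown)."""
    ps = LogisticRegression(max_iter=1000).fit(X, treatment).predict_proba(X)[:, 1]
    t = np.asarray(treatment, float)
    y = np.asarray(outcome, float)
    mu1 = np.sum(t * y / ps) / np.sum(t / ps)
    mu0 = np.sum((1 - t) * y / (1 - ps)) / np.sum((1 - t) / (1 - ps))
    return mu1 - mu0
```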
NASA Astrophysics Data System (ADS)
Juneja, P.; Harris, E. J.; Evans, P. M.
2014-03-01
Realistic modelling of breast deformation requires the breast tissue to be segmented into fibroglandular and fatty tissue and assigned suitable material properties. A number of breast tissue segmentation methods have been proposed and used in the literature. The purpose of this study was to validate and compare the accuracy of various segmentation methods and to investigate the effect of the tissue distribution on segmentation accuracy. Computed tomography (CT) data for 24 patients, in both supine and prone positions, were segmented into fibroglandular and fatty tissue. The segmentation methods explored were: physical density thresholding; interactive thresholding; fuzzy c-means clustering (FCM) with three classes (FCM3) and four classes (FCM4); and k-means clustering. Validation was done in two stages: first, a new approach, supine-prone validation, based on the assumption that the breast composition should appear the same in the supine and prone scans, was used; second, outlines from three experts were used for validation. This study found that FCM3 gave the most accurate segmentation of breast tissue from CT data and that segmentation accuracy is adversely affected by the sparseness of the fibroglandular tissue distribution.
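As a simple illustration of intensity-based tissue clustering of this kind, the sketch below applies k-means (one of the compared methods) to Hounsfield-unit values inside a breast mask; the study's best-performing variant was fuzzy c-means with three classes, which is not shown here.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_breast_ct(hu_values, n_classes=2):
    """Cluster CT voxel intensities (Hounsfield units) within the breast mask
    into fatty and fibroglandular tissue; returns a boolean fibroglandular mask."""
    hu = np.asarray(hu_values, float).reshape(-1, 1)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(hu)
    # By convention, the cluster with the higher mean HU is fibroglandular tissue
    means = [hu[labels == k].mean() for k in range(n_classes)]
    fibro_label = int(np.argmax(means))
    return labels == fibro_label
```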
The Use of Virtual Reality in the Study of People's Responses to Violent Incidents
Rovira, Aitor; Swapp, David; Spanlang, Bernhard; Slater, Mel
2009-01-01
This paper reviews experimental methods for the study of the responses of people to violence in digital media, and in particular considers the issues of internal validity and ecological validity or generalisability of results to events in the real world. Experimental methods typically involve a significant level of abstraction from reality, with participants required to carry out tasks that are far removed from violence in real life, and hence their ecological validity is questionable. On the other hand studies based on field data, while having ecological validity, cannot control multiple confounding variables that may have an impact on observed results, so that their internal validity is questionable. It is argued that immersive virtual reality may provide a unification of these two approaches. Since people tend to respond realistically to situations and events that occur in virtual reality, and since virtual reality simulations can be completely controlled for experimental purposes, studies of responses to violence within virtual reality are likely to have both ecological and internal validity. This depends on a property that we call ‘plausibility’ – including the fidelity of the depicted situation with prior knowledge and expectations. We illustrate this with data from a previously published experiment, a virtual reprise of Stanley Milgram's 1960s obedience experiment, and also with pilot data from a new study being developed that looks at bystander responses to violent incidents. PMID:20076762
Amano, Nobuko; Nakamura, Tomiyo
2018-02-01
The visual estimation method is commonly used in hospitals and other care facilities to evaluate food intake through estimation of plate waste. In Japan, no previous studies have investigated the validity and reliability of this method under the routine conditions of a hospital setting. The present study aimed to evaluate the validity and reliability of the visual estimation method in long-term inpatients with different levels of eating disability caused by Alzheimer's disease. The patients were provided different therapeutic diets presented in various food types. This study was performed between February and April 2013, and 82 patients with Alzheimer's disease were included. Plate waste was evaluated for the 3 main daily meals over a total of 21 days (7 consecutive days during each of the 3 months), yielding a total of 4851 meals, of which 3984 were included. Plate waste was measured by the nurses through the visual estimation method, and by the hospital's registered dietitians through the actual measurement method. The actual measurement method was first validated to serve as a reference, and the level of agreement between the two methods was then determined. The month, time of day, type of food provided, and patients' physical characteristics were considered in the analysis. For the 3984 meals included in the analysis, the level of agreement between the measurement methods was 78.4%. Disagreement consisted of 3.8% underestimation and 17.8% overestimation. Cronbach's α (0.60, P < 0.001) indicated that the reliability of the visual estimation method was within the acceptable range. The visual estimation method was found to be a valid and reliable method for estimating food intake in patients with different levels of eating impairment. The successful implementation and use of the method depends upon adequate training and motivation of the nurses and care staff involved. Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.
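Cronbach's α, used above as the reliability index, can be computed directly from an observations-by-items score matrix. A minimal sketch; the study's exact scoring of meal components is not reproduced.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (observations x items) score matrix."""
    X = np.asarray(items, float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1.0 - item_var / total_var)
```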
Šenk, Miroslav; Chèze, Laurence
2010-06-01
Optoelectronic tracking systems are rarely used in 3D studies examining shoulder movements that include the scapula. Among the reasons is the substantial slippage of skin markers with respect to the scapula. Methods using electromagnetic tracking devices are validated and frequently applied. Thus, the aim of this study was to develop a new method for in vivo optoelectronic scapular capture that deals with the accepted accuracy issues of the validated methods. Eleven arm positions in three anatomical planes were examined in five subjects in static mode. The method was based on local optimisation, and recalculation procedures were performed using a set of five scapular surface markers. The scapular rotations derived from the recalculation-based method yielded RMS errors comparable with those of the frequently used electromagnetic scapular methods (RMS up to 12.6° for 150° arm elevation). The results indicate that the present method can be used, with careful consideration, for 3D kinematic studies examining different shoulder movements.
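Local optimisation of a marker cluster of this kind is usually a least-squares rigid-body fit. The sketch below shows the standard SVD (Kabsch) solution as an assumption about the implementation rather than the authors' exact procedure.

```python
import numpy as np

def rigid_fit(model_points, measured_points):
    """Least-squares rigid-body fit (SVD/Kabsch) of a marker cluster to its
    measured positions. Returns rotation R and translation t such that
    R @ p + t maps each model point p to its measured position."""
    P = np.asarray(model_points, float)     # N x 3 reference marker coordinates
    Q = np.asarray(measured_points, float)  # N x 3 measured marker coordinates
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t
```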
78 FR 56718 - Draft Guidance for Industry on Bioanalytical Method Validation; Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-13
...] Draft Guidance for Industry on Bioanalytical Method Validation; Availability AGENCY: Food and Drug... availability of a draft guidance for industry entitled ``Bioanalytical Method Validation.'' The draft guidance is intended to provide recommendations regarding analytical method development and validation for the...
Moye, Jennifer; Azar, Annin R.; Karel, Michele J.; Gurrera, Ronald J.
2016-01-01
Does instrument based evaluation of consent capacity increase the precision and validity of competency assessment or does ostensible precision provide a false sense of confidence without in fact improving validity? In this paper we critically examine the evidence for construct validity of three instruments for measuring four functional abilities important in consent capacity: understanding, appreciation, reasoning, and expressing a choice. Instrument based assessment of these abilities is compared through investigation of a multi-trait multi-method matrix in 88 older adults with mild to moderate dementia. Results find variable support for validity. There appears to be strong evidence for good hetero-method validity for the measurement of understanding, mixed evidence for validity in the measurement of reasoning, and strong evidence for poor hetero-method validity for the concepts of appreciation and expressing a choice, although the latter is likely due to extreme range restrictions. The development of empirically based tools for use in capacity evaluation should ultimately enhance the reliability and validity of assessment, yet clearly more research is needed to define and measure the constructs of decisional capacity. We would also emphasize that instrument based assessment of capacity is only one part of a comprehensive evaluation of competency which includes consideration of diagnosis, psychiatric and/or cognitive symptomatology, risk involved in the situation, and individual and cultural differences. PMID:27330455
A novel validation and calibration method for motion capture systems based on micro-triangulation.
Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M
2018-06-06
Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of an engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. Absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, caused mainly by scaling error, was reduced to 0.77 mm, while the correlation of errors with their distance from the origin decreased from 0.855 to 0.209. A simpler but less accurate absolute accuracy compensation method, using a tape measure over large distances, was also tested; it resulted in scaling compensation similar to that of the surveying method or of direct wand-size compensation with a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type which has not been and cannot be studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
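A minimal sketch of the two quantities discussed above: absolute accuracy as a coordinate RMSE and a global scale factor estimated about the common centroid. The compensation actually applied in the study may be more elaborate.

```python
import numpy as np

def rmse_and_scale(camera_xyz, reference_xyz):
    """RMSE between marker coordinates from the camera system and the surveying
    reference, plus a least-squares global scale factor about the common centroid."""
    C = np.asarray(camera_xyz, float)     # N x 3 camera coordinates
    R = np.asarray(reference_xyz, float)  # N x 3 reference coordinates
    rmse = np.sqrt(np.mean(np.sum((C - R) ** 2, axis=1)))
    Cc, Rc = C - C.mean(0), R - R.mean(0)
    scale = np.sum(Cc * Rc) / np.sum(Cc * Cc)  # scale mapping camera -> reference
    return rmse, scale
```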
Extension of the validation of AOAC Official Method 2005.06 for dc-GTX2,3: interlaboratory study.
Ben-Gigirey, Begoña; Rodríguez-Velasco, María L; Gago-Martínez, Ana
2012-01-01
AOAC Official Method 2005.06 for the determination of saxitoxin (STX)-group toxins in shellfish by LC with fluorescence detection with precolumn oxidation was previously validated and adopted First Action following a collaborative study. However, the method was not validated for all key STX-group toxins, and procedures to quantify some of them were not provided. With more STX-group toxin standards commercially available and modifications to procedures, it was possible to overcome some of these difficulties. The European Union Reference Laboratory for Marine Biotoxins conducted an interlaboratory exercise to extend the AOAC Official Method 2005.06 validation to dc-GTX2,3 and to compile precision data for several STX-group toxins. This paper reports the study design and the results obtained. The performance characteristics for dc-GTX2,3 (intralaboratory and interlaboratory precision, recovery, and theoretical quantification limit) were evaluated. The mean recoveries obtained for dc-GTX2,3 were, in general, low (53.1-58.6%). The relative standard deviation for reproducibility (RSDR%) for dc-GTX2,3 in all samples ranged from 28.2 to 45.7%, and HorRat values ranged from 1.5 to 2.8. The article also describes a hydrolysis protocol to convert GTX6 to NEO, which has been proven to be useful for the quantification of GTX6 while a GTX6 standard is not available. The performance of the participant laboratories in the application of this method was compared with that obtained in the original collaborative study of the method. Intralaboratory and interlaboratory precision data for several STX-group toxins, including dc-NEO and GTX6, are reported here. This study can be useful for laboratories determining STX-group toxins to fully implement AOAC Official Method 2005.06 for official paralytic shellfish poisoning control. However, the overall quantitative performance obtained with the method was poor for certain toxins.
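HorRat values like those reported above are conventionally the observed reproducibility RSD divided by the value predicted by the Horwitz equation. A sketch under that common definition; the study may have used a concentration-adjusted variant, and the example numbers are invented.

```python
def horrat(rsd_r_percent, mass_fraction):
    """HorRat = observed reproducibility RSD / Horwitz-predicted RSD, where
    PRSD_R(%) = 2 * C**(-0.1505) and C is the analyte mass fraction
    (e.g. 1e-6 for 1 mg/kg). Values of roughly 0.5-2 are usually acceptable."""
    predicted = 2.0 * mass_fraction ** (-0.1505)
    return rsd_r_percent / predicted

# Invented example: RSD_R of 30% at a mass fraction of 1e-6 (about 1 mg/kg)
print(round(horrat(30.0, 1e-6), 2))  # roughly 1.9
```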
Assessing the stability of human locomotion: a review of current measures
Bruijn, S. M.; Meijer, O. G.; Beek, P. J.; van Dieën, J. H.
2013-01-01
Falling poses a major threat to the steadily growing population of the elderly in modern-day society. A major challenge in the prevention of falls is the identification of individuals who are at risk of falling owing to an unstable gait. At present, several methods are available for estimating gait stability, each with its own advantages and disadvantages. In this paper, we review the currently available measures: the maximum Lyapunov exponent (λS and λL), the maximum Floquet multiplier, variability measures, long-range correlations, extrapolated centre of mass, stabilizing and destabilizing forces, foot placement estimator, gait sensitivity norm and maximum allowable perturbation. We explain what these measures represent and how they are calculated, and we assess their validity, divided up into construct validity, predictive validity in simple models, convergent validity in experimental studies, and predictive validity in observational studies. We conclude that (i) the validity of variability measures and λS is best supported across all levels, (ii) the maximum Floquet multiplier and λL have good construct validity, but negative predictive validity in models, negative convergent validity and (for λL) negative predictive validity in observational studies, (iii) long-range correlations lack construct validity and predictive validity in models and have negative convergent validity, and (iv) measures derived from perturbation experiments have good construct validity, but data are lacking on convergent validity in experimental studies and predictive validity in observational studies. In closing, directions for future research on dynamic gait stability are discussed. PMID:23516062
Carmo, Ana Paula Barbosa do; Borborema, Manoella; Ribeiro, Stephan; De-Oliveira, Ana Cecilia Xavier; Paumgartten, Francisco Jose Roma; Moreira, Davyson de Lima
2017-01-01
Primaquine (PQ) diphosphate is an 8-aminoquinoline antimalarial drug with unique therapeutic properties. It is the only drug that prevents relapses of Plasmodium vivax or Plasmodium ovale infections. In this study, a fast, sensitive, cost-effective, and robust method for the extraction and high-performance liquid chromatography with diode array ultraviolet detection (HPLC-DAD-UV) analysis of PQ in the blood plasma was developed and validated. After plasma protein precipitation, PQ was obtained by liquid-liquid extraction and analyzed by HPLC-DAD-UV with a modified-silica cyanopropyl column (250 mm × 4.6 mm i.d., 5 μm) as the stationary phase and a mixture of acetonitrile and 10 mM ammonium acetate buffer (pH = 3.80) (45:55) as the mobile phase. The flow rate was 1.0 mL·min-1, the oven temperature was 50 °C, and absorbance was measured at 264 nm. The method was validated for linearity, intra-day and inter-day precision, accuracy, recovery, and robustness. The detection (LOD) and quantification (LOQ) limits were 1.0 and 3.5 ng·mL-1, respectively. The method was used to analyze the plasma of female DBA-2 mice treated with 20 mg·kg-1 (oral) PQ diphosphate. By combining a simple, low-cost extraction procedure with a sensitive, precise, accurate, and robust method, it was possible to analyze PQ in small volumes of plasma. The new method presents lower LOD and LOQ limits and requires a shorter analysis time and smaller plasma volumes than those of previously reported HPLC methods with DAD-UV detection. The new validated method is suitable for kinetic studies of PQ in small rodents, including mouse models for the study of malaria.
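LOD and LOQ figures of this kind are often derived from a low-level calibration line as 3.3·σ/S and 10·σ/S. The sketch below shows that common ICH-style computation; this is an assumption, since the paper may instead have used a signal-to-noise criterion.

```python
import numpy as np

def lod_loq(conc, response):
    """LOD and LOQ from a calibration line: 3.3*sigma/S and 10*sigma/S, with
    sigma the residual standard deviation and S the slope."""
    conc = np.asarray(conc, float)
    resp = np.asarray(response, float)
    slope, intercept = np.polyfit(conc, resp, 1)
    resid = resp - (slope * conc + intercept)
    sigma = np.sqrt(np.sum(resid ** 2) / (len(conc) - 2))  # residual SD
    return 3.3 * sigma / slope, 10.0 * sigma / slope
```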
Singh, Sheelendra Pratap; Dwivedi, Nistha; Raju, Kanumuri Siva Rama; Taneja, Isha; Wahajuddin, Mohammad
2016-01-01
United States Environmental Protection Agency has recommended estimating pyrethroids’ risk using cumulative exposure. For cumulative risk assessment, it would be useful to have a bioanalytical method for quantification of one or several pyrethroids simultaneously in a small sample volume to support toxicokinetic studies. Therefore, in the present study, a simple, sensitive and high-throughput ultraperformance liquid chromatography–tandem mass spectrometry method was developed and validated for simultaneous analysis of seven pyrethroids (fenvalerate, fenpropathrin, bifenthrin, lambda-cyhalothrin, cyfluthrin, cypermethrin and deltamethrin) in 100 µL of rat plasma. A simple single-step protein precipitation method was used for the extraction of target compounds. The total chromatographic run time of the method was 5 min. The chromatographic system used a Supelco C18 column and isocratic elution with a mobile phase consisting of methanol and 5 mM ammonium formate in the ratio of 90 : 10 (v/v). Mass spectrometer (API 4000) was operated in multiple reaction monitoring positive-ion mode using the electrospray ionization technique. The calibration curves were linear in the range of 7.8–2,000 ng/mL with correlation coefficients of ≥0.99. All validation parameters such as precision, accuracy, recovery, matrix effect and stability met the acceptance criteria according to the regulatory guidelines. The method was successfully applied to the toxicokinetic study of cypermethrin in rats. To the best of our knowledge, this is the first LC–MS-MS method for the simultaneous analysis of pyrethroids in rat plasma. This validated method with minimal modification can also be utilized for forensic and clinical toxicological applications due to its simplicity, sensitivity and rapidity. PMID:26801239
Ban, Jong-Wook; Emparanza, José Ignacio; Urreta, Iratxe; Burls, Amanda
2016-01-01
Background Many new clinical prediction rules are derived and validated. But the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with overestimation of clinical prediction rules' performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics. Methods Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. For each validation study, it was assessed whether 7 design and 7 reporting characteristics were properly described. Results A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using a case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2-4.3) larger than validation studies using a cohort or unclear design. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2-3.1) compared to complete, partial and unclear verification. The summary relative DOR (RDOR) of validation studies with inadequate sample size was 1.9 (95% CI: 1.2-3.1) compared to studies with adequate sample size. Study site, reliability, and the clinical prediction rule were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively. Conclusion Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved. PMID:26730980
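The summary statistic pooled in these meta-analyses is the diagnostic odds ratio from each study's 2 × 2 table. A one-line sketch with invented counts:

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """Diagnostic odds ratio from a 2x2 table: (TP*TN)/(FP*FN); meta-analyses
    pool these (and relative DORs) across validation studies of a rule."""
    return (tp * tn) / (fp * fn)

# Invented example: sensitivity 0.90 and specificity 0.80 in 100 cases / 100 controls
print(diagnostic_odds_ratio(tp=90, fp=20, fn=10, tn=80))  # 36.0
```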
Gaudin, Valérie
2017-09-01
Screening methods are used as a first-line approach to detect the presence of antibiotic residues in food of animal origin. The validation process guarantees that a method is fit for purpose and suited to regulatory requirements, and provides evidence of its performance. This article focuses on intra-laboratory validation. The first step in validation is characterisation of performance, and the second step is validation itself against pre-established criteria. Validation approaches can be absolute (a single method) or relative (comparison of methods), overall (combination of several characteristics in one) or criterion-by-criterion. Various approaches to validation, in the form of regulations, guidelines or standards, are presented and discussed to draw conclusions on their potential application to different residue screening methods, and to determine whether or not they reach the same conclusions. The approach by comparison of methods is not suitable for screening methods for antibiotic residues. Overall approaches, such as probability of detection (POD) and the accuracy profile, are increasingly used in other fields of application and may be of interest for screening methods for antibiotic residues. Finally, the criterion-by-criterion approach (Decision 2002/657/EC and the European guideline for the validation of screening methods), usually applied to screening methods for antibiotic residues, introduced a major characteristic and an improvement in validation, i.e. the detection capability (CCβ). In conclusion, screening methods are constantly evolving, thanks to the development of new biosensors and of liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) methods. There have been clear changes in validation approaches over the last 20 years. Continued progress is required, and perspectives for future development of guidelines, regulations and standards for validation are presented here.
Validation of an asthma questionnaire for use in healthcare workers
Delclos, G L; Arif, A A; Aday, L; Carson, A; Lai, D; Lusk, C; Stock, T; Symanski, E; Whitehead, L W; Benavides, F G; Antó, J M
2006-01-01
Background Previous studies have described increased occurrence of asthma among healthcare workers, but to our knowledge there are no validated survey questionnaires with which to study this occupational group. Aims To develop, validate, and refine a new survey instrument on asthma for use in epidemiological studies of healthcare workers. Methods An initial draft questionnaire, designed by a multidisciplinary team, used previously validated questions where possible; the occupational exposure section was developed by updating health services specific chemical lists through hospital walk‐through surveys and review of material safety data sheets. A cross‐sectional validation study was conducted in 118 non‐smoking subjects, who also underwent bronchial challenge testing, an interview with an industrial hygienist, and measurement of specific IgE antibodies to common aeroallergens. Results The final version consisted of 43 main questions in four sections. Time to completion of the questionnaire ranged from 13 to 25 minutes. Test–retest reliability of asthma and allergy items ranged from 75% to 94%, and internal consistency for these items was excellent (Cronbach's α ⩾ 0.86). Against methacholine challenge, an eight item combination of asthma related symptoms had a sensitivity of 71% and specificity of 70%; against a physician diagnosis of asthma, this same combination showed a sensitivity of 79% and specificity of 98%. Agreement between self‐reported exposures and industrial hygienist review was similar to previous studies and only moderate, indicating the need to incorporate more reliable methods of exposure assessment. Against the aerollergen panel, the best combinations of sensitivity and specificity were obtained for a history of allergies to dust, dust mite, and animals. Conclusions Initial evaluation of this new questionnaire indicates good validity and reliability, and further field testing and cross‐validation in a larger healthcare worker population is in progress. The need for development of more reliable occupational exposure assessment methods that go beyond self‐report is underscored. PMID:16497858
Culture Training: Validation Evidence for the Culture Assimilator.
ERIC Educational Resources Information Center
Mitchell, Terence R.; And Others
The culture assimilator, a programed self-instructional approach to culture training, is described and a series of laboratory experiments and field studies validating the culture assimilator are reviewed. These studies show that the culture assimilator is an effective method of decreasing some of the stress experienced when one works with people…
Bayesian data analysis in observational comparative effectiveness research: rationale and examples.
Olson, William H; Crivera, Concetta; Ma, Yi-Wen; Panish, Jessica; Mao, Lian; Lynch, Scott M
2013-11-01
Many comparative effectiveness research and patient-centered outcomes research studies will need to be observational for one or both of two reasons: first, randomized trials are expensive and time-consuming; and second, only observational studies can answer some research questions. It is generally recognized that there is a need to increase the scientific validity and efficiency of observational studies. Bayesian methods for the design and analysis of observational studies are scientifically valid and offer many advantages over frequentist methods, including, importantly, the ability to conduct comparative effectiveness research/patient-centered outcomes research more efficiently. Bayesian data analysis is being introduced into outcomes studies that we are conducting. Our purpose here is to describe our view of some of the advantages of Bayesian methods for observational studies and to illustrate both realized and potential advantages by describing studies we are conducting in which various Bayesian methods have been or could be implemented.
Gross, S; Janssen, S W J; de Vries, B; Terao, E; Daas, A; Buchheit, K-H
2009-10-01
The European Pharmacopoeia (Ph. Eur.) monograph Human tetanus immunoglobulin (0398) gives a clear outline of the in vivo assay to be performed to determine the potency of human tetanus immunoglobulins during their development. Furthermore, it states that an in vitro method shall be validated for the batch potency estimation. Since no further guidance is given on the in vitro assay, every control laboratory concerned is free to design and validate an in-house method. At the moment there is no agreed in vitro method available. The aim of this study was to validate and compare 2 alternative in vitro assays, i.e. an enzyme-linked immunoassay (EIA) and a toxoid inhibition assay (TIA), through an international collaborative study, in view of their eventual inclusion into the Ph. Eur.. The study was run in the framework of the Biological Standardisation Programme (BSP), under the aegis of the European Commission and the Council of Europe. The collaborative study reported here involved 21 laboratories (public and industry) from 15 countries. Initially, 3 samples with low, medium and high potencies were tested by EIA and TIA. Results showed good reproducibility and repeatability of the 2 in vitro methods. The correlation of the data with the in vivo potency assigned by the manufacturers however appeared initially poor for high potency samples. Thorough re-examination of the data showed that the in vivo potencies assigned by the manufacturers had to be corrected: one for potency loss at the time of in vitro testing and one because of a reporting error. After these corrections the values obtained by in vivo and in vitro methods were in close agreement. A supplementary collaborative work was carried out to validate the 2 methods for immunoglobulin products with high potencies. Eight laboratories (public and industry) took part in this additional study to test 3 samples with medium and high potencies by EIA and TIA. Results confirmed that the 2 alternative methods are comparable in terms of assay repeatability, precision and reproducibility. In all laboratories, both methods discriminated between the low, medium and high potency samples. Analysis of the data collected in this study showed a good correlation between EIA and TIA potency estimates as well as a close agreement between values obtained by in vitro and in vivo methods. The study demonstrated that EIA and TIA are suitable quality control methods for polyclonal human tetanus immunoglobulin, which can be standardised in a quality control laboratory using a quality assurance system. Consequently, the Ph. Eur. Group of Experts 6B on Human Blood and Blood products decided in April 2009 to include both methods as examples in the Ph. Eur. monograph 0398 on Human Tetanus immunoglobulin.
Koller, Ingrid; Levenson, Michael R.; Glück, Judith
2017-01-01
The valid measurement of latent constructs is crucial for psychological research. Here, we present a mixed-methods procedure for improving the precision of construct definitions, determining the content validity of items, evaluating the representativeness of items for the target construct, generating test items, and analyzing items on a theoretical basis. To illustrate the mixed-methods content-scaling-structure (CSS) procedure, we analyze the Adult Self-Transcendence Inventory, a self-report measure of wisdom (ASTI, Levenson et al., 2005). A content-validity analysis of the ASTI items was used as the basis of psychometric analyses using multidimensional item response models (N = 1215). We found that the new procedure produced important suggestions concerning five subdimensions of the ASTI that were not identifiable using exploratory methods. The study shows that the application of the suggested procedure leads to a deeper understanding of latent constructs. It also demonstrates the advantages of theory-based item analysis. PMID:28270777
Sghaier, Lilia; Cordella, Christophe B Y; Rutledge, Douglas N; Watiez, Mickaël; Breton, Sylvie; Sassiat, Patrick; Thiebaut, Didier; Vial, Jérôme
2016-05-01
Due to lipid oxidation, off-flavors, characterized by a fishy odor, are emitted during the heating of rapeseed oil in a fryer and affect the flavor of rapeseed oil even at low concentrations. Thus, there is a need for analytical methods to identify and quantify these products. To study the headspace composition of degraded rapeseed oil, and more specifically the compounds responsible for the fishy odor, a headspace trap gas chromatography with mass spectrometry method was developed and validated. Six volatile compounds formed during the degradation of rapeseed oil were quantified: 1-penten-3-one, (Z)-4-heptenal, hexanal, nonanal, (E,E)-heptadienal, and (E)-2-heptenal. Validation using accuracy profiles allowed us to determine the valid ranges of concentrations for each compound, with acceptance limits of 40% and tolerance limits of 80%. This method was then successfully applied to real samples of degraded oils. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Peer-Driven Justice: Development and Validation of the Teen Court Peer Influence Scale
ERIC Educational Resources Information Center
Smith, Scott; Chonody, Jill M.
2010-01-01
The authors report a validation study of the Teen Court Peer Influence Scale (TCPIS), a newly developed scale, to examine its factor structure, reliability, and evidence of validity. Methods: The scale was disseminated to 202 participants in six teen courts in the state of Florida, and the authors conducted exploratory factor analyses. Content…
An Evaluation of the Validity and Reliability of a Food Behavior Checklist Modified for Children
ERIC Educational Resources Information Center
Branscum, Paul; Sharma, Manoj; Kaye, Gail; Succop, Paul
2010-01-01
Objective: The objective of this study was to report the construct validity and internal consistency reliability of the Food Behavior Checklist modified for children (FBC-MC), with low-income, Youth Expanded Food and Nutrition Education Program (EFNEP)-eligible children. Methods: Using a cross-sectional research design, construct validity was…
ERIC Educational Resources Information Center
Wittich, Christopher M.; Pawlina, Wojciech; Drake, Richard L.; Szostek, Jason H.; Reed, Darcy A.; Lachman, Nirusha; McBride, Jennifer M.; Mandrekar, Jayawant N.; Beckman, Thomas J.
2013-01-01
Improving professional attitudes and behaviors requires critical self-reflection. Research on reflection is necessary to understand professionalism among medical students. The aims of this prospective validation study at the Mayo Medical School and Cleveland Clinic Lerner College of Medicine were: (1) to develop and validate a new instrument for…
Badran, Hani; Pluye, Pierre; Grad, Roland
2017-03-14
The Information Assessment Method (IAM) allows clinicians to report the cognitive impact, clinical relevance, intention to use, and expected patient health benefits associated with clinical information received by email. More than 15,000 Canadian physicians and pharmacists use the IAM in continuing education programs. In addition, information providers can use IAM ratings and feedback comments from clinicians to improve their products. Our general objective was to validate the IAM questionnaire for the delivery of educational material (ecological and logical content validity). Our specific objectives were to measure the relevance and evaluate the representativeness of IAM items for assessing information received by email. A 3-part mixed methods study was conducted (convergent design). In part 1 (quantitative longitudinal study), the relevance of IAM items was measured. Participants were 5596 physician members of the Canadian Medical Association who used the IAM. A total of 234,196 ratings were collected in 2012. The relevance of IAM items with respect to their main construct was calculated using descriptive statistics (relevance ratio R). In part 2 (qualitative descriptive study), the representativeness of IAM items was evaluated. A total of 15 family physicians completed semistructured face-to-face interviews. For each construct, we evaluated the representativeness of IAM items using a deductive-inductive thematic qualitative data analysis. In part 3 (mixing quantitative and qualitative parts), results from quantitative and qualitative analyses were reviewed, juxtaposed in a table, discussed with experts, and integrated. Thus, our final results are derived from the views of users (ecological content validation) and experts (logical content validation). Of the 23 IAM items, 21 were validated for content, while 2 were removed. In part 1 (quantitative results), 21 items were deemed relevant, while 2 items were deemed not relevant (R=4.86% [N=234,196] and R=3.04% [n=45,394], respectively). In part 2 (qualitative results), 22 items were deemed representative, while 1 item was not representative. In part 3 (mixing quantitative and qualitative results), the content validity of 21 items was confirmed, and the 2 nonrelevant items were excluded. A fully validated version was generated (IAM-v2014). This study produced a content validated IAM questionnaire that is used by clinicians and information providers to assess the clinical information delivered in continuing education programs. ©Hani Badran, Pierre Pluye, Roland Grad. Originally published in JMIR Medical Education (http://mededu.jmir.org), 14.03.2017.
Kogo, Haruki; Murata, Jun; Murata, Shin; Higashi, Toshio
2017-01-01
This study examined the validity of a practical evaluation method for pitting edema by comparing it to other methods, including circumference measurements and ultrasound image measurements. Fifty-one patients (102 legs) from a convalescent ward in Maruyama Hospital were recruited for study 1, and 47 patients (94 legs) from a convalescent ward in Morinaga Hospital were recruited for study 2. The relationship between the depth of the surface imprint and circumferential measurements, as well as the relationship between the depth of the surface imprint and the thickness of the subcutaneous soft tissue on an ultrasonogram, were analyzed using Spearman's rank correlation coefficient. There was no significant relationship between the surface imprint depth and circumferential measurements. However, there was a significant relationship between the depth of the surface imprint and the thickness of the subcutaneous soft tissue as measured on an ultrasonogram (correlation coefficient 0.736). Our findings suggest that our novel evaluation method for pitting edema, based on a measurement of the surface imprint depth, is both valid and useful.
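As a concrete illustration of the statistic used above, the following sketch computes Spearman's rank correlation for a pair of hypothetical measurement series; the values are invented and do not come from the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired measurements (not the study's data): surface imprint
# depth (mm) and subcutaneous soft-tissue thickness on ultrasound (mm).
imprint_depth = np.array([1.2, 3.4, 2.1, 5.0, 4.2, 0.8, 3.9, 2.7])
tissue_thickness = np.array([4.0, 7.5, 5.1, 9.8, 8.3, 3.6, 8.9, 6.2])

rho, p_value = spearmanr(imprint_depth, tissue_thickness)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")
```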
Andrade, Susan E.; Harrold, Leslie R.; Tjia, Jennifer; Cutrona, Sarah L.; Saczynski, Jane S.; Dodd, Katherine S.; Goldberg, Robert J.; Gurwitz, Jerry H.
2012-01-01
Purpose To perform a systematic review of the validity of algorithms for identifying cerebrovascular accidents (CVAs) or transient ischemic attacks (TIAs) using administrative and claims data. Methods PubMed and Iowa Drug Information Service (IDIS) searches of the English language literature were performed to identify studies published between 1990 and 2010 that evaluated the validity of algorithms for identifying CVAs (ischemic and hemorrhagic strokes, intracranial hemorrhage and subarachnoid hemorrhage) and/or TIAs in administrative data. Two study investigators independently reviewed the abstracts and articles to determine relevant studies according to pre-specified criteria. Results A total of 35 articles met the criteria for evaluation. Of these, 26 articles provided data to evaluate the validity of stroke, 7 reported the validity of TIA, 5 reported the validity of intracranial bleeds (intracerebral hemorrhage and subarachnoid hemorrhage), and 10 studies reported the validity of algorithms to identify the composite endpoints of stroke/TIA or cerebrovascular disease. Positive predictive values (PPVs) varied depending on the specific outcomes and algorithms evaluated. Specific algorithms to evaluate the presence of stroke and intracranial bleeds were found to have high PPVs (80% or greater). Algorithms to evaluate TIAs in adult populations were generally found to have PPVs of 70% or greater. Conclusions The algorithms and definitions to identify CVAs and TIAs using administrative and claims data differ greatly in the published literature. The choice of the algorithm employed should be determined by the stroke subtype of interest. PMID:22262598
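The positive predictive values reported above reduce to a simple ratio once algorithm-flagged cases have been adjudicated (for example by chart review). The sketch below uses hypothetical counts, not figures from the review.

```python
# Positive predictive value of a claims-based case-finding algorithm:
# the proportion of algorithm-flagged cases confirmed on adjudication.
def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    return true_positives / (true_positives + false_positives)

# Hypothetical example: 170 of 200 flagged stroke cases confirmed by chart review.
ppv = positive_predictive_value(true_positives=170, false_positives=30)
print(f"PPV = {ppv:.1%}")  # 85.0%, i.e. above the 80% benchmark mentioned above
```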
Thiex, Nancy
2016-01-01
A previously validated method for the determination of nitrogen release patterns of slow- and controlled-release fertilizers (SRFs and CRFs, respectively) was submitted to the Expert Review Panel (ERP) for Fertilizers for consideration of First Action Official Method(SM) status. The ERP evaluated the single-laboratory validation results, recommended the method for First Action Official Method status, and provided recommendations for achieving Final Action. The 180-day soil incubation-column leaching technique was demonstrated to be a robust and reliable method for characterizing N release patterns from SRFs and CRFs. The method was reproducible, and the results were only slightly affected by variations in environmental factors such as microbial activity, soil moisture, temperature, and texture. The release of P and K was also studied, but with fewer replications than for N. Optimization experiments on the accelerated 74 h extraction method indicated that temperature was the only factor found to substantially influence nutrient-release rates from the materials studied, and an optimized extraction profile was established as follows: 2 h at 25°C, 2 h at 50°C, 20 h at 55°C, and 50 h at 60°C.
Evaluating the Social Validity of the Early Start Denver Model: A Convergent Mixed Methods Study.
Ogilvie, Emily; McCrudden, Matthew T
2017-09-01
An intervention has social validity to the extent that it is socially acceptable to participants and stakeholders. This pilot convergent mixed methods study evaluated parents' perceptions of the social validity of the Early Start Denver Model (ESDM), a naturalistic behavioral intervention for children with autism. It focused on whether the parents viewed (a) the ESDM goals as appropriate for their children, (b) the intervention procedures as acceptable and appropriate, and (c) the changes in their children's behavior as practically significant. Parents of four children who participated in the ESDM completed the TARF-R questionnaire and participated in a semi-structured interview. Both data sets indicated that parents rated their experiences with the ESDM positively and rated it as socially valid. The findings indicated that what was implemented in the intervention was complemented by how it was implemented and by whom.
Fatihah, Fadil; Ng, Boon Koon; Hazwanie, Husin; Norimah, A Karim; Shanita, Safii Nik; Ruzita, Abd Talib; Poh, Bee Koon
2015-01-01
INTRODUCTION This study aimed to develop and validate a food frequency questionnaire (FFQ) to assess habitual diets of multi-ethnic Malaysian children aged 7–12 years. METHODS A total of 236 primary school children participated in the development of the FFQ and 209 subjects participated in the validation study, with a subsample of 30 subjects participating in the reproducibility study. The FFQ, consisting of 94 food items from 12 food groups, was compared with a three-day dietary record (3DR) as the reference method. The reproducibility of the FFQ was assessed through repeat administration (FFQ2), seven days after the first administration (FFQ1). RESULTS The results of the validation study demonstrated good acceptance of the FFQ. Mean intake of macronutrients in FFQ1 and 3DR correlated well, although the FFQ intake data tended to be higher. Cross-classification of nutrient intake between the two methods showed that < 7% of subjects were grossly misclassified. Moderate correlations noted between the two methods ranged from r = 0.310 (p < 0.001) for fat to r = 0.497 (p < 0.001) for energy. The reproducibility of the FFQ, as assessed by Cronbach’s alpha, ranged from 0.61 (protein) to 0.70 (energy, carbohydrates and fat). Spearman’s correlations between FFQ1 and FFQ2 ranged from rho = 0.333 (p = 0.072) for protein to rho = 0.479 (p < 0.01) for fat. CONCLUSION These findings indicate that the FFQ is valid and reliable for measuring the average intake of energy and macronutrients in a population of multi-ethnic children aged 7–12 years in Malaysia. PMID:26702165
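The cross-classification check mentioned above (the proportion of subjects "grossly misclassified") can be sketched as follows; the intake values are simulated and purely illustrative, not the study's data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 209
# Simulated energy intakes (kcal/day) from a reference record and an FFQ
# that tends to overestimate, mimicking the pattern described above.
reference = rng.normal(1800, 350, n)
ffq = reference * rng.normal(1.1, 0.15, n)

quart_ref = pd.qcut(reference, 4, labels=False)
quart_ffq = pd.qcut(ffq, 4, labels=False)

same = np.mean(quart_ref == quart_ffq)               # same quartile in both methods
gross = np.mean(np.abs(quart_ref - quart_ffq) == 3)  # opposite extreme quartiles
print(f"Same quartile: {same:.1%}, grossly misclassified: {gross:.1%}")
```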
Development and Validation of New Discriminative Dissolution Method for Carvedilol Tablets
Raju, V.; Murthy, K. V. R.
2011-01-01
The objective of the present study was to develop and validate a discriminative dissolution method for the evaluation of carvedilol tablets. Different conditions, such as the type of dissolution medium, the volume of dissolution medium, and the paddle rotation speed, were evaluated. The best in vitro dissolution profile was obtained using Apparatus II (paddle) at 50 rpm with 900 ml of pH 6.8 phosphate buffer as the dissolution medium. Drug release was quantified by a high-performance liquid chromatographic method. The dissolution method was validated according to current ICH and FDA guidelines; parameters such as specificity, accuracy, precision, and stability were evaluated, and the results were within the acceptable range. The dissolution profiles of three different products were compared using ANOVA-based, model-dependent, and model-independent methods; the results showed a significant difference between the products. The dissolution test developed and validated was adequate, given its high discriminative capacity in differentiating the release characteristics of the products tested, and could be applied in the development and quality control of carvedilol tablets. PMID:22923865
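The abstract refers to model-independent comparison of dissolution profiles without naming the metric; one widely used model-independent measure is the similarity factor f2, sketched below with hypothetical profiles (an illustrative choice, not necessarily the statistic used in the study).

```python
import numpy as np

def similarity_factor_f2(reference: np.ndarray, test: np.ndarray) -> float:
    """Model-independent f2 similarity factor for two dissolution profiles
    (percent dissolved at matching time points); f2 >= 50 is commonly read
    as indicating similar profiles."""
    mean_sq_diff = np.mean((reference - test) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + mean_sq_diff))

# Hypothetical % dissolved at 5, 10, 15, 20, 30, and 45 min for two products.
ref = np.array([28, 51, 71, 88, 96, 99], dtype=float)
test = np.array([21, 42, 63, 79, 91, 97], dtype=float)
print(f"f2 = {similarity_factor_f2(ref, test):.1f}")
```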
NASA Astrophysics Data System (ADS)
Yugatama, A.; Rohmani, S.; Dewangga, A.
2018-03-01
Atorvastatin is the primary choice for dyslipidemia treatment. Because the atorvastatin patent has expired, the pharmaceutical industry produces generic copies of the drug. Methods for tablet quality tests, including determination of the atorvastatin content of tablets, therefore need to be developed. The purpose of this research was to develop and validate a simple HPLC method for analyzing atorvastatin tablets. The HPLC system consisted of a Cosmosil C18 column (150 x 4.6 mm, 5 µm) as the reversed-phase stationary phase, a mixture of methanol and water at pH 3 (80:20 v/v) as the mobile phase, a flow rate of 1 mL/min, and UV detection at a wavelength of 245 nm. The validation parameters included selectivity, linearity, accuracy, precision, limit of detection (LOD), and limit of quantitation (LOQ). The results of this study indicate that the developed method performed well on all of these parameters for the analysis of atorvastatin tablet content. The LOD and LOQ were 0.2 and 0.7 ng/mL, respectively, and the linearity range was 20 - 120 ng/mL.
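For context, LOD and LOQ are often estimated from the calibration curve as 3.3·σ/S and 10·σ/S, where σ is the residual standard deviation and S the slope (the ICH Q2 approach; whether this particular study used it is not stated). The calibration data below are hypothetical.

```python
import numpy as np

# Hypothetical calibration data (concentration in ng/mL vs. peak area).
conc = np.array([20, 40, 60, 80, 100, 120], dtype=float)
area = np.array([1510, 3045, 4470, 6080, 7490, 9060], dtype=float)

# Least-squares fit: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = np.std(residuals, ddof=2)  # residual SD of the regression

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"slope = {slope:.2f}, LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")
```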
A Validation Study of the Adolescent Dissociative Experiences Scale
ERIC Educational Resources Information Center
Keck Seeley, Susan M.; Perosa, Sandra L.; Perosa, Linda M.
2004-01-01
Objective: The purpose of this study was to further the validation process of the Adolescent Dissociative Experiences Scale (A-DES). In this study, a 6-item Likert response format with descriptors was used when responding to the A-DES rather than the 11-item response format used in the original A-DES. Method: The internal reliability and construct…
2012-01-01
Background Technological advances have enabled the widespread use of video cases via web-streaming and online download as an educational medium. The use of real subjects to demonstrate acute pathology should aid the education of health care professionals. However, the methodology by which this effect may be tested is not clear. Methods We undertook a literature review of major databases, identified articles relevant to using patient video cases as educational interventions, extracted the methodologies used, and assessed these methods for internal and construct validity. Results A review of 2532 abstracts revealed 23 studies meeting the inclusion criteria and a final review of 18 of relevance. Medical students were the most commonly studied group (10 articles), with a spread of learner satisfaction, knowledge and behaviour tested. Only two of the studies fulfilled defined criteria for achieving internal and construct validity. The heterogeneity of the articles meant it was not possible to perform any meta-analysis. Conclusions Previous studies have not clearly classified which facet of training or educational outcome they aimed to explore, and they had poor internal and construct validity. Future research should aim to validate a particular outcome measure, preferably by reproducing previous work rather than adopting new methods. In particular, cognitive processing enhancement, demonstrated in a number of the medical student studies, should be tested at a postgraduate level. PMID:23256787
Mohammadifard, Noushin; Sajjadi, Firouzeh; Maghroun, Maryam; Alikhasi, Hassan; Nilforoushzadeh, Farzaneh; Sarrafzadegan, Nizal
2015-03-01
Dietary assessment is the first step of dietary modification in community-based interventional programs. This study was performed to validate a simple food frequency questionnaire (SFFQ) for the assessment of selected food items in epidemiological studies with large sample sizes as well as in community trials. This validation study was carried out on 264 healthy adults aged ≥ 41 years living in 3 central districts of Iran: Isfahan, Najafabad, and Arak. Selected food intakes were assessed using a 48-item food frequency questionnaire (FFQ). The FFQ was interviewer-administered and was completed twice: at the beginning of the study and 2 weeks thereafter. The validity of the SFFQ was examined by comparison with the amounts estimated from a single 24 h dietary recall and a 2-day dietary record. Validity was evaluated using Spearman correlation coefficients between the daily frequency of consumption of food groups as assessed by the FFQ and the daily food group intakes assessed by the reference dietary methods. Intraclass correlation coefficients (ICC) were used to determine reproducibility. Spearman correlation coefficients between the food group intakes estimated by the examined and reference methods ranged from 0.105 (P = 0.378) for pickles to 0.48 (P < 0.001) for plant protein. ICCs for the reproducibility of the FFQ were between 0.47 and 0.69 for the different food groups (P < 0.001). The designed SFFQ has good relative validity and reproducibility for the assessment of selected food group intakes. Thus, it can serve as a valid tool in epidemiological studies and clinical trials with large numbers of participants.
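The reproducibility statistic above (an ICC between the two FFQ administrations) can be sketched with the two-way random-effects, single-measure form ICC(2,1) of Shrout and Fleiss; the exact ICC form used in the study is not stated, and the data below are simulated.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `scores` has shape (n_subjects, k_administrations)."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-administration means
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Simulated daily servings of one food group at the two administrations.
rng = np.random.default_rng(2)
ffq1 = rng.normal(3.0, 1.0, 30)
ffq2 = ffq1 + rng.normal(0.0, 0.6, 30)
print(f"ICC(2,1) = {icc_2_1(np.column_stack([ffq1, ffq2])):.2f}")
```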
Zhang, Meng-Qi; Jia, Jing-Ying; Lu, Chuan; Liu, Gang-Yi; Yu, Cheng-Yin; Gui, Yu-Zhou; Liu, Yun; Liu, Yan-Mei; Wang, Wei; Li, Shui-Jun; Yu, Chen
2010-06-01
A simple, reliable and sensitive liquid chromatography-isotope dilution mass spectrometry (LC-ID/MS) method was developed and validated for the quantification of olanzapine in human plasma. Plasma samples (50 microL) were extracted with tert-butyl methyl ether, and an isotope-labeled internal standard (olanzapine-D3) was used. Chromatographic separation was performed on an XBridge Shield RP 18 column (100 mm x 2.1 mm, 3.5 microm, Waters). An isocratic program was used at a flow rate of 0.4 mL x min(-1) with a mobile phase consisting of acetonitrile and ammonium buffer (pH 8). The protonated analyte ions were detected in positive ionization mode by multiple reaction monitoring (MRM). The plasma method, with a lower limit of quantification (LLOQ) of 0.1 ng x mL(-1), demonstrated good linearity over the range of 0.1 - 30 ng x mL(-1) of olanzapine. Specificity, linearity, accuracy, precision, recovery, matrix effect and stability were evaluated during method validation. The validated method was successfully applied to the analysis of human plasma samples in a bioavailability study.
Reliability and validity of a brief method to assess nociceptive flexion reflex (NFR) threshold.
Rhudy, Jamie L; France, Christopher R
2011-07-01
The nociceptive flexion reflex (NFR) is a physiological tool to study spinal nociception. However, NFR assessment can take several minutes and expose participants to repeated suprathreshold stimulations. The 4 studies reported here assessed the reliability and validity of a brief method to assess NFR threshold that uses a single ascending series of stimulations (Peak 1 NFR), by comparing it to a well-validated method that uses 3 ascending/descending staircases of stimulations (Staircase NFR). Correlations between the NFR definitions were high, were on par with test-retest correlations of Staircase NFR, and were not affected by participant sex or chronic pain status. Results also indicated the test-retest reliabilities for the 2 definitions were similar. Using larger stimulus increments (4 mAs) to assess Peak 1 NFR tended to result in higher NFR threshold estimates than using the Staircase NFR definition, whereas smaller stimulus increments (2 mAs) tended to result in lower NFR threshold estimates than the Staircase NFR definition. Neither NFR definition was correlated with anxiety, pain catastrophizing, or anxiety sensitivity. In sum, a single ascending series of electrical stimulations results in a reliable and valid estimate of NFR threshold. However, caution may be warranted when comparing NFR thresholds across studies that differ in the ascending stimulus increments. This brief method to assess NFR threshold is reliable and valid; therefore, it should be useful to clinical pain researchers interested in quickly assessing inter- and intra-individual differences in spinal nociceptive processes. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.
Baglio, Michelle L.; Baxter, Suzanne Domel; Guinn, Caroline H.; Thompson, William O.; Shaffer, Nicole M.; Frye, Francesca H. A.
2005-01-01
This article (a) provides a general review of interobserver reliability (IOR) and (b) describes our method for assessing IOR for items and amounts consumed during school meals for a series of studies regarding the accuracy of fourth-grade children's dietary recalls validated with direct observation of school meals. A widely used validation method for dietary assessment is direct observation of meals. Although many studies utilize several people to conduct direct observations, few published studies indicate whether IOR was assessed. Assessment of IOR is necessary to determine that the information collected does not depend on who conducted the observation. Two strengths of our method for assessing IOR are that IOR was assessed regularly throughout the data collection period and that IOR was assessed for foods at the item and amount level instead of at the nutrient level. Adequate agreement among observers is essential to the reasoning behind using observation as a validation tool. Readers are encouraged to question the results of studies that fail to mention and/or to include the results for assessment of IOR when multiple people have conducted observations. PMID:15354155
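A common chance-corrected statistic for item-level interobserver agreement is Cohen's kappa; the abstract does not say which agreement measure was used, so the sketch below is an illustrative choice with invented observation codes.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b) -> float:
    """Chance-corrected agreement between two observers on categorical codes."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical item-level codes from two observers of the same school meal.
obs1 = ["eaten", "eaten", "left", "eaten", "traded", "eaten", "left", "eaten"]
obs2 = ["eaten", "eaten", "left", "eaten", "eaten",  "eaten", "left", "eaten"]
print(f"kappa = {cohens_kappa(obs1, obs2):.2f}")
```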
NASA Technical Reports Server (NTRS)
Wanthal, Steven; Schaefer, Joseph; Justusson, Brian; Hyder, Imran; Engelstad, Stephen; Rose, Cheryl
2017-01-01
The Advanced Composites Consortium is a US Government/Industry partnership supporting technologies to enable timeline and cost reduction in the development of certified composite aerospace structures. A key component of the consortium's approach is the development and validation of improved progressive damage and failure analysis methods for composite structures. These methods will enable increased use of simulations in design trade studies and detailed design development, and thereby enable more targeted physical test programs to validate designs. To accomplish this goal with confidence, a rigorous verification and validation process was developed. The process was used to evaluate analysis methods and associated implementation requirements to ensure calculation accuracy and to gage predictability for composite failure modes of interest. This paper introduces the verification and validation process developed by the consortium during the Phase I effort of the Advanced Composites Project. Specific structural failure modes of interest are first identified, and a subset of standard composite test articles are proposed to interrogate a progressive damage analysis method's ability to predict each failure mode of interest. Test articles are designed to capture the underlying composite material constitutive response as well as the interaction of failure modes representing typical failure patterns observed in aerospace structures.
Hartnell, R E; Stockley, L; Keay, W; Rosec, J-P; Hervio-Heath, D; Van den Berg, H; Leoni, F; Ottaviani, D; Henigman, U; Denayer, S; Serbruyns, B; Georgsson, F; Krumova-Valcheva, G; Gyurova, E; Blanco, C; Copin, S; Strauch, E; Wieczorek, K; Lopatek, M; Britova, A; Hardouin, G; Lombard, B; In't Veld, P; Leclercq, A; Baker-Austin, C
2018-02-10
Globally, vibrios represent an important and well-established group of bacterial foodborne pathogens. The European Commission (EC) mandated the Comité Européen de Normalisation (CEN) to undertake work to provide validation data for 15 methods in microbiology to support EC legislation. As part of this mandated work programme, merging of ISO/TS 21872-1:2007, which specifies a horizontal method for the detection of V. parahaemolyticus and V. cholerae, with ISO/TS 21872-2:2007, a similar horizontal method for the detection of potentially pathogenic vibrios other than V. cholerae and V. parahaemolyticus, was proposed. Both parts of ISO/TS 21872 utilized classical culture-based isolation techniques coupled with biochemical confirmation steps. The work also considered simplification of the biochemical confirmation steps. In addition, because of advances in molecular-based methods for the identification of human pathogenic Vibrio spp., classical and real-time PCR options were also included within the scope of the validation. These considerations formed the basis of a multi-laboratory validation study with the aim of improving the precision of this ISO technical specification and providing a single ISO standard method to enable detection of these important foodborne Vibrio spp. To achieve this aim, an international validation study involving 13 laboratories from 9 countries in Europe was conducted in 2013. The results of this validation have enabled integration of the two existing technical specifications targeting the detection of the major foodborne Vibrio spp., simplification of the suite of recommended biochemical identification tests, and the introduction of molecular procedures that provide both species-level identification and discrimination of putatively pathogenic strains of V. parahaemolyticus by determining the presence of the thermostable direct and thermostable direct-related haemolysins. The method performance characteristics generated in this study have been included in the revised international standard, ISO 21872:2017, published in July 2017. Copyright © 2018. Published by Elsevier B.V.
Li, Yan; Hughes, Jan N.; Kwok, Oi-man; Hsu, Hsien-Yuan
2012-01-01
This study investigated the construct validity of measures of teacher-student support in a sample of 709 ethnically diverse second and third grade academically at-risk students. Confirmatory factor analysis investigated the convergent and discriminant validities of teacher, child, and peer reports of teacher-student support and child conduct problems. Results supported the convergent and discriminant validity of scores on the measures. Peer reports accounted for the largest proportion of trait variance and non-significant method variance. Child reports accounted for the smallest proportion of trait variance and the largest method variance. A model with two latent factors provided a better fit to the data than a model with one factor, providing further evidence of the discriminant validity of measures of teacher-student support. Implications for research, policy, and practice are discussed. PMID:21767024
ERIC Educational Resources Information Center
Park, Namgyoo K.; Chun, Monica Youngshin; Lee, Jinju
2016-01-01
Compared to the significant development of creativity studies, individual creativity research has not reached a meaningful consensus regarding the most valid and reliable method for assessing individual creativity. This study revisited 2 of the most popular methods for assessing individual creativity: subjective and objective methods. This study…
Methodological considerations of the GRADE method.
Malmivaara, Antti
2015-02-01
The GRADE method (Grading of Recommendations, Assessment, Development, and Evaluation) provides a tool for rating the quality of evidence for systematic reviews and clinical guidelines. This article aims to analyse conceptually how well grounded the GRADE method is, and to suggest improvements. The eight criteria for rating the quality of evidence as proposed by GRADE are here analysed in terms of each criterion's potential to provide valid information for grading evidence. Secondly, the GRADE method of allocating weights and summarizing the values of the criteria is considered. It is concluded that three GRADE criteria have an appropriate conceptual basis to be used as indicators of confidence in research evidence in systematic reviews: internal validity of a study, consistency of the findings, and publication bias. In network meta-analyses, the indirectness of evidence may also be considered. It is here proposed that the grade for the internal validity of a study could in some instances justifiably decrease the overall grade by three grades (e.g. from high to very low) instead of the up to two grade decrease, as suggested by the GRADE method.
Xiong, Shan; Deng, Zhipeng; Sun, Peilu; Mu, Yanling; Xue, Mingxing
2017-11-01
Osimertinib is a new-generation epidermal growth factor inhibitor for the treatment of non-small cell lung cancer. In the present study, a rapid and sensitive LC with tandem MS method was developed and validated for the determination of osimertinib in rat plasma. Chromatographic separation was carried out on a C18 column using acetonitrile and water containing 0.1% formic acid. The assay was validated over a concentration range of 1.0-1000 ng/mL for osimertinib, with a lower LOQ of 1.0 ng/mL. The intra- and interday accuracy values for osimertinib ranged from 92.66 to 101.50% and from 97.08 to 99.15%, respectively, and the intra- and interday precision values for osimertinib ranged from 6.25 to 10.34% and from 3.43 to 10.44%, respectively. The method was successfully applied in a pharmacokinetic study of osimertinib after oral administration of osimertinib (4.5 mg/kg) to rats.
Szerkus, Oliwia; Struck-Lewicka, Wiktoria; Kordalewska, Marta; Bartosińska, Ewa; Bujak, Renata; Borsuk, Agnieszka; Bienert, Agnieszka; Bartkowska-Śniatkowska, Alicja; Warzybok, Justyna; Wiczling, Paweł; Nasal, Antoni; Kaliszan, Roman; Markuszewski, Michał Jan; Siluk, Danuta
2017-02-01
The purpose of this work was to develop and validate a rapid and robust LC-MS/MS method for the determination of dexmedetomidine (DEX) in plasma, suitable for the analysis of a large number of samples. A systematic approach, Design of Experiments, was applied to optimize the ESI source parameters and to evaluate method robustness; as a result, a rapid, stable and cost-effective assay was developed. The method was validated according to US FDA guidelines. The LLOQ was determined at 5 pg/ml, and the assay was linear over the examined concentration range of 5-2500 pg/ml (R2 > 0.98). The accuracies and the intra- and interday precisions were within 15%. The stability data confirmed reliable behavior of DEX under the tested conditions. Application of the Design of Experiments approach allowed for fast and efficient analytical method development and validation as well as reduced usage of the chemicals necessary for routine method optimization. The proposed method was applied to the determination of DEX pharmacokinetics in pediatric patients undergoing long-term sedation in the intensive care unit.
NASA Astrophysics Data System (ADS)
Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors (aligned with the EURO-CORDEX experiment), and 3) pseudo-reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a Europe-wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contributions to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including both data (downscaled values) and metadata (characterizing different aspects of the downscaling methods). This constitutes the largest and most comprehensive intercomparison of statistical downscaling methods to date. Here, we present an overall validation, analyzing marginal and temporal aspects to assess the intrinsic performance and added value of statistical downscaling methods at both annual and seasonal levels. This validation takes into account the different properties and limitations of the different approaches and techniques (as reported in the provided metadata) in order to perform a fair comparison. It is pointed out that this experiment alone is not sufficient to evaluate the limitations of (MOS) bias correction techniques. Moreover, it also does not fully validate PP, since we do not learn whether we have the right predictors and whether the PP assumption is valid. These problems will be analyzed in the subsequent community-open VALUE experiments 2) and 3), which will be open for participation throughout the present year.
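The Experiment 1 design above is an ordinary k-fold cross-validation with consecutive blocks rather than random folds. The sketch below reproduces that structure with a placeholder linear "downscaling" model and synthetic data; it is not the VALUE software or any of the submitted methods.

```python
import numpy as np

years = np.arange(1979, 2009)     # 30 years, 1979-2008
folds = np.array_split(years, 5)  # five consecutive 6-year blocks

rng = np.random.default_rng(3)
predictor = rng.normal(size=years.size)  # synthetic large-scale predictor
observed = 2.0 * predictor + rng.normal(scale=0.5, size=years.size)

rmses = []
for k, test_years in enumerate(folds):
    test = np.isin(years, test_years)
    # Calibrate a placeholder linear downscaling model on the other 24 years.
    slope, intercept = np.polyfit(predictor[~test], observed[~test], 1)
    predicted = slope * predictor[test] + intercept
    rmse = np.sqrt(np.mean((predicted - observed[test]) ** 2))
    rmses.append(rmse)
    print(f"fold {k + 1} ({test_years[0]}-{test_years[-1]}): RMSE = {rmse:.2f}")

print(f"mean cross-validated RMSE = {np.mean(rmses):.2f}")
```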
Humble, Emily; Thorne, Michael A S; Forcada, Jaume; Hoffman, Joseph I
2016-08-26
Single nucleotide polymorphism (SNP) discovery is an important goal of many studies. However, the number of 'putative' SNPs discovered from a sequence resource may not provide a reliable indication of the number that will successfully validate with a given genotyping technology. For this it may be necessary to account for factors such as the method used for SNP discovery and the type of sequence data from which it originates, suitability of the SNP flanking sequences for probe design, and genomic context. To explore the relative importance of these and other factors, we used Illumina sequencing to augment an existing Roche 454 transcriptome assembly for the Antarctic fur seal (Arctocephalus gazella). We then mapped the raw Illumina reads to the new hybrid transcriptome using BWA and BOWTIE2 before calling SNPs with GATK. The resulting markers were pooled with two existing sets of SNPs called from the original 454 assembly using NEWBLER and SWAP454. Finally, we explored the extent to which SNPs discovered using these four methods overlapped and predicted the corresponding validation outcomes for both Illumina Infinium iSelect HD and Affymetrix Axiom arrays. Collating markers across all discovery methods resulted in a global list of 34,718 SNPs. However, concordance between the methods was surprisingly poor, with only 51.0 % of SNPs being discovered by more than one method and 13.5 % being called from both the 454 and Illumina datasets. Using a predictive modeling approach, we could also show that SNPs called from the Illumina data were on average more likely to successfully validate, as were SNPs called by more than one method. Above and beyond this pattern, predicted validation outcomes were also consistently better for Affymetrix Axiom arrays. Our results suggest that focusing on SNPs called by more than one method could potentially improve validation outcomes. They also highlight possible differences between alternative genotyping technologies that could be explored in future studies of non-model organisms.
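The "predictive modeling approach" mentioned above can be sketched as a logistic regression of validation success on per-SNP discovery features. The feature names, data, and coefficients below are entirely synthetic assumptions, not the study's model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 1000
# Hypothetical per-SNP features: read depth, minor allele frequency, whether
# the SNP was called from Illumina data, and how many methods called it.
depth = rng.gamma(5, 10, n)
maf = rng.uniform(0.05, 0.5, n)
from_illumina = rng.integers(0, 2, n)
n_methods = rng.integers(1, 5, n)

# Synthetic "validated on the array" outcome loosely tied to the features.
logit = -2 + 0.02 * depth + 1.0 * from_illumina + 0.5 * n_methods
validated = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([depth, maf, from_illumina, n_methods])
model = LogisticRegression(max_iter=1000).fit(X, validated)
print("coefficients:", dict(zip(["depth", "maf", "illumina", "n_methods"],
                                np.round(model.coef_[0], 3))))
```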
Center of pressure based segment inertial parameters validation
Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice; Venture, Gentiane
2017-01-01
By proposing efficient methods for estimating Body Segment Inertial Parameters (BSIPs) and validating them with a force plate, it is possible to improve the inverse dynamics computations that are necessary in multiple research areas. To date, a variety of studies have been conducted to improve BSIP estimation, but to our knowledge a real validation has never been completely successful. In this paper, we propose a validation method using both kinematic and kinetic parameters (contact forces) gathered from an optical motion capture system and a force plate, respectively. To compare BSIPs, we used the measured contact forces (force plate) as the ground truth and reconstructed the displacements of the center of pressure (COP) using inverse dynamics from two different estimation techniques. Only minor differences were seen when comparing the estimated segment masses. Their influence on the COP computation, however, is large, and the results show very distinguishable patterns of COP movement. Improving BSIP techniques is crucial, as deviations in the estimates can result in large errors. This method could be used as a tool to validate BSIP estimation techniques. An advantage of this approach is that it facilitates the comparison between BSIP estimation methods and, more specifically, it shows the accuracy of those parameters. PMID:28662090
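For context, the COP that is reconstructed from force-plate data follows directly from the measured vertical force and moments. A minimal sketch is given below; it assumes moments expressed at the plate origin with the convention COPx = (-My - Fx·dz)/Fz and COPy = (Mx - Fy·dz)/Fz, and sign conventions differ between plate models.

```python
import numpy as np

def center_of_pressure(fz, mx, my, dz=0.0, fx=None, fy=None):
    """Center of pressure from force-plate output (assumed sign convention:
    COPx = (-My - Fx*dz)/Fz, COPy = (Mx - Fy*dz)/Fz, with dz the offset from
    the plate origin to its surface; conventions vary between plates)."""
    fx = np.zeros_like(fz) if fx is None else fx
    fy = np.zeros_like(fz) if fy is None else fy
    cop_x = (-my - fx * dz) / fz
    cop_y = (mx - fy * dz) / fz
    return cop_x, cop_y

# Hypothetical quiet-standing samples (forces in N, moments in N·m).
fz = np.array([720.0, 722.0, 719.5])
mx = np.array([14.4, 15.1, 13.9])
my = np.array([-7.2, -6.9, -7.5])
print(center_of_pressure(fz, mx, my))
```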
Measuring Resource Utilization: A Systematic Review of Validated Self-Reported Questionnaires.
Leggett, Laura E; Khadaroo, Rachel G; Holroyd-Leduc, Jayna; Lorenzetti, Diane L; Hanson, Heather; Wagg, Adrian; Padwal, Raj; Clement, Fiona
2016-03-01
A variety of methods may be used to obtain costing data. Although administrative data are most commonly used, the data available in these datasets are often limited. An alternative method of obtaining costing data is through self-reported questionnaires. Currently, there are no systematic reviews that summarize self-reported resource utilization instruments from the published literature. The aim of the study was to identify validated self-report healthcare resource use instruments and to map their attributes. A systematic review was conducted. The search identified articles using terms like "healthcare utilization" and "questionnaire." All abstracts and full texts were considered in duplicate. For inclusion, studies had to assess the validity of a self-reported resource use questionnaire, report original data, and include adult populations, and the questionnaire had to be publicly available. Data such as the type of resource utilization assessed by each questionnaire and the validation findings were extracted from each study. In all, 2343 unique citations were retrieved; 2297 were excluded during abstract review. Forty-six studies were reviewed in full text, and 15 studies were included in this systematic review. Six assessed resource utilization of patients with chronic conditions; 5 assessed mental health service utilization; 3 assessed resource utilization by a general population; and 1 assessed utilization in older populations. The most frequently measured resources included visits to general practitioners and inpatient stays; nonmedical resources were least frequently measured. Self-reported questionnaires on resource utilization had good agreement with administrative data, although visits to general practitioners, outpatient days, and nurse visits had poorer agreement. Self-reported questionnaires are a valid method of collecting data on healthcare resource utilization.
Dewitt, James; Capistrant, Benjamin; Kohli, Nidhi; Mitteldorf, Darryl; Merengwa, Enyinnaya; West, William
2018-01-01
Background While deduplication and cross-validation protocols have been recommended for large Web-based studies, protocols for survey response validation of smaller studies have not been published. Objective This paper reports the challenges of survey validation inherent in a small Web-based health survey research. Methods The subject population was North American, gay and bisexual, prostate cancer survivors, who represent an under-researched, hidden, difficult-to-recruit, minority-within-a-minority population. In 2015-2016, advertising on a large Web-based cancer survivor support network, using email and social media, yielded 478 completed surveys. Results Our manual deduplication and cross-validation protocol identified 289 survey submissions (289/478, 60.4%) as likely spam, most stemming from advertising on social media. The basic components of this deduplication and validation protocol are detailed. An unexpected challenge encountered was invalid survey responses evolving across the study period. This necessitated the static detection protocol be augmented with a dynamic one. Conclusions Five recommendations for validation of Web-based samples, especially with smaller difficult-to-recruit populations, are detailed. PMID:29691203
Bodiwala, Kunjan Bharatkumar; Shah, Shailesh; Thakor, Jeenal; Marolia, Bhavin; Prajapati, Pintu
2016-11-01
A rapid, sensitive, and stability-indicating high-performance thin-layer chromatographic method was developed and validated to study degradation kinetics of Alogliptin benzoate (ALG) in an alkaline medium. ALG was degraded under acidic, alkaline, oxidative, and thermal stress conditions. The degraded samples were chromatographed on silica gel 60F254-TLC plates, developed using a quaternary-solvent system (chloroform-methanol-ethyl acetate-triethyl amine, 9+1+1+0.5, v/v/v/v), and scanned at 278 nm. The developed method was validated per International Conference on Harmonization guidelines using validation parameters such as specificity, linearity and range, precision, accuracy, LOD, and LOQ. The linearity range for ALG was 100-500 ng/band (correlation coefficient = 0.9997) with an average recovery of 99.47%. The LOD and LOQ for ALG were 9.8 and 32.7 ng/band, respectively. The developed method was successfully applied for the quantitative estimation of ALG in its synthetic mixture with common excipients. Degradation kinetics of ALG in an alkaline medium was studied by degrading it under three different temperatures and three different concentrations of alkali. Degradation of ALG in the alkaline medium was found to follow first-order kinetics. Contour plots have been generated to predict degradation rate constant, half-life, and shelf life of ALG in various combinations of temperature and concentration of alkali using Design Expert software.
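The first-order kinetic treatment referred to above amounts to fitting ln(C) against time: the slope gives the rate constant k, with half-life t1/2 = ln 2 / k and shelf life t90 = ln(100/90) / k ≈ 0.105 / k. The degradation data below are hypothetical, not the study's measurements.

```python
import numpy as np

# Hypothetical % ALG remaining vs. time (h) at one temperature/alkali level.
time_h = np.array([0.0, 1.0, 2.0, 4.0, 6.0])
remaining = np.array([100.0, 88.0, 78.0, 60.5, 47.0])

# First-order kinetics: ln(C0/Ct) = k * t, so -k is the slope of ln(C) vs. t.
k = -np.polyfit(time_h, np.log(remaining), 1)[0]
t_half = np.log(2.0) / k                # half-life
t90 = np.log(100.0 / 90.0) / k          # shelf life (time to 90% remaining)
print(f"k = {k:.3f} 1/h, t1/2 = {t_half:.1f} h, t90 = {t90:.2f} h")
```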
NASA Astrophysics Data System (ADS)
Trandafir, Laura; Alexandru, Mioara; Constantin, Mihai; Ioniţă, Anca; Zorilă, Florina; Moise, Valentin
2012-09-01
EN ISO 11137 establishes requirements for setting or substantiating the dose needed to achieve the desired sterility assurance level. Validation studies can be designed specifically for different types of products, and each product needs distinct protocols for bioburden determination and sterility testing. The Microbiological Laboratory of the Irradiation Processing Center (IRASM) deals with different types of products, mainly using the VDmax25 method. In terms of microbiological evaluation, the most challenging product was cotton gauze. A special situation in establishing the sterilization validation method arises for cotton packed in large quantities. The VDmax25 method cannot be applied to items with an average bioburden of more than 1000 CFU/pack, irrespective of the weight of the package. This is a limitation of the method and implies increased costs for the manufacturer, who must then choose other methods. For the microbiological tests, culture conditions must be selected for both bioburden determination and sterility testing. Details of the selection criteria are given.
Floriani, Gisele; Gasparetto, João Cleverson; Pontarolo, Roberto; Gonçalves, Alan Guilherme
2014-02-01
Here, an HPLC-DAD method was developed and validated for the simultaneous determination of cocaine, two cocaine degradation products (benzoylecgonine and benzoic acid), and the main adulterants found in cocaine-based products (caffeine, lidocaine, phenacetin, benzocaine and diltiazem). The new method was developed and validated using an XBridge C18 column (4.6 mm × 250 mm, 5 μm particle size) maintained at 60°C. The mobile phase consisted of a gradient of acetonitrile and 0.05 M ammonium formate (pH 3.1), eluted at 1.0 mL/min. The injection volume was 10 μL and the DAD detector was set at 274 nm. Method validation assays demonstrated suitable sensitivity, selectivity, linearity, precision and accuracy. For the selectivity assay, an MS detection system could be directly adapted to the method without the need for any change in the chromatographic conditions. The robustness study indicated that the flow rate, temperature and pH of the mobile phase are critical parameters and should not be changed from the conditions determined herein. The new method was then successfully applied to the determination of cocaine, benzoylecgonine, benzoic acid, caffeine, lidocaine, phenacetin, benzocaine and diltiazem in 115 samples seized in Brazil (2007-2012), which consisted of cocaine paste, cocaine base and cocaine salt samples. This study revealed cocaine contents that ranged from undetectable to 97.2%, with 97 samples presenting at least one of the degradation products or adulterants evaluated here. All of the studied degradation products and adulterants were observed among the seized samples, justifying the application of the method, which can be used as a screening and quantification tool in forensic analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Lingner, Thomas; Kataya, Amr R. A.; Reumann, Sigrun
2012-01-01
We recently developed the first algorithms specifically for plants to predict proteins carrying peroxisome targeting signals type 1 (PTS1) from genome sequences.1 As validated experimentally, the prediction methods are able to correctly predict unknown peroxisomal Arabidopsis proteins and to infer novel PTS1 tripeptides. The high prediction performance is primarily determined by the large number and sequence diversity of the underlying positive example sequences, which mainly derived from EST databases. However, a few constructs remained cytosolic in experimental validation studies, indicating sequencing errors in some ESTs. To identify erroneous sequences, we validated subcellular targeting of additional positive example sequences in the present study. Moreover, we analyzed the distribution of prediction scores separately for each orthologous group of PTS1 proteins, which generally resembled normal distributions with group-specific mean values. The cytosolic sequences commonly represented outliers of low prediction scores and were located at the very tail of a fitted normal distribution. Three statistical methods for identifying outliers were compared in terms of sensitivity and specificity. Their combined application allows elimination of erroneous ESTs from positive example data sets. This new post-validation method will further improve the prediction accuracy of both PTS1 and PTS2 protein prediction models for plants, fungi, and mammals. PMID:22415050
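Two simple criteria of the kind that might be applied to such per-group score distributions are a z-score cut-off against the fitted normal distribution and Tukey's IQR fence; these are illustrative choices, not necessarily the three methods compared in the study, and the scores below are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)
# Simulated prediction scores for one orthologous group, with two
# artificially low "cytosolic" scores appended as outliers.
scores = np.concatenate([rng.normal(0.85, 0.05, 40), [0.42, 0.51]])

# Criterion 1: |z| > 2.5 relative to the fitted normal distribution.
z = (scores - scores.mean()) / scores.std(ddof=1)
z_outliers = scores[np.abs(z) > 2.5]

# Criterion 2: Tukey's fence, 1.5 * IQR below the first quartile.
q1, q3 = np.percentile(scores, [25, 75])
iqr_outliers = scores[scores < q1 - 1.5 * (q3 - q1)]

print("z-score outliers:", np.round(z_outliers, 2))
print("IQR outliers:", np.round(iqr_outliers, 2))
```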
Lingner, Thomas; Kataya, Amr R A; Reumann, Sigrun
2012-02-01
We recently developed the first algorithms specifically for plants to predict proteins carrying peroxisome targeting signals type 1 (PTS1) from genome sequences. As validated experimentally, the prediction methods are able to correctly predict unknown peroxisomal Arabidopsis proteins and to infer novel PTS1 tripeptides. The high prediction performance is primarily determined by the large number and sequence diversity of the underlying positive example sequences, which mainly derived from EST databases. However, a few constructs remained cytosolic in experimental validation studies, indicating sequencing errors in some ESTs. To identify erroneous sequences, we validated subcellular targeting of additional positive example sequences in the present study. Moreover, we analyzed the distribution of prediction scores separately for each orthologous group of PTS1 proteins, which generally resembled normal distributions with group-specific mean values. The cytosolic sequences commonly represented outliers of low prediction scores and were located at the very tail of a fitted normal distribution. Three statistical methods for identifying outliers were compared in terms of sensitivity and specificity. Their combined application allows elimination of erroneous ESTs from positive example data sets. This new post-validation method will further improve the prediction accuracy of both PTS1 and PTS2 protein prediction models for plants, fungi, and mammals.
Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements
NASA Astrophysics Data System (ADS)
Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.
2012-12-01
Land surface evapotranspiration plays an important role in the surface energy balance and the water cycle. There have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted widespread attention from researchers and managers. However, remote sensing estimation still carries many uncertainties arising from the model mechanisms, model inputs, parameterization schemes, and scaling issues in regional estimation. Obtaining remotely sensed evapotranspiration (RS_ET) with known certainty is required but difficult. As a result, it is indispensable to develop validation methods to quantitatively assess the accuracy and error sources of regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and it is appropriate for validating RS_ET at diverse resolutions and at different time scales. An independent RS_ET validation using this method over the Hai River Basin, China, in 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and validation data such as water-balance evapotranspiration, MODIS evapotranspiration products, precipitation, and land use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared with the multi-scale evapotranspiration measurements from the EC and LAS, respectively, using a footprint model over three typical landscapes. Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years. Thus, we also pay attention to the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC and 4 LAS were acquired to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including an empirical statistical model, one-source and two-source models, a Penman-Monteith equation based model, a Priestley-Taylor equation based model, and a complementary relationship based model, were used to perform an intercomparison. All the results from the two cases of RS_ET validation showed that the proposed validation methods are reasonable and feasible.
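At the local scale, the comparison of RS_ET against EC or LAS measurements ultimately rests on simple agreement metrics such as bias, RMSE, and the correlation coefficient. The sketch below computes these for hypothetical monthly values; it is not the footprint-weighted comparison used in the study.

```python
import numpy as np

# Hypothetical monthly ET (mm) from remote sensing and from EC measurements.
rs_et = np.array([22.0, 35.5, 58.1, 84.0, 102.3, 96.7, 71.2, 40.8])
ec_et = np.array([25.1, 33.0, 61.5, 80.2, 108.9, 92.4, 68.0, 45.3])

bias = np.mean(rs_et - ec_et)
rmse = np.sqrt(np.mean((rs_et - ec_et) ** 2))
r = np.corrcoef(rs_et, ec_et)[0, 1]
print(f"bias = {bias:.1f} mm, RMSE = {rmse:.1f} mm, r = {r:.2f}")
```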
Validation of a Salmonella loop-mediated isothermal amplification assay in animal food.
Domesle, Kelly J; Yang, Qianru; Hammack, Thomas S; Ge, Beilei
2018-01-02
Loop-mediated isothermal amplification (LAMP) has emerged as a promising alternative to PCR for pathogen detection in food testing and clinical diagnostics. This study aimed to validate a Salmonella LAMP method run on both turbidimetry (LAMP I) and fluorescence (LAMP II) platforms in representative animal food commodities. The U.S. Food and Drug Administration (FDA)'s culture-based Bacteriological Analytical Manual (BAM) method was used as the reference method, and a real-time quantitative PCR (qPCR) assay was also performed. The method comparison study followed the FDA's microbiological methods validation guidelines, which align well with those from AOAC International and ISO. Both LAMP assays were 100% specific among 300 strains (247 Salmonella of 185 serovars and 53 non-Salmonella) tested. The detection limits ranged from 1.3 to 28 cells for six Salmonella strains of various serovars. Six commodities consisting of four animal feed items (cattle feed, chicken feed, horse feed, and swine feed) and two pet food items (dry cat food and dry dog food) all yielded satisfactory results. Compared to the BAM method, the relative levels of detection (RLODs) for LAMP I ranged from 0.317 to 1 with a combined value of 0.610, while those for LAMP II ranged from 0.394 to 1.152 with a combined value of 0.783, all of which fell within the acceptability limit (2.5) for an unpaired study. This also suggests that LAMP was more sensitive than the BAM method at detecting low-level Salmonella contamination in animal food, and results were available 3 days sooner. The performance of LAMP on both platforms was comparable to that of qPCR but notably faster, particularly LAMP II. Given the importance of Salmonella in animal food safety, the LAMP assays validated in this study hold great promise as rapid, reliable, and robust methods for routine screening of Salmonella in these commodities. Published by Elsevier B.V.
Cross-Validation of FITNESSGRAM® Health-Related Fitness Standards in Hungarian Youth
ERIC Educational Resources Information Center
Laurson, Kelly R.; Saint-Maurice, Pedro F.; Karsai, István; Csányi, Tamás
2015-01-01
Purpose: The purpose of this study was to cross-validate FITNESSGRAM® aerobic and body composition standards in a representative sample of Hungarian youth. Method: A nationally representative sample (N = 405) of Hungarian adolescents from the Hungarian National Youth Fitness Study (ages 12-18.9 years) participated in an aerobic capacity assessment…
Validation of a Teachers' Achievement Goal Instrument for Teaching Physical Education
ERIC Educational Resources Information Center
Wang, Jian; Shen, Bo; Luo, Xiaobin; Hu, Qingshan; Garn, Alex C.
2018-01-01
Purpose: Using Butler's teacher achievement goal orientation as a conceptual framework, we developed this study to validate a teachers' achievement goal instrument for teaching physical education. Methods: A sample of 322 Chinese physical education teachers participated in this study and completed measures of achievement goal orientations and job…
Using Social Network Methods to Study School Leadership
ERIC Educational Resources Information Center
Pitts, Virginia M.; Spillane, James P.
2009-01-01
Social network analysis is increasingly used in the study of policy implementation and school leadership. A key question that remains is that of instrument validity--that is, the question of whether these social network survey instruments measure what they purport to measure. In this paper, we describe our work to examine the validity of the…
A Construct Validity Study of Clinical Competence: A Multitrait Multimethod Matrix Approach
ERIC Educational Resources Information Center
Baig, Lubna; Violato, Claudio; Crutcher, Rodney
2010-01-01
Introduction: The purpose of the study was to adduce evidence for estimating the construct validity of clinical competence measured through assessment instruments used for high-stakes examinations. Methods: Thirty-nine international physicians (mean age = 41 + 6.5 y) participated in high-stakes examination and 3-month supervised clinical practice…
Validity of Adult Retrospective Reports of Adverse Childhood Experiences: Review of the Evidence
ERIC Educational Resources Information Center
Hardt, Jochen; Rutter, Michael
2004-01-01
Background: Influential studies have cast doubt on the validity of retrospective reports by adults of their own adverse experiences in childhood. Accordingly, many researchers view retrospective reports with scepticism. Method: A computer-based search, supplemented by hand searches, was used to identify studies reported between 1980 and 2001 in…
ERIC Educational Resources Information Center
Lindle, Jane Clark; Stalion, Nancy; Young, Lu
2005-01-01
Kentucky's accountability system includes a school-processes audit known as Standards and Indicators for School Improvement (SISI), which is in a nascent stage of validation. Content validity methods include comparison to instruments measuring similar constructs as well as other techniques such as job analysis. This study used a two-phase process…
ERIC Educational Resources Information Center
Wijnen-Meijer, M.; Van der Schaaf, M.; Booij, E.; Harendza, S.; Boscardin, C.; Wijngaarden, J. Van; Ten Cate, Th. J.
2013-01-01
There is a need for valid methods to assess the readiness for clinical practice of medical graduates. This study evaluates the validity of Utrecht Hamburg Trainee Responsibility for Unfamiliar Situations Test (UHTRUST), an authentic simulation procedure to assess whether medical trainees are ready to be entrusted with unfamiliar clinical tasks…
ERIC Educational Resources Information Center
Dehghan, Mahshid; Lopez Jaramillo, Patricio; Duenas, Ruby; Anaya, Lilliam Lima; Garcia, Ronald G.; Zhang, Xiaohe; Islam, Shofiqul; Merchant, Anwar T.
2012-01-01
Objective: To validate a food frequency questionnaire (FFQ) against multiple 24-hour dietary recalls (DRs) that could be used for Colombian adults. Methods: A convenience sample of 219 individuals participated in the study. The validity of the FFQ was evaluated against multiple DRs. Four dietary recalls were collected during the year, and an FFQ…
ERIC Educational Resources Information Center
Rossi, Robert Joseph
Methods drawn from four logical theories associated with studies of inductive processes are applied to the assessment and evaluation of experimental episode construct validity. It is shown that this application provides for estimates of episode informativeness with respect to the person examined in terms of the construct and to the construct…
Lifelong Learning Competence Scale (LLLCS): The Study of Validity and Reliability
ERIC Educational Resources Information Center
Uzunboylu, Huseyin; Hursen, Cigdem
2011-01-01
In this research our aim is to develop a scale for lifelong learning competences and investigate the validity and the reliability of the structure of the scale. The participants of this research are 300 secondary school teachers who are randomly selected. The findings on the scale's validity of the structure are computed by the method of factor…
Postcraniometric sex and ancestry estimation in South Africa: a validation study.
Liebenberg, Leandi; Krüger, Gabriele C; L'Abbé, Ericka N; Stull, Kyra E
2018-05-24
With the acceptance of the Daubert criteria as the standards for best practice in forensic anthropological research, more emphasis is being placed on the validation of published methods. Methods, both traditional and novel, need to be validated, adjusted, and refined for optimal performance within forensic anthropological analyses. Recently, a custom postcranial database of modern South Africans was created for use in Fordisc 3.1. Classification accuracies of up to 85% for ancestry estimation and 98% for sex estimation were achieved using a multivariate approach. To measure the external validity and report more realistic performance statistics, an independent sample was tested. The postcrania from 180 black, white, and colored South Africans were measured and classified using the custom postcranial database. A decrease in accuracy was observed for both ancestry estimation (79%) and sex estimation (95%) of the validation sample. When incorporating both sex and ancestry simultaneously, the method achieved 70% accuracy, and 79% accuracy when sex-specific ancestry analyses were run. Classification matrices revealed that postcrania were more likely to misclassify as a result of ancestry rather than sex. While both sex and ancestry influence the size of an individual, sex differences are more marked in the postcranial skeleton and are therefore easier to identify. The external validity of the postcranial database was verified and therefore shown to be a useful tool for forensic casework in South Africa. While the classification rates were slightly lower than the original method, this is expected when a method is generalized.
Sjögren, P; Ordell, S; Halling, A
2003-12-01
The aim was to describe and systematically review the methodology and reporting of validation in publications describing epidemiological registration methods for dental caries. BASIC RESEARCH METHODOLOGY: Literature searches were conducted in six scientific databases. All publications fulfilling the predetermined inclusion criteria were assessed for methodology and reporting of validation using a checklist including items described previously as well as new items. The frequency of endorsement of the assessed items was analysed. Moreover, the type and strength of evidence were evaluated. Reporting of predetermined items relating to the methodology of validation and the frequency of endorsement of the assessed items were of primary interest. Initially, 588 publications were located; 74 eligible publications were obtained, 23 of which fulfilled the inclusion criteria and remained throughout the analyses. A majority of the studies reported the methodology of validation, but the reported methodology was generally inadequate according to the recommendations of evidence-based medicine. The frequencies of reporting the assessed items (frequencies of endorsement) ranged from 4 to 84 per cent. A majority of the publications contributed to a low strength of evidence. There seems to be a need to improve the methodology and the reporting of validation in publications describing professionally registered caries epidemiology. Four of the items assessed in this study are potentially discriminative for quality assessments of reported validation.
Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software.
Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam
2015-05-01
Several sophisticated methods of footprint analysis currently exist. However, it is sometimes useful to apply standard measurement methods of recognized evidence with an easy and quick application. We sought to assess the reliability and validity of a new method of footprint assessment in a healthy population using Photoshop CS5 software (Adobe Systems Inc, San Jose, California). Forty-two footprints, corresponding to 21 healthy individuals (11 men with a mean ± SD age of 20.45 ± 2.16 years and 10 women with a mean ± SD age of 20.00 ± 1.70 years) were analyzed. Footprints were recorded in static bipedal standing position using optical podography and digital photography. Three trials for each participant were performed. The Hernández-Corvo, Chippaux-Smirak, and Staheli indices and the Clarke angle were calculated by manual method and by computerized method using Photoshop CS5 software. Test-retest was used to determine reliability. Validity was obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed high values (ICC, 0.98-0.99). Moreover, the validity test clearly showed no difference between techniques (ICC, 0.99-1). The reliability and validity of a method to measure, assess, and record the podometric indices using Photoshop CS5 software has been demonstrated. This provides a quick and accurate tool useful for the digital recording of morphostatic foot study parameters and their control.
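The reliability and validity figures above are intraclass correlation coefficients; a minimal sketch of one common variant, ICC(2,1) (two-way random effects, absolute agreement, single measurement), computed from a standard ANOVA decomposition is shown below (Python; the index values are hypothetical, not the study's data):

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    data: n_subjects x k_raters (or trials/methods) array of measurements."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    ms_r = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_c = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters/methods
    sse = ((data - data.mean(axis=1, keepdims=True)
                 - data.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical Staheli index measured manually vs. with image software for 5 feet
print(round(icc_2_1([[0.62, 0.60], [0.71, 0.73], [0.55, 0.56],
                     [0.80, 0.79], [0.66, 0.65]]), 3))
```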
de By, Theo M M H; McDonald, Carl; Süßner, Susanne; Davies, Jill; Heng, Wee Ling; Jashari, Ramadan; Bogers, Ad J J C; Petit, Pieter
2017-11-01
Surgeons needing human cardiovascular tissue for implantation in their patients are confronted with cardiovascular tissue banks that use different methods to identify and decontaminate micro-organisms. To elucidate these differences, we compared the quality of processing methods in 20 tissue banks and 1 reference laboratory. We did this to validate the results for accepting or rejecting tissue. We included the decontamination methods used and the influence of antibiotic cocktails and residues with results and controls. The minor details of the processes were not included. To compare the outcomes of microbiological testing and decontamination methods of heart valve allografts in cardiovascular tissue banks, an international quality round was organized. Twenty cardiovascular tissue banks participated in this quality round. The quality round method was validated first and consisted of sending purposely contaminated human heart valve tissue samples with known micro-organisms to the participants. The participants identified the micro-organisms using their local decontamination methods. Seventeen of the 20 participants correctly identified the micro-organisms; if these samples were heart valves to be released for implantation, 3 of the 20 participants would have decided to accept their result for release. Decontamination was shown not to be effective in 13 tissue banks because of growth of the organisms after decontamination. Articles in the literature revealed that antibiotics are effective at 36°C and not, or less so, at 2-8°C. The decontamination procedure, if it is validated, will ensure that the tissue contains no known micro-organisms. This study demonstrates that the quality round method of sending contaminated tissues and assessing the results of the microbiological cultures is an effective way of validating the processes of tissue banks. Only when harmonization, based on validated methods, has been achieved, will surgeons be able to fully rely on the methods used and have confidence in the consistent sterility of the tissue grafts. Tissue banks should validate their methods so that all stakeholders can trust the outcomes. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
2013-01-01
Background In recent years response rates on telephone surveys have been declining. Rates for the behavioral risk factor surveillance system (BRFSS) have also declined, prompting the use of new methods of weighting and the inclusion of cell phone sampling frames. A number of scholars and researchers have conducted studies of the reliability and validity of the BRFSS estimates in the context of these changes. As the BRFSS makes changes in its methods of sampling and weighting, a review of reliability and validity studies of the BRFSS is needed. Methods In order to assess the reliability and validity of prevalence estimates taken from the BRFSS, scholarship published from 2004–2011 dealing with tests of reliability and validity of BRFSS measures was compiled and presented by topics of health risk behavior. Assessments of the quality of each publication were undertaken using a categorical rubric. Higher rankings were achieved by authors who conducted reliability tests using repeated test/retest measures, or who conducted tests using multiple samples. A similar rubric was used to rank validity assessments. Validity tests which compared the BRFSS to physical measures were ranked higher than those comparing the BRFSS to other self-reported data. Literature which undertook more sophisticated statistical comparisons was also ranked higher. Results Overall findings indicated that BRFSS prevalence rates were comparable to other national surveys which rely on self-reports, although specific differences are noted for some categories of response. BRFSS prevalence rates were less similar to surveys which utilize physical measures in addition to self-reported data. There is very little research on reliability and validity for some health topics, but a great deal of information supporting the validity of the BRFSS data for others. Conclusions Limitations of the examination of the BRFSS were due to question differences among surveys used as comparisons, as well as mode of data collection differences. As the BRFSS moves to incorporating cell phone data and changing weighting methods, a review of reliability and validity research indicated that past BRFSS landline only data were reliable and valid as measured against other surveys. New analyses and comparisons of BRFSS data which include the new methodologies and cell phone data will be needed to ascertain the impact of these changes on estimates in the future. PMID:23522349
Crump, R Trafford; Lau, Ryan; Cox, Elizabeth; Currie, Gillian; Panepinto, Julie
2018-06-22
Measuring adolescents' preferences for health states can play an important role in evaluating the delivery of pediatric healthcare. However, formal evaluation of the common direct preference elicitation methods for health states has not been done with adolescents. Therefore, the purpose of this study is to test how these methods perform in terms of their feasibility, reliability, and validity for measuring health state preferences in adolescents. This study used a web-based survey of adolescents, 18 years of age or younger, living in the United States. The survey included four health states, each comprised of six attributes. Preferences for these health states were elicited using the visual analogue scale, time trade-off, and standard gamble. The feasibility, test-retest reliability, and construct validity of each of these preference elicitation methods were tested and compared. A total of 144 participants were included in this study. Using a web-based survey format to elicit preferences for health states from adolescents was feasible. A majority of participants completed all three elicitation methods, ranked those methods as being easy, with very few requiring assistance from someone else. However, all three elicitation methods demonstrated weak test-retest reliability, with Kendall's tau-a values ranging from 0.204 to 0.402. Similarly, all three methods demonstrated poor construct validity, with 9-50% of all rankings aligning with our expectations. There were no significant differences across age groups. Using a web-based survey format to elicit preferences for health states from adolescents is feasible. However, the reliability and construct validity of the methods used to elicit these preferences when using this survey format are poor. Further research into the effects of a web-based survey approach to eliciting preferences for health states from adolescents is needed before health services researchers or pediatric clinicians widely employ these methods.
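The test-retest analysis above is summarized with Kendall's tau-a; the following sketch shows how that statistic is computed from paired rankings of the same health states at two administrations (Python; the example rankings are hypothetical):

```python
from itertools import combinations

def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs; tied pairs
    count in the denominator but as neither concordant nor discordant."""
    n = len(x)
    conc = disc = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

# Hypothetical health-state rankings from one adolescent at test and retest
print(kendall_tau_a([1, 2, 3, 4], [2, 1, 3, 4]))  # 0.67: one swapped pair
```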
Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D
2014-10-01
We treat multireader multicase (MRMC) reader studies for which a reader's diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1 = P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1 - P2 when P1 - P2 = 0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1 = P2). To illustrate the utility of our simulation model, we adapt the Obuchowski-Rockette-Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data.
Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D.
2014-01-01
Abstract. We treat multireader multicase (MRMC) reader studies for which a reader’s diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1=P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1−P2 when P1−P2=0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1=P2). To illustrate the utility of our simulation model, we adapt the Obuchowski–Rockette–Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data. PMID:26158051
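One common way to obtain correlated binary agreement data of the kind described in the two records above is a latent-Gaussian threshold model with shared reader and case random effects. The sketch below is illustrative only and does not reproduce the authors' exact correlation parameterization:

```python
import numpy as np
from scipy.stats import norm

def simulate_binary_mrmc(n_readers=5, n_cases=100, p=0.8,
                         var_reader=0.2, var_case=0.5, seed=0):
    """Simulate a readers-by-cases matrix of binary agreement scores (1 = agree
    with truth) via a latent-Gaussian threshold model. Shared reader and case
    random effects induce correlation across rows and columns."""
    rng = np.random.default_rng(seed)
    var_eps = 1.0
    total_sd = np.sqrt(var_reader + var_case + var_eps)
    thresh = norm.ppf(1 - p) * total_sd          # so P(latent > thresh) = p
    r_eff = rng.normal(0.0, np.sqrt(var_reader), size=(n_readers, 1))
    c_eff = rng.normal(0.0, np.sqrt(var_case), size=(1, n_cases))
    eps = rng.normal(0.0, np.sqrt(var_eps), size=(n_readers, n_cases))
    return ((r_eff + c_eff + eps) > thresh).astype(int)

scores = simulate_binary_mrmc()
print(scores.mean())   # empirical agreement probability, close to p = 0.8
```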
Peck, Michelle A; Sturk-Andreaggi, Kimberly; Thomas, Jacqueline T; Oliver, Robert S; Barritt-Ross, Suzanne; Marshall, Charla
2018-05-01
Generating mitochondrial genome (mitogenome) data from reference samples in a rapid and efficient manner is critical to harnessing the greater power of discrimination of the entire mitochondrial DNA (mtDNA) marker. The method of long-range target enrichment, Nextera XT library preparation, and Illumina sequencing on the MiSeq is a well-established technique for generating mitogenome data from high-quality samples. To this end, a validation was conducted for this mitogenome method processing up to 24 samples simultaneously along with analysis in the CLC Genomics Workbench and utilizing the AQME (AFDIL-QIAGEN mtDNA Expert) tool to generate forensic profiles. This validation followed the Federal Bureau of Investigation's Quality Assurance Standards (QAS) for forensic DNA testing laboratories and the Scientific Working Group on DNA Analysis Methods (SWGDAM) validation guidelines. The evaluation of control DNA, non-probative samples, blank controls, mixtures, and nonhuman samples demonstrated the validity of this method. Specifically, the sensitivity was established at ≥25 pg of nuclear DNA input for accurate mitogenome profile generation. Unreproducible low-level variants were observed in samples with low amplicon yields. Further, variant quality was shown to be a useful metric for identifying sequencing error and crosstalk. Success of this method was demonstrated with a variety of reference sample substrates and extract types. These studies further demonstrate the advantages of using NGS techniques by highlighting the quantitative nature of heteroplasmy detection. The results presented herein from more than 175 samples processed in ten sequencing runs, show this mitogenome sequencing method and analysis strategy to be valid for the generation of reference data. Copyright © 2018 Elsevier B.V. All rights reserved.
The validation of Huffaz Intelligence Test (HIT)
NASA Astrophysics Data System (ADS)
Rahim, Mohd Azrin Mohammad; Ahmad, Tahir; Awang, Siti Rahmah; Safar, Ajmain
2017-08-01
In general, a hafiz who has memorized the Quran shows many strengths, especially with respect to academic performance. In this study, the theory of multiple intelligences introduced by Howard Gardner is embedded in a newly developed psychometric instrument, the Huffaz Intelligence Test (HIT). This paper presents the validity and reliability of the HIT among tahfiz students in Malaysian Islamic schools. A pilot study was conducted involving 87 huffaz who were randomly selected to answer the items in the HIT. The analysis used Partial Least Squares (PLS) to assess reliability as well as convergent and discriminant validity. The study validated nine intelligences. The findings also indicated that the composite reliabilities for the nine types of intelligences are greater than 0.8. Thus, the HIT is a valid and reliable instrument for measuring multiple intelligences among huffaz.
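The composite reliabilities reported above are conventionally computed from standardized indicator loadings; a minimal sketch of that calculation (Python; the loadings are hypothetical, not the HIT results) is:

```python
def composite_reliability(loadings):
    """Composite reliability from standardized indicator loadings:
    CR = (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2))."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

# Hypothetical outer loadings for one intelligence construct
print(round(composite_reliability([0.78, 0.81, 0.74, 0.69]), 3))  # > 0.8
```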
The Importance of Method Selection in Determining Product Integrity for Nutrition Research
Mudge, Elizabeth M; Brown, Paula N
2016-01-01
The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. PMID:26980823
The Importance of Method Selection in Determining Product Integrity for Nutrition Research.
Mudge, Elizabeth M; Betz, Joseph M; Brown, Paula N
2016-03-01
The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. © 2016 American Society for Nutrition.
Simulation-based training for prostate surgery.
Khan, Raheej; Aydin, Abdullatif; Khan, Muhammad Shamim; Dasgupta, Prokar; Ahmed, Kamran
2015-10-01
To identify and review the currently available simulators for prostate surgery and to explore the evidence supporting their validity for training purposes. A review of the literature between 1999 and 2014 was performed. The search terms included a combination of urology, prostate surgery, robotic prostatectomy, laparoscopic prostatectomy, transurethral resection of the prostate (TURP), simulation, virtual reality, animal model, human cadavers, training, assessment, technical skills, validation and learning curves. Furthermore, relevant abstracts from the American Urological Association, European Association of Urology, British Association of Urological Surgeons and World Congress of Endourology meetings, between 1999 and 2013, were included. Only studies related to prostate surgery simulators were included; studies regarding other urological simulators were excluded. A total of 22 studies that carried out a validation study were identified. Five validated models and/or simulators were identified for TURP, one for photoselective vaporisation of the prostate, two for holmium enucleation of the prostate, three for laparoscopic radical prostatectomy (LRP) and four for robot-assisted surgery. Of the TURP simulators, all five have demonstrated content validity, three face validity and four construct validity. The GreenLight laser simulator has demonstrated face, content and construct validities. The Kansai HoLEP Simulator has demonstrated face and content validity whilst the UroSim HoLEP Simulator has demonstrated face, content and construct validity. All three animal models for LRP have been shown to have construct validity whilst the chicken skin model was also content valid. Only two robotic simulators were identified with relevance to robot-assisted laparoscopic prostatectomy, both of which demonstrated construct validity. A wide range of different simulators are available for prostate surgery, including synthetic bench models, virtual-reality platforms, animal models, human cadavers, distributed simulation and advanced training programmes and modules. The currently validated simulators can be used by healthcare organisations to provide supplementary training sessions for trainee surgeons. Further research should be conducted to validate simulated environments, to determine which simulators have greater efficacy than others and to assess the cost-effectiveness of the simulators and the transferability of skills learnt. With surgeons investigating new possibilities for easily reproducible and valid methods of training, simulation offers great scope for implementation alongside traditional methods of training. © 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.
Schmettow, Martin; Schnittker, Raphaela; Schraagen, Jan Maarten
2017-05-01
This paper proposes and demonstrates an extended protocol for usability validation testing of medical devices. A review of currently used methods for the usability evaluation of medical devices revealed two main shortcomings. Firstly, the lack of methods to closely trace the interaction sequences and derive performance measures. Secondly, a prevailing focus on cross-sectional validation studies, ignoring the issues of learnability and training. The U.S. Federal Drug and Food Administration's recent proposal for a validation testing protocol for medical devices is then extended to address these shortcomings: (1) a novel process measure 'normative path deviations' is introduced that is useful for both quantitative and qualitative usability studies and (2) a longitudinal, completely within-subject study design is presented that assesses learnability, training effects and allows analysis of diversity of users. A reference regression model is introduced to analyze data from this and similar studies, drawing upon generalized linear mixed-effects models and a Bayesian estimation approach. The extended protocol is implemented and demonstrated in a study comparing a novel syringe infusion pump prototype to an existing design with a sample of 25 healthcare professionals. Strong performance differences between designs were observed with a variety of usability measures, as well as varying training-on-the-job effects. We discuss our findings with regard to validation testing guidelines, reflect on the extensions and discuss the perspectives they add to the validation process. Copyright © 2017 Elsevier Inc. All rights reserved.
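The reference regression model described above draws on generalized linear mixed-effects models with Bayesian estimation; as a rough, purely frequentist stand-in (not the authors' reference model), a random-intercept mixed model on simulated log completion times with design and session (training) effects might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical longitudinal usability data: each participant tests both designs
# over repeated sessions; log_tot = log task completion time.
rng = np.random.default_rng(1)
rows = []
for p in range(25):
    skill = rng.normal(0, 0.15)                        # participant random effect
    for s in range(3):
        for design in ("legacy", "novel"):
            base = 4.0 if design == "legacy" else 3.7  # log-seconds
            rows.append({"participant": p, "session": s, "design": design,
                         "log_tot": base - 0.10 * s + skill + rng.normal(0, 0.2)})
df = pd.DataFrame(rows)

# Random intercept per participant; fixed effects for design and session (training)
model = smf.mixedlm("log_tot ~ design + session", df, groups=df["participant"])
print(model.fit().summary())
```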
Stern, K I; Malkova, T L
The objective of the present study was the development and validation of a method for the determination of the demethylated derivatives of sibutramine, desmethyl sibutramine and didesmethyl sibutramine. Gas-liquid chromatography with flame ionization detection was used for the quantitative determination of the above substances in dietary supplements. Chromatographic conditions for determining the analytes in the presence of the reference standard, methyl stearate, were established, allowing efficient separation to be achieved. The method has the necessary sensitivity, specificity, linearity, accuracy, and precision (on an intra-day and inter-day basis), indicating good validation characteristics. The proposed method can be employed in analytical laboratories for the quantitative determination of sibutramine derivatives in biologically active dietary supplements.
Takamura, Ayari; Watanabe, Ken; Akutsu, Tomoko
2017-07-01
Identification of human semen is indispensable for the investigation of sexual assaults. Fluorescence staining methods using commercial kits, such as the series of SPERM HY-LITER™ kits, have been useful to detect human sperm via strong fluorescence. These kits have been examined from various forensic aspects. However, because of a lack of evaluation methods, these studies did not provide objective, or quantitative, descriptions of the results nor clear criteria for the decisions reached. In addition, the variety of validations was considerably limited. In this study, we conducted more advanced validations of SPERM HY-LITER™ Express using our established image analysis method. Use of this method enabled objective and specific identification of fluorescent sperm's spots and quantitative comparisons of the sperm detection performance under complex experimental conditions. For body fluid mixtures, we examined interference with the fluorescence staining from other body fluid components. Effects of sample decomposition were simulated in high humidity and high temperature conditions. Semen with quite low sperm concentrations, such as azoospermia and oligospermia samples, represented the most challenging cases in application of the kit. Finally, the tolerance of the kit against various acidic and basic environments was analyzed. The validations herein provide useful information for the practical applications of the SPERM HY-LITER™ Express kit, which were previously unobtainable. Moreover, the versatility of our image analysis method toward various complex cases was demonstrated.
Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns
Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain
2015-01-01
Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model’s predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917
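The Pattern Space Exploration method above builds on Novelty Search, whose core ingredient is a novelty score based on distance to previously seen behaviour descriptors. A minimal sketch under illustrative assumptions (random candidate patterns standing in for actual model runs) is:

```python
import numpy as np

def novelty(behavior, others, k=5):
    """Novelty of one behavior descriptor = mean Euclidean distance to its
    k nearest neighbours among previously seen descriptors (archive + population)."""
    others = np.asarray(others, float)
    d = np.linalg.norm(others - np.asarray(behavior, float), axis=1)
    return np.sort(d)[:k].mean()

# Toy loop: keep the most novel pattern descriptors in an archive
rng = np.random.default_rng(0)
archive = [rng.uniform(size=2)]
for _ in range(200):
    candidate = rng.uniform(size=2)   # stand-in for "run model, summarize pattern"
    if novelty(candidate, archive, k=3) > 0.15:
        archive.append(candidate)
print(len(archive), "distinct pattern descriptors retained")
```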
Analysis of Thiodiglycol: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS777
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owens, J; Koester, C
The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method for the analysis of thiodiglycol, the breakdown product of the sulfur mustard HD, in water by high performance liquid chromatography tandem mass spectrometry (HPLC-MS/MS), titled Method EPA MS777 (hereafter referred to as EPA CRL SOP MS777). This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to verify the analytical procedures described in MS777 for analysis of thiodiglycol in aqueous samples. The gathered data from this study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of Method EPA MS777 can be determined.
Huet, S; Marie, J P; Gualde, N; Robert, J
1998-12-15
Multidrug resistance (MDR) associated with overexpression of the MDR1 gene and of its product, P-glycoprotein (Pgp), plays an important role in limiting cancer treatment efficacy. Many studies have investigated Pgp expression in clinical samples of hematological malignancies but failed to give a definitive conclusion on its usefulness. One convenient method for fluorescent detection of Pgp in malignant cells is flow cytometry, which, however, gives variable results from one laboratory to another, partly due to the lack of a rigorously tested reference method. The purpose of this technical note is to describe each step of a reference flow cytometric method. The guidelines for sample handling, staining and analysis have been established both for Pgp detection with monoclonal antibodies directed against extracellular epitopes (MRK16, UIC2 and 4E3), and for Pgp functional activity measurement with Rhodamine 123 as a fluorescent probe. Both methods have been validated on cultured cell lines and clinical samples by 12 laboratories of the French Drug Resistance Network. This cross-validated multicentric study points out crucial steps for the accuracy and reproducibility of the results, such as cell viability, data analysis and expression.
EPA (ENVIRONMENTAL PROTECTION AGENCY) METHOD STUDY 28, PCB'S (POLYCHLORINATED BIPHENYLS) IN OIL
This report describes the experimental design and the results of the validation study for two analytical methods to detect polychlorinated biphenyls in oil. The methods analyzed for four PCB Aroclors (1016, 1242, 1254, and 1260), 2-chlorobiphenyl, and decachlorobiphenyl. The firs...
Extension of the ratio method to low energy
Colomer, Frederic; Capel, Pierre; Nunes, F. M.; ...
2016-05-25
The ratio method has been proposed as a means to remove the reaction model dependence in the study of halo nuclei. Originally, it was developed for higher energies but given the potential interest in applying the method at lower energy, in this work we explore its validity at 20 MeV/nucleon. The ratio method takes the ratio of the breakup angular distribution and the summed angular distribution (which includes elastic, inelastic and breakup) and uses this observable to constrain the features of the original halo wave function. In this work we use the Continuum Discretized Coupled Channel method and the Coulomb-corrected Dynamical Eikonal Approximation for the study. We study the reactions of 11Be on 12C, 40Ca and 208Pb at 20 MeV/nucleon. We compare the various theoretical descriptions and explore the dependence of our result on the core-target interaction. Lastly, our study demonstrates that the ratio method is valid at these lower beam energies.
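The ratio observable described above is simply the breakup angular distribution divided by the summed (elastic + inelastic + breakup) distribution; a minimal sketch with hypothetical cross-section values:

```python
import numpy as np

def ratio_observable(dsigma_breakup, dsigma_elastic, dsigma_inelastic):
    """R(theta) = breakup angular distribution / summed angular distribution,
    where the sum includes elastic, inelastic and breakup contributions."""
    total = (np.asarray(dsigma_elastic, float)
             + np.asarray(dsigma_inelastic, float)
             + np.asarray(dsigma_breakup, float))
    return np.asarray(dsigma_breakup, float) / total

# Hypothetical cross sections (mb/sr) at a few scattering angles
print(ratio_observable([1.2, 0.9, 0.6], [30.0, 12.0, 5.0], [0.5, 0.4, 0.3]))
```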
NASA Astrophysics Data System (ADS)
Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.
2016-07-01
The main goal of method validation is to demonstrate that a method is suitable for its intended purpose. One advantage of analytical method validation is the level of confidence it provides in the measurement results reported to satisfy a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis has been applied across a wide range of areas, mainly in materials science and in impurity determinations in geological, biological and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house validation of a method for elemental composition determination by WDS are shown. SRM 482, a binary Cu-Au alloy of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently required for accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed, including systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.
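The uncertainty model mentioned above combines systematic and random contributions; a common form of such a budget (not necessarily the authors' exact model) combines independent standard uncertainties in quadrature and applies a coverage factor:

```python
import math

def expanded_uncertainty(u_components, k=2.0):
    """Combine independent standard-uncertainty components in quadrature and
    apply a coverage factor k (k = 2 for approximately 95 % coverage)."""
    u_c = math.sqrt(sum(u ** 2 for u in u_components))
    return k * u_c

# Hypothetical budget for a WDS mass-fraction result (values in % absolute):
# counting statistics, calibration with the SRM, matrix correction, drift
print(round(expanded_uncertainty([0.15, 0.10, 0.20, 0.05]), 3))
```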
Tutorial in Biostatistics: Instrumental Variable Methods for Causal Inference*
Baiocchi, Michael; Cheng, Jing; Small, Dylan S.
2014-01-01
A goal of many health studies is to determine the causal effect of a treatment or intervention on health outcomes. Often, it is not ethically or practically possible to conduct a perfectly randomized experiment and instead an observational study must be used. A major challenge to the validity of observational studies is the possibility of unmeasured confounding (i.e., unmeasured ways in which the treatment and control groups differ before treatment administration which also affect the outcome). Instrumental variables analysis is a method for controlling for unmeasured confounding. This type of analysis requires the measurement of a valid instrumental variable, which is a variable that (i) is independent of the unmeasured confounding; (ii) affects the treatment; and (iii) affects the outcome only indirectly through its effect on the treatment. This tutorial discusses the types of causal effects that can be estimated by instrumental variables analysis; the assumptions needed for instrumental variables analysis to provide valid estimates of causal effects and sensitivity analysis for those assumptions; methods of estimation of causal effects using instrumental variables; and sources of instrumental variables in health studies. PMID:24599889
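A minimal numerical sketch of the instrumental-variables idea described above, using the Wald (ratio) estimator with a single simulated instrument and an unmeasured confounder (all variable names and parameter values are hypothetical):

```python
import numpy as np

def wald_iv_estimate(z, d, y):
    """Wald estimator with a single instrument z, treatment d, outcome y:
    effect = cov(z, y) / cov(z, d)."""
    z, d, y = (np.asarray(a, float) for a in (z, d, y))
    return np.cov(z, y)[0, 1] / np.cov(z, d)[0, 1]

rng = np.random.default_rng(42)
n = 5000
u = rng.normal(size=n)                 # unmeasured confounder
z = rng.binomial(1, 0.5, size=n)       # instrument (e.g., randomized encouragement)
d = (0.8 * z + u + rng.normal(size=n) > 0.5).astype(float)   # treatment received
y = 2.0 * d + 1.5 * u + rng.normal(size=n)                   # true effect of d is 2.0

print("naive slope:", np.cov(d, y)[0, 1] / np.cov(d))   # biased upward by u
print("IV (Wald) estimate:", wald_iv_estimate(z, d, y))  # close to 2.0
```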
Brown, Paula N.; Chan, Michael; Paley, Lori; Betz, Joseph M.
2013-01-01
A method previously validated to determine caftaric acid, chlorogenic acid, cynarin, echinacoside, and cichoric acid in echinacea raw materials has been successfully applied to dry extract and liquid tincture products in response to North American consumer needs. Single-laboratory validation was used to assess the repeatability, accuracy, selectivity, LOD, LOQ, analyte stability (ruggedness), and linearity of the method, with emphasis on finished products. Repeatability precision for each phenolic compound was between 1.04 and 5.65% RSD, with HorRat values between 0.30 and 1.39 for raw and dry extract finished products. HorRat values for tinctures were between 0.09 and 1.10. Accuracy of the method was determined through spike recovery studies. Recovery of each compound from raw material negative control (ginseng) was between 90 and 114%, while recovery from the finished product negative control (maltodextrin and magnesium stearate) was between 97 and 103%. A study was conducted to determine if cichoric acid, a major phenolic component of Echinacea purpurea (L.) Moench and E. angustifolia DC, degrades during sample preparation (extraction) and HPLC analysis. No significant degradation was observed over an extended testing period using the validated method. PMID:22165004
Brown, Paula N; Chan, Michael; Paley, Lori; Betz, Joseph M
2011-01-01
A method previously validated to determine caftaric acid, chlorogenic acid, cynarin, echinacoside, and cichoric acid in echinacea raw materials has been successfully applied to dry extract and liquid tincture products in response to North American consumer needs. Single-laboratory validation was used to assess the repeatability, accuracy, selectivity, LOD, LOQ, analyte stability (ruggedness), and linearity of the method, with emphasis on finished products. Repeatability precision for each phenolic compound was between 1.04 and 5.65% RSD, with HorRat values between 0.30 and 1.39 for raw and dry extract finished products. HorRat values for tinctures were between 0.09 and 1.10. Accuracy of the method was determined through spike recovery studies. Recovery of each compound from raw material negative control (ginseng) was between 90 and 114%, while recovery from the finished product negative control (maltodextrin and magnesium stearate) was between 97 and 103%. A study was conducted to determine if cichoric acid, a major phenolic component of Echinacea purpurea (L.) Moench and E. angustifolia DC, degrades during sample preparation (extraction) and HPLC analysis. No significant degradation was observed over an extended testing period using the validated method.
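The HorRat values reported in the two records above are the ratio of the observed relative standard deviation to the Horwitz-predicted value; a minimal sketch of that calculation (the concentration and RSD below are hypothetical) is:

```python
import math

def horrat(rsd_observed_pct, mass_fraction):
    """HorRat = observed %RSD / Horwitz-predicted %RSD, where the predicted
    reproducibility PRSD(R) = 2^(1 - 0.5*log10(C)) and C is the analyte
    concentration expressed as a mass fraction (g/g)."""
    prsd = 2 ** (1 - 0.5 * math.log10(mass_fraction))
    return rsd_observed_pct / prsd

# Hypothetical: cichoric acid at 2.5 % (0.025 g/g) with an observed RSD of 3.1 %
print(round(horrat(3.1, 0.025), 2))
```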
Police, Anitha; Gurav, Sandip; Dhiman, Vinay; Zainuddin, Mohd; Bhamidipati, Ravi Kanth; Rajagopal, Sriram; Mullangi, Ramesh
2015-11-01
A simple, specific, sensitive and reproducible high-performance liquid chromatography (HPLC) assay method has been developed and validated for the estimation of odanacatib in rat and human plasma. The bioanalytical procedure involves extraction of odanacatib and itraconazole (internal standard, IS) from a 200 μL plasma aliquot with simple liquid-liquid extraction process. Chromatographic separation was achieved on a Symmetry Shield RP18 using an isocratic mobile phase at a flow rate of 0.7 mL/min. The UV detection wave length was 268 nm. Odanacatib and IS eluted at 5.5 and 8.6 min, respectively with a total run time of 10 min. Method validation was performed as per US Food and Drug Administration guidelines and the results met the acceptance criteria. The calibration curve was linear over a concentration range of 50.9-2037 ng/mL (r(2) = 0.994). The intra- and inter-day precisions were in the range of 2.06-5.11 and 5.84-13.1%, respectively, in rat plasma and 2.38-7.90 and 6.39-10.2%, respectively, in human plasma. The validated HPLC method was successfully applied to a pharmacokinetic study in rats. Copyright © 2015 John Wiley & Sons, Ltd.
Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H.; Clerici, Libero; Coecke, Sandra; Douglas, George R.; Gribaldo, Laura; Groten, John P.; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R.; Toda, Eisaku; Tong, Weida; van Delft, Joost H.; Weis, Brenda; Schechtman, Leonard M.
2006-01-01
This is the report of the first workshop “Validation of Toxicogenomics-Based Test Systems” held 11–12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that to do so now, as the technology is evolving and associated challenges are identified, would be a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas include a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities. PMID:16507466
Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H; Clerici, Libero; Coecke, Sandra; Douglas, George R; Gribaldo, Laura; Groten, John P; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R; Toda, Eisaku; Tong, Weida; van Delft, Joost H; Weis, Brenda; Schechtman, Leonard M
2006-03-01
This is the report of the first workshop "Validation of Toxicogenomics-Based Test Systems" held 11-12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that to do so now, as the technology is evolving and associated challenges are identified, would be a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas include a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities.
What is the best method for assessing lower limb force-velocity relationship?
Giroux, C; Rabita, G; Chollet, D; Guilhem, G
2015-02-01
This study determined the concurrent validity and reliability of force, velocity and power measurements provided by accelerometry, linear position transducer and Samozino's methods during loaded squat jumps. 17 subjects performed squat jumps on 2 separate occasions in 7 loading conditions (0-60% of the maximal concentric load). Force, velocity and power patterns were averaged over the push-off phase using accelerometry, a linear position transducer and a method based on key position measurements during the squat jump, and compared to force plate measurements. Concurrent validity analyses indicated very good agreement with the reference method (CV=6.4-14.5%). Comparison of the force, velocity and power patterns confirmed the agreement, with slight differences for high-velocity movements. The validity of measurements was equivalent for all tested methods (r=0.87-0.98). Bland-Altman plots showed a lower agreement for velocity and power compared with force. Mean force, velocity and power were reliable for all methods (ICC=0.84-0.99), especially for Samozino's method (CV=2.7-8.6%). Our findings showed that these methods are valid and reliable in different loading conditions and permit between-session comparisons and characterization of training-induced effects. While the linear position transducer and accelerometer allow for examining the whole time-course of kinetic patterns, Samozino's method benefits from better reliability and ease of processing. © Georg Thieme Verlag KG Stuttgart · New York.
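The agreement analysis above relies on Bland-Altman plots; a minimal sketch computing the mean bias and 95% limits of agreement between two methods (the power values are hypothetical, not the study's data) is:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman agreement: mean bias and 95 % limits of agreement
    (bias +/- 1.96 * SD of the paired differences)."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Hypothetical mean power (W) from the force plate vs. a field method
fp = [1450, 1620, 1380, 1710, 1555]
field = [1432, 1650, 1360, 1698, 1570]
print(bland_altman(fp, field))
```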
Miles, Dale R; Mesfin, Mimi; Mody, Tarak D; Stiles, Mark; Lee, Jean; Fiene, John; Denis, Bernie; Boswell, Garry W
2006-05-01
Liquid chromatography-fluorescence (LC-FLS), liquid chromatography-tandem mass spectrometry (LC-MS/MS) and inductively coupled plasma-mass spectrometry (ICP-MS) methods were developed and validated for the evaluation of motexafin gadolinium (MGd, Xcytrin) pharmacokinetics and biodistribution in plasma and tissues. The LC-FLS method exhibited the greatest sensitivity (0.0057 microg mL(-1)), and was used for pharmacokinetic, biodistribution, and protein binding studies with small sample sizes or low MGd concentrations. The LC-MS/MS method, which exhibited a short run time and excellent selectivity, was used for routine clinical plasma sample analysis. The ICP-MS method, which measured total Gd, was used in conjunction with LC methods to assess MGd stability in plasma. All three methods were validated using human plasma. The LC-FLS method was also validated using plasma, liver and kidneys from mice and rats. All three methods were shown to be accurate, precise and robust for each matrix validated. For three mice, the mean (standard deviation) concentration of MGd in plasma/tissues taken 5 hr after dosing with 23 mg kg(-1) MGd was determined by LC-FLS as follows: plasma (0.025+/-0.002 microg mL(-1)), liver (2.89+/-0.45 microg g(-1)), and kidney (6.09+/-1.05 microg g(-1)). Plasma samples from a subset of patients with brain metastases from extracranial tumors were analyzed using both LC-MS/MS and ICP-MS methods. For a representative patient, > or = 90% of the total Gd in plasma was accounted for as MGd over the first hour post dosing. By 24 hr post dosing, 63% of total Gd was accounted for as MGd, indicating some metabolism of MGd.
Kubachka, Kevin; Heitkemper, Douglas T; Conklin, Sean
2017-07-01
Before being designated AOAC First Action Official MethodSM 2016.04, the U.S. Food and Drug Administration's method, EAM 4.10 High Performance Liquid Chromatography-Inductively Coupled Plasma-Mass Spectrometric Determination of Four Arsenic Species in Fruit Juice, underwent both a single-laboratory validation and a multilaboratory validation (MLV) study. Three federal and five state regulatory laboratories participated in the MLV study, which is the primary focus of this manuscript. The method was validated for inorganic arsenic (iAs) measured as the sum of the two iAs species arsenite [As(III)] and arsenate [As(V)], dimethylarsinic acid (DMA), and monomethylarsonic acid (MMA) by analyses of 13 juice samples, including three apple juice, three apple juice concentrate, four grape juice, and three pear juice samples. In addition, two water Standard Reference Materials (SRMs) were analyzed. The method LODs and LOQs obtained among the eight laboratories were approximately 0.3 and 2 ng/g, respectively, for each of the analytes and were adequate for the intended purpose of the method. Each laboratory analyzed method blanks, fortified method blanks, reference materials, triplicate portions of each juice sample, and duplicate fortified juice samples (one for each matrix type) at three fortification levels. In general, repeatability and reproducibility of the method was ≤15% RSD for each species present at a concentration >LOQ. The average recovery of fortified analytes for all laboratories ranged from 98 to 104% iAs, DMA, and MMA for all four juice sample matrixes. The average iAs results for SRMs 1640a and 1643e agreed within the range of 96-98% of certified values for total arsenic.
Emory, Joshua F.; Seserko, Lauren A.; Marzinke, Mark A.
2014-01-01
Background Maraviroc is a CCR5 antagonist that has been utilized as a viral entry inhibitor in the management of HIV-1. Current clinical trials are pursuing maraviroc drug efficacy in both oral and topical formulations. Therefore, in order to fully understand drug pharmacokinetics, a sensitive method is required to quantify plasma drug concentrations. Methods Maraviroc-spiked plasma was combined with acetonitrile containing an isotopically-labeled internal standard, and following protein precipitation, samples were evaporated to dryness and reconstituted for liquid chromatographic-tandem mass spectrometric (LC-MS/MS) analysis. Chromatographic separation was achieved on a Waters BEH C8, 50 × 2.1 mm UPLC column, with a 1.7 μm particle size, and the eluent was analyzed using an API 4000 mass analyzer in selected reaction monitoring mode. The method was validated as per FDA Bioanalytical Method Validation guidelines. Results The analytical measuring range of the LC-MS/MS method is 0.5-1000 ng/ml. Calibration curves were generated using weighted 1/x² quadratic regression. Inter- and intra-assay precision was ≤ 5.38% and ≤ 5.98%, respectively; inter- and intra-assay accuracy (%DEV) was ≤ 10.2% and ≤ 8.44%, respectively. Additional studies illustrated similar matrix effects between maraviroc and its internal standard, and that maraviroc is stable under a variety of conditions. Method comparison studies with a reference LC-MS/MS method show a slope of 0.948 with a Spearman coefficient of 0.98. Conclusions Based on the validation metrics, we have generated a sensitive and automated LC-MS/MS method for maraviroc quantification in human plasma. PMID:24561264
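The calibration model above is a weighted 1/x² quadratic regression; a minimal sketch of such a fit and back-calculation (the concentrations and peak-area ratios are hypothetical; note that numpy.polyfit applies weights to residuals before squaring, so w = 1/x reproduces the usual 1/x² weighting of squared residuals) is:

```python
import numpy as np

# Hypothetical calibration: nominal concentrations (ng/mL) and analyte/IS
# peak-area ratios
conc = np.array([0.5, 1, 5, 25, 100, 400, 1000], dtype=float)
ratio = np.array([0.0047, 0.0092, 0.0452, 0.225, 0.902, 3.63, 9.20])

# Weighted quadratic fit; w = 1/x gives 1/x^2 weighting of squared residuals
a, b, c = np.polyfit(conc, ratio, deg=2, w=1.0 / conc)

def back_calc(peak_ratio):
    """Back-calculate concentration from a peak-area ratio via the positive
    root of the fitted quadratic a*x**2 + b*x + c = peak_ratio."""
    disc = (b ** 2 - 4 * a * (c - peak_ratio)) ** 0.5
    return (-b + disc) / (2 * a)

print(back_calc(0.225))   # should recover roughly 25 ng/mL
```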
Wenzl, Thomas; Karasek, Lubomir; Rosen, Johan; Hellenaes, Karl-Erik; Crews, Colin; Castle, Laurence; Anklam, Elke
2006-11-03
A European inter-laboratory study was conducted to validate two analytical procedures for the determination of acrylamide in bakery ware (crispbreads, biscuits) and potato products (chips), within a concentration range from about 20 microg/kg to about 9000 microg/kg. The methods are based on gas chromatography-mass spectrometry (GC-MS) of the derivatised analyte and on high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) of native acrylamide. Isotope dilution with isotopically labelled acrylamide was an integral part of both methods. The study was evaluated according to internationally accepted guidelines. The performance of the HPLC-MS/MS method was found to be superior to that of the GC-MS method and to be fit-for-the-purpose.
Gamifying Self-Management of Chronic Illnesses: A Mixed-Methods Study
Wills, Gary; Ranchhod, Ashok
2016-01-01
Background Self-management of chronic illnesses is an ongoing issue in health care research. Gamification is a concept that arose in the field of computer science and has been borrowed by many other disciplines. It is perceived by many that gamification can improve the self-management experience of people with chronic illnesses. This paper discusses the validation of a framework (called The Wheel of Sukr) that was introduced to achieve this goal. Objective This research aims to (1) discuss a gamification framework targeting the self-management of chronic illnesses and (2) validate the framework by diabetic patients, medical professionals, and game experts. Methods A mixed-method approach was used to validate the framework. Expert interviews (N=8) were conducted in order to validate the themes of the framework. Additionally, diabetic participants completed a questionnaire (N=42) in order to measure their attitudes toward the themes of the framework. Results The results provide a validation of the framework. This indicates that gamification might improve the self-management of chronic illnesses, such as diabetes. Namely, the eight themes in the Wheel of Sukr (fun, esteem, socializing, self-management, self-representation, motivation, growth, sustainability) were perceived positively by 71% (30/42) of the participants with P value <.001. Conclusions In this research, both the interviews and the questionnaire yielded positive results that validate the framework (The Wheel of Sukr). Generally, this study indicates an overall acceptance of the notion of gamification in the self-management of diabetes. PMID:27612632
Bujold, M; El Sherif, R; Bush, P L; Johnson-Lafleur, J; Doray, G; Pluye, P
2018-02-01
This mixed-methods study content-validated the Information Assessment Method for parents (IAM-parent), which allows users to systematically rate and comment on online parenting information. Quantitative data and results: 22,407 IAM ratings were collected; of the initial 32 items, descriptive statistics showed that 10 had low relevance. Qualitative data and results: IAM-based comments were collected, and 20 IAM users were interviewed (maximum variation sample); the qualitative data analysis assessed the representativeness of IAM items, and identified items with problematic wording. Researchers, the program director, and Web editors integrated quantitative and qualitative results, which led to a shorter and clearer IAM-parent. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions
Bouwense, Matthew D.
2017-09-01
Ongay, Sara; Hendriks, Gert; Hermans, Jos; van den Berge, Maarten; ten Hacken, Nick H T; van de Merbel, Nico C; Bischoff, Rainer
2014-01-24
In spite of the data suggesting the potential of urinary desmosine (DES) and isodesmosine (IDS) as biomarkers for elevated lung elastic fiber turnover, further validation in large-scale studies of COPD populations, as well as the analysis of longitudinal samples, is required. Validated analytical methods that allow the accurate and precise quantification of DES and IDS in human urine are mandatory in order to properly evaluate the outcome of such clinical studies. In this work, we present the development and full validation of two methods that allow DES and IDS measurement in human urine, one for the free and one for the total (free+peptide-bound) forms. To this end we compared the two principal approaches that are used for the absolute quantification of endogenous compounds in biological samples: analysis against calibrators containing authentic analyte in surrogate matrix, or containing surrogate analyte in authentic matrix. The validated methods were employed for the analysis of a small set of samples including healthy never-smokers, healthy current-smokers and COPD patients. This is the first time that the analysis of urinary free DES, free IDS, total DES, and total IDS has been fully validated and that the surrogate analyte approach has been evaluated for their quantification in biological samples. Results indicate that the presented methods have the necessary quality and level of validation to assess the potential of urinary DES and IDS levels as biomarkers for the progression of COPD and the effect of therapeutic interventions. Copyright © 2014 Elsevier B.V. All rights reserved.
Fractal Clustering and Knowledge-driven Validation Assessment for Gene Expression Profiling.
Wang, Lu-Yong; Balasubramanian, Ammaiappan; Chakraborty, Amit; Comaniciu, Dorin
2005-01-01
DNA microarray experiments generate a substantial amount of information about global gene expression. Gene expression profiles can be represented as points in multi-dimensional space, and identifying relevant groups of genes is essential in biomedical research. Clustering is helpful for pattern recognition in gene expression profiles, and a number of clustering techniques have been introduced. However, these traditional methods mainly rely on shape-based assumptions or a distance metric to cluster points in linear multi-dimensional Euclidean space, and their results show poor consistency with the functional annotation of genes in previous validation studies. From a different perspective, we propose a fractal clustering method that clusters genes using the intrinsic (fractal) dimension from modern geometry. This method clusters points in such a way that points in the same cluster are more self-affine among themselves than with points in other clusters. We assess this method using annotation-based validation assessment for gene clusters, which shows that it is superior to other traditional methods in identifying functionally related gene groups.
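For readers unfamiliar with the intrinsic (fractal) dimension referred to above, the sketch below estimates the correlation dimension of a point cloud, the kind of quantity a fractal clustering scheme tracks when deciding whether a point is self-affine with a cluster. It is a generic illustration on synthetic data, not the authors' algorithm.

    import numpy as np
    from scipy.spatial.distance import pdist

    def correlation_dimension(points, radii):
        # Slope of log C(r) versus log r, where C(r) is the fraction of point
        # pairs closer than r (a Grassberger-Procaccia-style correlation sum).
        d = pdist(points)
        c = np.array([np.mean(d < r) for r in radii])
        mask = c > 0
        slope, _ = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)
        return slope

    rng = np.random.default_rng(0)
    radii = np.logspace(-2, 0, 20)
    line_like = np.column_stack([np.linspace(0, 1, 500)] * 5)   # points on a line in 5-D
    cloud = rng.random((500, 5))                                # points filling 5-D space
    print(correlation_dimension(line_like, radii))              # close to 1
    print(correlation_dimension(cloud, radii))                  # substantially higher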
CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira
2013-01-01
Validation studies of physical anthropology methods in different population groups are extremely important, especially when population variation may cause problems in identifying a native individual through the application of norms developed for other communities. Objective This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different accuracy between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of the measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. Conclusion It was concluded that physical anthropology methods present a high rate of accuracy for human identification, are easy to apply, and have low cost and simplicity; however, the methodologies must be validated for different populations because of differences in ethnic patterns, which are directly related to phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological adjustment is recommended, as demonstrated in this study. PMID:24037076
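A minimal sketch of the kind of logistic discriminant described above, fitted on the two mandibular measurements (bigonial distance and mandibular ramus height). The measurements and labels below are fabricated for illustration; the published discriminant formula is not reproduced here.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: bigonial distance (mm), mandibular ramus height (mm) -- hypothetical values
    X = np.array([[95, 58], [102, 63], [99, 61], [104, 66],
                  [88, 52], [91, 55], [86, 50], [90, 54]], dtype=float)
    y = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = male, 0 = female (hypothetical)

    model = LogisticRegression().fit(X, y)
    print(model.intercept_, model.coef_)   # the fitted "discriminant formula"
    print(model.predict([[97.0, 59.0]]))   # sex estimate for a new mandible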
de Bruijne, Martine C; Zwijnenberg, Nicolien C; Jansma, Elise P; van Dyck, Cathy; Wagner, Cordula
2014-01-01
Aim: To evaluate the evidence of the effectiveness of classroom-based Crew Resource Management training on safety culture by a systematic review of literature. Methods: Studies were identified in PubMed, Cochrane Library, PsycINFO, and Educational Resources Information Center up to 19 December 2012. The Methods Guide for Comparative Effectiveness Reviews was used to assess the risk of bias in the individual studies. Results: In total, 22 manuscripts were included for review. Training settings, study designs, and evaluation methods varied widely. Most studies reporting only a selection of culture dimensions found mainly positive results, whereas studies reporting all safety culture dimensions of the particular survey found mixed results. On average, studies were at moderate risk of bias. Conclusion: Evidence of the effectiveness of Crew Resource Management training in health care on safety culture is scarce and the validity of most studies is limited. The results underline the necessity of more valid study designs, preferably using triangulation methods. PMID:26770720
Wack, Katy; Drogowski, Laura; Treloar, Murray; Evans, Andrew; Ho, Jonhan; Parwani, Anil; Montalto, Michael C
2016-01-01
Text-based reporting and manual arbitration for whole slide imaging (WSI) validation studies are labor intensive and do not allow for consistent, scalable, and repeatable data collection or analysis. The objective of this study was to establish a method of data capture and analysis using standardized codified checklists and predetermined synoptic discordance tables and to use these methods in a pilot multisite validation study. Fifteen case report form checklists were generated from the College of American Pathologists cancer protocols. Prior to data collection, all hypothetical pairwise comparisons were generated, and a level of harm was determined for each possible discordance. Four sites with four pathologists each generated 264 independent reads of 33 cases. Preestablished discordance tables were applied to determine site-by-site and pooled accuracy, intra-reader/intra-modality, and inter-reader/intra-modality error rates. Over 10,000 hypothetical pairwise comparisons were evaluated and assigned harm in discordance tables. The average difference in error rates between WSI and glass, as compared to ground truth, was 0.75% with an upper bound of 3.23% (95% confidence interval). Major discordances occurred on challenging cases, regardless of modality. The average inter-reader agreement across sites for glass was 76.5% (weighted kappa of 0.68) and for digital it was 79.1% (weighted kappa of 0.72). These results demonstrate the feasibility and utility of employing standardized synoptic checklists and predetermined discordance tables to gather consistent, comprehensive diagnostic data for WSI validation studies. This method of data capture and analysis can be applied in large-scale multisite WSI validations.
Jones, Kelly K; Zenk, Shannon N; Tarlov, Elizabeth; Powell, Lisa M; Matthews, Stephen A; Horoi, Irina
2017-01-07
Food environment characterization in health studies often requires data on the location of food stores and restaurants. While commercial business lists are commonly used as data sources for such studies, current literature provides little guidance on how to use validation study results to make decisions on which commercial business list to use and how to maximize the accuracy of those lists. Using data from a retrospective cohort study [Weight And Veterans' Environments Study (WAVES)], we (a) explain how validity and bias information from existing validation studies (count accuracy, classification accuracy, locational accuracy, as well as potential bias by neighborhood racial/ethnic composition, economic characteristics, and urbanicity) was used to determine which commercial business listing to purchase for retail food outlet data and (b) describe the methods used to maximize the quality of the data and results of this approach. We developed data improvement methods based on existing validation studies. These methods included purchasing records from commercial business lists (InfoUSA and Dun and Bradstreet) based on store/restaurant names as well as standard industrial classification (SIC) codes, reclassifying records by store type, improving geographic accuracy of records, and deduplicating records. We examined the impact of these procedures on food outlet counts in US census tracts. After cleaning and deduplicating, our strategy resulted in a 17.5% reduction in the count of valid food stores relative to those purchased from InfoUSA and a 5.6% reduction in valid counts of restaurants purchased from Dun and Bradstreet. Locational accuracy was improved for 7.5% of records by applying street addresses of subsequent years to records with post-office (PO) box addresses. In total, up to 83% of US census tracts annually experienced a change (either positive or negative) in the count of retail food outlets between the initial purchase and the final dataset. Our study provides a step-by-step approach to purchase and process business list data obtained from commercial vendors. The approach can be followed by studies of any size, including those with datasets too large to process each record by hand, and will promote consistency in characterization of the retail food environment across studies.
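The record-cleaning steps described above (reclassification by store type, address cleanup, deduplication) can be illustrated with a small sketch; the field names, normalization rules, and matching key are assumptions for illustration, not the WAVES study's actual procedure.

    import pandas as pd

    records = pd.DataFrame({
        "name":    ["Joe's Pizza", "JOES PIZZA",     "Corner Grocery"],
        "address": ["12 Main St",  "12 main street", "400 Oak Ave"],
        "sic":     ["5812",        "5812",           "5411"],   # restaurant, restaurant, grocery
    })

    def normalize(col):
        # Lowercase, strip punctuation, and collapse a common address variant.
        return (col.str.lower()
                   .str.replace(r"[^a-z0-9 ]", "", regex=True)
                   .str.replace(r"\bstreet\b", "st", regex=True)
                   .str.strip())

    records["match_key"] = normalize(records["name"]) + "|" + normalize(records["address"])
    deduped = records.drop_duplicates(subset="match_key")
    print(deduped[["name", "address", "sic"]])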
Dimitrov, Borislav D; Motterlini, Nicola; Fahey, Tom
2015-01-01
Objective Estimating calibration performance of clinical prediction rules (CPRs) in systematic reviews of validation studies is not possible when predicted values are not published, accessible, or sufficient, or when no individual participant or patient data are available. Our aims were to describe a simplified approach for outcomes prediction and calibration assessment and evaluate its functionality and validity. Study design and methods: Methodological study of systematic reviews of validation studies of CPRs: a) ABCD2 rule for prediction of 7 day stroke; and b) CRB-65 rule for prediction of 30 day mortality. Predicted outcomes in a sample validation study were computed by CPR distribution patterns (“derivation model”). As confirmation, a logistic regression model (with derivation study coefficients) was applied to CPR-based dummy variables in the validation study. Meta-analysis of validation studies provided pooled estimates of “predicted:observed” risk ratios (RRs), 95% confidence intervals (CIs), and indexes of heterogeneity (I²) on forest plots (fixed and random effects models), with and without adjustment of intercepts. The above approach was also applied to the CRB-65 rule. Results Our simplified method, applied to the ABCD2 rule in three risk strata (low, 0–3; intermediate, 4–5; high, 6–7 points), indicated that predictions are identical to those computed by a univariate, CPR-based logistic regression model. Discrimination was good (c-statistics = 0.61–0.82); however, calibration in some studies was low. In such cases of miscalibration, the under-prediction (RRs = 0.73–0.91, 95% CIs 0.41–1.48) could be further corrected by intercept adjustment to account for incidence differences. An improvement in both heterogeneity and P-values (Hosmer-Lemeshow goodness-of-fit test) was observed. Better calibration and improved pooled RRs (0.90–1.06), with narrower 95% CIs (0.57–1.41), were achieved. Conclusion Our results have an immediate clinical implication in situations where predicted outcomes in CPR validation studies are lacking or deficient, because they describe how such predictions can be obtained from the derivation study alone, without any need for highly specialized knowledge or sophisticated statistics. PMID:25931829
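The "predicted:observed" calibration check described above can be made concrete with a small sketch: stratum-specific risks from the derivation study are applied to the validation study's score distribution to obtain predicted events, which are then compared with observed events. The risks and counts below are illustrative, not the ABCD2 or CRB-65 data.

    # Derivation-study risks per score stratum (hypothetical)
    derivation_risk = {"low (0-3)": 0.01, "intermediate (4-5)": 0.04, "high (6-7)": 0.08}

    # Validation-study patients per stratum and total observed events (hypothetical)
    validation_n = {"low (0-3)": 400, "intermediate (4-5)": 250, "high (6-7)": 100}
    observed_events = 20

    predicted_events = sum(derivation_risk[s] * validation_n[s] for s in validation_n)
    ratio = predicted_events / observed_events
    print(f"predicted = {predicted_events:.1f}, observed = {observed_events}, "
          f"predicted:observed RR = {ratio:.2f}")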
Validity and consistency assessment of accident analysis methods in the petroleum industry.
Ahmadi, Omran; Mortazavi, Seyed Bagher; Khavanin, Ali; Mokarami, Hamidreza
2017-11-17
Accident analysis is the main aspect of accident investigation: it involves connecting different causes in a procedural way. It is therefore important to use valid and reliable methods for the investigation of the different causal factors of accidents, especially the noteworthy ones. This study aimed to assess the accuracy (sensitivity index [SI]) and consistency of the six most commonly used accident analysis methods in the petroleum industry. To evaluate the methods, two real case studies (a process safety accident and a personal accident) from the petroleum industry were analyzed by 10 assessors, who had been trained in a workshop on accident analysis methods, and the accuracy and consistency of the methods were then evaluated. The systematic cause analysis technique and bowtie methods gained the greatest SI scores for the personal and process safety accidents, respectively. The best average consistency results for a single method (based on 10 independent assessors) were in the region of 70%. This study confirmed that the application of methods with pre-defined causes and a logic tree could enhance the sensitivity and consistency of accident analysis.
Mohammadifard, Noushin; Omidvar, Nasrin; Houshiarrad, Anahita; Neyestani, Tirang; Naderi, Gholam-Ali; Soleymani, Bahram
2011-01-01
BACKGROUND: This study's aim was to design and validate a semi-quantitative food frequency questionnaire (FFQ) for assessment of fruit and vegetable (FV) consumption in adults in Isfahan by comparing the FFQ with a dietary reference method and blood plasma levels of beta-carotene, vitamin C, and retinol. METHODS: This validation study was performed on 123 healthy adults in Isfahan. FV intake was assessed using a 110-item FFQ. Data collection was performed during two different time periods to control for seasonal effects, fall/winter (cold season) and spring/summer (warm season). In each phase, an FFQ, a 1-day recall, and 2 days of food records as the dietary reference method were completed, and plasma vitamin C, beta-carotene and retinol were measured. Data were analyzed by Pearson or Spearman correlations and intraclass correlations. RESULTS: Pearson correlation coefficients of FV with plasma vitamin C, beta-carotene and retinol, adjusted for serum lipids, sex, age, body mass index (BMI) and educational level, were 0.55, 0.47 and 0.28 in the cold season (p < 0.05) and 0.52, 0.45 and 0.35 in the warm season (p < 0.001), respectively. The Pearson correlation coefficient of FV with the dietary reference method, adjusted for energy and fat intake, sex, age, BMI and educational level, was 0.62 in the cold season and 0.60 in the warm season (p < 0.001). The intraclass correlation for reproducibility of the FFQ for FV was 0.65 (p < 0.001). CONCLUSIONS: The designed FFQ had good criterion validity and reproducibility for assessment of FV intake. Thus, it can serve as a valid tool in epidemiological studies to assess fruit and vegetable intake. PMID:22973322
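The covariate-adjusted correlations reported above are partial correlations; one way to compute them is to residualize both the FFQ intake and the biomarker on the covariates and then correlate the residuals, as in the sketch below (random placeholder data, not the study's measurements).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 123
    covariates = rng.normal(size=(n, 5))          # stand-ins for age, sex, BMI, lipids, education
    ffq_fv = rng.normal(size=n) + covariates[:, 0]
    plasma_vitamin_c = 0.5 * ffq_fv + rng.normal(size=n)

    def residualize(y, X):
        # Residuals of y after ordinary least-squares regression on X (plus intercept).
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        return y - X1 @ beta

    r, p = stats.pearsonr(residualize(ffq_fv, covariates),
                          residualize(plasma_vitamin_c, covariates))
    print(f"adjusted r = {r:.2f}, p = {p:.3g}")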
Mani, Suresh; Sharma, Shobha; Omar, Baharudin; Paungmali, Aatit; Joseph, Leonard
2017-04-01
Purpose The purpose of this review is to systematically explore and summarise the validity and reliability of telerehabilitation (TR)-based physiotherapy assessment for musculoskeletal disorders. Method A comprehensive systematic literature review was conducted using a number of electronic databases (PubMed, EMBASE, PsycINFO, Cochrane Library and CINAHL), covering articles published between January 2000 and May 2015. Studies that examined the validity and inter- and intra-rater reliabilities of TR-based physiotherapy assessment for musculoskeletal conditions were included. Two independent reviewers used the Quality Appraisal Tool for studies of diagnostic Reliability (QAREL) and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool to assess the methodological quality of the reliability and validity studies, respectively. Results A total of 898 hits were retrieved, of which 11 articles meeting the inclusion criteria were reviewed. Nine studies explored concurrent validity and inter- and intra-rater reliabilities, while two studies examined only concurrent validity. The reviewed studies were moderate to good in methodological quality. Physiotherapy assessments such as pain, swelling, range of motion, muscle strength, balance, gait and functional assessment demonstrated good concurrent validity. However, the reported concurrent validity of lumbar spine posture, special orthopaedic tests, neurodynamic tests and scar assessments ranged from low to moderate. Conclusion TR-based physiotherapy assessment was technically feasible with overall good concurrent validity and excellent reliability, except for lumbar spine posture, orthopaedic special tests, neurodynamic tests and scar assessment.
ERIC Educational Resources Information Center
Pan, Jia-Yan; Wong, Daniel Fu Keung; Chan, Kin Sun; Chan, Cecilia Lai Wan
2008-01-01
Objective: The objective of this study is to develop and validate the Chinese Making Sense of Adversity Scale (CMSAS) to measure the cognitive coping strategies that Chinese people adopt to make sense of adversity. Method: A 12-item CMSAS was developed by in-depth interview and item analysis. The scale was validated with a sample of 627 Chinese…
ERIC Educational Resources Information Center
Masson, J. D.; Dagnan, D.; Evans, J.
2010-01-01
Background: There is a need for validated, standardised tools for the assessment of executive functions in adults with intellectual disabilities (ID). This study examines the validity of a test of planning and problem solving (Tower of London) with adults with ID. Method: Participants completed an adapted version of the Tower of London (ToL) while…
ERIC Educational Resources Information Center
Al-Motlaq, Mohammad A.; Abuidhail, Jamila; Salameh, Taghreed; Awwad, Wesam
2017-01-01
Objective: To develop an instrument to study family-centred care (FCC) in traditional open bay Neonatal Intensive Care Units (NICUs). Methods: The development process involved constructing instrument's items, establishing content validity by an expert panel and testing the instrument for validity and reliability with a convenience sample of 25…
Using cluster ensemble and validation to identify subtypes of pervasive developmental disorders.
Shen, Jess J; Lee, Phil-Hyoun; Holden, Jeanette J A; Shatkay, Hagit
2007-10-11
Pervasive Developmental Disorders (PDD) are neurodevelopmental disorders characterized by impairments in social interaction, communication and behavior. Given the diversity and varying severity of PDD, diagnostic tools attempt to identify homogeneous subtypes within PDD. Identifying subtypes can lead to targeted etiology studies and to effective type-specific intervention. Cluster analysis can suggest coherent subsets in data; however, different methods and assumptions lead to different results. Several previous studies applied clustering to PDD data, varying in number and characteristics of the produced subtypes. Most studies used a relatively small dataset (fewer than 150 subjects), and all applied only a single clustering method. Here we study a relatively large dataset (358 PDD patients), using an ensemble of three clustering methods. The results are evaluated using several validation methods, and consolidated through an integration step. Four clusters are identified, analyzed and compared to subtypes previously defined by the widely used diagnostic tool DSM-IV.
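In the spirit of the ensemble-plus-integration step described above, the sketch below runs three clustering methods, builds a co-association matrix, and consolidates it into consensus clusters. It is a generic consensus-clustering illustration on synthetic data, not the study's pipeline, and assumes a recent scikit-learn (where the precomputed-distance argument is named metric).

    import numpy as np
    from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=358, centers=4, random_state=0)   # stand-in for the PDD feature matrix
    labelings = [
        KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X),
        AgglomerativeClustering(n_clusters=4).fit_predict(X),
        SpectralClustering(n_clusters=4, random_state=0).fit_predict(X),
    ]

    # Co-association matrix: fraction of methods placing each pair of subjects together
    co_assoc = np.mean([np.equal.outer(lab, lab) for lab in labelings], axis=0)

    # Consolidation: cluster the co-association matrix itself (1 - co_assoc as a distance)
    consensus = AgglomerativeClustering(
        n_clusters=4, metric="precomputed", linkage="average"
    ).fit_predict(1 - co_assoc)
    print(np.bincount(consensus))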
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva-Rodríguez, Jesús; Aguiar, Pablo
Purpose: Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) values for this effect in clinical routine. Methods: One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI based method. In addition, the 50 patients with higher extravasated doses were also evaluated using a threshold-based method. Results: Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%) with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify this fraction of patients that might be corrected for paravenous injection effect. Conclusions: The authors propose the use of a manual ROI based method for estimating the effectively administered FDG dose and then correct SUV quantification in those patients fulfilling the proposed criterion.
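The correction the authors aim at can be expressed compactly: SUV normalizes tissue uptake by the injected dose, so if part of that dose stayed at the injection site, the effectively administered dose is smaller and the SUV can be rescaled. The function below is a generic sketch of that rescaling with made-up numbers, not the authors' ROI- or threshold-based estimation code.

    def corrected_suv(suv_measured, injected_dose_mbq, extravasated_dose_mbq):
        # SUV computed with the nominal injected dose underestimates uptake when
        # part of the dose never entered circulation; rescale by the dose ratio.
        effective_dose = injected_dose_mbq - extravasated_dose_mbq
        return suv_measured * injected_dose_mbq / effective_dose

    # Example: 10% of a 300 MBq injection left at the injection site (hypothetical)
    print(corrected_suv(suv_measured=4.5, injected_dose_mbq=300.0, extravasated_dose_mbq=30.0))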
A Validity and Reliability Study of the Basic Electronics Skills Self-Efficacy Scale (BESS)
ERIC Educational Resources Information Center
Korkmaz, Ö.; Korkmaz, M. K.
2016-01-01
The aim of this study is to improve a measurement tool to evaluate the self-efficacy of Electrical-Electronics Engineering students through their basic electronics skills. The sample group is composed of 124 Electrical-Electronics engineering students. The validity of the scale is analyzed with two different methods through factor analysis and…
Assessing College Student-Athletes' Life Stress: Initial Measurement Development and Validation
ERIC Educational Resources Information Center
Lu, Frank Jing-Horng; Hsu, Ya-Wen; Chan, Yuan-Shuo; Cheen, Jang-Rong; Kao, Kuei-Tsu
2012-01-01
College student-athletes have unique life stress that warrants close attention. The purpose of this study was to develop a reliable and valid measurement assessing college student-athletes' life stress. In Study 1, a focus group discussion and Delphi method produced a questionnaire draft, termed the College Student-Athletes' Life Stress Scale. In…
Spanish Adaptation and Validation of the Family Quality of Life Survey
ERIC Educational Resources Information Center
Verdugo, M. A.; Cordoba, L.; Gomez, J.
2005-01-01
Background: Assessing the quality of life (QOL) for families that include a person with a disability have recently become a major emphasis in cross-cultural QOL studies. The present study examined the reliability and validity of the Family Quality of Life Survey (FQOL) on a Spanish sample. Method and Results: The sample comprised 385 families who…
Family Early Literacy Practices Questionnaire: A Validation Study for a Spanish-Speaking Population
ERIC Educational Resources Information Center
Lewis, Kandia
2012-01-01
The purpose of the current study was to evaluate the psychometric validity of a Spanish translated version of a family involvement questionnaire (the FELP) using a mixed-methods design. Thus, statistical analyses (i.e., factor analysis, reliability analysis, and item analysis) and qualitative analyses (i.e., focus group data) were assessed.…
Measuring the Quality of Life of University Students. Research Monograph Series. Volume 1.
ERIC Educational Resources Information Center
Roberts, Lance W.; Clifton, Rodney A.
This study sought to develop a valid set of scales in the cognitive and affective domains for measuring the quality of life of university students. In addition the study attempted to illustrate the usefulness of Thomas Piazza's procedures for constructing valid scales in educational research. Piazza's method involves a multi-step construction of…
A New Method for Analyzing Content Validity Data Using Multidimensional Scaling
ERIC Educational Resources Information Center
Li, Xueming; Sireci, Stephen G.
2013-01-01
Validity evidence based on test content is of essential importance in educational testing. One source for such evidence is an alignment study, which helps evaluate the congruence between tested objectives and those specified in the curriculum. However, the results of an alignment study do not always sufficiently capture the degree to which a test…
Validation of the Parenting Stress Index--Short Form with Minority Caregivers
ERIC Educational Resources Information Center
Lee, Sang Jung; Gopalan, Geetha; Harrington, Donna
2016-01-01
Objectives: There has been little examination of the structural validity of the Parenting Stress Index--Short Form (PSI-SF) for minority populations in clinical contexts in the Unites States. This study aimed to test prespecified factor structures (one-factor, two-factor, and three-factor models) of the PSI-SF. Methods: This study used…
Fischedick, Justin T; Glas, Ronald; Hazekamp, Arno; Verpoorte, Rob
2009-01-01
Cannabis and cannabinoid based medicines are currently under serious investigation for legitimate development as medicinal agents, necessitating new low-cost, high-throughput analytical methods for quality control. The goal of this study was to develop and validate, according to ICH guidelines, a simple rapid HPTLC method for the quantification of Delta(9)-tetrahydrocannabinol (Delta(9)-THC) and qualitative analysis of other main neutral cannabinoids found in cannabis. The method was developed and validated with the use of pure cannabinoid reference standards and two medicinal cannabis cultivars. Accuracy was determined by comparing results obtained from the HPTLC method with those obtained from a validated HPLC method. Delta(9)-THC gives linear calibration curves in the range of 50-500 ng at 206 nm with a linear regression of y = 11.858x + 125.99 and r(2) = 0.9968. Results have shown that the HPTLC method is reproducible and accurate for the quantification of Delta(9)-THC in cannabis. The method is also useful for the qualitative screening of the main neutral cannabinoids found in cannabis cultivars.
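Using the calibration line reported above (y = 11.858x + 125.99 over 50-500 ng of Delta(9)-THC), the applied amount can be back-calculated from a measured densitometric response; the response value in the example below is hypothetical.

    def thc_ng_from_response(y, slope=11.858, intercept=125.99):
        # Invert the reported linear calibration y = slope*x + intercept (x in ng).
        return (y - intercept) / slope

    print(round(thc_ng_from_response(2500.0), 1))  # about 200 ng for this hypothetical response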
Natural language processing in pathology: a scoping review.
Burger, Gerard; Abu-Hanna, Ameen; de Keizer, Nicolette; Cornet, Ronald
2016-07-22
Encoded pathology data are key for medical registries and analyses, but pathology information is often expressed as free text. We reviewed and assessed the use of NLP (natural language processing) for encoding pathology documents. Papers addressing NLP in pathology were retrieved from PubMed, Association for Computing Machinery (ACM) Digital Library and Association for Computational Linguistics (ACL) Anthology. We reviewed and summarised the study objectives; NLP methods used and their validation; software implementations; the performance on the dataset used and any reported use in practice. The main objectives of the 38 included papers were encoding and extraction of clinically relevant information from pathology reports. Common approaches were word/phrase matching, probabilistic machine learning and rule-based systems. Five papers (13%) compared different methods on the same dataset. Four papers did not specify the method(s) used. 18 of the 26 studies that reported F-measure, recall or precision reported values of over 0.9. Proprietary software was the most frequently mentioned category (14 studies); General Architecture for Text Engineering (GATE) was the most applied architecture overall. Practical system use was reported in four papers. Most papers used expert annotation validation. Different methods are used in NLP research in pathology, and good performances, that is, high precision and recall, high retrieval/removal rates, are reported for all of these. Lack of validation and of shared datasets precludes performance comparison. More comparative analysis and validation are needed to provide better insight into the performance and merits of these methods. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
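The performance figures summarized above (precision, recall, F-measure) follow the standard definitions; a minimal reference computation from confusion-matrix counts is shown below with toy numbers.

    def precision_recall_f1(tp, fp, fn):
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    # Toy counts for an extraction task: 95 true positives, 5 false positives, 8 false negatives
    print(precision_recall_f1(tp=95, fp=5, fn=8))   # all three come out above 0.9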
Cloke, Jonathan; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron
2014-01-01
The Thermo Scientific SureTect Listeria species Assay is a new real-time PCR assay for the detection of all species of Listeria in food and environmental samples. This validation study was conducted using the AOAC Research Institute (RI) Performance Tested Methods program to validate the SureTect Listeria species Assay in comparison to the reference method detailed in International Organization for Standardization 11290-1:1996 including amendment 1:2004 in a variety of foods plus plastic and stainless steel. The food matrixes validated were smoked salmon, processed cheese, fresh bagged spinach, cantaloupe, cooked prawns, cooked sliced turkey meat, cooked sliced ham, salami, pork frankfurters, and raw ground beef. All matrixes were tested by Thermo Fisher Scientific, Microbiology Division, Basingstoke, UK. In addition, three matrixes (pork frankfurters, fresh bagged spinach, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled independent laboratory study by the University of Guelph, Canada. Using probability of detection statistical analysis, a significant difference in favour of the SureTect assay was demonstrated between the SureTect and reference methods for high-level spiked samples of pork frankfurters, smoked salmon, cooked prawns, and stainless steel, and for low-level spiked samples of salami. For all other matrixes, no significant difference was seen between the two methods during the study. Inclusivity testing was conducted with 68 different isolates of Listeria species, all of which were detected by the SureTect Listeria species Assay. None of the 33 exclusivity isolates were detected by the SureTect Listeria species Assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation, which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay shelf life.
McLeod, Jessica; Chen, Tzu-An; Nicklas, Theresa A.; Baranowski, Tom
2013-01-01
Background Television viewing is an important modifiable risk factor for childhood obesity. However, valid methods for measuring children's TV viewing are sparse and few studies have included Latinos, a population disproportionately affected by obesity. The goal of this study was to test the reliability and convergent validity of four TV viewing measures among low-income Latino preschool children in the United States. Methods Latino children (n=96) ages 3–5 years old were recruited from four Head Start centers in Houston, Texas (January, 2009, to June, 2010). TV viewing was measured concurrently over 7 days by four methods: (1) TV diaries (parent reported), (2) sedentary time (accelerometry), (3) TV Allowance (an electronic TV power meter), and (4) Ecological Momentary Assessment (EMA) on personal digital assistants (parent reported). This 7-day procedure was repeated 3–4 weeks later. Test–retest reliability was determined by intraclass correlations (ICC). Spearman correlations (due to nonnormal distributions) were used to determine convergent validity compared to the TV diary. Results The TV diary had the highest test–retest reliability (ICC=0.82, p<0.001), followed by the TV Allowance (ICC=0.69, p<0.001), EMA (ICC=0.46, p<0.001), and accelerometry (ICC=0.36–0.38, p<0.01). The TV Allowance (r=0.45–0.55, p<0.001) and EMA (r=0.47–0.51, p<0.001) methods were significantly correlated with TV diaries. Accelerometer-determined sedentary minutes were not correlated with TV diaries. The TV Allowance and EMA methods were significantly correlated with each other (r=0.48–0.53, p<0.001). Conclusions The TV diary is feasible and is the most reliable method for measuring US Latino preschool children's TV viewing. PMID:23270534
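Test-retest reliability in this study is summarized with intraclass correlations; the sketch below computes a one-way random-effects ICC(1,1) for a subjects-by-sessions matrix. The formula is the standard one-way ANOVA estimator, applied here to random placeholder data rather than the study's TV-viewing measurements, and it may differ from the specific ICC form the authors used.

    import numpy as np

    def icc_oneway(ratings):
        # One-way random-effects ICC(1,1) for a subjects x sessions matrix.
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand_mean = ratings.mean()
        ms_between = k * np.sum((ratings.mean(axis=1) - grand_mean) ** 2) / (n - 1)
        ms_within = np.sum((ratings - ratings.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    rng = np.random.default_rng(4)
    true_viewing = rng.normal(120, 40, size=96)                        # hypothetical minutes/day, 96 children
    sessions = np.column_stack([true_viewing + rng.normal(0, 20, 96)   # administration 1 and 2,
                                for _ in range(2)])                    # 3-4 weeks apart
    print(round(icc_oneway(sessions), 2))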
Predicting implementation from organizational readiness for change: a study protocol
2011-01-01
Background There is widespread interest in measuring organizational readiness to implement evidence-based practices in clinical care. However, there are a number of challenges to validating organizational measures, including inferential bias arising from the halo effect and method bias - two threats to validity that, while well-documented by organizational scholars, are often ignored in health services research. We describe a protocol to comprehensively assess the psychometric properties of a previously developed survey, the Organizational Readiness to Change Assessment. Objectives Our objective is to conduct a comprehensive assessment of the psychometric properties of the Organizational Readiness to Change Assessment incorporating methods specifically to address threats from halo effect and method bias. Methods and Design We will conduct three sets of analyses using longitudinal, secondary data from four partner projects, each testing interventions to improve the implementation of an evidence-based clinical practice. Partner projects field the Organizational Readiness to Change Assessment at baseline (n = 208 respondents; 53 facilities), and prospectively assess the degree to which the evidence-based practice is implemented. We will assess predictive and concurrent validity using hierarchical linear modeling and multivariate regression, respectively. For predictive validity, the outcome is the change from baseline to follow-up in the use of the evidence-based practice. We will use intra-class correlations derived from hierarchical linear models to assess inter-rater reliability. Two partner projects will also field measures of job satisfaction for convergent and discriminant validity analyses, and will field Organizational Readiness to Change Assessment measures at follow-up for concurrent validity (n = 158 respondents; 33 facilities). Convergent and discriminant validity analyses will test associations between organizational readiness and different aspects of job satisfaction: satisfaction with leadership, which should be highly correlated with readiness, versus satisfaction with salary, which should be less correlated with readiness. Content validity will be assessed using an expert panel and modified Delphi technique. Discussion We propose a comprehensive protocol for validating a survey instrument for assessing organizational readiness to change that specifically addresses key threats of bias related to halo effect, method bias and questions of construct validity that often go unexplored in research using measures of organizational constructs. PMID:21777479
Silva, Simone Alves da; Sampaio, Geni Rodrigues; Torres, Elizabeth Aparecida Ferraz da Silva
2017-04-15
Among the different food categories, oils and fats are important sources of exposure to polycyclic aromatic hydrocarbons (PAHs), a group of organic chemical contaminants. The use of a validated method is essential to obtain reliable analytical results since the legislation establishes maximum limits in different foods. The objective of this study was to optimize and validate a method for the quantification of four PAHs [benzo(a)anthracene, chrysene, benzo(b)fluoranthene, benzo(a)pyrene] in vegetable oils. The samples were submitted to liquid-liquid extraction, followed by solid-phase extraction, and analyzed by ultra-high performance liquid chromatography. Under the optimized conditions, the validation parameters were evaluated according to the INMETRO Guidelines: linearity (r² > 0.99), selectivity (no matrix interference), limits of detection (0.08-0.30 μg kg⁻¹) and quantification (0.25-1.00 μg kg⁻¹), recovery (80.13-100.04%), repeatability and intermediate precision (<10% RSD). The method was found to be adequate for routine analysis of PAHs in the vegetable oils evaluated. Copyright © 2016. Published by Elsevier Ltd.
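One common way to derive LOD and LOQ from a calibration curve is the 3.3·sigma/slope and 10·sigma/slope convention shown below; the study followed INMETRO guidelines, whose exact procedure may differ, and the calibration data here are invented.

    import numpy as np

    conc = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0])     # ug/kg (hypothetical)
    signal = np.array([1.1, 2.0, 4.1, 8.3, 20.4, 41.0])   # peak area (hypothetical)

    slope, intercept = np.polyfit(conc, signal, 1)
    residual_sd = np.std(signal - (slope * conc + intercept), ddof=2)  # sd of calibration residuals

    lod = 3.3 * residual_sd / slope
    loq = 10 * residual_sd / slope
    print(f"LOD = {lod:.2f} ug/kg, LOQ = {loq:.2f} ug/kg")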
Validity and Reliability of the Turkish Chronic Pain Acceptance Questionnaire
Akmaz, Hazel Ekin; Uyar, Meltem; Kuzeyli Yıldırım, Yasemin; Akın Korhan, Esra
2018-05-29
Pain acceptance is the process of giving up the struggle with pain and learning to live a worthwhile life despite it. In assessing patients with chronic pain in Turkey, diagnosis and tracking of treatment effectiveness are done with scales that have been translated into Turkish. However, there is as yet no valid and reliable scale in Turkish to assess the acceptance of pain. To validate a Turkish version of the Chronic Pain Acceptance Questionnaire developed by McCracken and colleagues. A methodological and cross-sectional study. A simple random sampling method was used to select the study sample, which comprised 201 patients, more than 10 times the 20 items examined for validity and reliability. A patient identification form, the Chronic Pain Acceptance Questionnaire, and the Brief Pain Inventory were used to collect data. Data were collected by face-to-face interviews. In the validity testing, the content validity index was used to evaluate linguistic equivalence, content validity, construct validity, and expert views. In reliability testing of the scale, Cronbach’s α coefficient was calculated, and item analysis and split-test reliability methods were used. Principal component analysis with varimax rotation was used to examine the factor structure for construct validity. Item analysis established that all items and item-total correlations were satisfactory. The mean total score of the scale was 21.78. The internal consistency coefficient was 0.94, and the correlation between the two halves of the scale was 0.89. The Turkish Chronic Pain Acceptance Questionnaire is an evaluation instrument with sufficient validity and reliability, and it can be reliably used to examine patients’ acceptance of chronic pain.
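The two reliability statistics reported above (Cronbach's α over the 20 items and a split-half correlation) can be computed as in the sketch below; the item matrix is random placeholder data, not the study's responses.

    import numpy as np

    rng = np.random.default_rng(2)
    # 201 patients x 20 items; a shared person factor induces inter-item correlation
    items = rng.normal(size=(201, 20)) + rng.normal(size=(201, 1))

    def cronbach_alpha(x):
        k = x.shape[1]
        item_variances = x.var(axis=0, ddof=1)
        total_variance = x.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances.sum() / total_variance)

    # Split-half reliability: correlate the sums of odd- and even-numbered items
    split_half_r = np.corrcoef(items[:, ::2].sum(axis=1), items[:, 1::2].sum(axis=1))[0, 1]
    print(round(cronbach_alpha(items), 2), round(split_half_r, 2))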
Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.
Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier
2017-02-01
The data to which the authors refer throughout this article are likelihood ratios (LRs) computed from the comparison of 5-12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. They represent a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. These data can also be used as a baseline for comparing fingermark evidence in the same minutiae configuration as presented in Meuwly, Ramos, and Haraksim [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports for forensic methods, according to [1]. Alongside the data, a justification and motivation for the use of the methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validating the LR methods, which are used to calculate the LR values from the data and the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared, but these images do not constitute the core data for the validation, unlike the LRs, which are shared.
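For orientation, a score-based likelihood ratio of the kind these data contain can be computed as the ratio of two score densities, one modelled from same-source comparisons and one from different-source comparisons. The sketch below illustrates that idea on simulated scores; it is not the authors' fingerprint feature extraction or LR model.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(3)
    same_source_scores = rng.normal(8, 1.5, 1000)   # simulated comparison scores, same source
    diff_source_scores = rng.normal(3, 1.5, 1000)   # simulated comparison scores, different sources

    same_kde = gaussian_kde(same_source_scores)
    diff_kde = gaussian_kde(diff_source_scores)

    def likelihood_ratio(score):
        # Density ratio at the observed score: LR > 1 supports the same-source proposition.
        return same_kde(score)[0] / diff_kde(score)[0]

    print(round(likelihood_ratio(7.0), 1))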
Wiegers, Ann L
2003-07-01
Third-party accreditation is a valuable tool to demonstrate a laboratory's competence to conduct testing. Accreditation, internationally and in the United States, has been discussed previously. However, accreditation is only one part of establishing data credibility. A validated test method is the first component of a valid measurement system. Validation is defined as confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled. The international and national standard ISO/IEC 17025 recognizes the importance of validated methods and requires that laboratory-developed methods or methods adopted by the laboratory be appropriate for the intended use. Validated methods are therefore required and their use agreed to by the client (i.e., end users of the test results such as veterinarians, animal health programs, and owners). ISO/IEC 17025 also requires that the introduction of methods developed by the laboratory for its own use be a planned activity conducted by qualified personnel with adequate resources. This article discusses considerations and recommendations for the conduct of veterinary diagnostic test method development, validation, evaluation, approval, and transfer to the user laboratory in the ISO/IEC 17025 environment. These recommendations are based on those of nationally and internationally accepted standards and guidelines, as well as those of reputable and experienced technical bodies. They are also based on the author's experience in the evaluation of method development and transfer projects, validation data, and the implementation of quality management systems in the area of method development.
Austin, S Bryn; Gordon, Allegra R; Kennedy, Grace A; Sonneville, Kendrin R; Blossom, Jeffrey; Blood, Emily A
2013-12-06
Cosmetic procedures have proliferated rapidly over the past few decades, with over $11 billion spent on cosmetic surgeries and other minimally invasive procedures and another $2.9 billion spent on U.V. indoor tanning in 2012 in the United States alone. While research interest is increasing in tandem with the growth of the industry, methods have yet to be developed to identify and geographically locate the myriad types of businesses purveying cosmetic procedures. Geographic location of cosmetic-procedure businesses is a critical element in understanding the public health impact of this industry; however no studies we are aware of have developed valid and feasible methods for spatial analyses of these types of businesses. The aim of this pilot validation study was to establish the feasibility of identifying businesses offering surgical and minimally invasive cosmetic procedures and to characterize the spatial distribution of these businesses. We developed and tested three methods for creating a geocoded list of cosmetic-procedure businesses in Boston (MA) and Seattle (WA), USA, comparing each method on sensitivity and staff time required per confirmed cosmetic-procedure business. Methods varied substantially. Our findings represent an important step toward enabling rigorous health-linked spatial analyses of the health implications of this little-understood industry.
Total Arsenic, Cadmium, and Lead Determination in Brazilian Rice Samples Using ICP-MS
Buzzo, Márcia Liane; de Arauz, Luciana Juncioni; Carvalho, Maria de Fátima Henriques; Arakaki, Edna Emy Kumagai; Matsuzaki, Richard; Tiglea, Paulo
2016-01-01
This study aimed to investigate a suitable method for rice sample preparation and to validate and apply the method for monitoring the concentrations of total arsenic, cadmium, and lead in rice using Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Various rice sample preparation procedures were evaluated. The analytical method was validated by measuring several parameters, including limit of detection (LOD), limit of quantification (LOQ), linearity, relative bias, and repeatability. Regarding sample preparation, recoveries of spiked samples were within the acceptable range, from 89.3 to 98.2% for the muffle furnace, 94.2 to 103.3% for the heating block, 81.0 to 115.0% for the hot plate, and 92.8 to 108.2% for the microwave. The validation parameters showed that the method is fit for its purpose, with total arsenic, cadmium, and lead within the limits of Brazilian legislation. The method was applied to 37 rice samples (including polished, brown, and parboiled) consumed by the Brazilian population. The total arsenic, cadmium, and lead contents were lower than the established legislative values, except for total arsenic in one brown rice sample. This study indicated the need to establish monitoring programs emphasizing this type of cereal, with the aim of promoting public health. PMID:27766178
This document summarizes the results of an interlaboratory study conducted to generate precision estimates for two parallel batch leaching methods which are part of the Leaching Environmental Assessment Framework (LEAF). These methods are: (1) Method 1313: Liquid-Solid Partition...
Sigilai, Antipa; Hassan, Amin S.; Thoya, Janet; Odhiambo, Rachael; Van de Vijver, Fons J. R.; Newton, Charles R. J. C.; Abubakar, Amina
2017-01-01
Background Despite bearing the largest HIV-related burden, little is known of the Health-Related Quality of Life (HRQoL) among people living with HIV in sub-Saharan Africa. One of the factors contributing to this gap in knowledge is the lack of culturally adapted and validated measures of HRQoL that are relevant for this setting. Aims We set out to adapt the Functional Assessment of HIV Infection (FAHI) Questionnaire, an HIV-specific measure of HRQoL, and evaluate its internal consistency and validity. Methods The three phase mixed-methods study took place in a rural setting at the Kenyan Coast. Phase one involved a scoping review to describe the evidence base of the reliability and validity of FAHI as well as the geographical contexts in which it has been administered. Phase two involved in-depth interviews (n = 38) to explore the content validity, and initial piloting for face validation of the adapted FAHI. Phase three was quantitative (n = 103) and evaluated the internal consistency, convergent and construct validities of the adapted interviewer-administered questionnaire. Results In the first phase of the study, we identified 16 studies that have used the FAHI. Most (82%) were conducted in North America. Only seven (44%) of the reviewed studies reported on the psychometric properties of the FAHI. In the second phase, most of the participants (37 out of 38) reported satisfaction with word clarity and content coverage whereas 34 (89%) reported satisfaction with relevance of the items, confirming the face validity of the adapted questionnaire during initial piloting. Our participants indicated that HIV impacted on their physical, functional, emotional, and social wellbeing. Their responses overlapped with items in four of the five subscales of the FAHI Questionnaire establishing its content validity. In the third phase, the internal consistency of the scale was found to be satisfactory with subscale Cronbach’s α ranging from 0.55 to 0.78. The construct and convergent validity of the tool were supported by acceptable factor loadings for most of the items on the respective sub-scales and confirmation of expected significant correlations of the FAHI subscale scores with scores of a measure of common mental disorders. Conclusion The adapted interviewer-administered Swahili version of FAHI questionnaire showed initial strong evidence of good psychometric properties with satisfactory internal consistency and acceptable validity (content, face, and convergent validity). It gives impetus for further validation work, especially construct validity, in similar settings before it can be used for research and clinical purposes in the entire East African region. PMID:28380073
Development and validation of a yoga module for Parkinson disease.
Kakde, Noopur; Metri, Kashinath G; Varambally, Shivarama; Nagaratna, Raghuram; Nagendra, H R
2017-03-25
Background Parkinson's disease (PD), a progressive neurodegenerative disease, affects motor and nonmotor functions, leading to severe debility and poor quality of life. Studies have reported the beneficial role of yoga in alleviating the symptoms of PD; however, a validated yoga module for PD is unavailable. This study developed and validated an integrated yoga module (IYM) for PD. Methods The IYM was prepared after a thorough review of classical yoga texts and previous findings. Twenty experienced yoga experts, who fulfilled the inclusion criteria, were selected to validate the content of the IYM. A total of 28 practices were included in the IYM, and each practice was discussed and rated as (i) not essential, (ii) useful but not essential, and (iii) essential; the content validity ratio (CVR) was calculated using Lawshe's formula. Results Data analysis revealed that of the 28 IYM practices, 21 exhibited significant content validity (cut-off value: 0.42, as calculated by applying Lawshe's formula for the CVR). Conclusions The IYM is valid for PD, with good content validity. However, future studies must determine the feasibility and efficacy of the developed module.
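Lawshe's content validity ratio used above is CVR = (n_e - N/2) / (N/2), where n_e is the number of experts rating a practice essential and N is the panel size (20 here, giving the 0.42 cut-off quoted). A minimal computation:

    def content_validity_ratio(n_essential, n_experts=20):
        # Lawshe's CVR: ranges from -1 (no expert rates the item essential) to +1 (all do).
        return (n_essential - n_experts / 2) / (n_experts / 2)

    print(content_validity_ratio(15))  # 0.50 -> above the 0.42 cut-off, practice retained
    print(content_validity_ratio(13))  # 0.30 -> below the cut-off, practice dropped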
Berdeaux, Gilles; Meunier, Juliette; Arnould, Benoit; Viala-Danten, Muriel
2010-05-24
The purpose of this study was to reduce the number of items, create a scoring method and assess the psychometric properties of the Freedom from Glasses Value Scale (FGVS), which measures benefits of freedom from glasses perceived by cataract and presbyopic patients after multifocal intraocular lens (IOL) surgery. The 21-item FGVS, developed simultaneously in French and Spanish, was administered by phone during an observational study to 152 French and 152 Spanish patients who had undergone cataract or presbyopia surgery at least 1 year before the study. Reduction of items and creation of the scoring method employed statistical methods (principal component analysis, multitrait analysis) and content analysis. Psychometric properties (validation of the structure, internal consistency reliability, and known-group validity) of the resulting version were assessed in the pooled population and per country. One item was deleted and 3 were kept but not aggregated in a dimension. The other 17 items were grouped into 2 dimensions ('global evaluation', 9 items; 'advantages', 8 items) and divided into 5 sub-dimensions, with higher scores indicating higher benefit of surgery. The structure was validated (good item convergent and discriminant validity). Internal consistency reliability was good for all dimensions and sub-dimensions (Cronbach's alphas above 0.70). The FGVS was able to discriminate between patients wearing glasses or not after surgery (higher scores for patients not wearing glasses). FGVS scores were significantly higher in Spain than France; however, the measure had similar psychometric performances in both countries. The FGVS is a valid and reliable instrument measuring benefits of freedom from glasses perceived by cataract and presbyopic patients after multifocal IOL surgery.
Tache, Florentin; Farca, Alexandru; Medvedovici, Andrei; David, Victor
2002-05-15
Both the derivatization of free captopril in human plasma samples using monobromobimane as a fluorescent label and the corresponding HPLC method with fluorescence detection (FLD) were validated. The calibration curve for the fluorescent captopril derivative in plasma samples is linear in the ppb-ppm range, with a detection limit of 4 ppb and an identification limit of 10 ppb (P%: 90; ν ≥ 5). These methods were successfully applied in bioequivalence studies carried out on some marketed pharmaceutical formulations.
2014-01-01
Background The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. Methods We emailed a questionnaire to the 64 corresponding authors of those papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations (“reason mentions”) that were identified by the review to represent a reason in a given author’s publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. Results We received 19 responses, of which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct; for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. Conclusions This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed systematic review of reasons are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved? PMID:25262532
White, Sarah A; van den Broek, Nynke R
2004-05-30
Before introducing a new measurement tool it is necessary to evaluate its performance. Several statistical methods have been developed, or used, to evaluate the reliability and validity of a new assessment method in such circumstances. In this paper we review some commonly used methods. Data from a study that was conducted to evaluate the usefulness of a specific measurement tool (the WHO Colour Scale) are then used to illustrate the application of these methods. The WHO Colour Scale was developed under the auspices of the WHO to provide a simple, portable and reliable method of detecting anaemia. This Colour Scale is a discrete interval scale, whereas the actual haemoglobin values it is used to estimate are on a continuous interval scale and can be measured accurately using electrical laboratory equipment. The methods we consider are: linear regression; correlation coefficients; paired t-tests; plotting differences against mean values and deriving limits of agreement; kappa and weighted kappa statistics; sensitivity and specificity; an intraclass correlation coefficient; and the repeatability coefficient. We note that although the definition and properties of each of these methods are well established, inappropriate methods continue to be used in the medical literature for assessing reliability and validity, as evidenced in the context of the evaluation of the WHO Colour Scale. Copyright 2004 John Wiley & Sons, Ltd.
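Of the agreement statistics listed above, the Bland-Altman limits of agreement are the ones most often wrongly replaced by correlation. A minimal sketch of how they are computed for a new device against a laboratory reference (the paired readings below are hypothetical, not the WHO Colour Scale data):

```python
# Bland-Altman limits of agreement between two measurement methods.
import numpy as np

reference = np.array([8.2, 10.1, 11.5, 9.0, 12.3, 13.1, 7.8, 10.6])   # e.g. laboratory haemoglobin, g/dL
new_method = np.array([8.0, 10.8, 11.0, 9.6, 12.0, 13.9, 8.3, 10.1])  # e.g. colour-scale estimate

diff = new_method - reference
bias = diff.mean()                          # mean difference (systematic bias)
sd = diff.std(ddof=1)                       # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias = {bias:.2f}, limits of agreement = {loa[0]:.2f} to {loa[1]:.2f}")
```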
Collins, Anne; Ross, Janine
2017-01-01
We performed a systematic review to identify all original publications describing the asymmetric inheritance of cellular organelles in normal animal eukaryotic cells and to critique the validity and imprecision of the evidence. Searches were performed in Embase, MEDLINE and PubMed up to November 2015. Screening of titles, abstracts and full papers was performed by two independent reviewers. Data extraction and validity assessment were performed by one reviewer and checked by a second reviewer. Study quality was assessed using the SYRCLE risk of bias tool for animal studies and by developing validity tools for the experimental model, organelle markers and imprecision. A narrative data synthesis was performed. We identified 31 studies (34 publications) of the asymmetric inheritance of organelles after mitotic or meiotic division. Studies of the asymmetric inheritance of centrosomes (n = 9), endosomes (n = 6), P granules (n = 4), the midbody (n = 3), mitochondria (n = 3), proteasomes (n = 2), spectrosomes (n = 2), cilia (n = 2) and endoplasmic reticulum (n = 2) were identified. Asymmetry was defined and quantified by variable methods. Assessment of the statistical reliability of the results indicated that only two studies (7%) were judged to be of low concern, the majority of studies (77%) were 'unclear' and five (16%) were judged to be of 'high concern'; the main reason was a low number of technical repeats (<10). Assessment of model validity indicated that the majority of studies (61%) were judged to be valid, ten studies (32%) were unclear and two studies (7%) were judged to be of 'high concern'; both described 'stem cells' without providing experimental evidence (pluripotency and self-renewal) to confirm this. Assessment of marker validity indicated that no studies were of low concern; most studies were unclear (96.5%), indicating there were insufficient details to judge whether the markers were appropriate. One study was of high concern for marker validity due to the contradictory results of two markers for the same organelle. For most studies the validity and imprecision of the results could not be confirmed. In particular, data were limited due to a lack of reporting of interassay variability, sample size calculations, controls and functional validation of organelle markers. An evaluation of 16 systematic reviews containing cell assays found that only 50% reported adherence to PRISMA or ARRIVE reporting guidelines and 38% reported a formal risk of bias assessment. 44% of the reviews did not consider how relevant or valid the models were to the research question, 75% did not consider how valid the markers were, and 69% did not consider the impact of the statistical reliability of the results. Future systematic reviews in basic or preclinical research should ensure rigorous reporting of the statistical reliability of the results in addition to the validity of the methods. Increased awareness of the importance of reporting guidelines and validation tools is needed in the scientific community. PMID:28562636
Mudge, Elizabeth M; Liu, Ying; Lund, Jensen A; Brown, Paula N
2016-11-01
Suitably validated analytical methods that can be used to quantify medicinally active phytochemicals in natural health products are required by regulators, manufacturers, and consumers. Hawthorn (Crataegus) is a botanical ingredient in natural health products used for the treatment of cardiovascular disorders. A method for the quantitation of vitexin-2″-O-rhamnoside, vitexin, isovitexin, rutin, and hyperoside in hawthorn leaf and flower raw materials and finished products was optimized and validated according to AOAC International guidelines. A two-level partial factorial study was used to guide the optimization of the sample preparation. The optimal conditions were found to be a 60-minute extraction using 50:48:2 methanol:water:acetic acid followed by a 25-minute separation using a reversed-phase liquid chromatography column with ultraviolet absorbance detection. The single-laboratory validation study evaluated method selectivity, accuracy, repeatability, linearity, limit of quantitation, and limit of detection. Individual flavonoid content ranged from 0.05 mg/g to 17.5 mg/g in solid dosage forms and raw materials. Repeatability ranged from 0.7 to 11.7% relative standard deviation, corresponding to HorRat values from 0.2 to 1.6. Calibration curves for each flavonoid were linear within the analytical ranges, with correlation coefficients greater than 99.9%. Herein is the first report of a validated method that is fit for the purpose of quantifying five major phytochemical marker compounds in both raw materials and finished products made from North American (Crataegus douglasii) and European (Crataegus monogyna and Crataegus laevigata) hawthorn species. The method includes optimized extraction of samples without a prolonged drying process and reduced liquid chromatography separation time. Georg Thieme Verlag KG Stuttgart · New York.
2012-01-01
Background A method for assessing the model validity of randomised controlled trials of homeopathy is needed. To date, only conventional standards for assessing intrinsic bias (internal validity) of trials have been invoked, with little recognition of the special characteristics of homeopathy. We aimed to identify relevant judgmental domains to use in assessing the model validity of homeopathic treatment (MVHT). We define MVHT as the extent to which a homeopathic intervention and the main measure of its outcome, as implemented in a randomised controlled trial (RCT), reflect 'state-of-the-art' homeopathic practice. Methods Using an iterative process, an international group of experts developed a set of six judgmental domains, with associated descriptive criteria. The domains address: (I) the rationale for the choice of the particular homeopathic intervention; (II) the homeopathic principles reflected in the intervention; (III) the extent of homeopathic practitioner input; (IV) the nature of the main outcome measure; (V) the capability of the main outcome measure to detect change; (VI) the length of follow-up to the endpoint of the study. Six papers reporting RCTs of homeopathy of varying design were randomly selected from the literature. A standard form was used to record each assessor's independent response per domain, using the optional verdicts 'Yes', 'Unclear', 'No'. Concordance among the eight verdicts per domain, across all six papers, was evaluated using the kappa (κ) statistic. Results The six judgmental domains enabled MVHT to be assessed with 'fair' to 'almost perfect' concordance in each case. For the six RCTs examined, the method allowed MVHT to be classified overall as 'acceptable' in three, 'unclear' in two, and 'inadequate' in one. Conclusion Future systematic reviews of RCTs in homeopathy should adopt the MVHT method as part of a complete appraisal of trial validity. PMID:22510227
Cross validation issues in multiobjective clustering
Brusco, Michael J.; Steinley, Douglas
2018-01-01
The implementation of multiobjective programming methods in combinatorial data analysis is an emergent area of study with a variety of pragmatic applications in the behavioural sciences. Most notably, multiobjective programming provides a tool for analysts to model trade-offs among competing criteria in clustering, seriation, and unidimensional scaling tasks. Although multiobjective programming has considerable promise, the technique can produce numerically appealing results that lack empirical validity. With this issue in mind, the purpose of this paper is to briefly review viable areas of application for multiobjective programming and, more importantly, to outline the importance of cross-validation when using this method in cluster analysis. PMID:19055857
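One common way to check the empirical validity of a clustering solution, in the spirit of the cross-validation the authors advocate, is a split-half stability check: cluster each half of the data, transfer one half's solution to the other, and measure agreement. The sketch below uses k-means and the adjusted Rand index purely as an illustration; it is not the authors' multiobjective procedure.

```python
# Split-half cross-validation of a clustering solution (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # hypothetical data matrix

idx = rng.permutation(len(X))
A, B = X[idx[:100]], X[idx[100:]]        # two random halves

for k in (2, 3, 4):
    km_a = KMeans(n_clusters=k, n_init=10, random_state=0).fit(A)
    km_b = KMeans(n_clusters=k, n_init=10, random_state=0).fit(B)
    own = km_b.labels_                   # half B clustered directly
    transferred = km_a.predict(B)        # half B assigned to A's centroids
    # High agreement across halves suggests the k-cluster structure replicates.
    print(k, round(adjusted_rand_score(own, transferred), 3))
```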
Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P
2017-08-14
The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies, and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective performance. Deep Neural Networks were the top-performing classifiers, highlighting the added value of Deep Neural Networks over other more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around the mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with the unoptimized 'DNN_PCM'). Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi-task learning is offered by providing the data and the protocols.
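The two design choices emphasized above, the Matthews Correlation Coefficient as a balanced metric and a temporal rather than random split, are easy to reproduce in outline. The sketch below assumes a hypothetical table of compound activities with a publication-year column; it is not the authors' ChEMBL pipeline.

```python
# Temporal-split validation with the Matthews Correlation Coefficient (sketch).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(1)
# Hypothetical dataset: 8 descriptor columns, a binary activity label, a year.
df = pd.DataFrame(rng.normal(size=(500, 8)), columns=[f"x{i}" for i in range(8)])
df["active"] = (df["x0"] + rng.normal(scale=0.5, size=500) > 0).astype(int)
df["year"] = rng.integers(2008, 2016, size=500)

# Temporal validation: train on older records, test on the most recent ones.
train, test = df[df["year"] < 2014], df[df["year"] >= 2014]
features = [f"x{i}" for i in range(8)]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(train[features], train["active"])
pred = clf.predict(test[features])

print("MCC on the temporal hold-out:", round(matthews_corrcoef(test["active"], pred), 3))
```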
Alahmad, Shoeb; Elfatatry, Hamed M; Mabrouk, Mokhtar M; Hammad, Sherin F; Mansour, Fotouh R
2018-01-01
The development and introduction of combined therapies represent a challenge for analysis because of the severe overlap of the components' UV spectra in spectroscopy, or the requirement of a long, tedious and costly separation technique in chromatography. Quality control laboratories have to develop and validate suitable analytical procedures in order to assay such multi-component preparations. New spectrophotometric methods for the simultaneous determination of simvastatin (SIM) and nicotinic acid (NIA) in binary combinations were developed. These methods are based on chemometric treatment of data; the applied chemometric techniques are multivariate methods, including classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS). In these techniques, the concentration data matrix was prepared using synthetic mixtures containing SIM and NIA dissolved in ethanol. The absorbance data matrix corresponding to the concentration data matrix was obtained by measuring the absorbance at 12 wavelengths in the range 216-240 nm at 2 nm intervals in the zero-order spectra. The spectrophotometric procedures do not require any separation step. The accuracy, precision and linearity ranges of the methods were determined and validated by analyzing synthetic mixtures containing the studied drugs. Chemometric spectrophotometric methods have thus been developed in the present study for the simultaneous determination of simvastatin and nicotinic acid in their synthetic binary mixtures and in their mixtures with possible excipients present in the tablet dosage form. The validation was performed successfully. The developed methods were shown to be accurate, linear, precise, and simple, and can be used routinely for the determination of the drugs in the dosage form. Copyright © Bentham Science Publishers.
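Of the three multivariate calibrations mentioned, partial least squares is the most widely used; it regresses the concentration matrix on the absorbance matrix recorded at the chosen wavelengths. A minimal sketch with simulated spectra (the absorptivities, noise level and concentrations are made up, not the paper's data):

```python
# PLS calibration of a two-component mixture from a 12-wavelength absorbance matrix.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n_mixtures, n_wavelengths = 25, 12           # e.g. 216-240 nm at 2 nm steps

# Hypothetical pure-component absorptivity profiles and training concentrations.
eps_sim = rng.uniform(0.2, 1.0, n_wavelengths)
eps_nia = rng.uniform(0.2, 1.0, n_wavelengths)
C_train = rng.uniform(2, 20, size=(n_mixtures, 2))   # [SIM, NIA] concentrations

# Beer-Lambert mixtures plus instrumental noise give the absorbance matrix.
A_train = C_train @ np.vstack([eps_sim, eps_nia]) + rng.normal(0, 0.01, (n_mixtures, n_wavelengths))

pls = PLSRegression(n_components=2).fit(A_train, C_train)

# Predict an unseen synthetic mixture.
c_true = np.array([[8.0, 12.0]])
a_new = c_true @ np.vstack([eps_sim, eps_nia]) + rng.normal(0, 0.01, (1, n_wavelengths))
print("true:", c_true.ravel(), "predicted:", pls.predict(a_new).ravel().round(2))
```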
Aase, Audun; Hajdusek, Ondrej; Øines, Øivind; Quarsten, Hanne; Wilhelmsson, Peter; Herstad, Tove K; Kjelland, Vivian; Sima, Radek; Jalovecka, Marie; Lindgren, Per-Eric; Aaberge, Ingeborg S
2016-01-01
A modified microscopy protocol (the LM-method) was used to demonstrate what was interpreted as Borrelia spirochetes, and later also Babesia sp., in peripheral blood from patients. The method gained much publicity but was not validated prior to publication; validation therefore became the purpose of this study, which used appropriate scientific methodology, including a control group. Blood from 21 patients previously interpreted as positive for Borrelia and/or Babesia infection by the LM-method and from 41 healthy controls without a known history of tick bite was collected, blinded and analysed for these pathogens by microscopy in two laboratories (by the LM-method and the conventional method, respectively), by PCR methods in five laboratories and by serology in one laboratory. Microscopy by the LM-method identified structures claimed to be Borrelia and/or Babesia in 66% of the blood samples of the patient group and in 85% of the healthy control group. Conventional microscopy, performed for Babesia only, did not identify Babesia in any sample. PCR analysis detected Borrelia DNA in one sample of the patient group and in eight samples of the control group, whereas Babesia DNA was not detected in any of the blood samples using molecular methods. The structures interpreted as Borrelia and Babesia by the LM-method could not be verified by PCR. The method was, thus, falsified. This study underlines the importance of doing proper test validation before new or modified assays are introduced.
Stir bar sorptive extraction of diclofenac from liquid formulations: a proof of concept study.
Kole, Prashant Laxman; Millership, Jeff; McElnay, James C
2011-03-25
A new stir bar sorptive extraction (SBSE) technique coupled with an HPLC-UV method for the quantification of diclofenac in pharmaceutical formulations has been developed and validated as a proof of concept study. Commercially available polydimethylsiloxane stir bars (Twister™) were used for method development; the SBSE extraction parameters (pH, phase ratio, stirring speed, temperature, ionic strength and time) and liquid desorption procedures (solvents, desorption method, stirring time, etc.) were optimised. The method was validated as per ICH guidelines and was successfully applied to the estimation of diclofenac in three liquid formulations, viz. Voltarol(®) Ophtha single-dose eye drops, Voltarol(®) Ophtha multidose eye drops and Voltarol(®) ampoules. The developed method was found to be linear (r=0.9999) over the 100-2000 ng/ml concentration range with acceptable accuracy and precision (tested over three QC concentrations). The SBSE extraction recovery of diclofenac was found to be 70%, and the LOD and LOQ of the validated method were found to be 16.06 and 48.68 ng/ml, respectively. Furthermore, a forced degradation study of a diclofenac formulation leading to the formation of a structurally similar cyclic impurity (indolinone) was carried out. The developed extraction method showed results comparable to those of the reference method, i.e. the method was capable of selectively extracting the indolinone and diclofenac from the liquid matrix. Data on inter- and intra-stir bar accuracy and precision further confirmed the robustness of the method, supporting the multiple re-use of the stir bars. Copyright © 2010 Elsevier B.V. All rights reserved.
McAteer, Carole Ian; Truong, Nhan-Ai Thi; Aluoch, Josephine; Deathe, Andrew Roland; Nyandiko, Winstone M; Marete, Irene; Vreeman, Rachel Christine
2016-01-01
Introduction HIV-related stigma impacts the quality of life and care management of HIV-infected and HIV-affected individuals, but how we measure stigma and its impact on children and adolescents has less often been described. Methods We conducted a systematic review of studies that measured HIV-related stigma with a quantitative tool in paediatric HIV-infected and HIV-affected populations. Results and discussion Varying measures have been used to assess stigma in paediatric populations, with most studies utilizing the full or a variant form of the HIV Stigma Scale, which has been validated in adult populations and utilized with paediatric populations in Africa, Asia and the United States. Other common measures included the Perceived Public Stigma Against Children Affected by HIV, primarily utilized and validated in China. Few studies employed item validation techniques with the population of interest, although scales were used in a cultural context different from the one in which the scale originated. Conclusions Many stigma measures have been used to assess HIV stigma in paediatric populations globally, but few have employed methods for cultural adaptation and content validity. PMID:27717409
AlHeresh, Rawan; LaValley, Michael P; Coster, Wendy; Keysor, Julie J
2017-06-01
To evaluate the construct validity and scoring methods of the World Health Organization Health and Work Performance Questionnaire (HPQ) for people with arthritis. Construct validity was examined through hypothesis testing using the recommended guidelines of the consensus-based standards for the selection of health measurement instruments (COSMIN). The HPQ using the absolute scoring method showed moderate construct validity, as four of the seven hypotheses were met. The HPQ using the relative scoring method had weak construct validity, as only one of the seven hypotheses was met. The absolute scoring method for the HPQ is superior in construct validity to the relative scoring method in assessing work performance among people with arthritis and related rheumatic conditions; however, more research is needed to further explore other psychometric properties of the HPQ.
Valdés, Patricio R; Alarcon, Ana M; Munoz, Sergio R
2013-03-01
To generate and validate a scale to measure the Informed Choice of contraceptive methods among women attending a family health care service in Chile. The study follows a multimethod design that combined expert opinions from 13 physicians, 3 focus groups of 21 women each, and a sample survey of 1,446 women. Data analysis consisted of a qualitative text analysis of group interviews, a factor analysis for construct validity, and kappa statistic and Cronbach alpha to assess scale reliability. The instrument comprises 25 items grouped into six categories: information and orientation, quality of treatment, communication, participation in decision making, expression of reproductive rights, and method access and availability. Internal consistency measured with Cronbach alpha ranged from 0.75 to 0.89 for all subscales (kappa, 0.62; standard deviation, 0.06), and construct validity was demonstrated from the testing of several hypotheses. The use of mixed methods contributed to developing a scale of Informed Choice that was culturally appropriate for assessing the women who participated in the family planning program. Copyright © 2013 Elsevier Inc. All rights reserved.
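The Cronbach's alpha values reported above can be computed directly from the item-level responses as α = k/(k−1) × (1 − Σ item variances / total-score variance). A minimal sketch with made-up Likert responses (not the study's survey data):

```python
# Cronbach's alpha for a subscale of k items (illustrative data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(120, 1))    # shared trait drives correlated answers
responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(120, 5))), 1, 5)

print("alpha =", round(cronbach_alpha(responses), 2))   # values >= 0.70 are usually deemed acceptable
```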
Overby, Nina Cecilie; Johannesen, Elisabeth; Jensen, Grete; Skjaevesland, Anne-Kirsti; Haugen, Margaretha
2014-01-01
The assessment of food intake is challenging and prone to errors; it is therefore important to consider the reliability and validity of the assessment methods. The aim of this study was to analyze the reproducibility and validity of a food-frequency questionnaire (FFQ) developed for use among adolescents. In total, 58 students (aged 13-14) from four different schools in the southern part of Norway participated in the reproducibility study, filling out the FFQ twice, 4 weeks apart. In addition, 93 students participated in the relative validity study, where the FFQ was compared to 2×24-hour dietary recalls, while 92 students participated in the absolute validity study, where the intakes of fatty acids and vitamin D from the FFQ were compared to fatty acids and 25-hydroxy-vitamin D3 in whole blood. The median Spearman correlation coefficient for all nutrients in the test-retest reliability study was 0.57. The median Spearman correlation for all nutrients in the relative validity study was 0.26, while the correlation coefficients were low in the absolute validity study, with n-3 fatty acid coefficients ranging from 0.05 to 0.25, and absent for vitamin D (r=0.000). The test-retest reproducibility was considered good, the relative validity was considered poor to good, and the absolute validity was considered poor. However, the results are comparable to other studies among adolescents.
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2014-01-01
Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology validation via flight testing. This paper explores the implementation of probabilistic methods in the sensitivity analysis of the structural response of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD). HIAD architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during re-entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. In the example presented here, the structural parameters of an existing HIAD model have been varied to illustrate the design approach utilizing uncertainty-based methods. Surrogate models have been used to reduce the computational expense by several orders of magnitude. The suitability of the design is based on assessing the variation in the resulting cone angle. The acceptable cone angle variation would depend on the aerodynamic requirements.
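The workflow sketched in the abstract, run the expensive structural model at a modest number of parameter settings, fit a cheap surrogate, then propagate parameter uncertainty through the surrogate, can be outlined as follows. The "expensive model" here is a stand-in function and the parameter ranges are invented, purely to illustrate the surrogate-plus-Monte-Carlo pattern rather than any actual HIAD analysis.

```python
# Surrogate-based Monte Carlo sensitivity sketch (stand-in model, invented ranges).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(4)

def expensive_model(stiffness, pressure):
    """Placeholder for a costly structural simulation returning a cone angle (deg)."""
    return 70 - 3.0 * np.log(stiffness) + 0.8 * np.sqrt(pressure) + 0.02 * stiffness * pressure

# 1) Small design-of-experiments run of the "expensive" model.
X_doe = np.column_stack([rng.uniform(5, 15, 40), rng.uniform(10, 30, 40)])
y_doe = expensive_model(X_doe[:, 0], X_doe[:, 1])

# 2) Fit a quadratic response-surface surrogate.
surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X_doe, y_doe)

# 3) Monte Carlo on the cheap surrogate to characterize cone-angle variation.
X_mc = np.column_stack([rng.normal(10, 1.5, 100_000), rng.normal(20, 3.0, 100_000)])
angles = surrogate.predict(X_mc)
print(f"cone angle: mean {angles.mean():.2f} deg, 2.5-97.5% range "
      f"{np.percentile(angles, 2.5):.2f}-{np.percentile(angles, 97.5):.2f} deg")
```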
On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.
Westgate, Philip M; Burchett, Woodrow W
2017-03-15
The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.
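As a rough illustration of the kind of analysis described, generalized estimating equations for Gaussian repeated measures with a small-sample bias-corrected covariance estimator, the sketch below uses statsmodels, which offers a 'bias_reduced' covariance option for GEE fits. The data frame, variable names and working structure are hypothetical, and the exact bias correction and correlation-selection strategy of the paper may differ from what this library implements.

```python
# GEE for a small sample of Gaussian repeated measurements (illustrative sketch).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_subjects, n_times = 12, 4                      # deliberately small sample

subject = np.repeat(np.arange(n_subjects), n_times)
time = np.tile(np.arange(n_times), n_subjects)
treat = np.repeat(rng.integers(0, 2, n_subjects), n_times)
# A subject-level random shift induces within-subject correlation.
y = (1.0 + 0.5 * treat + 0.2 * time
     + np.repeat(rng.normal(0, 1, n_subjects), n_times)
     + rng.normal(0, 0.5, n_subjects * n_times))

df = pd.DataFrame({"y": y, "treat": treat, "time": time, "subject": subject})

model = sm.GEE.from_formula("y ~ treat + time", groups="subject", data=df,
                            family=sm.families.Gaussian(),
                            cov_struct=sm.cov_struct.Exchangeable())
# 'bias_reduced' requests a small-sample bias-corrected empirical covariance matrix.
result = model.fit(cov_type="bias_reduced")
print(result.summary())
```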
A survey on sleep assessment methods
Silva, Josep; Cauli, Omar
2018-01-01
Purpose A literature review is presented that aims to summarize and compare current methods to evaluate sleep. Methods Current sleep assessment methods have been classified according to different criteria; e.g., objective (polysomnography, actigraphy…) vs. subjective (sleep questionnaires, diaries…), contact vs. contactless devices, and need for medical assistance vs. self-assessment. A comparison of validation studies is carried out for each method, identifying the sensitivity and specificity reported in the literature. Finally, the state of the market has also been reviewed with respect to customers’ opinions about current sleep apps. Results We provide a taxonomy that classifies the sleep detection methods, a description of each method that includes the tendencies of their underlying technologies analyzed in accordance with the literature, and a comparison, in terms of precision, of existing validation studies and reports. Discussion In order of accuracy, sleep detection methods may be arranged as follows: questionnaire < sleep diary < contactless devices < contact devices < polysomnography. The literature suggests that current subjective methods present a sensitivity between 73% and 97.7%, while their specificity ranges in the interval 50%–96%. Objective methods such as actigraphy present a sensitivity higher than 90%. However, their specificity is low compared to their sensitivity, which is one of the limitations of such technology. Moreover, there are other factors, such as the patient’s perception of her or his sleep, that can be provided only by subjective methods. Therefore, sleep detection methods should be combined to produce a synergy between objective and subjective methods. The review of the market indicates the most valued sleep apps, but it also identifies problems and gaps; e.g., many hardware devices have not been validated and (especially software apps) should be studied before their clinical use. PMID:29844990
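The sensitivity and specificity figures compared above come from tabulating each device's epoch-by-epoch sleep/wake calls against polysomnography. A minimal sketch of that computation (the scored epochs below are invented):

```python
# Sensitivity and specificity of a sleep detector against polysomnography (sketch).
import numpy as np

# 1 = scored as sleep, 0 = scored as wake, for the same epochs (made-up example).
psg    = np.array([1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1])
device = np.array([1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1])

tp = np.sum((device == 1) & (psg == 1))   # sleep correctly detected
tn = np.sum((device == 0) & (psg == 0))   # wake correctly detected
fp = np.sum((device == 1) & (psg == 0))
fn = np.sum((device == 0) & (psg == 1))

sensitivity = tp / (tp + fn)              # ability to detect sleep epochs
specificity = tn / (tn + fp)              # ability to detect wake epochs
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```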
Ananthula, Suryatheja; Janagam, Dileep R; Jamalapuram, Seshulatha; Johnson, James R; Mandrell, Timothy D; Lowe, Tao L
2015-10-15
A rapid, sensitive, selective and accurate LC/MS/MS method was developed for the quantitative determination of levonorgestrel (LNG) in rat plasma and further validated for specificity, linearity, accuracy, precision, sensitivity, matrix effect, recovery efficiency and stability. A liquid-liquid extraction procedure using a hexane:ethyl acetate mixture at an 80:20 v:v ratio was employed to efficiently extract LNG from rat plasma. A reversed-phase Luna C18(2) column (50×2.0 mm i.d., 3 μm) installed on an AB SCIEX Triple Quad™ 4500 LC/MS/MS system was used to perform the chromatographic separation. LNG was identified within 2 min with high specificity. The calibration curve was linear within the 0.5-50 ng·mL(-1) concentration range. The developed method was validated for intra-day and inter-day accuracy and precision, whose values fell within the acceptable limits. The matrix effect was found to be minimal. Recovery efficiency at three quality control (QC) concentrations, 0.5 (low), 5 (medium) and 50 (high) ng·mL(-1), was found to be >90%. The stability of LNG at various stages of the experiment, including storage, extraction and analysis, was evaluated using QC samples, and the results showed that LNG was stable under all conditions. This validated method was successfully used to study the pharmacokinetics of LNG in rats after SubQ injection, demonstrating its applicability in relevant preclinical studies. Copyright © 2015 Elsevier B.V. All rights reserved.
Debray, Thomas P A; Vergouwe, Yvonne; Koffijberg, Hendrik; Nieboer, Daan; Steyerberg, Ewout W; Moons, Karel G M
2015-03-01
It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies with independent data from "different but related" samples as compared with that of the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models. We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their corresponding case-mix differences. We subsequently assess the models' performance in the validation sample and interpret the performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting. We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, two other samples rather assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. The proposed framework enhances the interpretation of findings at external validation of prediction models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
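In the kind of external validation the framework describes, the model developed elsewhere is applied unchanged to the validation sample and its discrimination and calibration are summarized before any updating is considered. The sketch below computes a c-statistic and a calibration intercept and slope for a previously fitted logistic prediction model; the coefficients and validation data are invented for illustration.

```python
# Discrimination and calibration of an existing prediction model in a validation sample (sketch).
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)

# Hypothetical previously published model: logit(p) = -1.0 + 0.8*x1 + 0.5*x2
coef = np.array([-1.0, 0.8, 0.5])

# Hypothetical validation sample with a somewhat different case mix.
X_val = np.column_stack([np.ones(400), rng.normal(0.3, 1.2, 400), rng.normal(0, 1, 400)])
lp = X_val @ coef                                        # linear predictor from the *existing* model
y_val = rng.binomial(1, 1 / (1 + np.exp(-(lp + 0.2))))   # outcomes generated with mild miscalibration

# Discrimination: c-statistic (area under the ROC curve).
print("c-statistic:", round(roc_auc_score(y_val, lp), 3))

# Calibration: regress the outcome on the linear predictor;
# a slope near 1 and intercept near 0 suggest the model transports well.
recal = sm.Logit(y_val, sm.add_constant(lp)).fit(disp=0)
print("calibration intercept, slope:", recal.params.round(3))
```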
Automatic Generation of Validated Specific Epitope Sets.
Carrasco Pro, Sebastian; Sidney, John; Paul, Sinu; Lindestam Arlehamn, Cecilia; Weiskopf, Daniela; Peters, Bjoern; Sette, Alessandro
2015-01-01
Accurate measurement of B and T cell responses is a valuable tool to study autoimmunity, allergies, immunity to pathogens, and host-pathogen interactions and assist in the design and evaluation of T cell vaccines and immunotherapies. In this context, it is desirable to elucidate a method to select validated reference sets of epitopes to allow detection of T and B cells. However, the ever-growing information contained in the Immune Epitope Database (IEDB) and the differences in quality and subjects studied between epitope assays make this task complicated. In this study, we develop a novel method to automatically select reference epitope sets according to a categorization system employed by the IEDB. From the sets generated, three epitope sets (EBV, mycobacteria and dengue) were experimentally validated by detection of T cell reactivity ex vivo from human donors. Furthermore, a web application that will potentially be implemented in the IEDB was created to allow users the capacity to generate customized epitope sets.
Casartelli, Nicola; Müller, Roland; Maffiuletti, Nicola A
2010-11-01
The aim of the present study was to verify the validity and reliability of the Myotest accelerometric system (Myotest SA, Sion, Switzerland) for the assessment of vertical jump height. Forty-four male basketball players (age range: 9-25 years) performed series of squat, countermovement and repeated jumps during 2 identical test sessions separated by 2-15 days. Flight height was simultaneously quantified with the Myotest system and validated photoelectric cells (Optojump). Two calculation methods were used to estimate the jump height from Myotest recordings: flight time (Myotest-T) and vertical takeoff velocity (Myotest-V). Concurrent validity was investigated by comparing Myotest-T and Myotest-V to the criterion method (Optojump), and test-retest reliability was also examined. As regards validity, Myotest-T overestimated jumping height compared to Optojump (p < 0.001) with a systematic bias of approximately 7 cm, even though random errors were low (2.7 cm) and intraclass correlation coefficients (ICCs) were high (>0.98), that is, excellent validity. Myotest-V overestimated jumping height compared to Optojump (p < 0.001), with high random errors (>12 cm), high limits of agreement ratios (>36%), and low ICCs (<0.75), that is, poor validity. As regards reliability, Myotest-T showed high ICCs (range: 0.92-0.96), whereas Myotest-V showed low ICCs (range: 0.56-0.89), and high random errors (>9 cm). In conclusion, Myotest-T is a valid and reliable method for the assessment of vertical jump height, and its use is legitimate for field-based evaluations, whereas Myotest-V is neither valid nor reliable.
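Flight-time methods such as Myotest-T estimate jump height from ballistic flight: if take-off and landing occur at the same height, h = g·t²/8, where t is the measured flight time. A small worked example (the flight times are made up):

```python
# Jump height from flight time under the ballistic assumption h = g * t^2 / 8.
G = 9.81  # m/s^2

def jump_height_cm(flight_time_s: float) -> float:
    return 100 * G * flight_time_s ** 2 / 8

for t in (0.45, 0.55, 0.65):            # hypothetical flight times in seconds
    print(f"flight time {t:.2f} s -> height {jump_height_cm(t):.1f} cm")
```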
Fazio, Tatiana Tatit; Singh, Anil Kumar; Kedor-Hackmann, Erika Rosa Maria; Santoro, Maria Inês Rocha Miritello
2007-03-12
Cleaning validation is an integral part of current good manufacturing practices in any pharmaceutical industry. Nowadays, azathioprine and several other pharmacologically potent pharmaceuticals are manufactured in the same production area. A carefully designed cleaning validation and its evaluation can ensure that residues of azathioprine will not carry over and cross-contaminate the subsequent product. The aim of this study was to validate a simple analytical method for the verification of residual azathioprine on equipment used in the production area and to confirm the efficiency of the cleaning procedure. The HPLC method was validated on an LC system using a Nova-Pak C18 column (3.9 mm x 150 mm, 4 microm) and methanol-water-acetic acid (20:80:1, v/v/v) as mobile phase at a flow rate of 1.0 mL min(-1). UV detection was made at 280 nm. The calibration curve was linear over a concentration range from 2.0 to 22.0 microg mL(-1) with a correlation coefficient of 0.9998. The detection limit (DL) and quantitation limit (QL) were 0.09 and 0.29 microg mL(-1), respectively. The intra-day and inter-day precision, expressed as relative standard deviation (R.S.D.), were below 2.0%. The mean recovery of the method was 99.19%. The mean extraction-recovery from manufacturing equipment was 83.5%. The developed UV spectrophotometric method could only be used as a limit method to qualify or reject the cleaning procedure in the production area. Nevertheless, the simplicity of the spectrophotometric method makes it useful for routine analysis of azathioprine residues on cleaned surfaces and as an alternative to the proposed HPLC method.
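Detection and quantitation limits like the DL and QL quoted above are often derived from the calibration curve using the ICH relations DL ≈ 3.3·σ/S and QL ≈ 10·σ/S, where σ is the residual (or intercept) standard deviation and S the slope. A minimal sketch with simulated calibration points, not the paper's data:

```python
# ICH-style DL/QL estimation from a linear calibration curve (simulated data).
import numpy as np

conc = np.array([2.0, 6.0, 10.0, 14.0, 18.0, 22.0])         # ug/mL standards
rng = np.random.default_rng(7)
signal = 12.5 * conc + 3.0 + rng.normal(0, 0.4, conc.size)  # hypothetical peak areas

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)            # residual SD of the regression

dl = 3.3 * sigma / slope                 # detection limit
ql = 10.0 * sigma / slope                # quantitation limit
r = np.corrcoef(conc, signal)[0, 1]
print(f"r = {r:.4f}, DL = {dl:.3f} ug/mL, QL = {ql:.3f} ug/mL")
```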
Plappert-Helbig, Ulla; Junker-Walker, Ursula; Martus, Hans-Joerg
2015-07-01
As part of the Japanese Center for the Validation of Alternative Methods (JaCVAM)-initiated international validation study of the in vivo rat alkaline comet assay (comet assay), we examined methyl methanesulfonate (MMS), 2,6-diaminotoluene, and 5-fluorouracil under coded test conditions. Rats were treated orally with the maximum tolerated dose (MTD) and two additional descending doses of the respective compounds. In the MMS-treated groups, liver and stomach showed significantly elevated DNA damage at each dose level and a significant dose-response relationship. 2,6-Diaminotoluene induced significantly elevated DNA damage in the liver at each dose and a statistically significant dose-response relationship, whereas no DNA damage was observed in the stomach. 5-Fluorouracil did not induce DNA damage in either liver or stomach. Copyright © 2015 Elsevier B.V. All rights reserved.
Iqbal, Muzaffar; Ezzeldin, Essam; Rezk, Naser L; Bajrai, Amal A; Al-Rashood, Khalid A
2018-04-25
The purpose of this study was the development, validation and application of an ultra-performance liquid chromatography (UPLC)-ESI-MS/MS method for the quantitation of flibanserin in plasma samples. After extraction of the analyte from plasma with diethyl ether, separation was performed on a UPLC C18 column using a mobile phase of 10 mM ammonium formate-acetonitrile (30:70, v/v) with isocratic elution at 0.3 ml/min. The multiple reaction monitoring transitions m/z 391.13→161.04 and 384.20→253.06 were used for detection of the analyte and the internal standard (quetiapine), respectively. The calibration curves were linear (r ≥0.995) over the 0.22-555 ng/ml concentration range, and all validation results were within the acceptable range as per US FDA guidelines. The assay procedure was fully validated and successfully applied in a pharmacokinetic interaction study of flibanserin with bosentan in rats.
Sánchez, Raquel; Snell, James; Held, Andrea; Emons, Hendrik
2015-08-01
A simple, robust and reliable method for mercury determination in seawater matrices based on the combination of cold vapour generation and inductively coupled plasma mass spectrometry (CV-ICP-MS) and its complete in-house validation are described. The method validation covers parameters such as linearity, limit of detection (LOD), limit of quantification (LOQ), trueness, repeatability, intermediate precision and robustness. A calibration curve covering the whole working range was achieved with coefficients of determination typically higher than 0.9992. The repeatability of the method (RSDrep) was 0.5 %, and the intermediate precision was 2.3 % at the target mass fraction of 20 ng/kg. Moreover, the method was robust with respect to the salinity of the seawater. The limit of quantification was 2.7 ng/kg, which corresponds to 13.5 % of the target mass fraction in the future certified reference material (20 ng/kg). An uncertainty budget for the measurement of mercury in seawater has been established. The relative expanded (k = 2) combined uncertainty is 6 %. The performance of the validated method was demonstrated by generating results for process control and a homogeneity study for the production of a candidate certified reference material.
Wallston, Kenneth A.; Wilkins, Consuelo H.; Hull, Pamela C.; Miller, Stephania T.
2015-01-01
Abstract Objective This study describes the development and psychometric evaluation of the HPV Clinical Trial Survey for Parents with Children Aged 9 to 15 (CTSP-HPV) using traditional instrument development methods and community engagement principles. Methods An expert panel and parental input informed survey content, and parents recommended study design changes (e.g., flyer wording). A convenience sample of 256 parents completed the final survey measuring parental willingness to consent to HPV clinical trial (CT) participation and other factors hypothesized to influence willingness (e.g., HPV vaccine benefits). Cronbach's α, Spearman correlations, and multiple linear regression were used to estimate internal consistency, convergent and discriminant validity, and predictive validity, respectively. Results Internal reliability was confirmed for all scales (α ≥ 0.70). Parental willingness was positively associated (p < 0.05) with trust in medical researchers, adolescent CT knowledge, HPV vaccine benefits, and advantages of adolescent CTs (r range 0.33–0.42), supporting convergent validity. Moderate discriminant construct validity was also demonstrated. Regression results indicate reasonable predictive validity, with the six scales accounting for 31% of the variance in parents’ willingness. Conclusions This instrument can inform interventions based on factors that influence parental willingness, which may lead to an eventual increase in trial participation. Further psychometric testing is warranted. PMID:26530324
Identifying and classifying hyperostosis frontalis interna via computerized tomography.
May, Hila; Peled, Nathan; Dar, Gali; Hay, Ori; Abbas, Janan; Masharawi, Youssef; Hershkovitz, Israel
2010-12-01
The aim of this study was to recognize the radiological characteristics of hyperostosis frontalis interna (HFI) and to establish a valid and reliable method for its identification and classification. A reliability test was carried out on 27 individuals who had undergone a head computerized tomography (CT) scan. Intra-observer reliability was obtained by examining the images three times, by the same researcher, with a 2-week interval between each ranking. The inter-observer test was performed by three independent researchers. A validity test was carried out using two methods for identifying and classifying HFI: 46 cadaver skullcaps were ranked twice, via computerized tomography scans and then by direct observation. Reliability and validity were calculated using the kappa test (SPSS 15.0). Reliability tests of ranking HFI via CT scans demonstrated good results (K > 0.7). As for validity, very good agreement was obtained between CT and direct observation when moderate and advanced types of HFI were present (K = 0.82). The suggested classification method for HFI, using CT, demonstrated a sensitivity of 84%, specificity of 90.5%, and positive predictive value of 91.3%. In conclusion, volume rendering is a reliable and valid tool for identifying HFI. The suggested three-scale classification is most suitable for radiological diagnosis of the phenomenon. Considering the increasing awareness of HFI as an early indicator of a developing malady, this study may assist radiologists in identifying and classifying the phenomenon.
NASA Astrophysics Data System (ADS)
Chang, Q.; Jiao, W.
2017-12-01
Phenology is a sensitive and critical feature of vegetation change that has been regarded as a good indicator in climate change studies. To date, a variety of remote sensing data sources and methods for extracting phenology from satellite datasets have been developed to study the spatial-temporal dynamics of vegetation phenology. However, the differences between vegetation phenology results caused by the various satellite datasets and phenology extraction methods are not clear, and the reliability of the different phenology results extracted from remote sensing datasets has not been verified and compared using ground observation data. Based on the three most popular remote sensing phenology extraction methods, this research calculated the start of the growing season (SOS) for each pixel in the Northern Hemisphere for two long time series satellite datasets: GIMMS NDVIg (SOSg) and GIMMS NDVI3g (SOS3g). The three methods used in this research are the maximum increase method, the dynamic threshold method and the midpoint method. This study then used SOS calculated from NEE datasets (SOS_NEE) monitored by 48 eddy flux tower sites in the global flux website to validate the reliability of the six phenology results calculated from remote sensing datasets. Results showed that neither SOSg nor SOS3g extracted by the maximum increase method was correlated with the ground-observed phenology metrics. SOSg and SOS3g extracted by the dynamic threshold method and the midpoint method were both significantly correlated with SOS_NEE. Compared with SOSg extracted by the dynamic threshold method, SOSg extracted by the midpoint method had a stronger correlation with SOS_NEE; the same held for SOS3g. Additionally, SOSg showed a stronger correlation with SOS_NEE than SOS3g extracted by the same method. SOS extracted by the midpoint method from the GIMMS NDVIg dataset appeared to be the most reliable result when validated against SOS_NEE. These results can be used as a reference for data and method selection in future phenology studies.
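Threshold-style SOS extraction, of which the dynamic threshold and midpoint methods mentioned above are variants, rescales each pixel's annual NDVI curve between its minimum and maximum and takes the first day the rescaled curve crosses a chosen fraction (0.5 for the midpoint approach). The sketch below applies this idea to a synthetic NDVI time series; the smoothing, compositing and exact threshold choices of the cited methods are not reproduced.

```python
# Threshold-based start-of-season (SOS) extraction from an annual NDVI curve (sketch).
import numpy as np

doy = np.arange(1, 366, 15)                               # composite dates (day of year)
# Synthetic NDVI curve: winter baseline with a summer green-up and autumn senescence.
ndvi = 0.2 + 0.5 / (1 + np.exp(-(doy - 130) / 12)) - 0.5 / (1 + np.exp(-(doy - 280) / 15))

def sos_threshold(doy, ndvi, fraction=0.5):
    """First day the min-max rescaled NDVI exceeds `fraction` (0.5 ~ midpoint method)."""
    scaled = (ndvi - ndvi.min()) / (ndvi.max() - ndvi.min())
    above = np.where(scaled >= fraction)[0]
    return int(doy[above[0]]) if above.size else None

print("SOS (midpoint, 0.5):", sos_threshold(doy, ndvi, 0.5))
print("SOS (threshold, 0.2):", sos_threshold(doy, ndvi, 0.2))
```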
Comparative assessment of bioanalytical method validation guidelines for pharmaceutical industry.
Kadian, Naveen; Raju, Kanumuri Siva Rama; Rashid, Mamunur; Malik, Mohd Yaseen; Taneja, Isha; Wahajuddin, Muhammad
2016-07-15
The concepts, importance, and application of bioanalytical method validation have been discussed for a long time and validation of bioanalytical methods is widely accepted as pivotal before they are taken into routine use. United States Food and Drug Administration (USFDA) guidelines issued in 2001 have been referred for every guideline released ever since; may it be European Medical Agency (EMA) Europe, National Health Surveillance Agency (ANVISA) Brazil, Ministry of Health and Labour Welfare (MHLW) Japan or any other guideline in reference to bioanalytical method validation. After 12 years, USFDA released its new draft guideline for comments in 2013, which covers the latest parameters or topics encountered in bioanalytical method validation and approached towards the harmonization of bioanalytical method validation across the globe. Even though the regulatory agencies have general agreement, significant variations exist in acceptance criteria and methodology. The present review highlights the variations, similarities and comparison between bioanalytical method validation guidelines issued by major regulatory authorities worldwide. Additionally, other evaluation parameters such as matrix effect, incurred sample reanalysis including other stability aspects have been discussed to provide an ease of access for designing a bioanalytical method and its validation complying with the majority of drug authority guidelines. Copyright © 2016. Published by Elsevier B.V.
Bouchoucha, Mongia; Akrout, Mouna; Bellali, Hédia; Bouchoucha, Rim; Tarhouni, Fadwa; Mansour, Abderraouf Ben; Zouari, Béchir
2016-01-01
Background Estimation of food portion sizes has always been a challenge in dietary studies on free-living individuals. The aim of this work was to develop and validate a food photography manual to improve the accuracy of the estimated size of consumed food portions. Methods A manual was compiled from digital photos of foods commonly consumed by the Tunisian population. The food was cooked and weighed before taking digital photographs of three portion sizes. The manual was validated by comparing the method of 24-hour recall (using photos) to the reference method [food weighing (FW)]. In both the methods, the comparison focused on food intake amounts as well as nutritional issues. Validity was assessed by Bland–Altman limits of agreement. In total, 31 male and female volunteers aged 9–89 participated in the study. Results We focused on eight food categories and compared their estimated amounts (using the 24-hour recall method) to those actually consumed (using FW). Animal products and sweets were underestimated, whereas pasta, bread, vegetables, fruits, and dairy products were overestimated. However, the difference between the two methods is not statistically significant except for pasta (p<0.05) and dairy products (p<0.05). The coefficient of correlation between the two methods is highly significant, ranging from 0.876 for pasta to 0.989 for dairy products. Nutrient intake calculated for both methods showed insignificant differences except for fat (p<0.001) and dietary fiber (p<0.05). A highly significant correlation was observed between the two methods for all micronutrients. The test agreement highlights the lack of difference between the two methods. Conclusion The difference between the 24-hour recall method using digital photos and the weighing method is acceptable. Our findings indicate that the food photography manual can be a useful tool for quantifying food portion sizes in epidemiological dietary surveys. PMID:27585631
A simple method for measurement of maximal downstroke power on friction-loaded cycle ergometer.
Morin, Jean-Benoît; Belli, Alain
2004-01-01
The aim of this study was to propose and validate a post-hoc correction method to obtain maximal power values taking into account the inertia of the flywheel during sprints on friction-loaded cycle ergometers. This correction method was derived from the basic postulate of a linear deceleration-time evolution during the initial phase (until maximal power) of a sprint and included simple parameters such as flywheel inertia, maximal velocity, time to reach maximal velocity and friction force. The validity of this model was tested by comparing measured and calculated maximal power values for 19 sprint bouts performed by five subjects against 0.6-1 N kg(-1) friction loads. Non-significant differences between measured and calculated maximal power (1151+/-169 vs. 1148+/-170 W) and a mean error index of 1.31+/-1.20% (ranging from 0.09% to 4.20%) showed the validity of this method. Furthermore, the differences between measured maximal power and power neglecting inertia (20.4+/-7.6%, ranging from 9.5% to 33.2%) emphasized the usefulness of correcting power in studies of anaerobic power that do not include inertia, and also the interest of this simple post-hoc method.
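On a friction-loaded ergometer, the total force the rider overcomes at the flywheel rim is the friction load plus the inertial force needed to accelerate the flywheel, so instantaneous power is often written P = (F_friction + I·a/r²)·v, with I the flywheel's moment of inertia, r its radius, v the rim velocity and a the rim acceleration. The sketch below illustrates that generic correction from sampled velocity, not the specific post-hoc formula validated in the paper; all numbers are invented.

```python
# Inertia-corrected power on a friction-loaded cycle ergometer (generic sketch).
import numpy as np

friction_force = 50.0      # N, applied friction load (hypothetical)
inertia = 0.9              # kg*m^2, flywheel moment of inertia (hypothetical)
radius = 0.26              # m, flywheel radius (hypothetical)

t = np.linspace(0, 3, 301)                       # s, sampled at 100 Hz
v = 12.0 * (1 - np.exp(-t / 1.2))                # m/s, rim velocity during a sprint (synthetic)
a = np.gradient(v, t)                            # m/s^2, rim acceleration

p_friction = friction_force * v                                 # power against friction only
p_corrected = (friction_force + inertia * a / radius**2) * v    # plus power to accelerate the flywheel

print(f"peak power, friction only     : {p_friction.max():.0f} W")
print(f"peak power, inertia-corrected : {p_corrected.max():.0f} W")
```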
Audemard, Corinne; Kator, Howard I; Reece, Kimberly S
2018-08-20
High salinity relay of Eastern oysters (Crassostrea virginica) was evaluated as a post-harvest processing (PHP) method for reducing Vibrio vulnificus. This approach relies on the exposure of oysters to natural high salinity waters and preserves a live product compared to previously approved PHPs. Although results of prior studies evaluating high salinity relay as a means to decrease V. vulnificus levels were promising, validation of this method as a PHP following approved guidelines is required. This study was designed to provide data for validation of this method following Food and Drug Administration (FDA) PHP validation guidelines. During each of 3 relay experiments, oysters cultured at 3 different Chesapeake Bay sites of contrasting salinities (10-21 psu) were relayed without acclimation to high salinity waters (31-33 psu) for up to 28 days. Densities of V. vulnificus and densities of total and pathogenic Vibrio parahaemolyticus (as tdh-positive strains) were measured using an MPN-quantitative PCR approach. Overall, 9 lots of oysters were relayed, with 6 exhibiting initial V. vulnificus densities >10,000/g. As recommended by the FDA PHP validation guidelines, these lots reached both the 3.52 log reduction and the <30 MPN/g density requirements for V. vulnificus after 14 to 28 days of relay. Densities of total and pathogenic V. parahaemolyticus in relayed oysters were significantly lower than densities at the sites of origin, suggesting an additional benefit associated with high salinity relay. While relay did not have a detrimental effect on oyster condition, oyster mortality levels ranged from 2 to 61% after 28 days of relay. Although identification of the factors implicated in oyster mortality will require further examination, this study strongly supports the validation of high salinity relay as an effective PHP method to reduce levels of V. vulnificus in oysters to endpoint levels approved for human consumption. Copyright © 2018 Elsevier B.V. All rights reserved.
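The two endpoint criteria cited, a 3.52-log10 reduction and a final density below 30 MPN/g, are straightforward to check from pre- and post-relay MPN estimates. A tiny worked example with invented densities:

```python
# Checking PHP endpoint criteria from pre- and post-relay V. vulnificus densities (invented numbers).
import math

initial_mpn_per_g = 24_000.0    # hypothetical pre-relay density
final_mpn_per_g = 6.2           # hypothetical post-relay density

log_reduction = math.log10(initial_mpn_per_g) - math.log10(final_mpn_per_g)

meets_reduction = log_reduction >= 3.52      # FDA validation guideline criterion
meets_endpoint = final_mpn_per_g < 30        # endpoint density criterion

print(f"log10 reduction = {log_reduction:.2f}",
      "| reduction criterion met" if meets_reduction else "| reduction criterion NOT met",
      "| endpoint criterion met" if meets_endpoint else "| endpoint criterion NOT met")
```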
Carvalho, Suzana Papile Maciel; Brito, Liz Magalhães; Paiva, Luiz Airton Saavedra de; Bicudo, Lucilene Arilho Ribeiro; Crosato, Edgard Michel; Oliveira, Rogério Nogueira de
2013-01-01
Validation studies of physical anthropology methods in different population groups are extremely important, especially in cases in which population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. This study aimed to estimate the gender of skeletons by application of the method of Oliveira et al. (1995), previously used in a population sample from Northeast Brazil. The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. The results demonstrated that the application of the method of Oliveira et al. (1995) in this population achieved very different outcomes between genders, with 100% accuracy for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of the measurement data for the population analyzed allowed an accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. It was concluded that physical anthropology methods present a high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for different populations due to differences in ethnic patterns, which are directly related to phenotypic aspects. In this specific case, the method of Oliveira et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), prior methodological adjustment is recommended, as demonstrated in this study.
Singh, Sheelendra Pratap; Dwivedi, Nistha; Raju, Kanumuri Siva Rama; Taneja, Isha; Wahajuddin, Mohammad
2016-04-01
The United States Environmental Protection Agency has recommended estimating pyrethroid risk on the basis of cumulative exposure. For cumulative risk assessment, it would be useful to have a bioanalytical method for quantification of one or several pyrethroids simultaneously in a small sample volume to support toxicokinetic studies. Therefore, in the present study, a simple, sensitive and high-throughput ultraperformance liquid chromatography-tandem mass spectrometry method was developed and validated for simultaneous analysis of seven pyrethroids (fenvalerate, fenpropathrin, bifenthrin, lambda-cyhalothrin, cyfluthrin, cypermethrin and deltamethrin) in 100 µL of rat plasma. A simple single-step protein precipitation method was used for the extraction of target compounds. The total chromatographic run time of the method was 5 min. The chromatographic system used a Supelco C18 column and isocratic elution with a mobile phase consisting of methanol and 5 mM ammonium formate in the ratio of 90 : 10 (v/v). The mass spectrometer (API 4000) was operated in multiple reaction monitoring positive-ion mode using the electrospray ionization technique. The calibration curves were linear in the range of 7.8-2,000 ng/mL with correlation coefficients of ≥ 0.99. All validation parameters such as precision, accuracy, recovery, matrix effect and stability met the acceptance criteria according to the regulatory guidelines. The method was successfully applied to a toxicokinetic study of cypermethrin in rats. To the best of our knowledge, this is the first LC-MS-MS method for the simultaneous analysis of pyrethroids in rat plasma. This validated method can also be utilized, with minimal modification, for forensic and clinical toxicological applications due to its simplicity, sensitivity and rapidity. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.
Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy
2010-02-01
This paper illustrates the advantages that a fuzzy-based aggregation method can bring to the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for validation of chemical, biochemical, pharmaceutical and genetic methods have been developed, and ad hoc validation statistics are available and routinely used for in-house and inter-laboratory testing and decision-making. Fuzzy logic allows the information obtained from independent validation statistics to be summarised into one synthetic indicator of overall method performance. The microarray technology, introduced for simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper and applied to validation data for different genetically modified elements, and conclusions are drawn from the analytical results. The fuzzy-logic based rules were shown to be applicable to improve interpretation of results and facilitate overall evaluation of the multiplex method.
76 FR 28664 - Method 301-Field Validation of Pollutant Measurement Methods From Various Waste Media
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-18
... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 63 [OAR-2004-0080, FRL-9306-8] RIN 2060-AF00 Method 301--Field Validation of Pollutant Measurement Methods From Various Waste Media AGENCY: Environmental Protection Agency (EPA). ACTION: Final rule. SUMMARY: This action amends EPA's Method 301, Field Validation...
Woldetsadik, Mahlet Atakilt; Goggin, Kathy; Staggs, Vincent S; Wanyenze, Rhoda K; Beyeza-Kashesya, Jolly; Mindry, Deborah; Finocchario-Kessler, Sarah; Khanakwa, Sarah; Wagner, Glenn J
2016-06-01
With data from 400 HIV clients with fertility intentions and 57 HIV providers in Uganda, we evaluated the psychometrics of new client and provider scales measuring constructs related to safer conception methods (SCM) and safer conception counselling (SCC). Several forms of validity (i.e., content, face, and construct validity) were examined using standard methods including exploratory and confirmatory factor analysis. Internal consistency was established using Cronbach's alpha correlation coefficient. The final scales consisted of measures of attitudes towards use of SCM and delivery of SCC, including measures of self-efficacy and motivation to use SCM, and perceived community stigma towards childbearing. Most client and all provider measures had moderate to high internal consistency (alphas 0.60-0.94), most had convergent validity (associations with other SCM or SCC-related measures), and client measures had divergent validity (poor associations with depression). These findings establish preliminary psychometric properties of these scales and should facilitate future studies of SCM and SCC.
Sakai, Shinobu; Adachi, Reiko; Akiyama, Hiroshi; Teshima, Reiko
2013-06-19
A labeling system for food allergenic ingredients was established in Japan in April 2002. To monitor the labeling, the Japanese government announced official methods for detecting allergens in processed foods in November 2002. The official methods consist of quantitative screening tests using enzyme-linked immunosorbent assays (ELISAs) and qualitative confirmation tests using Western blotting or polymerase chain reactions (PCR). In addition, the Japanese government designated 10 μg protein/g food (the corresponding allergenic ingredient soluble protein weight/food weight), determined by ELISA, as the labeling threshold. To standardize the official methods, the criteria for the validation protocol were described in the official guidelines. This paper, which was presented at the Advances in Food Allergen Detection Symposium, ACS National Meeting and Expo, San Diego, CA, Spring 2012, describes the validation protocol outlined in the official Japanese guidelines, the results of interlaboratory studies for the quantitative detection method (ELISA for crustacean proteins) and the qualitative detection method (PCR for shrimp and crab DNAs), and the reliability of the detection methods.
Landscape scale estimation of soil carbon stock using 3D modelling.
Veronesi, F; Corstanje, R; Mayr, T
2014-07-15
Soil C is the largest pool of carbon in the terrestrial biosphere, and yet the processes of C accumulation, transformation and loss are poorly accounted for. This is due, in part, to the fact that soil C is not uniformly distributed through the soil depth profile, and most current landscape-level predictions of C do not adequately account for the vertical distribution of soil C. In this study, we apply a method based on simple soil-specific depth functions to map the soil C stock in three dimensions at landscape scale. We used soil C and bulk density data from the Soil Survey for England and Wales to map an area in the West Midlands region of approximately 13,948 km². We applied a method which describes the variation through the soil profile and interpolates this across the landscape using well-established soil drivers such as relief, land cover and geology. The results indicate that this mapping method can effectively reproduce the observed variation in the sampled soil profiles. The mapping results were validated using cross-validation and an independent validation. The cross-validation resulted in an R² of 36% for soil C and 44% for BULKD. These results are generally in line with previous validated studies. In addition, an independent validation was undertaken, comparing the predictions against the National Soil Inventory (NSI) dataset. The majority of the residuals of this validation are within ± 5% of soil C, indicating a high level of accuracy in replicating topsoil values. In addition, the results were compared to a previous study estimating the carbon stock of the UK. We discuss the implications of our results within the context of soil C loss factors such as erosion and the impact on regional C process models. Copyright © 2014 Elsevier B.V. All rights reserved.
Kramers, Cornelis; Derijks, Hieronymus J.; Wensing, Michel; Wetzels, Jack F. M.
2015-01-01
Background The Modification of Diet in Renal Disease (MDRD) formula is widely used in clinical practice to assess the correct drug dose. This formula is based on serum creatinine levels, which might be influenced by chronic diseases themselves or by their effects. We conducted a systematic review to determine the validity of the MDRD formula in specific patient populations with renal impairment: elderly, hospitalized and obese patients, and patients with cardiovascular disease, cancer, chronic respiratory diseases, diabetes mellitus, liver cirrhosis and human immunodeficiency virus. Methods and Findings We searched for articles in Pubmed published from January 1999 through January 2014. Selection criteria were (1) patients with a glomerular filtration rate (GFR) < 60 ml/min (/1.73 m²), (2) the MDRD formula compared with a gold standard and (3) statistical analysis focused on bias, precision and/or accuracy. Data extraction was done by the first author and checked by a second author. A bias of 20% or less, a precision of 30% or less and an accuracy expressed as P30% of 80% or higher were indicators of the validity of the MDRD formula. In total we included 27 studies. The number of patients included ranged from 8 to 1831. The gold standard and measurement method used varied across the studies. For none of the specific patient populations did the studies provide sufficient evidence of validity of the MDRD formula regarding the three parameters. For patients with diabetes mellitus and liver cirrhosis, hospitalized patients and elderly patients with moderate to severe renal impairment, we concluded that the MDRD formula is not valid. Limitations of the review are the lack of consideration of the method used to measure serum creatinine levels and of the type of gold standard used. Conclusion In several specific patient populations with renal impairment the use of the MDRD formula is not valid or has uncertain validity. PMID:25741695
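For context, a sketch of the 4-variable MDRD estimate (the widely used IDMS-traceable version) together with one reading of the bias, precision and P30 criteria applied in the review; the creatinine values and measured GFRs are hypothetical, and the exact definitions of bias and precision vary across the included studies:

```python
import numpy as np

def mdrd_egfr(scr_mg_dl, age_years, female=False, black=False):
    """4-variable MDRD study equation (IDMS-traceable), in mL/min/1.73 m^2."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def validity_metrics(measured_gfr, estimated_gfr):
    """Percentage bias, precision and P30 accuracy (one common reading of the
    review's criteria; definitions differ between studies)."""
    measured = np.asarray(measured_gfr, dtype=float)
    estimated = np.asarray(estimated_gfr, dtype=float)
    pct_error = 100.0 * (estimated - measured) / measured
    bias = np.median(pct_error)                       # % bias
    precision = np.std(pct_error, ddof=1)             # spread of % errors
    p30 = 100.0 * np.mean(np.abs(pct_error) <= 30.0)  # % of estimates within +/-30%
    return bias, precision, p30

print(f"example eGFR: {mdrd_egfr(1.8, 72, female=True):.0f} mL/min/1.73 m^2")
measured = [35, 48, 52, 28, 40]   # hypothetical gold-standard GFRs
estimated = [mdrd_egfr(s, a) for s, a in [(1.9, 70), (1.4, 65), (1.3, 60), (2.4, 75), (1.7, 68)]]
print("bias %.1f%%, precision %.1f%%, P30 %.0f%%" % validity_metrics(measured, estimated))
```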
ERIC Educational Resources Information Center
Ratti, Victoria; Vickerstaff, Victoria; Crabtree, Jason; Hassiotis, Angela
2017-01-01
Introduction: The Resident Choice Assessment Scale (RCAS) is used to assess choice availability for adults with intellectual disabilities (ID). The aim of the study was to explore the factor structure, construct validity, and internal consistency of the measure in community settings to further validate this tool. Method: 108 paid carers of adults…
Software validation applied to spreadsheets used in laboratories working under ISO/IEC 17025
NASA Astrophysics Data System (ADS)
Banegas, J. M.; Orué, M. W.
2016-07-01
Several documents deal with software validation. Nevertheless, most are too complex to be applied to validating spreadsheets - surely the most widely used software in laboratories working under ISO/IEC 17025. The method proposed in this work is intended to be applied directly to validate spreadsheets. It includes a systematic way to document requirements, operational aspects regarding validation, and a simple method to keep records of validation results and modification history. This method is currently used in an accredited calibration laboratory, where it has proved practical and efficient.
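A minimal sketch of the kind of check such a validation method formalizes: recompute each spreadsheet output independently and record pass/fail against a tolerance. The example formula (mean and expanded uncertainty of repeated readings), the reported cell values and the tolerance are assumptions for illustration, not details from the paper:

```python
# Independent re-computation check for a spreadsheet that averages repeated
# calibration readings and reports an expanded uncertainty (k=2).
import math

def reference_mean_and_u(readings, k=2.0):
    n = len(readings)
    mean = sum(readings) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
    return mean, k * s / math.sqrt(n)          # expanded uncertainty of the mean

# (case id, raw readings, value pair reported by the spreadsheet)
cases = [
    ("C1", [10.01, 10.03, 9.99, 10.02], (10.0125, 0.0171)),
    ("C2", [5.10, 5.12, 5.09, 5.11],    (5.1050, 0.0129)),
]
TOL = 5e-3  # acceptance tolerance for the comparison (illustrative)
for case_id, readings, reported in cases:
    expected = reference_mean_and_u(readings)
    ok = all(abs(e - r) <= TOL for e, r in zip(expected, reported))
    print(f"{case_id}: expected={expected}, spreadsheet={reported}, pass={ok}")
```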
New Sentinel-2 radiometric validation approaches (SEOM program)
NASA Astrophysics Data System (ADS)
Bruniquel, Véronique; Lamquin, Nicolas; Ferron, Stéphane; Govaerts, Yves; Woolliams, Emma; Dilo, Arta; Gascon, Ferran
2016-04-01
SEOM is an ESA program element, one objective of which is to launch state-of-the-art studies for the scientific exploitation of operational missions. Within this program, ESA awarded ACRI-ST and its partners Rayference and the National Physical Laboratory (NPL) an R&D study in early 2016 on the development and intercomparison of algorithms for validating the Sentinel-2 radiometric L1 data products beyond the baseline algorithms used operationally by the S2 Mission Performance Centre. In this context, several algorithms have been proposed and are currently in development. The first is based on the exploitation of Deep Convective Cloud (DCC) observations over ocean. This method allows an inter-band radiometric validation from the blue to the NIR (typically from B1 to B8a) against a reference band already validated, for example with the well-known Rayleigh method. Owing to their physical properties, DCCs appear from the remote sensing point of view as bright, cold tops and can be used as invariant targets to monitor the radiometric response degradation of the reflective solar bands. The DCC approach is statistical, i.e. the method must be applied to a large number of measurements to derive reliable statistics and reduce the impact of perturbing contributors. The second radiometric validation method is based on the exploitation of matchups combining concomitant in-situ measurements and Sentinel-2 observations. The in-situ measurements used here are acquired within the RadCalNet network. The validation is performed for the Sentinel-2 bands similar to the bands of the instruments equipping the validation site. The measurements from the Cimel CE 318 12-filter BRDF Sun Photometer recently installed at the Gobabeb site near the Namib desert are used for this method. A comprehensive verification of the calibration requires an analysis of MSI radiances over the full dynamic range, including low radiances, as extreme values are more subject to instrument response non-linearity. The third method developed in this project aims to address this point. It is based on a comparison of Sentinel-2 observations over coastal waters, which have low radiances, with corresponding Radiative Transfer (RT) simulations using AERONET-OC measurements. Finally, a last method is developed using RadCalNet measurements and Sentinel-2 observations to validate the radiometry of mid/low resolution sensors such as Sentinel-3/OLCI. The RadCalNet measurements are transferred from the RadCalNet sites to Pseudo Invariant Calibration Sites (PICS) using Sentinel-2, and these larger sites are then used to validate mid- and low-resolution sensors against the RadCalNet reference. For all the developed methods, an uncertainty budget is derived following QA4EO guidelines. The final step of this ESA project is dedicated to an Inter-comparison Workshop open to entities involved in Sentinel-2 radiometric validation activities. Blind inter-comparison tests over a series of images will be proposed and the results will be discussed during the workshop.
Clarsen, Benjamin; Myklebust, Grethe; Bahr, Roald
2013-05-01
Current methods for injury registration in sports injury epidemiology studies may substantially underestimate the true burden of overuse injuries due to a reliance on time-loss injury definitions. To develop and validate a new method for the registration of overuse injuries in sports. A new method, including a new overuse injury questionnaire, was developed and validated in a 13-week prospective study of injuries among 313 athletes from five different sports: cross-country skiing, floorball, handball, road cycling and volleyball. All athletes completed a questionnaire by email each week to register problems in the knee, lower back and shoulder. Standard injury registration methods were also used to record all time-loss injuries that occurred during the study period. The new method recorded 419 overuse problems in the knee, lower back and shoulder during the 3-month study period. Of these, 142 were classified as substantial overuse problems, defined as those leading to moderate or severe reductions in sports performance or participation, or to time loss. Each week, an average of 39% of athletes reported having overuse problems and 13% reported having substantial problems. In contrast, standard methods of injury registration recorded only 40 overuse injuries located in the same anatomical areas, the majority of which were of minimal or mild severity. Standard injury surveillance methods capture only a small percentage of the overuse problems affecting athletes, largely because few problems led to time loss from training or competition. The new method captured a more complete and nuanced picture of the burden of overuse injuries in this cohort.
Coussot, Gaëlle; Le Postollec, Aurélie; Faye, Clément; Dobrijevic, Michel
2018-04-15
The scope of this paper is to present a gold standard method to evaluate the functional activity of antibody (Ab)-based materials during the different phases of their development, after their exposure to forced degradation, or during routine quality control. Ab-based materials play a central role in the development of diagnostic devices, for example for screening or therapeutic target characterization, in formulation development, and in novel micro(nano)technology approaches to develop immunosensors useful for the analysis of trace substances in the pharmaceutical and food industries and in clinical and environmental fields. A very important aspect in diagnostic device development is the construction of its biofunctional surfaces. These Ab surfaces require biocompatibility, homogeneity, stability, specificity and functionality. Thus, this work describes the validation and applications of a unique ligand binding assay to directly perform the quantitative measurement of functional Ab binding sites immobilized on solid surfaces. The method, called the Antibody Anti-HorseRadish Peroxidase (A2HRP) method, uses a covalently coated anti-HRP antibody (anti-HRP Ab) and does not require a secondary Ab during the detection step. The A2HRP method was validated and gave reliable results over a wide range of absorbance values. The analyzed validation criteria were fulfilled as requested by the Food and Drug Administration (FDA) and European Medicines Agency (EMA) guidance for the validation of bioanalytical methods, with 1) an accuracy mean value within ±15% of the nominal value; 2) within-assay precision less than 7.1%; and 3) inter-day variability under 12.1%. With the A2HRP method, it is thus possible to quantify from 0.04 × 10¹² to 2.98 × 10¹² functional Ab binding sites immobilized on solid surfaces. The A2HRP method was validated according to FDA and EMA guidance, allowing the creation of a gold standard method to evaluate Ab surfaces for their resistance under laboratory constraints. Stability testing was described through forced degradation studies after exposure of Ab surfaces to storage, pH and aqueous-organic solvent mixture stresses. Copyright © 2018 Elsevier B.V. All rights reserved.
Comparative assessment of three standardized robotic surgery training methods.
Hung, Andrew J; Jayaratna, Isuru S; Teruya, Kara; Desai, Mihir M; Gill, Inderbir S; Goh, Alvin C
2013-10-01
To evaluate three standardized robotic surgery training methods, inanimate, virtual reality and in vivo, for their construct validity. To explore the concept of cross-method validity, where the relative performance of each method is compared. Robotic surgical skills were prospectively assessed in 49 participating surgeons who were classified as follows: 'novice/trainee': urology residents, previous experience <30 cases (n = 38) and 'experts': faculty surgeons, previous experience ≥30 cases (n = 11). Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool. A Kruskal-Wallis test was used to evaluate performance differences between novices and experts (construct validity). Spearman's correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity). Novice and expert surgeons had previously performed a median (range) of 0 (0-20) and 300 (30-2000) robotic cases, respectively (P < 0.001). Construct validity: experts consistently outperformed residents with all three methods (P < 0.001). Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = -0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = -0.8, P < 0.0001). Virtual reality performance and in vivo tissue performance were also found to be strongly correlated (ρ = 0.6, P < 0.001). We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment. We externally confirmed the construct validity of each featured training tool. © 2013 BJU International.
Haddad, Monoem; Chaouachi, Anis; Castagna, Carlo; Wong, Del P; Chamari, Karim
2012-01-01
Various studies have used objective heart rate (HR)-based methods to assess training load (TL). The most common methods are Banister's Training Impulse (TRIMP, which weights exercise duration using a weighting factor) and Edwards' TL (a summated HR zone score). Both methods use the direct physiological measure of HR as a fundamental part of the calculation. To eliminate the redundancy of using various methods to quantify the same construct (i.e., TL), it must be verified whether these methods are strongly convergent and interchangeable. Therefore, the aim of this study was to investigate the convergent validity between Banister's TRIMP and Edwards' TL for the assessment of internal TL. HR was recorded and analyzed during 10 training weeks of the preseason period in 10 male Taekwondo (TKD) athletes. TL was calculated using Banister's TRIMP and Edwards' TL. The Pearson product moment correlation coefficient was used to evaluate the convergent validity between the 2 methods for assessing TL. Very large to nearly perfect relationships were found between individual Banister's TRIMP and Edwards' TL values (r values from 0.80 to 0.99; p < 0.001). Pooled Banister's TRIMP and pooled Edwards' TL (pooled data n = 284) were very largely correlated (r = 0.89; p < 0.05; 95% confidence interval: 0.86-0.91). In conclusion, these findings suggest that these 2 objective methods, measuring a similar construct, are interchangeable.
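For reference, a sketch of the two training-load calculations being compared, using the commonly cited forms of Banister's TRIMP (exponential weighting of the heart-rate reserve fraction, male coefficient b = 1.92) and Edwards' summated HR-zone score; the heart-rate trace is hypothetical:

```python
import math

def banister_trimp(hr_series, hr_rest, hr_max, dt_min=1.0, b=1.92):
    """Banister's TRIMP: duration weighted by an exponential factor of the
    heart-rate reserve fraction (b = 1.92 is the commonly cited male value)."""
    trimp = 0.0
    for hr in hr_series:
        hrr = (hr - hr_rest) / (hr_max - hr_rest)
        if hrr > 0:
            trimp += dt_min * hrr * 0.64 * math.exp(b * hrr)
    return trimp

def edwards_tl(hr_series, hr_max, dt_min=1.0):
    """Edwards' TL: summated HR-zone score (50-100% HRmax zones weighted 1-5)."""
    total = 0.0
    for hr in hr_series:
        frac = hr / hr_max
        for zone, low in enumerate((0.5, 0.6, 0.7, 0.8, 0.9), start=1):
            if low <= frac < low + 0.1 or (zone == 5 and frac >= 0.9):
                total += dt_min * zone
    return total

# Hypothetical 60-min session sampled once per minute
hr = [120] * 20 + [150] * 25 + [175] * 15
print(f"Banister TRIMP: {banister_trimp(hr, hr_rest=60, hr_max=195):.1f}")
print(f"Edwards TL:     {edwards_tl(hr, hr_max=195):.1f}")
```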
Experimental Validation of Normalized Uniform Load Surface Curvature Method for Damage Localization
Jung, Ho-Yeon; Sung, Seung-Hoon; Jung, Hyung-Jo
2015-01-01
In this study, we experimentally validated the normalized uniform load surface (NULS) curvature method, which has been developed recently to assess damage localization in beam-type structures. The normalization technique allows for the accurate assessment of damage localization with greater sensitivity irrespective of the damage location. In this study, damage to a simply supported beam was numerically and experimentally investigated on the basis of the changes in the NULS curvatures, which were estimated from the modal flexibility matrices obtained from the acceleration responses under an ambient excitation. Two damage scenarios were considered for the single damage case as well as the multiple damage case by reducing the bending stiffness (EI) of the affected element(s). Numerical simulations were performed using MATLAB as a preliminary step. During the validation experiments, a series of tests were performed. It was found that the damage locations could be identified successfully without any false-positive or false-negative detections using the proposed method. For comparison, the damage detection performances were compared with those of two other well-known methods based on the modal flexibility matrix, namely, the uniform load surface (ULS) method and the ULS curvature method. It was confirmed that the proposed method is more effective for investigating the damage locations of simply supported beams than the two conventional methods in terms of sensitivity to damage under measurement noise. PMID:26501286
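A sketch of the quantities involved, under common definitions: the modal flexibility matrix assembled from mass-normalized mode shapes and natural frequencies, the uniform load surface as its row sums, and a central-difference curvature estimate. This illustrates the ingredients rather than the exact NULS normalization; the beam, mode shapes and frequencies below are hypothetical:

```python
import numpy as np

def modal_flexibility(mode_shapes, natural_freqs_hz):
    """F = sum_k phi_k phi_k^T / omega_k^2, for mass-normalized mode shapes."""
    omegas = 2.0 * np.pi * np.asarray(natural_freqs_hz, dtype=float)
    F = np.zeros((mode_shapes.shape[0], mode_shapes.shape[0]))
    for k in range(mode_shapes.shape[1]):
        phi = mode_shapes[:, k:k + 1]
        F += (phi @ phi.T) / omegas[k] ** 2
    return F

def uls_curvature(F, dx):
    """Uniform load surface (row sums of F) and its central-difference curvature."""
    uls = F.sum(axis=1)                       # deflection under a unit load at every DOF
    curv = np.zeros_like(uls)
    curv[1:-1] = (uls[2:] - 2.0 * uls[1:-1] + uls[:-2]) / dx ** 2
    return uls, curv

# Hypothetical 2-mode identification on an 11-point simply supported beam
x = np.linspace(0.0, 1.0, 11)
modes = np.column_stack([np.sin(np.pi * x), np.sin(2.0 * np.pi * x)])
F = modal_flexibility(modes, natural_freqs_hz=[8.0, 32.0])
uls, curv = uls_curvature(F, dx=x[1] - x[0])
print("ULS curvature (interior points):", np.round(curv[1:-1], 4))
```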
Validity of Willingness to Pay Measures under Preference Uncertainty.
Braun, Carola; Rehdanz, Katrin; Schmidt, Ulrich
2016-01-01
Recent studies in the marketing literature developed a new method for eliciting willingness to pay (WTP) with an open-ended elicitation format: the Range-WTP method. In contrast to the traditional approach of eliciting WTP as a single value (Point-WTP), Range-WTP explicitly allows for preference uncertainty in responses. The aim of this paper is to apply Range-WTP to the domain of contingent valuation and to test for its theoretical validity and robustness in comparison to the Point-WTP. Using data from two novel large-scale surveys on the perception of solar radiation management (SRM), a little-known technique for counteracting climate change, we compare the performance of both methods in the field. In addition to the theoretical validity (i.e. the degree to which WTP values are consistent with theoretical expectations), we analyse the test-retest reliability and stability of our results over time. Our evidence suggests that the Range-WTP method clearly outperforms the Point-WTP method.
Multimethod Investigation of Interpersonal Functioning in Borderline Personality Disorder
Stepp, Stephanie D.; Hallquist, Michael N.; Morse, Jennifer Q.; Pilkonis, Paul A.
2011-01-01
Even though interpersonal functioning is of great clinical importance for patients with borderline personality disorder (BPD), the comparative validity of different assessment methods for interpersonal dysfunction has not yet been tested. This study examined multiple methods of assessing interpersonal functioning, including self- and other-reports, clinical ratings, electronic diaries, and social cognitions in three groups of psychiatric patients (N=138): patients with (1) BPD, (2) another personality disorder, and (3) Axis I psychopathology only. Using dominance analysis, we examined the predictive validity of each method in detecting changes in symptom distress and social functioning six months later. Across multiple methods, the BPD group often reported higher interpersonal dysfunction scores compared to other groups. Predictive validity results demonstrated that self-report and electronic diary ratings were the most important predictors of distress and social functioning. Our findings suggest that self-report scores and electronic diary ratings have high clinical utility, as these methods appear most sensitive to change. PMID:21808661
Validity of Willingness to Pay Measures under Preference Uncertainty
Braun, Carola; Rehdanz, Katrin; Schmidt, Ulrich
2016-01-01
Recent studies in the marketing literature developed a new method for eliciting willingness to pay (WTP) with an open-ended elicitation format: the Range-WTP method. In contrast to the traditional approach of eliciting WTP as a single value (Point-WTP), Range-WTP explicitly allows for preference uncertainty in responses. The aim of this paper is to apply Range-WTP to the domain of contingent valuation and to test for its theoretical validity and robustness in comparison to the Point-WTP. Using data from two novel large-scale surveys on the perception of solar radiation management (SRM), a little-known technique for counteracting climate change, we compare the performance of both methods in the field. In addition to the theoretical validity (i.e. the degree to which WTP values are consistent with theoretical expectations), we analyse the test-retest reliability and stability of our results over time. Our evidence suggests that the Range-WTP method clearly outperforms the Point-WTP method. PMID:27096163
Chen, Yang; Reddy, Ravinder M; Li, Wenjing; Yettlla, Ramesh R; Lopez, Salvador; Woodman, Michael
2015-01-01
An HPLC method for simultaneous determination of vitamins A and D3 in fluid milk was developed and validated. Saponification and extraction conditions were studied for optimum recovery and simplicity. An RP HPLC system equipped with a C18 column and diode array detector was used for quantitation. The method was subjected to a single-laboratory validation using skim, 2% fat, and whole milk samples at concentrations of 50, 100, and 200% of the recommended fortification levels for vitamins A and D3 for Grade "A" fluid milk. The method quantitation limits for vitamins A and D3 were 0.0072 and 0.0026 μg/mL, respectively. Average recoveries between 94 and 110% and SD values ranging from 2.7 to 6.9% were obtained for both vitamins A and D3. The accuracy of the method was evaluated using a National Institute of Standards and Technology standard reference material (1849a) and proficiency test samples.
Stremel, Tatiana R De O; Domingues, Cinthia E; Zittel, Rosimara; Silva, Cleber P; Weinert, Patricia L; Monteiro, Franciele C; Campos, Sandro X
2018-04-03
This study aims to develop and validate a method to determine OCPs in fish tissues that minimizes the consumption of sample and reagents, using a modified QuEChERS procedure along with ultrasound, d-SPE and gas chromatography with an electron capture detector (GC-ECD), and avoiding sample pooling. Different factorial designs were employed to optimize the sample preparation phase. The validated method presented recoveries between 77.3% and 110.8%, with RSD lower than 13%, and the detection limits were between 0.24 and 2.88 μg kg⁻¹, revealing good sensitivity and accuracy. The method was satisfactorily applied to the analysis of tissues from different species of fish, and OCP residues were detected. The proposed method was shown to be effective for determining low concentrations of OCPs in fish tissues using a small sample mass (0.5 g), making sample analyses viable without the need for pooling.
Eticha, Tadele; Kahsay, Getu; Hailu, Teklebrhan; Gebretsadikan, Tesfamichael; Asefa, Fitsum; Gebretsadik, Hailekiros; Thangabalan, Boovizhikannan
2018-01-01
A simple extractive spectrophotometric technique has been developed and validated for the determination of miconazole nitrate in pure and pharmaceutical formulations. The method is based on the formation of a chloroform-soluble ion-pair complex between the drug and bromocresol green (BCG) dye in an acidic medium. The complex showed absorption maxima at 422 nm, and the system obeys Beer's law in the concentration range of 1-30 µg/mL with molar absorptivity of 2.285 × 10⁴ L/mol/cm. The composition of the complex was studied by Job's method of continuous variation, and the results revealed that the mole ratio of drug : BCG is 1 : 1. Full factorial design was used to optimize the effect of variable factors, and the method was validated based on the ICH guidelines. The method was applied for the determination of miconazole nitrate in real samples.
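As a numerical cross-check of the reported molar absorptivity, a short Beer's-law calculation of the absorbance expected across the stated linear range; the molar mass used for miconazole nitrate (~479.1 g/mol) is an approximate literature value and a 1 cm path length is assumed:

```python
# Expected absorbance from Beer's law, A = epsilon * c * l, across the reported range.
EPSILON = 2.285e4       # L/mol/cm (reported molar absorptivity)
PATH_CM = 1.0           # standard 1 cm cuvette assumed
MOLAR_MASS = 479.1      # g/mol, approximate value for miconazole nitrate

def absorbance(conc_ug_per_ml):
    conc_mol_per_l = (conc_ug_per_ml * 1e-6) / MOLAR_MASS * 1000.0  # g/mL -> mol/L
    return EPSILON * conc_mol_per_l * PATH_CM

for c in (1, 10, 30):   # µg/mL, spanning the reported Beer's-law range
    print(f"{c:>2} ug/mL -> A = {absorbance(c):.3f}")
```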
A Validation Study of the Revised Personal Safety Decision Scale
ERIC Educational Resources Information Center
Kim, HaeJung; Hopkins, Karen M.
2017-01-01
Objective: The purpose of this study is to examine the reliability and validity of an 11-item Personal Safety Decision Scale (PSDS) in a sample of child welfare workers. Methods: Data were derived from a larger cross-sectional online survey to a random stratified sample of 477 public child welfare workers in a mid-Atlantic State. An exploratory…
ERIC Educational Resources Information Center
Naji Qasem, Mamun Ali; Ahmad Gul, Showkeen Bilal
2014-01-01
The study was conducted to determine the effect of item direction (positive or negative) on the factorial construction and criterion-related validity of a Likert scale. The descriptive survey research method was used for the study, and the sample consisted of 510 undergraduate students selected using a random sampling technique. A scale developed by…
ERIC Educational Resources Information Center
McDougall, Christie M.
2013-01-01
The purpose of the mixed methods study was to develop and validate the CSIS-360, a 360-degree feedback assessment to measure competencies of school improvement specialists from multiple perspectives. The study consisted of eight practicing school improvement specialists from a variety of settings. The specialists nominated 23 constituents to…
ERIC Educational Resources Information Center
Longford, Nicholas T.
Operational procedures for the Graduate Record Examinations Validity Study Service are reviewed, with emphasis on the problem of frequent occurrence of negative coefficients in the fitted within-department regressions obtained by the empirical Bayes method of H. I. Braun and D. Jones (1985). Several alterations of the operational procedures are…
Frost, Rachael; Levati, Sara; McClurg, Doreen; Brady, Marian; Williams, Brian
2017-06-01
To systematically review methods for measuring adherence used in home-based rehabilitation trials and to evaluate their validity, reliability, and acceptability. In phase 1 we searched the CENTRAL database, NHS Economic Evaluation Database, and Health Technology Assessment Database (January 2000 to April 2013) to identify adherence measures used in randomized controlled trials of allied health professional home-based rehabilitation interventions. In phase 2 we searched the databases of MEDLINE, Embase, CINAHL, Allied and Complementary Medicine Database, PsycINFO, CENTRAL, ProQuest Nursing and Allied Health, and Web of Science (inception to April 2015) for measurement property assessments for each measure. Studies assessing the validity, reliability, or acceptability of adherence measures. Two reviewers independently extracted data on participant and measure characteristics, measurement properties evaluated, evaluation methods, and outcome statistics and assessed study quality using the COnsensus-based Standards for the selection of health Measurement INstruments checklist. In phase 1 we included 8 adherence measures (56 trials). In phase 2, from the 222 measurement property assessments identified in 109 studies, 22 high-quality measurement property assessments were narratively synthesized. Low-quality studies were used as supporting data. StepWatch Activity Monitor validly and acceptably measured short-term step count adherence. The Problematic Experiences of Therapy Scale validly and reliably assessed adherence to vestibular rehabilitation exercises. Adherence diaries had moderately high validity and acceptability across limited populations. The Borg 6 to 20 scale, Bassett and Prapavessis scale, and Yamax CW series had insufficient validity. Low-quality evidence supported use of the Joint Protection Behaviour Assessment. Polar A1 series heart monitors were considered acceptable by 1 study. Current rehabilitation adherence measures are limited. Some possess promising validity and acceptability for certain parameters of adherence, situations, and populations and should be used in these situations. Rigorous evaluation of adherence measures in a broader range of populations is needed. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Bias correction for selecting the minimal-error classifier from many machine learning models.
Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C
2014-11-15
Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that cannot be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by the inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate-sized real datasets and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction with smaller variance, and it has the additional advantage of extrapolating error estimates for larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier and accuracy. An R package 'MLbias' and all source files are publicly available. tsenglab.biostat.pitt.edu/software.htm. ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
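A sketch of the learning-curve idea behind the IPL correction: fit an inverse power law to cross-validation error rates obtained at increasing training-set sizes and extrapolate to larger n. The error rates and the parameterization below are illustrative, not the MLbias implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def ipl(n, a, b, c):
    """Inverse power law learning curve: error(n) = a * n^(-b) + c."""
    return a * np.power(n, -b) + c

# Hypothetical cross-validation error rates at increasing training-set sizes
sizes = np.array([20, 30, 40, 50, 60], dtype=float)
cv_err = np.array([0.34, 0.29, 0.26, 0.245, 0.235])

params, _ = curve_fit(ipl, sizes, cv_err, p0=(1.0, 0.5, 0.2), maxfev=10000)
a, b, c = params
print(f"fitted: a={a:.2f}, b={b:.2f}, asymptotic error c={c:.3f}")
# Extrapolate: what error to expect if more samples were recruited
for n in (80, 120, 200):
    print(f"predicted error at n={n}: {ipl(n, *params):.3f}")
```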
Ouyang, Hui; Guo, Yicheng; He, Mingzhen; Zhang, Jinlian; Huang, Xiaofang; Zhou, Xin; Jiang, Hongliang; Feng, Yulin; Yang, Shilin
2015-03-01
A simple, sensitive and specific liquid chromatography-tandem mass spectrometry method was developed and validated for the determination of Pulsatilla saponin D, a potential antitumor constituent isolated from Pulsatilla chinensis in rat plasma. Rat plasma samples were pretreated by protein precipitation with methanol. The method validation was performed in accordance with US Food and Drug Administration guidelines and the results met the acceptance criteria. The method was successfully applied to assess the pharmacokinetics and oral bioavailability of Pulsatilla saponin D in rats. Copyright © 2014 John Wiley & Sons, Ltd.
Validation of catchment models for predicting land-use and climate change impacts. 1. Method
NASA Astrophysics Data System (ADS)
Ewen, J.; Parkin, G.
1996-02-01
Computer simulation models are increasingly being proposed as tools capable of giving water resource managers accurate predictions of the impact of changes in land-use and climate. Previous validation testing of catchment models is reviewed, and it is concluded that the methods used do not clearly test a model's fitness for such a purpose. A new generally applicable method is proposed. This involves the direct testing of fitness for purpose, uses established scientific techniques, and may be implemented within a quality assured programme of work. The new method is applied in Part 2 of this study (Parkin et al., J. Hydrol., 175:595-613, 1996).
Propagation of sound in turbulent media
NASA Technical Reports Server (NTRS)
Wenzel, A. R.
1976-01-01
Perturbation methods commonly used to study the propagation of acoustic waves in turbulent media are reviewed. Emphasis is on those techniques which are applicable to problems involving long-range propagation in the atmosphere and ocean. Characteristic features of the various methods are illustrated by applying them to particular problems. It is shown that conventional perturbation techniques, such as the Born approximation, yield solutions which contain secular terms, and which therefore have a relatively limited range of validity. In contrast, it is found that solutions obtained with the aid of the Rytov method or the smoothing method do not contain secular terms, and consequently have a much greater range of validity.
Study on evaluation methods for Rayleigh wave dispersion characteristic
Shi, L.; Tao, X.; Kayen, R.; Shi, H.; Yan, S.
2005-01-01
The evaluation of the Rayleigh wave dispersion characteristic is the key step in detecting S-wave velocity structure. By comparing the dispersion curves directly with those from the spectral analysis of surface waves (SASW) method, rather than comparing the S-wave velocity structures, the validity and precision of the microtremor-array method (MAM) can be evaluated more objectively. The results from the China-US joint surface wave investigation at 26 sites in Tangshan, China, show that the MAM has the same precision as the SASW method at 83% of the 26 sites. The MAM is valid for Rayleigh wave dispersion characteristic testing and has great application potential for site S-wave velocity structure detection.
Development and Validation of Coaches' Interpersonal Style Questionnaire
ERIC Educational Resources Information Center
Pulido, Juan J.; Sánchez-Oliva, David; Leo, Francisco M.; Sánchez-Cano, Jorge; García-Calvo, Tomás
2018-01-01
Purpose: The objectives were to develop and validate the Coaches' Interpersonal Style Questionnaire. The Coaches' Interpersonal Style Questionnaire analyzes the interpersonal style adopted by coaches when implementing their strategy of supporting or thwarting athletes' basic psychological needs. Method: In Study 1, an exploratory factor analysis…
A Validation Study of the "School Leader Dispositions Inventory"[C
ERIC Educational Resources Information Center
Melton, Teri Denlea; Tysinger, Dawn; Mallory, Barbara; Green, James
2011-01-01
Although university-based school administrator preparation programs are required by accreditation agencies to assess the dispositions of candidates, valid and reliable methods for doing so remain scarce. "The School Leaders Disposition Inventory"[C] (SDLI) is proposed as an instrument that has promise for identifying leadership…
Bouchoucha, Mongia; Akrout, Mouna; Bellali, Hédia; Bouchoucha, Rim; Tarhouni, Fadwa; Mansour, Abderraouf Ben; Zouari, Béchir
2016-01-01
Estimation of food portion sizes has always been a challenge in dietary studies on free-living individuals. The aim of this work was to develop and validate a food photography manual to improve the accuracy of the estimated size of consumed food portions. A manual was compiled from digital photos of foods commonly consumed by the Tunisian population. The food was cooked and weighed before digital photographs were taken of three portion sizes. The manual was validated by comparing the 24-hour recall method (using photos) to the reference method [food weighing (FW)]. For both methods, the comparison covered food intake amounts as well as nutrient intakes. Validity was assessed by Bland-Altman limits of agreement. In total, 31 male and female volunteers aged 9-89 years participated in the study. We focused on eight food categories and compared their estimated amounts (using the 24-hour recall method) to those actually consumed (using FW). Animal products and sweets were underestimated, whereas pasta, bread, vegetables, fruits, and dairy products were overestimated. However, the differences between the two methods were not statistically significant except for pasta (p<0.05) and dairy products (p<0.05). The coefficient of correlation between the two methods was highly significant, ranging from 0.876 for pasta to 0.989 for dairy products. Nutrient intakes calculated for both methods showed insignificant differences except for fat (p<0.001) and dietary fiber (p<0.05). A highly significant correlation was observed between the two methods for all micronutrients. The agreement analysis highlights the lack of difference between the two methods. The difference between the 24-hour recall method using digital photos and the weighing method is acceptable. Our findings indicate that the food photography manual can be a useful tool for quantifying food portion sizes in epidemiological dietary surveys.
2014-01-01
Background Patient-reported outcome validation needs to achieve validity and reliability standards. Among reliability analysis parameters, test-retest reliability is an important psychometric property. Retested patients must be in a clinically stable condition. This is particularly problematic in palliative care (PC) settings because advanced cancer patients are prone to a faster rate of clinical deterioration. The aim of this study was to evaluate the methods by which multi-symptom and health-related quality of life (HRQoL) patient-reported outcomes (PROs) have been validated in oncological PC settings with regard to test-retest reliability. Methods A systematic search of PubMed (1966 to June 2013), EMBASE (1980 to June 2013), PsychInfo (1806 to June 2013), CINAHL (1980 to June 2013), and SCIELO (1998 to June 2013), and specific PRO databases was performed. Studies were included if they described a set of validation studies for an instrument developed to measure multi-symptom or multidimensional HRQoL in advanced cancer patients under PC. The COSMIN checklist was used to rate the methodological quality of the study designs. Results We identified 89 validation studies from 746 potentially relevant articles. From those 89 articles, 31 measured test-retest reliability and were included in this review. Upon critical analysis of the overall quality of the criteria used to determine the test-retest reliability, 6 (19.4%), 17 (54.8%), and 8 (25.8%) of these articles were rated as good, fair, or poor, respectively, and no article was classified as excellent. Multi-symptom instruments were retested over a shorter interval than the HRQoL instruments (median values 24 hours and 168 hours, respectively; p = 0.001). Validation studies that included objective confirmation of clinical stability in their design yielded better results for the test-retest analysis with regard to both pain and global HRQoL scores (p < 0.05). The quality of the statistical analysis and its description were of great concern. Conclusion Test-retest reliability has been infrequently and poorly evaluated. The confirmation of clinical stability was an important factor in our analysis, and we suggest that special attention be focused on clinical stability when designing a PRO validation study that includes advanced cancer patients under PC. PMID:24447633
Development and validation of an HPLC–MS/MS method to determine clopidogrel in human plasma
Liu, Gangyi; Dong, Chunxia; Shen, Weiwei; Lu, Xiaopei; Zhang, Mengqi; Gui, Yuzhou; Zhou, Qinyi; Yu, Chen
2015-01-01
A quantitative method for clopidogrel using online-SPE tandem LC–MS/MS was developed and fully validated according to the well-established FDA guidelines. The method achieves adequate sensitivity for pharmacokinetic studies, with lower limits of quantification (LLOQs) as low as 10 pg/mL. Chromatographic separations were performed on reversed-phase Kromasil Eternity-2.5-C18-UHPLC columns for both methods. Positive electrospray ionization in multiple reaction monitoring (MRM) mode was employed for signal detection, and a deuterated analogue (clopidogrel-d4) was used as the internal standard (IS). Adjustments in sample preparation, including the introduction of an online-SPE system, proved to be the most effective way to address analyte back-conversion in clinical samples. Pooled clinical samples (two levels) were prepared and successfully used as real-sample quality controls (QCs) in the validation of back-conversion testing under different conditions. The results showed that the real samples were stable at room temperature for 24 h. Linearity, precision, extraction recovery, matrix effect on spiked QC samples and stability tests on both spiked QCs and real-sample QCs stored under different conditions met the acceptance criteria. This online-SPE method was successfully applied to a bioequivalence study of 75 mg single-dose clopidogrel tablets in 48 healthy male subjects. PMID:26904399
Abu El-Enin, Mohammed Abu Bakr; Al-Ghaffar Hammouda, Mohammed El-Sayed Abd; El-Sherbiny, Dina Tawfik; El-Wasseef, Dalia Rashad; El-Ashry, Saadia Mahmoud
2016-02-01
A valid, sensitive and rapid spectrofluorimetric method has been developed and validated for determination of both tadalafil (TAD) and vardenafil (VAR) either in their pure form, in their tablet dosage forms or spiked in human plasma. This method is based on measurement of the native fluorescence of both drugs in acetonitrile at λem 330 and 470 nm after excitation at 280 and 275 nm for tadalafil and vardenafil, respectively. Linear relationships were obtained over the concentration range 4-40 and 10-250 ng/mL with a minimum detection of 1 and 3 ng/mL for tadalafil and vardenafil, respectively. Various experimental parameters affecting the fluorescence intensity were carefully studied and optimized. The developed method was applied successfully for the determination of tadalafil and vardenafil in bulk drugs and tablet dosage forms. Moreover, the high sensitivity of the proposed method permitted their determination in spiked human plasma. The developed method was validated in terms of specificity, linearity, lower limit of quantification (LOQ), lower limit of detection (LOD), precision and accuracy. The mean recoveries of the analytes in pharmaceutical preparations were in agreement with those obtained from the comparison methods, as revealed by statistical analysis of the obtained results using Student's t-test and the variance ratio F-test. Copyright © 2015 John Wiley & Sons, Ltd.
Clinical Validation of Heart Rate Apps: Mixed-Methods Evaluation Study
Stans, Jelle; Mortelmans, Christophe; Van Haelst, Ruth; Van Schelvergem, Gertjan; Pelckmans, Caroline; Smeets, Christophe JP; Lanssens, Dorien; De Cannière, Hélène; Storms, Valerie; Thijs, Inge M; Vaes, Bert; Vandervoort, Pieter M
2017-01-01
Background Photoplethysmography (PPG) is a proven way to measure heart rate (HR). This technology is already available in smartphones, which allows HR to be measured using only the smartphone. Given the widespread availability of smartphones, this creates a scalable way to enable mobile HR monitoring. An essential precondition is that these technologies are as reliable and accurate as the current clinical (gold) standards. At this moment, there is no consensus on a gold standard method for the validation of HR apps. This results in different validation processes that do not always reflect the true outcome of the comparison. Objective The aim of this paper was to investigate and describe the necessary elements in validating and comparing HR apps versus standard technology. Methods The FibriCheck (Qompium) app was used in two separate prospective nonrandomized studies. In the first study, the HR of the FibriCheck app was consecutively compared with 2 different Food and Drug Administration (FDA)-cleared HR devices: the Nonin oximeter and the AliveCor Mobile ECG. In the second study, a next step in validation was performed by comparing the beat-to-beat intervals of the FibriCheck app to a synchronized ECG recording. Results In the first study, the HR (BPM, beats per minute) of 88 random subjects consecutively measured with the 3 devices showed a correlation coefficient of .834 between FibriCheck and Nonin, .88 between FibriCheck and AliveCor, and .897 between Nonin and AliveCor. A one-way analysis of variance (ANOVA; P=.61) was performed to test the hypothesis that there were no significant differences between the HRs as measured by the 3 devices. In the second study, 20,298 R-R intervals (RRI)–peak-to-peak intervals (PPI) in milliseconds from 229 subjects were analyzed. This resulted in a positive correlation (rs=.993, root mean square error [RMSE]=23.04 ms, and normalized root mean square error [NRMSE]=0.012) between the PPI from FibriCheck and the RRI from the wearable ECG. There was no significant difference (P=.92) between these intervals. Conclusions Our findings suggest that the most suitable method for the validation of an HR app is a simultaneous measurement of the HR by the smartphone app and an ECG system, compared on the basis of beat-to-beat analysis. This approach could lead to more correct assessments of the accuracy of HR apps. PMID:28842392
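A sketch of the beat-to-beat agreement metrics reported in the second study (correlation, RMSE, NRMSE) for paired RRI/PPI series; the interval values are hypothetical, and the NRMSE normalization convention (here, the range of the reference intervals) is an assumption, since several conventions exist:

```python
import numpy as np
from scipy.stats import spearmanr

def beat_to_beat_agreement(rri_ms, ppi_ms):
    """Spearman correlation, RMSE and range-normalized RMSE between paired
    ECG R-R intervals and PPG peak-to-peak intervals."""
    rri = np.asarray(rri_ms, dtype=float)
    ppi = np.asarray(ppi_ms, dtype=float)
    rho, _ = spearmanr(rri, ppi)
    rmse = float(np.sqrt(np.mean((ppi - rri) ** 2)))
    nrmse = rmse / (rri.max() - rri.min())   # one possible normalization
    return rho, rmse, nrmse

# Hypothetical paired intervals in milliseconds
rri = [812, 830, 845, 900, 870, 860, 910, 795]
ppi = [810, 835, 840, 905, 868, 855, 915, 800]
rho, rmse, nrmse = beat_to_beat_agreement(rri, ppi)
print(f"Spearman rho={rho:.3f}, RMSE={rmse:.1f} ms, NRMSE={nrmse:.3f}")
```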
Silva-Rodríguez, Jesús; Aguiar, Pablo; Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor; Cortés, Julia; Garrido, Miguel; Pombar, Miguel; Ruibal, Alvaro
2014-05-01
Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) values for this effect in clinical routine. One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI based method. In addition, the 50 patients with higher extravasated doses were also evaluated using a threshold-based method. Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%) with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify this fraction of patients that might be corrected for paravenous injection effect. The authors propose the use of a manual ROI based method for estimating the effectively administered FDG dose and then correct SUV quantification in those patients fulfilling the proposed criterion.
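A sketch of the correction logic described above: estimate the activity remaining at the injection site from a manually drawn ROI, subtract it from the injected dose, and rescale the SUV by the ratio of injected to effectively administered dose. Decay correction and ROI delineation details are omitted, and the numbers are hypothetical:

```python
def extravasated_fraction(roi_activity_mbq, injected_dose_mbq):
    """Fraction of the injected dose remaining at the injection site,
    estimated from a manually drawn ROI (decay correction omitted here)."""
    return roi_activity_mbq / injected_dose_mbq

def corrected_suv(suv_uncorrected, injected_dose_mbq, roi_activity_mbq):
    """Rescale SUV using the effectively administered dose."""
    f = extravasated_fraction(roi_activity_mbq, injected_dose_mbq)
    effective_dose = injected_dose_mbq * (1.0 - f)
    return suv_uncorrected * injected_dose_mbq / effective_dose

# Hypothetical study: 300 MBq injected, 15 MBq measured in the extravasation ROI
suv = corrected_suv(suv_uncorrected=4.2, injected_dose_mbq=300.0, roi_activity_mbq=15.0)
print(f"corrected SUV: {suv:.2f}")   # 5% extravasation raises 4.2 to about 4.42
```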
Johansen, Ilona; Andreassen, Rune
2014-12-23
MicroRNAs (miRNAs) are an abundant class of endogenous small RNA molecules that downregulate gene expression at the post-transcriptional level. They play important roles by regulating genes that control multiple biological processes, and in recent years there has been increased interest in studying miRNA genes and miRNA gene expression. The most common method applied to study the expression of single genes is quantitative PCR (qPCR). However, before expression of mature miRNAs can be studied, robust qPCR methods (miRNA-qPCR) must be developed. This includes identification and validation of suitable reference genes. We are particularly interested in Atlantic salmon (Salmo salar). This is an economically important aquaculture species, but no reference genes dedicated for use in miRNA-qPCR methods have been validated for this species. Our aim was, therefore, to identify suitable reference genes for miRNA-qPCR methods in Salmo salar. We used a systematic approach in which we utilized similar studies in other species, some biological criteria, results from deep sequencing of small RNAs and, finally, experimental validation of candidate reference genes by qPCR to identify the most suitable reference genes. Ssa-miR-25-3p was identified as the most suitable single reference gene. The best combination of two reference genes was ssa-miR-25-3p and ssa-miR-455-5p. These two genes were constitutively and stably expressed across many different tissues. Furthermore, infectious salmon anaemia did not seem to affect their expression levels. These genes were amplified with high specificity and good efficiency, and the qPCR assays showed good linearity when applying a simple SYBR Green-based miRNA-qPCR method using miRNA gene-specific forward primers. We have identified suitable reference genes for miRNA-qPCR in Atlantic salmon. These results will greatly facilitate further studies on miRNA genes in this species. The reference genes identified are conserved genes that are identical in their mature sequence in many aquaculture species. Therefore, they may also be suitable as reference genes in other teleosts. Finally, the systematic approach used in our study successfully identified suitable reference genes, suggesting that this may be a useful strategy to apply in similar validation studies in other aquaculture species.
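For context on why stable reference genes matter, a sketch of the standard 2^-ΔΔCt relative quantification, in which the target miRNA is normalized to a reference miRNA and to a calibrator sample; the gene roles and Ct values below are hypothetical:

```python
def relative_expression(ct_target_sample, ct_ref_sample, ct_target_calib, ct_ref_calib):
    """2^-DeltaDeltaCt relative quantification: normalize the target gene to a
    stably expressed reference gene and to a calibrator sample."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calib = ct_target_calib - ct_ref_calib
    return 2.0 ** -(d_ct_sample - d_ct_calib)

# Hypothetical Ct values: target miRNA vs. a stable reference miRNA,
# infected tissue compared with a healthy calibrator sample
fold_change = relative_expression(ct_target_sample=24.1, ct_ref_sample=20.0,
                                  ct_target_calib=26.3, ct_ref_calib=20.1)
print(f"fold change vs. calibrator: {fold_change:.2f}")
```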
S, Vijay Kumar; Dhiman, Vinay; Giri, Kalpesh Kumar; Sharma, Kuldeep; Zainuddin, Mohd; Mullangi, Ramesh
2015-09-01
A novel, simple, specific, sensitive and reproducible high-performance liquid chromatography (HPLC) assay method has been developed and validated for the estimation of tofacitinib in rat plasma. The bioanalytical procedure involves extraction of tofacitinib and itraconazole (internal standard, IS) from rat plasma with a simple liquid-liquid extraction process. The chromatographic analysis was performed on a Waters Alliance system using gradient mobile phase conditions at a flow rate of 1.0 mL/min and a C18 column maintained at 40 ± 1 °C. The eluate was monitored using a UV detector set at 287 nm. Tofacitinib and the IS eluted at 6.5 and 8.3 min, respectively, and the total run time was 10 min. Method validation was performed as per US Food and Drug Administration guidelines and the results met the acceptance criteria. The calibration curve was linear over a concentration range of 182-5035 ng/mL (r(2) = 0.995). The intra- and inter-day precisions were in the range of 1.41-11.2 and 3.66-8.81%, respectively, in rat plasma. The validated HPLC method was successfully applied to a pharmacokinetic study in rats. Copyright © 2015 John Wiley & Sons, Ltd.
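A minimal sketch of the linearity assessment behind a calibration curve like the one reported (182-5035 ng/mL): fit an unweighted straight line of peak-area ratio against concentration, compute r², and back-calculate concentrations for accuracy checks. The concentrations and area ratios below are invented.

```python
# Illustrative calibration-linearity check; data are hypothetical.
import numpy as np

conc = np.array([182, 500, 1000, 2000, 3500, 5035], dtype=float)   # ng/mL
area_ratio = np.array([0.09, 0.25, 0.51, 1.02, 1.76, 2.55])        # analyte/IS

slope, intercept = np.polyfit(conc, area_ratio, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((area_ratio - pred) ** 2) / np.sum((area_ratio - area_ratio.mean()) ** 2)

back_calc = (area_ratio - intercept) / slope      # back-calculated concentrations
accuracy_pct = 100 * back_calc / conc             # % of nominal, for acceptance checks
print(f"r^2 = {r2:.4f}")
```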
Measuring coherence of computer-assisted likelihood ratio methods.
Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H
2015-04-01
Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
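One widely used primary performance characteristic for LR methods is the log-likelihood-ratio cost (Cllr), which penalizes both poor discrimination and poor calibration. The sketch below computes it for a handful of invented LR values; it is a generic illustration, not the authors' code.

```python
# Cllr: average log penalty for same-source LRs below 1 and different-source
# LRs above 1. LR values here are invented.
import math

def cllr(lrs_same_source, lrs_diff_source):
    ss = sum(math.log2(1 + 1 / lr) for lr in lrs_same_source) / len(lrs_same_source)
    ds = sum(math.log2(1 + lr) for lr in lrs_diff_source) / len(lrs_diff_source)
    return 0.5 * (ss + ds)

print(cllr(lrs_same_source=[50.0, 8.0, 120.0, 3.0],
           lrs_diff_source=[0.02, 0.5, 0.001, 0.1]))
```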
Heath, A L; Skeaff, C M; Gibson, R S
1999-04-01
The objective of this study was to validate two indirect methods for estimating the extent of menstrual blood loss against a reference method, to determine which method would be most appropriate for use in a population of young adult women. Thirty-two women aged 18 to 29 years (mean +/- SD, 22.4 +/- 2.8) were recruited by poster in Dunedin (New Zealand). Data are presented for 29 women. A recall method and a record method for estimating the extent of menstrual loss were validated against a weighed reference method. The Spearman rank correlation coefficient between blood loss assessed by Weighed Menstrual Loss and the Menstrual Record was rs = 0.47 (p = 0.012), and that between Weighed Menstrual Loss and the Menstrual Recall was rs = 0.61 (p = 0.001). The Record method correctly classified 66% of participants into the same tertile, grossly misclassifying 14%. The Recall method correctly classified 59% of participants, grossly misclassifying 7%. Reference method menstrual loss calculated for surrogate categories demonstrated a significant difference between the second and third tertiles for the Record method, and between the first and third tertiles for the Recall method. The Menstrual Recall method can differentiate between low and high levels of menstrual blood loss in young adult women, is quick to complete and analyse, and has a low participant burden.
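The two validity checks reported here, rank correlation against the weighed reference and cross-classification into tertiles, can be reproduced on any paired data set along the following lines; the sketch uses SciPy and invented values.

```python
# Illustrative validity checks: Spearman correlation and tertile agreement.
import numpy as np
from scipy.stats import spearmanr

reference = np.array([12, 40, 25, 60, 18, 75, 33, 50, 22])   # weighed loss (mL)
recall    = np.array([10, 55, 20, 58, 25, 70, 30, 45, 28])   # recall estimate

rho, p = spearmanr(reference, recall)

def tertile(x):
    cuts = np.percentile(x, [100 / 3, 200 / 3])
    return np.digitize(x, cuts)          # 0, 1 or 2

same_tertile = np.mean(tertile(reference) == tertile(recall))
gross_misclass = np.mean(np.abs(tertile(reference) - tertile(recall)) == 2)
print(f"rho = {rho:.2f} (p = {p:.3f}); same tertile = {same_tertile:.0%}")
```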
Consortium on Methods Evaluating Tobacco: Research Tools to Inform FDA Regulation of Snus.
Berman, Micah L; Bickel, Warren K; Harris, Andrew C; LeSage, Mark G; O'Connor, Richard J; Stepanov, Irina; Shields, Peter G; Hatsukami, Dorothy K
2017-10-04
The U.S. Food and Drug Administration (FDA) has purview over tobacco products. To set policy, the FDA must rely on sound science, yet most existing tobacco research methods have not been designed to specifically inform regulation. The NCI- and FDA-funded Consortium on Methods Evaluating Tobacco (COMET) was established to develop and assess valid and reliable methods for tobacco product evaluation. The goal of this paper is to describe these assessment methods using a U.S.-manufactured "snus" as the test product. In designing studies that could inform FDA regulation, COMET has taken a multidisciplinary approach that includes experimental animal models and a range of human studies that examine tobacco product appeal, addictiveness, and toxicity. This paper integrates COMET's findings over the last 4 years. Consistency in results was observed across the various studies, lending validity to our methods. Studies showed low abuse liability for snus and low levels of consumer demand. Toxicity was lower than that of cigarettes on some biomarkers but higher than that of medicinal nicotine. Given our study methods and the convergence of results, the snus that we tested as a potential modified risk tobacco product is likely to result in neither substantial public health harm nor benefit. This review describes the methods that were used to assess the appeal, abuse liability, and toxicity of snus. These methods included animal, behavioral economics, and consumer perception studies, and clinical trials. Across these varied methods, study results showed low abuse liability and appeal of the snus product we tested. In several studies, demand for snus was lower than for less toxic nicotine gum. The consistency and convergence of results across a range of multidisciplinary studies lends validity to our methods and suggests that promotion of snus as a modified risk tobacco product is unlikely to produce substantial public health benefit or harm. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Does rational selection of training and test sets improve the outcome of QSAR modeling?
Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander
2012-10-22
Prior to using a quantitative structure-activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
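As a concrete illustration of one of the rational division methods named here, the following is a minimal Kennard-Stone-style sketch: it seeds the training set with the two most distant compounds in descriptor space and then repeatedly adds the compound whose nearest selected neighbour is farthest away. The descriptor matrix is random, purely for illustration.

```python
# Illustrative Kennard-Stone-style split; descriptor data are random.
import numpy as np

def kennard_stone(X, n_train):
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    selected = list(np.unravel_index(np.argmax(dist), dist.shape))  # farthest pair
    while len(selected) < n_train:
        remaining = [i for i in range(len(X)) if i not in selected]
        # pick the compound whose nearest selected neighbour is farthest away
        next_idx = max(remaining, key=lambda i: dist[i, selected].min())
        selected.append(next_idx)
    return selected

X = np.random.default_rng(0).normal(size=(50, 5))   # 50 compounds, 5 descriptors
train_idx = kennard_stone(X, n_train=40)
test_idx = [i for i in range(len(X)) if i not in train_idx]
print(len(train_idx), len(test_idx))
```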
Ibrahim, F; Wahba, M E K; Magdy, G
2018-01-05
In this study, three novel, sensitive, simple and validated spectrophotometric and spectrofluorimetric methods have been proposed for the estimation of some important antimicrobial drugs. The first two methods have been proposed for the estimation of two important third-generation cephalosporin antibiotics, namely cefixime and cefdinir. Both methods were based on condensation of the primary amino group of the studied drugs with acetyl acetone and formaldehyde in acidic medium. The resulting products were measured by spectrophotometric (Method I) and spectrofluorimetric (Method II) tools. Regarding Method I, the absorbance was measured at 315 nm and 403 nm with linearity ranges of 5.0-140.0 and 10.0-100.0 μg/mL for cefixime and cefdinir, respectively. Meanwhile, in Method II, the produced fluorophore was measured at λem 488 nm or 491 nm after excitation at λex 410 nm, with linearity ranges of 0.20-10.0 and 0.20-36.0 μg/mL for cefixime and cefdinir, respectively. On the other hand, Method III was devoted to estimating nifuroxazide spectrofluorimetrically, depending on the formation of a highly fluorescent product upon reduction of the studied drug with zinc powder in acidic medium. Measurement of the fluorescent product was carried out at λem 335 nm following excitation at λex 255 nm, with a linearity range of 0.05 to 1.6 μg/mL. The developed methods were subjected to a detailed validation procedure; moreover, they were used for the estimation of the concerned drugs in their pharmaceuticals. It was found that there is good agreement between the obtained results and those obtained by the reported methods. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ibrahim, F.; Wahba, M. E. K.; Magdy, G.
2018-01-01
In this study, three novel, sensitive, simple and validated spectrophotometric and spectrofluorimetric methods have been proposed for the estimation of some important antimicrobial drugs. The first two methods have been proposed for the estimation of two important third-generation cephalosporin antibiotics, namely cefixime and cefdinir. Both methods were based on condensation of the primary amino group of the studied drugs with acetyl acetone and formaldehyde in acidic medium. The resulting products were measured by spectrophotometric (Method I) and spectrofluorimetric (Method II) tools. Regarding Method I, the absorbance was measured at 315 nm and 403 nm with linearity ranges of 5.0-140.0 and 10.0-100.0 μg/mL for cefixime and cefdinir, respectively. Meanwhile, in Method II, the produced fluorophore was measured at λem 488 nm or 491 nm after excitation at λex 410 nm, with linearity ranges of 0.20-10.0 and 0.20-36.0 μg/mL for cefixime and cefdinir, respectively. On the other hand, Method III was devoted to estimating nifuroxazide spectrofluorimetrically, depending on the formation of a highly fluorescent product upon reduction of the studied drug with zinc powder in acidic medium. Measurement of the fluorescent product was carried out at λem 335 nm following excitation at λex 255 nm, with a linearity range of 0.05 to 1.6 μg/mL. The developed methods were subjected to a detailed validation procedure; moreover, they were used for the estimation of the concerned drugs in their pharmaceuticals. It was found that there is good agreement between the obtained results and those obtained by the reported methods.
The Precision Efficacy Analysis for Regression Sample Size Method.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.
The general purpose of this study was to examine the efficiency of the Precision Efficacy Analysis for Regression (PEAR) method for choosing appropriate sample sizes in regression studies used for precision. The PEAR method, which is based on the algebraic manipulation of an accepted cross-validity formula, essentially uses an effect size to…
Cristale, Joyce; Lacorte, Silvia
2013-08-30
This study presents a multiresidue method for simultaneous extraction, clean-up and analysis of priority and emerging flame retardants in sediment, sewage sludge and dust. The studied compounds included eight polybrominated diphenyl ether congeners, nine new brominated flame retardants and ten organophosphorus flame retardants. The analytical method was based on ultrasound-assisted extraction with ethyl acetate/cyclohexane (5:2, v/v), clean-up with Florisil cartridges and analysis by gas chromatography coupled to tandem mass spectrometry (GC-EI-MS/MS). The method development and validation protocol included spiked samples, a certified reference material (for dust), and participation in an interlaboratory calibration. The method proved to be efficient and robust for the extraction and determination of the three families of flame retardants in the studied solid matrices. The method was applied to river sediment, sewage sludge and dust samples, and allowed detection of 24 of the 27 studied flame retardants. Organophosphate esters, BDE-209 and decabromodiphenyl ethane were the most ubiquitous contaminants detected. Copyright © 2013 Elsevier B.V. All rights reserved.
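Spike-recovery is the simplest of the validation checks listed (spiked samples, certified reference material, interlaboratory calibration); a minimal sketch with hypothetical numbers:

```python
# Illustrative spike-recovery calculation; values are hypothetical.
def percent_recovery(measured_spiked, measured_unspiked, spiked_amount):
    return 100.0 * (measured_spiked - measured_unspiked) / spiked_amount

# e.g. an analyte spiked at 50 ng/g into a sediment with 4 ng/g native content
print(percent_recovery(measured_spiked=51.0, measured_unspiked=4.0,
                       spiked_amount=50.0))   # ~94% recovery
```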
Grindem, Hege; Eitzen, Ingrid; Snyder-Mackler, Lynn; Risberg, May Arna
2013-01-01
Background Current methods measuring sports activity after anterior cruciate ligament (ACL) injury are commonly restricted to the most knee-demanding sport and do not consider participation in multiple sports. We therefore developed an online activity survey to prospectively record monthly participation in all major sports relevant to our patient group. Objective To assess the reliability, content validity, and concurrent validity of the survey, and to evaluate whether it provided more complete data on sports participation than a routine activity questionnaire. Methods One hundred and forty-five consecutively included ACL-injured patients were eligible for the reliability study. The retest of the online activity survey was performed two days after the test response had been recorded. A subsample of 88 ACL-reconstructed patients was included in the validity study. The ACL-reconstructed patients completed the online activity survey from the first to the twelfth postoperative month, and a routine activity questionnaire 6 and 12 months postoperatively. Results The online activity survey was highly reliable (κ ranging from 0.81 to 1). It contained all the common sports reported on the routine activity questionnaire. There was substantial agreement between the two methods on return to the preinjury main sport (κ = 0.71 and 0.74 at 6 and 12 months postoperatively). Compared with the routine questionnaire, the online activity survey showed that significantly more patients reported participating in running, cycling and strength training, and that patients reported participating in a greater number of sports. Conclusion The online activity survey is a highly reliable way of recording detailed changes in sports participation after ACL injury. The findings of this study support the content and concurrent validity of the survey, and suggest that the online activity survey can provide more complete data on sports participation than a routine activity questionnaire. PMID:23645830
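The agreement statistic used here is Cohen's kappa; a minimal sketch of how it would be computed for the return-to-main-sport question, with invented responses:

```python
# Illustrative kappa agreement between two recording methods; data invented.
from sklearn.metrics import cohen_kappa_score

survey_answers        = ["yes", "no", "yes", "yes", "no", "no", "yes", "no"]
questionnaire_answers = ["yes", "no", "yes", "no",  "no", "no", "yes", "no"]

print(cohen_kappa_score(survey_answers, questionnaire_answers))
```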
Harizanova, Stanislava N; Mateva, Nonka G; Tarnovska, Tanya Ch
2016-12-01
Burnout syndrome is a phenomenon that seems to be studied globally in relation to all types of populations. The staff in the system of correctional institutions in Bulgaria, however, is oddly left out of this tendency. There is no standardized model in Bulgaria that can be used to detect possible susceptibility to professional burnout. The methods available at present only register the irreversible changes that have already taken hold in the functioning of the individual. V. Boyko's method for burnout assessment allows clinicians to use an individual approach to patients and affords easy comparability of results with data from other psychodiagnostic instruments. Adaptation of assessment instruments to fit the specificities of a study population (linguistic, ethno-cultural, etc.) is obligatory so that the instrument can be used correctly and yield valid results. Validation is one of the most frequently used techniques to achieve this. The aim of the present study was to adapt and validate V. Boyko's burnout inventory for diagnosing burnout and assessing the severity of the burnout syndrome in correctional officers. We conducted a pilot study with 50 officers working in the Plovdiv Regional Correction Facility using a test-retest survey performed at an interval of 2 to 4 months. All participants completed the adapted questionnaire, translated into Bulgarian, voluntarily and anonymously. Statistical analysis was performed using SPSS v.17. We found a mild-to-strong statistically significant correlation (P<0.01) across all subscales between the most frequently used questionnaire for assessing the burnout syndrome, the Maslach Burnout Inventory, and the tool we propose here. The high Cronbach's α coefficient (α=0.94) and Spearman-Brown coefficient (rsb=0.86), and the low mean between-item correlation (r=0.30), demonstrated the instrument's good reliability and validity. With the validation presented herein we offer a highly reliable Bulgarian variant of Boyko's method for burnout assessment and research.
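For reference, Cronbach's alpha for an item-by-respondent score matrix can be computed directly from its variance components; a minimal sketch with simulated scores (not the study data):

```python
# Cronbach's alpha from an item score matrix; the scores below are random,
# so alpha will be near zero here (real scale items are correlated).
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

scores = np.random.default_rng(1).integers(0, 5, size=(50, 20))  # 50 officers, 20 items
print(cronbach_alpha(scores))
```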
Makris, Susan L.; Raffaele, Kathleen; Allen, Sandra; Bowers, Wayne J.; Hass, Ulla; Alleva, Enrico; Calamandrei, Gemma; Sheets, Larry; Amcoff, Patric; Delrue, Nathalie; Crofton, Kevin M.
2009-01-01
Objective We conducted a review of the history and performance of developmental neurotoxicity (DNT) testing in support of the finalization and implementation of the Organisation for Economic Co-operation and Development (OECD) DNT test guideline 426 (TG 426). Information sources and analysis In this review we summarize the extensive scientific efforts that form the foundation for this testing paradigm, including basic neurotoxicology research, interlaboratory collaborative studies, expert workshops, and validation studies, and we address the relevance, applicability, and use of the DNT study in risk assessment. Conclusions The OECD DNT guideline represents the best available science for assessing the potential for DNT in human health risk assessment, and data generated with this protocol are relevant and reliable for the assessment of these end points. The test methods used have been subjected to an extensive history of international validation, peer review, and evaluation, which is contained in the public record. The reproducibility, reliability, and sensitivity of these methods have been demonstrated, using a wide variety of test substances, in accordance with OECD guidance on the validation and international acceptance of new or updated test methods for hazard characterization. Multiple independent, expert scientific peer reviews affirm these conclusions. PMID:19165382
Novianti, Putri W; Roes, Kit C B; Eijkemans, Marinus J C
2014-01-01
Classification methods used in microarray studies of gene expression are diverse in the way they deal with the underlying complexity of the data, as well as in the technique used to build the classification model. The MAQC-II study on cancer classification problems found that performance was affected by factors such as the classification algorithm, cross-validation method, number of genes, and gene selection method. In this paper, we study the hypothesis that the disease under study significantly determines which method is optimal, and that, additionally, sample size, class imbalance, type of medical question (diagnostic, prognostic or treatment response), and microarray platform are potentially influential. A systematic literature review was used to extract the information from 48 published articles on non-cancer microarray classification studies. The impact of the various factors on the reported classification accuracy was analyzed through random-intercept logistic regression. The type of medical question and the method of cross-validation dominated the explained variation in accuracy among studies, followed by disease category and microarray platform. In total, 42% of the between-study variation was explained by all the study-specific and problem-specific factors that we studied together.
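One way to approximate the random-intercept analysis described is to model the logit of reported accuracy with study-level factors as fixed effects and a random intercept per study. The sketch below uses statsmodels' linear mixed model on the logit scale as a stand-in for the authors' random-intercept logistic regression; the data frame, column names and values are all hypothetical.

```python
# Illustrative random-intercept model on logit-transformed accuracy.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "study_id": rng.integers(1, 13, n),
    "question_type": rng.choice(["diagnostic", "prognostic", "treatment"], n),
    "cv_method": rng.choice(["LOOCV", "10-fold", "holdout"], n),
    "accuracy": rng.uniform(0.55, 0.95, n),
})
df["logit_acc"] = np.log(df["accuracy"] / (1 - df["accuracy"]))

# Random intercept per study; question type and CV method as fixed effects.
model = smf.mixedlm("logit_acc ~ question_type + cv_method", df,
                    groups=df["study_id"])
result = model.fit()
print(result.summary())
```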
Scott, Tannath J; Black, Cameron R; Quinn, John; Coutts, Aaron J
2013-01-01
The purpose of this study was to examine and compare the criterion validity and test-retest reliability of the CR10 and CR100 rating of perceived exertion (RPE) scales for team sport athletes who undertake high-intensity, intermittent exercise. Twenty-one male Australian football (AF) players (age: 19.0 ± 1.8 years, body mass: 83.92 ± 7.88 kg) participated in the first part (part A) of this study, which examined the construct validity of the session-RPE (sRPE) method for quantifying training load in AF. Ten male athletes (age: 16.1 ± 0.5 years) participated in the second part of the study (part B), which compared the test-retest reliability of the CR10 and CR100 RPE scales. In part A, the validity of the sRPE method was assessed by examining the relationships between sRPE and objective measures of internal (i.e., heart rate) and external training load (i.e., distance traveled) collected from AF training sessions. Part B of the study assessed the reliability of sRPE by examining the test-retest reliability of sRPE during 3 different intensities of controlled intermittent running (10, 11.5, and 13 km·h(-1)). Results from part A demonstrated strong correlations for CR10- and CR100-derived sRPE with measures of internal training load (Banister's TRIMP and Edwards' TRIMP) (CR10: r = 0.83 and 0.83; CR100: r = 0.80 and 0.81, p < 0.05). Correlations between sRPE and external training load (distance, higher speed running and player load) for both the CR10 (r = 0.81, 0.71, and 0.83) and CR100 (r = 0.78, 0.69, and 0.80) were significant (p < 0.05). Results from part B demonstrated poor reliability for both the CR10 (31.9% CV) and CR100 (38.6% CV) RPE scales after short bouts of intermittent running. Collectively, these results suggest both CR10- and CR100-derived sRPE methods have good construct validity for assessing training load in AF. The poor levels of reliability revealed under field testing indicate that the sRPE method may not be sensitive enough to detect small changes in exercise intensity during brief intermittent running bouts. Despite this limitation, sRPE remains a valid method to quantify training loads in high-intensity, intermittent team sport.
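For context, session-RPE load is simply the CR10 (or CR100) rating multiplied by session duration, and Edwards' TRIMP weights time in five heart-rate zones by 1-5; a sketch of how the two would be correlated, with invented sessions:

```python
# Illustrative sRPE vs. Edwards' TRIMP comparison; all numbers are invented.
import numpy as np
from scipy.stats import pearsonr

# sRPE = perceived exertion (CR10) x session duration in minutes
rpe_cr10     = np.array([4, 6, 7, 5, 8, 3])
duration_min = np.array([60, 75, 90, 60, 80, 45])
srpe = rpe_cr10 * duration_min

# Edwards' TRIMP: minutes in each of five HR zones, weighted 1-5
minutes_in_zone = np.array([
    [10, 20, 20,  8,  2],
    [ 5, 25, 30, 10,  5],
    [ 5, 20, 35, 20, 10],
    [15, 25, 15,  4,  1],
    [ 5, 15, 30, 20, 10],
    [20, 15,  8,  2,  0],
])
edwards_trimp = minutes_in_zone @ np.array([1, 2, 3, 4, 5])

r, p = pearsonr(srpe, edwards_trimp)
print(f"r = {r:.2f}")
```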
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekechukwu, A.
This document proposes to provide a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers, and books reviewed is given in Appendix 1. Available validation documents and guides are listed in the appendix; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of validation at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all documents were published in English.
Validating Analytical Protocols to Determine Selected Pesticides and PCBs Using Routine Samples.
Pindado Jiménez, Oscar; García Alonso, Susana; Pérez Pastor, Rosa María
2017-01-01
This study aims to provide recommendations concerning the validation of analytical protocols using routine samples. It is intended as a case study on how to validate analytical methods in different environmental matrices. In order to analyze the selected compounds (pesticides and polychlorinated biphenyls) in two different environmental matrices, the current work performed and validated two analytical procedures by GC-MS. A description is given of the validation of the two protocols through the analysis of more than 30 samples of water and sediments collected over nine months. The present work also addresses the uncertainty associated with both analytical protocols. In detail, the uncertainty for the water samples was estimated through a conventional approach. However, for the sediment matrix, the estimation of proportional/constant bias is also included because of its inhomogeneity. Results for the sediment matrix are reliable, showing 25-35% analytical variability associated with intermediate conditions. The analytical methodology for the water matrix determines the selected compounds with acceptable recoveries, and the combined uncertainty ranges between 20 and 30%. Analysis of routine samples is rarely applied to assess the trueness of novel analytical methods, and until now this approach had not been focused on organochlorine compounds in environmental matrices.
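A minimal sketch of the conventional top-down combination implied here: precision and bias contributions are combined in quadrature and expanded with a coverage factor. The relative uncertainties below are hypothetical.

```python
# Illustrative combined and expanded uncertainty; values are hypothetical.
import math

u_precision = 0.12   # relative std. uncertainty from intermediate precision
u_bias      = 0.09   # relative std. uncertainty from recovery/bias studies

u_combined = math.sqrt(u_precision ** 2 + u_bias ** 2)
U_expanded = 2 * u_combined          # coverage factor k = 2 (~95% confidence)
print(f"combined ~ {100 * u_combined:.0f}%, expanded ~ {100 * U_expanded:.0f}%")
```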
Degrees of separation as a statistical tool for evaluating candidate genes.
Nelson, Ronald M; Pettersson, Mats E
2014-12-01
Selection of candidate genes is an important step in the exploration of complex genetic architecture. The number of gene networks available is increasing, and these can provide information to help with candidate gene selection. It is currently common to use the degree of connectedness in gene networks as validation in genome-wide association (GWA) and quantitative trait locus (QTL) mapping studies. However, this approach can produce misleading results if it is not validated properly. Here we present a method and tool for validating gene pairs from GWA studies given the context of the network in which they co-occur. It ensures that proposed interactions and gene associations are not statistical artefacts inherent to the specific gene network architecture. The CandidateBacon package provides an easy and efficient method to calculate the average degree of separation (DoS) between pairs of genes in currently available gene networks. We show how these empirical estimates of average connectedness are used to validate candidate gene pairs. Validating interacting genes by comparing their connectedness with the average connectedness in the gene network will provide support for such interactions by utilising the growing amount of gene network information available. Copyright © 2014 Elsevier Ltd. All rights reserved.
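The core idea can be sketched with networkx: compute the shortest-path distance (degrees of separation) for a candidate gene pair and compare it with the average over random pairs in the same network. This is an illustrative toy network, not CandidateBacon itself or one of the real gene networks.

```python
# Illustrative degrees-of-separation comparison on a random stand-in network.
import itertools
import random
import networkx as nx

G = nx.erdos_renyi_graph(n=200, p=0.03, seed=1)   # stand-in "gene network"

def degrees_of_separation(graph, a, b):
    try:
        return nx.shortest_path_length(graph, a, b)
    except nx.NetworkXNoPath:
        return float("inf")

candidate_pair = (3, 57)                           # hypothetical GWA hit pair
random.seed(2)
random_pairs = random.sample(list(itertools.combinations(G.nodes, 2)), 500)
background = [degrees_of_separation(G, a, b) for a, b in random_pairs]
finite = [d for d in background if d != float("inf")]

print("candidate DoS:", degrees_of_separation(G, *candidate_pair))
print("network average DoS:", sum(finite) / len(finite))
```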
Solar-Diesel Hybrid Power System Optimization and Experimental Validation
NASA Astrophysics Data System (ADS)
Jacobus, Headley Stewart
As of 2008, 1.46 billion people, or 22 percent of the world's population, were without electricity. Many of these people live in remote areas where decentralized generation is the only method of electrification. Most mini-grids are powered by diesel generators, but new hybrid power systems are becoming a reliable way to incorporate renewable energy while also reducing total system cost. This thesis quantifies the measurable operational costs for an experimental hybrid power system in Sierra Leone. Two software programs, Hybrid2 and HOMER, are used during the system design and subsequent analysis. Experimental data from the installed system are used to validate the two programs and to quantify the savings created by each component within the hybrid system. This thesis bridges the gap between design optimization studies, which frequently lack subsequent validation, and experimental hybrid system performance studies.
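A sketch of the kind of comparison used to validate simulation tools such as HOMER or Hybrid2 against field data: compute a relative error between predicted and measured energy. The numbers are invented, not measurements from the installed system.

```python
# Illustrative model-vs-measurement validation metric; data are invented.
import numpy as np

measured_kwh  = np.array([41.2, 38.7, 44.0, 39.5, 42.8])   # field measurements
predicted_kwh = np.array([43.0, 37.9, 45.5, 41.0, 44.1])   # software prediction

mape = 100 * np.mean(np.abs(predicted_kwh - measured_kwh) / measured_kwh)
print(f"mean absolute percentage error = {mape:.1f}%")
```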