Sample records for validation methods group

  1. Validity of body composition methods across ethnic population groups.

    PubMed

    Deurenberg, P; Deurenberg-Yap, M

    2003-10-01

    Most in vivo body composition methods rely on assumptions that may vary among different population groups as well as within the same population group. The assumptions are based on in vitro body composition (carcass) analyses. The majority of body composition studies were performed on Caucasians, and much of the information on the validity of methods and assumptions was available only for this ethnic group. It is often assumed that these assumptions are also valid for other ethnic groups. However, if apparent differences across ethnic groups in body composition 'constants' and body composition 'rules' are not taken into account, biased information on body composition will result. This in turn may lead to misclassification of obesity or underweight at an individual as well as a population level. There is a need for more cross-ethnic population studies on body composition. Such studies should be carried out carefully, with adequate methodology and standardization, for the obtained information to be valuable.

  2. Validation Methods for Fault-Tolerant Avionics and Control Systems, Working Group Meeting 1

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The proceedings of the first working group meeting on validation methods for fault tolerant computer design are presented. The state of the art in fault tolerant computer validation was examined in order to provide a framework for future discussions concerning research issues for the validation of fault tolerant avionics and flight control systems. The development of positions concerning critical aspects of the validation process is described.

  3. Mixed group validation: a method to address the limitations of criterion group validation in research on malingering detection.

    PubMed

    Frederick, R I

    2000-01-01

    Mixed group validation (MGV) is offered as an alternative to criterion group validation (CGV) to estimate the true positive and false positive rates of tests and other diagnostic signs. CGV requires perfect confidence about each research participant's status with respect to the presence or absence of pathology. MGV determines diagnostic efficiencies based on group data; knowing an individual's status with respect to pathology is not required. MGV can use relatively weak indicators to validate better diagnostic signs, whereas CGV requires perfect diagnostic signs to avoid error in computing true positive and false positive rates. The process of MGV is explained, and a computer simulation demonstrates the soundness of the procedure. MGV of the Rey 15-Item Memory Test (Rey, 1958) for 723 pre-trial criminal defendants resulted in higher estimates of true positive rates and lower estimates of false positive rates as compared with prior research conducted with CGV. The author demonstrates how MGV addresses all the criticisms Rogers (1997b) outlined for differential prevalence designs in malingering detection research. Copyright 2000 John Wiley & Sons, Ltd.
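
The core of MGV can be shown with two groups whose base rates of pathology differ: if group i has base rate p_i and the diagnostic sign fires at observed rate r_i, then r_i = p_i·TPR + (1 - p_i)·FPR, and two groups give two linear equations in the two unknowns. A minimal sketch (the base rates and sign rates below are illustrative, not the paper's data):

```python
# Mixed group validation (MGV) sketch: recover a sign's true-positive and
# false-positive rates from two groups with different (but known or
# estimated) base rates, without classifying any individual.
#   r_i = p_i * TPR + (1 - p_i) * FPR   for groups i = 1, 2

def mgv_rates(p1, r1, p2, r2):
    """Solve the two-group linear system for (TPR, FPR)."""
    if p1 == p2:
        raise ValueError("base rates must differ for the system to be solvable")
    tpr = ((1 - p2) * r1 - (1 - p1) * r2) / (p1 - p2)
    fpr = (p2 * r1 - p1 * r2) / (p2 - p1)
    return tpr, fpr

# Illustrative groups: 70% vs 20% base rate; the sign fires in 59% and 24%.
tpr, fpr = mgv_rates(0.70, 0.59, 0.20, 0.24)
print(round(tpr, 2), round(fpr, 2))  # 0.8 0.1
```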

  4. Using Focus Groups to Validate a Pharmacy Vaccination Training Program.

    PubMed

    Bushell, Mary; Morrissey, Hana; Ball, Patrick

    2015-06-12

    Introduction: Focus group methodology is commonly used to quickly collate integrated views from a variety of stakeholders. This paper provides an example of how focus groups can be employed to collate expert opinion informing amendments to a newly developed training program for integration into undergraduate pharmacy curricula. Materials and methods: Four focus groups were conducted, across three continents, to determine the appropriateness and reliability of a developed vaccination training program with nested injection skills training. All focus groups comprised experts in the fields of vaccination, medicine and/or pharmacy. Results: Themes that emerged across the focus groups informed amendments giving rise to a validated version of the training program. Discussion: The rigorous validation of the vaccination training program offers generalizable lessons to inform the design and validation of future training programs intended for the health sector and/or pharmacy curricula. Using the knowledge and experience of focus group participants fostered collaborative problem solving and validation of material and concept development. The group dynamics of a focus group allowed synthesis of feedback in an inter-professional manner. Conclusions: This paper demonstrates how focus groups can be structured and used by health researchers to validate a newly developed training program.

  5. Development and validation of an automated, microscopy-based method for enumeration of groups of intestinal bacteria.

    PubMed

    Jansen, G J; Wildeboer-Veloo, A C; Tonk, R H; Franks, A H; Welling, G W

    1999-09-01

    An automated microscopy-based method using fluorescently labelled 16S rRNA-targeted oligonucleotide probes directed against the predominant groups of intestinal bacteria was developed and validated. The method makes use of the Leica 600HR image analysis system, a Kodak MegaPlus camera model 1.4 and a servo-controlled Leica DM/RXA ultra-violet microscope. Software for automated image acquisition and analysis was developed and tested. The performance of the method was validated using a set of four fluorescent oligonucleotide probes: a universal probe for the detection of all bacterial species, one probe specific for Bifidobacterium spp., a digenus probe specific for Bacteroides spp. and Prevotella spp., and a trigenus probe specific for Ruminococcus spp., Clostridium spp. and Eubacterium spp. A nucleic acid stain, 4',6-diamidino-2-phenylindole (DAPI), was also included in the validation. In order to quantify the assay error, one faecal sample was measured 20 times using each separate probe. Thereafter, faecal samples from 20 different volunteers were measured following the same procedure, in order to quantify the error due to individual-related differences in gut flora composition. It was concluded that the combination of automated microscopy and fluorescent whole-cell hybridisation enables significant distinctions in gut flora composition between volunteers. With this method it is possible to process 48 faecal samples overnight, with coefficients of variation ranging from 0.07 to 0.30.

  6. Validation of a multi-residue method for the determination of several antibiotic groups in honey by LC-MS/MS.

    PubMed

    Bohm, Detlef A; Stachel, Carolin S; Gowik, Petra

    2012-07-01

    The presented multi-residue method was developed for the confirmation of 37 antibiotic substances from six antibiotic groups: macrolides, lincosamides, quinolones, tetracyclines, pleuromutilines and diamino-pyrimidine derivatives. All substances were analysed simultaneously in a single analytical run with the same procedure, including extraction with buffer, clean-up by solid-phase extraction, and measurement by liquid chromatography tandem mass spectrometry in ESI+ mode. The method was validated on the basis of an in-house validation concept with factorial design, combining seven factors to check robustness in a concentration range of 5-50 μg kg(-1). The honeys used were of different types with regard to colour and origin. The values calculated for the validation parameters were acceptable and in agreement with the criteria of Commission Decision 2002/657/EC: decision limit CCα (range, 7.5-12.9 μg kg(-1)), detection capability CCβ (range, 9.4-19.9 μg kg(-1)), within-laboratory reproducibility RSD(wR) (<20%, except tulathromycin at 23.5% and tylvalosin at 21.4%), repeatability RSD(r) (<20%, except tylvalosin at 21.1%), and recovery (range, 92-106%). The validation results showed that the method was applicable for the residue analysis of antibiotics in honey, for substances with and without recommended concentrations, although some changes had been tested during validation to determine the robustness of the method.
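
The decision limit and detection capability cited above follow a simple scheme under Commission Decision 2002/657/EC. One common variant for a substance with a permitted or recommended concentration (RC) sets CCα = RC + 1.64·s and CCβ = CCα + 1.64·s, with s the within-laboratory reproducibility standard deviation at RC. A minimal sketch under that assumption (the RC and s values are illustrative, not the paper's):

```python
# CCα/CCβ sketch per Commission Decision 2002/657/EC, for a substance
# with a recommended concentration (RC):
#   CCα = RC + 1.64 * s_wR   (decision limit, alpha = 5%)
#   CCβ = CCα + 1.64 * s_wR  (detection capability, beta = 5%)
# s_wR is the within-lab reproducibility SD at RC.

def cc_alpha_beta(rc, s_wr):
    cc_alpha = rc + 1.64 * s_wr
    cc_beta = cc_alpha + 1.64 * s_wr
    return cc_alpha, cc_beta

ca, cb = cc_alpha_beta(rc=10.0, s_wr=1.5)  # illustrative: 10 µg/kg, SD 1.5 µg/kg
print(round(ca, 2), round(cb, 2))  # 12.46 14.92
```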

  7. Validation of oppressed group behaviors in nursing.

    PubMed

    Matheson, Linda Kay; Bobay, Kathleen

    2007-01-01

    The possibility that nurses exhibit oppressed group behaviors was first broached by Roberts [Roberts, S. J. (1983). Oppressed group behavior: Implications for nursing. Advances in Nursing Science, 21-30] when Freire's model [Freire, P. (1970). Pedagogy of the oppressed. New York: Herder and Herder] was applied to nursing. Since then, scholarly discussion has focused on aspects of oppression in nursing, but little research toward validation of Freire's model has occurred. An extensive literature search in CINAHL was completed seeking exploration and validation of the oppressed group behavior model and its dimensions. The Educational Testing Services, PsychInfo, Health and Psychosocial Instruments, and Sociological Abstracts databases were searched for measurement tools created within the last 10 years. This literature review identified that a model of oppressed group behavior has not been developed and validated, and that oppressed group behaviors have been studied independent of each other; however, oppressed group behaviors may have implications for the current nursing shortage.

  8. Considerations Underlying the Use of Mixed Group Validation

    ERIC Educational Resources Information Center

    Jewsbury, Paul A.; Bowden, Stephen C.

    2013-01-01

    Mixed Group Validation (MGV) is an approach for estimating the diagnostic accuracy of tests. MGV is a promising alternative to the more commonly used Known Groups Validation (KGV) approach for estimating diagnostic accuracy. The advantage of MGV lies in the fact that the approach does not require a perfect external validity criterion or gold…

  9. Validating Analytical Methods

    ERIC Educational Resources Information Center

    Ember, Lois R.

    1977-01-01

    The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)

  10. Brazilian Center for the Validation of Alternative Methods (BraCVAM) and the process of validation in Brazil.

    PubMed

    Presgrave, Octavio; Moura, Wlamir; Caldeira, Cristiane; Pereira, Elisabete; Bôas, Maria H Villas; Eskes, Chantra

    2016-03-01

    The need for the creation of a Brazilian centre for the validation of alternative methods was recognised in 2008, and members of academia, industry and existing international validation centres immediately engaged with the idea. In 2012, co-operation between the Oswaldo Cruz Foundation (FIOCRUZ) and the Brazilian Health Surveillance Agency (ANVISA) instigated the establishment of the Brazilian Center for the Validation of Alternative Methods (BraCVAM), which was officially launched in 2013. The Brazilian validation process follows OECD Guidance Document No. 34, where BraCVAM functions as the focal point to identify and/or receive requests from parties interested in submitting tests for validation. BraCVAM then informs the Brazilian National Network on Alternative Methods (RENaMA) of promising assays, which helps with prioritisation and contributes to the validation studies of selected assays. A Validation Management Group supervises the validation study, and the results obtained are peer-reviewed by an ad hoc Scientific Review Committee, organised under the auspices of BraCVAM. Based on the peer-review outcome, BraCVAM will prepare recommendations on the validated test method, which will be sent to the National Council for the Control of Animal Experimentation (CONCEA). CONCEA is in charge of the regulatory adoption of all validated test methods in Brazil, following an open public consultation. 2016 FRAME.

  11. Determination of free sulphydryl groups in wheat gluten under the influence of different time and temperature of incubation: method validation.

    PubMed

    Rakita, Slađana; Pojić, Milica; Tomić, Jelena; Torbica, Aleksandra

    2014-05-01

    The aim of the present study was to determine the characteristics of an analytical method for determination of free sulphydryl (SH) groups of wheat gluten performed with previous gluten incubation for variable times (45, 90 and 135min) at variable temperatures (30 and 37°C), in order to determine its fitness-for-purpose. It was observed that the increase in temperature and gluten incubation time caused the increase in the amount of free SH groups, with more dynamic changes at 37°C. The method characteristics identified as relevant were: linearity, limit of detection, limit of quantification, precision (repeatability and reproducibility) and measurement uncertainty, which were checked within the validation protocol, while the method performance was monitored by X- and R-control charts. Identified method characteristics demonstrated its acceptable fitness-for-purpose, when assay included previous gluten incubation at 30°C. Although the method repeatability at 37°C was acceptable, the corresponding reproducibility did not meet the performance criterion on the basis of HORRAT value (HORRAT<2). Copyright © 2013 Elsevier Ltd. All rights reserved.
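
The HORRAT criterion mentioned above compares the observed reproducibility RSD with the value predicted by the Horwitz function from the analyte concentration; a ratio at or below 2 is conventionally acceptable. A minimal sketch (the observed RSD and concentration are illustrative, not the paper's data):

```python
import math

# HORRAT sketch: the Horwitz function predicts the expected between-lab
# RSD (%) from the analyte mass fraction C (dimensionless, e.g. 0.001 for
# 1 mg/g). HORRAT = observed RSD / predicted RSD; HORRAT <= 2 is the
# usual acceptance criterion for reproducibility.

def horwitz_prsd(c):
    """Predicted reproducibility RSD (%) at mass fraction c."""
    return 2 ** (1 - 0.5 * math.log10(c))

def horrat(observed_rsd_percent, c):
    return observed_rsd_percent / horwitz_prsd(c)

# Illustrative: an observed 16% RSD at C = 0.001 gives HORRAT ≈ 2.83,
# i.e. reproducibility failing the criterion.
print(round(horrat(16.0, 1e-3), 2))
```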

  12. Validation of Diagnostic Groups Based on Health Care Utilization Data Should Adjust for Sampling Strategy.

    PubMed

    Cadieux, Geneviève; Tamblyn, Robyn; Buckeridge, David L; Dendukuri, Nandini

    2017-08-01

    Valid measurement of outcomes such as disease prevalence using health care utilization data is fundamental to the implementation of a "learning health system." Definitions of such outcomes can be complex, based on multiple diagnostic codes. The literature on validating such data demonstrates a lack of awareness of the need for a stratified sampling design and corresponding statistical methods. We propose a method for validating the measurement of diagnostic groups that have: (1) different prevalences of diagnostic codes within the group; and (2) low prevalence. We describe an estimation method whereby: (1) low-prevalence diagnostic codes are oversampled, and the positive predictive value (PPV) of the diagnostic group is estimated as a weighted average of the PPV of each diagnostic code; and (2) claims that fall within a low-prevalence diagnostic group are oversampled relative to claims that are not, and bias-adjusted estimators of sensitivity and specificity are generated. We illustrate our proposed method using an example from population health surveillance in which diagnostic groups are applied to physician claims to identify cases of acute respiratory illness. Failure to account for the prevalence of each diagnostic code within a diagnostic group leads to the underestimation of the PPV, because low-prevalence diagnostic codes are more likely to be false positives. Failure to adjust for oversampling of claims that fall within the low-prevalence diagnostic group relative to those that do not leads to the overestimation of sensitivity and underestimation of specificity.
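
The first proposal above, estimating a group's PPV as a prevalence-weighted average over its diagnostic codes, can be sketched directly. Rare codes tend to have lower PPVs, so an unweighted average would misstate the group-level value. The code counts and per-code PPVs below are illustrative, not from the paper:

```python
# Prevalence-weighted group PPV sketch: weight each diagnostic code's PPV
# by how often that code actually occurs among the group's claims.

def group_ppv(codes):
    """codes: list of (claim_count, ppv) pairs, one per diagnostic code."""
    total = sum(n for n, _ in codes)
    return sum(n * ppv for n, ppv in codes) / total

# Illustrative group: one common code, one uncommon, one rare.
codes = [(900, 0.95), (80, 0.60), (20, 0.30)]
print(round(group_ppv(codes), 3))  # 0.909

# An unweighted mean of the three PPVs would be (0.95+0.60+0.30)/3 ≈ 0.617,
# badly understating the group-level PPV; sampling codes equally without
# reweighting has the opposite effect described in the abstract.
```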

  13. External validation of a Cox prognostic model: principles and methods

    PubMed Central

    2013-01-01

    Background A prognostic model should not enter clinical practice unless it has been demonstrated that it performs a useful role. External validation denotes evaluation of model performance in a sample independent of that used to develop the model. Unlike for logistic regression models, external validation of Cox models is sparsely treated in the literature. Successful validation of a model means achieving satisfactory discrimination and calibration (prediction accuracy) in the validation sample. Validating Cox models is not straightforward because event probabilities are estimated relative to an unspecified baseline function. Methods We describe statistical approaches to external validation of a published Cox model according to the level of published information, specifically (1) the prognostic index only, (2) the prognostic index together with Kaplan-Meier curves for risk groups, and (3) the first two plus the baseline survival curve (the estimated survival function at the mean prognostic index across the sample). The most challenging task, requiring level 3 information, is assessing calibration, for which we suggest a method of approximating the baseline survival function. Results We apply the methods to two comparable datasets in primary breast cancer, treating one as derivation and the other as validation sample. Results are presented for discrimination and calibration. We demonstrate plots of survival probabilities that can assist model evaluation. Conclusions Our validation methods are applicable to a wide range of prognostic studies and provide researchers with a toolkit for external validation of a published Cox model. PMID:23496923
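
Assessing discrimination at level 1 needs only the prognostic index (PI = β'x) evaluated on the validation sample; Harrell's C then counts, over usable pairs, how often the subject with the higher PI fails earlier. A minimal self-contained sketch (the four subjects below are invented for illustration; a real analysis would use a survival library):

```python
# Harrell's concordance index sketch for external validation of a Cox
# model: a pair is usable if the shorter follow-up ended in an event;
# it is concordant if the higher prognostic index failed first.

def harrell_c(pi, time, event):
    conc = ties = usable = 0
    n = len(pi)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = (i, j) if time[i] < time[j] else (j, i)
            if time[a] == time[b] or not event[a]:
                continue  # tied times or censored shorter follow-up: unusable
            usable += 1
            if pi[a] > pi[b]:
                conc += 1
            elif pi[a] == pi[b]:
                ties += 1
    return (conc + 0.5 * ties) / usable

pi    = [2.1, 1.4, 0.3, 1.0]   # beta'x from the published model
time  = [1.0, 2.0, 5.0, 3.0]   # follow-up (years)
event = [1,   1,   0,   1]     # 1 = event, 0 = censored
print(harrell_c(pi, time, event))  # 1.0: higher PI always failed earlier
```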

  14. Validation of alternative methods for toxicity testing.

    PubMed Central

    Bruner, L H; Carr, G J; Curren, R D; Chamberlain, M

    1998-01-01

    Before nonanimal toxicity tests may be officially accepted by regulatory agencies, it is generally agreed that the validity of the new methods must be demonstrated in an independent, scientifically sound validation program. Validation has been defined as the demonstration of the reliability and relevance of a test method for a particular purpose. This paper provides a brief review of the development of the theoretical aspects of the validation process and updates current thinking about objectively testing the performance of an alternative method in a validation study. Validation of alternative methods for eye irritation testing is a specific example illustrating important concepts. Although discussion focuses on the validation of alternative methods intended to replace current in vivo toxicity tests, the procedures can be used to assess the performance of alternative methods intended for other uses. PMID:9599695

  15. Validating a work group climate assessment tool for improving the performance of public health organizations

    PubMed Central

    Perry, Cary; LeMay, Nancy; Rodway, Greg; Tracy, Allison; Galer, Joan

    2005-01-01

    Background This article describes the validation of an instrument to measure work group climate in public health organizations in developing countries. The instrument, the Work Group Climate Assessment Tool (WCA), was applied in Brazil, Mozambique, and Guinea to assess the intermediate outcomes of a program to develop leadership for performance improvement. Data were collected from 305 individuals in 42 work groups, who completed a self-administered questionnaire. Methods The WCA was initially validated using Cronbach's alpha reliability coefficient and exploratory factor analysis. This article presents the results of a second validation study to refine the initial analyses to account for nested data, to provide item-level psychometrics, and to establish construct validity. Analyses included eigenvalue decomposition analysis, confirmatory factor analysis, and validity and reliability analyses. Results This study confirmed the validity and reliability of the WCA across work groups with different demographic characteristics (gender, education, management level, and geographical location). The study showed that there is agreement between the theoretical construct of work climate and the items in the WCA tool across different populations. The WCA captures a single perception of climate rather than individual sub-scales of clarity, support, and challenge. Conclusion The WCA is useful for comparing the climates of different work groups, tracking the changes in climate in a single work group over time, or examining differences among individuals' perceptions of their work group climate. Application of the WCA before and after a leadership development process can help work groups hold a discussion about current climate and select a target for improvement. The WCA provides work groups with a tool to take ownership of their own group climate through a process that is simple and objective and that protects individual confidentiality. PMID:16223447
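
The reliability side of such a validation rests on Cronbach's alpha: k/(k-1) · (1 - Σ item variances / variance of the total score). A minimal sketch (the five respondents and three items below are invented, not the WCA data):

```python
# Cronbach's alpha sketch, the internal-consistency coefficient used in
# the first WCA validation round.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's responses."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# Illustrative 5-point ratings: three items, five respondents.
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 4, 3, 4, 1]]
print(round(cronbach_alpha(items), 2))  # 0.93
```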

  16. Valid methods: the quality assurance of test method development, validation, approval, and transfer for veterinary testing laboratories.

    PubMed

    Wiegers, Ann L

    2003-07-01

    Third-party accreditation is a valuable tool to demonstrate a laboratory's competence to conduct testing. Accreditation, internationally and in the United States, has been discussed previously. However, accreditation is only one part of establishing data credibility. A validated test method is the first component of a valid measurement system. Validation is defined as confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled. The international and national standard ISO/IEC 17025 recognizes the importance of validated methods and requires that laboratory-developed methods or methods adopted by the laboratory be appropriate for the intended use. Validated methods are therefore required and their use agreed to by the client (i.e., end users of the test results such as veterinarians, animal health programs, and owners). ISO/IEC 17025 also requires that the introduction of methods developed by the laboratory for its own use be a planned activity conducted by qualified personnel with adequate resources. This article discusses considerations and recommendations for the conduct of veterinary diagnostic test method development, validation, evaluation, approval, and transfer to the user laboratory in the ISO/IEC 17025 environment. These recommendations are based on those of nationally and internationally accepted standards and guidelines, as well as those of reputable and experienced technical bodies. They are also based on the author's experience in the evaluation of method development and transfer projects, validation data, and the implementation of quality management systems in the area of method development.

  17. Group power through the lens of the 21st century and beyond: further validation of the Sieloff-King Assessment of Group Power within Organizations.

    PubMed

    Sieloff, Christina L; Bularzik, Anne M

    2011-11-01

    The purpose was to determine the content validity of a semantic revision of items on a reliable and valid instrument, the Sieloff-King Assessment of Group Power within Organizations (SKAGPO). Research participants expressed negative perceptions regarding the use of the concept of 'power' in SKAGPO items. The SKAGPO is the only instrument measuring a nursing group's power or outcome attainment. Using a survey method, the instrument and grading scale were sent to 12 expert judges. Six participants completed the grading scale. The Content Validity Index (CVI) for seven questions was at or above 83% agreement. Overall, the CVI for the eight revised questions was 93.75%. Subsequently, the instrument was renamed the Sieloff-King Assessment of Group Outcome Attainment within Organizations (SKAGOAO). The semantic revision demonstrated content validity for the revised SKAGOAO. When used by nursing groups to assess their level of outcome attainment, the instrument should continue to be psychometrically evaluated. A nursing group of any size can use the SKAGOAO to both assess the group's level of outcome attainment or empowerment and direct plans to further improve that level. © 2011 Blackwell Publishing Ltd.
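
The Content Validity Index reported here has a simple form at the item level: the proportion of expert judges rating the item relevant (commonly 3 or 4 on a 4-point scale). With six judges, five agreeing gives 5/6 ≈ 83%, the agreement level the abstract cites. A minimal sketch (the ratings are invented, not the study's data):

```python
# Item-level Content Validity Index (I-CVI) sketch: fraction of judges
# who rate the item relevant on a 4-point relevance scale.

def item_cvi(ratings, relevant=(3, 4)):
    """ratings: one judge's 1-4 relevance rating per element."""
    return sum(r in relevant for r in ratings) / len(ratings)

# Illustrative: six judges, five rate the item relevant.
print(round(item_cvi([4, 3, 4, 4, 3, 2]), 2))  # 0.83
```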

  18. Reliability and validity of a Tutorial Group Effectiveness Instrument.

    PubMed

    Singaram, Veena S; Van Der Vleuten, Cees P M; Van Berkel, Henk; Dolmans, Diana H J M

    2010-01-01

    Tutorial group effectiveness is essential for the success of learning in problem-based learning (PBL). Less effective and dysfunctional groups compromise the quality of students' learning in PBL. This article reports on the reliability and validity of an instrument aimed at measuring tutorial group effectiveness in PBL. The items within the instrument are clustered around motivational and cognitive factors based on Slavin's theoretical framework. A confirmatory factor analysis (CFA) was carried out to estimate the validity of the instrument. Furthermore, generalizability studies were conducted and alpha coefficients were computed to determine the reliability and homogeneity of each factor. The CFA indicated that a three-factor model comprising 19 items showed a good fit with the data. Alpha coefficients per factor were high. The findings of the generalizability studies indicated that at least 9-10 student responses are needed in order to obtain reliable data at the tutorial group level. The instrument validated in this study has the potential to provide faculty and students with diagnostic information and feedback about student behaviors that enhance and hinder tutorial group effectiveness.

  19. Psychological collectivism: a measurement validation and linkage to group member performance.

    PubMed

    Jackson, Christine L; Colquitt, Jason A; Wesson, Michael J; Zapata-Phelan, Cindy P

    2006-07-01

    The 3 studies presented here introduce a new measure of the individual-difference form of collectivism. Psychological collectivism is conceptualized as a multidimensional construct with the following 5 facets: preference for in-groups, reliance on in-groups, concern for in-groups, acceptance of in-group norms, and prioritization of in-group goals. Study 1 developed and tested the new measure in a sample of consultants. Study 2 cross-validated the measure using an alumni sample of a Southeastern university, assessing its convergent validity with other collectivism measures. Study 3 linked scores on the measure to 4 dimensions of group member performance (task performance, citizenship behavior, counterproductive behavior, and withdrawal behavior) in a computer software firm and assessed discriminant validity using the Big Five. The results of the studies support the construct validity of the measure and illustrate the potential value of collectivism as a predictor of group member performance. ((c) 2006 APA, all rights reserved).

  20. Validation of the group nuclear safety climate questionnaire.

    PubMed

    Navarro, M Felisa Latorre; Gracia Lerín, Francisco J; Tomás, Inés; Peiró Silla, José María

    2013-09-01

    Group safety climate is a leading indicator of safety performance in high reliability organizations. Zohar and Luria (2005) developed a Group Safety Climate scale (ZGSC) and found it to have a single factor. The ZGSC scale was used as a basis in this study with the researchers rewording almost half of the items on this scale, changing the referents from the leader to the group, and trying to validate a two-factor scale. The sample was composed of 566 employees in 50 groups from a Spanish nuclear power plant. Item analysis, reliability, correlations, aggregation indexes and CFA were performed. Results revealed that the construct was shared by each unit, and our reworded Group Safety Climate (GSC) scale showed a one-factor structure and correlated to organizational safety climate, formalized procedures, safety behavior, and time pressure. This validation of the one-factor structure of the Zohar and Luria (2005) scale could strengthen and spread this scale and measure group safety climate more effectively. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.
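
The aggregation indexes mentioned above justify treating individual ratings as a shared group-level construct; a standard choice is r_wg (James, Demaree and Wolf), which compares observed within-group variance with the variance expected from a uniform, no-agreement distribution on an A-point scale. A minimal sketch under that assumption (the ratings are invented, not the plant's data):

```python
# Within-group agreement sketch (single-item r_wg): 1 minus the ratio of
# observed within-group variance to the uniform-null variance
# (A^2 - 1) / 12 for an A-point response scale. Values near 1 indicate
# strong agreement; r_wg >= 0.70 is a common aggregation cutoff.

def rwg(ratings, scale_points):
    m = sum(ratings) / len(ratings)
    s2 = sum((x - m) ** 2 for x in ratings) / (len(ratings) - 1)
    sigma_eu = (scale_points ** 2 - 1) / 12  # expected variance if random
    return 1 - s2 / sigma_eu

# Illustrative: five group members rating climate on a 5-point scale.
print(rwg([4, 4, 5, 4, 3], 5))  # 0.75
```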

  1. Modeling Group Differences in OLS and Orthogonal Regression: Implications for Differential Validity Studies

    ERIC Educational Resources Information Center

    Kane, Michael T.; Mroch, Andrew A.

    2010-01-01

    In evaluating the relationship between two measures across different groups (i.e., in evaluating "differential validity") it is necessary to examine differences in correlation coefficients and in regression lines. Ordinary least squares (OLS) regression is the standard method for fitting lines to data, but its criterion for optimal fit…

  2. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

    The topic of validating structural optimization methods by use of experimental results is addressed. The need for validating the methods as a way of effecting a greater and an accelerated acceptance of formal optimization methods by practicing engineering designers is described. The range of validation strategies is defined, which includes comparison of optimization results with more traditional design approaches, establishing the accuracy of analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low vibration helicopter rotor.

  3. Detecting Symptom Exaggeration in Combat Veterans Using the MMPI-2 Symptom Validity Scales: A Mixed Group Validation

    ERIC Educational Resources Information Center

    Tolin, David F.; Steenkamp, Maria M.; Marx, Brian P.; Litz, Brett T.

    2010-01-01

    Although validity scales of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; J. N. Butcher, W. G. Dahlstrom, J. R. Graham, A. Tellegen, & B. Kaemmer, 1989) have proven useful in the detection of symptom exaggeration in criterion-group validation (CGV) studies, usually comparing instructed feigners with known patient groups, the…

  4. Random Qualitative Validation: A Mixed-Methods Approach to Survey Validation

    ERIC Educational Resources Information Center

    Van Duzer, Eric

    2012-01-01

    The purpose of this paper is to introduce the process and value of Random Qualitative Validation (RQV) in the development and interpretation of survey data. RQV is a method of gathering clarifying qualitative data that improves the validity of the quantitative analysis. This paper is concerned with validity in relation to the participants'…

  5. Validated spectrofluorimetric method for the determination of clonazepam in pharmaceutical preparations.

    PubMed

    Ibrahim, Fawzia; El-Enany, Nahed; Shalan, Shereen; Elsharawy, Rasha

    2016-05-01

    A simple, highly sensitive and validated spectrofluorimetric method was applied in the determination of clonazepam (CLZ). The method is based on reduction of the nitro group of clonazepam with zinc/CaCl2, and the product is then reacted with 2-cyanoacetamide (2-CNA) in the presence of ammonia (25%), yielding a highly fluorescent product. The produced fluorophore exhibits strong fluorescence intensity at λ(em) = 383 nm after excitation at λ(ex) = 333 nm. The method was rectilinear over a concentration range of 0.1-0.5 ng/mL, with a limit of detection (LOD) of 0.0057 ng/mL and a limit of quantification (LOQ) of 0.017 ng/mL. The method was fully validated and successfully applied to the determination of CLZ in its tablets, with a mean percentage recovery of 100.10 ± 0.75%. Method validation according to ICH Guidelines was evaluated. Statistical analysis of the results obtained using the proposed method was successfully compared with those obtained using a reference method, and there was no significant difference between the two methods in terms of accuracy and precision. Copyright © 2015 John Wiley & Sons, Ltd.
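
LOD and LOQ figures of this kind typically come from the ICH formulas LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the standard deviation of the blank (or calibration intercept) response and S is the calibration slope. A minimal sketch (σ and S below are invented to illustrate the arithmetic, not taken from the paper's calibration):

```python
# ICH Q2-style detection and quantification limits from calibration data:
#   LOD = 3.3 * sigma / slope,  LOQ = 10 * sigma / slope

def lod_loq(sigma, slope):
    """sigma: SD of blank/intercept response; slope: response per (ng/mL)."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

lod, loq = lod_loq(sigma=0.52, slope=300.0)  # illustrative values
print(round(lod, 4), round(loq, 4))  # 0.0057 0.0173 (ng/mL)
```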

  6. Global Land Product Validation Protocols: An Initiative of the CEOS Working Group on Calibration and Validation to Evaluate Satellite-derived Essential Climate Variables

    NASA Astrophysics Data System (ADS)

    Guillevic, P. C.; Nickeson, J. E.; Roman, M. O.; camacho De Coca, F.; Wang, Z.; Schaepman-Strub, G.

    2016-12-01

The Global Climate Observing System (GCOS) has specified the need to systematically produce and validate Essential Climate Variables (ECVs). The Committee on Earth Observation Satellites (CEOS) Working Group on Calibration and Validation (WGCV), and in particular its subgroup on Land Product Validation (LPV), plays a key coordinating role, leveraging the international expertise required to address actions related to the validation of global land ECVs. The primary objective of the LPV subgroup is to set standards for validation methods and reporting in order to provide traceable and reliable uncertainty estimates for scientists and stakeholders. The subgroup comprises nine focus areas that encompass 10 land surface variables, with the activities of each focus area coordinated by two international co-leads. The focus areas currently include leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FAPAR), vegetation phenology, surface albedo, fire disturbance, snow cover, land cover and land use change, soil moisture, and land surface temperature (LST) and emissivity. Recent additions include vegetation indices and biomass. The development of best-practice validation protocols is a core activity of CEOS LPV, with the objective of standardizing the evaluation of land surface products. LPV has identified four validation levels corresponding to increasing spatial and temporal representativeness of the reference samples used to perform validation. Best-practice validation protocols (1) provide the definition of variables, ancillary information and uncertainty metrics, (2) describe available data sources and methods to establish reference validation datasets with SI traceability, and (3) describe evaluation methods and reporting. An overview of validation best-practice components will be presented based on the LAI and LST protocol efforts to date.

  7. Detecting symptom exaggeration in combat veterans using the MMPI-2 symptom validity scales: a mixed group validation.

    PubMed

    Tolin, David F; Steenkamp, Maria M; Marx, Brian P; Litz, Brett T

    2010-12-01

    Although validity scales of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; J. N. Butcher, W. G. Dahlstrom, J. R. Graham, A. Tellegen, & B. Kaemmer, 1989) have proven useful in the detection of symptom exaggeration in criterion-group validation (CGV) studies, usually comparing instructed feigners with known patient groups, the application of these scales has been problematic when assessing combat veterans undergoing posttraumatic stress disorder (PTSD) examinations. Mixed group validation (MGV) was employed to determine the efficacy of MMPI-2 exaggeration scales in compensation-seeking (CS) and noncompensation-seeking (NCS) veterans. Unlike CGV, MGV allows for a mix of exaggerating and nonexaggerating individuals in each group, does not require that the exaggeration versus nonexaggerating status of any individual be known, and can be adjusted for different base-rate estimates. MMPI-2 responses of 377 male veterans were examined according to CS versus NCS status. MGV was calculated using 4 sets of base-rate estimates drawn from the literature. The validity scales generally performed well (adequate sensitivity, specificity, and efficiency) under most base-rate estimations, and most produced cutoff scores that showed adequate detection of symptom exaggeration, regardless of base-rate assumptions. These results support the use of MMPI-2 validity scales for PTSD evaluations in veteran populations, even under varying base rates of symptom exaggeration.
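The algebra behind MGV is simple: if two groups have different estimated base rates p1 and p2 of exaggeration, and observed sign-positive rates r1 and r2, then r_i = p_i·TPR + (1−p_i)·FPR, a 2×2 linear system that yields the sign's true and false positive rates without knowing any individual's status. A sketch under those assumptions (the function name is hypothetical; the numbers are illustrative, not the study's):

```python
import numpy as np

def mgv_rates(obs_rate_1, base_rate_1, obs_rate_2, base_rate_2):
    """Mixed group validation: each group's observed positive rate mixes
    true and false positives, r_i = p_i*TPR + (1 - p_i)*FPR, where p_i
    is the group's estimated base rate of the condition. Two groups with
    different base rates give a solvable 2x2 linear system."""
    if base_rate_1 == base_rate_2:
        raise ValueError("base rates must differ")
    A = np.array([[base_rate_1, 1.0 - base_rate_1],
                  [base_rate_2, 1.0 - base_rate_2]])
    b = np.array([obs_rate_1, obs_rate_2])
    tpr, fpr = np.linalg.solve(A, b)
    return tpr, fpr

# Illustrative: 45% of a group with assumed base rate 0.5 scores positive,
# 24% of a group with assumed base rate 0.2 does.
tpr, fpr = mgv_rates(0.45, 0.5, 0.24, 0.2)
```

With these illustrative numbers the recovered TPR is 0.80 and the FPR is 0.10; re-running the solve under several base-rate assumptions, as the study did, shows how sensitive the estimates are to those assumptions.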

  8. Cross-validation of the Beunen-Malina method to predict adult height.

    PubMed

    Beunen, Gaston P; Malina, Robert M; Freitas, Duarte I; Maia, José A; Claessens, Albrecht L; Gouveia, Elvio R; Lefevre, Johan

    2010-08-01

The purpose of this study was to cross-validate the Beunen-Malina method for non-invasive prediction of adult height. Three hundred and eight boys aged 13, 14, 15 and 16 years from the Madeira Growth Study were observed at annual intervals in 1996, 1997 and 1998 and re-measured 7-8 years later. Height, sitting height, and the triceps and subscapular skinfolds were measured; skeletal age was assessed using the Tanner-Whitehouse 2 method. Adult height was measured and predicted using the Beunen-Malina method. Maturity groups were classified using relative skeletal age (skeletal age minus chronological age). Pearson correlations, mean differences and standard errors of estimate (SEE) were calculated. Age-specific correlations between predicted and measured adult height vary between 0.70 and 0.85, while age-specific SEEs vary between 3.3 and 4.7 cm. The correlations and SEEs are similar to those obtained in the development of the original Beunen-Malina method. The Beunen-Malina method is a valid method to predict adult height in adolescent boys and can be used in European populations or populations of European ancestry. Percentage of predicted adult height is a valid non-invasive method to assess biological maturity.
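The two validity statistics reported here, Pearson r between predicted and measured adult height and the standard error of estimate, can be computed as below. This is a generic sketch, not the authors' code, using SEE = √(Syy(1 − r²)/(n − 2)) from regressing measured on predicted height:

```python
import math

def pearson_and_see(predicted, measured):
    """Pearson r between predicted and measured values, and the
    standard error of estimate SEE = sqrt(Syy*(1 - r**2)/(n - 2))
    from regressing measured on predicted."""
    n = len(predicted)
    mx = sum(predicted) / n
    my = sum(measured) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(predicted, measured))
    sxx = sum((x - mx) ** 2 for x in predicted)
    syy = sum((y - my) ** 2 for y in measured)
    r = sxy / math.sqrt(sxx * syy)
    see = math.sqrt(syy * (1.0 - r ** 2) / (n - 2))
    return r, see
```

For example, predictions that track measured heights perfectly give r = 1 and SEE = 0; the study's age-specific SEEs of 3.3-4.7 cm quantify how far real predictions scatter around that ideal.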

  9. [Validation and verification of microbiology methods].

    PubMed

    Camaró-Sala, María Luisa; Martínez-García, Rosana; Olmos-Martínez, Piedad; Catalá-Cuenca, Vicente; Ocete-Mochón, María Dolores; Gimeno-Cardona, Concepción

    2015-01-01

Clinical microbiologists should ensure, to the maximum extent that scientific and technical development allows, the reliability of their results. This implies that, in addition to meeting the technical criteria that guarantee validity, tests must be performed under conditions that yield comparable results regardless of the laboratory performing them. In this sense, the use of recognized and accepted reference methods is the most effective tool for providing these guarantees. Activities related to the verification and validation of analytical methods have become very important, given the continuous development and updating of techniques, increasingly complex analytical equipment, and the interest of professionals in ensuring the quality of processes and results. The definitions of validation and verification are described, along with the different types of validation/verification, the types of methods, and the level of validation necessary depending on the degree of standardization. The situations in which validation/verification is mandatory and/or recommended are discussed, including those particularly relevant to validation in Microbiology. The paper stresses the importance of promoting the use of reference strains as controls in Microbiology and the use of standard controls, as well as the importance of participation in External Quality Assessment programs to demonstrate technical competence. Emphasis is placed on how to calculate some of the parameters required for validation/verification, such as accuracy and precision. The development of these concepts can be found in SEIMC microbiological procedure number 48: «Validation and verification of microbiological methods» www.seimc.org/protocols/microbiology. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  10. Validation Methods Research for Fault-Tolerant Avionics and Control Systems: Working Group Meeting, 2

    NASA Technical Reports Server (NTRS)

    Gault, J. W. (Editor); Trivedi, K. S. (Editor); Clary, J. B. (Editor)

    1980-01-01

The validation process comprises the activities required to ensure agreement of the system realization with the system specification. A preliminary validation methodology for fault-tolerant systems is documented. A general framework for a validation methodology is presented, along with a set of specific tasks intended for the validation of two specimen systems, SIFT and FTMP. Two major areas of research are identified: first, the activities required to support the ongoing development of the validation process itself, and second, the activities required to support the design, development, and understanding of fault-tolerant systems.

  11. Study on ABO and RhD blood grouping: Comparison between conventional tile method and a new solid phase method (InTec Blood Grouping Test Kit).

    PubMed

    Yousuf, R; Abdul Ghani, S A; Abdul Khalid, N; Leong, C F

    2018-04-01

The 'InTec Blood Grouping Test Kit', which uses solid-phase technology, is a new method that may be used at outdoor blood donation sites or at the bedside as an alternative to the conventional tile method, in view of its stability at room temperature and its fulfilment of the criteria for a point-of-care test. This study aimed to compare the efficiency of this solid-phase method (InTec Blood Grouping Test Kit) with the conventional tile method in determining the ABO and RhD blood groups of healthy donors. A total of 760 voluntary donors who attended the Blood Bank, Penang Hospital, or offsite blood donation campaigns from April to May 2014 were recruited. The ABO and RhD blood groups were determined by the conventional tile method and the solid-phase method, with the tube method used as the gold standard. For ABO grouping, the tile method showed 100% concordance with the gold standard tube method, whereas the solid-phase method showed concordant results for only 754/760 samples (99.2%). Therefore, for ABO grouping, the tile method has 100% sensitivity and specificity, while the solid-phase method has a slightly lower sensitivity of 97.7%; both have a specificity of 100%. For RhD grouping, the tile and solid-phase methods each grouped one RhD-positive specimen as negative, giving both methods a sensitivity of 99.9% and a specificity of 100%. The 'InTec Blood Grouping Test Kit' is suitable for offsite use because of its simplicity and user-friendliness. However, adding an internal quality control may further improve the test sensitivity and the validity of the test results.

  12. Update of Standard Practices for New Method Validation in Forensic Toxicology.

    PubMed

    Wille, Sarah M R; Coucke, Wim; De Baere, Thierry; Peters, Frank T

    2017-01-01

International agreement concerning validation guidelines is important for quality forensic bioanalytical research and routine applications, as it all starts with the reporting of reliable analytical data. Standards for fundamental validation parameters are provided in guidelines such as those from the US Food and Drug Administration (FDA), the European Medicines Agency (EMA), the German-speaking Gesellschaft für Toxikologie und Forensische Chemie (GTFCh) and the Scientific Working Group for Forensic Toxicology (SWGTOX). These validation parameters include selectivity, matrix effects, method limits, calibration, accuracy and stability, as well as other parameters such as carryover, dilution integrity and incurred sample reanalysis. It is, however, not easy for laboratories to implement these guidelines in practice, as they remain nonbinding protocols that depend on the analytical technique applied and need to be updated according to the analyst's method requirements and the application type. In this manuscript, a review of the current guidelines and literature concerning bioanalytical validation parameters in a forensic context is given and discussed. In addition, suggestions for the experimental set-up, the pros and cons of statistical approaches, and adequate acceptance criteria for the validation of bioanalytical applications are given. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  13. Proposal for risk-based scientific approach on full and partial validation for general changes in bioanalytical method.

    PubMed

    Mochizuki, Ayumi; Ieki, Katsunori; Kamimori, Hiroshi; Nagao, Akemi; Nakai, Keiko; Nakayama, Akira; Nanba, Eitaro

    2018-04-01

The guidance and several guidelines on bioanalytical method validation issued by the US FDA, EMA and the Japanese Ministry of Health, Labour and Welfare list the 'full' validation parameters; however, none of them provides any details on 'partial' validation. The Japan Bioanalysis Forum approved a total of three annual discussion groups from 2012 to 2014. In the discussion groups, members from pharmaceutical companies and contract research organizations discussed the details of partial validation from a risk-assessment viewpoint, based on surveys focusing on the bioanalysis of small molecules using LC-MS/MS in Japan. This manuscript presents the discussion group members' perspectives and recommendations for the most conceivable changes that can be made under full and partial validation, based on their experiences and discussions at the Japan Bioanalysis Forum Symposium.

  14. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  15. A multiple-group measurement scale for interprofessional collaboration: Adaptation and validation into Italian and German languages.

    PubMed

    Vittadello, Fabio; Mischo-Kelling, Maria; Wieser, Heike; Cavada, Luisa; Lochner, Lukas; Naletto, Carla; Fink, Verena; Reeves, Scott

    2018-05-01

This article presents a study that aimed to validate a translation of a multiple-group measurement scale for interprofessional collaboration (IPC). We used survey data gathered over a three-month period as part of a mixed-methods study that explored the nature of IPC in Northern Italy. Following translation from English into Italian and German, the survey was distributed online to over 5,000 health professionals (dieticians, nurses, occupational therapists, physicians, physiotherapists, speech therapists and psychologists) based in one regional health trust. In total, 2,238 health professionals completed the survey. Based on the original scale, three principal components were extracted and confirmed as relevant factors for IPC (communication, accommodation and isolation). A confirmatory analysis (3-factor model) was applied to the data of physicians and nurses by language group. In conclusion, the validation of the German and Italian IPC scales has provided an instrument of acceptable reliability and validity for the assessment of IPC involving physicians and nurses.

  16. Group Cohesion DEOCS 4.1 Construct Validity Summary

    DTIC Science & Technology

    2017-08-01

    Group Cohesion DEOCS 4.1 Construct Validity Summary DEFENSE EQUAL OPPORTUNITY MANAGEMENT INSTITUTE DIRECTORATE...See Table 4 for more information regarding item reliabilities. The relationship between the original four-point scale (Organizational Cohesion) and...future analyses, including those using the seven-point scale. Tables 4 and 5 provide additional information regarding the reliability and descriptive

  17. Extension of the validation of AOAC Official Method 2005.06 for dc-GTX2,3: interlaboratory study.

    PubMed

    Ben-Gigirey, Begoña; Rodríguez-Velasco, María L; Gago-Martínez, Ana

    2012-01-01

AOAC Official Method(SM) 2005.06 for the determination of saxitoxin (STX)-group toxins in shellfish by LC with fluorescence detection and precolumn oxidation was previously validated and adopted First Action following a collaborative study. However, the method was not validated for all key STX-group toxins, and procedures to quantify some of them were not provided. With more STX-group toxin standards commercially available and modifications to procedures, it was possible to overcome some of these difficulties. The European Union Reference Laboratory for Marine Biotoxins conducted an interlaboratory exercise to extend the validation of AOAC Official Method 2005.06 to dc-GTX2,3 and to compile precision data for several STX-group toxins. This paper reports the study design and the results obtained. The performance characteristics for dc-GTX2,3 (intralaboratory and interlaboratory precision, recovery, and theoretical quantification limit) were evaluated. The mean recoveries obtained for dc-GTX2,3 were, in general, low (53.1-58.6%). The RSD for reproducibility (RSD(R)%) for dc-GTX2,3 in all samples ranged from 28.2 to 45.7%, and HorRat values ranged from 1.5 to 2.8. The article also describes a hydrolysis protocol to convert GTX6 to NEO, which has proven useful for the quantification of GTX6 while a GTX6 standard is not available. The performance of the participant laboratories in the application of this method was compared with that obtained in the original collaborative study of the method. Intralaboratory and interlaboratory precision data for several STX-group toxins, including dc-NEO and GTX6, are reported here. This study can be useful for laboratories determining STX-group toxins to fully implement AOAC Official Method 2005.06 for official paralytic shellfish poisoning control. However, the overall quantitative performance obtained with the method was poor for certain toxins.
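The HorRat values cited are the ratio of the observed reproducibility RSD to the RSD predicted by the Horwitz equation, PRSD = 2^(1 − 0.5·log10 C), with C the concentration expressed as a mass fraction. A sketch under that assumption (function names are illustrative):

```python
import math

def horwitz_prsd(mass_fraction):
    """Predicted reproducibility RSD (%) from the Horwitz equation,
    with concentration as a dimensionless mass fraction."""
    return 2.0 ** (1.0 - 0.5 * math.log10(mass_fraction))

def horrat(observed_rsd_percent, mass_fraction):
    """HorRat = observed RSD_R / Horwitz-predicted RSD."""
    return observed_rsd_percent / horwitz_prsd(mass_fraction)
```

For example, at C = 10^-6 (1 mg/kg) the Horwitz prediction is 16%, so an observed RSD_R of 32% gives HorRat = 2.0; values much above ~2, like some reported here, flag poor interlaboratory precision.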

  18. Sarma-based key-group method for rock slope reliability analyses

    NASA Astrophysics Data System (ADS)

    Yarahmadi Bafghi, A. R.; Verdel, T.

    2005-08-01

The methods used in conducting static stability analyses have remained pertinent to this day for reasons of both simplicity and speed of execution. The most well-known of these methods for the stability analysis of fractured rock masses is the key-block method (KBM). This paper proposes an extension to the KBM, called the key-group method (KGM), which combines not only individual key-blocks but also groups of collapsable blocks into an iterative and progressive analysis of the stability of discontinuous rock slopes. To take intra-group forces into account, the Sarma method has been implemented within the KGM in order to generate a Sarma-based KGM, abbreviated SKGM. We discuss herein the hypothesis behind this new method, details regarding its implementation, and validation through comparison with results obtained from the distinct element method. Furthermore, as an alternative to deterministic methods, reliability or probabilistic analyses have been proposed to take account of the uncertainty in analytical parameters and models. The FOSM and ASM probabilistic methods could be implemented within the KGM and SKGM framework in order to account for the uncertainty in physical and mechanical data (density, cohesion and angle of friction). We then show how such reliability analyses can be introduced into the SKGM to give rise to the probabilistic SKGM (PSKGM) and how it can be used for rock slope reliability analyses.

  19. OECD-NEA Expert Group on Multi-Physics Experimental Data, Benchmarks and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valentine, Timothy; Rohatgi, Upendra S.

High-fidelity, multi-physics modeling and simulation (M&S) tools are being developed and utilized for a variety of applications in nuclear science and technology and show great promise in their abilities to reproduce observed phenomena for many applications. Even with the increasing fidelity and sophistication of coupled multi-physics M&S tools, the underpinning models and data still need to be validated against experiments that may require a more complex array of validation data because of the great breadth of the time, energy and spatial domains of the physical phenomena that are being simulated. The Expert Group on Multi-Physics Experimental Data, Benchmarks and Validation (MPEBV) of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) was formed to address the challenges with the validation of such tools. The work of the MPEBV expert group is shared among three task forces to fulfill its mandate, and specific exercises are being developed to demonstrate validation principles for common industrial challenges. This paper describes the overall mission of the group, the specific objectives of the task forces, the linkages among the task forces, and the development of a validation exercise that focuses on a specific reactor challenge problem.

  20. Independent surgical validation of the new prostate cancer grade-grouping system.

    PubMed

    Spratt, Daniel E; Cole, Adam I; Palapattu, Ganesh S; Weizer, Alon Z; Jackson, William C; Montgomery, Jeffrey S; Dess, Robert T; Zhao, Shuang G; Lee, Jae Y; Wu, Angela; Kunju, Lakshmi P; Talmich, Emily; Miller, David C; Hollenbeck, Brent K; Tomlins, Scott A; Feng, Felix Y; Mehra, Rohit; Morgan, Todd M

    2016-11-01

To report the independent prognostic impact of the new prostate cancer grade-grouping system in a large external validation cohort of patients treated with radical prostatectomy (RP). Between 1994 and 2013, 3,694 consecutive men were treated with RP at a single institution. To investigate the performance of and validate the grade-grouping system, biochemical recurrence-free survival (bRFS) rates were assessed using Kaplan-Meier tests, Cox-regression modelling, and discriminatory comparison analyses. Separate analyses were performed based on biopsy and RP grade. The median follow-up was 52.7 months. The 5-year actuarial bRFS for biopsy grade groups 1-5 were 94.2%, 89.2%, 73.1%, 63.1%, and 54.7%, respectively (P < 0.001). Similarly, the 5-year actuarial bRFS based on RP grade groups was 96.1%, 93.0%, 74.0%, 64.4%, and 49.9% for grade groups 1-5, respectively (P < 0.001). The adjusted hazard ratios for bRFS relative to biopsy grade group 1 were 1.98, 4.20, 5.57, and 9.32 for groups 2, 3, 4, and 5, respectively (P < 0.001), and for RP grade groups were 2.09, 5.27, 5.86, and 10.42 (P < 0.001). The five-grade-group system had a higher prognostic discrimination compared with the commonly used three-tier system (Gleason score 6 vs 7 vs 8-10). In an independent surgical cohort, we have validated the prognostic benefit of the new prostate cancer grade-grouping system for bRFS, and shown that the benefit is maintained after adjusting for important clinicopathological variables. The greater predictive accuracy of the new system will improve risk stratification in the clinical setting and aid in patient counselling. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.
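The actuarial bRFS figures come from Kaplan-Meier estimation, and the estimator itself is compact enough to sketch. This is a generic implementation, not the authors' analysis code; variable names are illustrative:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.
    times: follow-up times; events: 1 = event (e.g. biochemical
    recurrence), 0 = censored. Returns [(t, S(t))] at event times."""
    s = 1.0
    curve = []
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        removed = 0
        # group all subjects sharing this follow-up time
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            s *= (at_risk - deaths) / at_risk  # survival drops at events
            curve.append((t, s))
        at_risk -= removed  # censored cases leave the risk set silently
    return curve
```

Censored subjects reduce the number at risk without contributing an event, which is what distinguishes the actuarial estimate from a naive fraction surviving.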

  1. Comparative assessment of bioanalytical method validation guidelines for pharmaceutical industry.

    PubMed

    Kadian, Naveen; Raju, Kanumuri Siva Rama; Rashid, Mamunur; Malik, Mohd Yaseen; Taneja, Isha; Wahajuddin, Muhammad

    2016-07-15

The concepts, importance, and application of bioanalytical method validation have been discussed for a long time, and validation of bioanalytical methods is widely accepted as pivotal before they are taken into routine use. The United States Food and Drug Administration (USFDA) guidelines issued in 2001 have served as the reference for every guideline released since, be it that of the European Medicines Agency (EMA) in Europe, the National Health Surveillance Agency (ANVISA) in Brazil, the Ministry of Health, Labour and Welfare (MHLW) in Japan, or any other guideline concerning bioanalytical method validation. After 12 years, the USFDA released a new draft guideline for comment in 2013, which covers the latest parameters and topics encountered in bioanalytical method validation and moves toward harmonization of bioanalytical method validation across the globe. Even though the regulatory agencies are in general agreement, significant variations exist in acceptance criteria and methodology. The present review highlights the variations, similarities, and comparisons among bioanalytical method validation guidelines issued by the major regulatory authorities worldwide. Additionally, other evaluation parameters such as matrix effect and incurred sample reanalysis, including other stability aspects, are discussed to provide ease of access when designing a bioanalytical method and validating it in compliance with the majority of drug authority guidelines. Copyright © 2016. Published by Elsevier B.V.

  2. ASTM Validates Air Pollution Test Methods

    ERIC Educational Resources Information Center

    Chemical and Engineering News, 1973

    1973-01-01

    The American Society for Testing and Materials (ASTM) has validated six basic methods for measuring pollutants in ambient air as the first part of its Project Threshold. Aim of the project is to establish nationwide consistency in measuring pollutants; determining precision, accuracy and reproducibility of 35 standard measuring methods. (BL)

  3. [Risk selection and diagnosis in diabetic polyneuropathy: validation of new methods].

    PubMed

    Jurado, Jerónimo; Caula, Jacinto; Pou i Torelló, Josep Maria

    2006-06-30

In a previous study we developed a specific algorithm, the polyneuropathy selection method (PSM), with 4 parameters (age, HDL-C, HbA1c, and retinopathy), to select patients at risk of diabetic polyneuropathy (DPN). We also developed a simplified method for DPN diagnosis: outpatient polyneuropathy diagnosis (OPD), with 4 variables (symptoms and 3 objective tests). The aims are: to confirm the validity of conventional tests for DPN diagnosis; to validate the discriminatory power of the PSM and the diagnostic value of the OPD by evaluating their relationship to electrodiagnostic studies and objective clinical neurological assessment; and to evaluate the correlation of DPN with pro-inflammatory status. Design: cross-sectional, crossed association for PSM validation; paired samples for OPD validation. Setting: primary care in 3 counties. A random sample of 75 subjects from the type-2 diabetes census for PSM evaluation; thirty DPN patients and 30 non-DPN patients (from 2 DM2 subgroups in our earlier study) for OPD evaluation. The gold standard for DPN diagnosis will be established by means of a clinical neurological study (symptoms, physical examination, and sensitivity tests) and electrodiagnostic studies (sensory and motor EMG). Risk of neuropathy, macroangiopathy, and pro-inflammatory status (CRP, TNF soluble fraction, and total TGF-beta1) will be studied in every subject. Electrodiagnostic studies should confirm the validity of conventional tests for DPN diagnosis. PSM and OPD will be valid methods for selecting patients at risk and diagnosing DPN. There will be a significant relationship between DPN and pro-inflammatory tests.

  4. Validation of wet mount microscopy against Trichomonas culture among women of reproductive age group in Western province, Sri Lanka.

    PubMed

    Banneheke, H; Fernandopulle, R; Gunasekara, U; Barua, A; Fernando, N; Wickremasinghe, R

    2015-06-01

Wet mount microscopy is the most commonly used diagnostic method for trichomoniasis in clinical diagnostic services all over the world, including Sri Lanka, owing to its availability, simplicity, and relatively low cost. However, Trichomonas culture and PCR are the gold standard tests; unfortunately, neither is available for the diagnosis of trichomoniasis in Sri Lanka. It is therefore important to validate wet mount microscopy, as it is the only available diagnostic test and has not been validated to date in Sri Lanka. The objective was to evaluate the validity and reliability of wet mount microscopy against the gold standard Trichomonas culture among a clinic-based population of reproductive-age women in Western province, Sri Lanka. Women attending hospital- and institution-based clinics were enrolled. They were interviewed, and high vaginal swabs were taken for laboratory diagnosis by culture and wet mount microscopy. There were 601 participants aged 15-45 years. Against the gold standard culture, wet mount microscopy showed 68% sensitivity, 100% specificity, a 100% positive predictive value (PPV) and a 98% negative predictive value (NPV) (P=0.001, kappa=0.803). The area under the ROC curve was 0.840. The sensitivity of wet mount microscopy is low; however, it has high validity and reliability as a specific diagnostic test for trichomoniasis. If it is to be used among women of reproductive age in Western province, Sri Lanka, a culture method could be adopted as a second test to confirm negative wet mounts in symptomatic patients.
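The reported indices (sensitivity, specificity, PPV, NPV, Cohen's kappa) all follow from the 2×2 table of wet mount result versus culture. A generic sketch (hypothetical function name; the counts used in the example below are illustrative, not the study's):

```python
def diagnostic_indices(tp, fn, fp, tn):
    """Standard 2x2 diagnostic indices against a gold standard.
    tp/fn: gold-standard positives split by test result;
    fp/tn: gold-standard negatives split by test result."""
    n = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    npv = tn / (tn + fn)
    # Cohen's kappa: observed vs. chance agreement from the margins
    p_observed = (tp + tn) / n
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return sensitivity, specificity, ppv, npv, kappa
```

Kappa corrects raw agreement for the agreement expected by chance from the row and column margins, which is why a test with perfect specificity but modest sensitivity can still land around 0.8.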

  5. Toward a Unified Validation Framework in Mixed Methods Research

    ERIC Educational Resources Information Center

    Dellinger, Amy B.; Leech, Nancy L.

    2007-01-01

    The primary purpose of this article is to further discussions of validity in mixed methods research by introducing a validation framework to guide thinking about validity in this area. To justify the use of this framework, the authors discuss traditional terminology and validity criteria for quantitative and qualitative research, as well as…

  6. Standard Setting Methods for Pass/Fail Decisions on High-Stakes Objective Structured Clinical Examinations: A Validity Study.

    PubMed

    Yousuf, Naveed; Violato, Claudio; Zuberi, Rukhsana W

    2015-01-01

    CONSTRUCT: Authentic standard setting methods will demonstrate high convergent validity evidence of their outcomes, that is, cutoff scores and pass/fail decisions, with most other methods when compared with each other. The objective structured clinical examination (OSCE) was established for valid, reliable, and objective assessment of clinical skills in health professions education. Various standard setting methods have been proposed to identify objective, reliable, and valid cutoff scores on OSCEs. These methods may identify different cutoff scores for the same examinations. Identification of valid and reliable cutoff scores for OSCEs remains an important issue and a challenge. Thirty OSCE stations administered at least twice in the years 2010-2012 to 393 medical students in Years 2 and 3 at Aga Khan University are included. Psychometric properties of the scores are determined. Cutoff scores and pass/fail decisions of the Wijnen, Cohen, Mean-1.5SD, Mean-1SD, Angoff, borderline group, and borderline regression (BL-R) methods are compared with each other and with three variants of cluster analysis using repeated measures analysis of variance and Cohen's kappa. The mean psychometric indices on the 30 OSCE stations are reliability coefficient = 0.76 (SD = 0.12); standard error of measurement = 5.66 (SD = 1.38); coefficient of determination = 0.47 (SD = 0.19); and intergrade discrimination = 7.19 (SD = 1.89). The BL-R and Wijnen methods show the highest convergent validity evidence with the other methods on the defined criteria; the Angoff and Mean-1.5SD methods demonstrate the least. The three cluster variants show substantial convergent validity with the borderline methods. Although the Wijnen method shows a high level of convergent validity, it lacks the theoretical strength to be used for competency-based assessments. Overall, the BL-R method shows the highest convergent validity evidence for OSCEs with the other standard setting methods used in the present study.
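
    The relative cutoff rules named in the abstract (Mean-1SD, Mean-1.5SD) and the agreement statistic used to compare methods (Cohen's kappa) can be sketched as follows. This is a minimal illustration, not the study's data or code; the station scores below are invented.

```python
from statistics import mean, stdev

def relative_cutoffs(scores):
    # Two of the relative standard-setting methods compared in the study:
    # cutoff = mean - k * SD for k in {1.0, 1.5}.
    m, s = mean(scores), stdev(scores)
    return {"Mean-1SD": m - s, "Mean-1.5SD": m - 1.5 * s}

def cohens_kappa(a, b):
    # Chance-corrected agreement between two pass/fail decision vectors.
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa, pb = sum(a) / n, sum(b) / n              # pass rates under each method
    pe = pa * pb + (1 - pa) * (1 - pb)           # agreement expected by chance
    return (po - pe) / (1 - pe)

scores = [52, 58, 61, 64, 66, 68, 70, 72, 75, 81]   # invented station scores
cuts = relative_cutoffs(scores)
decisions = {name: [s >= c for s in scores] for name, c in cuts.items()}
kappa = cohens_kappa(decisions["Mean-1SD"], decisions["Mean-1.5SD"])
```

    A lower multiple of the SD yields a lower cutoff and therefore more passing decisions, which is why kappa between two cutoff rules on the same scores is informative about their practical interchangeability.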

  7. Internal validity of an anxiety disorder screening instrument across five ethnic groups.

    PubMed

    Ritsher, Jennifer Boyd; Struening, Elmer L; Hellman, Fred; Guardino, Mary

    2002-08-30

    We tested the factor structure of the National Anxiety Disorder Screening Day instrument (n=14860) within five ethnic groups (White, Black, Hispanic, Asian, Native American). Conducted yearly across the US, the screening is meant to detect five common anxiety syndromes. Factor analyses often fail to confirm the validity of assessment tools' structures, and this is especially likely for minority ethnic groups. If symptoms cluster differently across ethnic groups, criteria for conventional DSM-IV disorders are less likely to be met, leaving significant distress unlabeled and under-detected in minority groups. Exploratory and confirmatory factor analyses established that the items clustered into the six expected factors (one for each disorder plus agoraphobia). This six-factor model fit the data very well for Whites and not significantly worse for each other group. However, small areas of the model did not appear to fit as well for some groups. After taking these areas into account, the data still clearly suggest more prevalent PTSD symptoms in the Black, Hispanic and Native American groups in our sample. Additional studies are warranted to examine the model's external validity, generalizability to more culturally distinct groups, and overlap with other culture-specific syndromes.

  8. Nonclinical dose formulation analysis method validation and sample analysis.

    PubMed

    Whitmire, Monica Lee; Bryan, Peter; Henry, Teresa R; Holbrook, John; Lehmann, Paul; Mollitor, Thomas; Ohorodnik, Susan; Reed, David; Wietgrefe, Holly D

    2010-12-01

    Nonclinical dose formulation analysis methods are used to confirm test article concentration and homogeneity in formulations and to determine formulation stability in support of regulated nonclinical studies. There is currently no regulatory guidance for nonclinical dose formulation analysis method validation or sample analysis. Regulatory guidance for the validation of analytical procedures has been developed for drug product/formulation testing; however, verification of formulation concentrations falls under the framework of GLP regulations (not GMP). The only current related regulatory guidance is the bioanalytical guidance for method validation. The fundamental parameters that overlap between bioanalysis and formulation analysis validations include recovery, accuracy, precision, specificity, selectivity, carryover, sensitivity, and stability. Divergence between bioanalytical and drug product validations typically centers on the acceptance criteria used. Because dose formulation samples are not true "unknowns", quality control samples covering the entire range of the standard curve, which serve to establish confidence in the data generated from the "unknown" study samples, may not always be necessary. Also, the standard bioanalytical acceptance criteria may not be directly applicable, especially when the determined concentration does not match the target concentration. This paper attempts to reconcile the different practices being performed in the community and to provide recommendations of best practices and proposed acceptance criteria for nonclinical dose formulation method validation and sample analysis.

  9. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. However, these methods can only predict the most probable faulty sensors, and their predictions are subject to the initial probabilities defined for the failures. The method developed in this work is instead model-based: it identifies the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). It is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concept of analytical redundant relations (ARRs).

  10. 76 FR 28664 - Method 301-Field Validation of Pollutant Measurement Methods From Various Waste Media

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 63 [OAR-2004-0080, FRL-9306-8] RIN 2060-AF00 Method 301--Field Validation of Pollutant Measurement Methods From Various Waste Media AGENCY: Environmental Protection Agency (EPA). ACTION: Final rule. SUMMARY: This action amends EPA's Method 301, Field Validation...

  11. A systematic review and appraisal of methods of developing and validating lifestyle cardiovascular disease risk factors questionnaires.

    PubMed

    Nse, Odunaiya; Quinette, Louw; Okechukwu, Ogah

    2015-09-01

    Well-developed and validated lifestyle cardiovascular disease (CVD) risk factors questionnaires are key to obtaining accurate information for planning CVD prevention programs, a necessity in developing countries. We conducted this review to assess the methods and processes used for the development and content validation of lifestyle CVD risk factors questionnaires, and possibly to develop an evidence-based guideline for their development and content validation. Relevant databases at the Stellenbosch University library (PubMed, CINAHL, PsycINFO, and ProQuest) were searched for studies conducted between 2008 and 2012, published in English, and performed in humans. Search terms used were CVD risk factors, questionnaires, smoking, alcohol, physical activity, and diet. The methods identified for the development of lifestyle CVD risk factors questionnaires were review of the literature (systematic or traditional), involvement of experts and/or the target population through focus group discussions or interviews, the clinical experience of the authors, and the authors' deductive reasoning. For validation, the methods used were involvement of an expert panel, use of the target population, and factor analysis. A combination of methods produces questionnaires with good content validity and other psychometric properties.

  12. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    PubMed

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.

  13. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called “Patient Recursive Survival Peeling” is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called “combined” cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication. PMID:26997922

  14. Use of Latent Class Analysis to define groups based on validity, cognition, and emotional functioning.

    PubMed

    Morin, Ruth T; Axelrod, Bradley N

    Latent Class Analysis (LCA) was used to classify a heterogeneous sample of neuropsychology data. In particular, we used measures of performance validity, symptom validity, cognition, and emotional functioning to assess and describe latent groups of functioning in these areas. A dataset of 680 neuropsychological evaluation protocols was analyzed using LCA. Data were collected from evaluations performed for clinical purposes at an urban medical center. A four-class model emerged as the best-fitting model of latent classes. The resulting classes were distinct based on measures of performance validity and symptom validity. Class A performed poorly on both performance and symptom validity measures. Class B had intact performance validity and heightened symptom reporting. The remaining two classes performed adequately on both performance and symptom validity measures, differing only in cognitive and emotional functioning. In general, performance invalidity was associated with worse cognitive performance, while symptom invalidity was associated with elevated emotional distress. LCA appears useful in identifying groups within a heterogeneous sample with distinct performance patterns. Further, the orthogonal nature of performance and symptom validities is supported.

  15. Validation of an automated colony counting system for group A Streptococcus.

    PubMed

    Frost, H R; Tsoi, S K; Baker, C A; Laho, D; Sanderson-Smith, M L; Steer, A C; Smeesters, P R

    2016-02-08

    The practice of counting bacterial colony forming units on agar plates has long been used as a method to estimate the concentration of live bacteria in culture. However, due to the laborious and potentially error-prone nature of this measurement technique, an alternative method is desirable. Recent technological advancements have facilitated the development of automated colony counting systems, which reduce errors introduced during the manual counting process and the recording of information. An additional benefit is the significant reduction in time taken to analyse colony counting data. Whilst automated counting procedures have been validated for a number of microorganisms, the process has not been successful for all bacteria due to the requirement for a relatively high contrast between bacterial colonies and growth medium. The purpose of this study was to validate an automated counting system for use with group A Streptococcus (GAS). Twenty-one different GAS strains, representative of major emm-types, were selected for assessment. In order to introduce the required contrast for automated counting, 2,3,5-triphenyl-2H-tetrazolium chloride (TTC) dye was added to Todd-Hewitt broth with yeast extract (THY) agar. Growth on THY agar with TTC was compared with growth on blood agar and THY agar to ensure the dye was not detrimental to bacterial growth. Automated colony counts using a ProtoCOL 3 instrument were compared with manual counting to confirm accuracy over the stages of the growth cycle (latent, mid-log and stationary phases) and in a number of different assays. The average percentage differences between plating and counting methods were analysed using the Bland-Altman method. A percentage difference of ±10 % was determined as the cut-off for a critical difference between plating and counting methods. All strains measured had an average difference of less than 10 % when plated on THY agar with TTC. This consistency was also observed over all phases of the growth cycle.
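
    The Bland-Altman percentage-difference comparison described above can be sketched roughly as follows. The counts are invented for illustration and the ±10 % criterion is taken from the abstract; the study's actual processing may differ.

```python
from statistics import mean, stdev

def bland_altman_percent(manual, automated):
    # Difference of each pair expressed as a percentage of the pair mean,
    # the usual Bland-Altman transform for method-agreement analysis.
    diffs = [200 * (a - m) / (a + m) for m, a in zip(manual, automated)]
    bias = mean(diffs)                     # mean percentage difference
    sd = stdev(diffs)
    # 95% limits of agreement around the bias
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

manual    = [210, 180, 95, 300, 240, 150]  # invented manual plate counts
automated = [205, 186, 92, 310, 236, 155]  # invented automated counts
bias, (lo, hi) = bland_altman_percent(manual, automated)
acceptable = abs(bias) < 10                # the study's +/-10% cut-off
```

    A mean percentage difference inside ±10 % (as all strains showed in the study) indicates that the automated counter can replace manual counting without a critical loss of accuracy.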

  16. 78 FR 56718 - Draft Guidance for Industry on Bioanalytical Method Validation; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ...] Draft Guidance for Industry on Bioanalytical Method Validation; Availability AGENCY: Food and Drug... availability of a draft guidance for industry entitled ``Bioanalytical Method Validation.'' The draft guidance is intended to provide recommendations regarding analytical method development and validation for the...

  17. The Value of Qualitative Methods in Social Validity Research

    ERIC Educational Resources Information Center

    Leko, Melinda M.

    2014-01-01

    One quality indicator of intervention research is the extent to which the intervention has a high degree of social validity, or practicality. In this study, I drew on Wolf's framework for social validity and used qualitative methods to ascertain five middle schoolteachers' perceptions of the social validity of System 44®--a phonics-based reading…

  18. Development and Validation of Videotaped Scenarios: A Method for Targeting Specific Participant Groups

    ERIC Educational Resources Information Center

    Noel, Nora E.; Maisto, Stephen A.; Johnson, James D.; Jackson, Lee A., Jr.; Goings, Christopher D.; Hagman, Brett T.

    2008-01-01

    Researchers using scenarios often neglect to validate perceived content and salience of embedded stimuli specifically with intended participants, even when such meaning is integral to the study. For example, sex and aggression stimuli are heavily influenced by culture, so participants may not perceive what researchers intended in sexual aggression…

  19. Development and Validation of the Guided Group Discussion Self-Estimate Inventory (GGD-SEI).

    ERIC Educational Resources Information Center

    Martin, David; Campbell, Bill

    1998-01-01

    A 19-item self-report measure was designed to promote increased self-awareness of a group leader's perceived ability to facilitate small group discussion. Results of analysis show high reliability and validity. The instrument, developed for use within education and training settings, provides a useful measure of guided small-group discussion…

  20. Validation of a semi-quantitative food frequency questionnaire to assess food groups and nutrient intake.

    PubMed

    Macedo-Ojeda, Gabriela; Vizmanos-Lamotte, Barbara; Márquez-Sandoval, Yolanda Fabiola; Rodríguez-Rocha, Norma Patricia; López-Uriarte, Patricia Josefina; Fernández-Ballart, Joan D

    2013-11-01

    Semi-quantitative Food Frequency Questionnaires (FFQs) analyze average food and nutrient intake over extended periods to associate habitual dietary intake with health problems and chronic diseases. A tool of this nature applicable to both women and men is not presently available in Mexico. The aim was to validate an FFQ for adult men and women. The study was conducted on 97 participants, of whom 61% were women. Two FFQs were administered (with a one-year interval) to measure reproducibility. To assess validity, the second FFQ was compared against a dietary record (DR) covering nine days. Statistical analyses included Pearson correlations and Intraclass Correlation Coefficients (ICC). The de-attenuation of the ICC resulting from intraindividual variability was controlled. The validity analysis was complemented by comparing the classification ability of the FFQ to that of the DR through concordance between intake categories and Bland-Altman plots. Reproducibility: ICC values for food groups ranged from 0.42 to 0.87; the range for energy and nutrients was between 0.34 and 0.82. Validity: ICC values for food groups ranged from 0.35 to 0.84; the range for energy and nutrients was between 0.36 and 0.77. Most subjects (56.7-76.3%) classified in the same or an adjacent quintile for energy and nutrients using both methods. Extreme misclassification was <6.3% for all items. Bland-Altman plots reveal high concordance between the FFQ and DR. The FFQ produced sufficient levels of reproducibility and validity to determine average daily intake over one year. These results will enable the analysis of possible associations with chronic diseases and dietary diagnoses in adult populations of men and women. Copyright AULA MEDICA EDICIONES 2013. Published by AULA MEDICA. All rights reserved.
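
    The "same or adjacent quintile" agreement statistic reported in the abstract can be sketched as follows. The intake values below are invented; the study's own quintile construction may differ in detail.

```python
from statistics import quantiles

def quintile(value, cuts):
    # Return the quintile (1-5) of a value given its 4 quintile cut points.
    return 1 + sum(value > c for c in cuts)

def cross_classification(ffq, record):
    # Share of subjects falling in the same or an adjacent quintile under
    # the two methods (FFQ vs. dietary record).
    cuts_f = quantiles(ffq, n=5)      # 4 cut points per method
    cuts_r = quantiles(record, n=5)
    same_or_adjacent = sum(
        abs(quintile(f, cuts_f) - quintile(r, cuts_r)) <= 1
        for f, r in zip(ffq, record)
    )
    return same_or_adjacent / len(ffq)

# invented daily energy intakes (kcal) for 10 subjects under each method
ffq    = [1500, 1700, 1800, 1900, 2000, 2100, 2200, 2400, 2600, 3000]
record = [2000, 1650, 1900, 1850, 1600, 2050, 2300, 2350, 2500, 2800]
agreement = cross_classification(ffq, record)
```

    Values in the study's reported range (0.567-0.763) indicate that the FFQ ranks subjects' intakes broadly the same way as the nine-day dietary record.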

  1. Comparison of Control Group Generating Methods.

    PubMed

    Szekér, Szabolcs; Fogarassy, György; Vathy-Fogarassy, Ágnes

    2017-01-01

    Retrospective studies suffer from drawbacks such as selection bias. As the selection of the control group has a significant impact on the evaluation of the results, it is very important to find the proper method to generate the most appropriate control group. In this paper we suggest two nearest-neighbor-based control group selection methods that aim to achieve good matching between the individuals of the case and control groups. The effectiveness of the proposed methods is evaluated by runtime and accuracy tests, and the results are compared to the classical stratified sampling method.
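
    The nearest-neighbor idea behind such control group selection can be sketched as a greedy match on covariate vectors. This is a simple illustration of the general technique, not the authors' specific algorithms; the covariates and values are invented.

```python
import math

def nn_control_group(cases, pool):
    # Greedy nearest-neighbour matching: for each case, pick the not-yet-used
    # candidate from the pool with the smallest Euclidean distance over the
    # covariate vector.
    used, matches = set(), []
    for case in cases:
        best_i = min(
            (i for i in range(len(pool)) if i not in used),
            key=lambda i: math.dist(case, pool[i]),
        )
        used.add(best_i)
        matches.append(pool[best_i])
    return matches

# covariates: (age, BMI) -- invented values
cases = [(63, 27.0), (48, 31.5)]
pool  = [(70, 22.0), (62, 26.5), (49, 30.9), (30, 24.0)]
controls = nn_control_group(cases, pool)
```

    In practice covariates would be standardized before computing distances so that no single variable dominates the match; that step is omitted here for brevity.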

  2. Teaching method validation in the clinical laboratory science curriculum.

    PubMed

    Moon, Tara C; Legrys, Vicky A

    2008-01-01

    With the Clinical Laboratory Improvement Amendments' (CLIA) final rule, the ability of the Clinical Laboratory Scientist (CLS) to perform method validation has become increasingly important. Knowledge of the statistical methods and procedures used in method validation is imperative for clinical laboratory scientists. However, incorporating these concepts in a CLS curriculum can be challenging, especially at a time of limited resources. This paper provides an outline of one approach to addressing these topics in lecture courses and integrating them in the student laboratory and the clinical practicum for direct application.

  3. An integrated bioanalytical method development and validation approach: case studies.

    PubMed

    Xue, Y-J; Melo, Brian; Vallejo, Martha; Zhao, Yuwen; Tang, Lina; Chen, Yuan-Shek; Keller, Karin M

    2012-10-01

    We proposed an integrated bioanalytical method development and validation approach: (1) method screening based on the analyte's physicochemical properties and metabolism information to determine the most appropriate extraction/analysis conditions; (2) preliminary stability evaluation using both quality control and incurred samples to establish sample collection, storage and processing conditions; (3) mock validation to examine method accuracy and precision and incurred sample reproducibility; and (4) method validation to confirm the results obtained during method development. This integrated approach was applied to the determination of compound I in rat plasma and compound II in rat and dog plasma. The effectiveness of the approach was demonstrated by the superior quality of three method validations: (1) a zero run failure rate; (2) >93% of quality control results within 10% of nominal values; and (3) 99% of incurred samples within 9.2% of the original values. In addition, rat and dog plasma methods for compound II were successfully applied to analyze more than 900 plasma samples obtained from Investigational New Drug (IND) toxicology studies in rats and dogs with near-perfect results: (1) a zero run failure rate; (2) excellent accuracy and precision for standards and quality controls; and (3) 98% of incurred samples within 15% of the original values. Copyright © 2011 John Wiley & Sons, Ltd.

  4. International Harmonization and Cooperation in the Validation of Alternative Methods.

    PubMed

    Barroso, João; Ahn, Il Young; Caldeira, Cristiane; Carmichael, Paul L; Casey, Warren; Coecke, Sandra; Curren, Rodger; Desprez, Bertrand; Eskes, Chantra; Griesinger, Claudius; Guo, Jiabin; Hill, Erin; Roi, Annett Janusch; Kojima, Hajime; Li, Jin; Lim, Chae Hyung; Moura, Wlamir; Nishikawa, Akiyoshi; Park, HyeKyung; Peng, Shuangqing; Presgrave, Octavio; Singer, Tim; Sohn, Soo Jung; Westmoreland, Carl; Whelan, Maurice; Yang, Xingfen; Yang, Ying; Zuang, Valérie

    The development and validation of scientific alternatives to animal testing is important not only from an ethical perspective (implementation of the 3Rs), but also to improve safety assessment decision making with the use of mechanistic information of higher relevance to humans. To be effective in these efforts, it is however imperative that validation centres, industry, regulatory bodies, academia and other interested parties ensure strong international cooperation, cross-sector collaboration and intense communication in the design, execution, and peer review of validation studies. Such an approach is critical to achieving harmonized and more transparent approaches to method validation, peer review and recommendation, which will ultimately expedite the international acceptance of valid alternative methods or strategies by regulatory authorities and their implementation and use by stakeholders. It also achieves greater efficiency and effectiveness by avoiding duplication of effort and leveraging limited resources. With these goals in view, the International Cooperation on Alternative Test Methods (ICATM) was established in 2009 by validation centres from Europe, the USA, Canada and Japan. ICATM was joined by Korea in 2011 and currently also counts Brazil and China as observers. This chapter describes the existing differences across world regions and the major efforts carried out to achieve consistent international cooperation and harmonization in the validation and adoption of alternative approaches to animal testing.

  5. Developmental validation of a Nextera XT mitogenome Illumina MiSeq sequencing method for high-quality samples.

    PubMed

    Peck, Michelle A; Sturk-Andreaggi, Kimberly; Thomas, Jacqueline T; Oliver, Robert S; Barritt-Ross, Suzanne; Marshall, Charla

    2018-05-01

    Generating mitochondrial genome (mitogenome) data from reference samples in a rapid and efficient manner is critical to harnessing the greater power of discrimination of the entire mitochondrial DNA (mtDNA) marker. The method of long-range target enrichment, Nextera XT library preparation, and Illumina sequencing on the MiSeq is a well-established technique for generating mitogenome data from high-quality samples. To this end, a validation was conducted for this mitogenome method processing up to 24 samples simultaneously along with analysis in the CLC Genomics Workbench and utilizing the AQME (AFDIL-QIAGEN mtDNA Expert) tool to generate forensic profiles. This validation followed the Federal Bureau of Investigation's Quality Assurance Standards (QAS) for forensic DNA testing laboratories and the Scientific Working Group on DNA Analysis Methods (SWGDAM) validation guidelines. The evaluation of control DNA, non-probative samples, blank controls, mixtures, and nonhuman samples demonstrated the validity of this method. Specifically, the sensitivity was established at ≥25 pg of nuclear DNA input for accurate mitogenome profile generation. Unreproducible low-level variants were observed in samples with low amplicon yields. Further, variant quality was shown to be a useful metric for identifying sequencing error and crosstalk. Success of this method was demonstrated with a variety of reference sample substrates and extract types. These studies further demonstrate the advantages of using NGS techniques by highlighting the quantitative nature of heteroplasmy detection. The results presented herein from more than 175 samples processed in ten sequencing runs, show this mitogenome sequencing method and analysis strategy to be valid for the generation of reference data. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. VDA, a Method of Choosing a Better Algorithm with Fewer Validations

    PubMed Central

    Kluger, Yuval

    2011-01-01

    The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256

  7. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: the reference model and the archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This makes it possible to combine the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around one fifth of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for serious efforts to improve archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. Processing and validation of JEFF-3.1.1 and ENDF/B-VII.0 group-wise cross section libraries for shielding calculations

    NASA Astrophysics Data System (ADS)

    Pescarini, M.; Sinitsa, V.; Orsi, R.; Frisoni, M.

    2013-03-01

    This paper presents a synthesis of the ENEA-Bologna Nuclear Data Group programme dedicated to generating and validating group-wise cross section libraries for shielding and radiation damage deterministic calculations in nuclear fission reactors, following the data processing methodology recommended in the ANSI/ANS-6.1.2-1999 (R2009) American Standard. The VITJEFF311.BOLIB and VITENDF70.BOLIB fine-group coupled n-γ (199 n + 42 γ - VITAMIN-B6 structure) multi-purpose cross section libraries, based on the Bondarenko method for neutron resonance self-shielding and, respectively, on JEFF-3.1.1 and ENDF/B-VII.0 evaluated nuclear data, were produced in AMPX format using the NJOY-99.259 and the ENEA-Bologna 2007 Revision of the SCAMPI nuclear data processing systems. Two derived broad-group coupled n-γ (47 n + 20 γ - BUGLE-96 structure) working cross section libraries in FIDO-ANISN format for LWR shielding and pressure vessel dosimetry calculations, named BUGJEFF311.BOLIB and BUGENDF70.BOLIB, were generated by the revised version of SCAMPI through problem-dependent cross section collapsing and self-shielding from the cited fine-group libraries. The validation results on criticality safety benchmark experiments for the fine-group libraries, and the preliminary validation results for the broad-group working libraries on the PCA-Replica and VENUS-3 engineering neutron shielding benchmark experiments, are reported in synthesis.

  9. Design and validation of a method for evaluation of interocular interaction.

    PubMed

    Lai, Xin Jie Angela; Alexander, Jack; Ho, Arthur; Yang, Zhikuan; He, Mingguang; Suttle, Catherine

    2012-02-01

    To design a simple viewing system allowing dichoptic masking, and to validate this system in adults and children with normal vision. A Trial Frame Apparatus (TFA) was designed to evaluate interocular interaction. This device consists of a trial frame, a 1 mm pinhole in front of the tested eye and a full or partial occluder in front of the non-tested eye. The difference in visual function in one eye between the full- and partial-occlusion conditions was termed the Interaction Index. In experiment 1, low-contrast acuity was measured in six adults using five types of partial occluder. Interaction Index was compared between these five, and the occluder showing the highest Index was used in experiment 2. In experiment 2, low-contrast acuity, contrast sensitivity, and alignment sensitivity were measured in the non-dominant eye of 45 subjects (15 older adults, 15 young adults, and 15 children), using the TFA and an existing well-validated device (shutter goggles) with full and partial occlusion of the dominant eye. These measurements were repeated on 11 subjects of each group using TFA in the partial-occlusion condition only. Repeatability of visual function measurements using TFA was assessed using the Bland-Altman method and agreement between TFA and goggles in terms of visual functions and interactions was assessed using the Bland-Altman method and t-test. In all three subject groups, the TFA showed a high level of repeatability in all visual function measurements. Contrast sensitivity was significantly poorer when measured using TFA than using goggles (p < 0.05). However, Interaction Index of all three visual functions showed acceptable agreement between TFA and goggles (p > 0.05). The TFA may provide an acceptable method for the study of some forms of dichoptic masking in populations where more complex devices (e.g., shutter goggles) cannot be used.

  10. Workshop Report: Crystal City VI-Bioanalytical Method Validation for Biomarkers.

    PubMed

    Arnold, Mark E; Booth, Brian; King, Lindsay; Ray, Chad

    2016-11-01

With the growing focus on translational research and the use of biomarkers to drive drug development and approvals, biomarkers have become a significant area of research within the pharmaceutical industry. However, until the US Food and Drug Administration's (FDA) 2013 draft guidance on bioanalytical method validation included consideration of biomarker assays using LC-MS and LBA, those assays were created, validated, and used without standards of performance. This lack of expectations resulted in the FDA receiving data of varying quality from assays in support of efficacy and safety claims. The AAPS Crystal City VI (CC VI) Workshop in 2015 was held as the first forum for industry-FDA discussion of the general issues around biomarker measurements (e.g., endogenous levels) and the strengths and weaknesses of specific technologies. The 2-day workshop served to develop a common understanding of the issues around biomarkers among the industrial scientific community, informed the FDA of the current state of the science, and will serve as a basis for further dialogue as experience with biomarkers grows in both groups.

  11. Method validation for chemical composition determination by electron microprobe with wavelength dispersive spectrometer

    NASA Astrophysics Data System (ADS)

    Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.

    2016-07-01

The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One advantage of analytical method validation is the level of confidence it provides in the measurement results reported to satisfy a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis has been used in extremely wide areas, mainly in the field of materials science and in impurity determinations in geological, biological and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house method validation for elemental composition determination by WDS are shown. SRM 482, a set of Cu-Au binary alloys of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently requested for accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed, including systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.

  12. Field validation of the DNPH method for aldehydes and ketones. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Workman, G.S.; Steger, J.L.

    1996-04-01

    A stationary source emission test method for selected aldehydes and ketones has been validated. The method employs a sampling train with impingers containing 2,4-dinitrophenylhydrazine (DNPH) to derivatize the analytes. The resulting hydrazones are recovered and analyzed by high performance liquid chromatography. Nine analytes were studied; the method was validated for formaldehyde, acetaldehyde, propionaldehyde, acetophenone and isophorone. Acrolein, methyl ethyl ketone, methyl isobutyl ketone, and quinone did not meet the validation criteria. The study employed the validation techniques described in EPA Method 301, which uses train spiking to determine bias and collocated sampling trains to determine precision. The studies were carried out at a plywood veneer dryer and a polyester manufacturing plant.

  13. Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.

    PubMed

    Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier

    2017-02-01

    Data to which the authors refer throughout this article are likelihood ratios (LR) computed from the comparison of 5-12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. These data are a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. These data can also be used as a baseline for comparing fingermark evidence in the same minutiae configuration as presented in Meuwly, Ramos and Haraksim [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports of forensic methods, according to [1]. Alongside the data, a justification and motivation for the use of the methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validating the LR methods, which are used to calculate the LR values from the data, and for the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared. However, these images do not constitute the core data for the validation, in contrast to the LRs, which are shared.
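A score-based likelihood ratio of the kind shared in this dataset can be illustrated with a minimal sketch: the LR is the ratio of the likelihood of an observed comparison score under the same-source proposition to its likelihood under the different-source proposition. The distributions and score below are hypothetical, not taken from the article:

```python
# Sketch of a score-based likelihood ratio (LR) computation.
# Score distributions and the observed score are hypothetical.
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical comparison-score models fitted on development data:
mu_ss, sd_ss = 8.0, 1.5      # same-source scores tend to be high
mu_ds, sd_ds = 2.0, 1.0      # different-source scores tend to be low

score = 6.5                  # score of the questioned fingermark comparison
lr = normal_pdf(score, mu_ss, sd_ss) / normal_pdf(score, mu_ds, sd_ds)
print(f"LR = {lr:.1f}")      # LR > 1 supports the same-source proposition
```

In practice, forensic LR systems model the score distributions empirically (e.g. with kernel density estimates) rather than with the Gaussians assumed here.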

  14. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  15. Validate or falsify: Lessons learned from a microscopy method claimed to be useful for detecting Borrelia and Babesia organisms in human blood.

    PubMed

    Aase, Audun; Hajdusek, Ondrej; Øines, Øivind; Quarsten, Hanne; Wilhelmsson, Peter; Herstad, Tove K; Kjelland, Vivian; Sima, Radek; Jalovecka, Marie; Lindgren, Per-Eric; Aaberge, Ingeborg S

    2016-01-01

    A modified microscopy protocol (the LM-method) was used to demonstrate what was interpreted as Borrelia spirochetes, and later also Babesia sp., in peripheral blood from patients. The method gained much publicity but was not validated prior to publication; validation became the purpose of this study, using appropriate scientific methodology including a control group. Blood from 21 patients previously interpreted as positive for Borrelia and/or Babesia infection by the LM-method and from 41 healthy controls without known history of tick bite was collected, blinded and analysed for these pathogens by microscopy in two laboratories (by the LM-method and the conventional method, respectively), by PCR methods in five laboratories and by serology in one laboratory. Microscopy by the LM-method identified structures claimed to be Borrelia and/or Babesia in 66% of the blood samples of the patient group and in 85% of the healthy control group. Microscopy by the conventional method, performed for Babesia only, did not identify Babesia in any sample. PCR analysis detected Borrelia DNA in one sample of the patient group and in eight samples of the control group, whereas Babesia DNA was not detected in any of the blood samples using molecular methods. The structures interpreted as Borrelia and Babesia by the LM-method could not be verified by PCR. The method was thus falsified. This study underlines the importance of proper test validation before new or modified assays are introduced.

  16. A collaborative design method to support integrated care. An ICT development method containing continuous user validation improves the entire care process and the individual work situation

    PubMed Central

    Scandurra, Isabella; Hägglund, Maria

    2009-01-01

    Introduction Integrated care involves different professionals, belonging to different care provider organizations and requires immediate and ubiquitous access to patient-oriented information, supporting an integrated view on the care process [1]. Purpose To present a method for development of usable and work process-oriented information and communication technology (ICT) systems for integrated care. Theory and method Based on Human-computer Interaction Science and in particular Participatory Design [2], we present a new collaborative design method in the context of health information systems (HIS) development [3]. This method implies a thorough analysis of the entire interdisciplinary cooperative work and a transformation of the results into technical specifications, via user validated scenarios, prototypes and use cases, ultimately leading to the development of appropriate ICT for the variety of occurring work situations for different user groups, or professions, in integrated care. Results and conclusions Application of the method in homecare of the elderly resulted in an HIS that was well adapted to the intended user groups. Conducted in multi-disciplinary seminars, the method captured and validated user needs and system requirements for different professionals, work situations, and environments not only for current work; it also aimed to improve collaboration in future (ICT supported) work processes. A holistic view of the entire care process was obtained and supported through different views of the HIS for different user groups, resulting in improved work in the entire care process as well as for each collaborating profession [4].

  17. Concurrent validity of different functional and neuroproteomic pain assessment methods in the rat osteoarthritis monosodium iodoacetate (MIA) model.

    PubMed

    Otis, Colombe; Gervais, Julie; Guillot, Martin; Gervais, Julie-Anne; Gauvin, Dominique; Péthel, Catherine; Authier, Simon; Dansereau, Marc-André; Sarret, Philippe; Martel-Pelletier, Johanne; Pelletier, Jean-Pierre; Beaudry, Francis; Troncy, Eric

    2016-06-23

    Lack of validity in osteoarthritis pain models and assessment methods is suspected. Our goals were to 1) assess the repeatability and reproducibility of measurement and the influence of environment and acclimatization on different pain assessment outcomes in normal rats, and 2) test the concurrent validity of the most reliable methods in relation to the expression of different spinal neuropeptides in a chemical model of osteoarthritic pain. Repeatability and inter-rater reliability of reflexive nociceptive mechanical thresholds, spontaneous static weight-bearing, treadmill, rotarod, and the operant place escape/avoidance paradigm (PEAP) were assessed by the intraclass correlation coefficient (ICC). The most reliable acclimatization protocol was determined by comparing coefficients of variation. In a pilot comparative study, the sensitivity and responsiveness to treatment of the most reliable methods were tested in the monosodium iodoacetate (MIA) model over 21 days. Two MIA (2 mg) groups (including one lidocaine treatment group) and one sham group (0.9% saline) received an intra-articular (50 μL) injection. No effect of environment (observer, inverted circadian cycle, or exercise) was observed; all tested methods except mechanical sensitivity (ICC < 0.3) offered good repeatability (ICC ≥ 0.7). The most reliable acclimatization protocol included five assessments over two weeks. MIA-related osteoarthritic change in pain was demonstrated with static weight-bearing, punctate tactile allodynia evaluation, treadmill exercise and operant PEAP, the latter being the most responsive to analgesic intra-articular lidocaine. Substance P and calcitonin gene-related peptide were higher in MIA groups compared to naive (adjusted P (adj-P) = 0.016) or sham-treated (adj-P = 0.029) rats. Repeated post-MIA lidocaine injection resulted in 34 times lower downregulation of spinal substance P compared to MIA alone (adj-P = 0.029), with a concomitant increase of 17 % in
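The repeatability criterion used above, the intraclass correlation coefficient (ICC ≥ 0.7 deemed good), can be sketched for a simple one-way design as follows; the measurement values are invented purely for illustration:

```python
# One-way ICC (ICC(1,1)) sketch from repeated measurements.
# Rows = subjects, columns = repeated sessions; data are illustrative.
data = [
    [10.1, 10.3, 10.2],
    [12.0, 11.8, 12.1],
    [ 9.5,  9.7,  9.4],
    [11.2, 11.0, 11.3],
]
n, k = len(data), len(data[0])
grand = sum(sum(row) for row in data) / (n * k)
subj_means = [sum(row) / k for row in data]

# Between-subject and within-subject mean squares (one-way ANOVA):
ms_between = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
ms_within = sum((x - m) ** 2 for row, m in zip(data, subj_means) for x in row) / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1,1) = {icc:.3f}")
```

An ICC near 1 means between-subject variability dominates session-to-session noise, i.e. the method is repeatable.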

  18. Moving beyond Traditional Methods of Survey Validation

    ERIC Educational Resources Information Center

    Maul, Andrew

    2017-01-01

    In his focus article, "Rethinking Traditional Methods of Survey Validation," published in this issue of "Measurement: Interdisciplinary Research and Perspectives," Andrew Maul wrote that it is commonly believed that self-report, survey-based instruments can be used to measure a wide range of psychological attributes, such as…

  19. Reproducibility and relative validity of food group intake in a food frequency questionnaire developed for Nepalese diet.

    PubMed

    Shrestha, Archana; Koju, Rajendra Prasad; Beresford, Shirley A A; Chan, Kwun Chuen Gary; Connell, Frederik A; Karmacharya, Biraj Man; Shrestha, Pramita; Fitzpatrick, Annette L

    2017-08-01

    We developed a food frequency questionnaire (FFQ) designed to measure the dietary practices of adult Nepalese. The present study examined the validity and reproducibility of the FFQ. To evaluate the reproducibility of the FFQ, 116 subjects completed two 115-item FFQs across a four-month interval. Six 24-h dietary recalls were collected (one each month) to assess the validity of the FFQ. Seven major food groups and 23 subgroups were clustered from the FFQ based on macronutrient composition. Spearman correlation coefficients evaluating reproducibility were greater than 0.5 for all food groups, with the exception of oil. The correlations varied from 0.41 (oil) to 0.81 (vegetables). All crude Spearman coefficients for validity were greater than 0.5, except those for dairy products, pizzas/pastas and sausages/burgers. The FFQ was found to be reliable and valid for ranking the intake of food groups in the Nepalese diet.
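The reproducibility statistic reported above, the Spearman rank correlation between two FFQ administrations, can be sketched in a few lines; the intake values below are illustrative, not study data:

```python
# Spearman rank correlation sketch for FFQ test-retest reproducibility.
# Intakes are illustrative (servings/day for one food group).

def rank(values):
    """Return 1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend over a tie group
        avg = (i + j) / 2 + 1           # average rank of the tie group
        for idx in order[i:j + 1]:
            ranks[idx] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

ffq1 = [3.0, 1.5, 4.2, 2.0, 5.1, 2.8]   # first administration
ffq2 = [2.8, 1.7, 4.0, 2.1, 4.8, 3.1]   # four months later
print(f"rho = {spearman(ffq1, ffq2):.2f}")
```

A coefficient above 0.5, as in most food groups here, indicates that the FFQ ranks subjects consistently across administrations.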

  20. Validation Process Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John E.; English, Christine M.; Gesick, Joshua C.

    This report documents the validation process as applied to projects awarded through Funding Opportunity Announcements (FOAs) within the U.S. Department of Energy Bioenergy Technologies Office (DOE-BETO). It describes the procedures used to protect and verify project data, as well as the systematic framework used to evaluate and track performance metrics throughout the life of the project. This report also describes the procedures used to validate the proposed process design, cost data, analysis methodologies, and supporting documentation provided by the recipients.

  1. Comparing Thermal Process Validation Methods for Salmonella Inactivation on Almond Kernels.

    PubMed

    Jeong, Sanghyup; Marks, Bradley P; James, Michael K

    2017-01-01

    Ongoing regulatory changes are increasing the need for reliable process validation methods for pathogen reduction processes involving low-moisture products; however, the reliability of various validation methods has not been evaluated. Therefore, the objective was to quantify accuracy and repeatability of four validation methods (two biologically based and two based on time-temperature models) for thermal pasteurization of almonds. Almond kernels were inoculated with Salmonella Enteritidis phage type 30 or Enterococcus faecium (NRRL B-2354) at ~10^8 CFU/g, equilibrated to 0.24, 0.45, 0.58, or 0.78 water activity (aw), and then heated in a pilot-scale, moist-air impingement oven (dry bulb 121, 149, or 177°C; dew point <33.0, 69.4, 81.6, or 90.6°C; air velocity 2.7 m/s) to a target lethality of ~4 log. Almond surface temperatures were measured in two ways, and those temperatures were used to calculate Salmonella inactivation using a traditional (D, z) model and a modified model accounting for process humidity. Among the process validation methods, both methods based on time-temperature models had better repeatability, with replication errors approximately half those of the surrogate (E. faecium). Additionally, the modified model yielded the lowest root mean squared error in predicting Salmonella inactivation (1.1 to 1.5 log CFU/g); in contrast, E. faecium yielded a root mean squared error of 1.2 to 1.6 log CFU/g, and the traditional model yielded an unacceptably high error (3.4 to 4.4 log CFU/g). Importantly, the surrogate and modified model both yielded lethality predictions that were statistically equivalent (α = 0.05) to actual Salmonella lethality. The results demonstrate the importance of methodology, aw, and process humidity when validating thermal pasteurization processes for low-moisture foods, which should help processors select and interpret validation methods to ensure product safety.
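The traditional (D, z) time-temperature model mentioned above predicts log reduction by integrating an instantaneous inactivation rate that scales as 10^((T - Tref)/z). A minimal sketch follows, with parameter values and a temperature profile that are illustrative rather than the study's fitted values for Salmonella on almonds:

```python
# (D, z) log-linear lethality sketch: integrate the instantaneous
# log10-reduction rate over a time-temperature profile (trapezoid rule).
# D_ref, T_ref, z, and the profile are hypothetical, not from the study.

D_ref, T_ref, z = 2.0, 85.0, 15.0   # D-value (min) at T_ref (deg C), z-value (deg C)

def log_reduction(profile):
    """Total predicted log10 reduction for a [(time_min, temp_C), ...] profile."""
    total = 0.0
    for (t0, T0), (t1, T1) in zip(profile, profile[1:]):
        r0 = 10 ** ((T0 - T_ref) / z) / D_ref   # log10 reductions per minute at T0
        r1 = 10 ** ((T1 - T_ref) / z) / D_ref
        total += 0.5 * (r0 + r1) * (t1 - t0)    # trapezoidal integration
    return total

# Hypothetical product surface temperature history during heating:
profile = [(0, 25), (1, 60), (2, 75), (4, 85), (6, 88)]
print(f"predicted log reduction = {log_reduction(profile):.2f}")
```

The study's "modified model" additionally conditions the inactivation rate on process humidity, which this sketch omits.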

  2. Forward ultrasonic model validation using wavefield imaging methods

    NASA Astrophysics Data System (ADS)

    Blackshire, James L.

    2018-04-01

    The validation of forward ultrasonic wave propagation models in a complex titanium polycrystalline material system is accomplished using wavefield imaging methods. An innovative measurement approach is described that permits the visualization and quantitative evaluation of bulk elastic wave propagation and scattering behaviors in the titanium material for a typical focused immersion ultrasound measurement process. Results are provided for the determination and direct comparison of the ultrasonic beam's focal properties, mode-converted shear wave position and angle, and scattering and reflection from millimeter-sized microtexture regions (MTRs) within the titanium material. The approach and results are important with respect to understanding the root-cause backscatter signal responses generated in aerospace engine materials, where model-assisted methods are being used to understand the probabilistic nature of the backscatter signal content. Wavefield imaging methods are shown to be an effective means for corroborating and validating important forward model predictions in a direct manner using time- and spatially-resolved displacement field amplitude measurements.

  3. Content validity across methods of malnutrition assessment in patients with cancer is limited.

    PubMed

    Sealy, Martine J; Nijholt, Willemke; Stuiver, Martijn M; van der Berg, Marit M; Roodenburg, Jan L N; van der Schans, Cees P; Ottery, Faith D; Jager-Wittenaar, Harriët

    2016-08-01

    To identify malnutrition assessment methods in cancer patients and assess their content validity based on internationally accepted definitions for malnutrition. Systematic review of studies in cancer patients that operationalized malnutrition as a variable, published since 1998. Eleven key concepts, within the three domains reflected by the malnutrition definitions acknowledged by European Society for Clinical Nutrition and Metabolism (ESPEN) and the American Society for Parenteral and Enteral Nutrition (ASPEN): A: nutrient balance; B: changes in body shape, body area and body composition; and C: function, were used to classify content validity of methods to assess malnutrition. Content validity indices (M-CVIA-C) were calculated per assessment method. Acceptable content validity was defined as M-CVIA-C ≥ 0.80. Thirty-seven assessment methods were identified in the 160 included articles. Mini Nutritional Assessment (M-CVIA-C = 0.72), Scored Patient-Generated Subjective Global Assessment (M-CVIA-C = 0.61), and Subjective Global Assessment (M-CVIA-C = 0.53) scored highest M-CVIA-C. A large number of malnutrition assessment methods are used in cancer research. Content validity of these methods varies widely. None of these assessment methods has acceptable content validity, when compared against a construct based on ESPEN and ASPEN definitions of malnutrition. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Content Validity of National Post Marriage Educational Program Using Mixed Methods

    PubMed Central

    MOHAJER RAHBARI, Masoumeh; SHARIATI, Mohammad; KERAMAT, Afsaneh; YUNESIAN, Masoud; ESLAMI, Mohammad; MOUSAVI, Seyed Abbas; MONTAZERI, Ali

    2015-01-01

    Background: Although validation of the content of a program is mostly conducted with qualitative methods, this study used both qualitative and quantitative methods to validate the content of the post-marriage training program provided for newly married couples. Content validation is a preliminary step toward obtaining the authorization required to install the program in the country's health care system. Methods: This mixed-methods content validation study was carried out in four steps, with three expert panels. Altogether, 24 expert panelists were involved in three qualitative and quantitative panels: 6 in the first, item-development panel; 12 in the item-reduction panel, 4 of whom also served on the first panel; and 10 executive experts in the last panel, organized to evaluate the psychometric properties (CVR, CVI, and face validity) of 57 educational objectives. Results: The raw data of the post-marriage program had been written by professional experts of the Ministry of Health; using a qualitative expert panel, the content was further developed by generating three topics and refining one topic and its respective content. In the second panel, six further objectives were deleted: three for falling below the agreement cut-off point and three by experts' consensus. In the quantitative assessment, the validity of all items was above 0.8 and their content validity indices (0.8-1) were fully appropriate. Conclusion: This study provided good evidence for the validation and accreditation of the national post-marriage program planned for newly married couples in health centers of the country in the near future. PMID:26056672
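The quantitative indices named in this abstract, Lawshe's content validity ratio (CVR) and the content validity index (CVI), have simple closed forms. A sketch with hypothetical expert ratings:

```python
# Content validity indices sketch. Expert counts and ratings are hypothetical.

def cvr(n_essential, n_experts):
    """Lawshe's content validity ratio: (ne - N/2) / (N/2)."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def i_cvi(ratings, relevant=(3, 4)):
    """Item CVI: share of experts rating the item 3 or 4 on a 4-point relevance scale."""
    return sum(r in relevant for r in ratings) / len(ratings)

# One hypothetical educational objective judged by 10 executive experts:
print(cvr(n_essential=9, n_experts=10))       # 0.8
print(i_cvi([4, 4, 3, 4, 3, 4, 4, 3, 4, 2]))  # 0.9
```

With 10 experts, an objective is typically retained when its CVR exceeds the critical value for that panel size and its I-CVI is at least 0.78-0.80, which is consistent with the 0.8 cut-off reported above.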

  5. Validation of Alternative In Vitro Methods to Animal Testing: Concepts, Challenges, Processes and Tools.

    PubMed

    Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie

    This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, and the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems and software, up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools, and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods and non-testing approaches such as predictive computer models, up to entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. the EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised at the international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities. Validation of alternative methods is conducted through scientific studies assessing two key hypotheses, reliability and relevance of the

  6. Reservoir water level forecasting using group method of data handling

    NASA Astrophysics Data System (ADS)

    Zaji, Amir Hossein; Bonakdari, Hossein; Gharabaghi, Bahram

    2018-06-01

    Accurately forecasted reservoir water level is among the most vital data for efficient reservoir structure design and management. In this study, the group method of data handling is combined with the minimum description length method to develop a very practical and functional model for predicting reservoir water levels. The models' performance is evaluated using two groups of input combinations based on recent days and recent weeks. Four different input combinations are considered in total. The data collected from Chahnimeh#1 Reservoir in eastern Iran are used for model training and validation. To assess the models' applicability in practical situations, the models are made to predict a non-observed dataset for the nearby Chahnimeh#4 Reservoir. According to the results, input combinations (L, L-1) and (L, L-1, L-12) for recent days, with root-mean-squared errors (RMSE) of 0.3478 and 0.3767, respectively, outperform input combinations (L, L-7) and (L, L-7, L-14) for recent weeks, with RMSE of 0.3866 and 0.4378, respectively. Accordingly, (L, L-1) is selected as the best input combination for making 7-day ahead predictions of reservoir water levels.
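The core comparison in this abstract, recent-day versus recent-week lag inputs judged by RMSE, can be mimicked with a toy forecaster that predicts today's level from a single lagged value. The synthetic series below (slow seasonal trend plus noise) is only illustrative and is not reservoir data:

```python
# Lag-input comparison sketch: forecast today's level from yesterday (L-1)
# versus last week (L-7) and compare RMSE. Series is synthetic, not study data.
import math
import random

random.seed(1)
level = [10 + 2 * math.sin(2 * math.pi * t / 365) + random.gauss(0, 0.05)
         for t in range(200)]

def rmse_of_lag(series, lag):
    """RMSE of the naive forecast y_hat[t] = y[t - lag]."""
    errs = [(series[t] - series[t - lag]) ** 2 for t in range(lag, len(series))]
    return math.sqrt(sum(errs) / len(errs))

rmse_1 = rmse_of_lag(level, 1)   # forecast from yesterday's level
rmse_7 = rmse_of_lag(level, 7)   # forecast from last week's level
print(f"RMSE(L-1)={rmse_1:.3f}  RMSE(L-7)={rmse_7:.3f}")
```

Because the level drifts slowly, the one-day lag tracks it more closely than the one-week lag, mirroring the direction of the study's finding (the actual GMDH model fits polynomial combinations of such lag inputs rather than using them directly).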

  7. An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Gao, Dan; Mao, Xuming

    2018-03-01

    A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that "requirement is the origin, design is the basis". So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method for the requirements are introduced in detail, with the hope of providing this experience to other civil jet product design efforts.

  8. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement

    PubMed Central

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-01-01

    Objective The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Methods Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken on all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via AutoCAD software and flexible ruler methods. The current study is accomplished in 2 parts: intratester and intertester evaluations of reliability as well as the validity of the flexible ruler and software methods. Results Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. Conclusions AutoCAD showed to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis. PMID:22654681

  9. Validated stability-indicating spectrofluorimetric methods for the determination of ebastine in pharmaceutical preparations

    PubMed Central

    2011-01-01

    Two sensitive, selective, economical, and validated spectrofluorimetric methods were developed for the determination of ebastine (EBS) in pharmaceutical preparations, depending on reaction with its tertiary amino group. Method I involves condensation of the drug with mixed anhydrides (citric and acetic anhydrides), producing a product with intense fluorescence, which was measured at 496 nm after excitation at 388 nm. Method IIA describes quantitative fluorescence quenching of eosin upon addition of the studied drug, where the decrease in fluorescence intensity was directly proportional to the concentration of ebastine; the fluorescence quenching was measured at 553 nm after excitation at 457 nm. This method was extended (Method IIB) to first- and second-derivative synchronous spectrofluorimetric methods (FDSFS & SDSFS) for the simultaneous analysis of EBS in the presence of its alkaline, acidic, and UV degradation products. The proposed methods were successfully applied for the determination of the studied compound in its dosage forms. The results obtained were in good agreement with those obtained by a comparison method. Both methods were utilized to investigate the kinetics of the degradation of the drug. PMID:21385439

  10. Dynamic Time Warping compared to established methods for validation of musculoskeletal models.

    PubMed

    Gaspar, Martin; Welke, Bastian; Seehaus, Frank; Hurschler, Christof; Schwarze, Michael

    2017-04-11

    By means of multi-body musculoskeletal simulation, important variables such as internal joint forces and moments can be estimated that cannot be measured directly. Validation can proceed by qualitative or quantitative methods. Especially when comparing time-dependent signals, many methods do not perform well, and validation is often limited to qualitative approaches. The aim of the present study was to investigate the capabilities of the Dynamic Time Warping (DTW) algorithm for comparing time series, which can quantify phase as well as amplitude errors. We contrast the sensitivity of DTW with other established metrics: the Pearson correlation coefficient, cross-correlation, the metric according to Geers, RMSE and normalized RMSE. This study is based on two data sets, one representing direct validation and the other indirect validation. Direct validation was performed in the context of clinical gait analysis on trans-femoral amputees fitted with a six-component force-moment sensor. Measured forces and moments from the amputees' socket prosthesis are compared to simulated forces and moments. Indirect validation was performed in the context of surface EMG measurements on a cohort of healthy subjects, with measurements taken of seven muscles of the leg and compared to simulated muscle activations. Regarding direct validation, a positive linear relation is seen between the results of RMSE and nRMSE and those of DTW. For indirect validation, a negative linear relation exists between Pearson correlation and cross-correlation. We propose the DTW algorithm for use in both direct and indirect quantitative validation, as it correlates well with the methods most suitable for each task. However, in direct validation it should be used together with methods producing a dimensional error value, so that results can be interpreted more comprehensibly. Copyright © 2017 Elsevier Ltd. All rights reserved.
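The Dynamic Time Warping algorithm evaluated in this study can be sketched with its standard dynamic-programming recurrence. The two signals below are illustrative (one is a time-shifted copy of the other) and show how DTW tolerates a phase shift that a pointwise metric penalizes:

```python
# Minimal Dynamic Time Warping (DTW) sketch: distance between two time series
# that is robust to phase shifts. Signals are illustrative, not study data.

def dtw_distance(a, b):
    """Classic O(n*m) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

sim      = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0]
measured = [0.0, 0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]  # same shape, delayed

print(dtw_distance(sim, measured))                       # 0.5: shapes align after warping
print(sum(abs(a - b) for a, b in zip(sim, measured)))    # 4.0: pointwise L1 penalizes the shift
```

The small DTW distance versus the large pointwise error is exactly the property the authors exploit: phase error is separated from amplitude error instead of inflating it.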

  11. Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.

    PubMed

    Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy

    2010-02-01

    This paper illustrates the advantages that a fuzzy-based aggregation method can bring to the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for the validation of chemical, biochemical, pharmaceutical and genetic methods have been developed, and ad hoc validation statistics are available and routinely used for in-house and inter-laboratory testing and decision-making. Fuzzy logic allows the information obtained by independent validation statistics to be summarised into one synthetic indicator of overall method performance. The microarray technology, introduced for the simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper and applied to validation data for different genetically modified elements, and conclusions are drawn regarding the analytical results. The fuzzy-logic based rules were shown to be applicable to improve the interpretation of results and facilitate the overall evaluation of the multiplex method.
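
    The aggregation idea can be illustrated with a toy sketch: each validation statistic is mapped to a [0,1] membership and the memberships are combined by expert-weighted averaging. The statistics, limits and weights below are invented for illustration and are not the paper's rule base:

    ```python
    def membership(value, good, bad):
        """Piecewise-linear 'unfavourable' membership: 0 at `good` or
        better, 1 at `bad` or worse, linear in between. The direction
        (smaller-is-better vs larger-is-better) is inferred from the limits."""
        if good < bad:            # smaller values are better
            t = (value - good) / (bad - good)
        else:                     # larger values are better
            t = (good - value) / (good - bad)
        return min(1.0, max(0.0, t))

    def overall_indicator(stats, weights):
        """Aggregate per-statistic memberships into one indicator in [0,1]
        (0 = best) by weighted averaging, mimicking expert-weight
        aggregation in fuzzy multi-criteria evaluation."""
        total = sum(weights.values())
        return sum(weights[k] * m for k, m in stats.items()) / total

    # hypothetical validation statistics for one GMO assay element
    stats = {
        "false_pos_rate": membership(0.04, good=0.0, bad=0.10),
        "false_neg_rate": membership(0.08, good=0.0, bad=0.10),
        "accordance":     membership(0.92, good=1.0, bad=0.70),
    }
    weights = {"false_pos_rate": 1.0, "false_neg_rate": 2.0, "accordance": 1.0}
    print(round(overall_indicator(stats, weights), 3))   # -> 0.567
    ```

    A single indicator of this kind lets patterns of performance across many GMO elements and concentrations be ranked on one scale.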

  12. Method development and validation of potent pyrimidine derivative by UV-VIS spectrophotometer.

    PubMed

    Chaudhary, Anshu; Singh, Anoop; Verma, Prabhakar Kumar

    2014-12-01

    A rapid and sensitive ultraviolet-visible (UV-VIS) spectroscopic method was developed for the estimation of the pyrimidine derivative 6-bromo-3-(6-(2,6-dichlorophenyl)-2-(morpholinomethylamino)pyrimidin-4-yl)-2H-chromen-2-one (BT10M) in bulk form. The pyrimidine derivative was monitored at 275 nm with UV detection, and there is no interference from diluents at 275 nm. The method was found to be linear in the range of 50 to 150 μg/ml. The accuracy and precision were determined and validated statistically. The method was validated according to guidelines. The results showed that the proposed method is suitable for the accurate, precise, and rapid determination of the pyrimidine derivative.

  13. A long-term validation of the modernised DC-ARC-OES solid-sample method.

    PubMed

    Flórián, K; Hassler, J; Förster, O

    2001-12-01

    The validation procedure based on the ISO 17025 standard has been used to study and illustrate both the long-term stability of the calibration process of the DC-ARC solid-sample spectrometric method and the main validation criteria of the method. In the calculation of the validation characteristics depending on linearity (calibration), the fulfilment of prerequisite criteria such as normality and homoscedasticity was also checked. In order to decide whether there are any trends in the time variation of the analytical signal, the von Neumann trend test was also applied and evaluated. Finally, a comparison with similar validation data of the ETV-ICP-OES method was carried out.
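
    The trend test referred to here is conventionally computed as the von Neumann ratio, the mean squared successive difference divided by the variance; values well below 2 indicate a trend. A minimal sketch with invented data (not the paper's signals):

    ```python
    def von_neumann_ratio(x):
        """Von Neumann ratio: mean squared successive difference divided by
        the sample variance. For a trend-free random sequence the ratio is
        close to 2; values well below 2 indicate a trend (positive serial
        correlation) in the analytical signal."""
        n = len(x)
        mean = sum(x) / n
        msd = sum((x[i + 1] - x[i]) ** 2 for i in range(n - 1)) / (n - 1)
        var = sum((v - mean) ** 2 for v in x) / (n - 1)
        return msd / var

    drifting = [i * 0.1 for i in range(30)]   # clear upward drift
    print(von_neumann_ratio(drifting))        # far below 2
    ```

    In a long-term calibration study, a ratio near 2 for the signal time series supports the claim that the calibration is stable.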

  14. Validation of orthopedic postoperative pain assessment methods for dogs: a prospective, blinded, randomized, placebo-controlled study.

    PubMed

    Rialland, Pascale; Authier, Simon; Guillot, Martin; Del Castillo, Jérôme R E; Veilleux-Lemieux, Daphnée; Frank, Diane; Gauvin, Dominique; Troncy, Eric

    2012-01-01

    In the context of translational research, there is growing interest in studying surgical orthopedic pain management approaches that are common to humans and dogs. The validity of postoperative pain assessment methods is uncertain with regard to responsiveness and the potential interference of analgesia. The hypothesis was that video analysis (as a reference), electrodermal activity, and two subjective pain scales (VAS and 4A-VET) would detect different levels of pain intensity in dogs after a standardized trochleoplasty procedure. In this prospective, blinded, randomized study, postoperative pain was assessed in 25 healthy dogs during a 48-hour time frame (T). Pain was managed with placebo (Group 1, n = 10), preemptive and multimodal analgesia (Group 2, n = 5), or preemptive analgesia consisting of oral tramadol (Group 3, n = 10). Changes over time among groups were analyzed using generalized estimating equations. Multivariate regression tested the significance of relationships between pain scales and video analysis. Video analysis identified that one orthopedic behavior, namely 'walking with full weight bearing' of the operated leg, decreased more in Group 1 at T24 (indicative of pain), whereas three behaviors indicative of sedation decreased in Group 2 at T24 (all p<0.004). Electrodermal activity was higher in Group 1 than in Groups 2 and 3 until T1 (p<0.0003). The VAS was not responsive. 4A-VET showed divergent results, as its orthopedic component (4A-VETleg) detected lower pain in Group 2 until T12 (p<0.0009), but its interactive component (4A-VETbeh) was increased in Group 2 from T12 to T48 (p<0.001). Concurrent validity established that 4A-VETleg scored the painful orthopedic condition accurately and that pain assessment through 4A-VETbeh and VAS was severely biased by the sedative side effect of the analgesics. Finally, the video analysis offered a concise template for assessment in dogs with acute orthopedic pain. However, subjective pain

  15. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement.

    PubMed

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-12-01

    The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken of all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via the AutoCAD software and flexible ruler methods. The study was accomplished in two parts: intratester and intertester evaluations of the reliability, as well as the validity, of the flexible ruler and software methods. Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. AutoCAD was shown to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis.
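
    Both the radiographic reference and the software measurement reduce to the Cobb angle between two digitized endplate lines. A minimal sketch of that computation (all coordinates invented, not the study's data):

    ```python
    import math

    def cobb_angle(line1, line2):
        """Cobb angle between two endplate lines, each given as two (x, y)
        points digitized on a lateral radiograph (e.g. the superior endplate
        of L1 and the inferior endplate of L5/S1). Returns degrees in [0, 90]."""
        (x1, y1), (x2, y2) = line1
        (x3, y3), (x4, y4) = line2
        a1 = math.atan2(y2 - y1, x2 - x1)
        a2 = math.atan2(y4 - y3, x4 - x3)
        deg = abs(math.degrees(a1 - a2)) % 180.0
        return min(deg, 180.0 - deg)       # lines have no direction

    # hypothetical digitized endplates on a lateral lumbar radiograph
    upper = ((10.0, 40.0), (60.0, 52.0))
    lower = ((12.0, 10.0), (58.0, -8.0))
    print(round(cobb_angle(upper, lower), 1))
    ```

    Measuring the angle from digitized points rather than by protractor is essentially what a CAD-based method automates.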

  16. Increased efficacy for in-house validation of real-time PCR GMO detection methods.

    PubMed

    Scholtens, I M J; Kok, E J; Hougs, L; Molenaar, B; Thissen, J T N M; van der Voet, H

    2010-03-01

    To improve the efficacy of the in-house validation of GMO detection methods (DNA isolation and real-time PCR, polymerase chain reaction), a study was performed to gain insight into the contribution of the different steps of the GMO detection method to the repeatability and in-house reproducibility. In the present study, 19 methods for (GM) soy, maize, canola and potato were validated in-house, of which 14 on the basis of an 8-day validation scheme using eight different samples and five on the basis of a more concise validation protocol. In this way, data were obtained with respect to the detection limit, accuracy and precision. Also, decision limits were calculated for declaring non-conformance (>0.9%) with 95% reliability. In order to estimate the contribution of the different steps in the GMO analysis to the total variation, variance components were estimated using REML (residual maximum likelihood method). From these components, relative standard deviations for repeatability and reproducibility (RSD(r) and RSD(R)) were calculated. The results showed that not only the PCR reaction but also the factors 'DNA isolation' and 'PCR day' are important contributors to the total variance and should therefore be included in the in-house validation. It is proposed to use a statistical model to estimate these factors from a large dataset of initial validations so that, for similar GMO methods in the future, only the PCR step needs to be validated. The resulting data are discussed in the light of agreed European criteria for qualified GMO detection methods.

  17. Session-RPE Method for Training Load Monitoring: Validity, Ecological Usefulness, and Influencing Factors

    PubMed Central

    Haddad, Monoem; Stylianides, Georgios; Djaoui, Leo; Dellal, Alexandre; Chamari, Karim

    2017-01-01

    Purpose: The aim of this review is to (1) retrieve all data validating the session rating of perceived exertion (session-RPE) method using various criteria, (2) highlight the rationale of this method and its ecological usefulness, and (3) describe factors that can alter RPE and that users of this method should take into consideration. Method: Search engines such as the SPORTDiscus, PubMed, and Google Scholar databases were consulted for studies in English, published between 2001 and 2016, on the validity and usefulness of the session-RPE method. Studies were considered for further analysis when they used the session-RPE method proposed by Foster et al. in 2001. Participants were athletes of any gender, age, or level of competition. Studies in languages other than English were excluded from the analysis of the validity and reliability of the session-RPE method. Other studies were examined to explain the rationale of the session-RPE method and the origin of RPE. Results: A total of 950 studies cited the Foster et al. study that proposed the session-RPE method. Thirty-six studies have examined the validity and reliability of this proposed method using the modified CR-10 scale. Conclusion: These studies confirmed the validity, good reliability, and internal consistency of the session-RPE method in several sports and physical activities with men and women of different age categories (children, adolescents, and adults) across various expertise levels. This method could be used as a stand-alone method for training load (TL) monitoring, though some recommend combining it with other physiological parameters such as heart rate. PMID:29163016
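
    Foster's session-RPE load is simply the CR-10 rating multiplied by session duration in minutes; the derived monotony and strain indices below follow Foster's definitions, with an invented training week for illustration:

    ```python
    def session_load(rpe, minutes):
        """Foster's session-RPE training load: the CR-10 RPE (taken about
        30 min after the session) multiplied by session duration in
        minutes. Result is in arbitrary units (AU)."""
        return rpe * minutes

    def weekly_summary(loads):
        """Weekly load, Foster's monotony (mean / SD of daily loads) and
        strain (weekly load x monotony) for a 7-day block."""
        n = len(loads)
        mean = sum(loads) / n
        sd = (sum((l - mean) ** 2 for l in loads) / (n - 1)) ** 0.5
        total = sum(loads)
        monotony = mean / sd
        return total, monotony, total * monotony

    # one hypothetical training week: (session RPE, duration in minutes)
    week = [session_load(r, m) for r, m in
            [(6, 60), (4, 45), (7, 75), (3, 30), (8, 90), (0, 0), (5, 60)]]
    total, monotony, strain = weekly_summary(week)
    print(total)   # -> 2175  (weekly training load in AU)
    ```

    The appeal noted in the review is exactly this simplicity: one perceived-effort rating and a stopwatch quantify internal load without any equipment.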

  18. Validity in Mixed Methods Research in Education: The Application of Habermas' Critical Theory

    ERIC Educational Resources Information Center

    Long, Haiying

    2017-01-01

    Mixed methods approach has developed into the third methodological movement in educational research. Validity in mixed methods research as an important issue, however, has not been examined as extensively as that of quantitative and qualitative research. Additionally, the previous discussions of validity in mixed methods research focus on research…

  19. Validity of a Measure of Assertiveness

    ERIC Educational Resources Information Center

    Galassi, John P.; Galassi, Merna D.

    1974-01-01

    This study was concerned with further validation of a measure of assertiveness. Concurrent validity was established for the College Self-Expression Scale using the method of contrasted groups and through correlations of self- and judges' ratings of assertiveness. (Author)

  20. Validity and reliability assessment of a peer evaluation method in team-based learning classes.

    PubMed

    Yoon, Hyun Bae; Park, Wan Beom; Myung, Sun-Jung; Moon, Sang Hui; Park, Jun-Bean

    2018-03-01

    Team-based learning (TBL) is increasingly employed in medical education because of its potential to promote active group learning. In TBL, learners are usually asked to assess the contributions of peers within their group to ensure accountability. The purpose of this study is to assess the validity and reliability of a peer evaluation instrument that was used in TBL classes in a single medical school. A total of 141 students were divided into 18 groups in 11 TBL classes. The students were asked to evaluate their peers in the group based on evaluation criteria that were provided to them. We analyzed the comments that were written for the highest and lowest achievers to assess the validity of the peer evaluation instrument. The reliability of the instrument was assessed by examining the agreement among peer ratings within each group of students via intraclass correlation coefficient (ICC) analysis. Most of the students provided reasonable and understandable comments for the high and low achievers within their group, and most of those comments were compatible with the evaluation criteria. The average ICC of each group ranged from 0.390 to 0.863, and the overall average was 0.659. There was no significant difference in inter-rater reliability according to the number of members in the group or the timing of the evaluation within the course. The peer evaluation instrument that was used in the TBL classes was valid and reliable. Providing evaluation criteria and rules seemed to improve the validity and reliability of the instrument.
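
    The agreement statistic reported here, the ICC, can be sketched in its simplest one-way random-effects form, ICC(1,1); the record does not specify which ICC model the study used, and the peer ratings below are invented:

    ```python
    def icc_oneway(ratings):
        """ICC(1,1), one-way random effects. `ratings` is a list of targets,
        each a list of k peer ratings. Agreement approaches 1 as within-target
        variation shrinks relative to between-target variation."""
        n = len(ratings)
        k = len(ratings[0])
        grand = sum(sum(r) for r in ratings) / (n * k)
        row_means = [sum(r) / k for r in ratings]
        msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # between targets
        msw = sum((x - m) ** 2
                  for r, m in zip(ratings, row_means) for x in r) / (n * (k - 1))  # within
        return (msb - msw) / (msb + (k - 1) * msw)

    # hypothetical TBL group: 4 students, each rated by 3 peers
    ratings = [[9, 8, 9], [6, 5, 6], [7, 7, 8], [4, 5, 4]]
    print(round(icc_oneway(ratings), 3))   # -> 0.913
    ```

    An average ICC of 0.659, as reported in the study, corresponds to moderate agreement among peer raters within a group.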

  1. The face of pain--a pilot study to validate the measurement of facial pain expression with an improved electromyogram method.

    PubMed

    Wolf, Karsten; Raedler, Thomas; Henke, Kai; Kiefer, Falk; Mass, Reinhard; Quante, Markus; Wiedemann, Klaus

    2005-01-01

    The purpose of this pilot study was to establish the validity of an improved facial electromyogram (EMG) method for the measurement of facial pain expression. Darwin defined pain in connection with fear as a simultaneous occurrence of eye staring, brow contraction and teeth chattering. Prkachin was the first to use the video-based Facial Action Coding System to measure facial expressions while using four different types of pain triggers, identifying a group of facial muscles around the eyes. The activity of nine facial muscles in 10 healthy male subjects was analyzed. Pain was induced through a laser system with a randomized sequence of different intensities. Muscle activity was measured with a new, highly sensitive and selective facial EMG. The results indicate two groups of muscles as key for pain expression. These results are in concordance with Darwin's definition. As in Prkachin's findings, one muscle group is assembled around the orbicularis oculi muscle, initiating eye staring. The second group consists of the mentalis and depressor anguli oris muscles, which trigger mouth movements. The results demonstrate the validity of the facial EMG method for measuring facial pain expression. Further studies with psychometric measurements, a larger sample size and a female test group should be conducted.

  2. A Renormalisation Group Method. V. A Single Renormalisation Group Step

    NASA Astrophysics Data System (ADS)

    Brydges, David C.; Slade, Gordon

    2015-05-01

    This paper is the fifth in a series devoted to the development of a rigorous renormalisation group method applicable to lattice field theories containing boson and/or fermion fields, and comprises the core of the method. In the renormalisation group method, increasingly large scales are studied in a progressive manner, with an interaction parametrised by a field polynomial which evolves with the scale under the renormalisation group map. In our context, the progressive analysis is performed via a finite-range covariance decomposition. Perturbative calculations are used to track the flow of the coupling constants of the evolving polynomial, but on their own perturbative calculations are insufficient to control error terms and to obtain mathematically rigorous results. In this paper, we define an additional non-perturbative coordinate, which together with the flow of coupling constants defines the complete evolution of the renormalisation group map. We specify conditions under which the non-perturbative coordinate is contractive under a single renormalisation group step. Our framework is essentially combinatorial, but its implementation relies on analytic results developed earlier in the series of papers. The results of this paper are applied elsewhere to analyse the critical behaviour of the 4-dimensional continuous-time weakly self-avoiding walk and of the 4-dimensional n-component |φ|^4 model. In particular, the existence of a logarithmic correction to mean-field scaling for the susceptibility can be proved for both models, together with other facts about critical exponents and critical behaviour.

  3. Groping My Way through the Group Method.

    ERIC Educational Resources Information Center

    Kinnick, B. Jo

    1995-01-01

    Reprints an article originally published in 1951. Argues that the group method should not be foisted on young teachers as the only way to teach. Notes that the group method requires a great deal of preparation and much teacher direction. Suggests that smart teachers will continue to use a variety of teaching methods. (RS)

  4. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics, such as accuracy and precision, are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, and thus have the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on the β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or the β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current

  5. A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation.

    PubMed

    Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf

    2017-07-01

    This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the likelihood ratio framework as defined within the Bayes' inference model. In the context of the inference of identity of source, the likelihood ratio is used to evaluate the strength of the evidence for a trace specimen, e.g. a fingermark, and a reference specimen, e.g. a fingerprint, to originate from common or different sources. Some theoretical aspects of probabilities necessary for this Guideline were discussed prior to its elaboration, which started after a workshop of forensic researchers and practitioners involved in this topic. In the workshop, the following questions were addressed: "which aspects of a forensic evaluation scenario need to be validated?", "what is the role of the LR as part of a decision process?" and "how to deal with uncertainty in the LR calculation?". The question "what to validate?" focuses on the validation methods and criteria, and "how to validate?" deals with the implementation of the validation protocol. Answers to these questions were deemed necessary with several objectives. First, concepts typical for validation standards [1], such as performance characteristics, performance metrics and validation criteria, will be adapted or applied by analogy to the LR framework. Second, a validation strategy will be defined. Third, validation methods will be described. Finally, a validation protocol and an example of a validation report will be proposed, which can be applied to the forensic fields developing and validating LR methods for the evaluation of the strength of evidence at the source level under the following propositions. Copyright © 2016. Published by Elsevier B.V.

  6. Methods for Geometric Data Validation of 3d City Models

    NASA Astrophysics Data System (ADS)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence its quality, and the idea of quality itself is application dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point is correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon-level checks to validate the correctness of each polygon, i.e. closeness of the bounding linear ring and planarity.
    On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges
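
    One of the polygon-level checks mentioned, planarity within a tolerance, can be sketched as a generic Newell-normal test. This is an illustrative formulation, not the CityDoctor implementation, and the tolerance value is invented:

    ```python
    def newell_normal(pts):
        """Newell's method: a robust unit normal for a (possibly noisy)
        3-D polygon given as a list of (x, y, z) vertices."""
        nx = ny = nz = 0.0
        for (x1, y1, z1), (x2, y2, z2) in zip(pts, pts[1:] + pts[:1]):
            nx += (y1 - y2) * (z1 + z2)
            ny += (z1 - z2) * (x1 + x2)
            nz += (x1 - x2) * (y1 + y2)
        length = (nx * nx + ny * ny + nz * nz) ** 0.5
        return nx / length, ny / length, nz / length

    def is_planar(pts, tol=0.01):
        """True if every vertex lies within `tol` (model units) of the
        plane through the polygon's centroid with the Newell normal."""
        n = newell_normal(pts)
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        cz = sum(p[2] for p in pts) / len(pts)
        return all(abs(n[0] * (p[0] - cx) + n[1] * (p[1] - cy)
                       + n[2] * (p[2] - cz)) <= tol for p in pts)

    flat   = [(0, 0, 0), (4, 0, 0), (4, 3, 0), (0, 3, 0)]
    warped = [(0, 0, 0), (4, 0, 0), (4, 3, 1), (0, 3, 0)]   # one lifted corner
    print(is_planar(flat), is_planar(warped))   # True False
    ```

    The choice of `tol` is exactly the kind of tolerance-value discussion the paper highlights: too strict flags digitization noise, too loose passes genuinely warped facades.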

  7. A digital photographic measurement method for quantifying foot posture: validity, reliability, and descriptive data.

    PubMed

    Cobb, Stephen C; James, C Roger; Hjertstedt, Matthew; Kruk, James

    2011-01-01

    Although abnormal foot posture long has been associated with lower extremity injury risk, the evidence is equivocal. Poor intertester reliability of traditional foot measures might contribute to the inconsistency. To investigate the validity and reliability of a digital photographic measurement method (DPMM) technology, the reliability of DPMM-quantified foot measures, and the concurrent validity of the DPMM with clinical-measurement methods (CMMs) and to report descriptive data for DPMM measures with moderate to high intratester and intertester reliability. Descriptive laboratory study. Biomechanics research laboratory. A total of 159 people participated in 3 groups. Twenty-eight people (11 men, 17 women; age  =  25 ± 5 years, height  =  1.71 ± 0.10 m, mass  =  77.6 ± 17.3 kg) were recruited for investigation of intratester and intertester reliability of the DPMM technology; 20 (10 men, 10 women; age  =  24 ± 2 years, height  =  1.71 ± 0.09 m, mass  =  76 ± 16 kg) for investigation of DPMM and CMM reliability and concurrent validity; and 111 (42 men, 69 women; age  =  22.8 ± 4.7 years, height  =  168.5 ± 10.4 cm, mass  =  69.8 ± 13.3 kg) for development of a descriptive data set of the DPMM foot measurements with moderate to high intratester and intertester reliabilities. The dimensions of 10 model rectangles and the 28 participants' feet were measured, and DPMM foot posture was measured in the 111 participants. Two clinicians assessed the DPMM and CMM foot measures of the 20 participants. Validity and reliability were evaluated using mean absolute and percentage errors and intraclass correlation coefficients. Descriptive data were computed from the DPMM foot posture measures. The DPMM technology intratester and intertester reliability intraclass correlation coefficients were 1.0 for each tester and variable. Mean absolute errors were equal to or less than 0.2 mm for the bottom and right-side variables and 0.1° for the

  8. Method of manufacturing semiconductor having group II-group VI compounds doped with nitrogen

    DOEpatents

    Compaan, Alvin D.; Price, Kent J.; Ma, Xianda; Makhratchev, Konstantin

    2005-02-08

    A method of making a semiconductor comprises depositing a group II-group VI compound onto a substrate in the presence of nitrogen using sputtering to produce a nitrogen-doped semiconductor. This method can be used for making a photovoltaic cell using sputtering to apply a back contact layer of group II-group VI compound to a substrate in the presence of nitrogen, the back coating layer being doped with nitrogen. A semiconductor comprising a group II-group VI compound doped with nitrogen, and a photovoltaic cell comprising a substrate on which is deposited a layer of a group II-group VI compound doped with nitrogen, are also included.

  9. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context

    PubMed Central

    Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan

    2012-01-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720

  10. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    PubMed

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
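
    The instability the authors describe is easy to reproduce in miniature: below, a soft-threshold estimator of a sparse mean vector has its threshold tuned by 10-fold CV, and the whole procedure is repeated with different random fold splits. All data and tuning grids are invented; this is a toy analogue of SCAD/Lasso tuning, not the authors' study:

    ```python
    import random

    random.seed(12345)
    p, n = 40, 60                     # 40 coefficients, 60 replicate observations
    true_mu = [0.4] * 5 + [0.0] * 35  # sparse, weak signals (the hard regime)
    data = [[mu + random.gauss(0, 1) for mu in true_mu] for _ in range(n)]

    def soft(v, lam):
        # soft-thresholding: shrink toward zero, exactly zero inside [-lam, lam]
        return (abs(v) - lam) * (1.0 if v > 0 else -1.0) if abs(v) > lam else 0.0

    def cv_selected(seed, m=10, grid=tuple(0.05 * g for g in range(13))):
        """One full run: choose the threshold by m-fold CV on a random fold
        split, then return the number of nonzero fitted coefficients."""
        idx = list(range(n))
        random.Random(seed).shuffle(idx)
        folds = [idx[f::m] for f in range(m)]

        def cv_err(lam):
            err = 0.0
            for fold in folds:
                held = set(fold)
                train = [i for i in idx if i not in held]
                est = [soft(sum(data[i][j] for i in train) / len(train), lam)
                       for j in range(p)]
                err += sum((data[i][j] - est[j]) ** 2
                           for i in fold for j in range(p))
            return err

        best = min(grid, key=cv_err)
        final = [soft(sum(row[j] for row in data) / n, best) for j in range(p)]
        return sum(1 for e in final if e != 0.0)

    counts = [cv_selected(s) for s in range(15)]
    print(min(counts), max(counts))  # model size from a single CV run is itself random
    ```

    Because only the fold assignment changes between runs, any spread in `counts` is pure cross-validation randomness, which is the paper's argument against relying on a single run.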

  11. Thyroid-specific questions on work ability showed known-groups validity among Danes with thyroid diseases.

    PubMed

    Nexo, Mette Andersen; Watt, Torquil; Bonnema, Steen Joop; Hegedüs, Laszlo; Rasmussen, Åse Krogh; Feldt-Rasmussen, Ulla; Bjorner, Jakob Bue

    2015-07-01

    We aimed to identify the best approach to work ability assessment in patients with thyroid disease by evaluating the factor structure, measurement equivalence, known-groups validity, and predictive validity of a broad set of work ability items. Based on the literature and interviews with thyroid patients, 24 work ability items were selected from previous questionnaires, revised, or developed anew. Items were tested among 632 patients with thyroid disease (non-toxic goiter, toxic nodular goiter, Graves' disease (with or without orbitopathy), autoimmune hypothyroidism, and other thyroid diseases), 391 of which had participated in a study 5 years previously. Responses to select items were compared to general population data. We used confirmatory factor analyses for categorical data, logistic regression analyses and tests of differential item function, and head-to-head comparisons of relative validity in distinguishing known groups. Although all work ability items loaded on a common factor, the optimal factor solution included five factors: role physical, role emotional, thyroid-specific limitations, work limitations (without disease attribution), and work performance. The scale on thyroid-specific limitations showed the most power in distinguishing clinical groups and time since diagnosis. A global single item proved useful for comparisons with the general population, and a thyroid-specific item predicted labor market exclusion within the next 5 years (OR 5.0, 95 % CI 2.7-9.1). Items on work limitations with attribution to thyroid disease were most effective in detecting impact on work ability and showed good predictive validity. Generic work ability items remain useful for general population comparisons.

  12. New clinical validation method for automated sphygmomanometer: a proposal by Japan ISO-WG for sphygmomanometer standard.

    PubMed

    Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio

    2007-12-01

    Owing to fast or stepwise cuff deflation, or measuring at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, that is, two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods might not be sufficient to compensate for errors arising from lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing its immunity to LD and BPV with those of the current validation methods (methods 1 and 2). The validation accuracy of the above three methods was assessed in human participants [N=120, 45+/-15.3 years (mean+/-SD)]. An oscillometric automated monitor, the Omron HEM-762, was used as the tested device. When compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting their higher reproducibility of validation. The SD1 by method 2 significantly correlated with the participant's BP (P=0.004), supporting our hypothesis that the increased SD of device error by method 2 is at least partially caused by essential BPV. Method 3 showed a significantly smaller interparticipant SD of device error (SD2) (P=0.0044), suggesting its higher interparticipant consistency of validation. Among the methods for validating the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and highest interparticipant consistency, can be proposed as the most appropriate.

  13. Determination of vitamin C in foods: current state of method validation.

    PubMed

    Spínola, Vítor; Llorent-Martínez, Eulogio J; Castilho, Paula C

    2014-11-21

    Vitamin C is one of the most important vitamins, so reliable information about its content in foodstuffs is a concern to both consumers and quality control agencies. However, the heterogeneity of food matrixes and the potential degradation of this vitamin during its analysis create enormous challenges. This review addresses the development and validation of high-performance liquid chromatography methods for vitamin C analysis in food commodities, during the period 2000-2014. The main characteristics of vitamin C are mentioned, along with the strategies adopted by most authors during sample preparation (freezing and acidification) to avoid vitamin oxidation. After that, the advantages and handicaps of different analytical methods are discussed. Finally, the main aspects concerning method validation for vitamin C analysis are critically discussed. Parameters such as selectivity, linearity, limit of quantification, and accuracy were studied by most authors. Recovery experiments during accuracy evaluation were in general satisfactory, with usual values between 81 and 109%. However, few methods considered vitamin C stability during the analytical process, and the study of the precision was not always clear or complete. Potential future improvements regarding proper method validation are indicated to conclude this review. Copyright © 2014. Published by Elsevier B.V.
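
    Recovery, as evaluated in the accuracy experiments mentioned above, is commonly computed from spiked and unspiked samples. The sketch below uses hypothetical numbers; the function name and values are illustrative only.

```python
def percent_recovery(measured_spiked: float,
                     measured_unspiked: float,
                     amount_added: float) -> float:
    """Fraction of the spiked analyte actually found, as a percentage."""
    return 100.0 * (measured_spiked - measured_unspiked) / amount_added

# Hypothetical vitamin C spike: 50 mg/100 g added to a sample containing 12.4 mg/100 g.
rec = percent_recovery(measured_spiked=60.1, measured_unspiked=12.4, amount_added=50.0)
```

    A result like this one falls inside the 81-109% range the review reports as typical.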

  14. Development and Validation of New Discriminative Dissolution Method for Carvedilol Tablets

    PubMed Central

    Raju, V.; Murthy, K. V. R.

    2011-01-01

    The objective of the present study was to develop and validate a discriminative dissolution method for the evaluation of carvedilol tablets. Different conditions, such as the type and volume of dissolution medium and the rotation speed of the paddle, were evaluated. The best in vitro dissolution profile was obtained using Apparatus II (paddle) at 50 rpm with 900 ml of pH 6.8 phosphate buffer as the dissolution medium. Drug release was evaluated by a high-performance liquid chromatographic method. The dissolution method was validated according to current ICH and FDA guidelines; parameters such as specificity, accuracy, precision and stability were evaluated, and the results obtained were within the acceptable ranges. The dissolution profiles of three different products were compared using ANOVA-based, model-dependent and model-independent methods; the results showed a significant difference between the products. The dissolution test developed and validated was adequate given its high discriminative capacity in differentiating the release characteristics of the products tested, and could be applied to the development and quality control of carvedilol tablets. PMID:22923865

  15. [Method for evaluating the competence of specialists--the validation of 360-degree-questionnaire].

    PubMed

    Nørgaard, Kirsten; Pedersen, Juri; Ravn, Lisbeth; Albrecht-Beste, Elisabeth; Holck, Kim; Fredløv, Maj; Møller, Lars Krag

    2010-04-19

    Assessment of physicians' performance focuses on the quality of their work. The aim of this study was to develop a valid, usable and acceptable multisource feedback assessment tool (MFAT) for hospital consultants. Statements were produced on consultant competencies within non-medical areas like collaboration, professionalism, communication, health promotion, academics and administration. The statements were validated by physicians and later by non-physician professionals after adjustments had been made. In a pilot test, a group of consultants was assessed using the final collection of statements of the MFAT. They received a report with their personal results and subsequently evaluated the assessment method. In total, 66 statements were developed and after validation they were reduced and reformulated to 35. Mean scores for relevance and "easy to understand" of the statements were in the range between "very high degree" and "high degree". In the pilot test, 18 consultants were assessed by themselves, by 141 other physicians and by 125 other professionals in the hospital. About two thirds greatly benefited of the assessment report and half identified areas for personal development. About a third did not want the head of their department to know the assessment results directly; however, two thirds found a potential value in discussing the results with the head. We developed an MFAT for consultants with relevant and understandable statements. A pilot test confirmed that most of the consultants gained from the assessment, but some did not like to share their results with their heads. For these specialists other methods should be used.

  16. Galileo FOC Satellite Group Delay Estimation based on Raw Method and published IOV Metadata

    NASA Astrophysics Data System (ADS)

    Reckeweg, Florian; Schönemann, Erik; Springer, Tim; Enderle, Werner

    2017-04-01

    In December 2016, the European GNSS Agency (GSA) published the Galileo In-Orbit Validation (IOV) satellite metadata. These metadata include, among others, the so-called Galileo satellite group delays, which were measured in an absolute sense by the satellite manufacturer on-ground for all three Galileo frequency bands E1, E5 and E6. Galileo is thus the first Global Navigation Satellite System (GNSS) for which absolute calibration values for satellite on-board group delays have been published. The different satellite group delays for the three frequency bands mean that the signals are not transmitted at exactly the same epoch. Up to now, owing to the lack of absolute group delays, it has been common practice in GNSS analyses to estimate and apply the differences of these satellite group delays, commonly known as differential code biases (DCBs). However, this has the drawback that the determination of the "raw" clock and the absolute ionosphere is not possible. The use of absolute bias calibrations for satellites and receivers is a major step towards more realistic (in a physical sense) clock and atmosphere estimates. The Navigation Support Office at the European Space Operations Centre (ESOC) was involved from the beginning in the validation process of the Galileo metadata. In the work presented here, we use the absolute bias calibrations of the Galileo IOV satellites to estimate and validate the absolute receiver group delays of the ESOC GNSS network and vice versa. The receiver group delays were calibrated, as an example, in a calibration campaign with an IFEN GNSS signal simulator at ESOC. Based on the calibrated network, and making use of the ionosphere constraints given by the IOV satellites, GNSS raw observations are processed to estimate satellite group delays for the operational Galileo Full Operational Capability (FOC) satellites. In addition, "raw" satellite clock offsets are estimated, which are free of the

  17. [Data validation methods and discussion on Chinese materia medica resource survey].

    PubMed

    Zhang, Yue; Ma, Wei-Feng; Zhang, Xiao-Bo; Zhu, Shou-Dong; Guo, Lan-Ping; Wang, Xing-Xing

    2013-07-01

    Since the beginning of the fourth national survey of the Chinese materia medica resources, 22 provinces have conducted pilot surveys. The survey teams have reported an immense amount of data, placing very high demands on the construction of the database system. To ensure quality, it is necessary to check and validate the data in the database system. Data validation is an important method for ensuring the validity, integrity and accuracy of census data. This paper comprehensively introduces the data validation system of the database for the fourth national survey of the Chinese materia medica resources, and further improves the design ideas and procedures of data validation. The purpose of this study is to help the survey work proceed smoothly.

  18. VALUE - Validating and Integrating Downscaling Methods for Climate Change Research

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Benestad, Rasmus; Kotlarski, Sven; Huth, Radan; Hertig, Elke; Wibig, Joanna; Gutierrez, Jose

    2013-04-01

    Our understanding of global climate change is mainly based on General Circulation Models (GCMs) with a relatively coarse resolution. Since climate change impacts are mainly experienced on regional scales, high-resolution climate change scenarios need to be derived from GCM simulations by downscaling. Several projects have been carried out over recent years to validate the performance of statistical and dynamical downscaling, yet several aspects have not been systematically addressed: variability on sub-daily, decadal and longer time-scales, extreme events, spatial variability and inter-variable relationships. Different downscaling approaches such as dynamical downscaling, statistical downscaling and bias correction approaches have not been systematically compared. Furthermore, collaboration between different communities, in particular regional climate modellers, statistical downscalers and statisticians, has been limited. To address these gaps, the EU Cooperation in Science and Technology (COST) action VALUE (www.value-cost.eu) was established. VALUE is a research network, currently with participants from 23 European countries, running from 2012 to 2015. Its main aim is to systematically validate and develop downscaling methods for climate change research in order to improve regional climate change scenarios for use in climate impact studies. Inspired by the co-design idea of the international research initiative "future earth", stakeholders of climate change information have been involved in the definition of the research questions to be addressed and are actively participating in the network. The key idea of VALUE is to identify the relevant weather and climate characteristics required as input for a wide range of impact models and to define an open framework to systematically validate these characteristics. Based on a range of benchmark data sets, in principle every downscaling method can be validated and compared with competing methods. The results of

  19. ePortfolios: The Method of Choice for Validation

    ERIC Educational Resources Information Center

    Scott, Ken; Kim, Jichul

    2015-01-01

    Community colleges have long been institutions of higher education in the arenas of technical education and training, as well as preparing students for transfer to universities. While students are engaged in their student learning outcomes, projects, research, and community service, how have these students validated their work? One method of…

  20. Descriptive analysis of the verbal behavior of a therapist: a known-group validity analysis of the putative behavioral functions involved in clinical interaction.

    PubMed

    Virues-Ortega, Javier; Montaño-Fidalgo, Montserrat; Froján-Parga, María Xesús; Calero-Elvira, Ana

    2011-12-01

    This study analyzes the interobserver agreement and hypothesis-based known-group validity of the Therapist's Verbal Behavior Category System (SISC-INTER). The SISC-INTER is a behavioral observation protocol comprised of a set of verbal categories representing putative behavioral functions of the in-session verbal behavior of a therapist (e.g., discriminative, reinforcing, punishing, and motivational operations). The complete therapeutic process of a clinical case of an individual with marital problems was recorded (10 sessions, 8 hours), and data were arranged in a temporal sequence using 10-min periods. Hypotheses based on the expected performance of the putative behavioral functions portrayed by the SISC-INTER codes across prevalent clinical activities (i.e., assessing, explaining, Socratic method, providing clinical guidance) were tested using autoregressive integrated moving average (ARIMA) models. Known-group validity analyses provided support to all hypotheses. The SISC-INTER may be a useful tool to describe therapist-client interaction in operant terms. The utility of reliable and valid protocols for the descriptive analysis of clinical practice in terms of verbal behavior is discussed. Copyright © 2011. Published by Elsevier Ltd.

  1. Method for appraising model validity of randomised controlled trials of homeopathic treatment: multi-rater concordance study

    PubMed Central

    2012-01-01

    Background A method for assessing the model validity of randomised controlled trials of homeopathy is needed. To date, only conventional standards for assessing intrinsic bias (internal validity) of trials have been invoked, with little recognition of the special characteristics of homeopathy. We aimed to identify relevant judgmental domains to use in assessing the model validity of homeopathic treatment (MVHT). We define MVHT as the extent to which a homeopathic intervention and the main measure of its outcome, as implemented in a randomised controlled trial (RCT), reflect 'state-of-the-art' homeopathic practice. Methods Using an iterative process, an international group of experts developed a set of six judgmental domains, with associated descriptive criteria. The domains address: (I) the rationale for the choice of the particular homeopathic intervention; (II) the homeopathic principles reflected in the intervention; (III) the extent of homeopathic practitioner input; (IV) the nature of the main outcome measure; (V) the capability of the main outcome measure to detect change; (VI) the length of follow-up to the endpoint of the study. Six papers reporting RCTs of homeopathy of varying design were randomly selected from the literature. A standard form was used to record each assessor's independent response per domain, using the optional verdicts 'Yes', 'Unclear', 'No'. Concordance among the eight verdicts per domain, across all six papers, was evaluated using the kappa (κ) statistic. Results The six judgmental domains enabled MVHT to be assessed with 'fair' to 'almost perfect' concordance in each case. For the six RCTs examined, the method allowed MVHT to be classified overall as 'acceptable' in three, 'unclear' in two, and 'inadequate' in one. Conclusion Future systematic reviews of RCTs in homeopathy should adopt the MVHT method as part of a complete appraisal of trial validity. PMID:22510227
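
    Concordance among multiple raters' categorical verdicts, as with the eight verdicts per domain above, is often quantified with Fleiss' kappa. The abstract does not specify which kappa variant was used, so the following is a generic sketch, not the authors' exact computation.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa. counts[i][j] = number of raters assigning subject i
    to category j; every row must sum to the same number of raters m."""
    n = len(counts)
    m = sum(counts[0])
    # Per-subject agreement: proportion of agreeing rater pairs.
    p_i = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts]
    p_bar = sum(p_i) / n
    # Chance agreement from overall category proportions.
    p_j = [sum(row[j] for row in counts) / (n * m) for j in range(len(counts[0]))]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)
```

    With counts[i][j] holding how many raters put paper i into category j ('Yes'/'Unclear'/'No'), a value near 1 indicates 'almost perfect' concordance and a value near 0 chance-level agreement.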

  2. Validation of a spectrophotometric assay method for bisoprolol using picric acid.

    PubMed

    Panainte, Alina-Diana; Bibire, Nela; Tântaru, Gladiola; Apostu, M; Vieriu, Mădălina

    2013-01-01

    Bisoprolol is a beta-blocker drug used primarily for the treatment of cardiovascular diseases. A spectrophotometric method for the quantitative determination of bisoprolol was developed, based on the formation of a complex between bisoprolol and picric acid. The bisoprolol-picric acid complex has a maximum absorbance at 420 nm. Optimum working conditions were established and the method was validated. The method presented good linearity in the concentration range 5-120 microg/ml (regression coefficient r2 = 0.9992). The RSD was 1.74 for method precision and 1.43 for intermediate precision, and recovery values ranged between 98.25 and 101.48%. The proposed and validated spectrophotometric method for the determination of bisoprolol is simple and cost effective.
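
    Linearity figures like the r2 = 0.9992 above come from an ordinary least-squares fit of the calibration curve. The concentrations and absorbances below are hypothetical, chosen only to illustrate the computation.

```python
def linear_fit(x, y):
    """Least-squares line y = slope*x + intercept, plus the coefficient of
    determination r2 of the fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy ** 2 / (sxx * syy)
    return slope, intercept, r2

# Hypothetical calibration: absorbance at 420 nm vs concentration (microg/ml).
conc = [5, 20, 40, 60, 80, 120]
absorb = [0.042, 0.168, 0.335, 0.505, 0.668, 1.003]
slope, intercept, r2 = linear_fit(conc, absorb)
```

    An r2 close to 1 over the working range is what supports a claim of good linearity.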

  3. Challenges to validity in single-group interrupted time series analysis.

    PubMed

    Linden, Ariel

    2017-04-01

    Single-group interrupted time series analysis (ITSA) is a popular evaluation methodology in which a single unit of observation is studied; the outcome variable is serially ordered as a time series, and the intervention is expected to "interrupt" the level and/or trend of the time series, subsequent to its introduction. The most common threat to validity is history-the possibility that some other event caused the observed effect in the time series. Although history limits the ability to draw causal inferences from single ITSA models, it can be controlled for by using a comparable control group to serve as the counterfactual. Time series data from 2 natural experiments (effect of Florida's 2000 repeal of its motorcycle helmet law on motorcycle fatalities and California's 1988 Proposition 99 to reduce cigarette sales) are used to illustrate how history biases results of single-group ITSA results-as opposed to when that group's results are contrasted to those of a comparable control group. In the first example, an external event occurring at the same time as the helmet repeal appeared to be the cause of a rise in motorcycle deaths, but was only revealed when Florida was contrasted with comparable control states. Conversely, in the second example, a decreasing trend in cigarette sales prior to the intervention raised question about a treatment effect attributed to Proposition 99, but was reinforced when California was contrasted with comparable control states. Results of single-group ITSA should be considered preliminary, and interpreted with caution, until a more robust study design can be implemented. © 2016 John Wiley & Sons, Ltd.
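
    The single-group ITSA model described above is typically estimated as a segmented regression with terms for the baseline level and trend plus a level change and trend change at the interruption. A minimal sketch with synthetic data (not the Florida or California series):

```python
import numpy as np

# Synthetic monthly outcome: flat at 100 before the interruption, 120 after.
t = np.arange(24, dtype=float)
interruption = 12
y = np.where(t < interruption, 100.0, 120.0)

# Design matrix: intercept, baseline trend, level change, post-interruption trend change.
post = (t >= interruption).astype(float)
X = np.column_stack([np.ones_like(t), t, post, post * (t - interruption)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[2] is the estimated immediate level change at the interruption.
```

    As the abstract stresses, a jump in beta[2] alone cannot rule out history effects; contrasting the series against a comparable control group is what licenses a causal reading.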

  4. Reliability and validity of non-radiographic methods of thoracic kyphosis measurement: a systematic review.

    PubMed

    Barrett, Eva; McCreesh, Karen; Lewis, Jeremy

    2014-02-01

    A wide array of instruments is available for non-invasive thoracic kyphosis measurement. Guidelines for selecting outcome measures for use in clinical and research practice recommend that properties such as validity and reliability be considered. This systematic review reports on the reliability and validity of non-invasive methods for measuring thoracic kyphosis. A systematic search of 11 electronic databases located studies assessing the reliability and/or validity of non-invasive thoracic kyphosis measurement techniques. Two independent reviewers used a critical appraisal tool to assess the quality of the retrieved studies. Data were extracted by the primary reviewer. The results were synthesized qualitatively using a level-of-evidence approach. 27 studies satisfied the eligibility criteria and were included in the review. Reliability, validity, and both reliability and validity were investigated by sixteen, two and nine studies respectively. 17/27 studies were deemed to be of high quality. In total, 15 methods of thoracic kyphosis measurement were evaluated in the retrieved studies. All investigated methods showed high (ICC ≥ .7) to very high (ICC ≥ .9) levels of reliability. The validity of the methods ranged from low to very high. The strongest level of evidence for reliability supports the Debrunner kyphometer, Spinal Mouse and Flexicurve index, and for validity supports the arcometer and Flexicurve index. Further reliability and validity studies are required by future research to strengthen the level of evidence for the remaining methods of measurement. Copyright © 2013 Elsevier Ltd. All rights reserved.
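
    The ICC thresholds cited above (≥ .7 high, ≥ .9 very high) refer to intraclass correlation coefficients. ICCs come in several forms; the sketch below implements only the simple one-way random-effects, single-measures form, ICC(1,1), as an illustration, not necessarily the form used by the reviewed studies.

```python
def icc_one_way(ratings):
    """ICC(1,1): one-way random effects, single measures.
    ratings[i][j] = rating of subject i by rater j."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    subj_means = [sum(row) / k for row in ratings]
    # Between-subjects and within-subjects mean squares.
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, subj_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

    Values approach 1 when raters agree perfectly on every subject and shrink as within-subject disagreement grows relative to between-subject spread.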

  5. A Comparison of Two-Group Classification Methods

    ERIC Educational Resources Information Center

    Holden, Jocelyn E.; Finch, W. Holmes; Kelley, Ken

    2011-01-01

    The statistical classification of "N" individuals into "G" mutually exclusive groups when the actual group membership is unknown is common in the social and behavioral sciences. The results of such classification methods often have important consequences. Among the most common methods of statistical classification are linear discriminant analysis,…

  6. Prognostics of Power Electronics, Methods and Validation Experiments

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.; Celaya, Jose R.; Biswas, Gautam; Goebel, Kai

    2012-01-01

    Failure of electronic devices is a concern for future electric aircraft, which will see an increase in electronics to drive and control safety-critical equipment throughout the aircraft. As a result, investigation of precursors to failure in electronics and prediction of the remaining life of electronic components are of key importance. DC-DC power converters are power electronics systems typically employed as sourcing elements for avionics equipment. Current research efforts in prognostics for these power systems focus on the identification of failure mechanisms and the development of accelerated aging methodologies and systems to accelerate the aging process of test devices, while continuously measuring key electrical and thermal parameters. Preliminary model-based prognostics algorithms have been developed making use of empirical degradation models and physics-inspired degradation models, with focus on key components such as electrolytic capacitors and power MOSFETs (metal-oxide-semiconductor field-effect transistors). This paper presents current results on the development of validation methods for prognostics algorithms for power electrolytic capacitors, particularly the use of accelerated aging systems for algorithm validation. Validation of prognostics algorithms presents difficulties in practice due to the lack of run-to-failure experiments in deployed systems. By using accelerated experiments, we circumvent this problem in order to define initial validation activities.

  7. Specification and Preliminary Validation of IAT (Integrated Analysis Techniques) Methods: Executive Summary.

    DTIC Science & Technology

    1985-03-01

    This report describes the conceptual framework and preliminary validation of IAT (Integrated Analysis Techniques) concepts. Planned work for FY85, including more extensive validation, is also described. The approach: 1) Identify needs and requirements for IAT. 2) Develop the IAT conceptual framework. 3) Validate IAT methods. 4) Develop applications materials.

  8. Valid statistical inference methods for a case-control study with missing data.

    PubMed

    Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun

    2018-04-01

    The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random, by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion of the Wald test for testing independence under the two existing sampling distributions could be completely different (even contradictory) from that of the Wald test for testing the equality of the success probabilities in the control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.
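
    The Wald test for equality of success probabilities in the case and control groups, mentioned above, reduces to a z statistic on the difference of two proportions. The counts below are hypothetical, not the cervical cancer data.

```python
import math

def wald_test_two_proportions(x1, n1, x2, n2):
    """Wald z statistic for H0: p1 == p2, using unpooled standard errors."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# Hypothetical exposure counts: 40/100 among cases, 25/100 among controls.
z = wald_test_two_proportions(x1=40, n1=100, x2=25, n2=100)
```

    Under H0 the statistic is approximately standard normal, so |z| > 1.96 rejects at the 5% level; with missing data, the standard errors would instead come from the sampling distribution the paper derives.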

  9. Considerations regarding the validation of chromatographic mass spectrometric methods for the quantification of endogenous substances in forensics.

    PubMed

    Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra

    2018-02-01

    The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances are discussed, which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), matrix effects and recovery have to be approached differently. The highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. An individual and dynamic Body Segment Inertial Parameter validation method using ground reaction forces.

    PubMed

    Hansen, Clint; Venture, Gentiane; Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice

    2014-05-07

    Over the last decades a variety of research has been conducted with the goal of improving Body Segment Inertial Parameter (BSIP) estimations, but to our knowledge a real validation has never been completely successful, because no ground truth is available. The aim of this paper is to propose a validation method for a BSIP identification method (IM) and to confirm the results by comparing contact forces recalculated using inverse dynamics with those obtained by a force plate. Furthermore, the results are compared with the estimation method recently proposed by Dumas et al. (2007). Additionally, the results are cross-validated with a high-velocity overarm throwing movement. Across all conditions, higher correlations, smaller metrics and smaller RMSE were found for the proposed BSIP estimation method (IM), which shows its advantage compared with recently proposed methods such as that of Dumas et al. (2007). The purpose of the paper is to validate an already proposed method and to show that this method can be of significant advantage compared with conventional methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Display format, highlight validity, and highlight method: Their effects on search performance

    NASA Technical Reports Server (NTRS)

    Donner, Kimberly A.; Mckay, Tim D.; Obrien, Kevin M.; Rudisill, Marianne

    1991-01-01

    Display format and highlight validity have been shown to affect visual display search performance; however, those studies were conducted on small, artificial displays of alphanumeric stimuli. A study manipulating these variables was conducted using realistic, complex Space Shuttle information displays. A 2x2x3 within-subjects analysis of variance found that search times were faster for items in reformatted displays than in current displays. Responses to validly highlighted items were significantly faster than responses to unhighlighted or invalidly highlighted items. The significant format by highlight validity interaction showed that there was little difference in response time between current and reformatted displays when highlighting was validly applied; however, under the no-highlight or invalid-highlight conditions, search times were faster with reformatted displays. A separate within-subjects analysis of variance of display format, highlight validity, and several highlight methods did not reveal a main effect of highlight method. In addition, observed display search times were compared to search times predicted by Tullis' Display Analysis Program. The benefits of highlighting and reformatting displays to enhance search, and the necessity of considering highlight validity and format characteristics in tandem when predicting search performance, are discussed.

  12. Validation of Gujarati Version of ABILOCO-Kids Questionnaire

    PubMed Central

    Diwan, Jasmin; Patel, Pankaj; Bansal, Ankita B.

    2015-01-01

    Background ABILOCO-Kids is a measure of locomotion ability for children with cerebral palsy (CP) aged 6 to 15 years and is available in English and French. Aim To validate the Gujarati version of the ABILOCO-Kids questionnaire for use in clinical research on the Gujarati population. Materials and Methods The ABILOCO-Kids questionnaire was translated into Gujarati from English using the forward-backward-forward method. To ensure the face and content validity of the Gujarati version using the group consensus method, each item was examined by a group of experts with a mean experience of 24.62 years in the field of paediatrics and paediatric physiotherapy. Each item was analysed for content, meaning, wording, format, ease of administration and scoring. Each item was scored by the expert group as accepted, rejected or accepted with modification. The procedure was continued until 80% consensus was reached for all items. Concurrent validity was examined in 55 children with cerebral palsy (6-15 years) of all Gross Motor Function Classification System (GMFCS) levels and all clinical types by correlating the ABILOCO-Kids score with the Gross Motor Function Measure (GMFM) and GMFCS. Results In phase 1 of validation, 16 items were accepted as is, 22 items were accepted with modification and 3 items went to phase 2 validation. For concurrent validity, a highly significant positive correlation was found between the ABILOCO-Kids score and total GMFM (r=0.713, p<0.005) and a highly significant negative correlation with GMFCS (r= -0.778, p<0.005). Conclusion The Gujarati translated version of the ABILOCO-Kids questionnaire has good face, content and concurrent validity and can be used to measure caregiver-reported locomotion ability in children with CP. PMID:26557603

  13. Validity and reliability of a method for assessment of cervical vertebral maturation.

    PubMed

    Zhao, Xiao-Guang; Lin, Jiuxiang; Jiang, Jiu-Hui; Wang, Qingzhu; Ng, Sut Hong

    2012-03-01

    To evaluate the validity and reliability of the cervical vertebral maturation (CVM) method with a longitudinal sample. Eighty-six cephalograms from 18 subjects (5 males and 13 females) were selected from the longitudinal database. Total mandibular length was measured on each film; its rate of increase served as the gold standard in examining the validity of the CVM method. Eleven orthodontists, after receiving intensive training in the CVM method, evaluated all films twice. Kendall's W and the weighted kappa statistic were employed. Kendall's W values were higher than 0.8 at both times, indicating strong interobserver reproducibility, but interobserver agreement was documented twice at less than 50%. A wide range of intraobserver agreement was noted (40.7%-79.1%), and substantial intraobserver reproducibility was indicated by kappa values (0.53-0.86). With regard to validity, moderate agreement was reported between the gold standard and observer staging at the initial time (kappa values 0.44-0.61). However, agreement seemed to be unacceptable for clinical use, especially in cervical stage 3 (26.8%). Even though the validity and reliability of the CVM method proved statistically acceptable, we suggest that many other growth indicators should be taken into consideration in evaluating adolescent skeletal maturation.
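
    Kendall's W, used above to summarize interobserver reproducibility, measures agreement among m raters ranking n items (1 = perfect agreement, 0 = none). A minimal sketch for the tie-free case:

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance W, without tie correction.
    rankings[r][i] = rank that rater r assigns to item i (1..n, no ties)."""
    m, n = len(rankings), len(rankings[0])
    # Column totals of ranks, and their expected value under no preference.
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = m * (n + 1) / 2
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))
```

    An analysis like the one above would additionally need the tie-corrected form of W, since several observers may assign the same CVM stage; that correction is omitted here.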

  14. A Digital Photographic Measurement Method for Quantifying Foot Posture: Validity, Reliability, and Descriptive Data

    PubMed Central

    Cobb, Stephen C.; James, C. Roger; Hjertstedt, Matthew; Kruk, James

    2011-01-01

    Context: Although abnormal foot posture has long been associated with lower extremity injury risk, the evidence is equivocal. Poor intertester reliability of traditional foot measures might contribute to the inconsistency. Objectives: To investigate the validity and reliability of a digital photographic measurement method (DPMM) technology, the reliability of DPMM-quantified foot measures, and the concurrent validity of the DPMM with clinical-measurement methods (CMMs), and to report descriptive data for DPMM measures with moderate to high intratester and intertester reliability. Design: Descriptive laboratory study. Setting: Biomechanics research laboratory. Patients or Other Participants: A total of 159 people participated in 3 groups. Twenty-eight people (11 men, 17 women; age = 25 ± 5 years, height = 1.71 ± 0.10 m, mass = 77.6 ± 17.3 kg) were recruited for investigation of intratester and intertester reliability of the DPMM technology; 20 (10 men, 10 women; age = 24 ± 2 years, height = 1.71 ± 0.09 m, mass = 76 ± 16 kg) for investigation of DPMM and CMM reliability and concurrent validity; and 111 (42 men, 69 women; age = 22.8 ± 4.7 years, height = 168.5 ± 10.4 cm, mass = 69.8 ± 13.3 kg) for development of a descriptive data set of the DPMM foot measurements with moderate to high intratester and intertester reliabilities. Intervention(s): The dimensions of 10 model rectangles and the 28 participants' feet were measured, and DPMM foot posture was measured in the 111 participants. Two clinicians assessed the DPMM and CMM foot measures of the 20 participants. Main Outcome Measure(s): Validity and reliability were evaluated using mean absolute and percentage errors and intraclass correlation coefficients. Descriptive data were computed from the DPMM foot posture measures. Results: The DPMM technology intratester and intertester reliability intraclass correlation coefficients were 1.0 for

  15. Item difficulty and item validity for the Children's Group Embedded Figures Test.

    PubMed

    Rusch, R R; Trigg, C L; Brogan, R; Petriquin, S

    1994-02-01

    The validity and reliability of the Children's Group Embedded Figures Test was reported for students in Grade 2 by Cromack and Stone in 1980; however, a search of the literature indicates no evidence for internal consistency or item analysis. Hence the purpose of this study was to examine the item difficulty and item validity of the test with children in Grades 1 and 2. Confusion in the literature over development and use of this test was seemingly resolved through analysis of these descriptions and through an interview with the test developer. One early-appearing item was unreasonably difficult. Two or three other items were quite difficult and made little contribution to the total score. Caution is recommended, however, in any reordering or elimination of items based on these findings, given the limited number of subjects (n = 84).

  16. Validation of a Group-Administered Pictorial Dietary Recall with 9- to 11-Year-Old Children

    ERIC Educational Resources Information Center

    Wallen, Victoria; Cunningham-Sabo, Leslie; Auld, Garry; Romaniello, Cathy

    2011-01-01

    Objective: Determine validity of Day in the Life Questionnaire-Colorado (DILQ-CO) as a dietary assessment tool for classroom-administered use. Methods: Agreement between DILQ-CO responses and weighed plate waste measured in 125 fourth-grade students in 2 low-income schools. Validity assessed by comparing reported school lunch items and portion…

  17. Development, validation and matrix effect of a QuEChERS method for the analysis of organochlorine pesticides in fish tissue.

    PubMed

    Stremel, Tatiana R De O; Domingues, Cinthia E; Zittel, Rosimara; Silva, Cleber P; Weinert, Patricia L; Monteiro, Franciele C; Campos, Sandro X

    2018-04-03

This study aimed to develop and validate a method to determine OCPs in fish tissues that minimizes the consumption of sample and reagents, using a modified QuEChERS procedure along with ultrasound, d-SPE and gas chromatography with an electron capture detector (GC-ECD), without the need for sample pooling. Different factorial designs were employed to optimize the sample preparation phase. The validated method presented recoveries between 77.3% and 110.8%, with RSD lower than 13%, and detection limits between 0.24 and 2.88 μg kg⁻¹, indicating good sensitivity and accuracy. The method was satisfactorily applied to the analysis of tissues from different species of fish, and OCP residues were detected. The proposed method proved effective for determining low concentrations of OCPs in fish tissues using a small sample mass (0.5 g), making sample analyses viable without the need for grouping (pooling).
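The recovery and RSD figures quoted above are standard validation figures of merit. For replicate analyses of a spiked sample they can be computed as follows (a generic sketch; the function name and example numbers are illustrative, not taken from the study):

```python
from statistics import mean, stdev

def recovery_and_rsd(measured, spiked):
    """Percent recovery and relative standard deviation (RSD, %) for
    replicate analyses of a sample spiked at a known concentration."""
    recoveries = [100 * m / spiked for m in measured]
    rsd = 100 * stdev(measured) / mean(measured)
    return mean(recoveries), rsd

# e.g. three replicates of a sample spiked at 10 ug/kg
rec, rsd = recovery_and_rsd([9.0, 10.0, 11.0], 10.0)
```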

  18. Practicable group testing method to evaluate weight/weight GMO content in maize grains.

    PubMed

    Mano, Junichi; Yanaka, Yuka; Ikezu, Yoko; Onishi, Mari; Futo, Satoshi; Minegishi, Yasutaka; Ninomiya, Kenji; Yotsuyanagi, Yuichi; Spiegelhalter, Frank; Akiyama, Hiroshi; Teshima, Reiko; Hino, Akihiro; Naito, Shigehiro; Koiwa, Tomohiro; Takabatake, Reona; Furui, Satoshi; Kitta, Kazumi

    2011-07-13

    Because of the increasing use of maize hybrids with genetically modified (GM) stacked events, the established and commonly used bulk sample methods for PCR quantification of GM maize in non-GM maize are prone to overestimate the GM organism (GMO) content, compared to the actual weight/weight percentage of GM maize in the grain sample. As an alternative method, we designed and assessed a group testing strategy in which the GMO content is statistically evaluated based on qualitative analyses of multiple small pools, consisting of 20 maize kernels each. This approach enables the GMO content evaluation on a weight/weight basis, irrespective of the presence of stacked-event kernels. To enhance the method's user-friendliness in routine application, we devised an easy-to-use PCR-based qualitative analytical method comprising a sample preparation step in which 20 maize kernels are ground in a lysis buffer and a subsequent PCR assay in which the lysate is directly used as a DNA template. This method was validated in a multilaboratory collaborative trial.
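The pool-based statistical evaluation described above can be illustrated with the standard group-testing estimator: assuming kernels are sampled independently and a pool tests positive whenever it contains at least one GM kernel, the per-kernel GM fraction has a closed-form maximum-likelihood estimate. This is a textbook construction, not necessarily the paper's exact procedure, and the function name is illustrative:

```python
def estimate_gmo_fraction(positive_pools, total_pools, kernels_per_pool=20):
    """MLE of the per-kernel GM fraction from qualitative pool results.

    Assumes independent sampling and a perfectly sensitive assay, so
    P(pool negative) = (1 - p)^k, giving p = 1 - (m_neg/N)^(1/k).
    """
    if positive_pools == total_pools:
        raise ValueError("All pools positive: estimate is unbounded; use more or smaller pools.")
    neg_rate = 1 - positive_pools / total_pools   # observed P(pool negative)
    return 1 - neg_rate ** (1 / kernels_per_pool)

# e.g. 3 positive pools out of 30, with 20 kernels per pool
p_hat = estimate_gmo_fraction(3, 30, 20)
```

Because the estimate is on a per-kernel basis, a stacked-event kernel counts once regardless of how many GM events it carries, which is the property the abstract highlights.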

  19. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  20. Triangulation, Respondent Validation, and Democratic Participation in Mixed Methods Research

    ERIC Educational Resources Information Center

    Torrance, Harry

    2012-01-01

    Over the past 10 years or so the "Field" of "Mixed Methods Research" (MMR) has increasingly been exerting itself as something separate, novel, and significant, with some advocates claiming paradigmatic status. Triangulation is an important component of mixed methods designs. Triangulation has its origins in attempts to validate research findings…

  1. Factorial Validity, Reliability, and Measurement Equivalence of the Noctcaelador Inventory across Three Ethnic Groups

    ERIC Educational Resources Information Center

    Kelly, William E.

    2008-01-01

    This study examined the reliability, factorial validity, and measurement equivalence of the Noctcaelador Inventory (NI) among three ethnic groups of college students. Participants included 200 Whites, 200 African Americans, and 200 Latino/Hispanics. The results indicated that although the African American sample scored slightly lower than the…

  2. When Educational Material Is Delivered: A Mixed Methods Content Validation Study of the Information Assessment Method

    PubMed Central

    2017-01-01

    Background The Information Assessment Method (IAM) allows clinicians to report the cognitive impact, clinical relevance, intention to use, and expected patient health benefits associated with clinical information received by email. More than 15,000 Canadian physicians and pharmacists use the IAM in continuing education programs. In addition, information providers can use IAM ratings and feedback comments from clinicians to improve their products. Objective Our general objective was to validate the IAM questionnaire for the delivery of educational material (ecological and logical content validity). Our specific objectives were to measure the relevance and evaluate the representativeness of IAM items for assessing information received by email. Methods A 3-part mixed methods study was conducted (convergent design). In part 1 (quantitative longitudinal study), the relevance of IAM items was measured. Participants were 5596 physician members of the Canadian Medical Association who used the IAM. A total of 234,196 ratings were collected in 2012. The relevance of IAM items with respect to their main construct was calculated using descriptive statistics (relevance ratio R). In part 2 (qualitative descriptive study), the representativeness of IAM items was evaluated. A total of 15 family physicians completed semistructured face-to-face interviews. For each construct, we evaluated the representativeness of IAM items using a deductive-inductive thematic qualitative data analysis. In part 3 (mixing quantitative and qualitative parts), results from quantitative and qualitative analyses were reviewed, juxtaposed in a table, discussed with experts, and integrated. Thus, our final results are derived from the views of users (ecological content validation) and experts (logical content validation). Results Of the 23 IAM items, 21 were validated for content, while 2 were removed. 
In part 1 (quantitative results), 21 items were deemed relevant, while 2 items were deemed not relevant.

  3. Risky Group Decision-Making Method for Distribution Grid Planning

    NASA Astrophysics Data System (ADS)

    Li, Cunbin; Yuan, Jiahang; Qi, Zhiqiang

    2015-12-01

With the rapid growth of electricity use and of renewable energy, distribution grid planning is attracting increasing research attention. To address the drawbacks of existing research, this paper proposes a new risky group decision-making method for distribution grid planning. First, a mixed index system with qualitative and quantitative indices is built. To account for the fuzziness of linguistic evaluations, a cloud model is chosen to realize the qualitative-to-quantitative transformation, and interval-number decision matrices are constructed according to the "3En" principle. An m-dimensional interval-number decision vector is regarded as a super cuboid in the m-dimensional attribute space, and a two-level orthogonal experiment is used to arrange points within it uniformly and dispersedly. The number of points is determined by the run count of the two-level orthogonal array, and these points form a distribution point set that represents the decision-making alternative. To eliminate the influence of correlation among indices, the Mahalanobis distance is used to compute the distance from each solution to the others, so that the dynamically changing solutions themselves serve as the reference. Second, because the decision maker's attitude can affect the results, the paper defines a prospect value function based on the SNR from the Mahalanobis-Taguchi system and obtains the comprehensive prospect value of each alternative as well as their ranking. Finally, the validity and reliability of the method are illustrated by examples, which show its advantages over alternative methods.
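The use of Mahalanobis distance to compare alternatives while accounting for correlated indices can be sketched as follows (toy data and names are illustrative, not the authors' implementation; NumPy is assumed available):

```python
import numpy as np

def mahalanobis(x, y, cov):
    """Mahalanobis distance between two points given a covariance matrix.
    Unlike Euclidean distance, it discounts directions in which the
    indices co-vary, removing the double-counting of correlated criteria."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Toy decision matrix: 4 alternatives scored on 2 correlated indices
scores = np.array([[0.8, 0.70], [0.6, 0.50], [0.9, 0.85], [0.4, 0.45]])
cov = np.cov(scores, rowvar=False)
# distance of each alternative to every other (the solutions themselves
# act as the dynamic reference set, as in the abstract)
dists = [[mahalanobis(a, b, cov) for b in scores] for a in scores]
```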

  4. Illustrating a Mixed-Method Approach for Validating Culturally Specific Constructs

    ERIC Educational Resources Information Center

    Hitchcock, J.H.; Nastasi, B.K.; Dai, D.Y.; Newman, J.; Jayasena, A.; Bernstein-Moore, R.; Sarkar, S.; Varjas, K.

    2005-01-01

    The purpose of this article is to illustrate a mixed-method approach (i.e., combining qualitative and quantitative methods) for advancing the study of construct validation in cross-cultural research. The article offers a detailed illustration of the approach using the responses 612 Sri Lankan adolescents provided to an ethnographic survey. Such…

  5. Electronic Dietary Intake Assessment (e-DIA): relative validity of a mobile phone application to measure intake of food groups.

    PubMed

    Rangan, Anna M; Tieleman, Laurissa; Louie, Jimmy C Y; Tang, Lie Ming; Hebden, Lana; Roy, Rajshri; Kay, Judy; Allman-Farinelli, Margaret

    2016-06-01

Automation of dietary assessment can reduce limitations of established methodologies, by alleviating participant and researcher burden. Designed as a research tool, the electronic Dietary Intake Assessment (e-DIA) is a food record in mobile phone application format. The present study aimed to examine the relative validity of the e-DIA with the 24-h recall method to estimate intake of food groups. A sample of eighty university students aged 19-24 years recorded 5 d of e-DIA and 3 d of recall within this 5-d period. The three matching days of dietary data were used for analysis. Food intake data were disaggregated and apportioned to one of eight food groups. Median intakes of food groups were similar between the methods, and strong correlations were found (mean: 0·79, range: 0·69-0·88). Cross-classification by tertiles produced a high level of exact agreement (mean: 71 %, range: 65-75 %), and weighted κ values were moderate to good (range: 0·54-0·71). Although mean differences (e-DIA-recall) were small (range: -13 to 23 g), limits of agreement (LOA) were relatively large (e.g. for vegetables, mean difference: -4 g, LOA: -159 to 151 g). The Bland-Altman plots showed robust agreement, with minimal bias. This analysis supports the use of e-DIA as an alternative to the repeated 24-h recall method for ranking individuals' food group intake.
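The bias and limits of agreement reported above follow the usual Bland-Altman construction, which can be sketched as (a generic illustration, not the study's code):

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Mean difference (bias) and 95% limits of agreement between two
    paired measurement methods. Returns (bias, (lower_loa, upper_loa))."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)                     # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# e.g. grams of a food group from app record vs. recall for 4 participants
bias, loa = bland_altman([10.0, 12.0, 14.0, 16.0], [11.0, 11.0, 15.0, 15.0])
```

A small bias with wide limits of agreement, as in the vegetables example above, indicates good group-level but poor individual-level agreement.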

  6. Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions

    DTIC Science & Technology

    2017-09-01

EXPERIMENTAL VALIDATION OF MODEL UPDATING AND DAMAGE DETECTION VIA EIGENVALUE SENSITIVITY METHODS WITH ARTIFICIAL BOUNDARY CONDITIONS, by Matthew D. Bouwense. Approved for public release; distribution is unlimited.

  7. LandScape: a simple method to aggregate p-values and other stochastic variables without a priori grouping.

    PubMed

    Wiuf, Carsten; Schaumburg-Müller Pallesen, Jonatan; Foldager, Leslie; Grove, Jakob

    2016-08-01

In many areas of science it is customary to perform many tests, potentially millions, simultaneously. To gain statistical power it is common to group tests based on a priori criteria such as predefined regions or sliding windows. However, it is not straightforward to choose grouping criteria, and the results might depend on the criteria chosen. Methods that summarize, or aggregate, test statistics or p-values without relying on a priori criteria are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method was demonstrated using simulations and real data analyses. Our method may be a useful supplement to standard procedures relying on evaluation of test statistics individually. Moreover, by being agnostic and not relying on predefined selected regions, it might be a practical alternative to conventionally used methods of aggregating p-values over regions. The method is implemented in Python and freely available online (through GitHub; see the Supplementary information).
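For contrast, the conventional region-based aggregation that the abstract argues against can be sketched with Fisher's method applied to a predefined group of p-values (illustrative only; this is not the LandScape algorithm):

```python
import math

def fisher_combine(pvals):
    """Fisher's method: combine independent p-values from one predefined
    region into a chi-square statistic with 2*len(pvals) df."""
    stat = -2 * sum(math.log(p) for p in pvals)
    return stat, 2 * len(pvals)

def chi2_sf_even_df(x, df):
    """Chi-square survival function, closed form for even df:
    P(X > x) = exp(-x/2) * sum_{i<df/2} (x/2)^i / i!"""
    k = df // 2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

# combine the p-values of one predefined window
stat, df = fisher_combine([0.01, 0.20, 0.03])
p_combined = chi2_sf_even_df(stat, df)
```

Note that the region boundaries must be fixed before looking at the data, which is exactly the a priori choice the abstract's method avoids.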

  8. Extrapolating a Dyadic Model to Small Group Methodology: Validation of the Spitzberg and Cupach Model of Communication Competence.

    ERIC Educational Resources Information Center

    Keyton, Joann

    A study assessed the validity of applying the Spitzberg and Cupach dyadic model of communication competence to small group interaction. Twenty-four students, in five task-oriented work groups, completed questionnaires concerning self-competence, alter competence, interaction effectiveness, and other group members' interaction appropriateness. They…

  9. Validation of a new ELISA method for in vitro potency testing of hepatitis A vaccines.

    PubMed

    Morgeaux, S; Variot, P; Daas, A; Costanzo, A

    2013-01-01

    The goal of the project was to standardise a new in vitro method in replacement of the existing standard method for the determination of hepatitis A virus antigen content in hepatitis A vaccines (HAV) marketed in Europe. This became necessary due to issues with the method used previously, requiring the use of commercial test kits. The selected candidate method, not based on commercial kits, had already been used for many years by an Official Medicines Control Laboratory (OMCL) for routine testing and batch release of HAV. After a pre-qualification phase (Phase 1) that showed the suitability of the commercially available critical ELISA reagents for the determination of antigen content in marketed HAV present on the European market, an international collaborative study (Phase 2) was carried out in order to fully validate the method. Eleven laboratories took part in the collaborative study. They performed assays with the candidate standard method and, in parallel, for comparison purposes, with their own in-house validated methods where these were available. The study demonstrated that the new assay provides a more reliable and reproducible method when compared to the existing standard method. A good correlation of the candidate standard method with the in vivo immunogenicity assay in mice was shown previously for both potent and sub-potent (stressed) vaccines. Thus, the new standard method validated during the collaborative study may be implemented readily by manufacturers and OMCLs for routine batch release but also for in-process control or consistency testing. The new method was approved in October 2012 by Group of Experts 15 of the European Pharmacopoeia (Ph. Eur.) as the standard method for in vitro potency testing of HAV. The relevant texts will be revised accordingly. Critical reagents such as coating reagent and detection antibodies have been adopted by the Ph. Eur. Commission and are available from the EDQM as Ph. Eur. Biological Reference Reagents (BRRs).

  10. 78 FR 20695 - Walk-Through Metal Detectors and Hand-Held Metal Detectors Test Method Validation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-05

    ... Detectors and Hand-Held Metal Detectors Test Method Validation AGENCY: National Institute of Justice, DOJ... ensure that the test methods in the standards are properly documented, NIJ is requesting proposals (including price quotes) for test method validation efforts from testing laboratories. NIJ is also seeking...

  11. Temperature-Dependent Estimation of Gibbs Energies Using an Updated Group-Contribution Method.

    PubMed

    Du, Bin; Zhang, Zhen; Grubner, Sharon; Yurkovich, James T; Palsson, Bernhard O; Zielinski, Daniel C

    2018-06-05

Reaction-equilibrium constants determine the metabolite concentrations necessary to drive flux through metabolic pathways. Group-contribution methods offer a way to estimate reaction-equilibrium constants at wide coverage across the metabolic network. Here, we present an updated group-contribution method with 1) additional curated thermodynamic data used in fitting and 2) capabilities to calculate equilibrium constants as a function of temperature. We first collected and curated aqueous thermodynamic data, including reaction-equilibrium constants, enthalpies of reaction, Gibbs free energies of formation, enthalpies of formation, entropy changes of formation of compounds, and proton- and metal-ion-binding constants. Next, we formulated the calculation of equilibrium constants as a function of temperature and calculated the standard entropy change of formation (ΔfS°) using a model based on molecular properties. The median absolute error in estimating ΔfS° was 0.013 kJ/K/mol. We also estimated magnesium binding constants for 618 compounds using a linear regression model validated against measured data. We demonstrate the improved performance of the current method (8.17 kJ/mol in median absolute residual) over the current state-of-the-art method (11.47 kJ/mol) in estimating the 185 new reactions added in this work. The efforts here fill in gaps for thermodynamic calculations under various conditions, specifically different temperatures and metal-ion concentrations. These, to our knowledge, new capabilities empower the study of thermodynamic driving forces underlying the metabolic function of organisms living under diverse conditions. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
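As a simpler reference point for temperature-dependent equilibrium constants, the classical van't Hoff relation can be sketched as below. It assumes a temperature-independent reaction enthalpy, which is a cruder model than the entropy-based formulation in the abstract; all numbers and names here are illustrative:

```python
import math

R = 8.314e-3  # gas constant, kJ/(K*mol)

def k_at_temperature(dG_T1, dH, T1=298.15, T2=310.15):
    """Shift an equilibrium constant from T1 to T2 via van't Hoff:
    ln K(T2) = ln K(T1) + (dH/R) * (1/T1 - 1/T2),
    with dG_T1 (standard Gibbs energy at T1) and dH in kJ/mol."""
    lnK1 = -dG_T1 / (R * T1)
    lnK2 = lnK1 + (dH / R) * (1 / T1 - 1 / T2)
    return math.exp(lnK2)

# e.g. dG = -10 kJ/mol at 25 C, dH = +50 kJ/mol, evaluated at 37 C
K_310 = k_at_temperature(-10.0, 50.0)
```

For an endothermic reaction (dH > 0) the constant increases with temperature, consistent with Le Chatelier's principle.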

  12. Validity of a digital diet estimation method for use with preschool children

    USDA-ARS?s Scientific Manuscript database

    The validity of using the Remote Food Photography Method (RFPM) for measuring food intake of minority preschool children's intake is not well documented. The aim of the study was to determine the validity of intake estimations made by human raters using the RFPM compared with those obtained by weigh...

  13. Optimization and validation of a method using UHPLC-fluorescence for the analysis of polycyclic aromatic hydrocarbons in cold-pressed vegetable oils.

    PubMed

    Silva, Simone Alves da; Sampaio, Geni Rodrigues; Torres, Elizabeth Aparecida Ferraz da Silva

    2017-04-15

Among the different food categories, oils and fats are important sources of exposure to polycyclic aromatic hydrocarbons (PAHs), a group of organic chemical contaminants. The use of a validated method is essential to obtain reliable analytical results, since legislation establishes maximum limits in different foods. The objective of this study was to optimize and validate a method for the quantification of four PAHs [benzo(a)anthracene, chrysene, benzo(b)fluoranthene, benzo(a)pyrene] in vegetable oils. The samples were submitted to liquid-liquid extraction, followed by solid-phase extraction, and analyzed by ultra-high performance liquid chromatography. Under the optimized conditions, the validation parameters were evaluated according to the INMETRO Guidelines: linearity (r² > 0.99), selectivity (no matrix interference), limits of detection (0.08-0.30 μg kg⁻¹) and quantification (0.25-1.00 μg kg⁻¹), recovery (80.13-100.04%), repeatability and intermediate precision (<10% RSD). The method was found to be adequate for routine analysis of PAHs in the vegetable oils evaluated. Copyright © 2016. Published by Elsevier Ltd.
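Detection and quantification limits like those above are typically derived from calibration data. A minimal sketch of the widely used 3.3σ/S and 10σ/S conventions follows (an illustration of the general construction; the study itself followed the INMETRO Guidelines, whose details may differ):

```python
def lod_loq(sigma, slope):
    """Limits of detection and quantification from the standard deviation
    of the blank (or calibration residuals), sigma, and the calibration
    slope S: LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    return 3.3 * sigma / slope, 10 * sigma / slope

# e.g. sigma = 0.3 signal units, slope = 10 signal units per ug/kg
lod, loq = lod_loq(0.3, 10.0)   # results in ug/kg
```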

  14. Validation of beverage intake methods vs. hydration biomarkers; a short review.

    PubMed

    Nissensohn, Mariela; Ruano, Cristina; Serra-Majem, Lluis

    2013-11-01

Fluid intake is difficult to monitor. Biomarkers of beverage intake can assess dietary intake and hydration status without the bias of self-reported dietary intake errors and intra-individual variability. Various markers have been proposed to assess hydration; to date, however, there is no universally accepted biomarker that reflects changes in hydration status in response to changes in beverage intake. We conducted a review of the beverage intake questionnaires available in the scientific literature for assessing beverage intake and hydration status, and of their validation against hydration biomarkers. A scientific literature search was conducted. Only two articles were selected, in which two different beverage intake questionnaires designed to capture usual beverage intake were validated against the urine specific gravity (Usg) biomarker. The water balance questionnaire (WBQ) showed no correlations in the first study, and the Beverage Intake Questionnaire (BEVQ), a quantitative food frequency questionnaire (FFQ), in the second study found a negative correlation. The FFQ appears to measure beverage intake better than the WBQ when compared with biomarkers. However, the WBQ seems to be a more complete method to evaluate the hydration balance of a given population. Further research is needed to understand the meaning of the different correlations between intake estimates and beverage biomarkers in distinct population groups and environments. Copyright AULA MEDICA EDICIONES 2013. Published by AULA MEDICA. All rights reserved.

  15. Developing and validating a nutrition knowledge questionnaire: key methods and considerations.

    PubMed

    Trakman, Gina Louise; Forsyth, Adrienne; Hoye, Russell; Belski, Regina

    2017-10-01

    To outline key statistical considerations and detailed methodologies for the development and evaluation of a valid and reliable nutrition knowledge questionnaire. Literature on questionnaire development in a range of fields was reviewed and a set of evidence-based guidelines specific to the creation of a nutrition knowledge questionnaire have been developed. The recommendations describe key qualitative methods and statistical considerations, and include relevant examples from previous papers and existing nutrition knowledge questionnaires. Where details have been omitted for the sake of brevity, the reader has been directed to suitable references. We recommend an eight-step methodology for nutrition knowledge questionnaire development as follows: (i) definition of the construct and development of a test plan; (ii) generation of the item pool; (iii) choice of the scoring system and response format; (iv) assessment of content validity; (v) assessment of face validity; (vi) purification of the scale using item analysis, including item characteristics, difficulty and discrimination; (vii) evaluation of the scale including its factor structure and internal reliability, or Rasch analysis, including assessment of dimensionality and internal reliability; and (viii) gathering of data to re-examine the questionnaire's properties, assess temporal stability and confirm construct validity. Several of these methods have previously been overlooked. The measurement of nutrition knowledge is an important consideration for individuals working in the nutrition field. Improved methods in the development of nutrition knowledge questionnaires, such as the use of factor analysis or Rasch analysis, will enable more confidence in reported measures of nutrition knowledge.
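Step (vi) above, purification of the scale via item difficulty and discrimination, can be sketched with classical test-theory statistics (a generic illustration; acceptance thresholds and exact formulas vary across the questionnaire literature):

```python
from statistics import mean, pstdev

def item_analysis(responses):
    """Classical item analysis for dichotomously scored items.

    responses: list of per-respondent lists of 0/1 item scores.
    Returns, per item, (difficulty, discrimination), where difficulty is
    the proportion correct and discrimination is the point-biserial
    correlation of the item with the rest-of-test score."""
    n_items = len(responses[0])
    results = []
    for j in range(n_items):
        item = [r[j] for r in responses]
        rest = [sum(r) - r[j] for r in responses]   # total minus this item
        difficulty = mean(item)
        si, sr = pstdev(item), pstdev(rest)
        if si == 0 or sr == 0:
            rpb = 0.0                               # no variance: undefined, report 0
        else:
            cov = mean(x * y for x, y in zip(item, rest)) - mean(item) * mean(rest)
            rpb = cov / (si * sr)
        results.append((difficulty, rpb))
    return results
```

Items with near-zero or negative discrimination are the usual candidates for removal during purification.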

  16. VALIDATION OF A METHOD FOR ESTIMATING LONG-TERM EXPOSURES BASED ON SHORT-TERM MEASUREMENTS

    EPA Science Inventory

    A method for estimating long-term exposures from short-term measurements is validated using data from a recent EPA study of exposure to fine particles. The method was developed a decade ago but data to validate it did not exist until recently. In this paper, data from repeated ...

  17. VALIDATION OF A METHOD FOR ESTIMATING LONG-TERM EXPOSURES BASED ON SHORT-TERM MEASUREMENTS

    EPA Science Inventory

    A method for estimating long-term exposures from short-term measurements is validated using data from a recent EPA study of exposure to fine particles. The method was developed a decade ago but long-term exposure data to validate it did not exist until recently. In this paper, ...

  18. Formal methods and digital systems validation for airborne systems

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1993-01-01

    This report has been prepared to supplement a forthcoming chapter on formal methods in the FAA Digital Systems Validation Handbook. Its purpose is as follows: to outline the technical basis for formal methods in computer science; to explain the use of formal methods in the specification and verification of software and hardware requirements, designs, and implementations; to identify the benefits, weaknesses, and difficulties in applying these methods to digital systems used on board aircraft; and to suggest factors for consideration when formal methods are offered in support of certification. These latter factors assume the context for software development and assurance described in RTCA document DO-178B, 'Software Considerations in Airborne Systems and Equipment Certification,' Dec. 1992.

  19. Validation of chemistry models employed in a particle simulation method

    NASA Technical Reports Server (NTRS)

    Haas, Brian L.; Mcdonald, Jeffrey D.

    1991-01-01

    The chemistry models employed in a statistical particle simulation method, as implemented in the Intel iPSC/860 multiprocessor computer, are validated and applied. Chemical relaxation of five-species air in these reservoirs involves 34 simultaneous dissociation, recombination, and atomic-exchange reactions. The reaction rates employed in the analytic solutions are obtained from Arrhenius experimental correlations as functions of temperature for adiabatic gas reservoirs in thermal equilibrium. Favorable agreement with the analytic solutions validates the simulation when applied to relaxation of O2 toward equilibrium in reservoirs dominated by dissociation and recombination, respectively, and when applied to relaxation of air in the temperature range 5000 to 30,000 K. A flow of O2 over a circular cylinder at high Mach number is simulated to demonstrate application of the method to multidimensional reactive flows.

  20. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software.

    PubMed

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2015-05-01

    Several sophisticated methods of footprint analysis currently exist. However, it is sometimes useful to apply standard measurement methods of recognized evidence with an easy and quick application. We sought to assess the reliability and validity of a new method of footprint assessment in a healthy population using Photoshop CS5 software (Adobe Systems Inc, San Jose, California). Forty-two footprints, corresponding to 21 healthy individuals (11 men with a mean ± SD age of 20.45 ± 2.16 years and 10 women with a mean ± SD age of 20.00 ± 1.70 years) were analyzed. Footprints were recorded in static bipedal standing position using optical podography and digital photography. Three trials for each participant were performed. The Hernández-Corvo, Chippaux-Smirak, and Staheli indices and the Clarke angle were calculated by manual method and by computerized method using Photoshop CS5 software. Test-retest was used to determine reliability. Validity was obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed high values (ICC, 0.98-0.99). Moreover, the validity test clearly showed no difference between techniques (ICC, 0.99-1). The reliability and validity of a method to measure, assess, and record the podometric indices using Photoshop CS5 software has been demonstrated. This provides a quick and accurate tool useful for the digital recording of morphostatic foot study parameters and their control.

  1. Probability of Detection (POD) as a statistical model for the validation of qualitative methods.

    PubMed

    Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T

    2011-01-01

    A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
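A POD curve treats the probability of a positive result as a continuous function of concentration. One common parametric choice, used here purely as an assumption for illustration (the POD framework itself does not mandate a functional form), is a logistic curve in log concentration:

```python
import math

def pod(conc, a, b):
    """Probability of detection at concentration `conc` under a logistic
    model in log-concentration: POD(c) = 1 / (1 + exp(-(a + b*ln c)))."""
    return 1 / (1 + math.exp(-(a + b * math.log(conc))))

# illustrative (hypothetical) parameters: POD = 0.5 at c = 1 unit
a, b = 0.0, 2.0
curve = [(c, pod(c, a, b)) for c in (0.1, 0.5, 1.0, 2.0, 10.0)]
```

Plotting such curves for a candidate and a reference method allows the graphical comparison the abstract describes.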

  2. Validating silicon polytrodes with paired juxtacellular recordings: method and dataset.

    PubMed

    Neto, Joana P; Lopes, Gonçalo; Frazão, João; Nogueira, Joana; Lacerda, Pedro; Baião, Pedro; Aarts, Arno; Andrei, Alexandru; Musa, Silke; Fortunato, Elvira; Barquinha, Pedro; Kampff, Adam R

    2016-08-01

    Cross-validating new methods for recording neural activity is necessary to accurately interpret and compare the signals they measure. Here we describe a procedure for precisely aligning two probes for in vivo "paired-recordings" such that the spiking activity of a single neuron is monitored with both a dense extracellular silicon polytrode and a juxtacellular micropipette. Our new method allows for efficient, reliable, and automated guidance of both probes to the same neural structure with micrometer resolution. We also describe a new dataset of paired-recordings, which is available online. We propose that our novel targeting system, and ever expanding cross-validation dataset, will be vital to the development of new algorithms for automatically detecting/sorting single-units, characterizing new electrode materials/designs, and resolving nagging questions regarding the origin and nature of extracellular neural signals. Copyright © 2016 the American Physiological Society.

  3. Validating silicon polytrodes with paired juxtacellular recordings: method and dataset

    PubMed Central

    Lopes, Gonçalo; Frazão, João; Nogueira, Joana; Lacerda, Pedro; Baião, Pedro; Aarts, Arno; Andrei, Alexandru; Musa, Silke; Fortunato, Elvira; Barquinha, Pedro; Kampff, Adam R.

    2016-01-01

    Cross-validating new methods for recording neural activity is necessary to accurately interpret and compare the signals they measure. Here we describe a procedure for precisely aligning two probes for in vivo “paired-recordings” such that the spiking activity of a single neuron is monitored with both a dense extracellular silicon polytrode and a juxtacellular micropipette. Our new method allows for efficient, reliable, and automated guidance of both probes to the same neural structure with micrometer resolution. We also describe a new dataset of paired-recordings, which is available online. We propose that our novel targeting system, and ever expanding cross-validation dataset, will be vital to the development of new algorithms for automatically detecting/sorting single-units, characterizing new electrode materials/designs, and resolving nagging questions regarding the origin and nature of extracellular neural signals. PMID:27306671

  4. Validation of the Use of Dried Blood Spot (DBS) Method to Assess Vitamin A Status

    PubMed Central

    Fallah, Elham; Peighambardoust, Seyed Hadi

    2012-01-01

    Background: Vitamin A deficiency is an important dietary deficiency worldwide, so screening of at-risk populations is essential. This paper introduces a fast, cheap, and relatively reliable screening approach, the “dried blood spot” (DBS) method, and investigates its validity for retinol measurement. Method: The “precision” and “agreement” criteria of the DBS method were assessed. Precision was calculated and compared with that of plasma using the F-test. Agreement was evaluated using a Bland-Altman plot. Results: The imprecision of retinol measurements in dried spots was not significantly different from that of the control (plasma). A good correlation coefficient (r2=0.78) was obtained for dried-spot retinol measurements versus plasma retinol analysis (P < 0.01). A paired t-test showed no significant difference between the DBS and plasma methods on a group level, and the imprecision of the DBS measurement was acceptable compared to that of the plasma method. Conclusion: Application of DBS standard samples, in which part of the plasma was replaced with artificial plasma, was shown to be a reliable means of calibration for retinol measurements in DBS samples. Retinol in dried spots was stable for 90 days. Overall, the DBS method provided a precise measurement of retinol, with results comparable to the measurement of retinol in plasma. PMID:24688932
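    The agreement analysis described here is the standard Bland-Altman computation (mean bias plus 95% limits of agreement); a minimal sketch with invented retinol values, not the study's data:

```python
import statistics

def bland_altman(method_a, method_b):
    """Mean bias and 95% limits of agreement between two measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)          # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical retinol measurements (umol/L): DBS vs. plasma on the same subjects
dbs    = [1.00, 1.20, 0.90, 1.10, 1.35]
plasma = [1.10, 1.15, 0.95, 1.05, 1.30]
bias, (loa_low, loa_high) = bland_altman(dbs, plasma)
```

    In a Bland-Altman plot these differences are drawn against the per-subject means, with horizontal lines at the bias and the two limits of agreement.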

  5. Validation of the Arabic Version of the Group Personality Projective Test among university students in Bahrain.

    PubMed

    Al-Musawi, Nu'man M

    2003-04-01

    Using confirmatory factor analytic techniques on data generated from 200 students enrolled at the University of Bahrain, we obtained some construct validity and reliability data for the Arabic Version of the 1961 Group Personality Projective Test by Cassel and Khan. In contrast to the 5-factor model proposed for the Group Personality Projective Test, a 6-factor solution appeared justified for the Arabic Version of this test, suggesting some variance between the cultural groups in the United States and in Bahrain.

  6. Comprehension of Written Grammar Test: Reliability and Known-Groups Validity Study With Hearing and Deaf and Hard-of-Hearing Students.

    PubMed

    Cannon, Joanna E; Hubley, Anita M; Millhoff, Courtney; Mazlouman, Shahla

    2016-01-01

    The aim of the current study was to gather validation evidence for the Comprehension of Written Grammar (CWG; Easterbrooks, 2010) receptive test of 26 grammatical structures of English print for use with children who are deaf and hard of hearing (DHH). Reliability and validity data were collected for 98 participants (49 DHH and 49 hearing) in Grades 2-6. The objectives were to: (a) examine 4-week test-retest reliability data; and (b) provide evidence of known-groups validity by examining expected differences between the groups on the CWG vocabulary pretest and main test, as well as selected structures. Results indicated excellent test-retest reliability estimates for CWG test scores. DHH participants performed statistically significantly lower on the CWG vocabulary pretest and main test than the hearing participants. Significantly lower performance by DHH participants on most expected grammatical structures (e.g., basic sentence patterns, auxiliary "be" singular/plural forms, tense, comparatives, and complementation) also provided known groups evidence. Overall, the findings of this study showed strong evidence of the reliability of scores and known group-based validity of inferences made from the CWG. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Hair Measurements of Cortisol, DHEA, and DHEA to Cortisol Ratio as Biomarkers of Chronic Stress among People Living with HIV in China: Known-Group Validation

    PubMed Central

    Li, Xiaoming; Zilioli, Samuele; Chen, Zheng; Deng, Huihua; Pan, Juxian

    2017-01-01

    Background Existing literature suggests that endocrine measures, including the steroid hormones cortisol and dehydroepiandrosterone (DHEA), as well as the DHEA to cortisol ratio in human hair, can be used as promising biomarkers of chronic stress in humans. However, data are limited regarding the validity of these measures as biomarkers of chronic stress among people living with HIV (PLWH), whose endocrine system or hypothalamic-pituitary-adrenal (HPA) axis may be affected by HIV infection and/or antiretroviral therapy (ART) medications. Method Using hair sample data and a self-reported survey from 60 PLWH in China, we examined the validity of three endocrine measures among Chinese PLWH using a known-groups validation strategy. A high-stress group (n = 30) and a low-stress group (n = 30) of PLWH were recruited through individual assessment interviews by a local licensed psychologist. The endocrine measures in hair were extracted and assessed by the LC-APCI-MS/MS method. Both bivariate and multivariate analyses were conducted to examine the associations between the endocrine measures and stress level, and to investigate whether the associations differ by ART status. Results The levels of endocrine measures among Chinese PLWH were consistent with existing studies among PLWH. Generally, this pilot study confirmed the association between endocrine measures and chronic stress. The high-stress group showed higher levels of hair cortisol and a lower DHEA to cortisol ratio, and also reported higher scores for stressful life events, perceived stress, anxiety, and depression. Hair cortisol level was positively related to anxiety; DHEA was negatively associated with stressful life events; and the DHEA to cortisol ratio was positively related to stressful life events and perceived stress. ART did not affect the associations between the endocrine measures and stress level. Conclusions Our findings suggest that hair cortisol and the DHEA to cortisol ratio can be used as biomarkers of chronic stress among PLWH.

  8. Onto-clust--a methodology for combining clustering analysis and ontological methods for identifying groups of comorbidities for developmental disorders.

    PubMed

    Peleg, Mor; Asbeh, Nuaman; Kuflik, Tsvi; Schertz, Mitchell

    2009-02-01

    Children with developmental disorders usually exhibit multiple developmental problems (comorbidities); hence, diagnosis needs to consider groups of developmental disorders. Our objective was to systematically identify developmental disorder groups and represent them in an ontology. We developed a methodology that combines two methods: (1) a literature-based ontology that we created, which represents developmental disorders and potential developmental disorder groups, and (2) clustering for detecting comorbid developmental disorders in patient data. The ontology is used to interpret and improve clustering results, and the clustering results are used to validate the ontology and suggest directions for its development. We evaluated our methodology by applying it to data from 1175 patients at a child development clinic. We demonstrated that the ontology improves clustering results, bringing them closer to an expert-generated gold standard. We have shown that our methodology successfully combines an ontology with a clustering method to support systematic identification and representation of developmental disorder groups.

  9. Critical Values for Lawshe's Content Validity Ratio: Revisiting the Original Methods of Calculation

    ERIC Educational Resources Information Center

    Ayre, Colin; Scally, Andrew John

    2014-01-01

    The content validity ratio originally proposed by Lawshe is widely used to quantify content validity and yet methods used to calculate the original critical values were never reported. Methods for original calculation of critical values are suggested along with tables of exact binomial probabilities.
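    The exact-binomial approach can be sketched directly: under chance agreement each panelist marks an item "essential" with probability .5, so the critical endorsement count is the smallest count whose one-tailed binomial tail probability falls below alpha. This is my reading of the calculation; the values are computed here, not copied from the paper's tables:

```python
from math import comb

def cvr_critical(n_panel, alpha=0.05):
    """Smallest CVR whose probability under chance (p = .5) is below alpha, one-tailed."""
    for n_e in range(n_panel // 2, n_panel + 1):
        tail = sum(comb(n_panel, k) for k in range(n_e, n_panel + 1)) / 2**n_panel
        if tail < alpha:
            # Lawshe's CVR = (n_e - N/2) / (N/2)
            return (2 * n_e - n_panel) / n_panel
    return 1.0
```

    For an 8-member panel this gives a critical CVR of 0.75 (7 of 8 panelists must rate the item essential); larger panels tolerate proportionally smaller CVR values.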

  10. Validation of Gujarati Version of ABILOCO-Kids Questionnaire.

    PubMed

    Diwan, Shraddha; Diwan, Jasmin; Patel, Pankaj; Bansal, Ankita B

    2015-10-01

    ABILOCO-Kids is a measure of locomotion ability for children with cerebral palsy (CP) aged 6 to 15 years and is available in English and French. The aim was to validate a Gujarati version of the ABILOCO-Kids questionnaire for use in clinical research on a Gujarati population. The questionnaire was translated from English into Gujarati using the forward-backward-forward method. To ensure face and content validity of the Gujarati version using a group consensus method, each item was examined by a group of experts with a mean experience of 24.62 years in the field of paediatrics and paediatric physiotherapy. Each item was analysed for content, meaning, wording, format, ease of administration, and scoring, and was scored by the expert group as accepted, rejected, or accepted with modification. The procedure continued until 80% consensus was reached for all items. Concurrent validity was examined in 55 children with cerebral palsy (6-15 years) across all Gross Motor Function Classification System (GMFCS) levels and all clinical types by correlating the ABILOCO-Kids score with the Gross Motor Function Measure (GMFM) and the GMFCS. In phase 1 of validation, 16 items were accepted as is, 22 items were accepted with modification, and 3 items went to phase 2 validation. For concurrent validity, a highly significant positive correlation was found between the ABILOCO-Kids score and total GMFM (r=0.713, p<0.005), and a highly significant negative correlation with GMFCS (r=-0.778, p<0.005). The Gujarati version of the ABILOCO-Kids questionnaire has good face, content, and concurrent validity and can be used to measure caregiver-reported locomotion ability in children with CP.

  11. Assessment of performance validity in the Stroop Color and Word Test in mild traumatic brain injury patients: a criterion-groups validation design.

    PubMed

    Guise, Brian J; Thompson, Matthew D; Greve, Kevin W; Bianchini, Kevin J; West, Laura

    2014-03-01

    The current study assessed performance validity on the Stroop Color and Word Test (Stroop) in mild traumatic brain injury (TBI) using criterion-groups validation. The sample consisted of 77 patients with a reported history of mild TBI. Data from 42 moderate-severe TBI and 75 non-head-injured patients with other clinical diagnoses were also examined. TBI patients were categorized on the basis of Slick, Sherman, and Iverson (1999) criteria for malingered neurocognitive dysfunction (MND). Classification accuracy is reported for three indicators (Word, Color, and Color-Word residual raw scores) from the Stroop across a range of injury severities. With false-positive rates set at approximately 5%, sensitivity was as high as 29%. The clinical implications of these findings are discussed. © 2012 The British Psychological Society.
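    The "sensitivity at ~5% false positives" logic reported above can be sketched with hypothetical score distributions, taking lower scores to indicate invalid performance; none of these numbers come from the study:

```python
def positive_rate(scores, cutoff):
    """Fraction of scores at or below the failure cutoff."""
    return sum(s <= cutoff for s in scores) / len(scores)

def choose_cutoff(control_scores, max_fp=0.05):
    """Most liberal cutoff whose false-positive rate in controls stays <= max_fp."""
    ok = [c for c in sorted(set(control_scores))
          if positive_rate(control_scores, c) <= max_fp]
    return max(ok) if ok else None

controls = list(range(20, 40))          # 20 hypothetical comparison-group scores
mnd_group = [18, 19, 20, 22, 25, 30]    # hypothetical suspect-effort scores
cut = choose_cutoff(controls)
sens = positive_rate(mnd_group, cut)    # sensitivity at that cutoff
```

    Fixing the false-positive rate first and then reading off sensitivity mirrors the criterion-groups design: specificity is protected in the non-malingering groups, and whatever sensitivity remains is what the indicator can deliver.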

  12. Best Practices in Stability Indicating Method Development and Validation for Non-clinical Dose Formulations.

    PubMed

    Henry, Teresa R; Penn, Lara D; Conerty, Jason R; Wright, Francesca E; Gorman, Gregory; Pack, Brian W

    2016-11-01

    Non-clinical dose formulations (also known as pre-clinical or GLP formulations) play a key role in early drug development. These formulations are used to introduce active pharmaceutical ingredients (APIs) into test organisms for both pharmacokinetic and toxicological studies. Since these studies are ultimately used to support dose and safety ranges in human studies, it is important not only to understand the concentration and PK/PD of the active ingredient but also to generate safety data for likely process impurities and degradation products of the active ingredient. As such, many in the industry have chosen to develop and validate methods which can accurately detect and quantify the active ingredient along with impurities and degradation products. Such methods often provide trendable results which are predictive of stability, thus leading to the name: stability-indicating methods. This document provides an overview of best practices for those choosing to include development and validation of such methods as part of their non-clinical drug development program. It is intended to support teams who are either new to stability-indicating method development and validation or who are less familiar with the requirements of validation due to their position within the product development life cycle.

  13. RELIABILITY AND VALIDITY OF A BIOMECHANICALLY BASED ANALYSIS METHOD FOR THE TENNIS SERVE

    PubMed Central

    Kibler, W. Ben; Lamborn, Leah; Smith, Belinda J.; English, Tony; Jacobs, Cale; Uhl, Tim L.

    2017-01-01

    Background An observational tennis serve analysis (OTSA) tool was developed using previously established body positions from three-dimensional kinematic motion analysis studies. These positions, defined as nodes, have been associated with efficient force production and minimal joint loading. However, the tool has yet to be examined scientifically. Purpose The primary purpose of this investigation was to determine the inter-observer reliability for each node between the two health care professionals (HCPs) who developed the OTSA, and secondarily to investigate the validity of the OTSA. Methods Two separate studies were performed to meet these objectives. An inter-observer reliability study preceded the validity study and examined 28 videos of players serving; the two HCPs graded each video, scoring the presence or absence of each node. Discriminant validity was determined in 33 tennis players using videotaped records of three first serves. Serve mechanics were graded using the OTSA, and players were categorized into those with good (≥5) and poor (≤4) mechanics. Participants performed a series of field tests to evaluate trunk flexibility, lower extremity and trunk power, and dynamic balance. Results The group with good mechanics demonstrated greater backward trunk flexibility (p=0.02), greater rotational power (p=0.02), and a higher single-leg countermovement jump (p=0.05). Reliability of the OTSA ranged from K = 0.36-1.0, with the majority of nodes displaying substantial reliability (K>0.61). Conclusion This study provides HCPs with a valid and reliable field tool for assessing serve mechanics. Physical characteristics of trunk mobility and power appear to discriminate serve mechanics between players. Future intervention studies are needed to determine whether improvement in physical function contributes to improved serve mechanics. Level of Evidence 3 PMID:28593098
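    The K values reported for the OTSA are Cohen's kappa, which corrects observed agreement for the agreement expected by chance; a minimal sketch for one node coded present/absent by two raters (the codes below are invented):

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' binary codes (1 = node present)."""
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    p1, p2 = sum(rater1) / n, sum(rater2) / n
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)   # agreement expected by chance
    return (p_obs - p_exp) / (1 - p_exp)

r1 = [1, 1, 1, 0, 0, 0, 1, 0]   # hypothetical codes from HCP 1
r2 = [1, 1, 0, 0, 0, 0, 1, 1]   # hypothetical codes from HCP 2
kappa = cohens_kappa(r1, r2)
```

    By the usual benchmarks, values above roughly 0.61 (as cited in the abstract) are read as substantial agreement.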

  14. Data mining and computationally intensive methods: summary of Group 7 contributions to Genetic Analysis Workshop 13.

    PubMed

    Costello, Tracy J; Falk, Catherine T; Ye, Kenny Q

    2003-01-01

    The Framingham Heart Study data, as well as a related simulated data set, were generously provided to the participants of the Genetic Analysis Workshop 13 in order that newly developed and emerging statistical methodologies could be tested on that well-characterized data set. The impetus driving the development of novel methods is to elucidate the contributions of genes, environment, and interactions between and among them, as well as to allow comparison between and validation of methods. The seven papers that comprise this group used data-mining methodologies (tree-based methods, neural networks, discriminant analysis, and Bayesian variable selection) in an attempt to identify the underlying genetics of cardiovascular disease and related traits in the presence of environmental and genetic covariates. Data-mining strategies are gaining popularity because they are extremely flexible and may have greater efficiency and potential in identifying the factors involved in complex disorders. While the methods grouped together here constitute a diverse collection, some papers asked similar questions with very different methods, while others used the same underlying methodology to ask very different questions. This paper briefly describes the data-mining methodologies applied to the Genetic Analysis Workshop 13 data sets and the results of those investigations. Copyright 2003 Wiley-Liss, Inc.

  15. Group Contribution Methods for Phase Equilibrium Calculations.

    PubMed

    Gmehling, Jürgen; Constantinescu, Dana; Schmid, Bastian

    2015-01-01

    The development and design of chemical processes are carried out by solving the balance equations of a mathematical model for sections of or the whole chemical plant with the help of process simulators. For process simulation, besides kinetic data for the chemical reaction, various pure component and mixture properties are required. Because of the great importance of separation processes for a chemical plant in particular, a reliable knowledge of the phase equilibrium behavior is required. The phase equilibrium behavior can be calculated with the help of modern equations of state or g(E)-models using only binary parameters. But unfortunately, only a very small part of the experimental data for fitting the required binary model parameters is available, so very often these models cannot be applied directly. To solve this problem, powerful predictive thermodynamic models have been developed. Group contribution methods allow the prediction of the required phase equilibrium data using only a limited number of group interaction parameters. A prerequisite for fitting the required group interaction parameters is a comprehensive database. That is why for the development of powerful group contribution methods almost all published pure component properties, phase equilibrium data, excess properties, etc., were stored in computerized form in the Dortmund Data Bank. In this review, the present status, weaknesses, advantages and disadvantages, possible applications, and typical results of the different group contribution methods for the calculation of phase equilibria are presented.
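    The core arithmetic of any group-contribution method is a sum of group increments over a molecule's decomposition. A toy sketch for a pure-component property, using two increments I have borrowed from the Joback boiling-point correlation (values quoted from memory and purely illustrative; this is not the Dortmund Data Bank parameterization or a phase-equilibrium model such as UNIFAC):

```python
# Joback-style estimate of normal boiling point (K): Tb = 198.2 + sum(n_g * c_g)
# contribution values below are illustrative, quoted from memory
TB_CONTRIB = {"CH3": 23.58, "CH2": 22.88}

def boiling_point(groups):
    """Group-contribution boiling point from a {group: count} decomposition."""
    return 198.2 + sum(n * TB_CONTRIB[g] for g, n in groups.items())

tb_butane = boiling_point({"CH3": 2, "CH2": 2})   # n-butane = 2x CH3 + 2x CH2
```

    Phase-equilibrium group-contribution methods work the same way in spirit, but the "contributions" are group interaction parameters entering an activity-coefficient model rather than additive increments.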

  16. Comparison of on-site field measured inorganic arsenic in rice with laboratory measurements using a field deployable method: Method validation.

    PubMed

    Mlangeni, Angstone Thembachako; Vecchi, Valeria; Norton, Gareth J; Raab, Andrea; Krupp, Eva M; Feldmann, Joerg

    2018-10-15

    A commercial arsenic field kit designed to measure inorganic arsenic (iAs) in water was modified into a field deployable method (FDM) to measure iAs in rice. While the method has been validated to give precise and accurate results in the laboratory, its on-site field performance had not been evaluated. This study was designed to test the method on-site in Malawi in order to evaluate its accuracy and precision in determining iAs on-site, comparing it with a validated reference method and providing original data on inorganic arsenic in Malawian rice and rice-based products. The method was validated against the established laboratory-based HPLC-ICP-MS. Statistical tests indicated there were no significant differences between on-site and laboratory iAs measurements determined using the FDM (p = 0.263, α = 0.05) or between on-site measurements and measurements determined using HPLC-ICP-MS (p = 0.299, α = 0.05). This method allows quick (within 1 h) and efficient on-site screening of iAs concentrations in rice. Copyright © 2018 Elsevier Ltd. All rights reserved.
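    The on-site vs. laboratory comparison rests on a paired t statistic over matched samples; a stdlib sketch with invented iAs readings (the concentrations are hypothetical, not the study's data):

```python
import math
import statistics

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for matched measurements."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    t = statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))
    return t, n - 1

field = [52.0, 60.0, 48.0, 55.0]   # hypothetical on-site FDM readings (ug/kg)
lab   = [50.0, 58.0, 49.0, 54.0]   # same samples by the reference HPLC-ICP-MS
t_stat, df = paired_t(field, lab)
```

    The resulting t is compared against the t distribution with n-1 degrees of freedom to obtain the p-values quoted in the abstract.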

  17. When Educational Material Is Delivered: A Mixed Methods Content Validation Study of the Information Assessment Method.

    PubMed

    Badran, Hani; Pluye, Pierre; Grad, Roland

    2017-03-14

    The Information Assessment Method (IAM) allows clinicians to report the cognitive impact, clinical relevance, intention to use, and expected patient health benefits associated with clinical information received by email. More than 15,000 Canadian physicians and pharmacists use the IAM in continuing education programs. In addition, information providers can use IAM ratings and feedback comments from clinicians to improve their products. Our general objective was to validate the IAM questionnaire for the delivery of educational material (ecological and logical content validity). Our specific objectives were to measure the relevance and evaluate the representativeness of IAM items for assessing information received by email. A 3-part mixed methods study was conducted (convergent design). In part 1 (quantitative longitudinal study), the relevance of IAM items was measured. Participants were 5596 physician members of the Canadian Medical Association who used the IAM. A total of 234,196 ratings were collected in 2012. The relevance of IAM items with respect to their main construct was calculated using descriptive statistics (relevance ratio R). In part 2 (qualitative descriptive study), the representativeness of IAM items was evaluated. A total of 15 family physicians completed semistructured face-to-face interviews. For each construct, we evaluated the representativeness of IAM items using a deductive-inductive thematic qualitative data analysis. In part 3 (mixing quantitative and qualitative parts), results from quantitative and qualitative analyses were reviewed, juxtaposed in a table, discussed with experts, and integrated. Thus, our final results are derived from the views of users (ecological content validation) and experts (logical content validation). Of the 23 IAM items, 21 were validated for content, while 2 were removed. 
In part 1 (quantitative results), 21 items were deemed relevant, while 2 items were deemed not relevant (R=4.86% [N=234,196] and R=3.04% [n

  18. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors but also while eliminating existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce any aliasing and interpolation errors as the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods do. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly-varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem with this method is that the same pixel can take different values depending on which interpolation scheme is chosen, such as "nearest," "linear," "cubic," or "spline" fitting in Matlab.
The conventional, FFT-based spatial filtering method used to
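    The analytic-fit idea can be sketched with a small polynomial basis standing in for the Zernike terms: fit once by least squares, then evaluate the fitted surface on any grid, so resampling introduces no interpolation artifacts and averages out noise. The basis, grids, and surface below are illustrative, not the software's actual model:

```python
import numpy as np

def basis(x, y):
    """Low-order polynomial basis standing in for Zernike terms (illustrative)."""
    return np.stack([np.ones_like(x), x, y,
                     2 * x**2 + 2 * y**2 - 1, x * y], axis=-1)

def resample(x, y, z, x_new, y_new):
    """Fit the analytic basis to a measured map, then evaluate on a new grid."""
    A = basis(x.ravel(), y.ravel())
    coef, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    return basis(x_new.ravel(), y_new.ravel()) @ coef

# coarse 5x5 "measurement" resampled onto a finer 9x9 grid
xc, yc = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
z = 0.5 + 0.3 * xc - 0.2 * yc            # a noiseless tilted surface
xf, yf = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))
z_up = resample(xc, yc, z, xf, yf)
```

    Because every output sample comes from the same analytic expression, the up-sampled map is smooth by construction, unlike pixel-by-pixel interpolation whose values depend on the scheme chosen.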

  19. Effective data validation of high-frequency data: time-point-, time-interval-, and trend-based methods.

    PubMed

    Horn, W; Miksch, S; Egghart, G; Popow, C; Paky, F

    1997-09-01

    Real-time systems for monitoring and therapy planning, which receive their data from on-line monitoring equipment and computer-based patient records, require reliable data. Data validation has to utilize and combine a set of fast methods to detect, eliminate, and repair faulty data, which may lead to life-threatening conclusions. The strength of data validation results from the combination of numerical and knowledge-based methods applied to both continuously-assessed high-frequency data and discontinuously-assessed data. Dealing with high-frequency data, examining single measurements is not sufficient. It is essential to take into account the behavior of parameters over time. We present time-point-, time-interval-, and trend-based methods for validation and repair. These are complemented by time-independent methods for determining an overall reliability of measurements. The data validation benefits from the temporal data-abstraction process, which provides automatically derived qualitative values and patterns. The temporal abstraction is oriented on a context-sensitive and expectation-guided principle. Additional knowledge derived from domain experts forms an essential part for all of these methods. The methods are applied in the field of artificial ventilation of newborn infants. Examples from the real-time monitoring and therapy-planning system VIE-VENT illustrate the usefulness and effectiveness of the methods.
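    A minimal sketch of combining a time-point (range) check with a trend (step) check on a monitored parameter; the thresholds and readings are invented, not taken from VIE-VENT:

```python
def validate_series(values, lo, hi, max_step):
    """Flag each sample as valid if it passes a plausible-range check
    (time-point method) and a bounded-change check against the last
    valid sample (trend/time-interval method)."""
    flags, prev = [], None
    for v in values:
        ok = lo <= v <= hi and (prev is None or abs(v - prev) <= max_step)
        flags.append(ok)
        if ok:
            prev = v
    return flags

# hypothetical oxygen-saturation-like readings with a dropout and a spike
flags = validate_series([96, 97, 40, 96, 150, 95], lo=50, hi=100, max_step=10)
```

    Comparing against the last *valid* sample, rather than the immediately preceding one, is what lets the series recover after an isolated artifact instead of cascading rejections.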

  20. Efficient methods for overlapping group lasso.

    PubMed

    Yuan, Lei; Liu, Jun; Ye, Jieping

    2013-09-01

    The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of the gradient descent type of algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the l(q) norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.
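    The proximal operator at the heart of these algorithms reduces, in the nonoverlapping case, to blockwise soft-thresholding of each group's subvector; a sketch of that building block (the paper's dual computation for overlapping groups is more involved):

```python
import math

def group_prox(v, groups, lam):
    """Prox of lam * sum of group L2 norms: shrink each block toward zero,
    zeroing it entirely when its norm falls below lam."""
    out = list(v)
    for g in groups:
        norm = math.sqrt(sum(v[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = scale * v[i]
    return out

shrunk = group_prox([3.0, 4.0, 0.5], groups=[[0, 1], [2]], lam=1.0)
```

    The first block (norm 5) is shrunk but survives, while the second block (norm 0.5 < lam) is set exactly to zero, which is how the penalty performs group-level feature selection.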

  1. Teaching Analytical Method Transfer through Developing and Validating Then Transferring Dissolution Testing Methods for Pharmaceuticals

    ERIC Educational Resources Information Center

    Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette

    2017-01-01

    Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…

  2. Persistent threats to validity in single-group interrupted time series analysis with a cross over design.

    PubMed

    Linden, Ariel

    2017-04-01

    The basic single-group interrupted time series analysis (ITSA) design has been shown to be susceptible to the most common threat to validity, history: the possibility that some other event caused the observed effect in the time series. A single-group ITSA with a crossover design (in which the intervention is introduced and withdrawn 1 or more times) should be more robust. In this paper, we describe and empirically assess the susceptibility of this design to bias from history. Time series data from 2 natural experiments (the effect of multiple repeals and reinstatements of Louisiana's motorcycle helmet law on motorcycle fatalities, and the association of the implementation and withdrawal of Gorbachev's antialcohol campaign with Russia's mortality crisis) are used to illustrate that history remains a threat to ITSA validity, even in a crossover design. Both empirical examples reveal that the single-group ITSA with a crossover design may be biased because of history. In the case of motorcycle fatalities, helmet laws appeared effective in reducing mortality (while repealing the law increased mortality), but when a control group was added, it was shown that this trend was similar in both groups. In the case of Gorbachev's antialcohol campaign, only when contrasting the results against those of a control group was the withdrawal of the campaign found to be a more likely culprit in explaining the Russian mortality crisis than the collapse of the Soviet Union. Even with a robust crossover design, single-group ITSA models remain susceptible to bias from history. Therefore, a comparable control group design should be included, whenever possible. © 2016 John Wiley & Sons, Ltd.
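    The paper's remedy, contrasting the interrupted series against a control series, can be sketched as a difference-in-differences on segment means; this is a deliberately simplified stand-in for a full segmented-regression ITSA, with invented counts:

```python
def mean(xs):
    return sum(xs) / len(xs)

def controlled_effect(pre, post, ctrl_pre, ctrl_post):
    """Level change in the treated series minus the change in the control
    series, so that a shared 'history' shock cancels out."""
    return (mean(post) - mean(pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# hypothetical fatality counts around a helmet-law repeal
effect = controlled_effect(pre=[10, 11, 9, 10], post=[14, 15, 13, 14],
                           ctrl_pre=[10, 10, 10, 10], ctrl_post=[13, 13, 13, 13])
```

    In the single-group design the treated change (+4 here) would be attributed entirely to the repeal; the control contrast reveals that +3 of it is a common shock, leaving a much smaller plausible effect.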

  3. A validated method for modeling anthropoid hip abduction in silico.

    PubMed

    Hammond, Ashley S; Plavcan, J Michael; Ward, Carol V

    2016-07-01

    The ability to reconstruct hip joint mobility from femora and pelves could provide insight into the locomotion and paleobiology of fossil primates. This study presents a method for modeling hip abduction in anthropoids validated with in vivo data. Hip abduction simulations were performed on a large sample of anthropoids. The modeling approach integrates three-dimensional (3D) polygonal models created from laser surface scans of bones, 3D landmark data, and shape analysis software to digitally articulate and manipulate the hip joint. Range of femoral abduction (degrees) and the abducted knee position (distance spanned at the knee during abduction) were compared with published live animal data. The models accurately estimate knee position and (to a lesser extent) angular abduction across broad locomotor groups. They tend to underestimate abduction for acrobatic or suspensory taxa, but overestimate it in more stereotyped taxa. Correspondence between in vivo and in silico data varies at the specific and generic level. Our models broadly correspond to in vivo data on hip abduction, although the relationship between the models and live animal data is less straightforward than hypothesized. The models can predict acrobatic or stereotyped locomotor adaptation for taxa with values near the extremes of the range of abduction ability. Our findings underscore the difficulties associated with modeling complex systems and the importance of validating in silico models. They suggest that models of joint mobility can offer additional insight into the functional abilities of extinct primates when done in consideration of how joints move and function in vivo. Am J Phys Anthropol 160:529-548, 2016. © 2016 Wiley Periodicals, Inc.

  4. The development of the 'Quality of Life Instrument for Indian Diabetes Patients' (QOLID): a validation and reliability study in middle and higher income groups.

    PubMed

    Nagpal, Jitender; Kumar, Arvind; Kakar, Sonia; Bhartia, Abhishek

    2010-05-01

    To develop a reliable and valid quality of life questionnaire for Indian patients with diabetes. A draft of 75 questions was prepared on the basis of expert opinion, focus group discussions, review of existing literature and detailed semi-structured interviews of patients with diabetes, with the intention of including all aspects of diabetes-specific quality of life considered relevant by patients and care providers, to enable construct validity. A Stage 2 questionnaire with 13 domains and 54 items (questions) was then prepared after expert panel review for obvious irrelevance and duplication of issues. It was administered to 150 participants visiting a diabetes center in New Delhi. Factor analysis was done using the principal component method with varimax rotation. Reliability analysis was done by calculating Cronbach's alpha. For evaluating concordant validity, the questionnaire was co-administered with the DQL-CTQ to 30 participants. The discriminant validity of the questionnaire was tested using 't' tests for metabolic control, co-morbidities, insulin use and gender. Using the principal component method, 8 domains were identified on the basis of an a priori hypothesis and the scree plot. These 8 domains explained 49.9% of the total variation. 34 items (questions) were selected to represent these domains on the basis of extraction communality, factor loading, inter-item and item-total correlations. The final questionnaire has an overall Cronbach's alpha of 0.894 (subscales: 0.55 to 0.85), showing high internal consistency. The questionnaire showed good concordance (product moment correlation 0.724; p = 0.001; subscale correlations 0.457 to 0.779) with the DQL-CTQ. The overall standardized questionnaire score showed good responsiveness to metabolic control and co-morbidities, establishing discriminant validity. The final version of the questionnaire, with 8 domains and 34 items, is a reliable and valid tool for assessment of quality of life of Indian patients with diabetes.
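    The internal-consistency statistic reported above (Cronbach's alpha of 0.894) follows directly from the item variance-covariance structure. A minimal sketch of the standard formula; the simulated 34-item response matrix is an assumption for illustration, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# 150 simulated respondents answering 34 items that share a common trait
rng = np.random.default_rng(0)
trait = rng.normal(size=(150, 1))
scores = trait + 0.5 * rng.normal(size=(150, 34))
print(round(cronbach_alpha(scores), 3))  # high alpha: items are strongly correlated
```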

  5. A Comparison of Three Methods for the Analysis of Skin Flap Viability: Reliability and Validity.

    PubMed

    Tim, Carla Roberta; Martignago, Cintia Cristina Santi; da Silva, Viviane Ribeiro; Dos Santos, Estefany Camila Bonfim; Vieira, Fabiana Nascimento; Parizotto, Nivaldo Antonio; Liebano, Richard Eloin

    2018-05-01

    Objective: Technological advances have provided new alternatives for the analysis of skin flap viability in animal models; however, the interrater validity and reliability of these techniques had yet to be analyzed. The present study aimed to evaluate the interrater validity and reliability of three different methods: weight of paper template (WPT), paper template area (PTA), and photographic analysis. Approach: Sixteen male Wistar rats had their cranially based dorsal skin flap elevated. On the seventh postoperative day, the viable tissue area and the necrotic area of the skin flap were recorded using the paper template method and photo image. The percentage of viable tissue was evaluated using the three methods, simultaneously and independently, by two raters. Interrater reliability and validity were analyzed using the intraclass correlation coefficient, and Bland-Altman plot analysis was used to assess the presence or absence of systematic bias in the validity evaluations. Results: Interrater reliability for WPT, measurement of PTA, and photographic analysis was 0.995, 0.990, and 0.982, respectively. For validity, a correlation >0.90 was observed for all comparisons made between the three methods. In addition, Bland-Altman plot analysis showed agreement between the methods, and no systematic bias was observed. Innovation: Digital methods are an excellent choice for assessing skin flap viability; moreover, they make data use and storage easier. Conclusion: Independently of the method used, interrater reliability and validity proved to be excellent for the analysis of skin flap viability.
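    The interrater statistic used above can be computed directly from the two raters' scores. A minimal sketch of a two-way intraclass correlation; note that the ICC variant shown (ICC(2,1), absolute agreement) and the simulated ratings are assumptions, since the abstract does not specify which form was used:

```python
import numpy as np

def icc2_1(X):
    """Two-way random-effects, absolute-agreement ICC(2,1).
    X has shape (n_subjects, k_raters)."""
    n, k = X.shape
    subj = X.mean(axis=1); rater = X.mean(axis=0); grand = X.mean()
    ssr = k * ((subj - grand) ** 2).sum()    # between-subject sum of squares
    ssc = n * ((rater - grand) ** 2).sum()   # between-rater sum of squares
    sse = ((X - subj[:, None] - rater[None, :] + grand) ** 2).sum()
    msr, msc, mse = ssr / (n - 1), ssc / (k - 1), sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# 16 simulated flaps, percent viable tissue, scored by two close-agreeing raters
rng = np.random.default_rng(1)
truth = rng.uniform(40, 90, size=16)
X = np.column_stack([truth + rng.normal(0, 1, 16),
                     truth + rng.normal(0, 1, 16)])
print(round(icc2_1(X), 3))  # close to 1 when raters agree
```

    The companion Bland-Altman check is simply the mean and ±1.96 SD of the pairwise rater differences.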

  6. Introduction to Validation of Analytical Methods: Potentiometric Determination of CO2

    ERIC Educational Resources Information Center

    Hipólito-Nájera, A. Ricardo; Moya-Hernandez, M. Rosario; Gomez-Balderas, Rodolfo; Rojas-Hernandez, Alberto; Romero-Romo, Mario

    2017-01-01

    Validation of analytical methods is a fundamental subject for chemical analysts working in chemical industries. These methods are also relevant for pharmaceutical enterprises, biotechnology firms, analytical service laboratories, government departments, and regulatory agencies. Therefore, for undergraduate students enrolled in majors in the field…

  7. Development and validation of a discriminative dissolution method for atorvastatin calcium tablets using in vivo data by LC and UV methods.

    PubMed

    Machado, J C; Lange, A D; Todeschini, V; Volpato, N M

    2014-02-01

    A dissolution method to analyze atorvastatin tablets was developed and validated using in vivo data for the reference product (RP) and a pilot batch (PB). The appropriate conditions were determined after solubility tests using different media, and sink conditions were established. The conditions used were the paddle apparatus at 50 rpm and 900 mL of potassium phosphate buffer pH 6.0 as dissolution medium. In vivo release profiles were obtained from the bioequivalence study of the RP and the generic candidate PB. The fraction of dose absorbed was calculated using the Loo-Riegelman method. It was necessary to use a time-scaling factor of approximately 6.0 to associate the values of absorbed fraction and dissolved fraction, obtaining a level A in vitro-in vivo correlation. Quantification of the amount of drug dissolved was performed using high-performance liquid chromatography and ultraviolet spectrophotometry and validated according to the USP protocol. The discriminative power of the dissolution conditions was assessed using two different pilot batches of atorvastatin tablets (PA and PB) and the RP. The dissolution test was validated and may be used as a discriminating method in quality control and in the development of new formulations.

  8. Clinical Validation of Heart Rate Apps: Mixed-Methods Evaluation Study

    PubMed Central

    Stans, Jelle; Mortelmans, Christophe; Van Haelst, Ruth; Van Schelvergem, Gertjan; Pelckmans, Caroline; Smeets, Christophe JP; Lanssens, Dorien; De Cannière, Hélène; Storms, Valerie; Thijs, Inge M; Vaes, Bert; Vandervoort, Pieter M

    2017-01-01

    Background Photoplethysmography (PPG) is a proven way to measure heart rate (HR). This technology is already available in smartphones, which allows measuring HR using only the smartphone. Given the widespread availability of smartphones, this creates a scalable way to enable mobile HR monitoring. An essential precondition is that these technologies are as reliable and accurate as the current clinical (gold) standards. At this moment, there is no consensus on a gold standard method for the validation of HR apps. This results in different validation processes that do not always reflect the veracious outcome of comparison. Objective The aim of this paper was to investigate and describe the necessary elements in validating and comparing HR apps versus standard technology. Methods The FibriCheck (Qompium) app was used in two separate prospective nonrandomized studies. In the first study, the HR of the FibriCheck app was consecutively compared with 2 different Food and Drug Administration (FDA)-cleared HR devices: the Nonin oximeter and the AliveCor Mobile ECG. In the second study, a next step in validation was performed by comparing the beat-to-beat intervals of the FibriCheck app to a synchronized ECG recording. Results In the first study, the HR (BPM, beats per minute) of 88 random subjects consecutively measured with the 3 devices showed a correlation coefficient of .834 between FibriCheck and Nonin, .88 between FibriCheck and AliveCor, and .897 between Nonin and AliveCor. A one-way analysis of variance (ANOVA; P=.61) was performed to test the hypothesis that there were no significant differences between the HRs as measured by the 3 devices. In the second study, 20,298 paired R-R intervals (RRIs) and peak-to-peak intervals (PPIs), in milliseconds, from 229 subjects were analyzed. This resulted in a positive correlation (rs=.993, root mean square error [RMSE]=23.04 ms, and normalized root mean square error [NRMSE]=0.012) between the PPI from FibriCheck and the RRI from the wearable
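    The beat-to-beat agreement metrics quoted above (RMSE and range-normalized NRMSE) reduce to a few lines over the paired interval series. The interval values below are invented for illustration, not the study's data:

```python
import numpy as np

def rmse(ref, test):
    """Root mean square error between paired series."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return float(np.sqrt(np.mean((ref - test) ** 2)))

def nrmse(ref, test):
    """RMSE normalized by the range of the reference series."""
    ref = np.asarray(ref, float)
    return rmse(ref, test) / float(ref.max() - ref.min())

rri = [802, 795, 810, 788, 805]  # ECG R-R intervals, ms (made up)
ppi = [805, 792, 812, 791, 802]  # PPG peak-to-peak intervals, ms (made up)
print(round(rmse(rri, ppi), 2))   # 2.83
print(round(nrmse(rri, ppi), 3))  # 0.129
```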

  9. Analytical method development and validation of spectrofluorimetric and spectrophotometric determination of some antimicrobial drugs in their pharmaceuticals.

    PubMed

    Ibrahim, F; Wahba, M E K; Magdy, G

    2018-01-05

    In this study, three novel, sensitive, simple and validated spectrophotometric and spectrofluorimetric methods have been proposed for estimation of some important antimicrobial drugs. The first two methods have been proposed for estimation of two important third-generation cephalosporin antibiotics namely, cefixime and cefdinir. Both methods were based on condensation of the primary amino group of the studied drugs with acetyl acetone and formaldehyde in acidic medium. The resulting products were measured by spectrophotometric (Method I) and spectrofluorimetric (Method II) tools. Regarding method I, the absorbance was measured at 315 nm and 403 nm with linearity ranges of 5.0-140.0 and 10.0-100.0 μg/mL for cefixime and cefdinir, respectively. Meanwhile in method II, the produced fluorophore was measured at λem 488 nm or 491 nm after excitation at λex 410 nm with linearity ranges of 0.20-10.0 and 0.20-36.0 μg/mL for cefixime and cefdinir, respectively. On the other hand, method III was devoted to estimate nifuroxazide spectrofluorimetrically depending on formation of a highly fluorescent product upon reduction of the studied drug with zinc powder in acidic medium. Measurement of the fluorescent product was carried out at λem 335 nm following excitation at λex 255 nm with a linearity range of 0.05 to 1.6 μg/mL. The developed methods were subjected to detailed validation procedures; moreover, they were used for the estimation of the concerned drugs in their pharmaceuticals. It was found that there is a good agreement between the obtained results and those obtained by the reported methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Analytical method development and validation of spectrofluorimetric and spectrophotometric determination of some antimicrobial drugs in their pharmaceuticals

    NASA Astrophysics Data System (ADS)

    Ibrahim, F.; Wahba, M. E. K.; Magdy, G.

    2018-01-01

    In this study, three novel, sensitive, simple and validated spectrophotometric and spectrofluorimetric methods have been proposed for estimation of some important antimicrobial drugs. The first two methods have been proposed for estimation of two important third-generation cephalosporin antibiotics namely, cefixime and cefdinir. Both methods were based on condensation of the primary amino group of the studied drugs with acetyl acetone and formaldehyde in acidic medium. The resulting products were measured by spectrophotometric (Method I) and spectrofluorimetric (Method II) tools. Regarding method I, the absorbance was measured at 315 nm and 403 nm with linearity ranges of 5.0-140.0 and 10.0-100.0 μg/mL for cefixime and cefdinir, respectively. Meanwhile in method II, the produced fluorophore was measured at λem 488 nm or 491 nm after excitation at λex 410 nm with linearity ranges of 0.20-10.0 and 0.20-36.0 μg/mL for cefixime and cefdinir, respectively. On the other hand, method III was devoted to estimate nifuroxazide spectrofluorimetrically depending on formation of highly fluorescent product upon reduction of the studied drug with Zinc powder in acidic medium. Measurement of the fluorescent product was carried out at λem 335 nm following excitation at λex 255 nm with linearity range of 0.05 to 1.6 μg/mL. The developed methods were subjected to detailed validation procedure, moreover they were used for the estimation of the concerned drugs in their pharmaceuticals. It was found that there is a good agreement between the obtained results and those obtained by the reported methods.

  11. Prognostic value of the new Grade Groups in Prostate Cancer: a multi-institutional European validation study.

    PubMed

    Mathieu, R; Moschini, M; Beyer, B; Gust, K M; Seisen, T; Briganti, A; Karakiewicz, P; Seitz, C; Salomon, L; de la Taille, A; Rouprêt, M; Graefen, M; Shariat, S F

    2017-06-01

    We aimed to assess the prognostic relevance of the new Grade Groups in Prostate Cancer (PCa) within a large cohort of European men treated with radical prostatectomy (RP). Data from 27 122 patients treated with RP at seven European centers were analyzed. We investigated the prognostic performance of the new Grade Groups (based on Gleason score 3+3, 3+4, 4+3, 8 and 9-10) on biopsy and RP specimen, adjusted for established clinical and pathological characteristics. Multivariable Cox proportional hazards regression models assessed the association of the new Grade Groups with biochemical recurrence (BCR). Prognostic accuracies of the models were assessed using Harrell's C-index. Median follow-up was 29 months (interquartile range, 13-54). The 4-year estimated BCR-free survival (bRFS) rates for biopsy Grade Groups 1-5 were 91.3%, 81.6%, 69.8%, 60.3% and 44.4%, respectively. The 4-year estimated bRFS rates for RP Grade Groups 1-5 were 96.1%, 86.7%, 67.0%, 63.1% and 41.0%, respectively. Compared with Grade Group 1, all other Grade Groups, based on both biopsy and RP specimens, were independently associated with a lower bRFS (all P<0.01). Adjusted pairwise comparisons revealed statistically significant differences between all Grade Groups, except between groups 3 and 4 on RP specimens (P=0.10). The discrimination of the multivariable base prognostic models under the current three-tier and the new five-tier systems was not clinically different (0.3% and 0.9% increases in discrimination for the clinical and pathological models, respectively). We validated the independent prognostic value of the new Grade Groups on biopsy and RP specimens in European PCa patients. However, the new classification does not improve the accuracy of prognostic models by a clinically significant margin. Nevertheless, it may help physicians and patients estimate disease aggressiveness with a user-friendly, clinically relevant and reproducible method.
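    Harrell's C-index, used above to compare the three-tier and five-tier models, is the fraction of comparable subject pairs in which the higher-risk subject fails earlier, allowing for right censoring. A minimal sketch with toy data (not the cohort's):

```python
def c_index(times, events, risk):
    """Harrell's C: among comparable pairs (i had the event and failed
    strictly earlier than j), the fraction where i's risk score is higher;
    risk-score ties count one half."""
    conc = ties = comp = 0
    for i in range(len(times)):
        if not events[i]:           # censored subjects cannot anchor a pair
            continue
        for j in range(len(times)):
            if times[i] < times[j]:
                comp += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / comp

times  = [12, 29, 54, 70, 83]   # months to BCR or censoring (made up)
events = [1, 1, 0, 1, 0]        # 1 = BCR observed, 0 = censored
risk   = [5, 4, 2, 3, 1]        # model risk scores (made up)
print(c_index(times, events, risk))  # 1.0: risk order matches failure order
```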

  12. Validation of laboratory-scale recycling test method of paper PSA label products

    Treesearch

    Carl Houtman; Karen Scallon; Richard Oldack

    2008-01-01

    Starting with test methods and a specification developed by the U.S. Postal Service (USPS) Environmentally Benign Pressure Sensitive Adhesive Postage Stamp Program, a laboratory-scale test method and a specification were developed and validated for pressure-sensitive adhesive labels. By comparing results from this new test method and pilot-scale tests, which have been...

  13. Compatible validated spectrofluorimetric and spectrophotometric methods for determination of vildagliptin and saxagliptin by factorial design experiments.

    PubMed

    Abdel-Aziz, Omar; Ayad, Miriam F; Tadros, Mariam M

    2015-04-05

    Simple, selective and reproducible spectrofluorimetric and spectrophotometric methods have been developed for the determination of vildagliptin and saxagliptin in bulk and their pharmaceutical dosage forms. The first proposed spectrofluorimetric method is based on the dansylation reaction of the amino group of vildagliptin with dansyl chloride to form a highly fluorescent product. The formed product was measured spectrofluorimetrically at 455 nm after excitation at 345 nm. Beer's law was obeyed in a concentration range of 100-600 μg ml-1. The second proposed spectrophotometric method is based on the charge transfer complex of saxagliptin with tetrachloro-1,4-benzoquinone (p-chloranil). The formed charge transfer complex was measured spectrophotometrically at 530 nm. Beer's law was obeyed in a concentration range of 100-850 μg ml-1. The third proposed spectrophotometric method is based on the condensation reaction of the primary amino group of saxagliptin with formaldehyde and acetyl acetone to form a yellow colored product known as Hantzsch reaction, measured at 342.5 nm. Beer's law was obeyed in a concentration range of 50-300 μg ml-1. All the variables were studied to optimize the reactions' conditions using factorial design. The developed methods were validated and proved to be specific and accurate for quality control of vildagliptin and saxagliptin in their pharmaceutical dosage forms. Copyright © 2015. Published by Elsevier B.V.

  14. Compatible validated spectrofluorimetric and spectrophotometric methods for determination of vildagliptin and saxagliptin by factorial design experiments

    NASA Astrophysics Data System (ADS)

    Abdel-Aziz, Omar; Ayad, Miriam F.; Tadros, Mariam M.

    2015-04-01

    Simple, selective and reproducible spectrofluorimetric and spectrophotometric methods have been developed for the determination of vildagliptin and saxagliptin in bulk and their pharmaceutical dosage forms. The first proposed spectrofluorimetric method is based on the dansylation reaction of the amino group of vildagliptin with dansyl chloride to form a highly fluorescent product. The formed product was measured spectrofluorimetrically at 455 nm after excitation at 345 nm. Beer's law was obeyed in a concentration range of 100-600 μg ml-1. The second proposed spectrophotometric method is based on the charge transfer complex of saxagliptin with tetrachloro-1,4-benzoquinone (p-chloranil). The formed charge transfer complex was measured spectrophotometrically at 530 nm. Beer's law was obeyed in a concentration range of 100-850 μg ml-1. The third proposed spectrophotometric method is based on the condensation reaction of the primary amino group of saxagliptin with formaldehyde and acetyl acetone to form a yellow colored product known as Hantzsch reaction, measured at 342.5 nm. Beer's law was obeyed in a concentration range of 50-300 μg ml-1. All the variables were studied to optimize the reactions' conditions using factorial design. The developed methods were validated and proved to be specific and accurate for quality control of vildagliptin and saxagliptin in their pharmaceutical dosage forms.

  15. External Standards or Standard Addition? Selecting and Validating a Method of Standardization

    NASA Astrophysics Data System (ADS)

    Harvey, David T.

    2002-05-01

    A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and gain practical experience of the difference between external standardization and standard addition.
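    The spike-recovery validation mentioned above is a one-line calculation; recoveries near 100% suggest the sample matrix is not biasing the measurement. The numbers below are invented for illustration:

```python
def spike_recovery(spiked, unspiked, added):
    """Percent recovery of a known analyte spike:
    100 * (measured after spike - measured before spike) / amount added."""
    return 100.0 * (spiked - unspiked) / added

# Sample reads 5.0 ppm; after adding a 10.0 ppm spike it reads 14.8 ppm
print(round(spike_recovery(14.8, 5.0, 10.0), 1))  # 98.0
```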

  16. System and method for secure group transactions

    DOEpatents

    Goldsmith, Steven Y [Rochester, MN]

    2006-04-25

    A method and a secure system, processing on one or more computers, provides a way to control a group transaction. The invention uses group consensus access control and multiple distributed secure agents in a network environment. Each secure agent can organize with the other secure agents to form a secure distributed agent collective.

  17. Right ventriculography as a valid method for the diagnosis of tricuspid insufficiency.

    PubMed

    Ubago, J L; Figueroa, A; Colman, T; Ochoteco, A; Rodríguez, M; Durán, C M

    1981-01-01

    The value of right ventriculography in the diagnosis of tricuspid insufficiency (TI) is often questioned because of 1) the high incidence of premature ventricular contractions (PVCs) during injections and 2) interference of the catheter with the valve closure mechanism. In 168 patients a commercially available, non-preshaped, balloon-tipped catheter was used for right ventriculography. To avoid the induction of PVCs, the catheter tip was placed in the middle third of the diaphragmatic wall of the right ventricle, and the balloon was inflated, becoming trapped by the trabeculae. In this position the catheter's side holes should be located in the inflow chamber. To ensure this correct position, and therefore the absence of ectopic beats during angiography, a saline test injection was performed beforehand in every case. With this technique the incidence of PVCs during ventriculography was only 7.7%. In all but one case, such beats were isolated. The 168 patients were divided into three groups according to their likelihood of experiencing tricuspid interference by the catheter: group I included 41 patients with a normal heart or with coronary artery disease. None of these patients had TI. In group II (28 patients with right ventricular pressure or volume overload, or cardiomyopathy), only 2 had TI, both with a previous clinical diagnosis of regurgitation. Group III contained 99 patients with rheumatic heart disease. Thirty-five of them showed angiographic TI, and 24 of these had the diagnosis confirmed either clinically or at surgery. It is felt that this technique of right ventriculography, with its low incidence of PVCs and slight interference with tricuspid closure, is a valid method for the objective study of the tricuspid valve.

  18. Validity and Feasibility of a Digital Diet Estimation Method for Use with Preschool Children: A Pilot Study

    ERIC Educational Resources Information Center

    Nicklas, Theresa A.; O'Neil, Carol E.; Stuff, Janice; Goodell, Lora Suzanne; Liu, Yan; Martin, Corby K.

    2012-01-01

    Objective: The goal of the study was to assess the validity and feasibility of a digital diet estimation method for use with preschool children in "Head Start." Methods: Preschool children and their caregivers participated in validation (n = 22) and feasibility (n = 24) pilot studies. Validity was determined in the metabolic research unit using…

  19. PRN 96-1: Tolerance Enforcement Methods - Independent Laboratory Validation by Petitioner

    EPA Pesticide Factsheets

    This notice is intended to clarify the requirements for submission of an Independent Laboratory Validation to accompany new pesticide analytical methods and does not contain additional data requirements. This notice supersedes PR Notice 88-5.

  20. Optimized and Validated Spectrophotometric Methods for the Determination of Enalapril Maleate in Commercial Dosage Forms

    PubMed Central

    Rahman, Nafisur; Haque, Sk Manirul

    2008-01-01

    Four simple, rapid and sensitive spectrophotometric methods have been proposed for the determination of enalapril maleate in pharmaceutical formulations. The first method is based on the reaction of carboxylic acid group of enalapril maleate with a mixture of potassium iodate (KIO3) and iodide (KI) to form yellow colored product in aqueous medium at 25 ± 1°C. The reaction is followed spectrophotometrically by measuring the absorbance at 352 nm. The second, third and fourth methods are based on the charge transfer complexation reaction of the drug with p-chloranilic acid (pCA) in 1, 4-dioxan-methanol medium, 2, 3-dichloro 5, 6-dicyano 1, 4-benzoquinone (DDQ) in acetonitrile-1,4 dioxane medium and iodine in acetonitrile-dichloromethane medium. Under optimized experimental conditions, Beer’s law is obeyed in the concentration ranges of 2.5–50, 20–560, 5–75 and 10–200 μg mL−1, respectively. All the methods have been applied to the determination of enalapril maleate in pharmaceutical dosage forms. Results of analysis are validated statistically. PMID:19609388

  1. Empirical validation of statistical parametric mapping for group imaging of fast neural activity using electrical impedance tomography.

    PubMed

    Packham, B; Barnes, G; Dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D

    2016-06-01

    Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues while maintaining the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that this is a viable approach for analyzing EIT images of neural activity.

  2. Empirical validation of statistical parametric mapping for group imaging of fast neural activity using electrical impedance tomography

    PubMed Central

    Packham, B; Barnes, G; dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D

    2016-01-01

    Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues while maintaining the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that this is a viable approach for analyzing EIT images of neural activity. PMID:27203477

  3. A promising method for identifying cross-cultural differences in patient perspective: the use of Internet-based focus groups for content validation of new Patient Reported Outcome assessments

    PubMed Central

    Atkinson, Mark J; Lohs, Jan; Kuhagen, Ilka; Kaufman, Julie; Bhaidani, Shamsu

    2006-01-01

    Objectives This proof of concept (POC) study was designed to evaluate the use of an Internet-based bulletin board technology to aid parallel cross-cultural development of thematic content for a new set of patient-reported outcome measures (PROs). Methods The POC study, conducted in Germany and the United States, utilized Internet Focus Groups (IFGs) to assure the validity of new PRO items across the two cultures – all items were designed to assess the impact of excess facial oil on individuals' lives. The on-line IFG activities were modeled after traditional face-to-face focus groups and organized by a common 'Topic' Guide designed with input from thought leaders in dermatology and health outcomes research. The two sets of IFGs were professionally moderated in the native language of each country. IFG moderators coded the thematic content of transcripts, and a frequency analysis of code endorsement was used to identify areas of content similarity and difference between the two countries. Based on this information, draft PRO items were designed and a majority (80%) of the original participants returned to rate the relative importance of the newly designed questions. Findings The use of parallel cross-cultural content analysis of IFG transcripts permitted identification of the major content themes in each country as well as exploration of the possible reasons for any observed differences between the countries. Results from coded frequency counts and transcript reviews informed the design and wording of the test questions for the future PRO instrument(s). Subsequent ratings of item importance also deepened our understanding of potential areas of cross-cultural difference, differences that would be explored over the course of future validation studies involving these PROs. Conclusion The use of IFGs for cross-cultural content development received positive reviews from participants and was found to be both cost and time effective. The novel thematic coding methodology

  4. Pinaverium Bromide: Development and Validation of Spectrophotometric Methods for Assay and Dissolution Studies.

    PubMed

    Martins, Danielly da Fonte Carvalho; Florindo, Lorena Coimbra; Machado, Anna Karolina Mouzer da Silva; Todeschini, Vítor; Sangoi, Maximiliano da Silva

    2017-11-01

    This study presents the development and validation of UV spectrophotometric methods for the determination of pinaverium bromide (PB) in tablet assay and dissolution studies. The methods were satisfactorily validated according to International Conference on Harmonization guidelines. The response was linear (r2 > 0.99) in the concentration ranges of 2-14 μg/mL at 213 nm and 10-70 μg/mL at 243 nm. The LOD and LOQ were 0.39 and 1.31 μg/mL, respectively, at 213 nm. For the 243 nm method, the LOD and LOQ were 2.93 and 9.77 μg/mL, respectively. Precision was evaluated by RSD, and the obtained results were lower than 2%. Adequate accuracy was also obtained. The methods proved to be robust using a full factorial design evaluation. For PB dissolution studies, the best conditions were achieved using a United States Pharmacopeia Dissolution Apparatus 2 (paddle) at 50 rpm and with 900 mL 0.1 M hydrochloric acid as the dissolution medium, presenting satisfactory results during the validation tests. In addition, the kinetic parameters of drug release were investigated using model-dependent methods, and the dissolution profiles were best described by the first-order model. Therefore, the proposed methods were successfully applied for the assay and dissolution analysis of PB in commercial tablets.
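    The LOD and LOQ values reported above follow the usual ICH Q2 calibration-based convention, LOD = 3.3·σ/S and LOQ = 10·σ/S, where S is the calibration slope and σ the residual standard deviation of the regression. A minimal sketch with a fabricated calibration series (not the study's data):

```python
import numpy as np

def lod_loq(conc, resp):
    """ICH Q2-style LOD/LOQ from a linear calibration curve."""
    conc, resp = np.asarray(conc, float), np.asarray(resp, float)
    slope, intercept = np.polyfit(conc, resp, 1)
    resid = resp - (slope * conc + intercept)
    sigma = np.sqrt((resid ** 2).sum() / (len(conc) - 2))  # residual SD
    return 3.3 * sigma / slope, 10 * sigma / slope

conc = [2, 4, 6, 8, 10, 12, 14]                    # concentration, μg/mL (made up)
resp = [0.11, 0.20, 0.32, 0.41, 0.52, 0.60, 0.71]  # absorbance (made up)
lod, loq = lod_loq(conc, resp)
print(round(loq / lod, 2))  # 3.03 (= 10/3.3, by construction)
```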

  5. Pressure ulcer prevention algorithm content validation: a mixed-methods, quantitative study.

    PubMed

    van Rijswijk, Lia; Beitz, Janice M

    2015-04-01

Translating pressure ulcer prevention (PUP) evidence-based recommendations into practice remains challenging for a variety of reasons, including the perceived quality, validity, and usability of the research or the guideline itself. Following the development and face validation testing of an evidence-based PUP algorithm, additional stakeholder input and testing were needed. Using convenience sampling methods, wound care experts attending a national wound care conference and a regional wound ostomy continence nursing (WOCN) conference and/or graduates of a WOCN program were invited to participate in an Institutional Review Board-approved, mixed-methods quantitative survey with qualitative components to examine algorithm content validity. After participants provided written informed consent, demographic variables were collected and participants were asked to comment on and rate the relevance and appropriateness of each of the 26 algorithm decision points/steps using standard content validation study procedures. All responses were anonymous. Descriptive summary statistics, mean relevance/appropriateness scores, and the content validity index (CVI) were calculated. Qualitative comments were transcribed and thematically analyzed. Of the 553 wound care experts invited, 79 (average age 52.9 years, SD 10.1; range 23-73) consented to participate and completed the study (a response rate of 14%). Most (67, 85%) were female, registered (49, 62%) or advanced practice (12, 15%) nurses, and had > 10 years of health care experience (88, 92%). Other health disciplines included medical doctors, physical therapists, nurse practitioners, and certified nurse specialists. Almost all had received formal wound care education (75, 95%). On a Likert-type scale of 1 (not relevant/appropriate) to 4 (very relevant and appropriate), the average score for the entire algorithm/all decision points (N = 1,912) was 3.72 with an overall CVI of 0.94 (out of 1). 
The only decision point/step recommendation
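    The overall CVI of 0.94 reported above comes from standard content-validation arithmetic on the same 1 (not relevant) to 4 (very relevant) scale: an item's I-CVI is the proportion of experts rating it 3 or 4, and the scale-level S-CVI/Ave is the mean of the item CVIs. A sketch with invented ratings:

```python
# Content validity index (CVI) on a 1-4 relevance scale.
# The expert ratings below are invented for illustration.
def item_cvi(ratings):
    """I-CVI: share of experts rating the item 3 or 4."""
    return sum(r >= 3 for r in ratings) / len(ratings)

def scale_cvi(matrix):
    """S-CVI/Ave: mean of the item-level CVIs."""
    return sum(item_cvi(item) for item in matrix) / len(matrix)

ratings = [
    [4, 4, 3, 4, 2],   # item 1 -> I-CVI 0.8
    [3, 4, 4, 4, 4],   # item 2 -> I-CVI 1.0
    [4, 3, 3, 4, 4],   # item 3 -> I-CVI 1.0
]
print(round(scale_cvi(ratings), 2))   # -> 0.93
```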

  6. Estimating Population Cause-Specific Mortality Fractions from in-Hospital Mortality: Validation of a New Method

    PubMed Central

    Murray, Christopher J. L; Lopez, Alan D; Barofsky, Jeremy T; Bryson-Cahn, Chloe; Lozano, Rafael

    2007-01-01

    Background Cause-of-death data for many developing countries are not available. Information on deaths in hospital by cause is available in many low- and middle-income countries but is not a representative sample of deaths in the population. We propose a method to estimate population cause-specific mortality fractions (CSMFs) using data already collected in many middle-income and some low-income developing nations, yet rarely used: in-hospital death records. Methods and Findings For a given cause of death, a community's hospital deaths are equal to total community deaths multiplied by the proportion of deaths occurring in hospital. If we can estimate the proportion dying in hospital, we can estimate the proportion dying in the population using deaths in hospital. We propose to estimate the proportion of deaths for an age, sex, and cause group that die in hospital from the subset of the population where vital registration systems function or from another population. We evaluated our method using nearly complete vital registration (VR) data from Mexico 1998–2005, which records whether a death occurred in a hospital. In this validation test, we used 45 disease categories. We validated our method in two ways: nationally and between communities. First, we investigated how the method's accuracy changes as we decrease the amount of Mexican VR used to estimate the proportion of each age, sex, and cause group dying in hospital. Decreasing VR data used for this first step from 100% to 9% produces only a 12% maximum relative error between estimated and true CSMFs. Even if Mexico collected full VR information only in its capital city with 9% of its population, our estimation method would produce an average relative error in CSMFs across the 45 causes of just over 10%. Second, we used VR data for the capital zone (Distrito Federal and Estado de Mexico) and estimated CSMFs for the three lowest-development states. Our estimation method gave an average relative error of 20%, 23
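    The core estimation step described above is simple arithmetic: for each cause, total community deaths are hospital deaths divided by the proportion of that cause's deaths occurring in hospital (borrowed from a subpopulation with vital registration), and CSMFs are the normalized totals. A sketch with invented numbers:

```python
# Sketch of the CSMF estimation idea from the abstract. Cause names,
# death counts, and in-hospital proportions are invented for illustration.
def estimate_csmf(hospital_deaths, p_in_hospital):
    totals = {c: hospital_deaths[c] / p_in_hospital[c] for c in hospital_deaths}
    grand = sum(totals.values())
    return {c: d / grand for c, d in totals.items()}

hospital_deaths = {"ihd": 300, "injury": 80, "diarrhoea": 40}
p_in_hospital   = {"ihd": 0.60, "injury": 0.40, "diarrhoea": 0.20}
csmf = estimate_csmf(hospital_deaths, p_in_hospital)
print({c: round(f, 3) for c, f in csmf.items()})
```

    Note how the correction matters: diarrhoea has half the hospital deaths of injury but the same estimated total, because only 20% of its deaths occur in hospital.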

  7. The Individual Regulation Component of Group Emotional Intelligence: Measure Development and Validation

    ERIC Educational Resources Information Center

    Peterson, Christina Hamme

    2012-01-01

    Counseling work is increasingly conducted in team format. The methods counseling teams use to manage the emotional component of their group life, or their group emotional intelligence, have been proposed as significantly contributing to group member trust, cooperation, and ultimate performance. Item development, exploratory factor analysis, and…

  8. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekechukwu, A.

This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers, and books reviewed is given in Appendix 1. Available validation documents and guides are listed in the appendix; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of validation at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO), and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all documents were published in English.

  9. Use of multiple cluster analysis methods to explore the validity of a community outcomes concept map.

    PubMed

    Orsi, Rebecca

    2017-02-01

    Concept mapping is now a commonly-used technique for articulating and evaluating programmatic outcomes. However, research regarding validity of knowledge and outcomes produced with concept mapping is sparse. The current study describes quantitative validity analyses using a concept mapping dataset. We sought to increase the validity of concept mapping evaluation results by running multiple cluster analysis methods and then using several metrics to choose from among solutions. We present four different clustering methods based on analyses using the R statistical software package: partitioning around medoids (PAM), fuzzy analysis (FANNY), agglomerative nesting (AGNES) and divisive analysis (DIANA). We then used the Dunn and Davies-Bouldin indices to assist in choosing a valid cluster solution for a concept mapping outcomes evaluation. We conclude that the validity of the outcomes map is high, based on the analyses described. Finally, we discuss areas for further concept mapping methods research. Copyright © 2016 Elsevier Ltd. All rights reserved.
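    The study above ran its clustering and index calculations in R's cluster package; a rough Python analogue of the index-guided validity check is sketched below, scoring a candidate partition of toy two-dimensional data with a hand-rolled Dunn index (higher is better) and Davies-Bouldin index (lower is better). Data and partition are invented.

```python
# Scoring a candidate cluster solution with the Dunn and Davies-Bouldin
# indices, as in the validity check described above (toy data only).
import numpy as np

def pairwise(a, b):
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def dunn_index(X, labels):
    groups = [X[labels == k] for k in np.unique(labels)]
    max_diam = max(pairwise(g, g).max() for g in groups)
    min_sep = min(pairwise(g, h).min()
                  for i, g in enumerate(groups) for h in groups[i + 1:])
    return min_sep / max_diam          # higher = better separated

def davies_bouldin(X, labels):
    ks = np.unique(labels)
    cents = np.array([X[labels == k].mean(axis=0) for k in ks])
    scat = np.array([np.linalg.norm(X[labels == k] - c, axis=1).mean()
                     for k, c in zip(ks, cents)])
    worst = [max((scat[i] + scat[j]) / np.linalg.norm(cents[i] - cents[j])
                 for j in range(len(ks)) if j != i) for i in range(len(ks))]
    return float(np.mean(worst))       # lower = more compact/separated

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(4, 0.3, (20, 2))])
labels = np.repeat([0, 1], 20)         # candidate two-cluster partition
print(f"Dunn={dunn_index(X, labels):.2f}  DB={davies_bouldin(X, labels):.2f}")
```

    Running several clustering methods and keeping the partition with the best index values is exactly the selection logic the abstract describes.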

  10. Validating justifications in preschool girls' and boys' friendship group talk: implications for linguistic and socio-cognitive development.

    PubMed

    Kyratzis, Amy; Ross, Tamara Shuqum; Koymen, S Bahar

    2010-01-01

Children are believed to construct their causal theories through talk and interaction, but with the exception of a few studies, little or nothing is known about how young children justify and build theories of the world together with same-age peers through naturally occurring interaction. Children's sensitivity to when a pair or group of interlocutors who interact frequently together feel that a justification is needed is an index of developing pragmatic competence (Goetz & Shatz, 1999) and may be influenced by interactive goals and gender identity positioning. Studies suggest that salient contexts for justifications for young children are disagreement and control (e.g. Veneziano & Sinclair, 1995), but researchers have been less cognizant of 'situations in which partners verbally assist in the construction of justifications as a means to maintain contact or create solidarity' (Goetz & Shatz, 1999: 722) as contexts for justifications. The present study examined the spontaneously produced justification constructions in the naturally occurring free play of five friendship groups of preschool-aged children (aged from 3;6 to 5;4), in terms of the motivating context of the justification, marking of the causal relationship with a connective, and causal theories accessed in the talk. Partner expansion (validating justifications) was a salient motivating context for justifications, especially in the talk of friendship groups of girls, and seemed to privilege greater marking of the causal relationship with a connective and less arbitrary reasoning. One group of girls varied their use of validating justifications depending on the theme of play. Results are discussed in terms of the implications of use of validating justifications for children's causal theory building with peers, linguistic development, and pragmatic development.

  11. The establishment of tocopherol reference intervals for Hungarian adult population using a validated HPLC method.

    PubMed

    Veres, Gábor; Szpisjak, László; Bajtai, Attila; Siska, Andrea; Klivényi, Péter; Ilisz, István; Földesi, Imre; Vécsei, László; Zádori, Dénes

    2017-09-01

Evidence suggests that decreased α-tocopherol (the most biologically active substance in the vitamin E group) level can cause neurological symptoms, most likely ataxia. The aim of the current study was to first provide reference intervals for serum tocopherols in the adult Hungarian population with appropriate sample size, recruiting healthy control subjects and neurological patients suffering from conditions without symptoms of ataxia, myopathy or cognitive deficiency. A validated HPLC method applying a diode array detector and rac-tocol as internal standard was utilized for that purpose. Furthermore, serum cholesterol levels were determined as well for data normalization. The calculated 2.5-97.5% reference intervals for α-, β/γ- and δ-tocopherols were 24.62-54.67, 0.81-3.69 and 0.29-1.07 μM, respectively, whereas the tocopherol/cholesterol ratios were 5.11-11.27, 0.14-0.72 and 0.06-0.22 μmol/mmol, respectively. The establishment of these reference intervals may improve the diagnostic accuracy of tocopherol measurements in certain neurological conditions with decreased tocopherol levels. Moreover, the current study draws special attention to the possible pitfalls in the complex process of the determination of reference intervals as well, including the selection of study population, the application of internal standard and method validation and the calculation of tocopherol/cholesterol ratios. Copyright © 2017 John Wiley & Sons, Ltd.
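    The 2.5-97.5% intervals quoted above are nonparametric percentile reference intervals. A sketch of the calculation on simulated serum values (the distribution parameters are invented, not the study's data):

```python
# Nonparametric 2.5th-97.5th percentile reference interval, the kind of
# calculation behind the tocopherol intervals above. Values are simulated.
import numpy as np

def reference_interval(values, low=2.5, high=97.5):
    return np.percentile(values, [low, high])

rng = np.random.default_rng(1)
alpha_toc = rng.normal(38, 7, 300)    # hypothetical alpha-tocopherol, umol/L
lo, hi = reference_interval(alpha_toc)
print(f"2.5-97.5% interval: {lo:.1f}-{hi:.1f} umol/L")
```

    By construction roughly 95% of the sampled population falls inside the interval; guidelines typically also require a minimum sample size (often 120 or more) for percentile estimates to be stable.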

  12. A Group Contribution Method for Estimating Cetane and Octane Numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kubic, William Louis

Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in the existing fuel supplies. Often, the physical properties needed to assess the viability of a potential biofuel are not available. The only reliable information available may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years. The most common application is estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties, including cetane and octane numbers. Often, published group contribution methods are limited in terms of the types of functional groups and range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
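    In its simplest (linear) form, a group contribution method estimates a property as a weighted sum of counted structural groups; the study above replaces the linear form with a neural network, but the input is the same group-count vector. A sketch with an invented group set and invented coefficients, purely to show the mechanics:

```python
# Minimal linear group-contribution sketch: property ~ intercept + sum of
# (group count * group coefficient). The groups and coefficients below are
# hypothetical placeholders, not a fitted cetane/octane model.
def group_contribution(counts, contrib, intercept=0.0):
    return intercept + sum(n * contrib[g] for g, n in counts.items())

contrib = {"CH3": 5.0, "CH2": 4.0, "OH": -10.0}   # hypothetical coefficients
n_octanol = {"CH3": 1, "CH2": 7, "OH": 1}         # group counts for n-octanol
print(group_contribution(n_octanol, contrib, intercept=10.0))  # -> 33.0
```

    A neural-network variant feeds the same count vector through a nonlinear model instead of the dot product, which is how the study extends applicability across structural complexity.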

  13. Comprehensive validation scheme for in situ fiber optics dissolution method for pharmaceutical drug product testing.

    PubMed

    Mirza, Tahseen; Liu, Qian Julie; Vivilecchia, Richard; Joshi, Yatindra

    2009-03-01

    There has been a growing interest during the past decade in the use of fiber optics dissolution testing. Use of this novel technology is mainly confined to research and development laboratories. It has not yet emerged as a tool for end product release testing despite its ability to generate in situ results and efficiency improvement. One potential reason may be the lack of clear validation guidelines that can be applied for the assessment of suitability of fiber optics. This article describes a comprehensive validation scheme and development of a reliable, robust, reproducible and cost-effective dissolution test using fiber optics technology. The test was successfully applied for characterizing the dissolution behavior of a 40-mg immediate-release tablet dosage form that is under development at Novartis Pharmaceuticals, East Hanover, New Jersey. The method was validated for the following parameters: linearity, precision, accuracy, specificity, and robustness. In particular, robustness was evaluated in terms of probe sampling depth and probe orientation. The in situ fiber optic method was found to be comparable to the existing manual sampling dissolution method. Finally, the fiber optic dissolution test was successfully performed by different operators on different days, to further enhance the validity of the method. The results demonstrate that the fiber optics technology can be successfully validated for end product dissolution/release testing. (c) 2008 Wiley-Liss, Inc. and the American Pharmacists Association

  14. Development and validation of a method for measuring depth of understanding in constructivist learning

    NASA Astrophysics Data System (ADS)

    Guarino, Lucia Falsetti

    A method for measuring depth of understanding of students in the middle-level science classroom was developed and validated. A common theme in the literature on constructivism in science education is that constructivist pedagogy, as opposed to objectivist pedagogy, results in a greater depth of understanding. Since few instruments measuring this construct exist at the present time, the development of such a tool to measure this construct was a significant contribution to the current body of assessment technologies in science education. The author's Depth of Understanding Assessment (DUA) evolved from a writing measure originally designed as a history assessment. The study involved 230 eighth grade science students studying a chemical change unit. The main research questions were: (1) What is the relationship between the DUA and each of the following independent variables: recall, application, and questioning modalities as measured by the Cognitive Preference Test; deep, surface, achieving, and deep-achieving approaches as measured by the Learning Process Questionnaire; achievement as measured by the Chemical Change Quiz, and teacher perception of student ability to conceptualize science content? (2) Is there a difference in depth of understanding, as measured by the DUA, between students who are taught by objectivist pedagogy and students who are taught by constructivist pedagogy favoring the constructivist group? (3) Is there a gender difference in depth of understanding as measured by the DUA? (4) Do students who are taught by constructivist pedagogy perceive their learning environment as more constructivist than students who are taught by objectivist pedagogy? Six out of nine hypothesis tests supported the validity of the DUA. The results of the qualitative component of this study which consisted of student interviews substantiated the quantitative results by providing additional information and insights. There was a significant difference in depth of

  15. Development and Testing of a Method for Validating Chemical Inactivation of Ebola Virus.

    PubMed

    Alfson, Kendra J; Griffiths, Anthony

    2018-03-13

Complete inactivation of infectious Ebola virus (EBOV) is required before a sample may be removed from a Biosafety Level 4 laboratory. The United States Federal Select Agent Program regulations require that procedures used to demonstrate chemical inactivation must be validated in-house to confirm complete inactivation. The objective of this study was to develop a method for validating chemical inactivation of EBOV and then demonstrate the effectiveness of several commonly used inactivation methods. Samples containing infectious EBOV (Zaire ebolavirus) in different matrices were treated, and the sample was diluted to limit the cytopathic effect of the inactivant. The presence of infectious virus was determined by assessing the cytopathic effect in Vero E6 cells. Crucially, this method did not result in a loss of infectivity in control samples, and we were able to detect less than five infectious units of EBOV (Zaire ebolavirus). We found that TRIzol LS reagent and RNA-Bee inactivated EBOV in serum; TRIzol LS reagent inactivated EBOV in clarified cell culture media; TRIzol reagent inactivated EBOV in tissue and infected Vero E6 cells; 10% neutral buffered formalin inactivated EBOV in tissue; and osmium tetroxide vapors inactivated EBOV on transmission electron microscopy grids. The methods described herein are easily performed and can be adapted to validate inactivation of viruses in various matrices and by various chemical methods.

  16. British isles lupus assessment group 2004 index is valid for assessment of disease activity in systemic lupus erythematosus

    PubMed Central

    Yee, Chee-Seng; Farewell, Vernon; Isenberg, David A; Rahman, Anisur; Teh, Lee-Suan; Griffiths, Bridget; Bruce, Ian N; Ahmad, Yasmeen; Prabu, Athiveeraramapandian; Akil, Mohammed; McHugh, Neil; D'Cruz, David; Khamashta, Munther A; Maddison, Peter; Gordon, Caroline

    2007-01-01

    Objective To determine the construct and criterion validity of the British Isles Lupus Assessment Group 2004 (BILAG-2004) index for assessing disease activity in systemic lupus erythematosus (SLE). Methods Patients with SLE were recruited into a multicenter cross-sectional study. Data on SLE disease activity (scores on the BILAG-2004 index, Classic BILAG index, and Systemic Lupus Erythematosus Disease Activity Index 2000 [SLEDAI-2K]), investigations, and therapy were collected. Overall BILAG-2004 and overall Classic BILAG scores were determined by the highest score achieved in any of the individual systems in the respective index. Erythrocyte sedimentation rates (ESRs), C3 levels, C4 levels, anti–double-stranded DNA (anti-dsDNA) levels, and SLEDAI-2K scores were used in the analysis of construct validity, and increase in therapy was used as the criterion for active disease in the analysis of criterion validity. Statistical analyses were performed using ordinal logistic regression for construct validity and logistic regression for criterion validity. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Results Of the 369 patients with SLE, 92.7% were women, 59.9% were white, 18.4% were Afro-Caribbean and 18.4% were South Asian. Their mean ± SD age was 41.6 ± 13.2 years and mean disease duration was 8.8 ± 7.7 years. More than 1 assessment was obtained on 88.6% of the patients, and a total of 1,510 assessments were obtained. Increasing overall scores on the BILAG-2004 index were associated with increasing ESRs, decreasing C3 levels, decreasing C4 levels, elevated anti-dsDNA levels, and increasing SLEDAI-2K scores (all P < 0.01). Increase in therapy was observed more frequently in patients with overall BILAG-2004 scores reflecting higher disease activity. Scores indicating active disease (overall BILAG-2004 scores of A and B) were significantly associated with increase in therapy (odds ratio [OR] 19.3, P
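    The four criterion-validity measures reported above (sensitivity, specificity, PPV, NPV) all come from one 2x2 table of index scores against the criterion (here, increase in therapy). A sketch with invented counts:

```python
# The four diagnostic accuracy measures used in the criterion-validity
# analysis above, computed from a 2x2 table. Counts are invented.
def diagnostics(tp, fp, fn, tn):
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),       # positive predictive value
            "npv": tn / (tn + fn)}       # negative predictive value

m = diagnostics(tp=80, fp=20, fn=10, tn=90)
print({k: round(v, 2) for k, v in m.items()})
```

    Unlike sensitivity and specificity, PPV and NPV shift with the prevalence of active disease in the assessed sample, which is why all four are reported together.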

  17. Development and Validation of High Performance Liquid Chromatography Method for Determination Atorvastatin in Tablet

    NASA Astrophysics Data System (ADS)

    Yugatama, A.; Rohmani, S.; Dewangga, A.

    2018-03-01

Atorvastatin is the primary choice for dyslipidemia treatment. Due to patent expiration of atorvastatin, the pharmaceutical industry makes copies of the drug. Therefore, development of methods for tablet quality tests involving atorvastatin concentration in tablets needs to be performed. The purpose of this research was to develop and validate a simple analytical method for atorvastatin tablets by HPLC. The HPLC system used in this experiment consisted of a Cosmosil C18 column (150 × 4.6 mm, 5 µm) as the stationary phase for reverse-phase chromatography, a mixture of methanol-water at pH 3 (80:20 v/v) as the mobile phase, a flow rate of 1 mL/min, and a UV detector at a wavelength of 245 nm. Method validation covered selectivity, linearity, accuracy, precision, limit of detection (LOD), and limit of quantitation (LOQ). The results of this study indicate that the developed method had good validation characteristics, including selectivity, linearity, accuracy, precision, LOD, and LOQ, for analysis of atorvastatin tablet content. The LOD and LOQ were 0.2 and 0.7 ng/mL, and the linearity range was 20 - 120 ng/mL.

  18. Space Suit Joint Torque Measurement Method Validation

    NASA Technical Reports Server (NTRS)

    Valish, Dana; Eversley, Karina

    2012-01-01

    In 2009 and early 2010, a test method was developed and performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits. This was done in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design met the requirements. However, because the original test was set up and conducted by a single test operator there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future development programs. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data was compared using graphical and statistical analysis; the results indicated a significant variance in values reported for a subset of the re-tested joints. Potential variables that could have affected the data were identified and a third round of testing was conducted in an attempt to eliminate and/or quantify the effects of these variables. The results of the third test effort will be used to determine whether or not the proposed joint torque methodology can be applied to future space suit development contracts.

  19. Development and validation of Australian aphasia rehabilitation best practice statements using the RAND/UCLA appropriateness method.

    PubMed

    Power, Emma; Thomas, Emma; Worrall, Linda; Rose, Miranda; Togher, Leanne; Nickels, Lyndsey; Hersh, Deborah; Godecke, Erin; O'Halloran, Robyn; Lamont, Sue; O'Connor, Claire; Clarke, Kim

    2015-07-02

    To develop and validate a national set of best practice statements for use in post-stroke aphasia rehabilitation. Literature review and statement validation using the RAND/UCLA Appropriateness Method (RAM). A national Community of Practice of over 250 speech pathologists, researchers, consumers and policymakers developed a framework consisting of eight areas of care in aphasia rehabilitation. This framework provided the structure for the development of a care pathway containing aphasia rehabilitation best practice statements. Nine speech pathologists with expertise in aphasia rehabilitation participated in two rounds of RAND/UCLA appropriateness ratings of the statements. Panellists consisted of researchers, service managers, clinicians and policymakers. Statements that achieved a high level of agreement and an overall median score of 7-9 on a nine-point scale were rated as 'appropriate'. 74 best practice statements were extracted from the literature and rated across eight areas of care (eg, receiving the right referrals, providing intervention). At the end of Round 1, 71 of the 74 statements were rated as appropriate, no statements were rated as inappropriate, and three statements were rated as uncertain. All 74 statements were then rated again in the face-to-face second round. 16 statements were added through splitting existing items or adding new statements. Seven statements were deleted leaving 83 statements. Agreement was reached for 82 of the final 83 statements. This national set of 82 best practice statements across eight care areas for the rehabilitation of people with aphasia is the first to be validated by an expert panel. These statements form a crucial component of the Australian Aphasia Rehabilitation Pathway (AARP) (http://www.aphasiapathway.com.au) and provide the basis for more consistent implementation of evidence-based practice in stroke rehabilitation. Published by the BMJ Publishing Group Limited. For permission to use (where not already
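    The RAND/UCLA scoring rule referenced above ("median 7-9 with agreement = appropriate") can be made concrete. One common operationalisation for a nine-member panel is sketched below; the disagreement rule used here (at least three ratings in 1-3 and at least three in 7-9) is a classic RAM convention, but exact definitions vary between applications, so treat this as illustrative:

```python
# One common operationalisation of RAND/UCLA appropriateness scoring for a
# nine-member panel. Band cut-offs are standard RAM; the disagreement rule
# is one conventional variant, not necessarily the one used in this study.
import statistics

def ram_rating(ratings):
    low = sum(r <= 3 for r in ratings)
    high = sum(r >= 7 for r in ratings)
    if low >= 3 and high >= 3:
        return "uncertain"            # disagreement overrides the median
    med = statistics.median(ratings)
    if med >= 7:
        return "appropriate"
    if med <= 3:
        return "inappropriate"
    return "uncertain"

print(ram_rating([8, 9, 7, 8, 7, 9, 8, 7, 8]))   # -> appropriate
print(ram_rating([1, 2, 9, 8, 2, 9, 8, 1, 9]))   # -> uncertain (disagreement)
```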

  20. A multivariate model and statistical method for validating tree grade lumber yield equations

    Treesearch

    Donald W. Seegrist

    1975-01-01

    Lumber yields within lumber grades can be described by a multivariate linear model. A method for validating lumber yield prediction equations when there are several tree grades is presented. The method is based on multivariate simultaneous test procedures.

  1. Selective testing strategies for diagnosing group A streptococcal infection in children with pharyngitis: a systematic review and prospective multicentre external validation study

    PubMed Central

    Cohen, Jérémie F.; Cohen, Robert; Levy, Corinne; Thollot, Franck; Benani, Mohamed; Bidet, Philippe; Chalumeau, Martin

    2015-01-01

    Background: Several clinical prediction rules for diagnosing group A streptococcal infection in children with pharyngitis are available. We aimed to compare the diagnostic accuracy of rules-based selective testing strategies in a prospective cohort of children with pharyngitis. Methods: We identified clinical prediction rules through a systematic search of MEDLINE and Embase (1975–2014), which we then validated in a prospective cohort involving French children who presented with pharyngitis during a 1-year period (2010–2011). We diagnosed infection with group A streptococcus using two throat swabs: one obtained for a rapid antigen detection test (StreptAtest, Dectrapharm) and one obtained for culture (reference standard). We validated rules-based selective testing strategies as follows: low risk of group A streptococcal infection, no further testing or antibiotic therapy needed; intermediate risk of infection, rapid antigen detection for all patients and antibiotic therapy for those with a positive test result; and high risk of infection, empiric antibiotic treatment. Results: We identified 8 clinical prediction rules, 6 of which could be prospectively validated. Sensitivity and specificity of rules-based selective testing strategies ranged from 66% (95% confidence interval [CI] 61–72) to 94% (95% CI 92–97) and from 40% (95% CI 35–45) to 88% (95% CI 85–91), respectively. Use of rapid antigen detection testing following the clinical prediction rule ranged from 24% (95% CI 21–27) to 86% (95% CI 84–89). None of the rules-based selective testing strategies achieved our diagnostic accuracy target (sensitivity and specificity > 85%). Interpretation: Rules-based selective testing strategies did not show sufficient diagnostic accuracy in this study population. The relevance of clinical prediction rules for determining which children with pharyngitis should undergo a rapid antigen detection test remains questionable. PMID:25487666
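    The three-tier strategy evaluated above is mechanical once a rule assigns a risk class: low risk gets no test and no antibiotics, intermediate risk is treated only on a positive rapid antigen detection test (RADT), high risk is treated empirically. A sketch that applies the strategy to invented patient records and scores it against culture as the reference standard:

```python
# Sketch of a rules-based selective testing strategy and its accuracy
# against culture. The patient records are invented; each record is
# (risk_class, radt_positive, culture_positive).
def treat(risk, radt_positive):
    if risk == "low":
        return False                 # no test, no antibiotics
    if risk == "high":
        return True                  # empiric treatment
    return radt_positive             # intermediate: test-guided

def strategy_accuracy(records):
    tp = fp = fn = tn = 0
    for risk, radt, culture in records:
        decision = treat(risk, radt)
        if culture:
            tp += decision
            fn += not decision
        else:
            fp += decision
            tn += not decision
    return tp / (tp + fn), tn / (tn + fp)   # sensitivity, specificity

records = (
    [("low", False, False)] * 40 + [("low", False, True)] * 5
    + [("intermediate", True, True)] * 30 + [("intermediate", False, True)] * 5
    + [("intermediate", False, False)] * 15
    + [("high", True, True)] * 10 + [("high", False, False)] * 5
)
sens, spec = strategy_accuracy(records)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

    Note how both error sources show up: culture-positive children triaged "low risk" are missed outright, and culture-negative "high risk" children are overtreated — the trade-off the study's 85%/85% target probes.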

  2. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    ERIC Educational Resources Information Center

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  3. Reliability and Validity of the New Tanaka B Intelligence Scale Scores: A Group Intelligence Test

    PubMed Central

    Uno, Yota; Mizukami, Hitomi; Ando, Masahiko; Yukihiro, Ryoji; Iwasaki, Yoko; Ozaki, Norio

    2014-01-01

Objective The present study evaluated the reliability and concurrent validity of the new Tanaka B Intelligence Scale, which is an intelligence test that can be administered to groups within a short period of time. Methods The new Tanaka B Intelligence Scale and Wechsler Intelligence Scale for Children-Third Edition were administered to 81 subjects (mean age ± SD 15.2±0.7 years) residing in a juvenile detention home; reliability was assessed using Cronbach's alpha coefficient, and concurrent validity was assessed using the one-way analysis of variance intraclass correlation coefficient. Moreover, receiver operating characteristic analysis for screening for individuals who have a deficit in intellectual function (an FIQ<70) was performed. In addition, stratum-specific likelihood ratios for detection of intellectual disability were calculated. Results The Cronbach's alpha for the new Tanaka B Intelligence Scale IQ (BIQ) was 0.86, and the intraclass correlation coefficient with FIQ was 0.83. Receiver operating characteristic analysis demonstrated an area under the curve of 0.89 (95% CI: 0.85–0.96). In addition, the stratum-specific likelihood ratio for the BIQ≤65 stratum was 13.8 (95% CI: 3.9–48.9), and the stratum-specific likelihood ratio for the BIQ≥76 stratum was 0.1 (95% CI: 0.03–0.4). Thus, intellectual disability could be ruled out or determined. Conclusion The present results demonstrated that the new Tanaka B Intelligence Scale score had high reliability and concurrent validity with the Wechsler Intelligence Scale for Children-Third Edition score. Moreover, the post-test probability for the BIQ could be calculated when screening for individuals who have a deficit in intellectual function. The new Tanaka B Intelligence Test is convenient and can be administered within a variety of settings. This enables evaluation of intellectual development even in settings where performing intelligence tests has previously been difficult. PMID:24940880
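    The reliability coefficient reported above (Cronbach's alpha of 0.86) is computed from a subjects-by-items score matrix as α = k/(k−1) · (1 − Σ item variances / variance of total score). A sketch on a small invented matrix:

```python
# Cronbach's alpha for internal-consistency reliability, the coefficient
# reported in the abstract above. The score matrix is invented.
import numpy as np

def cronbach_alpha(scores):
    """scores: rows = subjects, cols = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

scores = [[4, 5, 4, 4], [2, 3, 2, 3], [5, 5, 4, 5],
          [3, 3, 3, 2], [4, 4, 5, 4], [1, 2, 1, 2]]
print(round(cronbach_alpha(scores), 2))
```

    Values of 0.8 or higher are conventionally read as good internal consistency, which is the benchmark the 0.86 above clears.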

  4. Validity of two methods for estimation of vertical jump height.

    PubMed

    Dias, Jonathan Ache; Dal Pupo, Juliano; Reis, Diogo C; Borges, Lucas; Santos, Saray G; Moro, Antônio R P; Borges, Noé G

    2011-07-01

    The objectives of this study were (a) to determine the concurrent validity of the flight time (FT) and double integration of vertical reaction force (DIF) methods in the estimation of vertical jump height with the video method (VID) as reference; (b) to verify the degree of agreement among the 3 methods; (c) to propose regression equations to predict the jump height using the FT and DIF. Twenty healthy male and female nonathlete college students participated in this study. The experiment involved positioning a contact mat (CTM) on the force platform (FP), with a video camera 3 m from the FP and perpendicular to the sagittal plane of the subject being assessed. Each participant performed 15 countermovement jumps with 60-second intervals between the trials. Significant differences were found between the jump height obtained by VID and the results with FT (p ≤ 0.01) and DIF (p ≤ 0.01), showing that the methods are not valid. Additionally, the DIF showed a greater degree of agreement with the reference method than the FT did, and both presented a systematic error. From the linear regression analysis, prediction equations with a high degree of linearity between the methods were determined: VID vs. DIF (R = 0.988) and VID vs. FT (R = 0.979). Therefore, the suggested prediction equations may allow coaches to measure the vertical jump performance of athletes by the FT and DIF, using a CTM or an FP, which represent more practical and viable approaches in the sports field; comparisons can then be made with the results of other athletes evaluated by VID.
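
    The study's approach of calibrating the flight-time estimate against the video reference with a linear regression can be sketched as follows; the data here are synthetic stand-ins for illustration, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
vid = rng.uniform(0.20, 0.50, 20)                    # reference heights (m), synthetic
ft = 0.95 * vid + 0.02 + rng.normal(0, 0.005, 20)    # FT heights with a systematic error

# Fit a prediction equation VID ≈ slope * FT + intercept, as the study proposes.
slope, intercept = np.polyfit(ft, vid, 1)
predicted = slope * ft + intercept

r = np.corrcoef(predicted, vid)[0, 1]
print(f"calibration r = {r:.3f}")   # high linearity, analogous to the reported R = 0.979
```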

  5. Verification, Validation, and Solution Quality in Computational Physics: CFD Methods Applied to Ice Sheet Physics

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    2005-01-01

    Procedures and methods for verification of coding algebra and for validation of models and calculations used in the aerospace computational fluid dynamics (CFD) community would be efficacious if used by the glacier dynamics modeling community. This paper presents some of those methods and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modeling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modeling community, and establishes a context for these within an overall solution quality assessment. Finally, a vision of a new information architecture and interactive scientific interface is introduced and advocated.

  6. Validation of a continuous flow method for the determination of soluble iron in atmospheric dust and volcanic ash.

    PubMed

    Simonella, Lucio E; Gaiero, Diego M; Palomeque, Miriam E

    2014-10-01

    Iron is an essential micronutrient for phytoplankton growth and is supplied to remote areas of the ocean mainly through atmospheric dust/ash. The amount of soluble Fe in dust/ash is a major source of uncertainty in modeling Fe dissolution and deposition to the surface ocean. Currently in the literature there exist almost as many methods to estimate fractional solubility as researchers in the field, making it difficult to compare results between research groups. An additional important constraint in evaluating Fe solubility in atmospheric dust is the limited mass of sample, usually only available in microgram to milligram amounts. A continuous flow (CF) method that can be run with a low mass of sediment (<10 mg) was tested against a standard method that requires about 1 g of sediment (the BCR method of the European Union). To validate the CF method, we ran both methods on South American surface sediments and deposited volcanic ash. Both materials are easily eroded by wind and are representative of the atmospheric dust/ash exported from this region. The uncertainty of the CF method was obtained from seven replicates of one surface sediment sample and shows very good reproducibility: the replicates, run on different days over a span of two years, gave uncertainties between 8 and 22% (vs. 6-19% for the standard method). Compared to other standardized methods, the CF method allows studies of the dissolution kinetics of metals and consumes less reagent and time (<3 h). The method validated here is suggested as a standardized method for Fe solubility studies on dust/ash. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. STATISTICAL VALIDATION OF SULFATE QUANTIFICATION METHODS USED FOR ANALYSIS OF ACID MINE DRAINAGE

    EPA Science Inventory

    Turbidimetric method (TM), ion chromatography (IC) and inductively coupled plasma atomic emission spectrometry (ICP-AES) with and without acid digestion have been compared and validated for the determination of sulfate in mining wastewater. Analytical methods were chosen to compa...

  8. Validation of Yoon's Critical Thinking Disposition Instrument.

    PubMed

    Shin, Hyunsook; Park, Chang Gi; Kim, Hyojin

    2015-12-01

    The lack of reliable and valid evaluation tools targeting Korean nursing students' critical thinking (CT) abilities has been reported as one of the barriers to instructing and evaluating students in undergraduate programs. Yoon's Critical Thinking Disposition (YCTD) instrument was developed for Korean nursing students, but few studies have assessed its validity. This study aimed to validate the YCTD; specifically, the YCTD was assessed to identify its cross-sectional and longitudinal measurement invariance. This was a validation study in which a cross-sectional and longitudinal (prenursing and postnursing practicum) survey was used to validate the YCTD using 345 nursing students at three universities in Seoul, Korea. The participants' CT abilities were assessed using the YCTD before and after completing an established pediatric nursing practicum. The validity of the YCTD was estimated, and then a group invariance test using multigroup confirmatory factor analysis was performed to confirm measurement compatibility across groups. A test of the seven-factor model showed that the YCTD demonstrated good construct validity. Multigroup confirmatory factor analysis findings for the measurement invariance suggested that this model structure demonstrated strong invariance between groups (i.e., configural, factor loading, and intercept combined) but weak invariance within a group (i.e., configural and factor loading combined). In general, traditional methods for assessing instrument validity have been less than thorough. In this study, multigroup confirmatory factor analysis using cross-sectional and longitudinal measurement data allowed validation of the YCTD. This study concluded that the YCTD can be used for evaluating Korean nursing students' CT abilities. Copyright © 2015. Published by Elsevier B.V.

  9. Validation of a partial coherence interferometry method for estimating retinal shape

    PubMed Central

    Verkicharla, Pavan K.; Suheimat, Marwan; Pope, James M.; Sepehrband, Farshid; Mathur, Ankit; Schmid, Katrina L.; Atchison, David A.

    2015-01-01

    To validate a simple partial coherence interferometry (PCI) based retinal shape method, estimates of retinal shape were determined in 60 young adults using off-axis PCI, with three stages of modeling using variants of the Le Grand model eye, and magnetic resonance imaging (MRI). Stages 1 and 2 involved a basic model eye without and with surface ray deviation, respectively, and Stage 3 used a model with individual ocular biometry and ray deviation at surfaces. Considering the theoretical uncertainty of MRI (12-14%), the results of the study indicate good agreement between MRI and all three stages of PCI modeling, with <4% and <7% differences in retinal shape along the horizontal and vertical meridians, respectively. Stages 2 and 3 gave slightly different retinal coordinates than Stage 1, and we recommend the intermediate Stage 2 as providing a simple and valid method of determining retinal shape from PCI data. PMID:26417496

  10. Validation of a partial coherence interferometry method for estimating retinal shape.

    PubMed

    Verkicharla, Pavan K; Suheimat, Marwan; Pope, James M; Sepehrband, Farshid; Mathur, Ankit; Schmid, Katrina L; Atchison, David A

    2015-09-01

    To validate a simple partial coherence interferometry (PCI) based retinal shape method, estimates of retinal shape were determined in 60 young adults using off-axis PCI, with three stages of modeling using variants of the Le Grand model eye, and magnetic resonance imaging (MRI). Stages 1 and 2 involved a basic model eye without and with surface ray deviation, respectively, and Stage 3 used a model with individual ocular biometry and ray deviation at surfaces. Considering the theoretical uncertainty of MRI (12-14%), the results of the study indicate good agreement between MRI and all three stages of PCI modeling, with <4% and <7% differences in retinal shape along the horizontal and vertical meridians, respectively. Stages 2 and 3 gave slightly different retinal coordinates than Stage 1, and we recommend the intermediate Stage 2 as providing a simple and valid method of determining retinal shape from PCI data.

  11. Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…
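
    Double cross-validation, as described above, can be sketched in a few lines: fit the regression on each random half of the sample, score each equation on the opposite half, and treat the cross-correlations as a stability estimate. The data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
X = rng.normal(size=(n, 3))                      # three synthetic predictors
y = X @ np.array([1.5, -0.8, 0.3]) + rng.normal(0, 1.0, n)

idx = rng.permutation(n)
a, b = idx[: n // 2], idx[n // 2 :]              # random halves of the sample

def fit(X, y):
    # ordinary least squares with an intercept column
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta

def predict(X, beta):
    return np.column_stack([np.ones(len(X)), X]) @ beta

beta_a, beta_b = fit(X[a], y[a]), fit(X[b], y[b])
# Cross-apply: the equation derived on half A is scored on half B, and vice versa.
r_ab = np.corrcoef(predict(X[b], beta_a), y[b])[0, 1]
r_ba = np.corrcoef(predict(X[a], beta_b), y[a])[0, 1]
print(f"cross-validated correlations: {r_ab:.3f}, {r_ba:.3f}")
```

    Large shrinkage of these correlations relative to the within-half fit would signal unstable results.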

  12. Reliability and known-group validity of the Arabic version of the 8-item Morisky Medication Adherence Scale among type 2 diabetes mellitus patients.

    PubMed

    Ashur, S T; Shamsuddin, K; Shah, S A; Bosseri, S; Morisky, D E

    2015-12-13

    No validation study has previously been made for the Arabic version of the 8-item Morisky Medication Adherence Scale (MMAS-8(©)) as a measure of medication adherence in diabetes. This 2013 study tested the reliability and validity of the Arabic MMAS-8 for type 2 diabetes mellitus patients attending a referral centre in Tripoli, Libya. A convenience sample of 103 patients self-completed the questionnaire. Reliability was tested using Cronbach's alpha, average inter-item correlation, and the Spearman-Brown coefficient. Known-group validity was tested by comparing MMAS-8 scores of patients grouped by glycaemic control. The Arabic version showed adequate internal consistency (α = 0.70) and moderate split-half reliability (r = 0.65). Known-group validity was supported, as a significant association was found between medication adherence and glycaemic control, with a moderate effect size (ϕc = 0.34). The Arabic version displayed good psychometric properties and could support diabetes research and practice in Arab countries.
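
    The reliability statistics used here are short computations. A sketch, assuming an (n respondents × k items) matrix of item scores:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def spearman_brown(r_half):
    """Step up a half-test correlation to full-test reliability."""
    return 2 * r_half / (1 + r_half)

# For example, a half-test correlation of 0.65 steps up to about 0.79.
print(round(spearman_brown(0.65), 2))  # → 0.79
```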

  13. A colorimetric micro method for the determination of formyl groups

    PubMed Central

    Lakshmi, S. Usha; Ramachandran, L. K.

    1969-01-01

    The characteristic purple colour formed by N-formyl-N′-2,4-dinitrophenyl-hydrazine in the presence of piperidine and acetone was made the basis of a new quantitative method for the determination of formyl groups. Samples containing N-formyl groups (up to 0·4μmole) are hydrazinolysed at 97–98° for 1hr. and are dinitrophenylated after the removal of excess of hydrazine. Interference from 2,4-dinitrophenylhydrazine is eliminated by subjecting the dinitrophenylated samples to chromatography on an alumina column. Interference arising from the formation of N-acetyl-N′-2,4-dinitrophenylhydrazine, when determining formyl groups in samples containing acetyl, can be avoided by a paper-chromatographic separation before analysis. A standard procedure is described. The method gives satisfactory results when applied to N-formyl-amino acids. Gramicidin, when analysed by this method, was found to contain 0·89 mole of formyl group/mole for a molecular weight of 1880. The method indicated the absence of formyl groups from lysozyme, a protein known not to contain such groups. Generally, the analytical values obtained by the method are within 100±4% of theory. PMID:5774469

  14. Derivation and validation of simple anthropometric equations to predict adipose tissue mass and total fat mass with MRI as the reference method

    PubMed Central

    Al-Gindan, Yasmin Y.; Hankey, Catherine R.; Govan, Lindsay; Gallagher, Dympna; Heymsfield, Steven B.; Lean, Michael E. J.

    2017-01-01

    The reference organ-level body composition measurement method is MRI. Practical estimations of total adipose tissue mass (TATM), total adipose tissue fat mass (TATFM) and total body fat are valuable for epidemiology, but validated prediction equations based on MRI are not currently available. We aimed to derive and validate new anthropometric equations to estimate MRI-measured TATM/TATFM/total body fat and compare them with existing prediction equations using older methods. The derivation sample included 416 participants (222 women), aged between 18 and 88 years with BMI between 15·9 and 40·8 (kg/m2). The validation sample included 204 participants (110 women), aged between 18 and 86 years with BMI between 15·7 and 36·4 (kg/m2). Both samples included mixed ethnic/racial groups. All the participants underwent whole-body MRI to quantify TATM (dependent variable) and anthropometry (independent variables). Prediction equations developed using stepwise multiple regression were further investigated for agreement and bias before validation in separate data sets. Simplest equations with optimal R2 and Bland–Altman plots demonstrated good agreement without bias in the validation analyses: men: TATM (kg) = 0·198 weight (kg) + 0·478 waist (cm) − 0·147 height (cm) − 12·8 (validation: R2 0·79, CV = 20 %, standard error of the estimate (SEE)=3·8 kg) and women: TATM (kg)=0·789 weight (kg) + 0·0786 age (years) − 0·342 height (cm) + 24·5 (validation: R2 0·84, CV = 13 %, SEE = 3·0 kg). Published anthropometric prediction equations, based on MRI and computed tomographic scans, correlated strongly with MRI-measured TATM: (R2 0·70 – 0·82). Estimated TATFM correlated well with published prediction equations for total body fat based on underwater weighing (R2 0·70–0·80), with mean bias of 2·5–4·9 kg, correctable with log-transformation in most equations. In conclusion, new equations, using simple anthropometric measurements, estimated MRI-measured TATM
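
    The two validated prediction equations quoted above are simple to apply. A sketch, with hypothetical input values for illustration:

```python
def tatm_men(weight_kg, waist_cm, height_cm):
    """Total adipose tissue mass (kg) for men, per the derived equation."""
    return 0.198 * weight_kg + 0.478 * waist_cm - 0.147 * height_cm - 12.8

def tatm_women(weight_kg, age_years, height_cm):
    """Total adipose tissue mass (kg) for women, per the derived equation."""
    return 0.789 * weight_kg + 0.0786 * age_years - 0.342 * height_cm + 24.5

# Hypothetical examples: an 80 kg man with a 95 cm waist, 178 cm tall,
# and a 65 kg, 30-year-old woman, 165 cm tall.
print(round(tatm_men(80, 95, 178), 1))    # → 22.3
print(round(tatm_women(65, 30, 165), 1))  # → 21.7
```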

  15. Development and validity of a method for the evaluation of printed education material

    PubMed Central

    Castro, Mauro Silveira; Pilger, Diogo; Fuchs, Flávio Danni; Ferreira, Maria Beatriz Cardoso

    Objectives To develop and study the validity of an instrument for the evaluation of Printed Education Materials (PEM); to evaluate the use of acceptability indices; to identify possible influences of professional aspects. Methods An instrument for PEM evaluation was developed in three steps: domain identification, item generation, and instrument design. An easy-to-read PEM was developed for the education of patients with systemic hypertension and its treatment with hydrochlorothiazide. Construct validity was measured based on previously established errors purposively introduced into the PEM, which served as extreme groups. An acceptability index was applied, taking into account the proportion of professionals required to approve each item. Participants were 10 physicians (9 men) and 5 nurses (all women). Results Many professionals identified the intentional errors of a crude character. Few participants identified errors that needed more careful evaluation, and no one detected the intentional error that required literature analysis. Physicians considered 95.8% of the items of the PEM acceptable, and nurses 29.2%. The differences between the scores were statistically significant for 27% of the items. In the overall evaluation, 66.6% of items were considered acceptable. The analysis of each item revealed a behavioral pattern for each professional group. Conclusions Instruments for the evaluation of printed education materials are required and may improve the quality of the PEM available to patients. The acceptability indices are not always totally correct, nor do they always represent high-quality information. The professional experience, the practice pattern, and perhaps the gender of the reviewers may influence their evaluation. An analysis of the PEM by professionals in communication and drug information, and by patients, should be carried out to improve the quality of the proposed material. PMID:25214924

  16. Razalas' Grouping Method and Mathematics Achievement

    ERIC Educational Resources Information Center

    Salazar, Douglas A.

    2015-01-01

    This study aimed to raise the achievement level of students in Integral Calculus using Direct Instruction with Razalas' Method of Grouping. The study employed qualitative and quantitative analysis relative to data generated by the Achievement Test and Math journal with follow-up interview. Within the framework of the limitations of the study, the…

  17. Validation of verbal autopsy methods using hospital medical records: a case study in Vietnam.

    PubMed

    Tran, Hong Thi; Nguyen, Hoa Phuong; Walker, Sue M; Hill, Peter S; Rao, Chalapati

    2018-05-18

    Information on causes of death (COD) is crucial for measuring the health outcomes of populations and progress towards the Sustainable Development Goals. In many countries such as Vietnam, where the civil registration and vital statistics (CRVS) system is dysfunctional, information on vital events will continue to rely on verbal autopsy (VA) methods. This study assesses the validity of VA methods used in Vietnam and provides recommendations for implementing VA validation studies there. This validation study was conducted on a sample of 670 deaths from a recent VA study in Quang Ninh province. The study covered 116 cases from this sample which met three inclusion criteria: (a) the death occurred within 30 days of discharge after last hospitalisation; (b) medical records (MRs) for the deceased were available from the respective hospitals; and (c) the medical record mentioned that the patient was terminally ill at discharge. For each death, the underlying cause of death (UCOD) identified from MRs was compared to the UCOD from VA. The validity of VA diagnoses for major causes of death was measured using sensitivity, specificity and positive predictive value (PPV). The sensitivity of VA was at least 75% in identifying some leading CODs such as stroke, road traffic accidents and several site-specific cancers. However, sensitivity was less than 50% for other important causes including ischemic heart disease, chronic obstructive pulmonary disease, and diabetes. Overall, there was 57% agreement between UCOD from VA and MR, which increased to 76% when multiple causes from VA were compared to UCOD from MR. Our findings suggest that VA is a valid method to ascertain UCOD in contexts such as Vietnam. Furthermore, within cultural contexts in which patients prefer to die at home instead of a healthcare facility, using the available MRs as the gold standard may be meaningful to the extent that recall bias from the interval between last hospital discharge and death
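
    The validity measures used in this study come from a 2×2 table of VA diagnoses against the medical-record reference. A minimal sketch with hypothetical counts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value
    from a 2x2 table (index test vs. gold standard)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# Hypothetical counts for one cause-of-death category (not from the study):
sens, spec, ppv = diagnostic_metrics(tp=30, fp=10, fn=10, tn=66)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f}")
# → sensitivity=0.75 specificity=0.87 PPV=0.75
```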

  18. Adaptation and validation in Spanish of the Group Environment Questionnaire (GEQ) with professional football players.

    PubMed

    Leo, Francisco Miguel; González-Ponce, Inmaculada; Sánchez-Oliva, David; Pulido, Juan José; García-Calvo, Tomás

    2015-01-01

    This investigation presents two studies with the goal of adapting and validating a short version of the Group Environment Questionnaire in the Spanish sport context with professional players. Study 1 used a sample of 377 male soccer players aged between 18 and 39 years ( M = 24.51, SD = 3.73), in a preliminary study using exploratory factor analysis. Study 2 used a sample of 604 professional male and female athletes, ages between 15 and 38 years ( M = 24.34, SD = 4.03). The data analyzed were collected at three moments of the season. For each measurement, we developed seven first- and second-order structures that were analyzed with confirmatory factor analysis. Study 1 indicated appropriate factorial validity (> .60) and internal consistency (> .70), with only Item 3 presenting a low factor loading (.11), so its drafting was modified in the next study. Study 2 revealed that the Spanish version of the GEQ has high levels of internal consistency (> .70) and acceptable fit index values in its original four first-order factor structure in all three measurements ( χ²/df = 4.39, CFI = .95, IFI = .95, RMSEA = .07, SRMR = .04, AIC = 271.09). Discriminant validity (from r = .45 to r = .72) and concurrent validity (from r = .21 to r = .60) also presented appropriate values. Lastly, we conducted analysis of invariance, confirming that the models established in the different measurements were invariant. The short 12-item adaptation of the GEQ to Spanish is a valid and reliable instrument to measure team cohesion in professional male and female soccer players.

  19. Validation of the new diagnosis grouping system for pediatric emergency department visits using the International Classification of Diseases, 10th Revision.

    PubMed

    Lee, Jin Hee; Hong, Ki Jeong; Kim, Do Kyun; Kwak, Young Ho; Jang, Hye Young; Kim, Hahn Bom; Noh, Hyun; Park, Jungho; Song, Bongkyu; Jung, Jae Yun

    2013-12-01

    A clinically sensible diagnosis grouping system (DGS) is needed for describing pediatric emergency diagnoses for research, medical resource preparedness, and making national policy for pediatric emergency medical care. The Pediatric Emergency Care Applied Research Network (PECARN) developed the DGS successfully. We developed a modified PECARN DGS based on the different pediatric population of South Korea and validated the system to obtain accurate and comparable epidemiologic data on the pediatric emergent conditions of the selected population. The data source used to develop and validate the modified PECARN DGS was the National Emergency Department Information System of South Korea, which is coded with the International Classification of Diseases, 10th Revision (ICD-10) code system. To develop the modified DGS based on ICD-10 codes, we matched the selected ICD-10 codes with those of the PECARN DGS by the General Equivalence Mappings (GEMs). After converting ICD-10 codes to ICD-9 codes by GEMs, we matched ICD-9 codes into PECARN DGS categories using the matrix developed by the PECARN group. Lastly, we conducted an expert panel survey using the Delphi method for the remaining diagnosis codes that were not matched. A total of 1879 ICD-10 codes were used in development of the modified DGS. After 1078 (57.4%) of the 1879 ICD-10 codes were assigned to the modified DGS by the GEM and PECARN conversion tools, investigators assigned each of the remaining 801 codes (42.6%) to DGS subgroups by 2 rounds of electronic Delphi surveys. We assigned the remaining 29 codes (4%) to the modified DGS at the second expert consensus meeting. The modified DGS accounts for 98.7% and 95.2% of diagnoses in the 2008 and 2009 National Emergency Department Information System data sets. The modified DGS also exhibited strong construct validity using the concepts of age, sex, site of care, and seasons, and reflected the 2009 outbreak of H1N1 influenza in Korea. We developed and validated clinically

  20. Experimental validation of boundary element methods for noise prediction

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Oswald, Fred B.

    1992-01-01

    Experimental validation of methods to predict radiated noise is presented. A combined finite element and boundary element model was used to predict the vibration and noise of a rectangular box excited by a mechanical shaker. The predicted noise was compared to sound power measured by the acoustic intensity method. Inaccuracies in the finite element model shifted the resonance frequencies by about 5 percent. The predicted and measured sound power levels agree within about 2.5 dB. In a second experiment, measured vibration data was used with a boundary element model to predict noise radiation from the top of an operating gearbox. The predicted and measured sound power for the gearbox agree within about 3 dB.

  1. Reference Proteome Extracts for Mass Spec Instrument Performance Validation and Method Development

    PubMed Central

    Rosenblatt, Mike; Urh, Marjeta; Saveliev, Sergei

    2014-01-01

    Biological samples of high complexity are required to test protein mass spec sample preparation procedures and validate mass spec instrument performance. Total cell protein extracts provide the needed sample complexity. However, to be compatible with mass spec applications, such extracts should meet a number of design requirements: (1) compatibility with LC/MS (free of detergents, etc.); (2) high protein integrity (minimal level of protein degradation and non-biological PTMs); (3) compatibility with common sample preparation methods such as proteolysis, PTM enrichment, and mass-tag labeling; and (4) lot-to-lot reproducibility. Here we describe total protein extracts from yeast and human cells that meet the above criteria. Two extract formats have been developed: (1) intact protein extracts, with primary use for sample preparation method development and optimization; and (2) pre-digested extracts (peptides), with primary use for instrument validation and performance monitoring.

  2. Glossary of reference terms for alternative test methods and their validation.

    PubMed

    Ferrario, Daniele; Brustio, Roberta; Hartung, Thomas

    2014-01-01

    This glossary was developed to provide technical references to support work in the field of alternatives to animal testing. It was compiled from various existing reference documents coming from different sources and is meant to be a point of reference on alternatives to animal testing. Given the ever-increasing number of alternative test methods and approaches developed over the last decades, a combination, revision, and harmonization of earlier published collections of terms used in the validation of such methods is required. The need to update previous glossary efforts came from the acknowledgement that new words have emerged with the development of new approaches, while others have become obsolete, and the meaning of some terms has partially changed over time. With this glossary we intend to provide guidance on issues related to the validation of new or updated testing methods consistent with current approaches. Moreover, because of new developments and technologies, a glossary needs to be a living, constantly updated document. An Internet-based version of this compilation may be found at http://altweb.jhsph.edu/, allowing the addition of new material.

  3. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    PubMed

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ: it only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.

  4. Validation of Methods to Assess the Immunoglobulin Gene Repertoire in Tissues Obtained from Mice on the International Space Station.

    PubMed

    Rettig, Trisha A; Ward, Claire; Pecaut, Michael J; Chapes, Stephen K

    2017-07-01

    Spaceflight is known to affect immune cell populations. In particular, splenic B cell numbers decrease during spaceflight and in ground-based physiological models. Although antibody isotype changes have been assessed during and after spaceflight, an extensive characterization of the impact of spaceflight on antibody composition has not been conducted in mice. Next Generation Sequencing and bioinformatic tools are now available to assess antibody repertoires: we can now identify immunoglobulin gene-segment usage, junctional regions, and modifications that contribute to specificity and diversity. Due to limitations on the International Space Station, alternate sample collection and storage methods must be employed. Our group compared Illumina MiSeq sequencing data from multiple sample preparation methods in normal C57Bl/6J mice to validate that sample preparation and storage would not bias the outcome of antibody repertoire characterization. In this report, we also compared the effects of sequencing techniques and a bioinformatic workflow on the data output when assessing IgH and Igκ variable gene usage. This included assessments of our bioinformatic workflow on Illumina HiSeq and MiSeq datasets; the workflow is specifically designed to reduce bias, capture the most information from Ig sequences, and produce a data set that provides other data mining options. We validated our workflow by comparing our normal mouse MiSeq data to existing murine antibody repertoire studies, validating it for future antibody repertoire studies.

  5. Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI

    ERIC Educational Resources Information Center

    Forer, Barry; Zumbo, Bruno D.

    2011-01-01

    The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…

  6. AOAC Official Method℠ Matrix Extension Validation Study of Assurance GDS™ for the Detection of Salmonella in Selected Spices.

    PubMed

    Feldsine, Philip; Kaur, Mandeep; Shah, Khyati; Immerman, Amy; Jucker, Markus; Lienau, Andrew

    2015-01-01

    Assurance GDS™ for Salmonella Tq has been validated according to the AOAC INTERNATIONAL Methods Committee Guidelines for Validation of Microbiological Methods for Food and Environmental Surfaces for the detection of Salmonella in selected foods and on environmental surfaces (Official Method of Analysis℠ 2009.03, Performance Tested Method℠ No. 050602). The method also completed AFNOR validation (following the ISO 16140 standard) compared to the reference method EN ISO 6579. For AFNOR, GDS was given a scope covering all human food, animal feed stuff, and environmental surfaces (Certificate No. TRA02/12-01/09). Results showed that Assurance GDS for Salmonella (GDS) has high sensitivity and is equivalent to the reference culture methods for the detection of motile and non-motile Salmonella. As part of the aforementioned validations, inclusivity and exclusivity studies, stability studies, and ruggedness studies were also conducted. Assurance GDS has 100% inclusivity and exclusivity among the 100 Salmonella serovars and 35 non-Salmonella organisms analyzed. To add to the scope of the Assurance GDS for Salmonella method, a matrix extension study was conducted, following the AOAC guidelines, to validate the application of the method to selected spices, specifically curry powder, cumin powder, and chili powder, for the detection of Salmonella.

  7. Validated spectrophotometric methods for determination of some oral hypoglycemic drugs.

    PubMed

    Farouk, M; Abdel-Satar, O; Abdel-Aziz, O; Shaaban, M

    2011-02-01

Four accurate, precise, rapid, reproducible, and simple spectrophotometric methods were validated for determination of repaglinide (RPG), pioglitazone hydrochloride (PGL) and rosiglitazone maleate (RGL). The first two methods were based on the formation of a charge-transfer purple-colored complex of chloranilic acid with RPG and RGL, with molar absorptivities of 1.23 × 10³ and 8.67 × 10² l·mol⁻¹·cm⁻¹ and Sandell's sensitivities of 0.367 and 0.412 μg·cm⁻², respectively, and an ion-pair yellow-colored complex of bromophenol blue with RPG, PGL and RGL, with molar absorptivities of 8.86 × 10³, 6.95 × 10³, and 7.06 × 10³ l·mol⁻¹·cm⁻¹, respectively, and a Sandell's sensitivity of 0.051 μg·cm⁻² for all ion-pair complexes. The influence of different parameters on color formation was studied to determine optimum conditions for the visible spectrophotometric methods. The other spectrophotometric methods were adopted for determination of the studied drugs in the presence of their acid-, alkaline- and oxidative-degradates by computing derivative and pH-induced difference spectrophotometry, as stability-indicating techniques. All the proposed methods were validated according to the International Conference on Harmonization guidelines and successfully applied for determination of the studied drugs in pure form and in pharmaceutical preparations, with good extraction recovery ranges of 98.7-101.4%, 98.2-101.3%, and 99.9-101.4% for RPG, PGL, and RGL, respectively. Relative standard deviations did not exceed 1.6%, indicating that the proposed methods have good repeatability and reproducibility. All the obtained results were statistically compared to the official method used for RPG analysis and the manufacturers' methods used for PGL and RGL analysis, respectively, where no significant differences were found.
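Sandell's sensitivity is the molecular weight divided by the molar absorptivity (μg·cm⁻² per 0.001 absorbance unit), so the reported figures can be cross-checked; a minimal sketch, assuming a reference molecular weight for repaglinide that is not given in the abstract:

```python
# Sketch: cross-checking Sandell's sensitivity against molar absorptivity.
# S (ug/cm^2 per 0.001 A) = molecular weight / molar absorptivity.
# The molecular weight below is an assumed reference value, not from the abstract.

def sandell_sensitivity(mol_weight_g_per_mol: float, molar_absorptivity: float) -> float:
    """Return Sandell's sensitivity in ug cm^-2 (per 0.001 absorbance unit)."""
    return mol_weight_g_per_mol / molar_absorptivity

# Repaglinide (assumed MW ~452.6 g/mol), chloranilic acid complex: eps = 1.23e3
s_rpg = sandell_sensitivity(452.6, 1.23e3)
print(round(s_rpg, 3))  # 0.368, close to the reported 0.367
```

The agreement with the reported 0.367 μg·cm⁻² suggests the abstract's figures are internally consistent.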

  8. Experimental Validation of Normalized Uniform Load Surface Curvature Method for Damage Localization

    PubMed Central

    Jung, Ho-Yeon; Sung, Seung-Hoon; Jung, Hyung-Jo

    2015-01-01

In this study, we experimentally validated the normalized uniform load surface (NULS) curvature method, which has been developed recently to assess damage localization in beam-type structures. The normalization technique allows for the accurate assessment of damage localization with greater sensitivity irrespective of the damage location. In this study, damage to a simply supported beam was numerically and experimentally investigated on the basis of the changes in the NULS curvatures, which were estimated from the modal flexibility matrices obtained from the acceleration responses under an ambient excitation. Two damage scenarios were considered, a single-damage case and a multiple-damage case, each created by reducing the bending stiffness (EI) of the affected element(s). Numerical simulations were performed using MATLAB as a preliminary step. During the validation experiments, a series of tests were performed. It was found that the damage locations could be identified successfully without any false-positive or false-negative detections using the proposed method. For comparison, the damage detection performance was compared with that of two other well-known methods based on the modal flexibility matrix, namely, the uniform load surface (ULS) method and the ULS curvature method. It was confirmed that the proposed method is more effective for investigating the damage locations of simply supported beams than the two conventional methods in terms of sensitivity to damage under measurement noise. PMID:26501286
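The flexibility-based steps behind ULS-type methods can be sketched in a few lines; this is a toy illustration (the flexibility matrix is invented, not from the study, and the NULS normalization step itself is omitted):

```python
# Sketch (assumed simplification): uniform load surface (ULS) from a modal
# flexibility matrix, and its curvature by central differences, as used in
# flexibility-based damage localization. The 5-DOF matrix is a toy example.

def uniform_load_surface(flexibility):
    """Deflection at each DOF under a unit load at every DOF: row sums of F."""
    return [sum(row) for row in flexibility]

def curvature(u, h=1.0):
    """Central-difference curvature; the two endpoints are omitted."""
    return [(u[i - 1] - 2 * u[i] + u[i + 1]) / h**2 for i in range(1, len(u) - 1)]

# Toy symmetric flexibility matrix for a 5-DOF beam-like model (illustrative only).
F = [
    [4.0, 3.0, 2.0, 1.0, 0.5],
    [3.0, 6.0, 4.0, 2.0, 1.0],
    [2.0, 4.0, 7.0, 3.0, 1.5],
    [1.0, 2.0, 3.0, 5.0, 2.0],
    [0.5, 1.0, 1.5, 2.0, 3.0],
]
uls = uniform_load_surface(F)
print(curvature(uls))  # [-4.0, -6.0, -0.5]
```

In the actual method, localized damage shows up as a local anomaly in these curvature values between the intact and damaged states.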

  9. Vacuum decay container closure integrity leak test method development and validation for a lyophilized product-package system.

    PubMed

    Patel, Jayshree; Mulhall, Brian; Wolf, Heinz; Klohr, Steven; Guazzo, Dana Morton

    2011-01-01

A leak test performed according to ASTM F2338-09 Standard Test Method for Nondestructive Detection of Leaks in Packages by Vacuum Decay Method was developed and validated for container-closure integrity verification of a lyophilized product in a parenteral vial package system. This nondestructive leak test method is intended for use in manufacturing as an in-process package integrity check, and for testing product stored on stability in lieu of sterility tests. Method development and optimization challenge studies incorporated artificially defective packages representing a range of glass vial wall and sealing surface defects, as well as various elastomeric stopper defects. Method validation required 3 days of random-order replicate testing of a test sample population of negative-control, no-defect packages and positive-control, with-defect packages. Positive-control packages were prepared using vials each with a single hole laser-drilled through the glass vial wall. Hole creation and hole size certification was performed by Lenox Laser. Validation study results successfully demonstrated the vacuum decay leak test method's ability to accurately and reliably detect those packages with laser-drilled holes greater than or equal to approximately 5 μm in nominal diameter. All development and validation studies were performed at Whitehouse Analytical Laboratories in Whitehouse, NJ, under the direction of consultant Dana Guazzo of RxPax, LLC, using a VeriPac 455 Micro Leak Test System by Packaging Technologies & Inspection (Tuckahoe, NY). Bristol Myers Squibb (New Brunswick, NJ) fully subsidized all work.

  10. Determination of GHB in human hair by HPLC-MS/MS: Development and validation of a method and application to a study group and three possible single exposure cases.

    PubMed

    Bertol, Elisabetta; Mari, Francesco; Vaiano, Fabio; Romano, Guido; Zaami, Simona; Baglìo, Giovanni; Busardò, Francesco Paolo

    2015-05-01

Over the last two decades, gamma-hydroxybutyrate (GHB) has gained notoriety as a euphoric and disinhibiting drug of abuse in cases of drug-related sexual assault, and for this reason it is considered a 'date rape' drug. The first aim of this paper was to develop and fully validate a method for the detection of GHB in human hair by high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) after liquid-liquid extraction (LLE). The second aim was to apply the method to hair samples from 30 GHB-free individuals in order to determine the basal level. The results showed no significant differences in endogenous concentrations (p = 0.556) between hair samples of the three groups (black, blonde, and dyed hair), and the age and sex of the subjects did not affect the endogenous levels. Another 12 healthy volunteers with no previous history of GHB use were selected; a single dose (25 mg/kg) was orally administered to each, and hair samples were collected before the administration of the single dose and again one month and two months later. The segmental analysis of the latter two samples allowed us to calculate two ratios: 4.45:1 (95% C.I. 3.52-5.63) and 3.35:1 (95% C.I. 2.14-5.18), respectively, which can be recommended as reasonable values for a positive identification of GHB intake. Finally, the method was applied to three real cases where a GHB single exposure probably occurred. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Analysis of Ethanolamines: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS888

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Vu, A; Koester, C

The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method titled 'Analysis of Diethanolamine, Triethanolamine, n-Methyldiethanolamine, and n-Ethyldiethanolamine in Water by Single Reaction Monitoring Liquid Chromatography/Tandem Mass Spectrometry (LC/MS/MS): EPA Method MS888'. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in 'EPA Method MS888' for analysis of the listed ethanolamines in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of 'EPA Method MS888' can be determined.

  12. Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems

    NASA Astrophysics Data System (ADS)

    Nieciąg, Halina

    2015-10-01

Software is used to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses a method for conducting validation studies of a fragment of software used to calculate the values of measurands. Due to the number and nature of the variables affecting coordinate measurement results and the complex, multi-dimensional character of measurands, the study used the Monte Carlo method of numerical simulation. The article presents a possible improvement on the results obtained by classic Monte Carlo tools: the Latin Hypercube Sampling (LHS) algorithm was implemented as an alternative to the simple sampling scheme of the classic algorithm.
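A minimal sketch of Latin Hypercube Sampling, the stratified alternative to simple random sampling mentioned above (pure Python on the unit hypercube; wiring it to an actual measurand model is left out):

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin Hypercube Sampling on [0, 1)^d: each dimension is split into
    n_samples equal strata and each stratum is sampled exactly once."""
    rng = rng or random.Random(0)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random pairing of strata across dimensions
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n_samples
    return samples

pts = latin_hypercube(10, 2)
# Stratification check: each dimension has exactly one point per decile.
for d in range(2):
    assert sorted(int(p[d] * 10) for p in pts) == list(range(10))
```

Compared with simple sampling, this guarantees coverage of every stratum of each input variable, which typically reduces the variance of the simulated measurand estimates for the same sample count.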

  13. A Framework for Mixing Methods in Quantitative Measurement Development, Validation, and Revision: A Case Study

    ERIC Educational Resources Information Center

    Luyt, Russell

    2012-01-01

    A framework for quantitative measurement development, validation, and revision that incorporates both qualitative and quantitative methods is introduced. It extends and adapts Adcock and Collier's work, and thus, facilitates understanding of quantitative measurement development, validation, and revision as an integrated and cyclical set of…

  14. Methods to validate the accuracy of an indirect calorimeter in the in-vitro setting.

    PubMed

    Oshima, Taku; Ragusa, Marco; Graf, Séverine; Dupertuis, Yves Marc; Heidegger, Claudia-Paula; Pichard, Claude

    2017-12-01

The international ICALIC initiative aims at developing a new indirect calorimeter according to the needs of the clinicians and researchers in the field of clinical nutrition and metabolism. The project initially focuses on validating the calorimeter for use in mechanically ventilated, acutely ill adult patients. However, standard methods to validate the accuracy of calorimeters have not yet been established. This paper describes the procedures for the in-vitro tests to validate the accuracy of the new indirect calorimeter, and defines the ranges for the parameters to be evaluated in each test to optimize the validation for clinical and research calorimetry measurements. Two in-vitro tests have been defined to validate the accuracy of the gas analyzers and the overall function of the new calorimeter. 1) Gas composition analysis allows validating the accuracy of O2 and CO2 analyzers. Reference gas of known O2 (or CO2) concentration is diluted by pure nitrogen gas to achieve a predefined O2 (or CO2) concentration, to be measured by the indirect calorimeter. The O2 and CO2 concentrations to be tested were determined according to their expected ranges of concentrations during calorimetry measurements. 2) Gas exchange simulator analysis validates O2 consumption (VO2) and CO2 production (VCO2) measurements. CO2 gas injection into artificial breath gas provided by the mechanical ventilator simulates VCO2. The resulting dilution of the O2 concentration in the expiratory air is analyzed by the calorimeter as VO2. CO2 gas of identical concentration to the fraction of inspired O2 (FiO2) is used to simulate identical VO2 and VCO2. Indirect calorimetry results from publications were analyzed to determine the VO2 and VCO2 values to be tested for the validation. The O2 concentration in respiratory air is highest at inspiration, and can decrease to 15% during expiration. CO2 concentration can be as high as 5% in expired air. To validate analyzers for measurements of Fi
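The gas-composition test reduces to simple dilution arithmetic; a sketch under assumed flow values (the flows are illustrative, not from the paper):

```python
# Sketch of the dilution arithmetic behind the gas-composition test:
# a reference gas of known O2 (or CO2) fraction is mixed with pure N2.
# Flow values are illustrative assumptions, not from the paper.

def diluted_fraction(c_ref, q_ref, q_n2):
    """O2 (or CO2) fraction after mixing a reference gas with pure N2.

    c_ref: gas fraction in the reference gas (e.g. 0.21 for 21% O2)
    q_ref: reference gas flow; q_n2: nitrogen dilution flow (same units)
    """
    return c_ref * q_ref / (q_ref + q_n2)

# Diluting a 21% O2 reference gas 1:1 with nitrogen halves the fraction.
print(round(diluted_fraction(0.21, 1.0, 1.0), 3))  # 0.105
```

Sweeping the nitrogen flow generates the predefined target concentrations across the expected measurement range (roughly 15-21% for O2, 0-5% for CO2 per the abstract).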

  15. Comparison of validation methods for forming simulations

    NASA Astrophysics Data System (ADS)

    Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus

    2018-05-01

    The forming simulation of fibre reinforced thermoplastics could reduce the development time and improve the forming results. But to take advantage of the full potential of the simulations it has to be ensured that the predictions for material behaviour are correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are for example the outer contour, the occurrence of defects and the fibre paths. To measure these features various methods are available. Most relevant and also most difficult to measure are the emerging fibre orientations. For that reason, the focus of this study was on measuring this feature. The aim was to give an overview of the properties of different measuring systems and select the most promising systems for a comparison survey. Selected were an optical, an eddy current and a computer-assisted tomography system with the focus on measuring the fibre orientations. Different formed 3D parts made of unidirectional glass fibre and carbon fibre reinforced thermoplastics were measured. Advantages and disadvantages of the tested systems were revealed. Optical measurement systems are easy to use, but are limited to the surface plies. With an eddy current system also lower plies can be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system all plies can be measured, but the system is limited to small parts and challenging to evaluate.

  16. Unexpected but Most Welcome: Mixed Methods for the Validation and Revision of the Participatory Evaluation Measurement Instrument

    ERIC Educational Resources Information Center

    Daigneault, Pierre-Marc; Jacob, Steve

    2014-01-01

    Although combining methods is nothing new, more contributions about why and how to mix methods for validation purposes are needed. This article presents a case of validating the inferences drawn from the Participatory Evaluation Measurement Instrument, an instrument that purports to measure stakeholder participation in evaluation. Although the…

  17. Validation sampling can reduce bias in healthcare database studies: an illustration using influenza vaccination effectiveness

    PubMed Central

    Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael

    2014-01-01

    Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144

  18. A Novel Group-Fused Sparse Partial Correlation Method for Simultaneous Estimation of Functional Networks in Group Comparison Studies.

    PubMed

    Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando

    2018-05-01

The conventional way to estimate functional networks is primarily based on Pearson correlation along with the classic Fisher Z test. In general, networks are calculated at the individual level and subsequently aggregated to obtain group-level networks. However, such estimated networks are inevitably affected by the inherent large inter-subject variability. A joint graphical model with Stability Selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits may be compromised when two groups are being compared, given that JGMSS is blind to other groups when it is applied to estimate networks from a given group. We propose a novel method for robustly estimating networks from two groups by using group-fused multiple graphical lasso combined with stability selection, named GMGLASS. Specifically, by simultaneously estimating similar within-group networks and the between-group difference, it is possible to address both the inter-subject variability of estimated individual networks inherent in existing methods such as the Fisher Z test, and the issue of JGMSS ignoring between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of a few key network metrics, and to compare it with JGMSS and the Fisher Z test, we apply these methods to both simulated and in vivo data. As a method aiming for group comparison studies, our study involves two groups for each case, i.e., a normal control group and a patient group; for in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.
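For reference, the classic Fisher Z comparison that GMGLASS is benchmarked against can be sketched in a few lines (the correlations and sample sizes below are illustrative, not from the study):

```python
import math

# Sketch of the conventional baseline: comparing one edge's correlation
# between two groups via the Fisher z-transform. Values are illustrative.

def fisher_z(r):
    """Fisher z-transform of a correlation coefficient."""
    return math.atanh(r)

def two_group_z(r1, n1, r2, n2):
    """z statistic for the difference of two independent correlations."""
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (fisher_z(r1) - fisher_z(r2)) / se

# Edge correlation 0.6 in controls (n=30) vs 0.3 in patients (n=30).
z = two_group_z(0.6, 30, 0.3, 30)
print(round(z, 2))  # ~1.41, i.e. not significant at the usual 1.96 cutoff
```

Because this test is run edge-by-edge on noisy individual estimates, it inherits the inter-subject variability that the joint graphical-lasso approaches aim to suppress.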

  19. Performance Validity Testing in Neuropsychology: Methods for Measurement Development and Maximizing Diagnostic Accuracy.

    PubMed

    Wodushek, Thomas R; Greher, Michael R

    2017-05-01

    In the first column in this 2-part series, Performance Validity Testing in Neuropsychology: Scientific Basis and Clinical Application-A Brief Review, the authors introduced performance validity tests (PVTs) and their function, provided a justification for why they are necessary, traced their ongoing endorsement by neuropsychological organizations, and described how they are used and interpreted by ever increasing numbers of clinical neuropsychologists. To enhance readers' understanding of these measures, this second column briefly describes common detection strategies used in PVTs as well as the typical methods used to validate new PVTs and determine cut scores for valid/invalid determinations. We provide a discussion of the latest research demonstrating how neuropsychologists can combine multiple PVTs in a single battery to improve sensitivity/specificity to invalid responding. Finally, we discuss future directions for the research and application of PVTs.

  20. Validated UV-spectrophotometric method for the evaluation of the efficacy of makeup remover.

    PubMed

    Charoennit, P; Lourith, N

    2012-04-01

    A UV-spectrophotometric method for the analysis of makeup remover was developed and validated according to ICH guidelines. Three makeup removers for which the main ingredients consisted of vegetable oil (A), mineral oil and silicone (B) and mineral oil and water (C) were sampled in this study. Ethanol was the optimal solvent because it did not interfere with the maximum absorbance of the liquid foundation at 250 nm. The linearity was determined over a range of makeup concentrations from 0.540 to 1.412 mg mL⁻¹ (R² = 0.9977). The accuracy of this method was determined by analysing low, intermediate and high concentrations of the liquid foundation and gave 78.59-91.57% recoveries with a relative standard deviation of <2% (0.56-1.45%). This result demonstrates the validity and reliability of this method. The reproducibilities were 97.32 ± 1.79, 88.34 ± 2.69 and 95.63 ± 2.94 for preparations A, B and C respectively, which are within the acceptable limits set forth by the ASEAN analytical validation guidelines, which ensure the precision of the method under the same operating conditions over a short time interval and the inter-assay precision within the laboratory. The proposed method is therefore a simple, rapid, accurate, precise and inexpensive technique for the routine analysis of makeup remover efficacy. © 2011 The Authors. ICS © 2011 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
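Linearity figures like the R² = 0.9977 above come from an ordinary least-squares calibration fit; a minimal sketch in which the concentrations span the reported range but the absorbance readings are invented for illustration:

```python
# Sketch: ordinary least-squares calibration line and R^2, as used to
# establish linearity. Absorbance values are hypothetical.

def linear_fit(x, y):
    """Return (slope, intercept, R^2) for a simple least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

conc = [0.540, 0.758, 0.976, 1.194, 1.412]  # mg/mL, spanning the reported range
absorb = [0.21, 0.30, 0.38, 0.47, 0.55]     # hypothetical absorbances at 250 nm
slope, intercept, r2 = linear_fit(conc, absorb)
print(round(r2, 4))  # close to 1 for a linear response
```

An R² this close to 1 over the working range is what justifies single-point or bracketing quantitation in routine use.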

  1. Effect of patient selection method on provider group performance estimates.

    PubMed

    Thorpe, Carolyn T; Flood, Grace E; Kraft, Sally A; Everett, Christine M; Smith, Maureen A

    2011-08-01

Performance measurement at the provider group level is increasingly advocated, but different methods for selecting patients when calculating provider group performance have received little evaluation. We compared 2 currently used methods according to characteristics of the patients selected and impact on performance estimates. We analyzed Medicare claims data for fee-for-service beneficiaries with diabetes ever seen at an academic multispeciality physician group in 2003 to 2004. We examined sample size, sociodemographics, clinical characteristics, and receipt of recommended diabetes monitoring in 2004 for the groups of patients selected using 2 methods implemented in large-scale performance initiatives: the Plurality Provider Algorithm and the Diabetes Care Home method. We examined differences among discordantly assigned patients to determine evidence for differential selection regarding these measures. Fewer patients were selected under the Diabetes Care Home method (n=3558) than the Plurality Provider Algorithm (n=4859). Compared with the Plurality Provider Algorithm, the Diabetes Care Home method preferentially selected patients who were female, not entitled because of disability, older, more likely to have hypertension, and less likely to have kidney disease and peripheral vascular disease, and had lower levels of predicted utilization. Diabetes performance was higher under the Diabetes Care Home method, with 67% versus 58% receiving ≥1 A1c test, 70% versus 65% receiving ≥1 low-density lipoprotein (LDL) test, and 38% versus 37% receiving an eye examination. The method used to select patients when calculating provider group performance may affect patient case mix and estimated performance levels, and warrants careful consideration when comparing performance estimates.

  2. Validation sampling can reduce bias in health care database studies: an illustration using influenza vaccination effectiveness.

    PubMed

    Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L

    2013-08-01

    Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.
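The reweighting idea can be illustrated with a toy stratified example (the strata, counts, and weighting scheme here are invented for illustration and are much cruder than the study's actual estimator):

```python
# Sketch of validation-sample reweighting (illustrative, not the study's
# method): records in the validation sample, which carries the extra
# confounder data, are weighted so they represent the full cohort within
# coarse strata of a shared variable.

def reweight(full_counts, validation_counts):
    """Per-stratum weights: full-sample size over validation-sample size."""
    return {s: full_counts[s] / validation_counts[s] for s in validation_counts}

full = {"age<75": 8000, "age>=75": 2000}   # strata in the full sample
valid = {"age<75": 400, "age>=75": 400}    # validation sample oversamples the old
w = reweight(full, valid)
# Weighted validation totals reproduce the full-sample stratum sizes.
assert {s: w[s] * valid[s] for s in valid} == full
```

Confounder-adjusted effect estimates computed on the weighted validation sample then stand in for estimates on the full cohort, which is why representativeness and size of the validation sample matter so much, as the authors note.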

  3. Validity of Dietary Assessment in Athletes: A Systematic Review

    PubMed Central

    Beck, Kathryn L.; Gifford, Janelle A.; Slater, Gary; Flood, Victoria M.; O’Connor, Helen

    2017-01-01

Dietary assessment methods that are recognized as appropriate for the general population are usually applied in a similar manner to athletes, despite the knowledge that sport-specific factors can complicate assessment and impact accuracy in unique ways. As dietary assessment methods are used extensively within the field of sports nutrition, there is concern that the validity of these methodologies has not undergone more rigorous evaluation in this unique population sub-group. The purpose of this systematic review was to compare two or more methods of dietary assessment, including dietary intake measured against biomarkers or reference measures of energy expenditure, in athletes. Six electronic databases were searched for English-language, full-text articles published from January 1980 until June 2016. The search strategy combined the following keywords: diet, nutrition assessment, athlete, and validity; where the following outcomes are reported but not limited to: energy intake, macro and/or micronutrient intake, food intake, nutritional adequacy, diet quality, or nutritional status. Meta-analysis was performed on studies with sufficient methodological similarity, with between-group standardized mean differences (or effect size) and 95% confidence intervals (CI) being calculated. Of the 1624 studies identified, 18 were eligible for inclusion. Studies comparing self-reported energy intake (EI) to energy expenditure assessed via doubly labelled water were grouped for comparison (n = 11) and demonstrated mean EI was under-estimated by 19% (−2793 ± 1134 kJ/day). Meta-analysis revealed a large pooled effect size of −1.006 (95% CI: −1.3 to −0.7; p < 0.001). The remaining studies (n = 7) compared a new dietary tool or instrument to a reference method(s) (e.g., food record, 24-h dietary recall, biomarker) as part of a validation study. This systematic review revealed there are limited robust studies evaluating dietary assessment methods in athletes. Existing literature

  4. Further assessment of a method to estimate reliability and validity of qualitative research findings.

    PubMed

    Hinds, P S; Scandrett-Hibden, S; McAulay, L S

    1990-04-01

    The reliability and validity of qualitative research findings are viewed with scepticism by some scientists. This scepticism is derived from the belief that qualitative researchers give insufficient attention to estimating reliability and validity of data, and the differences between quantitative and qualitative methods in assessing data. The danger of this scepticism is that relevant and applicable research findings will not be used. Our purpose is to describe an evaluative strategy for use with qualitative data, a strategy that is a synthesis of quantitative and qualitative assessment methods. Results of the strategy and factors that influence its use are also described.

  5. CEOS WGCV Land Product Validation (LPV) Sub-Group: Current and Potential Roles in Future Decadal Survey Missions

    NASA Technical Reports Server (NTRS)

    Roman, Miguel O.; Nightingale, Joanne; Nickeson, Jaime; Schaepman-Strub, Gabriela

    2011-01-01

The goals and objectives of the sub-group are: to foster and coordinate quantitative validation of higher-level global land products derived from remotely sensed data, in a traceable way, and to relay results so they are relevant to users; to increase the quality and efficiency of global satellite product validation by developing and promoting international standards and protocols for: (1) field sampling, (2) scaling techniques, (3) accuracy reporting, (4) data/information exchange; and to provide feedback to international structures (GEOSS) for: (1) requirements on product accuracy and quality assurance (QA4EO), (2) terrestrial ECV measurement standards, (3) definitions for future missions.

  6. Validation of quantitative method for azoxystrobin residues in green beans and peas.

    PubMed

    Abdelraheem, Ehab M H; Hassan, Sayed M; Arief, Mohamed M H; Mohammad, Somaia G

    2015-09-01

This study presents a method validation for the extraction and quantitative analysis of azoxystrobin residues in green beans and peas using HPLC-UV, with the results confirmed by GC-MS. The method involved initial extraction with acetonitrile after the addition of salts (magnesium sulfate and sodium chloride), followed by a cleanup step with activated neutral carbon. The validation parameters linearity, matrix effect, LOQ, specificity, trueness, and repeatability precision were assessed. The spiking levels for the trueness and precision experiments were 0.1, 0.5, and 3 mg/kg. For HPLC-UV analysis, mean recoveries ranged from 83.69% to 91.58% and from 81.99% to 107.85% for green beans and peas, respectively. For GC-MS analysis, mean recoveries ranged from 76.29% to 94.56% and from 80.77% to 100.91% for green beans and peas, respectively. According to these results, the method has been proven to be efficient for the extraction and determination of azoxystrobin residues in green beans and peas. Copyright © 2015 Elsevier Ltd. All rights reserved.
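Recovery and repeatability figures like those above are straightforward to compute; a sketch with hypothetical replicate data for one spiking level:

```python
# Sketch: mean recovery and relative standard deviation (RSD) as used in
# trueness/precision validation. Replicate values are hypothetical.

def recovery_pct(measured, spiked):
    """Mean extraction recovery as a percentage of the spiked level."""
    return 100.0 * sum(measured) / (len(measured) * spiked)

def rsd_pct(values):
    """Relative standard deviation (sample SD over mean), in percent."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return 100.0 * sd / mean

# Hypothetical replicate results (mg/kg) for a 0.5 mg/kg spike.
reps = [0.43, 0.45, 0.44]
print(round(recovery_pct(reps, 0.5), 1))  # 88.0
print(round(rsd_pct(reps), 2))  # 2.27
```

A mean recovery within roughly 70-120% with an acceptably low RSD at each spiking level is the usual acceptance pattern in residue-method guidelines.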

  7. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekechukwu, A

Method validation is the process of evaluating whether an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP), International Conference on Harmonisation (ICH), and the United States Food and Drug Administration (USFDA) provide a framework for performing such validations. In general, methods for regulatory compliance must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Elements of these guidelines are readily adapted to the issue of validation for beryllium sampling and analysis. This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers and books reviewed is given in the Appendix. Available validation documents and guides are listed therein; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of the validation process at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all referenced documents were published in English.

  8. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for the detection of genotoxic carcinogens: I. Summary of pre-validation study results.

    PubMed

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Burlinson, Brian; Escobar, Patricia A; Kraynak, Andrew R; Nakagawa, Yuzuki; Nakajima, Madoka; Pant, Kamala; Asano, Norihide; Lovell, David; Morita, Takeshi; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this validation effort was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The purpose of the pre-validation studies (i.e., Phase 1 through 3), conducted in four or five laboratories with extensive comet assay experience, was to optimize the protocol to be used during the definitive validation study. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Validation of a Method To Screen for Pulmonary Hypertension in Advanced Idiopathic Pulmonary Fibrosis

    PubMed Central

    Zisman, David A.; Karlamangla, Arun S.; Kawut, Steven M.; Shlobin, Oksana A.; Saggar, Rajeev; Ross, David J.; Schwarz, Marvin I.; Belperio, John A.; Ardehali, Abbas; Lynch, Joseph P.; Nathan, Steven D.

    2008-01-01

    Background We have developed a method to screen for pulmonary hypertension (PH) in idiopathic pulmonary fibrosis (IPF) patients, based on a formula to predict mean pulmonary artery pressure (MPAP) from standard lung function measurements. The objective of this study was to validate this method in a separate group of IPF patients. Methods Cross-sectional study of 60 IPF patients from two institutions. The accuracy of the MPAP estimation was assessed by examining the correlation between the predicted and measured MPAPs and the magnitude of the estimation error. The discriminatory ability of the method for PH was assessed using the area under the receiver operating characteristic curve (AUC). Results There was strong correlation in the expected direction between the predicted and measured MPAPs (r = 0.72; p < 0.0001). The estimated MPAP was within 5 mm Hg of the measured MPAP 72% of the time. The AUC for predicting PH was 0.85, and did not differ by institution. A formula-predicted MPAP > 21 mm Hg was associated with a sensitivity, specificity, positive predictive value, and negative predictive value of 95%, 58%, 51%, and 96%, respectively, for PH defined as MPAP from right-heart catheterization > 25 mm Hg. Conclusions A prediction formula for MPAP using standard lung function measurements can be used to screen for PH in IPF patients. PMID:18198245
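
    Screening figures of the kind reported (sensitivity, specificity, positive and negative predictive values) all derive from a 2×2 table of formula-predicted versus catheterization-confirmed PH. A minimal sketch; the counts below are hypothetical, not the study's data:

    ```python
    # Sketch: sensitivity, specificity, PPV and NPV from a 2x2 confusion table.
    # The counts passed in below are invented for illustration.

    def screening_metrics(tp, fp, fn, tn):
        return {
            "sensitivity": tp / (tp + fn),   # true positive rate
            "specificity": tn / (tn + fp),   # true negative rate
            "ppv": tp / (tp + fp),           # positive predictive value
            "npv": tn / (tn + fn),           # negative predictive value
        }

    m = screening_metrics(tp=19, fp=18, fn=1, tn=22)
    ```

    A high-sensitivity, moderate-specificity profile like the one reported is typical of a screening cutoff chosen to minimize missed cases at the cost of false positives.
    
    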

  10. Validity of a Simulation Game as a Method for History Teaching

    ERIC Educational Resources Information Center

    Corbeil, Pierre; Laveault, Dany

    2011-01-01

    The aim of this research is, first, to determine the validity of a simulation game as a method of teaching and an instrument for the development of reasoning and, second, to study the relationship between learning and students' behavior toward games. The participants were college students in a History of International Relations course, with two…

  11. Quality of Life on Arterial Hypertension: Validity of Known Groups of MINICHAL

    PubMed Central

    Soutello, Ana Lúcia Soares; Rodrigues, Roberta Cunha Matheus; Jannuzzi, Fernanda Freire; São-João, Thaís Moreira; Martini, Gabriela Giordano; Nadruz Jr., Wilson; Gallani, Maria-Cecília Bueno Jayme

    2015-01-01

    Introduction In the care of hypertension, it is important that health professionals possess available tools that allow evaluating the impairment of the health-related quality of life, according to the severity of hypertension and the risk for cardiovascular events. Among the instruments developed for the assessment of health-related quality of life, there is the Mini-Cuestionario de Calidad de Vida en la Hipertensión Arterial (MINICHAL), recently adapted to the Brazilian culture. Objective To estimate the validity of known groups of the Brazilian version of the MINICHAL regarding the classification of risk for cardiovascular events, symptoms, severity of dyspnea and target-organ damage. Methods Data of 200 hypertensive outpatients concerning sociodemographic and clinical information and health-related quality of life were gathered by consulting the medical charts and the application of the Brazilian version of MINICHAL. The Mann-Whitney test was used to compare health-related quality of life in relation to symptoms and target-organ damage. The Kruskal-Wallis test and ANOVA with ranks transformation were used to compare health-related quality of life in relation to the classification of risk for cardiovascular events and intensity of dyspnea, respectively. Results The MINICHAL was able to discriminate health-related quality of life in relation to symptoms and kidney damage, but did not discriminate health-related quality of life in relation to the classification of risk for cardiovascular events. Conclusion The Brazilian version of the MINICHAL is a questionnaire capable of discriminating differences in the health-related quality of life regarding dyspnea, chest pain, palpitation, lipothymy, cephalea and renal damage. PMID:25993593

  12. Validated spectrophotometric methods for simultaneous determination of troxerutin and carbazochrome in dosage form

    NASA Astrophysics Data System (ADS)

    Khattab, Fatma I.; Ramadan, Nesrin K.; Hegazy, Maha A.; Al-Ghobashy, Medhat A.; Ghoniem, Nermine S.

    2015-03-01

    Four simple, accurate, sensitive and precise spectrophotometric methods were developed and validated for simultaneous determination of Troxerutin (TXN) and Carbazochrome (CZM) in their bulk powders, laboratory prepared mixtures and pharmaceutical dosage forms. Method A is first derivative spectrophotometry (D1), where TXN and CZM were determined at 294 and 483.5 nm, respectively. Method B is first derivative of ratio spectra (DD1), where the peak amplitudes at 248 nm for TXN and 439 nm for CZM were used for their determination. Method C is ratio subtraction (RS), in which TXN was determined at its λmax (352 nm) in the presence of CZM, which was determined by D1 at 483.5 nm. Method D is mean centering of the ratio spectra (MCR), in which the mean centered values at 300 nm and 340.0 nm were used for the two drugs, respectively. The two compounds were simultaneously determined in the concentration ranges of 5.00-50.00 μg mL-1 and 0.5-10.0 μg mL-1 for TXN and CZM, respectively. The methods were validated according to the ICH guidelines and the results were statistically compared to the manufacturer's method.
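
    The first-derivative (D1) treatment underlying methods A and B can be sketched numerically. A minimal illustration using numpy.gradient on a synthetic Gaussian absorption band; the band position (352 nm) echoes the λmax mentioned above, but the band shape, width and wavelength grid are invented:

    ```python
    import numpy as np

    # Sketch: numerical first-derivative spectrophotometry (D1) on a synthetic
    # Gaussian absorption band. Band shape and grid are illustrative only.
    wavelengths = np.linspace(200, 500, 601)          # nm, 0.5 nm step
    spectrum = np.exp(-((wavelengths - 352.0) ** 2) / (2 * 15.0 ** 2))

    # First derivative dA/dlambda; it crosses zero at the absorption maximum,
    # which is why D1 amplitudes are read at wavelengths flanking the band.
    d1 = np.gradient(spectrum, wavelengths)

    peak_idx = np.argmax(spectrum)
    ```

    Reading the derivative amplitude away from the λmax of an interfering component is what lets D1 resolve two overlapping bands.
    
    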

  13. Validated spectrophotometric methods for simultaneous determination of troxerutin and carbazochrome in dosage form.

    PubMed

    Khattab, Fatma I; Ramadan, Nesrin K; Hegazy, Maha A; Al-Ghobashy, Medhat A; Ghoniem, Nermine S

    2015-03-15

    Four simple, accurate, sensitive and precise spectrophotometric methods were developed and validated for simultaneous determination of Troxerutin (TXN) and Carbazochrome (CZM) in their bulk powders, laboratory prepared mixtures and pharmaceutical dosage forms. Method A is first derivative spectrophotometry (D(1)), where TXN and CZM were determined at 294 and 483.5 nm, respectively. Method B is first derivative of ratio spectra (DD(1)), where the peak amplitudes at 248 nm for TXN and 439 nm for CZM were used for their determination. Method C is ratio subtraction (RS), in which TXN was determined at its λmax (352 nm) in the presence of CZM, which was determined by D(1) at 483.5 nm. Method D is mean centering of the ratio spectra (MCR), in which the mean centered values at 300 nm and 340.0 nm were used for the two drugs, respectively. The two compounds were simultaneously determined in the concentration ranges of 5.00-50.00 μg mL(-1) and 0.5-10.0 μg mL(-1) for TXN and CZM, respectively. The methods were validated according to the ICH guidelines and the results were statistically compared to the manufacturer's method. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Validation of the Abdominal Pain Index Using a Revised Scoring Method

    PubMed Central

    Sherman, Amanda L.; Smith, Craig A.; Walker, Lynn S.

    2015-01-01

    Objective Evaluate the psychometric properties of child- and parent-report versions of the four-item Abdominal Pain Index (API) in children with functional abdominal pain (FAP) and healthy controls, using a revised scoring method that facilitates comparisons of scores across samples and time. Methods Pediatric patients aged 8–18 years with FAP and controls completed the API at baseline (N = 1,967); a subset of their parents (N = 290) completed the API regarding the child’s pain. Subsets of patients completed follow-up assessments at 2 weeks (N = 231), 3 months (N = 330), and 6 months (N = 107). Subsets of both patients (N = 389) and healthy controls (N = 172) completed a long-term follow-up assessment (mean age at follow-up = 20.21 years, SD = 3.75). Results The API demonstrated good concurrent, discriminant, and construct validity, as well as good internal consistency. Conclusion We conclude that the API, using the revised scoring method, is a useful, reliable, and valid measure of abdominal pain severity. PMID:25617048

  15. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
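
    The information criteria compared above trade goodness of fit against parameter count. A minimal sketch of the corrected Akaike criterion (AICc) and BIC in their standard least-squares forms (n observations, k parameters, SSE sum of squared errors); the two "alternative models" and their SSE values are invented:

    ```python
    import math

    # Sketch: corrected Akaike (AICc) and Bayesian (BIC) information criteria in
    # their least-squares forms. Lower values indicate the preferred model.

    def aicc(n, k, sse):
        aic = n * math.log(sse / n) + 2 * k
        return aic + (2 * k * (k + 1)) / (n - k - 1)  # small-sample correction

    def bic(n, k, sse):
        return n * math.log(sse / n) + k * math.log(n)

    # Two hypothetical alternative models fit to the same n = 50 observations:
    # model A (3 parameters) fits slightly worse than model B (6 parameters).
    scores = {
        "A": (aicc(50, 3, sse=12.0), bic(50, 3, sse=12.0)),
        "B": (aicc(50, 6, sse=10.5), bic(50, 6, sse=10.5)),
    }
    ```

    With these invented numbers both criteria prefer the simpler model A: model B's modest fit improvement does not offset its three extra parameters, which is exactly the parsimony pressure the paper relies on.
    
    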

  16. Evaluating the Social Validity of the Early Start Denver Model: A Convergent Mixed Methods Study

    ERIC Educational Resources Information Center

    Ogilvie, Emily; McCrudden, Matthew T.

    2017-01-01

    An intervention has social validity to the extent that it is socially acceptable to participants and stakeholders. This pilot convergent mixed methods study evaluated parents' perceptions of the social validity of the Early Start Denver Model (ESDM), a naturalistic behavioral intervention for children with autism. It focused on whether the parents…

  17. Method validation for methanol quantification present in working places

    NASA Astrophysics Data System (ADS)

    Muna, E. D. M.; Bizarri, C. H. B.; Maciel, J. R. M.; da Rocha, G. P.; de Araújo, I. O.

    2015-01-01

    Given the widespread use of methanol in different industry sectors and the high toxicity associated with this substance, an analytical method is needed that can determine methanol levels in the air of working environments with adequate sensitivity, precision and accuracy. Based on the methodology established by the National Institute for Occupational Safety and Health (NIOSH), a method for the determination of methanol collected on silica gel tubes was validated; its effectiveness was demonstrated through participation in the international collaborative program sponsored by the American Industrial Hygiene Association (AIHA).

  18. Investigation of a Sybr-Green-Based Method to Validate DNA Sequences for DNA Computing

    DTIC Science & Technology

    2005-05-01

    Pogozelski, Wendy; Priore, Salvatore; Bernard, Matthew

  19. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    PubMed

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    Our aim was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We conducted a comprehensive survey of purity methods, and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements, and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable to dynamically assess method performance characteristics, based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitates the introduction of more advanced analytical technologies during the method lifecycle.

  20. Optimization and validation of a minicolumn method for determining aflatoxins in copra meal.

    PubMed

    Arim, R H; Aguinaldo, A R; Tanaka, T; Yoshizawa, T

    1999-01-01

    A minicolumn (MC) method for determining aflatoxins in copra meal was optimized and validated. The method uses methanol-4% KCl solution as extractant and CuSO4 solution as clarifying agent. The chloroform extract is applied to an MC that incorporates "lahar," an indigenous material, as substitute for silica gel. The "lahar"-containing MC produces a more distinct and intense blue fluorescence on the Florisil layer than an earlier MC. The method has a detection limit of 15 micrograms total aflatoxins/kg sample. Confirmatory tests using 50% H2SO4 and trifluoroacetic acid in benzene with 25% HNO3 showed that copra meal samples contained aflatoxins and no interfering agents. The MC responses of the copra meal samples were in good agreement with their behavior in thin-layer chromatography. This modified MC method is accurate, giving linearity-valid results; rapid, being done in 15 min; economical, using low-volume reagents; relatively safe, having low-exposure risk of analysts to chemicals; and simple, making its field application feasible.

  1. A systematic and critical review on bioanalytical method validation using the example of simultaneous quantitation of antidiabetic agents in blood.

    PubMed

    Fachi, Mariana Millan; Leonart, Letícia Paula; Cerqueira, Letícia Bonancio; Pontes, Flavia Lada Degaut; de Campos, Michel Leandro; Pontarolo, Roberto

    2017-06-15

    A systematic and critical review was conducted on bioanalytical methods validated to quantify combinations of antidiabetic agents in human blood. The aim of this article was to verify how the validation process of bioanalytical methods is performed and the quality of the published records. The validation assays were evaluated according to international guidelines. The main problems in the validation process are pointed out and discussed to help researchers to choose methods that are truly reliable and can be successfully applied for their intended use. The combination of oral antidiabetic agents was chosen as these are some of the most studied drugs and several methods are present in the literature. Moreover, this article may be applied to the validation process of all bioanalytical methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. A Self-Validation Method for High-Temperature Thermocouples Under Oxidizing Atmospheres

    NASA Astrophysics Data System (ADS)

    Mokdad, S.; Failleau, G.; Deuzé, T.; Briaudeau, S.; Kozlova, O.; Sadli, M.

    2015-08-01

    Thermocouples are prone to significant drift in use, particularly when they are exposed to high temperatures. Indeed, high-temperature exposure can affect the response of a thermocouple progressively by changing the structure of the thermoelements and inducing inhomogeneities. Moreover, an oxidizing atmosphere contributes to thermocouple drift by changing the chemical nature of the metallic wires by the effect of oxidation. In general, severe uncontrolled drift of thermocouples results from these combined influences. A periodic recalibration of the thermocouple can be performed, but sometimes it is not possible to remove the sensor out of the process. Self-validation methods for thermocouples provide a solution to avoid this drawback, but there are currently no high-temperature contact thermometers with self-validation capability at temperatures up to . LNE-Cnam has developed fixed-point devices integrated into the thermocouples, consisting of machined alumina-based devices for operation under oxidizing atmospheres. These devices require small amounts of pure metals (typically less than 2 g). They are suitable for self-validation of high-temperature thermocouples up to . In this paper the construction and the characterization of these integrated fixed-point devices are described. The phase-transition plateaus of gold, nickel, and palladium, which enable coverage of the temperature range between and , are assessed with this self-validation technique. Results of measurements performed at LNE-Cnam with the integrated self-validation module at several levels of temperature will be presented. The performance of the devices is assessed and discussed, in terms of robustness and metrological characteristics. Uncertainty budgets are also proposed and detailed.

  3. Clinical Validation of Heart Rate Apps: Mixed-Methods Evaluation Study.

    PubMed

    Vandenberk, Thijs; Stans, Jelle; Mortelmans, Christophe; Van Haelst, Ruth; Van Schelvergem, Gertjan; Pelckmans, Caroline; Smeets, Christophe Jp; Lanssens, Dorien; De Cannière, Hélène; Storms, Valerie; Thijs, Inge M; Vaes, Bert; Vandervoort, Pieter M

    2017-08-25

    Photoplethysmography (PPG) is a proven way to measure heart rate (HR). This technology is already available in smartphones, which allows measuring HR only by using the smartphone. Given the widespread availability of smartphones, this creates a scalable way to enable mobile HR monitoring. An essential precondition is that these technologies are as reliable and accurate as the current clinical (gold) standards. At this moment, there is no consensus on a gold standard method for the validation of HR apps. This results in different validation processes that do not always reflect the veracious outcome of comparison. The aim of this paper was to investigate and describe the necessary elements in validating and comparing HR apps versus standard technology. The FibriCheck (Qompium) app was used in two separate prospective nonrandomized studies. In the first study, the HR of the FibriCheck app was consecutively compared with 2 different Food and Drug Administration (FDA)-cleared HR devices: the Nonin oximeter and the AliveCor Mobile ECG. In the second study, a next step in validation was performed by comparing the beat-to-beat intervals of the FibriCheck app to a synchronized ECG recording. In the first study, the HR (BPM, beats per minute) of 88 random subjects consecutively measured with the 3 devices showed a correlation coefficient of .834 between FibriCheck and Nonin, .88 between FibriCheck and AliveCor, and .897 between Nonin and AliveCor. A single-way analysis of variance (ANOVA; P=.61) was executed to test the hypothesis that there were no significant differences between the HRs as measured by the 3 devices. In the second study, 20,298 paired R-R intervals (RRI) and peak-to-peak intervals (PPI), in milliseconds, from 229 subjects were analyzed. This resulted in a positive correlation (rs=.993, root mean square deviation [RMSE]=23.04 ms, and normalized root mean square error [NRMSE]=0.012) between the PPI from FibriCheck and the RRI from the wearable ECG. There was no significant difference between the two measurement techniques.

  4. A novel validation and calibration method for motion capture systems based on micro-triangulation.

    PubMed

    Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M

    2018-06-06

    Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, caused by a scaling error, was reduced to 0.77 mm, while the correlation of errors with distance from the origin fell from 0.855 to 0.209. A simpler but less accurate compensation method, using a tape measure over large distances, was also tested; it yielded scaling compensation similar to the surveying method and to direct wand-size compensation with a high-precision 3D scanner. The presented validation methods can be less precise in some respects as compared to previous techniques, but they address an error type that has not been, and cannot be, studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
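
    The RMSE-based absolute accuracy figure is straightforward to illustrate. A minimal sketch comparing marker coordinates from a camera system against reference (surveyed) coordinates; the points below are invented, not the study's data:

    ```python
    import math

    # Sketch: root mean square error between marker coordinates measured by a
    # motion capture system and reference (surveyed) coordinates, in mm.
    # The coordinate triples below are illustrative only.

    def rmse_3d(measured, reference):
        sq = [
            (mx - rx) ** 2 + (my - ry) ** 2 + (mz - rz) ** 2
            for (mx, my, mz), (rx, ry, rz) in zip(measured, reference)
        ]
        return math.sqrt(sum(sq) / len(sq))

    reference = [(0.0, 0.0, 0.0), (1000.0, 0.0, 0.0), (0.0, 1000.0, 500.0)]
    measured = [(0.4, -0.3, 0.1), (1000.9, 0.2, -0.5), (-0.2, 1000.7, 500.3)]
    err = rmse_3d(measured, reference)  # mm
    ```

    Because each squared term is a full 3D distance, a uniform scaling error grows the per-marker error with distance from the origin, which is why the authors track the error-to-distance correlation alongside the RMSE.
    
    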

  5. Validated Spectrophotometric and RP-HPLC-DAD Methods for the Determination of Ursodeoxycholic Acid Based on Derivatization with 2-Nitrophenylhydrazine.

    PubMed

    El-Kafrawy, Dina S; Belal, Tarek S; Mahrous, Mohamed S; Abdel-Khalek, Magdi M; Abo-Gharam, Amira H

    2017-05-01

    This work describes the development, validation, and application of two simple, accurate, and reliable methods for the determination of ursodeoxycholic acid (UDCA) in bulk powder and in pharmaceutical dosage forms. The carboxylic acid group in UDCA was exploited for the development of these novel methods. Method 1 is the colorimetric determination of the drug based on its reaction with 2-nitrophenylhydrazine hydrochloride in the presence of a water-soluble carbodiimide coupler [1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide hydrochloride] and pyridine to produce an acid hydrazide derivative, which ionizes to yield an intense violet color with maximum absorption at 553 nm. Method 2 uses reversed-phase HPLC with diode-array detection for the determination of UDCA after precolumn derivatization using the same reaction mentioned above. The acid hydrazide reaction product was separated using a Pinnacle DB C8 column (4.6 × 150 mm, 5 μm particle size) and a mobile phase consisting of 0.01 M acetate buffer (pH 3)-methanol-acetonitrile (30 + 30 + 40, v/v/v) isocratically pumped at a flow rate of 1 mL/min. Ibuprofen was used as the internal standard (IS). The peaks of the reaction product and IS were monitored at 400 nm. Different experimental parameters for both methods were carefully optimized. Analytical performance of the developed methods were statistically validated for linearity, range, precision, accuracy, specificity, robustness, LOD, and LOQ. Calibration curves showed good linear relationships for concentration ranges 32-192 and 60-600 μg/mL for methods 1 and 2, respectively. The proposed methods were successfully applied for the assay of UDCA in bulk form, capsules, and oral suspension with good accuracy and precision. Assay results were statistically compared with a reference pharmacopeial HPLC method, and no significant differences were observed between proposed and reference methods.

  6. Validation of a Smartphone Image-Based Dietary Assessment Method for Pregnant Women

    PubMed Central

    Ashman, Amy M.; Collins, Clare E.; Brown, Leanne J.; Rae, Kym M.; Rollo, Megan E.

    2017-01-01

    Image-based dietary records could lower participant burden associated with traditional prospective methods of dietary assessment. They have been used in children, adolescents and adults, but have not been evaluated in pregnant women. The current study evaluated relative validity of the DietBytes image-based dietary assessment method for assessing energy and nutrient intakes. Pregnant women collected image-based dietary records (via a smartphone application) of all food, drinks and supplements consumed over three non-consecutive days. Intakes from the image-based method were compared to intakes collected from three 24-h recalls, taken on random days; once per week, in the weeks following the image-based record. Data were analyzed using nutrient analysis software. Agreement between methods was ascertained using Pearson correlations and Bland-Altman plots. Twenty-five women (27 recruited, one withdrew, one incomplete), median age 29 years, 15 primiparas, eight Aboriginal Australians, completed image-based records for analysis. Significant correlations between the two methods were observed for energy, macronutrients and fiber (r = 0.58–0.84, all p < 0.05), and for micronutrients both including (r = 0.47–0.94, all p < 0.05) and excluding (r = 0.40–0.85, all p < 0.05) supplements in the analysis. Bland-Altman plots confirmed acceptable agreement with no systematic bias. The DietBytes method demonstrated acceptable relative validity for assessment of nutrient intakes of pregnant women. PMID:28106758
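
    The Bland–Altman analysis used above reduces to a bias (mean difference between methods) and 95% limits of agreement at bias ± 1.96 SD of the differences. A minimal sketch; the paired intake values are invented:

    ```python
    import math

    # Sketch: Bland-Altman bias and 95% limits of agreement between two dietary
    # assessment methods. Paired daily intakes below are hypothetical (MJ/day).

    def bland_altman(method_a, method_b):
        diffs = [a - b for a, b in zip(method_a, method_b)]
        n = len(diffs)
        bias = sum(diffs) / n
        sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    image_based = [8.2, 9.1, 7.5, 10.3, 8.8, 9.6]
    recalls = [8.0, 9.5, 7.2, 10.0, 9.1, 9.4]
    bias, (lo, hi) = bland_altman(image_based, recalls)
    ```

    "No systematic bias" in the abstract corresponds to a bias near zero with the limits of agreement straddling it, as in this toy example.
    
    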

  7. Determination of methylmercury in marine biota samples with advanced mercury analyzer: method validation.

    PubMed

    Azemard, Sabine; Vassileva, Emilia

    2015-06-01

    In this paper, we present a simple, fast and cost-effective method for determination of methylmercury (MeHg) in marine samples. All important parameters influencing the sample preparation process were investigated and optimized. Full validation of the method was performed in accordance with ISO 17025 (ISO/IEC, 2005) and the Eurachem guidelines. Blanks, selectivity, working range (0.09-3.0ng), recovery (92-108%), intermediate precision (1.7-4.5%), traceability, limit of detection (0.009ng), limit of quantification (0.045ng) and expanded uncertainty (15%, k=2) were assessed. Estimation of the uncertainty contribution of each parameter and the demonstration of traceability of measurement results were provided as well. Furthermore, the selectivity of the method was studied by analyzing the same sample extracts by advanced mercury analyzer (AMA) and gas chromatography-atomic fluorescence spectrometry (GC-AFS). Additional validation of the proposed procedure was achieved through participation in the IAEA-461 worldwide inter-laboratory comparison exercises. Copyright © 2014 Elsevier Ltd. All rights reserved.
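
    The recovery and expanded-uncertainty figures quoted above follow standard Eurachem conventions: recovery as the ratio of measured to spiked amount, and expanded uncertainty as the combined standard uncertainty multiplied by a coverage factor k. A minimal sketch with invented replicate data and invented uncertainty components:

    ```python
    import math

    # Sketch: recovery and expanded uncertainty (coverage factor k = 2) for a
    # spiked determination, per common Eurachem conventions. Data are invented.

    def recovery_percent(measured_mean, spiked_amount):
        return 100.0 * measured_mean / spiked_amount

    def expanded_uncertainty(std_components, k=2.0):
        # combine independent standard uncertainties in quadrature, then expand
        combined = math.sqrt(sum(u ** 2 for u in std_components))
        return k * combined

    replicates = [0.93, 0.96, 0.94, 0.97, 0.95]     # ng MeHg recovered (invented)
    spike = 1.00                                    # ng MeHg added (invented)
    rec = recovery_percent(sum(replicates) / len(replicates), spike)
    U = expanded_uncertainty([0.02, 0.03, 0.05])    # example u components
    ```

    With k = 2 the expanded uncertainty corresponds to roughly 95% coverage for an approximately normal combined distribution.
    
    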

  8. Bibliometrics for Social Validation.

    PubMed

    Hicks, Daniel J

    2016-01-01

    This paper introduces a bibliometric, citation network-based method for assessing the social validation of novel research, and applies this method to the development of high-throughput toxicology research at the US Environmental Protection Agency. Social validation refers to the acceptance of novel research methods by a relevant scientific community; it is formally independent of the technical validation of methods, and is frequently studied in history, philosophy, and social studies of science using qualitative methods. The quantitative methods introduced here find that high-throughput toxicology methods are spread throughout a large and well-connected research community, which suggests high social validation. Further assessment of social validation involving mixed qualitative and quantitative methods are discussed in the conclusion.
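
    The "large and well-connected research community" finding rests on connectivity in a citation network. A minimal sketch of one simple connectivity proxy, the share of papers in the largest connected component when citations are treated as undirected links; the toy papers and citation edges are invented:

    ```python
    from collections import deque

    # Sketch: fraction of papers in the largest connected component of a
    # citation network, a simple proxy for community connectedness.
    # The toy citation edges below are invented.

    def largest_component_share(nodes, edges):
        adj = {n: set() for n in nodes}
        for a, b in edges:                 # treat citations as undirected links
            adj[a].add(b)
            adj[b].add(a)
        seen, best = set(), 0
        for start in nodes:
            if start in seen:
                continue
            queue, comp = deque([start]), 0
            seen.add(start)
            while queue:                   # breadth-first traversal
                node = queue.popleft()
                comp += 1
                for nbr in adj[node] - seen:
                    seen.add(nbr)
                    queue.append(nbr)
            best = max(best, comp)
        return best / len(nodes)

    papers = ["p1", "p2", "p3", "p4", "p5", "p6"]
    cites = [("p1", "p2"), ("p2", "p3"), ("p3", "p4"), ("p5", "p6")]
    share = largest_component_share(papers, cites)  # 4 of 6 papers connected
    ```

    Fuller bibliometric analyses would add directedness, weights, and centrality measures, but component share already distinguishes a cohesive community from scattered pockets of work.
    
    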

  9. Bibliometrics for Social Validation

    PubMed Central

    2016-01-01

    This paper introduces a bibliometric, citation network-based method for assessing the social validation of novel research, and applies this method to the development of high-throughput toxicology research at the US Environmental Protection Agency. Social validation refers to the acceptance of novel research methods by a relevant scientific community; it is formally independent of the technical validation of methods, and is frequently studied in history, philosophy, and social studies of science using qualitative methods. The quantitative methods introduced here find that high-throughput toxicology methods are spread throughout a large and well-connected research community, which suggests high social validation. Further assessment of social validation involving mixed qualitative and quantitative methods are discussed in the conclusion. PMID:28005974

  10. The Diabetes Evaluation Framework for Innovative National Evaluations (DEFINE): Construct and Content Validation Using a Modified Delphi Method.

    PubMed

    Paquette-Warren, Jann; Tyler, Marie; Fournie, Meghan; Harris, Stewart B

    2017-06-01

    In order to scale up successful innovations, more evidence is needed to evaluate programs that attempt to address the rising prevalence of diabetes and the associated burdens on patients and the healthcare system. This study aimed to assess the construct and content validity of the Diabetes Evaluation Framework for Innovative National Evaluations (DEFINE), a tool developed to guide evaluation, design and implementation with built-in knowledge translation principles. A modified Delphi method, including 3 individual rounds (questionnaire with 7-point agreement/importance Likert scales and/or open-ended questions) and 1 group round (open discussion), was conducted. Twelve experts in diabetes, research, knowledge translation, evaluation and policy from Canada (Ontario, Quebec and British Columbia) and Australia participated. The quantitative consensus criterion was an interquartile range of ≤1. Qualitative data were analyzed thematically and confirmed by participants. An importance scale was used to determine a priority multi-level indicator set. Items rated very or extremely important by 80% or more of the experts were reviewed in the final group round to build the final set. Participants reached consensus on the content and construct validity of DEFINE, including its title, overall goal, 5-step evaluation approach, medical and nonmedical determinants of health schematics, full list of indicators and associated measurement tools, priority multi-level indicator set and next steps in DEFINE's development. Validated by experts, DEFINE has the right theoretical components to comprehensively evaluate diabetes prevention and management programs and to support acquisition of evidence that could influence the knowledge translation of innovations to reduce the burden of diabetes. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
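
    The quantitative consensus criterion above (interquartile range ≤ 1 on 7-point Likert scales) is easy to make concrete. A minimal sketch; the panel ratings are hypothetical, not study data:

    ```python
    # Sketch: Delphi consensus check using the interquartile range (IQR <= 1)
    # of 7-point Likert ratings. The ratings below are invented.

    def iqr(ratings):
        xs = sorted(ratings)
        n = len(xs)

        def quantile(q):
            # linear interpolation between closest ranks
            pos = q * (n - 1)
            lo = int(pos)
            frac = pos - lo
            return xs[lo] if frac == 0 else xs[lo] + frac * (xs[lo + 1] - xs[lo])

        return quantile(0.75) - quantile(0.25)

    def has_consensus(ratings, threshold=1.0):
        return iqr(ratings) <= threshold

    panel_round = [6, 7, 6, 6, 7, 6, 5, 6, 7, 6, 6, 7]   # 12 hypothetical experts
    ```

    An IQR criterion ignores outlying individual ratings by construction, which suits Delphi rounds where a lone dissenter should not block convergence.
    
    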

  11. Factorial Invariance and Convergent Validity of the Group-Based Medical Mistrust Scale across Gender and Ethnoracial Identity.

    PubMed

    Wheldon, Christopher W; Kolar, Stephanie K; Hernandez, Natalie D; Daley, Ellen M

    2017-01-01

    The objective of this study was to assess the factorial invariance and convergent validity of the Group-Based Medical Mistrust Scale (GBMMS) across gender (male and female) and ethnoracial identity (Latino and Black). Minority students (N = 686) attending a southeastern university were surveyed in the fall of 2011. Psychometric analysis of the GBMMS was performed. A three-factor solution fit the data after the omission of two problematic items. This revised version of the GBMMS exhibited sufficient configural, metric, and scalar invariance. Convergence of the GBMMS with conceptually related measures provided further evidence of validity; however, there was variation across ethnoracial identity. The GBMMS has viable psychometric properties across gender and ethnoracial identity in Black and Latino populations.

  12. Softcopy quality ruler method: implementation and validation

    NASA Astrophysics Data System (ADS)

    Jin, Elaine W.; Keelan, Brian W.; Chen, Junqing; Phillips, Jonathan B.; Chen, Ying

    2009-01-01

A softcopy quality ruler method was implemented for the International Imaging Industry Association (I3A) Camera Phone Image Quality (CPIQ) Initiative. This work extends ISO 20462 Part 3 by creating reference digital images of known subjective image quality, complementing the hardcopy Standard Reference Stimuli (SRS). The softcopy ruler method was developed using images from a Canon EOS 1Ds Mark II D-SLR digital still camera (DSC) and a Kodak P880 point-and-shoot DSC. Images were viewed on an Apple 30-inch Cinema Display at a viewing distance of 34 inches. Ruler images were made for 16 scenes. Thirty ruler images were generated for each scene, representing ISO 20462 Standard Quality Scale (SQS) values of approximately 2 to 31 at an increment of one just noticeable difference (JND), by adjusting the system modulation transfer function (MTF). A Matlab GUI was developed to display the ruler and test images side-by-side, with a user-adjustable ruler level controlled by a slider. A validation study was performed at Kodak, Vista Point Technology, and Aptina Imaging, in which all three companies set up similar viewing labs to run the softcopy ruler method. The results show that the three sets of data are in reasonable agreement, with the differences within the range expected from observer variability. Compared to previous implementations of the quality ruler, the slider-based user interface allows approximately 2x faster assessments with 21.6% better precision.

  13. Multicenter validation study of a transplantation-specific cytogenetics grouping scheme for patients with myelodysplastic syndromes.

    PubMed

    Armand, P; Deeg, H J; Kim, H T; Lee, H; Armistead, P; de Lima, M; Gupta, V; Soiffer, R J

    2010-05-01

    Cytogenetics is an important prognostic factor for patients with myelodysplastic syndromes (MDS). However, existing cytogenetics grouping schemes are based on patients treated with supportive care, and may not be optimal for patients undergoing allo-SCT. We proposed earlier an SCT-specific cytogenetics grouping scheme for patients with MDS and AML arising from MDS, based on an analysis of patients transplanted at the Dana-Farber Cancer Institute/Brigham and Women's Hospital. Under this scheme, abnormalities of chromosome 7 and complex karyotype are considered adverse risk, whereas all others are considered standard risk. In this retrospective study, we validated this scheme on an independent multicenter cohort of 546 patients. Adverse cytogenetics was the strongest prognostic factor for outcome in this cohort. The 4-year relapse-free survival and OS were 42 and 46%, respectively, in the standard-risk group, vs 21 and 23% in the adverse group (P<0.0001 for both comparisons). This grouping scheme retained its prognostic significance irrespective of patient age, disease type, earlier leukemogenic therapy and conditioning intensity. Therapy-related disease was not associated with increased mortality in this cohort, after taking cytogenetics into account. We propose that this SCT-specific cytogenetics grouping scheme be used for patients with MDS or AML arising from MDS who are considering or undergoing SCT.
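    The two-group scheme is simple enough to state as code. A minimal sketch, assuming karyotypes arrive as lists of abnormality strings, using the common ≥3-abnormality convention for "complex" (the abstract does not define it), and matching chromosome 7 with a naive substring test for illustration only:

    ```python
    # Sketch of the SCT-specific cytogenetics grouping from the abstract:
    # chromosome 7 abnormalities or a complex karyotype -> "adverse";
    # everything else -> "standard".
    def transplant_cytogenetic_risk(abnormalities):
        """abnormalities: list of strings such as '-7', 'del(7q)', '+8'.

        Note: the '7' substring check is deliberately naive (it would also
        match e.g. 'del(17p)'); a real implementation would parse karyotypes.
        """
        chr7 = any("7" in ab for ab in abnormalities)
        complex_karyotype = len(abnormalities) >= 3  # common convention, assumed
        return "adverse" if chr7 or complex_karyotype else "standard"

    print(transplant_cytogenetic_risk(["-7"]))                    # adverse
    print(transplant_cytogenetic_risk(["+8"]))                    # standard
    print(transplant_cytogenetic_risk(["+8", "del(5q)", "+21"]))  # adverse (complex)
    ```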

  14. A Design to Improve Internal Validity of Assessments of Teaching Demonstrations

    ERIC Educational Resources Information Center

    Bartsch, Robert A.; Engelhardt Bittner, Wendy M.; Moreno, Jesse E., Jr.

    2008-01-01

    Internal validity is important in assessing teaching demonstrations both for one's knowledge and for quality assessment demanded by outside sources. We describe a method to improve the internal validity of assessments of teaching demonstrations: a 1-group pretest-posttest design with alternative forms. This design is often more practical and…

  15. Differential Item Functioning Detection Across Two Methods of Defining Group Comparisons

    PubMed Central

    Sari, Halil Ibrahim

    2014-01-01

    This study compares two methods of defining groups for the detection of differential item functioning (DIF): (a) pairwise comparisons and (b) composite group comparisons. We aim to emphasize and empirically support the notion that the choice of pairwise versus composite group definitions in DIF is a reflection of how one defines fairness in DIF studies. In this study, a simulation was conducted based on data from a 60-item ACT Mathematics test (ACT; Hanson & Béguin). The unsigned area measure method (Raju) was used as the DIF detection method. An application to operational data was also completed in the study, as well as a comparison of observed Type I error rates and false discovery rates across the two methods of defining groups. Results indicate that the amount of flagged DIF or interpretations about DIF in all conditions were not the same across the two methods, and there may be some benefits to using composite group approaches. The results are discussed in connection to differing definitions of fairness. Recommendations for practice are made. PMID:29795837
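    Raju's unsigned area measure quantifies DIF as the area between the reference- and focal-group item characteristic curves. A numerical sketch for 2PL items (the study uses Raju's closed-form expressions; grid integration is shown here purely for illustration):

    ```python
    # Unsigned area between two 2PL item characteristic curves (ICCs),
    # approximated by midpoint integration over a theta grid.
    import math

    def icc_2pl(theta, a, b):
        # 2PL ICC with the usual D = 1.7 scaling constant
        return 1.0 / (1.0 + math.exp(-1.7 * a * (theta - b)))

    def unsigned_area(a_ref, b_ref, a_foc, b_foc, lo=-6.0, hi=6.0, steps=2400):
        width = (hi - lo) / steps
        total = 0.0
        for i in range(steps):
            theta = lo + (i + 0.5) * width
            total += abs(icc_2pl(theta, a_ref, b_ref) - icc_2pl(theta, a_foc, b_foc)) * width
        return total

    # With equal discriminations the unsigned area reduces to |b_foc - b_ref|
    print(round(unsigned_area(1.0, 0.0, 1.0, 0.5), 3))  # ~0.5
    ```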

  16. Using digital photography in a clinical setting: a valid, accurate, and applicable method to assess food intake.

    PubMed

    Winzer, Eva; Luger, Maria; Schindler, Karin

    2018-06-01

Regular monitoring of food intake is hardly integrated into clinical routine. Therefore, the aim was to examine the validity, accuracy, and applicability of an appropriate, quick, and easy-to-use tool for recording food intake in a clinical setting. Two digital photography methods, the postMeal method with a picture after the meal and the pre-postMeal method with a picture before and after the meal, and the visual estimation method (plate diagram; PD) were compared against the reference method (weighed food records; WFR). A total of 420 dishes from lunch (7 weeks) were estimated with both photography methods and the visual method. Validity, applicability, accuracy, and precision of the estimation methods, and additionally food waste, macronutrient composition, and energy content, were examined. Tests of validity revealed stronger correlations for the photography methods (postMeal: r = 0.971, p < 0.001; pre-postMeal: r = 0.995, p < 0.001) than for the visual estimation method (r = 0.810; p < 0.001). The pre-postMeal method showed smaller variability (bias < 1 g) and smaller overestimation and underestimation, and it accurately and precisely estimated portion sizes in all food items. Furthermore, total food waste was 22% for lunch over the study period; the highest food waste was observed in salads and the lowest in desserts. The pre-postMeal digital photography method is valid, accurate, and applicable for monitoring food intake in a clinical setting, enabling quantitative and qualitative dietary assessment. Thus, nutritional care might be initiated earlier. The method might also be advantageous for quantitative and qualitative evaluation of food waste, with a resultant reduction in costs.

  17. Validated selective spectrophotometric methods for the kinetic determination of desloratadine in tablets and in the presence of its parent drug.

    PubMed

    Derayea, S M Sayed

    2014-11-01

Two novel, selective, validated kinetic spectrophotometric methods have been developed for the analysis of desloratadine (DSL) in its tablet formulation. Both depend on the reaction of the secondary amino group of DSL with acetaldehyde to give an N-vinylpiperidyl product, which then reacts with 2,3,5,6-tetrachloro-1,4-benzoquinone (chloranil) to form a colored N-vinylpiperidyl-substituted benzoquinone derivative. The resulting blue derivative was measured at 672 nm. The reaction conditions were carefully studied and all factors were optimized; the molar ratio between the reactants was estimated and a reaction mechanism was suggested. The analysis was carried out using initial-rate and fixed-time (at 6 min) methods. The linear concentration ranges were 3-50 and 10-60 μg/mL, with limits of detection of 3.2 and 2.2 μg/mL for the initial-rate and fixed-time methods, respectively. ICH guidelines were applied to validate the analytical performance of the proposed methods. Common excipients in the pharmaceutical formulation did not produce any significant interference, nor did loratadine, the parent compound of DSL. Different commercially available tablet formulations were successfully analyzed, with percentage recoveries ranging from 97.28 to 100.90% (±0.72-1.41%). The results were compared statistically with those of a reported method; F- and t-test data indicated that the proposed methods have accuracy and precision similar to the reported method.

  18. Validation of cleaning method for various parts fabricated at a Beryllium facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Cynthia M.

This study evaluated and documented a cleaning process that is used to clean parts that are fabricated at a beryllium facility at Los Alamos National Laboratory. The purpose of evaluating this cleaning process was to validate and approve it for future use, to assure beryllium surface levels are below the Department of Energy’s release limits without the need to sample all parts leaving the facility. Inhaling or coming in contact with beryllium can cause an immune response that can result in an individual becoming sensitized to beryllium, which can then lead to a disease of the lungs called chronic beryllium disease, and possibly lung cancer. Thirty aluminum and thirty stainless steel parts, as well as thirty-two beryllium parts, were fabricated on a lathe in the beryllium facility for the purpose of testing a parts cleaning method that involved the use of ultrasonic cleaners. A cleaning method was created, documented, validated, and approved, to reduce beryllium contamination.

  19. Classification Accuracy of MMPI-2 Validity Scales in the Detection of Pain-Related Malingering: A Known-Groups Study

    ERIC Educational Resources Information Center

    Bianchini, Kevin J.; Etherton, Joseph L.; Greve, Kevin W.; Heinly, Matthew T.; Meyers, John E.

    2008-01-01

    The purpose of this study was to determine the accuracy of "Minnesota Multiphasic Personality Inventory" 2nd edition (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) validity indicators in the detection of malingering in clinical patients with chronic pain using a hybrid clinical-known groups/simulator design. The…

  20. Differential Item Functioning Detection across Two Methods of Defining Group Comparisons: Pairwise and Composite Group Comparisons

    ERIC Educational Resources Information Center

    Sari, Halil Ibrahim; Huggins, Anne Corinne

    2015-01-01

    This study compares two methods of defining groups for the detection of differential item functioning (DIF): (a) pairwise comparisons and (b) composite group comparisons. We aim to emphasize and empirically support the notion that the choice of pairwise versus composite group definitions in DIF is a reflection of how one defines fairness in DIF…

  1. Probability of identification: a statistical model for the validation of qualitative botanical identification methods.

    PubMed

    LaBudde, Robert A; Harnly, James M

    2012-01-01

A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. This report describes the development of validation studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single-collaborator and multicollaborator study examples are given.
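    The basic observed statistic is simple: the proportion of replicates the BIM identifies. A minimal sketch, with a Wilson score interval added for illustration (the interval choice is an assumption; the paper's exact interval procedure is not given in the abstract):

    ```python
    # POI = proportion of replicates returning "Identified" (1), with a
    # Wilson score confidence interval for a single-collaborator study.
    import math

    def poi(results):
        return sum(results) / len(results)

    def wilson_interval(k, n, z=1.96):
        p = k / n
        denom = 1 + z * z / n
        center = (p + z * z / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
        return center - half, center + half

    results = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0]  # 12 replicates of target material
    p = poi(results)
    lo, hi = wilson_interval(sum(results), len(results))
    print(round(p, 3), round(lo, 3), round(hi, 3))
    ```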

  2. Development and Validation of an HPLC Method for Karanjin in Pongamia pinnata linn. Leaves.

    PubMed

    Katekhaye, S; Kale, M S; Laddha, K S

    2012-01-01

A rapid, simple and specific reversed-phase HPLC method has been developed for analysis of karanjin in Pongamia pinnata Linn. leaves. HPLC analysis was performed on a C18 column using an 85:13.5:1.5 (v/v) mixture of methanol, water and acetic acid as isocratic mobile phase at a flow rate of 1 ml/min. UV detection was at 300 nm. The method was validated for accuracy, precision, linearity, and specificity. Validation revealed the method is specific, accurate, precise, reliable and reproducible. Good linear correlation coefficients (r2 > 0.997) were obtained for calibration plots in the ranges tested. Limit of detection was 4.35 μg and limit of quantification was 16.56 μg. Intra- and inter-day RSD of retention times and peak areas was less than 1.24% and recovery was between 95.05 and 101.05%. The established HPLC method is appropriate for efficient quantitative analysis of karanjin in Pongamia pinnata leaves.

  4. Bridging the Gap Between Validation and Implementation of Non-Animal Veterinary Vaccine Potency Testing Methods.

    PubMed

    Dozier, Samantha; Brown, Jeffrey; Currie, Alistair

    2011-11-29

    In recent years, technologically advanced high-throughput techniques have been developed that replace, reduce or refine animal use in vaccine quality control tests. Following validation, these tests are slowly being accepted for use by international regulatory authorities. Because regulatory acceptance itself has not guaranteed that approved humane methods are adopted by manufacturers, various organizations have sought to foster the preferential use of validated non-animal methods by interfacing with industry and regulatory authorities. After noticing this gap between regulation and uptake by industry, we began developing a paradigm that seeks to narrow the gap and quicken implementation of new replacement, refinement or reduction guidance. A systematic analysis of our experience in promoting the transparent implementation of validated non-animal vaccine potency assays has led to the refinement of our paradigmatic process, presented here, by which interested parties can assess the local regulatory acceptance of methods that reduce animal use and integrate them into quality control testing protocols, or ensure the elimination of peripheral barriers to their use, particularly for potency and other tests carried out on production batches.

  5. A validated ultra high pressure liquid chromatographic method for qualification and quantification of folic acid in pharmaceutical preparations.

    PubMed

    Deconinck, E; Crevits, S; Baten, P; Courselle, P; De Beer, J

    2011-04-05

A fully validated UHPLC method for the identification and quantification of folic acid in pharmaceutical preparations was developed. The starting conditions for the development were calculated from the HPLC conditions of a validated method. These starting conditions were tested on four different UHPLC columns: Grace Vision HT™ C18-P, C18, C18-HL and C18-B (2 mm × 100 mm, 1.5 μm). After selection of the stationary phase, the method was further optimised by testing two aqueous and two organic phases and by adapting to a gradient method. The resulting method was fully validated based on its measurement uncertainty (accuracy profile) and robustness tests. The UHPLC method obtained for the identification and quantification of folic acid in pharmaceutical preparations will cut analysis times and solvent consumption.

  6. Development and Validation of Videotaped Scenarios

    PubMed Central

    Noel, Nora E.; Maisto, Stephen A.; Johnson, James D.; Jackson, Lee A.; Goings, Christopher D.; Hagman, Brett T.

    2013-01-01

Researchers using scenarios often neglect to validate perceived content and salience of embedded stimuli specifically with intended participants, even when such meaning is integral to the study. For example, sex and aggression stimuli are heavily influenced by culture, so participants may not perceive what researchers intended in sexual aggression scenarios. Using four studies, the authors describe the method of scenario validation to produce two videos assessing alcohol-related sexual aggression. Both videos are identical except for the presence in one video of antiforce cues that are extremely salient to the young heterosexual men. Focus groups and questionnaires validate these men's perceptions that (a) the woman was sexually interested, (b) the sexual cues were salient, (c) the antiforce cues were salient (antiaggression video only), and (d) these antiforce cues inhibited acceptance of forced sex. Results show the value of carefully selecting and validating content when assessing socially volatile variables and provide a useful template for developing culturally valid scenarios. PMID:18252938

  7. Validation of a method for the quantitation of ghrelin and unacylated ghrelin by HPLC.

    PubMed

    Staes, Edith; Rozet, Eric; Ucakar, Bernard; Hubert, Philippe; Préat, Véronique

    2010-02-05

An HPLC/UV method was first optimized for the separation and quantitation of human acylated and unacylated (or des-acyl) ghrelin from aqueous solutions. This method was validated by an original approach using accuracy profiles based on tolerance intervals for the total error measurement. The concentration range that achieved adequate accuracy extended from 1.85 to 59.30 μM and 1.93 to 61.60 μM for acylated and unacylated ghrelin, respectively. Then, optimal temperature, pH and buffer for sample storage were determined. Unacylated ghrelin was found to be stable in all conditions tested. At 37 °C, acylated ghrelin was stable at pH 4 but unstable at pH 7.4; the main degradation product was unacylated ghrelin. Finally, this validated HPLC/UV method was used to evaluate the binding of acylated and unacylated ghrelin to liposomes.

  8. Mixed Methods in Prostate Cancer Prevention and Service Utilization Planning: Combining Focus Groups, Survey Research, and Community Engagement.

    PubMed

    Tataw, David Besong; Ekúndayò, Olúgbémiga T

    2017-01-01

    This article reports on the use of sequential and integrated mixed-methods approach in a focused population and small-area analysis. The study framework integrates focus groups, survey research, and community engagement strategies in a search for evidence related to prostate cancer screening services utilization as a component of cancer prevention planning in a marginalized African American community in the United States. Research and data analysis methods are synthesized by aggregation, configuration, and interpretive analysis. The results of synthesis show that qualitative and quantitative data validate and complement each other in advancing our knowledge of population characteristics, variable associations, the complex context in which variables exist, and the best options for prevention and service planning. Synthesis of findings and interpretive analysis provided two important explanations which seemed inexplicable in regression outputs: (a) Focus group data on the limitations of the church as an educational source explain the negative association between preferred educational channels and screening behavior found in quantitative analysis. (b) Focus group data on unwelcoming provider environments explain the inconsistent relationship between knowledge of local sites and screening services utilization found in quantitative analysis. The findings suggest that planners, evaluators, and scientists should grow their planning and evaluation evidence from the community they serve.

  9. Reliability and validity of a brief method to assess nociceptive flexion reflex (NFR) threshold.

    PubMed

    Rhudy, Jamie L; France, Christopher R

    2011-07-01

The nociceptive flexion reflex (NFR) is a physiological tool to study spinal nociception. However, NFR assessment can take several minutes and expose participants to repeated suprathreshold stimulations. The 4 studies reported here assessed the reliability and validity of a brief method to assess NFR threshold that uses a single ascending series of stimulations (Peak 1 NFR), by comparing it to a well-validated method that uses 3 ascending/descending staircases of stimulations (Staircase NFR). Correlations between the NFR definitions were high, were on par with test-retest correlations of Staircase NFR, and were not affected by participant sex or chronic pain status. Results also indicated the test-retest reliabilities for the 2 definitions were similar. Using larger stimulus increments (4 mA) to assess Peak 1 NFR tended to result in higher NFR threshold estimates than the Staircase NFR definition, whereas smaller stimulus increments (2 mA) tended to result in lower NFR threshold estimates than the Staircase NFR definition. Neither NFR definition was correlated with anxiety, pain catastrophizing, or anxiety sensitivity. In sum, a single ascending series of electrical stimulations results in a reliable and valid estimate of NFR threshold. However, caution may be warranted when comparing NFR thresholds across studies that differ in the ascending stimulus increments. This brief method to assess NFR threshold is reliable and valid; therefore, it should be useful to clinical pain researchers interested in quickly assessing inter- and intra-individual differences in spinal nociceptive processes.
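    A single ascending series can be sketched in a few lines; `detect_reflex` below is a hypothetical stand-in for the EMG-based response criterion, and the step sizes mirror the increments discussed in the abstract:

    ```python
    # Toy sketch of a single ascending series for threshold estimation:
    # current increases in fixed steps until the response detector first
    # fires; that stimulus intensity is taken as the threshold.
    def ascending_threshold(detect_reflex, start_ma=0.0, step_ma=2.0, max_ma=50.0):
        current = start_ma
        while current <= max_ma:
            if detect_reflex(current):
                return current
            current += step_ma
        return None  # no reflex observed up to max_ma

    true_threshold = 17.0
    estimate = ascending_threshold(lambda ma: ma >= true_threshold, step_ma=2.0)
    print(estimate)  # first tested intensity at or above 17.0
    ```

    Note how the step size bounds the overshoot: the estimate is the first tested intensity at or above the true threshold, consistent with the abstract's observation that larger increments tend to yield higher threshold estimates.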

  10. Validity Argument for Assessing L2 Pragmatics in Interaction Using Mixed Methods

    ERIC Educational Resources Information Center

    Youn, Soo Jung

    2015-01-01

    This study investigates the validity of assessing L2 pragmatics in interaction using mixed methods, focusing on the evaluation inference. Open role-plays that are meaningful and relevant to the stakeholders in an English for Academic Purposes context were developed for classroom assessment. For meaningful score interpretations and accurate…

  11. Validated reversed phase LC method for quantitative analysis of polymethoxyflavones in citrus peel extracts.

    PubMed

    Wang, Zhenyu; Li, Shiming; Ferguson, Stephen; Goodnow, Robert; Ho, Chi-Tang

    2008-01-01

Polymethoxyflavones (PMFs), which exist exclusively in the citrus genus, have biological activities including anti-inflammatory, anticarcinogenic, and antiatherogenic properties. A validated RPLC method was developed for quantitative analysis of six major PMFs, namely nobiletin, tangeretin, sinensetin, 5,6,7,4'-tetramethoxyflavone, 3,5,6,7,3',4'-hexamethoxyflavone, and 3,5,6,7,8,3',4'-heptamethoxyflavone. The polar embedded LC stationary phase was able to fully resolve the six analogues. The developed method was fully validated in terms of linearity, accuracy, precision, sensitivity, and system suitability. The LOD of the method was calculated as 0.15 μg/mL and the recovery rate was between 97.0 and 105.1%. This analytical method was successfully applied to quantify the individual PMFs in four commercially available citrus peel extracts (CPEs). Each extract shows significant differences in PMF composition and concentration. This method may provide a simple, rapid, and reliable tool to help reveal the correlation between the bioactivity of the PMF extracts and the individual PMF content.
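    Several records in this set report LODs derived from calibration data. One common convention is the ICH-style estimate LOD = 3.3·σ/S, where σ is the residual standard deviation of the calibration line and S its slope; the sketch below uses that convention with synthetic data (an assumption for illustration, not a description of this paper's exact procedure):

    ```python
    # Illustrative ICH-style LOD estimate from a linear calibration curve.
    def fit_line(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        intercept = my - slope * mx
        return slope, intercept

    def lod(x, y):
        slope, intercept = fit_line(x, y)
        resid = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
        sd = (sum(r * r for r in resid) / (len(x) - 2)) ** 0.5  # residual SD
        return 3.3 * sd / slope

    conc = [1, 2, 5, 10, 20, 50]                      # ug/mL (synthetic)
    area = [12.1, 23.8, 60.5, 119.0, 241.2, 599.0]    # peak area (synthetic)
    print(round(lod(conc, area), 3))
    ```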

  12. Validation and Verification (V and V) Testing on Midscale Flame Resistant (FR) Test Method

    DTIC Science & Technology

    2016-12-16

Validation and verification (V&V) testing was performed on a midscale flame resistant (FR) test method developed by the Natick Soldier Research, Development and Engineering Center (NSRDEC) to complement (not replace) the capabilities of the ASTM F1930 Standard Test Method for Evaluation of Flame Resistant Clothing for Protection against Fire Simulations Using an Instrumented Manikin.

  13. [Validation Study for Analytical Method of Diarrhetic Shellfish Poisons in 9 Kinds of Shellfish].

    PubMed

    Yamaguchi, Mizuka; Yamaguchi, Takahiro; Kakimoto, Kensaku; Nagayoshi, Haruna; Okihashi, Masahiro; Kajimura, Keiji

    2016-01-01

A method was developed for the simultaneous determination of okadaic acid, dinophysistoxin-1 and dinophysistoxin-2 in shellfish using ultra performance liquid chromatography with tandem mass spectrometry. Shellfish poisons in spiked samples were extracted with methanol and 90% methanol, and were hydrolyzed with 2.5 mol/L sodium hydroxide solution. Purification was done on an HLB solid-phase extraction column. The method was validated in accordance with the notification of the Ministry of Health, Labour and Welfare of Japan. In the validation study in nine kinds of shellfish, the trueness, repeatability, and within-laboratory reproducibility were 79-101%, less than 12%, and less than 16%, respectively. The trueness and precision met the target values of the notification.

  14. Validity of the remote food photography method against doubly labeled water among minority preschoolers

    USDA-ARS?s Scientific Manuscript database

    The aim of this study was to determine the validity of energy intake (EI) estimations made using the remote food photography method (RFPM) compared to the doubly labeled water (DLW) method in minority preschool children in a free-living environment. Seven days of food intake and spot urine samples...

  15. Validation of the Abdominal Pain Index using a revised scoring method.

    PubMed

    Laird, Kelsey T; Sherman, Amanda L; Smith, Craig A; Walker, Lynn S

    2015-06-01

Evaluate the psychometric properties of child- and parent-report versions of the four-item Abdominal Pain Index (API) in children with functional abdominal pain (FAP) and healthy controls, using a revised scoring method that facilitates comparisons of scores across samples and time. Pediatric patients aged 8-18 years with FAP and controls completed the API at baseline (N = 1,967); a subset of their parents (N = 290) completed the API regarding the child's pain. Subsets of patients completed follow-up assessments at 2 weeks (N = 231), 3 months (N = 330), and 6 months (N = 107). Subsets of both patients (N = 389) and healthy controls (N = 172) completed a long-term follow-up assessment (mean age at follow-up = 20.21 years, SD = 3.75). The API demonstrated good concurrent, discriminant, and construct validity, as well as good internal consistency. We conclude that the API, using the revised scoring method, is a useful, reliable, and valid measure of abdominal pain severity.

  16. Validity of a Simple Method for Measuring Force-Velocity-Power Profile in Countermovement Jump.

    PubMed

    Jiménez-Reyes, Pedro; Samozino, Pierre; Pareja-Blanco, Fernando; Conceição, Filipe; Cuadrado-Peñafiel, Víctor; González-Badillo, Juan José; Morin, Jean-Benoît

    2017-01-01

To analyze the reliability and validity of a simple computation method to evaluate force (F), velocity (v), and power (P) output during a countermovement jump (CMJ) suitable for use in field conditions, and to verify the validity of this computation method for the CMJ force-velocity (F-v) profile (including unloaded and loaded jumps) in trained athletes. Sixteen high-level male sprinters and jumpers performed maximal CMJs under 6 different load conditions (0-87 kg). A force plate sampling at 1000 Hz was used to record vertical ground-reaction force and derive vertical-displacement data during CMJ trials. For each condition, mean F, v, and P of the push-off phase were determined from both force-plate data (reference method) and simple computation measures based on body mass, jump height (from flight time), and push-off distance, and were used to establish the linear F-v relationship for each individual. Mean absolute bias values were 0.9% (±1.6%), 4.7% (±6.2%), 3.7% (±4.8%), and 5% (±6.8%) for F, v, P, and the slope of the F-v relationship (SFv), respectively. Both methods showed high correlations for F-v-profile-related variables (r = .985-.991). Finally, all variables computed from the simple method showed high reliability, with ICC > .980 and CV < 1.0%. These results suggest that the simple method presented here is valid and reliable for computing CMJ force, velocity, power, and F-v profiles in athletes and could be used in practice under field conditions when body mass, push-off distance, and jump height are known.
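    The "simple computation method" for CMJs in this literature rests on work-energy relations over the push-off. A sketch under those assumptions (constant mean force over the push-off distance; g = 9.81 m/s²): with body mass m, jump height h (from flight time), and push-off distance h_po, mean force F = m·g·(h/h_po + 1), mean velocity v = √(g·h/2), and mean power P = F·v.

    ```python
    # Simple-method estimates of mean force, velocity, and power during a
    # countermovement jump from body mass, jump height, and push-off distance.
    import math

    G = 9.81  # m/s^2

    def jump_height_from_flight_time(t_flight):
        # Ballistic flight: h = g * t^2 / 8
        return G * t_flight ** 2 / 8.0

    def simple_fvp(mass, h, h_po):
        f = mass * G * (h / h_po + 1.0)   # mean force over push-off (N)
        v = math.sqrt(G * h / 2.0)        # mean push-off velocity (m/s)
        return f, v, f * v                # mean power (W)

    h = jump_height_from_flight_time(0.55)  # ~0.37 m
    f, v, p = simple_fvp(75.0, h, 0.40)
    print(round(h, 3), round(f, 1), round(v, 2), round(p, 1))
    ```

    Repeating the computation across loaded jumps gives the (F, v) points from which the linear F-v relationship and its slope SFv are fitted.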

  17. Establishing the Reliability and Validity of a Computerized Assessment of Children's Working Memory for Use in Group Settings

    ERIC Educational Resources Information Center

    St Clair-Thompson, Helen

    2014-01-01

    The aim of the present study was to investigate the reliability and validity of a brief standardized assessment of children's working memory: "Lucid Recall." Although there are many established assessments of working memory, "Lucid Recall" is fully automated and can therefore be administered in a group setting. It is therefore…

  18. The Factorial Validity of The Maslach Burnout Inventory--General Survey in Representative Samples of Eight Different Occupational Groups

    ERIC Educational Resources Information Center

    Langballe, Ellen Melbye; Falkum, Erik; Innstrand, Siw Tone; Aasland, Olaf Gjerlow

    2006-01-01

    The Maslach Burnout Inventory--General Survey (MBI-GS) is designed to measure the three subdimensions (exhaustion, cynicism, and professional efficacy) of burnout in a wide range of occupations. This article examines the factorial validity of the MBI-GS across eight different occupational groups in Norway: lawyers, physicians, nurses, teachers,…

  19. Validation approach for a fast and simple targeted screening method for 75 antibiotics in meat and aquaculture products using LC-MS/MS.

    PubMed

    Dubreil, Estelle; Gautier, Sophie; Fourmond, Marie-Pierre; Bessiral, Mélaine; Gaugain, Murielle; Verdon, Eric; Pessel, Dominique

    2017-04-01

    An approach is described to validate a fast and simple targeted screening method for antibiotic analysis in meat and aquaculture products by LC-MS/MS. The strategy of validation was applied for a panel of 75 antibiotics belonging to different families, i.e., penicillins, cephalosporins, sulfonamides, macrolides, quinolones and phenicols. The samples were extracted once with acetonitrile, concentrated by evaporation and injected into the LC-MS/MS system. The approach chosen for the validation was based on the Community Reference Laboratory (CRL) guidelines for the validation of screening qualitative methods. The aim of the validation was to prove sufficient sensitivity of the method to detect all the targeted antibiotics at the level of interest, generally the maximum residue limit (MRL). A robustness study was also performed to test the influence of different factors. The validation showed that the method is valid to detect and identify 73 antibiotics of the 75 antibiotics studied in meat and aquaculture products at the validation levels.

  20. Validation and Clinical Evaluation of a Novel Method To Measure Miltefosine in Leishmaniasis Patients Using Dried Blood Spot Sample Collection

    PubMed Central

    Rosing, H.; Hillebrand, M. J. X.; Blesson, S.; Mengesha, B.; Diro, E.; Hailu, A.; Schellens, J. H. M.; Beijnen, J. H.

    2016-01-01

    To facilitate future pharmacokinetic studies of combination treatments against leishmaniasis in remote regions in which the disease is endemic, a simple cheap sampling method is required for miltefosine quantification. The aims of this study were to validate a liquid chromatography-tandem mass spectrometry method to quantify miltefosine in dried blood spot (DBS) samples and to validate its use with Ethiopian patients with visceral leishmaniasis (VL). Since hematocrit (Ht) levels are typically severely decreased in VL patients, returning to normal during treatment, the method was evaluated over a range of clinically relevant Ht values. Miltefosine was extracted from DBS samples using a simple method of pretreatment with methanol, resulting in >97% recovery. The method was validated over a calibration range of 10 to 2,000 ng/ml, and accuracy and precision were within ±11.2% and ≤7.0% (≤19.1% at the lower limit of quantification), respectively. The method was accurate and precise for blood spot volumes between 10 and 30 μl and for Ht levels of 20 to 35%, although a linear effect of Ht levels on miltefosine quantification was observed in the bioanalytical validation. DBS samples were stable for at least 162 days at 37°C. Clinical validation of the method using paired DBS and plasma samples from 16 VL patients showed a median observed DBS/plasma miltefosine concentration ratio of 0.99, with good correlation (Pearson's r = 0.946). Correcting for patient-specific Ht levels did not further improve the concordance between the sampling methods. This successfully validated method to quantify miltefosine in DBS samples was demonstrated to be a valid and practical alternative to venous blood sampling that can be applied in future miltefosine pharmacokinetic studies with leishmaniasis patients, without Ht correction. PMID:26787691
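
    The accuracy and precision figures quoted above are standard bioanalytical acceptance statistics. As a generic illustration (the replicate values below are invented, not the study's data), accuracy is the percent bias of the replicate mean from the nominal concentration and precision is the percent coefficient of variation:

```python
def accuracy_and_precision(measured, nominal):
    """Accuracy (% bias) and precision (% CV) from replicate measurements
    at one nominal concentration. Sample standard deviation (n - 1) is used."""
    n = len(measured)
    mean = sum(measured) / n
    sd = (sum((m - mean) ** 2 for m in measured) / (n - 1)) ** 0.5
    bias_pct = 100.0 * (mean - nominal) / nominal  # accuracy
    cv_pct = 100.0 * sd / mean                     # precision
    return bias_pct, cv_pct

# Hypothetical replicates at a nominal 1000 ng/ml level
bias, cv = accuracy_and_precision([980.0, 1010.0, 1004.0, 990.0, 1016.0], nominal=1000.0)
```

    Validation criteria such as the ±11.2% accuracy and ≤7.0% precision reported in the abstract are then checked against these two numbers per concentration level.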

  1. Validity and relative validity of a novel digital approach for 24-h dietary recall in athletes.

    PubMed

    Baker, Lindsay B; Heaton, Lisa E; Stein, Kimberly W; Nuccio, Ryan P; Jeukendrup, Asker E

    2014-04-30

    We developed a digital dietary analysis tool for athletes (DATA) using a modified 24-h recall method and an integrated, customized nutrient database. The purpose of this study was to assess DATA's validity and relative validity by measuring its agreement with registered dietitians' (RDs) direct observations (OBSERVATION) and 24-h dietary recall interviews using the USDA 5-step multiple-pass method (INTERVIEW), respectively. Fifty-six athletes (14-20 y) completed DATA and INTERVIEW in randomized counter-balanced order. OBSERVATION (n = 26) consisted of RDs recording participants' food/drink intake in a 24-h period and was completed the day prior to DATA and INTERVIEW. Agreement among methods was estimated using a repeated measures t-test and Bland-Altman analysis. The paired differences (with 95% confidence intervals) between DATA and OBSERVATION were not significant for carbohydrate (10.1%, -1.2-22.7%) and protein (14.1%, -3.2-34.5%) but were significant for energy (14.4%, 1.2-29.3%). There were no differences between DATA and INTERVIEW for energy (-1.1%, -9.1-7.7%), carbohydrate (0.2%, -7.1-8.0%) or protein (-2.7%, -11.3-6.7%). Bland-Altman analysis indicated significant positive correlations between absolute values of the differences and the means for OBSERVATION vs. DATA (r = 0.40 and r = 0.47 for energy and carbohydrate, respectively) and INTERVIEW vs. DATA (r = 0.52, r = 0.29, and r = 0.61 for energy, carbohydrate, and protein, respectively). There were also wide 95% limits of agreement (LOA) for most method comparisons. The mean bias ratio (with 95% LOA) for OBSERVATION vs. DATA was 0.874 (0.551-1.385) for energy, 0.906 (0.522-1.575) for carbohydrate, and 0.895 (0.395-2.031) for protein. The mean bias ratio (with 95% LOA) for INTERVIEW vs. DATA was 1.016 (0.538-1.919) for energy, 0.995 (0.563-1.757) for carbohydrate, and 1.031 (0.514-2.068) for protein. DATA has good relative validity for group-level comparisons in athletes. However, there are large variations…
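
    The mean bias ratios with 95% limits of agreement reported above are commonly obtained by log-transforming the per-subject method ratios, taking mean ± 1.96 SD, and back-transforming. The sketch below reconstructs that calculation for illustration; the study's exact computation may differ, and the data here are invented:

```python
import math

def ratio_limits_of_agreement(reference, test):
    """Mean bias ratio (geometric mean of test/reference) and 95% limits of
    agreement on the ratio scale, via log-transformed Bland-Altman analysis."""
    logs = [math.log(t / r) for r, t in zip(reference, test)]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    return math.exp(mean), math.exp(mean - 1.96 * sd), math.exp(mean + 1.96 * sd)

# Hypothetical paired energy intakes (kcal) from two methods
bias, lower, upper = ratio_limits_of_agreement(
    [2100.0, 2400.0, 1900.0, 2250.0], [1950.0, 2500.0, 1980.0, 2300.0]
)
```

    A bias ratio near 1.0 indicates no systematic over- or under-reporting at the group level, while wide limits (e.g. 0.551-1.385) reflect large individual-level variation, as the abstract notes.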

  2. E-learning for Critical Thinking: Using Nominal Focus Group Method to Inform Software Content and Design

    PubMed Central

    Parker, Steve; Mayner, Lidia; Michael Gillham, David

    2015-01-01

    Background: Undergraduate nursing students are often confused by multiple understandings of critical thinking. In response to this situation, the Critiique for critical thinking (CCT) project was implemented to provide consistent structured guidance about critical thinking. Objectives: This paper introduces Critiique software, describes initial validation of the content of this critical thinking tool and explores wider applications of the Critiique software. Materials and Methods: Critiique is flexible, authorable software that guides students step-by-step through critical appraisal of research papers. The spelling of Critiique was deliberate, so as to acquire a unique web domain name and associated logo. The CCT project involved implementation of a modified nominal focus group process with academic staff working together to establish common understandings of critical thinking. Previous work established a consensus about critical thinking in nursing and provided a starting point for the focus groups. The study was conducted at an Australian university campus with the focus group guided by open ended questions. Results: Focus group data established categories of content that academic staff identified as important for teaching critical thinking. This emerging focus group data was then used to inform modification of Critiique software so that students had access to consistent and structured guidance in relation to critical thinking and critical appraisal. Conclusions: The project succeeded in using focus group data from academics to inform software development while at the same time retaining the benefits of broader philosophical dimensions of critical thinking. PMID:26835469

  3. Criterion validity and clinical usefulness of Attention Deficit Hyperactivity Disorder Rating Scale IV in attention deficit hyperactivity disorder (ADHD) as a function of method and age.

    PubMed

    López-Villalobos, José A; Andrés-De Llano, Jesús; López-Sánchez, María V; Rodríguez-Molinero, Luis; Garrido-Redondo, Mercedes; Sacristán-Martín, Ana M; Martínez-Rivera, María T; Alberola-López, Susana

    2017-02-01

    The aim of this research is to analyze the Attention Deficit Hyperactivity Disorder Rating Scale IV (ADHD RS-IV) criterion validity and its clinical usefulness for the assessment of Attention Deficit Hyperactivity Disorder (ADHD) as a function of assessment method and age. A sample was obtained from an epidemiological study (n = 1095, 6-16 years). Clinical cases of ADHD (ADHD-CL) were selected by dimensional ADHD RS-IV and later by clinical interview (DSM-IV). ADHD-CL cases were compared with four categorical results of ADHD RS-IV provided by parents (CATPA), teachers (CATPR), either parents or teachers (CATPAOPR) and both parents and teachers (CATPA&PR). Criterion validity and clinical usefulness of the answer modalities to ADHD RS-IV were studied. The ADHD-CL rate was 6.9% in childhood, 6.2% in preadolescence and 6.9% in adolescence. Alternative methods to the clinical interview led to increased numbers of ADHD cases in all age groups analyzed, in the following sequence: CATPAOPR > CATPR > CATPA > CATPA&PR > ADHD-CL. CATPA&PR was the procedure with the greatest validity, specificity and clinical usefulness in all three age groups, particularly in childhood. Isolated use of ADHD RS-IV leads to an increase in ADHD cases compared to clinical interview, and varies depending on the procedure used.
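
    Criterion validity of a categorical screen against a clinical-interview criterion reduces to standard 2×2 diagnostic indices. A minimal illustration (the counts below are invented, not the study's data):

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Sensitivity, specificity and Youden's J from a 2x2 table of
    screen result (rows) vs. criterion diagnosis (columns)."""
    sensitivity = tp / (tp + fn)   # cases correctly flagged by the screen
    specificity = tn / (tn + fp)   # non-cases correctly cleared
    youden_j = sensitivity + specificity - 1.0
    return sensitivity, specificity, youden_j

# Hypothetical counts: 60 true positives, 40 false positives,
# 15 false negatives, 885 true negatives
sens, spec, j = diagnostic_indices(tp=60, fp=40, fn=15, tn=885)
```

    Computing these indices separately for each answer modality (CATPA, CATPR, CATPAOPR, CATPA&PR) against the interview-based diagnosis is how procedures like those in the abstract can be ranked.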

  4. Testing and Validation of the Dynamic Inertia Measurement Method

    NASA Technical Reports Server (NTRS)

    Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David

    2015-01-01

    The Dynamic Inertia Measurement (DIM) method uses a ground vibration test setup to determine the mass properties of an object using information from frequency response functions. Most conventional mass properties testing involves using spin tables or pendulum-based swing tests, which for large aerospace vehicles becomes increasingly difficult and time-consuming, and therefore expensive, to perform. The DIM method has been validated on small test articles but has not been successfully proven on large aerospace vehicles. In response, the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) conducted mass properties testing on an "iron bird" test article that is comparable in mass and scale to a fighter-type aircraft. The simple two-I-beam design of the "iron bird" was selected to ensure accurate analytical mass properties. Traditional swing testing was also performed to compare the level of effort, amount of resources, and quality of data with the DIM method. The DIM test showed favorable results for the center of gravity and moments of inertia; however, the products of inertia showed disagreement with analytical predictions.

  5. Validity of total and segmental impedance measurements for prediction of body composition across ethnic population groups.

    PubMed

    Deurenberg, P; Deurenberg-Yap, M; Schouten, F J M

    2002-03-01

    To test the impact of body build factors on the validity of impedance-based body composition predictions across (ethnic) population groups and to study the suitability of segmental impedance measurements. Cross-sectional observational study. Ministry of Health and School of Physical Education, Nanyang Technological University, Singapore. A total of 291 female and male Chinese, Malay and Indian Singaporeans, aged 18-69, body mass index (BMI) 16.0-40.2 kg/m². Anthropometric parameters were measured in addition to impedance (100 kHz) of the total body, arms and legs. Impedance indexes were calculated as height²/impedance. Arm length (span) and leg length (sitting height), wrist and knee width were measured, from which body build indices were calculated. Total body water (TBW) was measured using deuterium oxide dilution. Extracellular water (ECW) was measured using bromide dilution. Body fat percentage was determined using a chemical four-compartment model. The bias of TBW predicted from the total body impedance index (bias: measured minus predicted TBW) was different among the three ethnic groups, TBW being significantly underestimated in Indians compared to Chinese and Malays. This bias was found to be dependent on body water distribution (ECW/TBW) and parameters of body build, mainly relative (to height) arm length. After correcting for differences in body water distribution and body build parameters, the differences in bias across the ethnic groups disappeared. The impedance index using total body impedance was better correlated with TBW than the impedance index of arm or leg impedance, even after corrections for body build parameters. The study shows that ethnic-specific bias of impedance-based prediction formulas for body composition is due mainly to differences in body build among the ethnic groups. This means that the use of 'general' prediction equations across different (ethnic) population groups without prior testing of their validity should be avoided. Total…
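
    Impedance-based prediction equations of the kind evaluated above are typically simple linear fits of TBW on the impedance index height²/Z, with bias then computed as measured minus predicted TBW. A minimal sketch with synthetic numbers (coefficients and data are invented, not the study's):

```python
def fit_tbw_prediction(impedance_index, tbw):
    """Ordinary least-squares line TBW = a * (height^2 / Z) + b."""
    n = len(tbw)
    mx = sum(impedance_index) / n
    my = sum(tbw) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(impedance_index, tbw))
    sxx = sum((x - mx) ** 2 for x in impedance_index)
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Synthetic calibration data: impedance index (arbitrary units), TBW (litres)
a, b = fit_tbw_prediction([40.0, 50.0, 60.0, 70.0], [25.0, 30.0, 35.0, 40.0])
predicted = a * 55.0 + b
# bias for a new subject = measured_tbw - predicted
```

    Fitting the equation in one population group and computing the bias in another is exactly the cross-group validity check the abstract recommends before 'general' equations are applied.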

  6. Hierarchical semi-numeric method for pairwise fuzzy group decision making.

    PubMed

    Marimin, M; Umano, M; Hatono, I; Tamura, H

    2002-01-01

    Gradual improvements to a single-level semi-numeric method (representation of linguistic-label preferences by fuzzy-set computation) for pairwise fuzzy group decision making are summarized. The method is extended to solve multiple-criteria, hierarchically structured pairwise fuzzy group decision-making problems. The problems are hierarchically structured into focus, criteria, and alternatives. Decision makers express their evaluations of criteria and alternatives based on each criterion by using linguistic labels. The labels are converted into, and processed as, triangular fuzzy numbers (TFNs). Evaluations of criteria yield relative criteria weights. Evaluations of the alternatives, based on each criterion, yield a degree of preference for each alternative or a degree of satisfaction for each preference value. By using a neat ordered weighted average (OWA) or a fuzzy weighted average operator, solutions obtained based on each criterion are aggregated into final solutions. The hierarchical semi-numeric method is suitable for solving larger and more complex pairwise fuzzy group decision-making problems. The proposed method has been verified and applied to solve some real cases and is compared to Saaty's (1996) analytic hierarchy process (AHP) method.
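
    The core mechanics described above (linguistic labels mapped to TFNs, then aggregated by a weighted-average operator) can be sketched very compactly. The label scale and weights below are invented for illustration, and this shows only the fuzzy weighted average, not the full hierarchical procedure:

```python
def tfn_weighted_average(tfns, weights):
    """Weighted average of triangular fuzzy numbers (a, b, c): component-wise
    weighted mean, one common form of the fuzzy weighted average operator."""
    total = sum(weights)
    return tuple(
        sum(w * t[i] for w, t in zip(weights, tfns)) / total for i in range(3)
    )

# Hypothetical linguistic scale mapped onto TFNs over [0, 1]
labels = {
    "low": (0.0, 0.25, 0.5),
    "medium": (0.25, 0.5, 0.75),
    "high": (0.5, 0.75, 1.0),
}

# One alternative rated "medium" and "high" on two criteria with weights 0.4 / 0.6
agg = tfn_weighted_average([labels["medium"], labels["high"]], weights=[0.4, 0.6])
```

    Repeating the aggregation up the hierarchy (alternatives per criterion, then criteria toward the focus) gives the final fuzzy scores, which are then ranked or defuzzified.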

  7. Analysis of Carbamate Pesticides: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS666

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Koester, C

    The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method for analysis of aldicarb, bromadiolone, carbofuran, oxamyl, and methomyl in water by high performance liquid chromatography tandem mass spectrometry (HPLC-MS/MS), titled Method EPA MS666. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in MS666 for analysis of carbamate pesticides in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of Method EPA MS666 can be determined.

  8. Analysis of Phosphonic Acids: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS999

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Vu, A; Koester, C

    The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method titled Analysis of Diisopropyl Methylphosphonate, Ethyl Hydrogen Dimethylamidophosphate, Isopropyl Methylphosphonic Acid, Methylphosphonic Acid, and Pinacolyl Methylphosphonic Acid in Water by Multiple Reaction Monitoring Liquid Chromatography/Tandem Mass Spectrometry: EPA Version MS999. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in EPA Method MS999 for analysis of the listed phosphonic acids and surrogates in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of EPA Method MS999 can be determined.

  9. Evidence flow graph methods for validation and verification of expert systems

    NASA Technical Reports Server (NTRS)

    Becker, Lee A.; Green, Peter G.; Bhatnagar, Jayant

    1989-01-01

    The results of an investigation into the use of evidence flow graph techniques for performing validation and verification of expert systems are given. A translator to convert horn-clause rule bases into evidence flow graphs, a simulation program, and methods of analysis were developed. These tools were then applied to a simple rule base which contained errors. It was found that the method was capable of identifying a variety of problems, for example that the order of presentation of input data or small changes in critical parameters could affect the output from a set of rules.

  10. Evidence flow graph methods for validation and verification of expert systems

    NASA Technical Reports Server (NTRS)

    Becker, Lee A.; Green, Peter G.; Bhatnagar, Jayant

    1988-01-01

    This final report describes the results of an investigation into the use of evidence flow graph techniques for performing validation and verification of expert systems. This was approached by developing a translator to convert horn-clause rule bases into evidence flow graphs, a simulation program, and methods of analysis. These tools were then applied to a simple rule base which contained errors. It was found that the method was capable of identifying a variety of problems, for example that the order of presentation of input data or small changes in critical parameters could affect the output from a set of rules.

  11. Independent validation of the prognostic capacity of the ISUP prostate cancer grade grouping system for radiation treated patients with long-term follow-up.

    PubMed

    Spratt, D E; Jackson, W C; Abugharib, A; Tomlins, S A; Dess, R T; Soni, P D; Lee, J Y; Zhao, S G; Cole, A I; Zumsteg, Z S; Sandler, H; Hamstra, D; Hearn, J W; Palapattu, G; Mehra, R; Morgan, T M; Feng, F Y

    2016-09-01

    There has been a recent proposal to change the grading system of prostate cancer into a five-tier grade grouping system. To date, its prognostic impact has been demonstrated only with regard to biochemical recurrence-free survival (bRFS), with short follow-up (3 years). Between 1990 and 2013, 847 consecutive men were treated with definitive external beam radiation therapy at a single academic center. To validate the new grade grouping system, bRFS, distant metastases-free survival (DMFS) and prostate cancer-specific survival (PCSS) were calculated. Adjusted Kaplan-Meier and multivariable Cox regression analyses were performed to assess the independent impact of the new grade grouping system. Discriminatory analyses were performed to compare the commonly used three-tier Gleason score system (6, 7 and 8-10) to the new system. The median follow-up of our cohort was 88 months. The five grade groups showed distinct, independently validated risks of bRFS (group 1 as reference; adjusted hazard ratios (aHRs) of 1.35, 2.16, 1.79 and 3.84 for groups 2-5, respectively). Furthermore, a clear stratification was demonstrated for DMFS (aHR 2.03, 3.18, 3.62 and 13.77 for groups 2-5, respectively) and PCSS (aHR 3.00, 5.32, 6.02 and 39.02 for groups 2-5, respectively). The five-tier grade group system had improved prognostic discrimination for all end points compared with the commonly used three-tiered system (that is, Gleason score 6, 7 and 8-10). In a large independent radiotherapy cohort with long-term follow-up, we have validated the bRFS benefit of the proposed five-tier grade grouping system. Furthermore, we have demonstrated that the system is highly prognostic for DMFS and PCSS. Grade group 5 had markedly worse outcomes for all end points, and future work is necessary to improve outcomes in these patients.
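
    The survival end points above (bRFS, DMFS, PCSS) rest on standard time-to-event machinery. As a generic illustration with toy data (the study's adjusted analyses additionally use multivariable Cox regression, which is beyond this sketch), a minimal Kaplan-Meier product-limit estimator:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate: returns (t, S(t)) at each event time.
    `events[i]` is 1 if the i-th subject had the event at times[i], 0 if censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    s = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        n_at_t = 0
        # Group all subjects sharing this time point
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n_at_t += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / at_risk  # product-limit update
            curve.append((t, s))
        at_risk -= n_at_t
    return curve

# Toy data: events at t=1 and t=2, one subject censored at t=3
curve = kaplan_meier([1.0, 2.0, 3.0], [1, 1, 0])
```

    Plotting such curves per grade group, and comparing the resulting separation, is the unadjusted counterpart of the stratification the abstract reports.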

  12. 40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 14 2010-07-01 2010-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...

  13. 40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 14 2011-07-01 2011-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...

  14. 40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 15 2014-07-01 2014-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...

  15. 40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 15 2013-07-01 2013-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...

  16. 40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...

  17. State of the art in the validation of screening methods for the control of antibiotic residues: is there a need for further development?

    PubMed

    Gaudin, Valérie

    2017-09-01

    Screening methods are used as a first-line approach to detect the presence of antibiotic residues in food of animal origin. The validation process guarantees that the method is fit-for-purpose, suited to regulatory requirements, and provides evidence of its performance. This article is focused on intra-laboratory validation. The first step in validation is characterisation of performance, and the second step is the validation itself with regard to pre-established criteria. The validation approaches can be absolute (a single method) or relative (comparison of methods), overall (combination of several characteristics in one) or criterion-by-criterion. Various approaches to validation, in the form of regulations, guidelines or standards, are presented and discussed to draw conclusions on their potential application for different residue screening methods, and to determine whether or not they reach the same conclusions. The approach by comparison of methods is not suitable for screening methods for antibiotic residues. The overall approaches, such as probability of detection (POD) and accuracy profile, are increasingly used in other fields of application. They may be of interest for screening methods for antibiotic residues. Finally, the criterion-by-criterion approach (Decision 2002/657/EC and the European guideline for the validation of screening methods), usually applied to the screening methods for antibiotic residues, introduced a major characteristic and an improvement in the validation, i.e. the detection capability (CCβ). In conclusion, screening methods are constantly evolving, thanks to the development of new biosensors or liquid chromatography coupled to tandem-mass spectrometry (LC-MS/MS) methods. There have been clear changes in validation approaches over the last 20 years. Continued progress is required, and perspectives for future development of guidelines, regulations and standards for validation are presented here.
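
    At its simplest, the probability of detection (POD) mentioned above is the fraction of spiked samples flagged positive at a given concentration, with a confidence interval attached. The sketch below uses a Wilson score interval as one common choice; the cited guidelines may prescribe a different procedure, and the counts are invented:

```python
import math

def pod_with_wilson_ci(detections, trials, z=1.96):
    """Probability of detection with an approximate 95% Wilson score interval."""
    p = detections / trials
    denom = 1 + z * z / trials
    centre = (p + z * z / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(
        p * (1 - p) / trials + z * z / (4 * trials * trials)
    )
    return p, centre - half, centre + half

# Hypothetical screening run: 57 of 60 spiked samples detected at the level of interest
pod, low, high = pod_with_wilson_ci(detections=57, trials=60)
```

    A screening method is then judged, for example, on whether the POD at the level of interest (e.g. the MRL) meets the required detection capability with the stated confidence.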

  18. Validity and reliability of a scale to measure genital body image.

    PubMed

    Zielinski, Ruth E; Kane-Low, Lisa; Miller, Janis M; Sampselle, Carolyn

    2012-01-01

    Women's body image dissatisfaction extends to body parts usually hidden from view--their genitals. The ability to measure genital body image is limited by a lack of valid and reliable questionnaires. We subjected a previously developed questionnaire, the Genital Self Image Scale (GSIS), to psychometric testing using a variety of methods. Five experts determined the content validity of the scale. Then, using four participant groups, factor analysis was performed to determine construct validity and to identify factors. Further construct validity was established using the contrasting-groups approach. Internal consistency and test-retest reliability were determined. Twenty-one of the 29 items were considered content valid. Two items were added based on expert suggestions. Factor analysis was undertaken, resulting in four factors, identified as Genital Confidence, Appeal, Function, and Comfort. The revised scale (GSIS-20) included 20 items explaining 59.4% of the variance. Women indicating an interest in genital cosmetic surgery exhibited significantly lower scores on the GSIS-20 than those who did not. The final 20-item scale exhibited internal reliability across all sample groups as well as test-retest reliability. The GSIS-20 provides a measure of genital body image demonstrating reliability and validity across several populations of women.
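
    Internal consistency of a multi-item scale such as the GSIS-20 is conventionally summarised with Cronbach's alpha. A minimal, generic implementation (the abstract does not specify which statistic was used, so this is illustrative):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / total variance).
    `item_scores` is a list of per-respondent rows, one score per item."""
    k = len(item_scores[0])  # number of items

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses: 5 respondents x 3 items on a 1-5 scale
alpha = cronbach_alpha([[4, 5, 4], [3, 3, 2], [5, 5, 5], [2, 3, 2], [4, 4, 3]])
```

    Values above roughly 0.7-0.8 are the usual benchmarks for acceptable internal reliability; test-retest reliability would instead correlate scores from two administrations.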

  19. Principles for valid histopathologic scoring in research

    PubMed Central

    Gibson-Corley, Katherine N.; Olivier, Alicia K.; Meyerholz, David K.

    2013-01-01

    Histopathologic scoring is a tool by which semi-quantitative data can be obtained from tissues. Initially, a thorough understanding of the experimental design, study objectives and methods are required to allow the pathologist to appropriately examine tissues and develop lesion scoring approaches. Many principles go into the development of a scoring system such as tissue examination, lesion identification, scoring definitions and consistency in interpretation. Masking (a.k.a. “blinding”) of the pathologist to experimental groups is often necessary to constrain bias and multiple mechanisms are available. Development of a tissue scoring system requires appreciation of the attributes and limitations of the data (e.g. nominal, ordinal, interval and ratio data) to be evaluated. Incidence, ordinal and rank methods of tissue scoring are demonstrated along with key principles for statistical analyses and reporting. Validation of a scoring system occurs through two principal measures: 1) validation of repeatability and 2) validation of tissue pathobiology. Understanding key principles of tissue scoring can help in the development and/or optimization of scoring systems so as to consistently yield meaningful and valid scoring data. PMID:23558974
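
    The repeatability validation described above is typically quantified with an inter-rater agreement statistic. Cohen's kappa is one common choice for paired scores from two masked observers (the article discusses principles rather than prescribing kappa, so this is an illustrative companion, with invented scores):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: observed agreement corrected for chance agreement,
    for paired ordinal/nominal scores from two raters."""
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n  # observed
    pe = sum(                                                   # expected by chance
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (po - pe) / (1 - pe)

# Hypothetical lesion scores (0-2) from two masked pathologists on six slides
kappa = cohens_kappa([0, 0, 1, 1, 2, 2], [0, 0, 1, 1, 2, 1])
```

    For ordinal scores, a weighted kappa (penalising large disagreements more) is often preferred; kappa near 1 indicates scoring that is repeatable beyond chance.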

  20. fMRI capture of auditory hallucinations: Validation of the two-steps method.

    PubMed

    Leroy, Arnaud; Foucher, Jack R; Pins, Delphine; Delmaire, Christine; Thomas, Pierre; Roser, Mathilde M; Lefebvre, Stéphanie; Amad, Ali; Fovet, Thomas; Jaafari, Nemat; Jardri, Renaud

    2017-10-01

    Our purpose was to validate a reliable method to capture brain activity concomitant with hallucinatory events, which constitute frequent and disabling experiences in schizophrenia. Capturing hallucinations using functional magnetic resonance imaging (fMRI) remains very challenging. We previously developed a method based on a two-steps strategy including (1) multivariate data-driven analysis of per-hallucinatory fMRI recording and (2) selection of the components of interest based on a post-fMRI interview. However, two tests still need to be conducted to rule out critical pitfalls of conventional fMRI capture methods before this two-steps strategy can be adopted in hallucination research: replication of these findings on an independent sample and assessment of the reliability of the hallucination-related patterns at the subject level. To do so, we recruited a sample of 45 schizophrenia patients suffering from frequent hallucinations, 20 schizophrenia patients without hallucinations and 20 matched healthy volunteers; all participants underwent four different experiments. The main findings are (1) high accuracy in reporting unexpected sensory stimuli in an MRI setting; (2) good detection concordance between hypothesis-driven and data-driven analysis methods (as used in the two-steps strategy) when controlled unexpected sensory stimuli are presented; (3) good agreement of the two-steps method with the online button-press approach to capture hallucinatory events; (4) high spatial consistency of hallucinatory-related networks detected using the two-steps method on two independent samples. By validating the two-steps method, we advance toward the possible transfer of such technology to new image-based therapies for hallucinations. Hum Brain Mapp 38:4966-4979, 2017. © 2017 Wiley Periodicals, Inc.

  1. Products of composite operators in the exact renormalization group formalism

    NASA Astrophysics Data System (ADS)

    Pagani, C.; Sonoda, H.

    2018-02-01

    We discuss a general method of constructing the products of composite operators using the exact renormalization group formalism. Considering mainly the Wilson action at a generic fixed point of the renormalization group, we give an argument for the validity of short-distance expansions of operator products. We show how to compute the expansion coefficients by solving differential equations, and test our method with some simple examples.
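    The short-distance expansion referred to in this abstract is the operator product expansion (OPE); schematically, in our notation (not taken from the paper):

```latex
O_i(x)\, O_j(0) \;\sim\; \sum_k C_{ij}{}^{k}(x)\, O_k(0), \qquad x \to 0,
```

    where, in the exact renormalization group approach described above, the coefficient functions $C_{ij}{}^{k}(x)$ are obtained by solving differential equations along the Wilsonian flow at the fixed point.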

  2. Evaluation of methyl methanesulfonate, 2,6-diaminotoluene and 5-fluorouracil: Part of the Japanese Center for the Validation of Alternative Methods (JaCVAM) international validation study of the in vivo rat alkaline comet assay.

    PubMed

    Plappert-Helbig, Ulla; Junker-Walker, Ursula; Martus, Hans-Joerg

    2015-07-01

    As a part of the Japanese Center for the Validation of Alternative Methods (JaCVAM)-initiated international validation study of the in vivo rat alkaline comet assay (comet assay), we examined methyl methanesulfonate (MMS), 2,6-diaminotoluene, and 5-fluorouracil under coded test conditions. Rats were treated orally with the maximum tolerated dose (MTD) and two additional descending doses of the respective compounds. In the MMS-treated groups, the liver and stomach showed significantly elevated DNA damage at each dose level and a significant dose-response relationship. 2,6-Diaminotoluene induced significantly elevated DNA damage in the liver at each dose, with a statistically significant dose-response relationship, whereas no DNA damage was observed in the stomach. 5-Fluorouracil did not induce DNA damage in either liver or stomach. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Validity of a manual soft tissue profile prediction method following mandibular setback osteotomy.

    PubMed

    Kolokitha, Olga-Elpis

    2007-10-01

    The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery and compare it to a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism, taken at the end of pre-surgical orthodontics and approximately one year after surgery, were used. To test the validity of the manual method, the prediction tracings were compared to the actual post-operative tracings. The Dentofacial Planner software was used to develop the computerized post-surgical prediction tracings. Both manual and computerized prediction printouts were analyzed using the cephalometric system PORDIOS. Statistical analysis was performed by means of t-tests. Comparison between the manual prediction tracings and the actual post-operative profiles showed that the manual method results in more convex soft tissue profiles: the upper lip was found in a more prominent position, upper lip thickness was increased, and the mandible and lower lip were found in a less posterior position than in the actual profiles. Comparison between the computerized and manual prediction methods showed that in the manual method upper lip thickness was increased, the upper lip was found in a more anterior position and the lower anterior facial height was increased as compared to the computerized prediction method. Cephalometric simulation of the post-operative soft tissue profile following orthodontic-surgical management of mandibular prognathism imposes certain limitations related to the methods applied. However, both manual and computerized prediction methods remain a useful tool for patient communication.

  4. Actor groups, related needs, and challenges at the climate downscaling interface

    NASA Astrophysics Data System (ADS)

    Rössler, Ole; Benestad, Rasmus; Diamando, Vlachogannis; Heike, Hübener; Kanamaru, Hideki; Pagé, Christian; Margarida Cardoso, Rita; Soares, Pedro; Maraun, Douglas; Kreienkamp, Frank; Christodoulides, Paul; Fischer, Andreas; Szabo, Peter

    2016-04-01

    At the climate downscaling interface, numerous downscaling techniques and different philosophies compete to be the best method on their own specific terms. It thus remains unclear to what extent, and for which purposes, these downscaling techniques are valid or even the most appropriate choice. A common validation framework that compares all the different available methods has been missing so far. The VALUE initiative closes this gap with such a common validation framework. An essential part of a validation framework for downscaling techniques is the definition of appropriate validation measures. The selection of validation measures should consider the needs of the stakeholder: some might need a temporal or spatial average of a certain variable, others might need temporal or spatial distributions of some variables, still others might need extremes of the variables of interest or even inter-variable dependencies. Hence, a close interaction between climate data providers and climate data users is necessary. Thus, the challenge of formulating a common validation framework mirrors the challenges between the climate data providers and the impact assessment community. This poster elaborates on the issues and challenges at the downscaling interface as seen within the VALUE community. It suggests three different actor groups: one group consisting of the climate data providers, the other two being climate data users (impact modellers and societal users). Hence, the downscaling interface faces classical transdisciplinary challenges. We depict a graphical illustration of the actors involved and their interactions. In addition, we identified four different types of issues that need to be considered: data-based, knowledge-based, communication-based, and structural issues. They all may, individually or jointly, hinder an optimal exchange of data and information between the actor groups at the downscaling interface. Finally, some possible ways to tackle these issues are

  5. Revision, Criterion Validity, and Multi-group Assessment of the Reactions to Homosexuality Scale

    PubMed Central

    Smolenski, Derek J.; Diamond, Pamela M.; Ross, Michael W.; Simon Rosser, B. R.

    2010-01-01

    Internalized homonegativity encompasses negative attitudes toward one’s own sexual orientation, and is associated with negative mental and physical health outcomes. The Reactions to Homosexuality scale (Ross & Rosser, 1996), an instrument used to measure internalized homonegativity, has been criticized for including content irrelevant to the construct of internalized homonegativity. We revised the scale using exploratory and confirmatory factor analyses, and identified a seven-item, three-factor reduced version that demonstrated measurement invariance across racial/ethnic categorizations and between English and Spanish versions. We also investigated criterion validity by estimating correlations with hypothesized outcomes associated with outness, relationship status, sexual orientation, and gay community affiliation. The evidence of measurement invariance suggests that this scale is appropriate for pluralistic treatment or study groups. PMID:20954058

  6. Validation of 15 kGy as a radiation sterilisation dose for bone allografts manufactured at the Queensland Bone Bank: application of the VDmax 15 method.

    PubMed

    Nguyen, Huynh; Morgan, David A F; Sly, Lindsay I; Benkovich, Morris; Cull, Sharon; Forwood, Mark R

    2008-06-01

    ISO 11137-2006 (ISO 11137-2a 2006) provides a VDmax 15 method for substantiation of 15 kGy as a radiation sterilisation dose (RSD) for health care products, with a relatively low sample requirement. The method is valid for products in which the bioburden level is less than or equal to 1.5. In the literature, the bioburden level of processed bone allografts is extremely low. Similarly, the Queensland Bone Bank (QBB) usually recovers no viable organisms from processed bone allografts. Because bone allografts are treated as a type of health care product, the aim of this research was to substantiate 15 kGy as an RSD for frozen bone allografts at the QBB using method VDmax 15-ISO 11137-2: 2006 (ISO 11137-2e, Procedure for method VDmax 15 for multiple production batches. Sterilisation of health care products - radiation - part 2: establishing the sterilisation dose, 2006; ISO 11137-2f, Procedure for method VDmax 15 for a single production batch. Sterilisation of health care products - radiation - part 2: establishing the sterilisation dose, 2006). Thirty femoral heads, 40 milled bone allografts and 40 structural bone allografts manufactured according to QBB standard operating procedures were used. Estimated bioburdens for each bone allograft group were used to calculate the verification doses. Next, 10 samples per group were irradiated at the verification dose, sterility was tested and the number of positive tests of sterility recorded. If no more than 1 of the 10 tests carried out in a group was positive, the verification was accepted and 15 kGy was substantiated as the RSD for those bone allografts. The bioburdens in all three groups were 0, and therefore the verification doses were 0 kGy. Sterility tests of femoral heads and milled bones were all negative (no contamination), and there was one positive test of sterility in the structural bone allografts. Accordingly, the verification was accepted. 
Using the ISO validated protocol, VDmax 15
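    The accept/reject rule described in this abstract (at most 1 positive out of 10 verification-dose sterility tests) can be sketched as follows. This is an illustrative reading of the summary above, not the full ISO 11137-2 procedure; the function name and the `results` dictionary are ours.

```python
def substantiate_15kgy(positives_of_10: int) -> bool:
    """Accept 15 kGy as the radiation sterilisation dose if at most one
    of the 10 verification-dose samples tests positive for growth
    (the acceptance rule of method VDmax 15 as described above)."""
    if not 0 <= positives_of_10 <= 10:
        raise ValueError("expected a count out of 10 sterility tests")
    return positives_of_10 <= 1

# The three allograft groups in the study: femoral heads and milled bone
# had 0 positive tests, structural bone had 1 -- all three are accepted.
results = {"femoral_heads": 0, "milled": 0, "structural": 1}
accepted = {name: substantiate_15kgy(n) for name, n in results.items()}
```
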

  7. Self consistency grouping: a stringent clustering method

    PubMed Central

    2012-01-01

    Background Numerous clustering methods, such as single linkage and K-means, have been widely studied and applied to a variety of scientific problems. However, the existing methods are not readily applicable to problems that demand high stringency. Methods Our method, self consistency grouping (SCG), yields clusters whose members are closer in rank to each other than to any member outside the cluster. We do not define a distance metric; we use the best known distance metric and presume that it measures the correct distance. SCG does not impose any restriction on the size or the number of the clusters that it finds. The boundaries of clusters are determined by inconsistencies in the ranks. In addition to the direct implementation, which finds the complete structure of the (sub)clusters, we implemented two faster versions. The fastest version is guaranteed to find only the clusters that are not subclusters of any other clusters, and the other version yields the same output as the direct implementation but does so more efficiently. Results Our tests demonstrated that SCG yields very few false positives; we established this by introducing errors into the distance measurements. Clustering of protein domain representatives by structural similarity showed that SCG could recover homologous groups with high precision. Conclusions SCG has potential for finding biological relationships under stringent conditions. PMID:23320864
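    The cluster criterion stated in this abstract (every member ranks all other members ahead of any non-member) can be checked directly from a distance matrix. This is a minimal sketch of that criterion only, not the authors' published algorithm; the function name and toy data are ours.

```python
import numpy as np

def is_self_consistent(dist: np.ndarray, members: set) -> bool:
    """Check the SCG-style criterion from the abstract: every member of
    the candidate cluster must rank all other members ahead of any
    non-member in its distance ordering."""
    m = len(members)
    for i in members:
        # rank all points by distance from i, excluding i itself
        order = [j for j in np.argsort(dist[i]) if j != i]
        # the (m - 1) nearest neighbours must be exactly the other members
        if set(order[: m - 1]) != members - {i}:
            return False
    return True

# Toy example: points 0-2 are mutually close, point 3 is far away.
d = np.array([[0.0, 1.0, 1.2, 9.0],
              [1.0, 0.0, 1.1, 9.5],
              [1.2, 1.1, 0.0, 9.8],
              [9.0, 9.5, 9.8, 0.0]])
print(is_self_consistent(d, {0, 1, 2}))  # → True
print(is_self_consistent(d, {1, 2}))     # → False (point 0 outranks 2 for point 1)
```
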

  8. Verification and Validation Process for Progressive Damage and Failure Analysis Methods in the NASA Advanced Composites Consortium

    NASA Technical Reports Server (NTRS)

    Wanthal, Steven; Schaefer, Joseph; Justusson, Brian; Hyder, Imran; Engelstad, Stephen; Rose, Cheryl

    2017-01-01

    The Advanced Composites Consortium is a US Government/Industry partnership supporting technologies to enable timeline and cost reduction in the development of certified composite aerospace structures. A key component of the consortium's approach is the development and validation of improved progressive damage and failure analysis methods for composite structures. These methods will enable increased use of simulations in design trade studies and detailed design development, and thereby enable more targeted physical test programs to validate designs. To accomplish this goal with confidence, a rigorous verification and validation process was developed. The process was used to evaluate analysis methods and associated implementation requirements to ensure calculation accuracy and to gauge predictability for composite failure modes of interest. This paper introduces the verification and validation process developed by the consortium during the Phase I effort of the Advanced Composites Project. Specific structural failure modes of interest are first identified, and a subset of standard composite test articles is proposed to interrogate a progressive damage analysis method's ability to predict each failure mode of interest. Test articles are designed to capture the underlying composite material constitutive response as well as the interaction of failure modes representing typical failure patterns observed in aerospace structures.

  9. The effect of adopting new storage methods for extending product validity periods on manufacturers expected inventory costs.

    PubMed

    Chen, Po-Yu

    2014-01-01

    The validity of the expiration dates (validity periods) that manufacturers provide on food product labels is a crucial food safety problem. Governments must study how to use their authority, by implementing fair awards and punishments, to prompt manufacturers to adopt rigorous considerations, such as the effect of adopting new storage methods for extending product validity periods on expected costs. Assume that a manufacturer sells fresh food or drugs; this manufacturer must respond to stochastic demand at each unit of time and determine the purchase amount of products for sale. If this decision maker is capable and an opportunity arises, new packaging methods (e.g., aluminum foil packaging, vacuum packaging, high-temperature sterilization after glass packaging, or packaging with various degrees of dryness) or storage methods (i.e., adding desiccants or various antioxidants) can be chosen to extend the validity periods of products. To minimize expected costs, the decision maker must be aware of the processing costs of new storage methods, inventory standards, inventory cycle lengths, and changes in the relationships between factors such as the stochastic demand function in a cycle. Based on these changes in relationships, this study established a mathematical model as a basis for discussing the aforementioned topics.

  10. Construct Validity and Scoring Methods of the World Health Organization: Health and Work Performance Questionnaire Among Workers With Arthritis and Rheumatological Conditions.

    PubMed

    AlHeresh, Rawan; LaValley, Michael P; Coster, Wendy; Keysor, Julie J

    2017-06-01

    To evaluate the construct validity and scoring methods of the World Health Organization Health and Work Performance Questionnaire (HPQ) for people with arthritis. Construct validity was examined through hypothesis testing using the recommended guidelines of the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN). The HPQ using the absolute scoring method showed moderate construct validity, as four of the seven hypotheses were met. The HPQ using the relative scoring method had weak construct validity, as only one of the seven hypotheses was met. The absolute scoring method for the HPQ is superior in construct validity to the relative scoring method in assessing work performance among people with arthritis and related rheumatic conditions; however, more research is needed to further explore other psychometric properties of the HPQ.

  11. Validation of rapid descriptive sensory methods against conventional descriptive analyses: A systematic review.

    PubMed

    Aguiar, Lorena Andrade de; Melo, Lauro; de Lacerda de Oliveira, Lívia

    2018-04-03

    A major drawback of the conventional descriptive profile (CDP) in sensory evaluation is the long time spent on panel training. The use of rapid descriptive methods (RDM) has consequently increased significantly, and some of them have been compared with CDP for validation. In the health sciences, systematic reviews (SR) are performed to evaluate the validation of diagnostic tests in relation to a gold-standard method. SR follow a well-defined protocol to summarize research evidence and to evaluate the quality of the studies against predetermined criteria. We adapted the SR protocol to evaluate the validation of RDM against CDP as satisfactory procedures to obtain food characterization. We used the "Population, Intervention, Comparison, Outcome, Study (PICOS)" framework to design the research, in which "Population" was foods/beverages, "Intervention" was RDM, "Comparison" was CDP as the gold standard, "Outcome" was the ability of RDM to generate descriptive profiles similar to those of CDP, and "Studies" were sensory descriptive analyses. The proportion of studies concluding similarity of the RDM with CDP ranged from 0% to 100%. Low and moderate risk of bias was reached by 87% and 13% of the studies, respectively, supporting the conclusions of the SR. RDM with semi-trained assessors and evaluation of individual attributes presented higher percentages of concordance with CDP.

  12. A Validity Study of the Working Group's Autobiographical Memory Test for Individuals with Moderate to Severe Intellectual Disability

    ERIC Educational Resources Information Center

    Pyo, Geunyeong; Ala, Tom; Kyrouac, Gregory A.; Verhulst, Steven J.

    2011-01-01

    The purpose of the present study was to investigate the validity of the Working Group's Autobiographical Memory Test as a dementia screening tool for individuals with moderate to severe intellectual disabilities (ID). Twenty-one participants with Dementia of Alzheimer's Type (DAT) and moderate to severe ID and 42 controls with similar levels of ID…

  13. A Validated Method for the Quality Control of Andrographis paniculata Preparations.

    PubMed

    Karioti, Anastasia; Timoteo, Patricia; Bergonzi, Maria Camilla; Bilia, Anna Rita

    2017-10-01

    Andrographis paniculata is a herbal drug of Asian traditional medicine largely employed for the treatment of several diseases. Recently, it has been introduced in Europe for the prophylactic and symptomatic treatment of the common cold and as an ingredient of dietary supplements. The active principles are diterpenes, with andrographolide as the main representative. In the present study, an analytical protocol was developed for the determination of the main constituents in the herb and preparations of A. paniculata. Three different extraction protocols (methanol extraction using a modified Soxhlet procedure, maceration under ultrasonication, and decoction) were tested. Ultrasonication achieved the highest content of analytes. HPLC conditions were optimized in terms of solvent mixtures, time course, and temperature. A reversed-phase C18 column eluted with a gradient system consisting of acetonitrile and acidified water, including an isocratic step, at 30 °C was used. The HPLC method was validated for linearity, limits of quantitation and detection, repeatability, precision, and accuracy. The overall method was validated for precision and accuracy over at least three different concentration levels. Relative standard deviation was less than 1.13%, whereas recovery was between 95.50% and 97.19%. The method also proved to be suitable for the determination of a large number of commercial samples and was proposed to the European Pharmacopoeia for the quality control of Andrographidis herba. Georg Thieme Verlag KG Stuttgart · New York.

  14. Validation of a two-dimensional liquid chromatography method for quality control testing of pharmaceutical materials.

    PubMed

    Yang, Samuel H; Wang, Jenny; Zhang, Kelly

    2017-04-07

    Despite the advantages of 2D-LC, there has been little to no work demonstrating the suitability of 2D-LC methods for use in a quality control (QC) environment for good manufacturing practice (GMP) tests. This lack of information becomes more critical as the availability of commercial 2D-LC instrumentation has increased significantly and more testing facilities begin to acquire 2D-LC capabilities. It is increasingly important that the transferability of developed 2D-LC methods be assessed in terms of reproducibility, robustness and performance across different laboratories worldwide. The work presented here focuses on the evaluation of a heart-cutting 2D-LC method used for the analysis of a pharmaceutical material, where a key co-eluting impurity in the first dimension (1D) is resolved from the main peak and analyzed in the second dimension (2D). A design-of-experiments (DOE) approach was taken in the collection of the data, and the results were then modeled to evaluate method robustness using statistical modeling software. This quality-by-design (QbD) approach gives a deeper understanding of the impact of the 2D-LC critical method attributes (CMAs) and how they affect overall method performance. Although multiple parameters may be critical from a method development point of view, a special focus of this work is the evaluation, from a method validation perspective, of unique 2D-LC critical method attributes that transcend conventional method development and validation. The 2D-LC method attributes are evaluated for their recovery, peak shape, and resolution of the two co-eluting compounds in question in the 2D. In the method, linearity, accuracy, precision, repeatability, and sensitivity are assessed along with day-to-day, analyst-to-analyst, and lab-to-lab (instrument-to-instrument) assessments. The results of this validation study demonstrate that the 2D-LC method is accurate, sensitive, and robust and is

  15. Validation methodology in publications describing epidemiological registration methods of dental caries: a systematic review.

    PubMed

    Sjögren, P; Ordell, S; Halling, A

    2003-12-01

    The aim was to describe and systematically review the methodology and reporting of validation in publications describing epidemiological registration methods for dental caries. BASIC RESEARCH METHODOLOGY: Literature searches were conducted in six scientific databases. All publications fulfilling the predetermined inclusion criteria were assessed for methodology and reporting of validation using a checklist including items described previously as well as new items. The frequency of endorsement of the assessed items was analysed, and the type and strength of evidence was evaluated. Reporting of predetermined items relating to the methodology of validation and the frequency of endorsement of the assessed items were of primary interest. Initially, 588 publications were located; 74 eligible publications were obtained, 23 of which fulfilled the inclusion criteria and remained throughout the analyses. A majority of the studies reported the methodology of validation. The reported methodology of validation was generally inadequate according to the recommendations of evidence-based medicine. The frequencies of reporting the assessed items (frequencies of endorsement) ranged from 4 to 84 per cent. A majority of the publications contributed to a low strength of evidence. There seems to be a need to improve the methodology and reporting of validation in publications describing professionally registered caries epidemiology. Four of the items assessed in this study are potentially discriminative for quality assessments of reported validation.

  16. A New Method to Cross Calibrate and Validate TOMS, SBUV/2, and SCIAMACHY Measurements

    NASA Technical Reports Server (NTRS)

    Ahmad, Ziauddin; Hilsenrath, Ernest; Einaudi, Franco (Technical Monitor)

    2001-01-01

    A unique method to validate backscattered ultraviolet (BUV) satellite data that complements the measurements from existing ground networks is proposed. The method involves comparing zenith sky radiance measurements from the ground to nadir radiance measurements taken from space. Since the measurements are compared directly, the proposed method is superior to any method that involves comparing derived products (for example, ozone), because comparison of derived products involves inversion algorithms, which are susceptible to several types of errors. Forward radiative transfer (RT) calculations show that for an aerosol-free atmosphere, the ground-based zenith sky radiance measurements and the satellite nadir radiance measurements can be predicted with an accuracy of better than 1 percent. The RT computations also show that for certain values of the solar zenith angle, the radiance comparisons could be better than half a percent. This accuracy is practically independent of ozone amount and aerosols in the atmosphere. Experience with the Shuttle Solar Backscatter Ultraviolet (SSBUV) program shows that the accuracy of the ground-based zenith sky radiance measuring instrument can be maintained at a level of a few tenths of a percent. This implies that the zenith sky radiance measurements can be used to validate Total Ozone Mapping Spectrometer (TOMS), Solar Backscatter Ultraviolet (SBUV/2), and SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY (SCIAMACHY) radiance data. Also, this method will help improve the long-term precision of the measurements for better trend detection, and the accuracy of other BUV products such as tropospheric ozone and aerosols. Finally, in the long term, this method is a good candidate to inter-calibrate and validate long-term observations of upcoming operational instruments such as the Global Ozone Monitoring Experiment (GOME-2), Ozone Mapping Instrument (OMI), Ozone Dynamics Ultraviolet Spectrometer (ODUS

  17. Development and validation of RP HPLC method to determine nandrolone phenylpropionate in different pharmaceutical formulations.

    PubMed

    Mukherjee, Jayanti; Das, Ayan; Chakrabarty, Uday Sankar; Sahoo, Bijay Kumar; Dey, Goutam; Choudhury, Hira; Pal, Tapan Kumar

    2011-01-01

    This study describes the development and subsequent validation of a reversed-phase high performance liquid chromatographic (RP-HPLC) method for the estimation of nandrolone phenylpropionate, an anabolic steroid, in bulk drug, in a conventional parenteral dosage formulation and in a prepared nanoparticle dosage form. The chromatographic system consisted of a Luna Phenomenex CN (250 mm x 4.6 mm, 5 microm) column, an isocratic mobile phase comprising 10 mM phosphate buffer and acetonitrile (50:50, v/v) and UV detection at 240 nm. Nandrolone phenylpropionate eluted at about 6.3 min with no interfering peaks from the excipients used for the preparation of the dosage forms. The method was linear over the range 0.050 to 25 microg/mL in raw drug (r2 = 0.9994). The intra-day and inter-day precision values were in the ranges of 0.219-0.609% and 0.441-0.875%, respectively. The limits of detection and quantitation were 0.010 microg/mL and 0.050 microg/mL, respectively. The results were validated according to International Conference on Harmonization (ICH) guidelines in the parenteral and prepared nanoparticle formulations. The validated HPLC method is simple, sensitive, precise, accurate and reproducible.
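    Linearity and the detection/quantitation limits reported in validations like this one are commonly derived from the calibration curve using the ICH formulas LOD = 3.3·σ/S and LOQ = 10·σ/S (σ = residual standard deviation of the regression, S = slope). A minimal sketch of that calculation; the function name and the example data are ours, not taken from the paper.

```python
import numpy as np

def ich_limits(conc, response):
    """Estimate LOD, LOQ and r^2 from a calibration curve using the
    ICH-style formulas LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where
    S is the slope and sigma the residual standard deviation of the
    linear regression. Illustrative only."""
    conc = np.asarray(conc, dtype=float)
    response = np.asarray(response, dtype=float)
    slope, intercept = np.polyfit(conc, response, 1)
    residuals = response - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)  # two fitted parameters
    r2 = 1.0 - (residuals**2).sum() / ((response - response.mean())**2).sum()
    return 3.3 * sigma / slope, 10.0 * sigma / slope, r2

# Hypothetical calibration points (microg/mL vs. detector response):
lod, loq, r2 = ich_limits([0.05, 0.5, 5.0, 15.0, 25.0],
                          [1.1, 10.3, 102.0, 305.5, 509.8])
```
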

  18. Validity and relative validity of a novel digital approach for 24-h dietary recall in athletes

    PubMed Central

    2014-01-01

    validity for group-level comparisons in athletes. However, there are large variations in the relative validity of individuals’ dietary intake estimates from DATA, particularly in athletes with higher energy and nutrient intakes. DATA can be a useful athlete-specific, digital alternative to conventional 24-h dietary recall methods at the group level. Further development and testing is needed to improve DATA’s validity for estimations of individual dietary intakes. PMID:24779565

  19. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melius, J.; Margolis, R.; Ong, S.

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.
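    Screening a LiDAR-derived roof plane by slope, orientation, and sunlight, as described above, might look like the following. The thresholds and names here are hypothetical illustrations, not NREL's actual model criteria.

```python
# Hypothetical thresholds for illustration -- the NREL model's real
# criteria are not given in this summary.
MAX_TILT_DEG = 45.0    # steeper roof planes excluded (assumed)
MIN_SUN_HOURS = 800.0  # annual sunlight threshold (assumed)

def plane_suitable(tilt_deg: float, azimuth_deg: float,
                   annual_sun_hours: float) -> bool:
    """Screen one roof plane the way a GIS model might: keep planes
    that are shallow enough, not north-facing (northern-hemisphere
    convention: azimuth 0 = north), and sunny enough."""
    faces_away_from_north = not (azimuth_deg >= 315.0 or azimuth_deg <= 45.0)
    return (tilt_deg <= MAX_TILT_DEG
            and faces_away_from_north
            and annual_sun_hours >= MIN_SUN_HOURS)

# A south-facing plane at 20 degrees with ample sun passes the screen:
ok = plane_suitable(20.0, 180.0, 1200.0)
```
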

  20. Validation of newly developed and redesigned key indicator methods for assessment of different working conditions with physical workloads based on mixed-methods design: a study protocol

    PubMed Central

    Liebers, Falk; Brandstädt, Felix; Schust, Marianne; Serafin, Patrick; Schäfer, Andreas; Gebhardt, Hansjürgen; Hartmann, Bernd; Steinberg, Ulf

    2017-01-01

    Introduction The impact of work-related musculoskeletal disorders is considerable. The assessment of work tasks with physical workloads is crucial to estimate the work-related health risks of exposed employees. Three key indicator methods (KIMs) are available for risk assessment regarding manual lifting, holding and carrying of loads; manual pulling and pushing of loads; and manual handling operations. Three further KIMs, for risk assessment regarding whole-body forces, awkward body postures and body movement, have been developed de novo. In addition, the development of a newly drafted combined method for mixed exposures is planned. All methods will be validated regarding face validity, reliability, convergent validity, criterion validity and further aspects of utility under practical conditions. Methods and analysis As part of the joint project MEGAPHYS (multilevel risk assessment of physical workloads), a mixed-methods study is being designed for the validation of KIMs and conducted in companies of different sizes and branches in Germany. Workplaces are documented and analysed by observation, application of KIMs, interviews and assessment of environmental conditions. Furthermore, a survey among the employees at the respective workplaces takes place, with standardised questionnaires, interviews and physical examinations. It is intended to include 1200 employees at 120 different workplaces. For analysis of the quality criteria, recommendations of the COSMIN checklist (COnsensus-based Standards for the selection of health Measurement INstruments) will be taken into account. Ethics and dissemination The study was planned and conducted in accordance with the German Medical Professional Code and the Declaration of Helsinki, as well as the German Federal Data Protection Act. The design of the study was approved by ethics committees. We intend to publish the validated KIMs in 2018. 
Results will be published in peer-reviewed journals, presented at international meetings and disseminated

  1. Validation of quantitative and qualitative methods for detecting allergenic ingredients in processed foods in Japan.

    PubMed

    Sakai, Shinobu; Adachi, Reiko; Akiyama, Hiroshi; Teshima, Reiko

    2013-06-19

    A labeling system for food allergenic ingredients was established in Japan in April 2002. To monitor the labeling, the Japanese government announced official methods for detecting allergens in processed foods in November 2002. The official methods consist of quantitative screening tests using enzyme-linked immunosorbent assays (ELISAs) and qualitative confirmation tests using Western blotting or the polymerase chain reaction (PCR). In addition, the Japanese government designated 10 μg protein/g food (the corresponding allergenic ingredient soluble protein weight/food weight), determined by ELISA, as the labeling threshold. To standardize the official methods, the criteria for the validation protocol were described in the official guidelines. This paper, which was presented at the Advances in Food Allergen Detection Symposium, ACS National Meeting and Expo, San Diego, CA, Spring 2012, describes the validation protocol outlined in the official Japanese guidelines, the results of interlaboratory studies for the quantitative detection method (ELISA for crustacean proteins) and the qualitative detection method (PCR for shrimp and crab DNAs), and the reliability of the detection methods.

  2. Experimental comparison and validation of hot-ball method with guarded hot plate method on polyurethane foams

    NASA Astrophysics Data System (ADS)

    Hudec, Ján; Glorieux, Christ; Dieška, Peter; Kubičár, Ľudovít

    2016-07-01

    The hot-ball method is an innovative transient method for measuring thermophysical properties. The principle is based on heating a small ball, embedded in the measured medium, with constant heating power while simultaneously measuring the ball's temperature response from the moment heating is initiated. The shape of the temperature response depends on the thermophysical properties of the medium in which the sensor is placed. The method is patented by the Institute of Physics, SAS, where the method and sensors based on it are being developed. At the beginning of sensor development we focused on monitoring applications, where relative precision is much more important than accuracy. Since then, the quality of the sensors has improved enough for a new application: absolute measurement of the thermophysical parameters of materials with low thermal conductivity. This paper describes the experimental verification and validation of measurements by the hot-ball method. Thanks to cooperation with the Laboratory of Soft Matter and Biophysics of the Catholic University of Leuven in Belgium, the established guarded hot plate method was used as a reference. Details of the measuring setups, a description of the experiments and the results of the comparison are presented.

  3. Validity of a Manual Soft Tissue Profile Prediction Method Following Mandibular Setback Osteotomy

    PubMed Central

    Kolokitha, Olga-Elpis

    2007-01-01

    Objectives The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery and to compare it to a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism taken at the end of pre-surgical orthodontics and approximately one year after surgery were used. Methods To test the validity of the manual method, the prediction tracings were compared to the actual post-operative tracings. The Dentofacial Planner software was used to develop the computerized post-surgical prediction tracings. Both manual and computerized prediction printouts were analyzed using the cephalometric system PORDIOS. Statistical analysis was performed by means of t-tests. Results Comparison between manual prediction tracings and the actual post-operative profiles showed that the manual method results in more convex soft tissue profiles; the upper lip was found in a more prominent position, upper lip thickness was increased, and the mandible and lower lip were found in a less posterior position than in the actual profiles. Comparison between the computerized and manual prediction methods showed that in the manual method upper lip thickness was increased, the upper lip was found in a more anterior position and the lower anterior facial height was increased as compared to the computerized prediction method. Conclusions Cephalometric simulation of the post-operative soft tissue profile following orthodontic-surgical management of mandibular prognathism imposes certain limitations related to the methods employed. However, both manual and computerized prediction methods remain useful tools for patient communication. PMID:19212468

  4. [Isolation and identification methods of enterobacteria group and its technological advancement].

    PubMed

    Furuta, Itaru

    2007-08-01

    In the last half-century, isolation and identification methods for enterobacteria groups have markedly improved through technological advancement. Clinical microbiology testing has shifted over time from tube methods to commercial identification kits and automated identification. Tube methods are the original approach to identifying enterobacteria groups and remain essential for understanding bacterial fermentation and biochemical principles. In this paper, traditional tube tests are discussed, such as carbohydrate utilization and the indole, methyl red, citrate and urease tests. Commercial identification kits and automated, computer-based instruments are also discussed as current methods; these provide rapidity and accuracy. Nonculture techniques, such as nucleic acid typing methods using PCR analysis and immunochemical methods using monoclonal antibodies, can be developed further.

  5. [Validation of measurement methods and estimation of uncertainty of measurement of chemical agents in the air at workstations].

    PubMed

    Dobecki, Marek

    2012-01-01

    This paper reviews the requirements for methods of measuring chemical agents in the air at workstations. European standards, which have the status of Polish standards, comprise requirements and information on sampling strategy, measuring techniques, types of samplers, sampling pumps and methods of occupational exposure evaluation for a given technological process. Measurement methods, including air sampling and the analytical procedure in the laboratory, should be appropriately validated before intended use. In the validation process, selected methods are tested and an uncertainty budget is established. This paper presents the validation procedure that should be implemented in the laboratory, together with suitable statistical tools and the major components of uncertainty to be taken into consideration. Methods of quality control, including sampling and laboratory analyses, are discussed. The relative expanded uncertainty of each measurement, expressed as a percentage, should not exceed the limit values set depending on the type of occupational exposure (short-term or long-term) and the magnitude of exposure to chemical agents in the work environment.

  6. Establishing high resolution melting analysis: method validation and evaluation for c-RET proto-oncogene mutation screening.

    PubMed

    Benej, Martin; Bendlova, Bela; Vaclavikova, Eliska; Poturnajova, Martina

    2011-10-06

    Reliable and effective primary screening of mutation carriers is a key condition for common diagnostic use. The objective of this study is to validate high resolution melting (HRM) analysis for routine primary mutation screening and to accomplish its optimization and evaluation. Due to their heterozygous nature, germline point mutations of the c-RET proto-oncogene, associated with multiple endocrine neoplasia type 2 (MEN2), are suitable for HRM analysis. Early identification of mutation carriers has a major impact on patients' survival due to the early onset of medullary thyroid carcinoma (MTC) and its resistance to conventional therapy. The authors performed a series of validation assays according to the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) guidelines for validation of analytical procedures, along with appropriate design and optimization experiments. After this evaluation, the method was utilized for primary screening of 28 pathogenic c-RET mutations distributed among nine exons of the c-RET gene. Validation experiments confirmed the repeatability, robustness, accuracy and reproducibility of HRM. All c-RET gene pathogenic variants were detected with no occurrence of false-positive/false-negative results. The data provide basic information about the design, establishment and validation of HRM for primary screening of genetic variants in order to distinguish heterozygous point mutation carriers among wild-type sequence carriers. HRM analysis is a powerful and reliable tool for rapid and cost-effective primary screening, e.g., of c-RET gene germline and/or sporadic mutations, and can be used as a first-line potential diagnostic tool.

  7. Content and factor validation of the Sieloff-King-Friend Assessment of Group Empowerment within Educational Organizations.

    PubMed

    Friend, Mary Louanne; Sieloff, Christina Leibold; Murphy, Shannon; Leeper, James

    2016-07-01

    Nursing education programs have responsibilities to their stakeholders to prepare graduates who can provide safe, effective, patient-centered care while leading health care changes. Empowered nurses have been associated with lower nurse turnover and higher patient satisfaction; however, less is currently known about group empowerment in nursing education. In order to examine group empowerment in schools of nursing, the Sieloff-King Assessment of Group Empowerment in Organizations (SKAGEO©) was adapted and tested for content validity and with confirmatory factor analysis. The adapted instrument, the Sieloff-King-Friend Assessment of Group Empowerment within Educational Organizations (SKFAGEEO), was first reviewed by nurse experts who provided quantitative and qualitative data regarding each item. A total of 320 nurse deans and faculty comprised the final sample for the second-order, eight-factor confirmatory factor analysis. Findings revealed factor loadings ranging from .455 to .960. The overall fit of the proposed model was Chi Square = 1383.24, df = 566, p < .001; GFI = .786, RMSEA = 0.69. The study results indicated that the SKFAGEEO has acceptable psychometric properties. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Drug Target Validation Methods in Malaria - Protein Interference Assay (PIA) as a Tool for Highly Specific Drug Target Validation.

    PubMed

    Meissner, Kamila A; Lunev, Sergey; Wang, Yuan-Ze; Linzke, Marleen; de Assis Batista, Fernando; Wrenger, Carsten; Groves, Matthew R

    2017-01-01

    The validation of drug targets in malaria and other human diseases remains a highly difficult and laborious process. In the vast majority of cases, highly specific small molecule tools to inhibit a protein's function in vivo are simply not available. Additionally, the use of genetic tools in the analysis of malarial pathways is challenging. These issues result in difficulties in specifically modulating a hypothetical drug target's function in vivo. The current "toolbox" of methods and techniques to identify a protein's function in vivo remains very limited, and there is a pressing need for expansion. New approaches are urgently required to support target validation in the drug discovery process. Oligomerisation is the natural assembly of multiple copies of a single protein into one object, and this self-assembly is present in more than half of all protein structures. Thus, oligomerisation plays a central role in the generation of functional biomolecules. A key feature of oligomerisation is that the oligomeric interfaces between the individual parts of the final assembly are highly specific. However, these interfaces have not yet been systematically explored or exploited to dissect biochemical pathways in vivo. This mini review describes the current state of the antimalarial toolset as well as the potentially druggable malarial pathways. A specific focus is drawn to the initial efforts to exploit oligomerisation surfaces in drug target validation. As an alternative to conventional methods, the Protein Interference Assay (PIA) can be used for specific distortion of the target protein function and pathway assessment in vivo. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  9. Validated flow-injection method for rapid aluminium determination in anti-perspirants.

    PubMed

    López-Gonzálvez, A; Ruiz, M A; Barbas, C

    2008-09-29

    A flow-injection (FI) method for the rapid determination of aluminium in anti-perspirants has been developed. The method is based on spectrophotometric detection at 535 nm of the complex formed between Al ions and the chromogenic reagent eriochrome cyanine R. Both the batch and FI methods were validated by checking the parameters included in the ISO-3543-1 regulation. Variables involved in the FI method were optimized using appropriate statistical tools. The method does not exhibit interference from other substances present in anti-perspirants, and it shows high precision, with an R.S.D. value (n=6) of 0.9%. Moreover, the accuracy of the method was evaluated by comparison with a back complexometric titration method currently used for routine analysis in pharmaceutical laboratories. Student's t-test showed that the results obtained by the two methods were not significantly different at the 95% confidence level. A response time of 12 s and a sample analysis time, with triplicate injections, of 60 s were achieved. These analytical figures of merit make the method highly appropriate to replace the time-consuming complexometric method for this kind of analysis.
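The method-comparison step described above (Student's t-test on results from the FI method versus titration) can be sketched as follows; the aluminium values are hypothetical and purely illustrative, not data from the paper:

```python
import math
from statistics import mean, stdev

def paired_t_statistic(a, b):
    """Paired Student's t statistic for measurements taken on the
    same samples by two different analytical methods."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical Al content (mg/g) of six anti-perspirant samples,
# measured by flow injection (FI) and by complexometric titration.
fi        = [12.1, 11.9, 12.4, 12.0, 11.9, 12.2]
titration = [12.0, 12.0, 12.2, 12.0, 12.0, 12.1]

t = paired_t_statistic(fi, titration)
# Two-tailed critical value for df = 5 at the 95% level (from t tables).
t_crit = 2.571
print(f"t = {t:.2f}, significantly different: {abs(t) > t_crit}")
```

If |t| stays below the tabulated critical value, as here, the two methods agree within the stated confidence level, which is the conclusion the abstract reports.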

  10. Validated method for the analysis of goji berry, a rich source of zeaxanthin dipalmitate.

    PubMed

    Karioti, Anastasia; Bergonzi, Maria Camilla; Vincieri, Franco F; Bilia, Anna Rita

    2014-12-31

    In the present study an HPLC-DAD method was developed for the determination of the main carotenoid, zeaxanthin dipalmitate, in the fruits of Lycium barbarum. The aim was to develop and optimize an extraction protocol to allow fast, exhaustive, and repeatable extraction, suitable for labile carotenoid content. Use of liquid N2 allowed the grinding of the fruit. A step of ultrasonication with water removed efficiently the polysaccharides and enabled the exhaustive extraction of carotenoids by hexane/acetone 50:50. The assay was fast and simple and permitted the quality control of a large number of commercial samples including fruits, juices, and a jam. The HPLC method was validated according to ICH guidelines and satisfied the requirements. Finally, the overall method was validated for precision (% RSD ranging between 3.81 and 4.13) and accuracy at three concentration levels. The recovery was between 94 and 107% with RSD values <2%, within the acceptable limits, especially if the difficulty of the matrix is taken into consideration.
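Precision (%RSD) and accuracy (recovery) figures of the kind reported above are computed from replicate measurements; a minimal sketch with made-up replicate data (not values from the study):

```python
from statistics import mean, stdev

def rsd_percent(values):
    """Relative standard deviation of replicate measurements, in %."""
    return stdev(values) / mean(values) * 100

def recovery_percent(measured, spiked):
    """Recovery of a spiked amount, in %."""
    return measured / spiked * 100

# Hypothetical zeaxanthin dipalmitate replicates (mg/100 g fruit).
replicates = [24.8, 25.9, 24.1, 25.5, 24.6, 25.2]
print(f"%RSD = {rsd_percent(replicates):.2f}")

# Hypothetical spike-recovery check: 10.0 mg added, 9.6 mg found.
print(f"recovery = {recovery_percent(9.6, 10.0):.1f}%")
```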

  11. A Model-Based Method for Content Validation of Automatically Generated Test Items

    ERIC Educational Resources Information Center

    Zhang, Xinxin; Gierl, Mark

    2016-01-01

    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  12. A statistical method (cross-validation) for bone loss region detection after spaceflight

    PubMed Central

    Zhao, Qian; Li, Wenjun; Li, Caixia; Chu, Philip W.; Kornak, John; Lang, Thomas F.

    2010-01-01

    Astronauts experience bone loss after long spaceflight missions. Identifying the specific regions that undergo the greatest losses (e.g. in the proximal femur) could reveal information about the processes of bone loss in disuse and disease. Detecting such regions, however, remains an open problem. This paper focuses on statistical methods for detecting such regions. We perform statistical parametric mapping to obtain t-maps of changes in images, and propose a new cross-validation method to select an optimal suprathreshold for forming clusters of pixels. Once these candidate clusters are formed, we use permutation testing of longitudinal labels to identify significant changes. PMID:20632144
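The final step, permutation testing of longitudinal labels, can be sketched for a single candidate cluster as an exhaustive sign-flip test: swapping a subject's pre/post labels flips the sign of that subject's difference. The difference values below are invented for illustration:

```python
from itertools import product
from statistics import mean

def sign_flip_p_value(diffs):
    """Exhaustive sign-flip permutation test for the one-sided
    hypothesis that the mean difference is negative (bone loss).
    Each sign flip corresponds to swapping one subject's
    longitudinal (pre/post) labels."""
    observed = mean(diffs)
    flips = list(product([1, -1], repeat=len(diffs)))
    count = sum(1 for signs in flips
                if mean(s * d for s, d in zip(signs, diffs)) <= observed)
    return count / len(flips)

# Hypothetical mean BMD change (%) in one cluster for 8 subjects.
cluster_diffs = [-2.1, -1.8, -2.5, -1.9, -2.2, -2.0, -1.7, -2.3]
p = sign_flip_p_value(cluster_diffs)
print(f"p = {p:.4f}")
```

With all eight differences negative, only the unflipped labelling attains a mean as low as the observed one, so p = 1/256; in practice one would sample permutations rather than enumerate them for larger cohorts.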

  13. Bayesian Group Bridge for Bi-level Variable Selection.

    PubMed

    Mallick, Himel; Yi, Nengjun

    2017-06-01

    A Bayesian bi-level variable selection method (BAGB: Bayesian Analysis of Group Bridge) is developed for regularized regression and classification. This new development is motivated by grouped data, where generic variables can be divided into multiple groups, with variables in the same group being mechanistically related or statistically correlated. As an alternative to frequentist group variable selection methods, BAGB incorporates structural information among predictors through a group-wise shrinkage prior. Posterior computation proceeds via an efficient MCMC algorithm. In addition to the usual ease-of-interpretation of hierarchical linear models, the Bayesian formulation produces valid standard errors, a feature that is notably absent in the frequentist framework. Empirical evidence of the attractiveness of the method is illustrated by extensive Monte Carlo simulations and real data analysis. Finally, several extensions of this new approach are presented, providing a unified framework for bi-level variable selection in general models with flexible penalties.

  14. Ecological content validation of the Information Assessment Method for parents (IAM-parent): A mixed methods study.

    PubMed

    Bujold, M; El Sherif, R; Bush, P L; Johnson-Lafleur, J; Doray, G; Pluye, P

    2018-02-01

    This mixed methods study content-validated the Information Assessment Method for parents (IAM-parent), which allows users to systematically rate and comment on online parenting information. Quantitative data and results: 22,407 IAM ratings were collected; of the initial 32 items, descriptive statistics showed that 10 had low relevance. Qualitative data and results: IAM-based comments were collected, and 20 IAM users were interviewed (maximum variation sample); the qualitative data analysis assessed the representativeness of IAM items and identified items with problematic wording. Researchers, the program director, and Web editors integrated the quantitative and qualitative results, which led to a shorter and clearer IAM-parent. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. A validated UHPLC-tandem mass spectrometry method for quantitative analysis of flavonolignans in milk thistle (Silybum marianum) extracts.

    PubMed

    Graf, Tyler N; Cech, Nadja B; Polyak, Stephen J; Oberlies, Nicholas H

    2016-07-15

    Validated methods are needed for the analysis of natural product secondary metabolites. These methods are particularly important for translating in vitro observations to in vivo studies. Herein, a method is reported for the analysis of the key secondary metabolites, a series of flavonolignans and a flavonoid, from an extract prepared from the seeds of milk thistle [Silybum marianum (L.) Gaertn. (Asteraceae)]. This report represents the first UHPLC-MS/MS method validated for quantitative analysis of these compounds. The method takes advantage of the excellent resolution achievable with UHPLC to provide a complete analysis in less than 7 min. The method is validated using both UV and MS detectors, making it applicable in laboratories with different types of analytical instrumentation available. Lower limits of quantitation achieved with this method range from 0.0400 μM to 0.160 μM with UV and from 0.0800 μM to 0.160 μM with MS. The new method is employed to evaluate variability in constituent composition in various commercial S. marianum extracts, and to show that storage of the milk thistle compounds in DMSO leads to degradation. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. A systematic review of validated methods to capture acute bronchospasm using administrative or claims data.

    PubMed

    Sharifi, Mona; Krishanswami, Shanthi; McPheeters, Melissa L

    2013-12-30

    To identify and assess billing, procedural, or diagnosis code, or pharmacy claim-based algorithms used to identify acute bronchospasm in administrative and claims databases. We searched the MEDLINE database from 1991 to September 2012 using controlled vocabulary and key terms related to bronchospasm, wheeze and acute asthma. We also searched the reference lists of included studies. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria. Two reviewers independently extracted data regarding participant and algorithm characteristics. Our searches identified 677 citations, of which 38 met our inclusion criteria. In these 38 studies, the most commonly used ICD-9 code was 493.x. Only 3 studies reported any validation methods for the identification of bronchospasm, wheeze or acute asthma in administrative and claims databases; all were among pediatric populations, and only 2 offered any validation statistics. Some of the outcome definitions utilized were heterogeneous and included other diagnoses, such as bronchiolitis and pneumonia, which are typically of an infectious etiology. One study validated algorithms utilizing Emergency Department triage chief complaint codes to diagnose acute asthma exacerbations; ICD-9 786.07 (wheezing) showed the highest sensitivity (56%), specificity (97%), PPV (93.5%) and NPV (76%). There is a paucity of studies reporting rigorous methods to validate algorithms for the identification of bronchospasm in administrative data. The scant validated data available are limited in their generalizability to broad-based populations. Copyright © 2013 Elsevier Ltd. All rights reserved.
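Validation statistics of the kind reported (sensitivity, specificity, PPV, NPV) come straight from a 2×2 table of algorithm result versus reference standard (e.g. chart review); a small sketch with invented counts, not the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard accuracy statistics for a case-finding algorithm
    validated against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),  # true cases the algorithm finds
        "specificity": tn / (tn + fp),  # non-cases it correctly excludes
        "ppv": tp / (tp + fp),          # flagged records that are true cases
        "npv": tn / (tn + fn),          # unflagged records that are non-cases
    }

# Hypothetical counts: algorithm flags vs. chart-review truth.
m = diagnostic_metrics(tp=80, fp=5, fn=20, tn=95)
for name, value in m.items():
    print(f"{name}: {value:.3f}")
```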

  17. A method for studying decision-making by guideline development groups.

    PubMed

    Gardner, Benjamin; Davidson, Rosemary; McAteer, John; Michie, Susan

    2009-08-05

    Multidisciplinary guideline development groups (GDGs) have considerable influence on UK healthcare policy and practice, but previous research suggests that research evidence is a variable influence on GDG recommendations. The Evidence into Recommendations (EiR) study has been set up to document social-psychological influences on GDG decision-making. In this paper we aim to evaluate the relevance of existing qualitative methodologies to the EiR study, and to develop a method best suited to capturing influences on GDG decision-making. A research team comprising three postdoctoral research fellows and a multidisciplinary steering group assessed the utility of extant qualitative methodologies for coding verbatim GDG meeting transcripts and semi-structured interviews with GDG members. A unique configuration of techniques was developed to permit data reduction and analysis. Our method incorporates techniques from thematic analysis, grounded theory analysis, content analysis, and framework analysis. Thematic analysis of individual interviews conducted with group members at the start and end of the GDG process defines discrete problem areas to guide data extraction from GDG meeting transcripts. Data excerpts are coded both inductively and deductively, using concepts taken from theories of decision-making, social influence and group processes. These codes inform a framework analysis to describe and explain incidents within GDG meetings. We illustrate the application of the method by discussing some preliminary findings of a study of a National Institute for Health and Clinical Excellence (NICE) acute physical health GDG. This method is currently being applied to study the meetings of three NICE GDGs. These cover topics in acute physical health, mental health and public health, and comprise a total of 45 full-day meetings. The method offers potential for application to other health care and decision-making groups.

  18. Development and Validation of a New Reliable Method for the Diagnosis of Avian Botulism.

    PubMed

    Le Maréchal, Caroline; Rouxel, Sandra; Ballan, Valentine; Houard, Emmanuelle; Poezevara, Typhaine; Bayon-Auboyer, Marie-Hélène; Souillard, Rozenn; Morvan, Hervé; Baudouard, Marie-Agnès; Woudstra, Cédric; Mazuet, Christelle; Le Bouquin, Sophie; Fach, Patrick; Popoff, Michel; Chemaly, Marianne

    2017-01-01

    Liver is a reliable matrix for laboratory confirmation of avian botulism using real-time PCR. Here, we developed, optimized, and validated the analytical steps preceding PCR to maximize the detection of Clostridium botulinum group III in avian liver. These pre-PCR steps included enrichment incubation of the whole liver (maximum 25 g) at 37°C for at least 24 h in an anaerobic chamber and DNA extraction using an enzymatic digestion step followed by a DNA purification step. Conditions of sample storage before analysis appear to have a strong effect on the detection of group III C. botulinum strains and our results recommend storage at temperatures below -18°C. Short-term storage at 5°C is possible for up to 24 h, but a decrease in sensitivity was observed at 48 h of storage at this temperature. Analysis of whole livers (maximum 25 g) is required and pooling samples before enrichment culturing must be avoided. Pooling is however possible before or after DNA extraction under certain conditions. Whole livers should be 10-fold diluted in enrichment medium and homogenized using a Pulsifier® blender (Microgen, Surrey, UK) instead of a conventional paddle blender. Spiked liver samples showed a limit of detection of 5 spores/g liver for types C and D and 250 spores/g for type E. Using the method developed here, the analysis of 268 samples from 73 suspected outbreaks showed 100% specificity and 95.35% sensitivity compared with other PCR-based methods considered as reference. The mosaic type C/D was the most common neurotoxin type found in examined samples, which included both wild and domestic birds.

  20. Streptococcus group B typing: comparison of counter-immunoelectrophoresis with the precipitin method.

    PubMed

    Kubín, V; Jelínková, J; Franêk, J

    1977-07-01

    The counter-immunoelectrophoresis (CIE) method was tested for its applicability to group B streptococcus typing. The results obtained were compared with typing by the ring precipitin test. Identical antigens and identical hyperimmune typing serum batches were used in both methods. A large majority of 75 freshly isolated strains were typed identically by both methods. Five strains with a weak antigenic profile were untypable by the ring precipitin test but were typed by CIE, owing to the higher sensitivity of the CIE method. Two strains were typable by the precipitin test but not by CIE; an explanation for this phenomenon is lacking. The CIE method in group B typing is specific, rapid, highly sensitive and relatively simple, although it requires strict maintenance of standard conditions. The method is economical with respect to manipulation and material, and requires only small amounts of diagnostic antisera; potent antisera may be used diluted. Moreover, sera for CIE typing need not be absorbed to remove group B antibodies. The CIE method is practicable for group B streptococcus typing, especially in laboratories carrying out routine large-scale type identification.

  1. Development and Validation of Liquid Chromatographic Method for Estimation of Naringin in Nanoformulation

    PubMed Central

    Musmade, Kranti P.; Trilok, M.; Dengale, Swapnil J.; Bhat, Krishnamurthy; Reddy, M. S.; Musmade, Prashant B.; Udupa, N.

    2014-01-01

    A simple, precise, accurate, rapid, and sensitive reverse phase high performance liquid chromatography (RP-HPLC) method with UV detection has been developed and validated for quantification of naringin (NAR) in a novel pharmaceutical formulation. NAR is a polyphenolic flavonoid present in most citrus plants and has a variety of pharmacological activities. Method optimization was carried out by considering various parameters, such as the effect of pH and of the column. The analyte was separated on a C18 (250.0 × 4.6 mm, 5 μm) column at ambient temperature under isocratic conditions, using phosphate buffer (pH 3.5):acetonitrile (75:25, v/v) as the mobile phase pumped at a flow rate of 1.0 mL/min. UV detection was carried out at 282 nm. The developed method was validated according to ICH guidelines Q2(R1). The method was found to be precise and accurate on statistical evaluation, with a linearity range of 0.1 to 20.0 μg/mL for NAR. The intra- and interday precision studies showed good reproducibility, with coefficients of variation (CV) less than 1.0%. The mean recovery of NAR was found to be 99.33 ± 0.16%. The proposed method was found to be highly accurate, sensitive, and robust, and it was successfully employed for the routine analysis of the compound in the developed novel nanopharmaceuticals. The presence of excipients did not show any interference with the determination of NAR, indicating method specificity. PMID:26556205
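Linearity over a range such as 0.1-20.0 μg/mL is typically established by fitting an ordinary least-squares calibration line to standards; a minimal sketch with synthetic, perfectly linear peak areas (slope and intercept are invented, not the study's calibration):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = slope*x + intercept,
    returning (slope, intercept, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r_squared = sxy * sxy / (sxx * syy)
    return slope, intercept, r_squared

# Hypothetical calibration standards (μg/mL) and peak areas (mAU·s).
conc = [0.1, 0.5, 1.0, 5.0, 10.0, 20.0]
area = [12.5 * c + 0.3 for c in conc]  # synthetic, exactly linear

slope, intercept, r2 = linear_fit(conc, area)
print(f"slope={slope:.3f}, intercept={intercept:.3f}, r^2={r2:.5f}")
```

An r² close to 1 over the working range is the usual acceptance criterion for linearity in ICH Q2(R1)-style validations.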

  2. Development and Validation of an Extractive Spectrophotometric Method for Miconazole Nitrate Assay in Pharmaceutical Formulations.

    PubMed

    Eticha, Tadele; Kahsay, Getu; Hailu, Teklebrhan; Gebretsadikan, Tesfamichael; Asefa, Fitsum; Gebretsadik, Hailekiros; Thangabalan, Boovizhikannan

    2018-01-01

    A simple extractive spectrophotometric technique has been developed and validated for the determination of miconazole nitrate in pure form and in pharmaceutical formulations. The method is based on the formation of a chloroform-soluble ion-pair complex between the drug and bromocresol green (BCG) dye in an acidic medium. The complex showed an absorption maximum at 422 nm, and the system obeys Beer's law in the concentration range of 1-30 μg/mL with a molar absorptivity of 2.285 × 10⁴ L/mol/cm. The composition of the complex was studied by Job's method of continuous variation, and the results revealed that the mole ratio of drug to BCG is 1:1. A full factorial design was used to optimize the effect of variable factors, and the method was validated based on the ICH guidelines. The method was applied for the determination of miconazole nitrate in real samples.
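Given the reported molar absorptivity, an absorbance reading converts to concentration via the Beer-Lambert law A = εlc. In the sketch below the 1 cm path length and the molar mass of 479.1 g/mol (approximate value for miconazole nitrate) are assumptions for illustration, not values stated in the abstract:

```python
def concentration_ug_per_ml(absorbance, epsilon, path_cm, molar_mass):
    """Beer-Lambert law: A = epsilon * l * c  =>  c = A / (epsilon * l),
    then convert mol/L to μg/mL via the molar mass (g/L -> μg/mL is x1000)."""
    c_mol_per_l = absorbance / (epsilon * path_cm)
    return c_mol_per_l * molar_mass * 1000

EPSILON = 2.285e4              # L/mol/cm, from the paper
PATH_CM = 1.0                  # assumed cuvette path length
MW_MICONAZOLE_NITRATE = 479.1  # g/mol, approximate (assumption)

c = concentration_ug_per_ml(0.5, EPSILON, PATH_CM, MW_MICONAZOLE_NITRATE)
print(f"A = 0.5  ->  {c:.1f} ug/mL")
```

An absorbance of 0.5 maps to roughly 10 μg/mL under these assumptions, comfortably inside the 1-30 μg/mL Beer's-law range the abstract reports.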

  3. How to Assign Individualized Scores on a Group Project: An Empirical Evaluation

    ERIC Educational Resources Information Center

    Zhang, Bo; Ohland, Matthew W.

    2009-01-01

    One major challenge in using group projects to assess student learning is accounting for the differences of contribution among group members so that the mark assigned to each individual actually reflects their performance. This research addresses the validity of grading group projects by evaluating different methods that derive individualized…

  4. Validation of the tool assessment of clinical education (AssCE): A study using Delphi method and clinical experts.

    PubMed

    Löfmark, Anna; Mårtensson, Gunilla

    2017-03-01

    The aim of the present study was to establish the validity of the tool Assessment of Clinical Education (AssCE). The tool is widely used in Sweden and some Nordic countries for assessing nursing students' performance in clinical education. It is important that the tools in use be subjected to regular audit and critical review. The validation process, performed in two stages, concluded with a high level of congruence. In the first stage, the Delphi technique was used to elaborate the AssCE tool with a group of 35 clinical nurse lecturers; after three rounds, consensus was reached. In the second stage, a group of 46 clinical nurse lecturers representing 12 universities in Sweden and Norway audited the revised version of the AssCE in relation to learning outcomes from the last clinical course at their respective institutions. Validation of the revised AssCE was established with high congruence between the factors in the AssCE and the examined learning outcomes. The revised AssCE tool seems to meet its objective as a validated assessment tool for use in clinical nursing education. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. A Method for Finding Metabolic Pathways Using Atomic Group Tracking.

    PubMed

    Huang, Yiran; Zhong, Cheng; Lin, Hai Xiang; Wang, Jianyi

    2017-01-01

    A fundamental computational problem in metabolic engineering is to find pathways between compounds. Pathfinding methods using atom tracking have been widely used to find biochemically relevant pathways. However, these methods require the user to define the atoms to be tracked. This may lead to failing to predict the pathways that do not conserve the user-defined atoms. In this work, we propose a pathfinding method called AGPathFinder to find biochemically relevant metabolic pathways between two given compounds. In AGPathFinder, we find alternative pathways by tracking the movement of atomic groups through metabolic networks and use combined information of reaction thermodynamics and compound similarity to guide the search towards more feasible pathways and better performance. The experimental results show that atomic group tracking enables our method to find pathways without the need of defining the atoms to be tracked, avoid hub metabolites, and obtain biochemically meaningful pathways. Our results also demonstrate that atomic group tracking, when incorporated with combined information of reaction thermodynamics and compound similarity, improves the quality of the found pathways. In most cases, the average compound inclusion accuracy and reaction inclusion accuracy for the top resulting pathways of our method are around 0.90 and 0.70, respectively, which are better than those of the existing methods. Additionally, AGPathFinder provides the information of thermodynamic feasibility and compound similarity for the resulting pathways.

  6. A Method for Finding Metabolic Pathways Using Atomic Group Tracking

    PubMed Central

    Zhong, Cheng; Lin, Hai Xiang; Wang, Jianyi

    2017-01-01

    A fundamental computational problem in metabolic engineering is to find pathways between compounds. Pathfinding methods using atom tracking have been widely used to find biochemically relevant pathways. However, these methods require the user to define the atoms to be tracked. This may lead to failing to predict the pathways that do not conserve the user-defined atoms. In this work, we propose a pathfinding method called AGPathFinder to find biochemically relevant metabolic pathways between two given compounds. In AGPathFinder, we find alternative pathways by tracking the movement of atomic groups through metabolic networks and use combined information of reaction thermodynamics and compound similarity to guide the search towards more feasible pathways and better performance. The experimental results show that atomic group tracking enables our method to find pathways without the need of defining the atoms to be tracked, avoid hub metabolites, and obtain biochemically meaningful pathways. Our results also demonstrate that atomic group tracking, when incorporated with combined information of reaction thermodynamics and compound similarity, improves the quality of the found pathways. In most cases, the average compound inclusion accuracy and reaction inclusion accuracy for the top resulting pathways of our method are around 0.90 and 0.70, respectively, which are better than those of the existing methods. Additionally, AGPathFinder provides the information of thermodynamic feasibility and compound similarity for the resulting pathways. PMID:28068354

  7. Evaluation of the confusion matrix method in the validation of an automated system for measuring feeding behaviour of cattle.

    PubMed

    Ruuska, Salla; Hämäläinen, Wilhelmiina; Kajava, Sari; Mughal, Mikaela; Matilainen, Pekka; Mononen, Jaakko

    2018-03-01

    The aim of the present study was to empirically evaluate confusion matrices in device validation. We compared the confusion matrix method to linear regression and error indices in the validation of a device measuring the feeding behaviour of dairy cattle. In addition, we studied how to extract additional information on classification errors with confusion probabilities. The data consisted of 12 h behaviour measurements from five dairy cows; feeding and other behaviour were detected simultaneously with a device and from video recordings. The resulting 216 000 pairs of classifications were used to construct confusion matrices and calculate performance measures. In addition, hourly durations of each behaviour were calculated and the accuracy of the measurements was evaluated with linear regression and error indices. All three validation methods agreed when the behaviour was detected very accurately or very inaccurately. In the intermediate cases, the confusion matrix method and error indices produced relatively concordant results, but the linear regression method often disagreed with them. Our study supports the use of confusion matrix analysis in validation since it is robust to any data distribution and type of relationship, it makes a stringent evaluation of validity, and it offers extra information on the type and sources of errors. Copyright © 2018 Elsevier B.V. All rights reserved.
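
    The comparison described above can be sketched as follows: paired device/video classifications tallied into a 2×2 confusion matrix, from which the usual performance measures follow. The observation pairs are illustrative, not the study's data:

```python
# Confusion matrix from paired device/video classifications
# ("feeding" vs "other"), plus derived performance measures.
def confusion_matrix(device, video, positive="feeding"):
    tp = fp = fn = tn = 0
    for d, v in zip(device, video):
        if v == positive:
            tp += d == positive   # device agrees with video "feeding"
            fn += d != positive   # device missed a feeding bout
        else:
            fp += d == positive   # device reported feeding falsely
            tn += d != positive   # both say "other"
    return tp, fp, fn, tn

video  = ["feeding", "feeding", "other", "other", "feeding", "other"]
device = ["feeding", "other",   "other", "feeding", "feeding", "other"]
tp, fp, fn, tn = confusion_matrix(device, video)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fp + fn + tn)
```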

  8. Effective-field renormalization-group method for Ising systems

    NASA Astrophysics Data System (ADS)

    Fittipaldi, I. P.; De Albuquerque, D. F.

    1992-02-01

    A new, readily applicable effective-field renormalization-group (EFRG) scheme for computing critical properties of Ising spin systems is proposed and used to study the phase diagrams of a quenched bond-mixed spin Ising model on square and Kagomé lattices. The present EFRG approach yields results that improve substantially on those obtained from the standard mean-field renormalization-group (MFRG) method. In particular, it is shown that the EFRG scheme correctly distinguishes the geometry of the lattice structure even when working with the smallest possible clusters, namely N'=1 and N=2.

  9. Focus groups: a useful tool for curriculum evaluation.

    PubMed

    Frasier, P Y; Slatt, L; Kowlowitz, V; Kollisch, D O; Mintzer, M

    1997-01-01

    Focus group interviews have been used extensively in health services program planning, health education, and curriculum planning. However, with the exception of a few reports describing the use of focus groups for a basic science course evaluation and a clerkship's impact on medical students, the potential of focus groups as a tool for curriculum evaluation has not been explored. Focus groups are a valid stand-alone evaluation process, but they are most often used in combination with other quantitative and qualitative methods. Focus groups rely heavily on group interaction, combining elements of individual interviews and participant observation. This article compares the focus group interview with both quantitative and qualitative methods; discusses when to use focus group interviews; outlines a protocol for conducting focus groups, including a comparison of various styles of qualitative data analysis; and offers a case study, in which focus groups evaluated the effectiveness of a pilot preclinical curriculum.

  10. Fractal Clustering and Knowledge-driven Validation Assessment for Gene Expression Profiling.

    PubMed

    Wang, Lu-Yong; Balasubramanian, Ammaiappan; Chakraborty, Amit; Comaniciu, Dorin

    2005-01-01

    DNA microarray experiments generate a substantial amount of information about global gene expression. Gene expression profiles can be represented as points in multi-dimensional space, and it is essential in biomedical research to identify relevant groups of genes. Clustering is helpful for pattern recognition in gene expression profiles, and a number of clustering techniques have been introduced. However, these traditional methods mainly rely on shape-based assumptions or some distance metric to cluster the points in a multi-dimensional linear Euclidean space, and their results show poor consistency with the functional annotation of genes in previous validation studies. From a different perspective, we propose a fractal clustering method that clusters genes using the intrinsic (fractal) dimension from modern geometry. This method clusters points in such a way that points in the same cluster are more self-affine among themselves than with points in other clusters. We assess this method using annotation-based validation for gene clusters, which shows that it is superior to other traditional methods in identifying functionally related gene groups.
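
    The intrinsic (fractal) dimension the method exploits can be estimated from pairwise distances. A generic correlation-sum sketch, not the paper's algorithm: the slope of log C(r) against log r approximates the intrinsic dimension of a point set:

```python
# Correlation-sum estimate of intrinsic dimension: C(r) is the fraction
# of point pairs closer than r; the log-log slope approximates the
# dimension. Test data: a 1-D line embedded in 3-D (dimension ~ 1).
import math
import random

def correlation_dim(points, r1, r2):
    """Estimate intrinsic dimension from correlation sums at radii r1 < r2."""
    def corr_sum(r):
        n = len(points)
        close = sum(
            1
            for i in range(n)
            for j in range(i + 1, n)
            if math.dist(points[i], points[j]) < r
        )
        return close / (n * (n - 1) / 2)
    return math.log(corr_sum(r2) / corr_sum(r1)) / math.log(r2 / r1)

random.seed(0)
# Points on a line in 3-D: embedding dimension 3, intrinsic dimension ~ 1
line = [(t, 2 * t, 3 * t) for t in (random.random() for _ in range(300))]
dim = correlation_dim(line, 0.1, 0.4)
```

    A shape-based or Euclidean-distance clusterer sees these points as three-dimensional; a fractal-dimension view recognizes the one-dimensional structure, which is the intuition behind clustering by self-affinity.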

  11. Development of Creative Behavior Observation Form: A Study on Validity and Reliability

    ERIC Educational Resources Information Center

    Dere, Zeynep; Ömeroglu, Esra

    2018-01-01

    In this study, the Creative Behavior Observation Form was developed to assess children's creativity. The study group for the reliability and validity work on the Creative Behavior Observation Form comprised a total of 257 children aged 5-6, sampled with a stratified sampling method. Content Validity Index (CVI) and…

  12. Grey situation group decision-making method based on prospect theory.

    PubMed

    Zhang, Na; Fang, Zhigeng; Liu, Xiaqing

    2014-01-01

    This paper puts forward a grey situation group decision-making method based on prospect theory, addressing grey situation group decision-making problems in which decisions are made by multiple experts who have risk preferences. The method takes the positive and negative ideal situation distances as reference points, defines positive and negative prospect value functions, and introduces the experts' risk preferences into grey situation decision-making so that the final decision is more in line with the experts' psychological behavior. Based on the TOPSIS method, the paper determines the weight of each decision expert, sets up a comprehensive prospect value matrix for the experts' evaluations, and finally determines the optimal situation. The effectiveness and feasibility of the method are verified by means of a specific example.
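
    The prospect value function at the heart of such methods values outcomes relative to a reference point, weighting losses more heavily than gains. A hedged sketch: the parameter values (α = β = 0.88, λ = 2.25, the classic Tversky–Kahneman estimates) are our assumption; the paper derives its reference points from the positive and negative ideal situations:

```python
# Prospect-theory value function: concave for gains, convex and steeper
# for losses relative to a reference point. Parameters are the classic
# Tversky-Kahneman estimates, assumed here for illustration.
def prospect_value(x, ref, alpha=0.88, beta=0.88, lam=2.25):
    """Value of outcome x relative to reference point ref."""
    d = x - ref
    if d >= 0:
        return d ** alpha           # gain: diminishing sensitivity
    return -lam * (-d) ** beta      # loss: loss aversion (lam > 1)

# Loss aversion: a loss of 1 hurts more than a gain of 1 pleases
gain = prospect_value(1.0, 0.0)
loss = prospect_value(-1.0, 0.0)
```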

  13. QuEChERS GC-MS validation and monitoring of pesticide residues in different foods in the tomato classification group.

    PubMed

    Ramírez Restrepo, Andrés; Gallo Ortiz, Andrés Fernando; Hoyos Ossa, Duvan Esteban; Peñuela Mesa, Gustavo Antonio

    2014-09-01

    The objective of this study was to validate (SANCO/12495/2011 and NTC-ISO/IEC 17025) multi-residue multi-class methods using QuEChERS sample preparation and GC-MS for the analysis of regulated pesticides in tomatoes (Solanum lycopersicum), tamarillos (Solanum betaceum) and goldenberries (Physalis peruviana). These Latin American products are representative and widely produced in Antioquia (Colombia). Sample preparation followed the UNE-EN 15662 method (150 mg MgSO4, 25 mg of primary secondary amine and 25 mg of octadecylsiloxane for cleanup; graphitized carbon black was added for tomatoes). Extracts were injected using a programmed temperature-vaporizing injector. The residues were validated over a range from 0.02 mg/kg to 0.20 mg/kg, with 24 analytes validated in tomatoes, 33 in tamarillos and 28 in goldenberries. An initial risk assessment was enabled by monitoring 24 samples in the municipalities of El Peñol, Marinilla and San Vicente Ferrer. Risks were found for tomatoes, but no significant risks were found for tamarillos or goldenberries. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Validation of finite element and boundary element methods for predicting structural vibration and radiated noise

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Wu, X. F.; Oswald, Fred B.

    1992-01-01

    Analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise radiated from the box. The FEM was used to predict the vibration, and the surface vibration was used as input to the BEM to predict the sound intensity and sound power. Vibration predicted by the FEM model was validated by experimental modal analysis. Noise predicted by the BEM was validated by sound intensity measurements. Three types of results are presented for the total radiated sound power: (1) sound power predicted by the BEM modeling using vibration data measured on the surface of the box; (2) sound power predicted by the FEM/BEM model; and (3) sound power measured by a sound intensity scan. The sound power predicted from the BEM model using measured vibration data yields an excellent prediction of radiated noise. The sound power predicted by the combined FEM/BEM model also gives a good prediction of radiated noise except for a shift of the natural frequencies that are due to limitations in the FEM model.

  15. Comparative Study in Laboratory Rats to Validate Sperm Quality Methods and Endpoints

    NASA Technical Reports Server (NTRS)

    Price, W. A.; Briggs, G. B.; Alexander, W. K.; Still, K. R.; Grasman, K. A.

    2000-01-01

    The Naval Health Research Center, Detachment (Toxicology) performs toxicity studies in laboratory animals to characterize the risk of exposure to chemicals of Navy interest. Research was conducted at the Toxicology Detachment at WPAFB, OH, in collaboration with the Wright State University Department of Biological Sciences, for the validation of new bioassay methods for evaluating reproductive toxicity. The Hamilton Thorne sperm analyzer was used to evaluate sperm damage produced by exposure to a known testicular toxic agent, methoxyacetic acid, and by inhalation exposure to JP-8 and JP-5 in laboratory rats. Sperm quality parameters (sperm concentration, motility, and morphology) were evaluated to provide evidence of sperm damage. The Hamilton Thorne sperm analyzer utilizes a DNA-specific fluorescent stain (similar to flow cytometry) and digitized optical computer analysis to detect sperm cell damage. Computer-assisted sperm analysis (CASA) is a more rapid, robust, predictive and sensitive method for characterizing reproductive toxicity. The results presented in this poster report validation information showing that exposure to methoxyacetic acid causes reproductive toxicity and that inhalation exposure to JP-8 and JP-5 had no significant effects. The CASA method detects early changes that result in reproductive deficits, and these data will be used in a continuing program to characterize the toxicity of chemicals, and combinations of chemicals, of military interest and to formulate permissible exposure limits.

  16. FDIR Strategy Validation with the B Method

    NASA Astrophysics Data System (ADS)

    Sabatier, D.; Dellandrea, B.; Chemouil, D.

    2008-08-01

    In a formation-flying satellite system, the FDIR strategy (Failure Detection, Isolation and Recovery) is paramount. When a failure occurs, satellites should be able to take appropriate reconfiguration actions to obtain the best possible result given the failure, ranging from avoiding satellite-to-satellite collision to continuing the mission without disturbance if possible. To achieve this goal, each satellite in the formation implements an FDIR strategy that governs how it detects failures (from tests or by deduction) and how it reacts (reconfiguration using redundant equipment, avoidance manoeuvres, etc.). The goal is to protect the satellites first and the mission as much as possible. In a project initiated by CNES, ClearSy is experimenting with the B Method to validate the FDIR strategies, developed by Thales Alenia Space, for the inter-satellite positioning and communication devices that will be used in the SIMBOL-X (2-satellite configuration) and PEGASE (3-satellite configuration) missions, and potentially for other missions afterward. These radio-frequency metrology sensor devices provide satellite positioning and inter-satellite communication in formation flying. This article presents the results of this experiment.

  17. A method for validation of finite element forming simulation on basis of a pointwise comparison of distance and curvature

    NASA Astrophysics Data System (ADS)

    Dörr, Dominik; Joppich, Tobias; Schirmaier, Fabian; Mosthaf, Tobias; Kärger, Luise; Henning, Frank

    2016-10-01

    Thermoforming of continuously fiber-reinforced thermoplastics (CFRTP) is ideally suited to thin-walled and complex-shaped products. By means of forming simulation, an initial assessment of the producibility of a specific geometry, an optimization of the forming process and the prediction of fiber reorientation due to forming are possible. Nevertheless, the applied methods need to be validated. Therefore, a method is presented that enables the calculation of error measures for the mismatch between simulation results and experimental tests, based on measurements with a conventional coordinate measuring device. As a quantitative measure describing the curvature is provided, the presented method is also suitable for numerical or experimental sensitivity studies on wrinkling behavior. The applied methods for forming simulation, implemented in Abaqus explicit, are presented and applied to a generic geometry. The same geometry is tested experimentally, and simulation and test results are compared by the proposed validation method.

  18. Validity of endothelial cell analysis methods and recommendations for calibration in Topcon SP-2000P specular microscopy.

    PubMed

    van Schaick, Willem; van Dooren, Bart T H; Mulder, Paul G H; Völker-Dieben, Hennie J M

    2005-07-01

    To report on the calibration of the Topcon SP-2000P specular microscope and the Endothelial Cell Analysis Module of the IMAGEnet 2000 software, and to establish the validity of the different endothelial cell density (ECD) assessment methods available in these instruments. Using an external microgrid, we calibrated the magnification of the SP-2000P and the IMAGEnet software. In both eyes of 36 volunteers, we validated 4 ECD assessment methods by comparing them to the gold standard manual ECD, manual counting of cells on a video print. These methods were: the estimated ECD, estimation of ECD with a reference grid on the camera screen; the SP-2000P ECD, pointing out whole contiguous cells on the camera screen; the uncorrected IMAGEnet ECD, using automatically drawn cell borders; and the corrected IMAGEnet ECD, with manual correction of incorrectly drawn cell borders in the automated analysis. Validity of each method was evaluated by calculating both the mean difference with the manual ECD and the limits of agreement as described by Bland and Altman. Preset factory values of magnification were incorrect, resulting in errors in ECD of up to 9%. All assessments except 1 of the estimated ECDs differed significantly from manual ECDs, with most differences being similar (≤6.5%), except for the uncorrected IMAGEnet ECD (30.2%). The corrected IMAGEnet ECD showed the narrowest limits of agreement (-4.9 to +19.3%). We advise checking the calibration of magnification in any specular microscope or endothelial analysis software as it may be erroneous. The corrected IMAGEnet ECD is the most valid of the investigated methods in the Topcon SP-2000P/IMAGEnet 2000 combination.
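
    The Bland–Altman analysis used above reduces to two quantities per method: the mean difference (bias) against the gold standard and the 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch with illustrative ECD values:

```python
# Bland-Altman agreement analysis: bias and 95% limits of agreement
# between a candidate method and a gold standard. Data are illustrative.
import statistics

def bland_altman(method, gold):
    """Return bias and (lower, upper) 95% limits of agreement."""
    diffs = [m - g for m, g in zip(method, gold)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

manual   = [2500, 2600, 2400, 2700, 2550]   # cells/mm^2, gold standard
candidate = [2520, 2590, 2430, 2720, 2560]  # e.g. a software-derived ECD
bias, (lo, hi) = bland_altman(candidate, manual)
```

    Narrow limits of agreement around a small bias are what makes one method "most valid" in this framework; a large bias (like the 30.2% of the uncorrected analysis) shows up immediately.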

  19. Validation on milk and sprouts of EN ISO 16654:2001 - Microbiology of food and animal feeding stuffs - Horizontal method for the detection of Escherichia coli O157.

    PubMed

    Tozzoli, Rosangela; Maugliani, Antonella; Michelacci, Valeria; Minelli, Fabio; Caprioli, Alfredo; Morabito, Stefano

    2018-05-08

    In 2006, the European Committee for standardisation (CEN)/Technical Committee 275 - Food analysis - Horizontal methods/Working Group 6 - Microbiology of the food chain (TC275/WG6), launched the project of validating the method ISO 16654:2001 for the detection of Escherichia coli O157 in foodstuff by the evaluation of its performance, in terms of sensitivity and specificity, through collaborative studies. Previously, a validation study had been conducted to assess the performance of the Method No 164 developed by the Nordic Committee for Food Analysis (NMKL), which aims at detecting E. coli O157 in food as well, and is based on a procedure equivalent to that of the ISO 16654:2001 standard. Therefore, CEN established that the validation data obtained for the NMKL Method 164 could be exploited for the ISO 16654:2001 validation project, integrated with new data obtained through two additional interlaboratory studies on milk and sprouts, run in the framework of the CEN mandate No. M381. The ISO 16654:2001 validation project was led by the European Union Reference Laboratory for Escherichia coli including VTEC (EURL-VTEC), which organized the collaborative validation study on milk in 2012 with 15 participating laboratories and that on sprouts in 2014, with 14 participating laboratories. In both studies, a total of 24 samples were tested by each laboratory. Test materials were spiked with different concentrations of E. coli O157, and the 24 samples corresponded to eight replicates of three levels of contamination: zero, low and high spiking level. The results submitted by the participating laboratories were analyzed to evaluate the sensitivity and specificity of the ISO 16654:2001 method when applied to milk and sprouts. The performance characteristics calculated on the data of the collaborative validation studies run under the CEN mandate No. M381 returned sensitivity and specificity of 100% and 94.4%, respectively for the milk study. As for sprouts matrix, the sensitivity

  20. Contributing to the ICNP: validating the term cultural diversity.

    PubMed

    Geyer, N; Peu, M D; Roussouw, S; Morudi, J; Uys, E

    2005-05-01

    The specific aims of this study were to: Propose a definition of the term cultural diversity; Validate the term cultural diversity; and Submit a term and definition for international utilisation to the International Council of Nurses (ICN) for consideration for inclusion in the ICNP. South Africa was one of four African countries (Botswana, South Africa, Swaziland, and Zimbabwe) funded by the WK Kellogg Foundation to participate in the ICNP project. South Africa had 2 research groups. One of the research groups identified the term cultural diversity to define. This was a qualitative study where a philosophical perspective was used to explore, explain and describe nursing practice. The combined method proposed by the International Council of Nurses (ICN) was utilised to define and validate the term cultural diversity. Validation and literature review provided sufficient support for the defined characteristics and the term was finally defined and submitted to ICN in November 2002 as: CULTURAL DIVERSITY is a type of CULTURE with the specific characteristics: co-existence of different groups, e.g. ethnic, religious, linguistic and other groups each with their own values and belief systems, traditions and different lifestyles. The research group was informed in December 2003 of the ICNP Evaluation Committee recommendation that the term cultural diversity will be included in the ICNP.

  1. Identifying areas with vitamin A deficiency: the validity of a semiquantitative food frequency method.

    PubMed

    Sloan, N L; Rosen, D; de la Paz, T; Arita, M; Temalilwa, C; Solomons, N W

    1997-02-01

    The prevalence of vitamin A deficiency has traditionally been assessed through xerophthalmia or biochemical surveys. The cost and complexity of implementing these methods limit the ability of nonresearch organizations to identify vitamin A deficiency. This study examined the validity of a simple, inexpensive food frequency method to identify areas with a high prevalence of vitamin A deficiency. The validity of the method was tested in 15 communities, 5 each from the Philippines, Guatemala, and Tanzania. Serum retinol concentrations of less than 20 micrograms/dL defined vitamin A deficiency. Weighted measures of vitamin A intake six or fewer times per week and unweighted measures of consumption of animal sources of vitamin A four or fewer times per week correctly classified seven of eight communities as having a high prevalence of vitamin A deficiency (i.e., 15% or more of preschool-aged children in the community had the deficiency) (sensitivity = 87.5%) and four of seven communities as having a low prevalence (specificity = 57.1%). This method correctly classified the vitamin A deficiency status of 73.3% of the communities but demonstrated a high false-positive rate (42.9%).
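
    The reported figures follow directly from the community-level classification counts. A sketch reproducing the abstract's outcome pattern (7/8 truly high-prevalence communities flagged, 4/7 truly low-prevalence communities cleared); the per-community assignment order is illustrative:

```python
# Screening performance from binary community classifications:
# sensitivity, specificity and false-positive rate. The outcome counts
# match the abstract (7/8 true positives, 4/7 true negatives).
def screen_performance(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    fp = sum(p and (not t) for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "false_positive_rate": fp / (fp + tn),
    }

# True = high-prevalence community; 8 truly high, 7 truly low
truth = [True] * 8 + [False] * 7
pred  = [True] * 7 + [False] + [True] * 3 + [False] * 4
perf = screen_performance(pred, truth)
```

    With these counts, 11 of 15 communities (73.3%) are classified correctly, matching the abstract's overall figure.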

  2. Validation of Diagnostic Measures Based on Latent Class Analysis: A Step Forward in Response Bias Research

    ERIC Educational Resources Information Center

    Thomas, Michael L.; Lanyon, Richard I.; Millsap, Roger E.

    2009-01-01

    The use of criterion group validation is hindered by the difficulty of classifying individuals on latent constructs. Latent class analysis (LCA) is a method that can be used for determining the validity of scales meant to assess latent constructs without such a priori classifications. The authors used this method to examine the ability of the L…

  3. Validation and Recommendation of Methods to Measure Biogas Production Potential of Animal Manure

    PubMed Central

    Pham, C. H.; Triolo, J. M.; Cu, T. T. T.; Pedersen, L.; Sommer, S. G.

    2013-01-01

    In developing countries, biogas energy production is seen as a technology that can provide clean energy in poor regions and reduce pollution caused by animal manure. Laboratories in these countries have little access to advanced gas measuring equipment, which may limit research aimed at improving local adapted biogas production. They may also be unable to produce valid estimates of an international standard that can be used for articles published in international peer-reviewed science journals. This study tested and validated methods for measuring total biogas and methane (CH4) production using batch fermentation and for characterizing the biomass. The biochemical methane potential (BMP) (CH4 NL kg⁻¹ VS) of pig manure, cow manure and cellulose determined with the Moller and VDI methods was not significantly different in this test (p>0.05). The biodegradability using a ratio of BMP and theoretical BMP (TBMP) was slightly higher using the Hansen method, but differences were not significant. Degradation rate assessed by methane formation rate showed wide variation within the batch method tested. The first-order kinetics constant k for the cumulative methane production curve was highest when two animal manures were fermented using the VDI 4630 method, indicating that this method was able to reach steady conditions in a shorter time, reducing fermentation duration. In precision tests, the repeatability of the relative standard deviation (RSDr) for all batch methods was very low (4.8 to 8.1%), while the reproducibility of the relative standard deviation (RSDR) varied widely, from 7.3 to 19.8%. In determination of biomethane concentration, the values obtained using the liquid replacement method (LRM) were comparable to those obtained using gas chromatography (GC). This indicates that the LRM method could be used to determine biomethane concentration in biogas in laboratories with limited access to GC. PMID:25049861

  4. Intensity non-uniformity correction in MRI: existing methods and their validation.

    PubMed

    Belaroussi, Boubakeur; Milles, Julien; Carme, Sabin; Zhu, Yue Min; Benoit-Cattin, Hugues

    2006-04-01

    Magnetic resonance imaging is a popular and powerful non-invasive imaging technique. Automated analysis has become mandatory to efficiently cope with the large amount of data generated using this modality. However, several artifacts, such as intensity non-uniformity, can degrade the quality of acquired data. Intensity non-uniformity consists in anatomically irrelevant intensity variation throughout data. It can be induced by the choice of the radio-frequency coil, the acquisition pulse sequence and by the nature and geometry of the sample itself. Numerous methods have been proposed to correct this artifact. In this paper, we propose an overview of existing methods. We first sort them according to their location in the acquisition/processing pipeline. Sorting is then refined based on the assumptions those methods rely on. Next, we present the validation protocols used to evaluate these different correction schemes both from a qualitative and a quantitative point of view. Finally, availability and usability of the presented methods is discussed.

  5. Program to analyze aquifer test data and check for validity with the jacob method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Field, M.S.

    1993-01-01

    The Jacob straight-line method of aquifer analysis deals with the late-time data and small radius of the Theis type curve, which plot as a straight line if the drawdown data are plotted on an arithmetic scale and the time data on a logarithmic (base 10) scale. Correct analysis with the Jacob method normally assumes that (1) the data lie on a straight line, (2) the value of the dimensionless time factor is less than 0.01, and (3) the site's hydrogeology conforms to the method's assumptions and limiting conditions. Items 1 and 2 are usually considered for the Jacob method, but item 3 is often ignored, which can lead to incorrect calculations of aquifer parameters. A BASIC computer program was developed to analyze aquifer test data with the Jacob method and to test the validity of its use. Aquifer test data are entered into the program and manipulated so that the slope and time intercept of the straight line drawn through the data (excluding early-time and late-time data) can be used to calculate transmissivity and storage coefficient. Late-time data are excluded to eliminate the effects of positive and negative boundaries. The time-drawdown data are then converted into dimensionless units to determine whether the Jacob method's assumptions are valid for the hydrogeologic conditions under which the test was conducted.
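
    The computations the program automates are compact: transmissivity from the drawdown-per-log-cycle slope, storage coefficient from the zero-drawdown time intercept, and the u < 0.01 validity check on the dimensionless time factor. A sketch in SI units with illustrative inputs (the original program was in BASIC):

```python
# Jacob straight-line analysis: T from the semilog slope, S from the
# zero-drawdown intercept, and the u < 0.01 validity check.
import math

def jacob(Q, slope, t0, r):
    """Q: pumping rate (m^3/s); slope: drawdown per log10 cycle (m);
    t0: zero-drawdown time intercept (s); r: radial distance (m)."""
    T = 2.303 * Q / (4 * math.pi * slope)   # transmissivity, m^2/s
    S = 2.25 * T * t0 / r ** 2              # storage coefficient
    return T, S

def u_factor(r, S, T, t):
    """Dimensionless time factor; the Jacob method is valid when u < 0.01."""
    return r ** 2 * S / (4 * T * t)

# Illustrative test: 10 L/s pumping, 0.30 m drawdown per log cycle,
# intercept at 60 s, observation well 30 m away, checked at t = 1 day
T, S = jacob(Q=0.01, slope=0.30, t0=60.0, r=30.0)
valid = u_factor(30.0, S, T, t=86400.0) < 0.01
```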

  6. Pesticide applicators questionnaire content validation: A fuzzy delphi method.

    PubMed

    Manakandan, S K; Rosnah, I; Mohd Ridhuan, J; Priya, R

    2017-08-01

    The most crucial step in forming a survey questionnaire is deciding on the appropriate items in a construct: retaining irrelevant items or removing important items will mislead the direction of a study. This article demonstrates the Fuzzy Delphi method as a scientific analysis technique to consolidate consensus agreement within a panel of experts on each item's appropriateness. The method reduces the ambiguity, diversity, and discrepancy of the opinions among the experts and hence enhances the quality of the selected items. The main purpose of this study was to obtain experts' consensus on the suitability of the preselected items in the questionnaire. The panel consisted of sixteen experts from the Occupational and Environmental Health Unit of the Ministry of Health, the Vector-borne Disease Control Unit of the Ministry of Health, and the Occupational and Safety Health Units of both public and private universities. A set of questionnaires related to noise and chemical exposure was compiled based on a literature search. There was a total of six constructs with 60 items: three constructs for knowledge, attitude, and practice of noise exposure and three for knowledge, attitude, and practice of chemical exposure. The validation process replicated a recent Fuzzy Delphi method using the concept of Triangular Fuzzy Numbers and a Defuzzification process. A 100% response rate was obtained from all sixteen experts, with average Likert scores of four to five. In the post-FDM analysis, the first prerequisite was fulfilled with a threshold value (d) ≤ 0.2, hence all six constructs were accepted. For the second prerequisite, three items (21%) from the noise-attitude construct and four items (40%) from the chemical-practice construct had expert consensus of less than 75%, amounting to about 12% of the total items in the questionnaire. The third prerequisite was used to rank the items within the constructs by calculating the average
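
    The two prerequisites mentioned (threshold d ≤ 0.2, expert consensus ≥ 75%) can be sketched under common Fuzzy Delphi conventions; the Likert-to-triangular-fuzzy-number mapping below is one widely used scheme and an assumption on our part, not the paper's exact tables:

```python
# Fuzzy Delphi sketch: map Likert ratings to triangular fuzzy numbers
# (TFNs), measure each expert's distance d to the group-average TFN,
# check consensus, and defuzzify by simple averaging for ranking.
import math

# Likert 1-5 mapped to TFNs (l, m, u) -- one common convention, assumed
TFN = {1: (0.0, 0.0, 0.2), 2: (0.0, 0.2, 0.4), 3: (0.2, 0.4, 0.6),
       4: (0.4, 0.6, 0.8), 5: (0.6, 0.8, 1.0)}

def fuzzy_delphi(ratings):
    tfns = [TFN[r] for r in ratings]
    avg = tuple(sum(v) / len(v) for v in zip(*tfns))
    dists = [math.sqrt(sum((a - b) ** 2 for a, b in zip(t, avg)) / 3)
             for t in tfns]
    d_mean = sum(dists) / len(dists)                   # threshold value d
    consensus = sum(d <= 0.2 for d in dists) / len(dists)
    score = sum(avg) / 3                               # defuzzified value
    return d_mean, consensus, score

d_mean, consensus, score = fuzzy_delphi([5, 5, 4, 5, 4, 5])
accepted = d_mean <= 0.2 and consensus >= 0.75
```

    An item is retained when both prerequisites hold; the defuzzified score then ranks retained items within a construct.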

  7. Development and validation of reversed phase high performance liquid chromatography method for determination of dexpanthenol in pharmaceutical formulations.

    PubMed

    Kulikov, A U; Zinchenko, A A

    2007-02-19

    This paper describes the validation of an isocratic HPLC method for the assay of dexpanthenol in aerosol and gel. The method employs a Vydac Proteins C4 column with a mobile phase of aqueous trifluoroacetic acid solution and UV detection at 206 nm. A linear response (r>0.9999) was observed in the range of 13.0-130 microg mL(-1). The method shows good recoveries, and intra- and inter-day relative standard deviations were less than 1.0%. Validation parameters such as specificity, accuracy and robustness were also determined. The method can be used for the assay of dexpanthenol in panthenol aerosol and gel formulations, as it separates dexpanthenol from the aerosol or gel excipients.

  8. Improvement and validation of the method to determine neutral detergent fiber in feed.

    PubMed

    Hiraoka, Hisaaki; Fukunaka, Rie; Ishikuro, Eiichi; Enishi, Osamu; Goto, Tetsuhisa

    2012-10-01

    To improve the performance of the analytical method for neutral detergent fiber in feed with heat-stable α-amylase treatment (aNDFom), the process of adding the heat-stable α-amylase, as well as other analytical conditions, was examined. In the new process, starch in the samples was removed by adding amylase to the neutral detergent (ND) solution twice: just after the start of heating and immediately after refluxing. We also examined the effects of the use of sodium sulfite, and the drying and ashing conditions, for aNDFom analysis by this modified amylase-addition method. A collaborative study to validate the new method was carried out with 15 laboratories, which analyzed two samples, alfalfa pellet and dairy mixed feed, as blind duplicates. Ten laboratories used a conventional apparatus and five used a Fibertec(®)-type apparatus; there were no significant differences in aNDFom values between the two refluxing apparatuses. The aNDFom values in alfalfa pellet and dairy mixed feed were 388 g/kg and 145 g/kg, the coefficients of variation for repeatability and reproducibility (CV(r) and CV(R)) were 1.3% and 2.9%, and the HorRat values were 0.8 and 1.1, respectively. The new method was validated with 5.8% uncertainty (k = 2) from the collaborative study. © 2012 The Authors. Animal Science Journal © 2012 Japanese Society of Animal Science.
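    HorRat values like those cited can be checked against the reported figures, assuming the standard Horwitz relation PRSD_R(%) = 2·C^(-0.15), with C the analyte mass fraction. The sketch below is illustrative, not the authors' computation:

```python
def horrat(rsd_r_percent, mass_fraction):
    """HorRat = observed reproducibility RSD divided by the Horwitz-predicted RSD.

    Horwitz relation (standard form): PRSD_R(%) = 2 * C**(-0.15),
    where C is the analyte mass fraction (e.g. 145 g/kg -> 0.145).
    Values of roughly 0.5-2 are conventionally taken as acceptable.
    """
    prsd = 2.0 * mass_fraction ** -0.15
    return rsd_r_percent / prsd

# Dairy mixed feed level from the abstract: aNDFom 145 g/kg, CV(R) = 2.9%.
print(round(horrat(2.9, 0.145), 1))  # about 1.1, in line with the reported value
```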

  9. MRPrimer: a MapReduce-based method for the thorough design of valid and ranked primers for PCR

    PubMed Central

    Kim, Hyerin; Kang, NaNa; Chon, Kang-Wook; Kim, Seonho; Lee, NaHye; Koo, JaeHyung; Kim, Min-Soo

    2015-01-01

    Primer design is a fundamental technique that is widely used for the polymerase chain reaction (PCR). Although many methods have been proposed for primer design, they require a great deal of manual effort to generate feasible and valid primers, including homology tests on off-target sequences using BLAST-like tools. That approach becomes impractical when many target sequences for quantitative PCR (qPCR) must be considered under the same stringent, allele-invariant constraints. To address this issue, we propose an entirely new method called MRPrimer that can design all feasible and valid primer pairs existing in a DNA database at once, while simultaneously checking a multitude of filtering constraints and validating primer specificity. Furthermore, MRPrimer suggests the best primer pair for each target sequence based on a ranking method. Through qPCR analysis using 343 primer pairs and the corresponding sequencing and comparative analyses, we showed that the primer pairs designed by MRPrimer are very stable and effective for qPCR. In addition, MRPrimer is computationally efficient and scalable and therefore useful for quickly constructing an entire collection of feasible and valid primers for frequently updated databases like RefSeq. Furthermore, we suggest that MRPrimer can be utilized conveniently for experiments requiring primer design, especially real-time qPCR. PMID:26109350

  10. Grey Situation Group Decision-Making Method Based on Prospect Theory

    PubMed Central

    Zhang, Na; Fang, Zhigeng; Liu, Xiaqing

    2014-01-01

    This paper puts forward a grey situation group decision-making method based on prospect theory, addressing problems in which decisions are made by multiple decision experts who have risk preferences. The method takes the positive and negative ideal situation distances as reference points, defines positive and negative prospect value functions, and introduces the decision experts' risk preferences into grey situation decision-making so that the final decision is more in line with the experts' psychological behavior. Based on the TOPSIS method, the paper determines the weight of each decision expert, sets up a comprehensive prospect value matrix for the experts' evaluations, and finally determines the optimal situation. A specific example verifies the effectiveness and feasibility of the method. PMID:25197706
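    The reference-point idea can be sketched as follows. Distance to the negative ideal situation is treated as a gain and distance to the positive ideal as a loss; the value-function parameters α = β = 0.88 and λ = 2.25 are the classic Tversky and Kahneman (1992) estimates, an assumption here since the abstract does not state the ones used:

```python
def prospect_value(d_pos, d_neg, alpha=0.88, beta=0.88, lam=2.25):
    """Combined prospect value of an alternative.

    d_pos: distance to the positive ideal situation (a loss: larger is worse).
    d_neg: distance to the negative ideal situation (a gain: larger is better).
    lam > 1 encodes loss aversion; alpha, beta < 1 encode diminishing sensitivity.
    """
    gain = d_neg ** alpha           # positive prospect value
    loss = -lam * d_pos ** beta     # negative prospect value
    return gain + loss

# An alternative close to the positive ideal outranks one far from it:
near = prospect_value(d_pos=0.1, d_neg=0.9)
far = prospect_value(d_pos=0.9, d_neg=0.1)
assert near > far
```

In the paper's full scheme these per-expert values would then be combined into a comprehensive prospect value matrix using the TOPSIS-derived expert weights.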

  11. Validation of a method for assessing resident physicians' quality improvement proposals.

    PubMed

    Leenstra, James L; Beckman, Thomas J; Reed, Darcy A; Mundell, William C; Thomas, Kris G; Krajicek, Bryan J; Cha, Stephen S; Kolars, Joseph C; McDonald, Furman S

    2007-09-01

    Residency programs involve trainees in quality improvement (QI) projects to evaluate competency in systems-based practice and practice-based learning and improvement. Valid approaches to assess QI proposals are lacking. We developed an instrument for assessing resident QI proposals--the Quality Improvement Proposal Assessment Tool (QIPAT-7)--and determined its validity and reliability. QIPAT-7 content was initially obtained from a national panel of QI experts. Through an iterative process, the instrument was refined, pilot-tested, and revised. Seven raters used the instrument to assess 45 resident QI proposals. Principal factor analysis was used to explore the dimensionality of instrument scores. Cronbach's alpha and intraclass correlations were calculated to determine internal consistency and interrater reliability, respectively. QIPAT-7 items comprised a single factor (eigenvalue = 3.4), suggesting a single assessment dimension. Interrater reliability for each item (range 0.79 to 0.93) and internal consistency among the items (Cronbach's alpha = 0.87) were high. This method for assessing resident physician QI proposals is supported by content and internal structure validity evidence. QIPAT-7 is a useful tool for assessing resident QI proposals. Future research should determine the reliability of QIPAT-7 scores in other residency and fellowship training programs. Correlations should also be made between assessment scores and criteria for QI proposal success such as implementation of QI proposals, resident scholarly productivity, and improved patient outcomes.
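    Internal consistency figures like the Cronbach's alpha of 0.87 reported here follow the standard formula α = k/(k−1) · (1 − Σ item variances / variance of totals); a minimal self-contained sketch (the data layout is an illustrative assumption):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for internal consistency.

    scores: list of rows, one per rated proposal; each row holds the
    scores given on the k instrument items.
    """
    k = len(scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items yield alpha = 1:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```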

  12. Validation of an ultra-fast UPLC-UV method for the separation of antituberculosis tablets.

    PubMed

    Nguyen, Dao T-T; Guillarme, Davy; Rudaz, Serge; Veuthey, Jean-Luc

    2008-04-01

    A simple method using ultra-performance LC (UPLC) coupled with UV detection was developed and validated for the determination of antituberculosis drugs in combined dosage form, i.e. isoniazid (ISN), pyrazinamide (PYR) and rifampicin (RIF). Drugs were separated on a short column (2.1 mm x 50 mm) packed with 1.7 μm particles, using an elution gradient procedure. At 30 °C, less than 2 min was necessary for the complete separation of the three antituberculosis drugs, while the original USP method was performed in 15 min. Further improvements were obtained with the combination of UPLC and high temperature (up to 90 °C), namely HT-UPLC, which allows the application of higher mobile phase flow rates. Therefore, the separation of ISN, PYR and RIF was performed in less than 1 min. After validation (selectivity, trueness, precision and accuracy), both methods (UPLC and HT-UPLC) have proven suitable for the routine quality control analysis of antituberculosis drugs in combined dosage form. Additionally, a large number of samples per day can be analysed due to the short analysis times.

  13. Development and validation of an HPLC method to quantify camptothecin in polymeric nanocapsule suspensions.

    PubMed

    Granada, Andréa; Murakami, Fabio S; Sartori, Tatiane; Lemos-Senna, Elenara; Silva, Marcos A S

    2008-01-01

    A simple, rapid, and sensitive reversed-phase column high-performance liquid chromatographic method was developed and validated to quantify camptothecin (CPT) in polymeric nanocapsule suspensions. The chromatographic separation was performed on a Supelcosil LC-18 column (15 cm x 4.6 mm id, 5 microm) using a mobile phase consisting of methanol-10 mM KH2PO4 (60 + 40, v/v; pH 2.8) at a flow rate of 1.0 mL/min and ultraviolet detection at 254 nm. The calibration graph was linear from 0.5 to 3.0 microg/mL with a correlation coefficient of 0.9979, and the limit of quantitation was 0.35 microg/mL. The assay recovery ranged from 97.3 to 105.0%. The intraday and interday relative standard deviation values were < 5.0%. The validation results confirmed that the developed method is specific, linear, accurate, and precise for its intended use. The current method was successfully applied to the evaluation of CPT entrapment efficiency and drug content in polymeric nanocapsule suspensions during the early stage of formulation development.

  14. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions.

    PubMed

    Belal, Tarek S; El-Kafrawy, Dina S; Mahrous, Mohamed S; Abdel-Khalek, Magdi M; Abo-Gharam, Amira H

    2016-02-15

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions

    NASA Astrophysics Data System (ADS)

    Belal, Tarek S.; El-Kafrawy, Dina S.; Mahrous, Mohamed S.; Abdel-Khalek, Magdi M.; Abo-Gharam, Amira H.

    2016-02-01

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method.

  16. Methods for improved growth of group III nitride buffer layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melnik, Yurity; Chen, Lu; Kojiri, Hidehiro

    Methods are disclosed for growing high crystal quality group III-nitride epitaxial layers with advanced multiple buffer layer techniques. In an embodiment, a method includes forming group III-nitride buffer layers that contain aluminum on a suitable substrate in a processing chamber of a hydride vapor phase epitaxy processing system. A hydrogen halide or halogen gas is flowed into the growth zone during deposition of the buffer layers to suppress homogeneous particle formation. Some combinations of low temperature buffers that contain aluminum (e.g., AlN, AlGaN) and high temperature buffers that contain aluminum (e.g., AlN, AlGaN) may be used to improve crystal quality and morphology of subsequently grown group III-nitride epitaxial layers. The buffer may be deposited on the substrate, or on the surface of another buffer. The additional buffer layers may be added as interlayers in group III-nitride layers (e.g., GaN, AlGaN, AlN).

  17. Extensive validation of the pain disability index in 3 groups of patients with musculoskeletal pain.

    PubMed

    Soer, Remko; Köke, Albère J A; Vroomen, Patrick C A J; Stegeman, Patrick; Smeets, Rob J E M; Coppes, Maarten H; Reneman, Michiel F

    2013-04-20

    A cross-sectional study design was used to validate the pain disability index (PDI) extensively in 3 groups of patients with musculoskeletal pain. The PDI is a widely used and studied instrument for disability related to various pain syndromes, although there is conflicting evidence concerning its factor structure, test-retest reliability, and missing items. Additionally, an official translation into the Dutch language had never been performed. For reliability, internal consistency, factor structure, test-retest reliability and measurement error were calculated. Validity was tested with hypothesized correlations with pain intensity, kinesiophobia, Rand-36 subscales, depression, the Roland-Morris Disability Questionnaire, quality of life, and work status. Structural validity was tested with independent backward translation and approval from the original authors. One hundred seventy-eight patients with acute back pain, 425 patients with chronic low back pain and 365 with widespread pain were included. Internal consistency of the PDI was good. One factor was identified with factor analyses. Test-retest reliability was good (intraclass correlation coefficient, 0.76). The standard error of measurement was 6.5 points and the smallest detectable change was 17.9 points. Weak correlations of the PDI with kinesiophobia and depression, fair correlations with pain intensity, work status, and vitality, and moderate correlations with the Rand-36 subscales and the Roland-Morris Disability Questionnaire were observed. The PDI-Dutch language version is internally consistent as a 1-factor structure and is test-retest reliable. Missing responses appear frequent for the sexual and professional items. Using the PDI as a 2-factor questionnaire has no additional value and is unreliable.
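    The measurement-error figures above are linked by two conventional formulas: SEM = SD·√(1 − ICC) for the standard error of measurement, and SDC = 1.96·√2·SEM for the smallest detectable change. Plugging in SEM = 6.5 gives about 18.0 points, consistent with the reported 17.9 (the small gap presumably reflects an unrounded SEM). A minimal sketch, assuming these standard formulas:

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the score SD and test-retest ICC."""
    return sd * math.sqrt(1 - icc)

def sdc(sem_value):
    """Smallest detectable change (95% confidence) for a change score."""
    return 1.96 * math.sqrt(2) * sem_value

print(round(sdc(6.5), 1))  # about 18.0 points
```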

  18. Validation of a questionnaire method for estimating extent of menstrual blood loss in young adult women.

    PubMed

    Heath, A L; Skeaff, C M; Gibson, R S

    1999-04-01

    The objective of this study was to validate two indirect methods for estimating the extent of menstrual blood loss against a reference method, to determine which would be most appropriate for use in a population of young adult women. Thirty-two women aged 18 to 29 years (mean +/- SD, 22.4 +/- 2.8) were recruited by poster in Dunedin (New Zealand); data are presented for 29 women. A recall method and a record method for estimating the extent of menstrual loss were validated against a weighed reference method. The Spearman rank correlation coefficient between blood loss assessed by Weighed Menstrual Loss and by Menstrual Record was rs = 0.47 (p = 0.012), and that between Weighed Menstrual Loss and Menstrual Recall was rs = 0.61 (p = 0.001). The Record method correctly classified 66% of participants into the same tertile, grossly misclassifying 14%. The Recall method correctly classified 59% of participants, grossly misclassifying 7%. Reference-method menstrual loss calculated for surrogate categories demonstrated a significant difference between the second and third tertiles for the Record method, and between the first and third tertiles for the Recall method. The Menstrual Recall method can differentiate between low and high levels of menstrual blood loss in young adult women, is quick to complete and analyse, and has a low participant burden.
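    Spearman rank correlations like those reported (rs = 0.47 and 0.61) come from the classic rank-difference formula; a minimal sketch assuming no tied ranks (real menstrual-loss data would need a tie-handling implementation such as SciPy's):

```python
def spearman(x, y):
    """Spearman rank correlation for untied data:
    r_s = 1 - 6 * sum(d_i**2) / (n * (n**2 - 1)),
    where d_i is the difference between the ranks of x_i and y_i."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Perfectly monotone agreement gives +1; perfect reversal gives -1:
print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))
print(spearman([1, 2, 3, 4], [40, 30, 20, 10]))
```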

  19. Validation of the SCEC broadband platform V14.3 simulation methods using pseudo spectral acceleration data

    USGS Publications Warehouse

    Dreger, Douglas S.; Beroza, Gregory C.; Day, Steven M.; Goulet, Christine A.; Jordan, Thomas H; Spudich, Paul A.; Stewart, Jonathan P.

    2015-01-01

    This paper summarizes the evaluation of ground motion simulation methods implemented on the SCEC Broadband Platform (BBP), version 14.3 (as of March 2014). A seven-member panel, the authorship of this article, was formed to evaluate those methods for the prediction of pseudo-spectral accelerations (PSAs) of ground motion. The panel's mandate was to evaluate the methods using tools developed through the validation exercise (Goulet et al., 2014) and to define validation metrics for the assessment of the methods' performance. This paper summarizes the evaluation process and conclusions from the panel. The five broadband, finite-source simulation methods on the BBP include two deterministic approaches, herein referred to as CSM (Anderson, 2014) and UCSB (Crempien and Archuleta, 2014); a band-limited stochastic white-noise method called EXSIM (Atkinson and Assatourians, 2014); and two hybrid approaches, referred to as G&P (Graves and Pitarka, 2014) and SDSU (Olsen and Takedatsu, 2014), which utilize a deterministic Green's function approach for periods longer than 1 second and stochastic methods for periods shorter than 1 second. Two acceptance tests were defined to validate the broadband finite-source ground motion methods (Goulet et al., 2014). Part A compared observed and simulated PSAs for periods from 0.01 to 10 seconds for 12 moderate to large earthquakes located in California, Japan, and the eastern US. Part B compared the median simulated PSAs to published NGA-West1 (Abrahamson and Silva, 2008; Boore and Atkinson, 2008; Campbell and Bozorgnia, 2008; and Chiou and Youngs, 2008) ground motion prediction equations (GMPEs) for specific magnitude and distance cases using pass-fail criteria based on a defined acceptable range around the spectral shape of the GMPEs. For the initial Part A and Part B validation exercises during the summer of 2013, the software for the five methods was locked in at version 13.6 (see Maechling et al., 2014). In the

  20. Modification and Validation of Conceptual Design Aerodynamic Prediction Method HASC95 With VTXCHN

    NASA Technical Reports Server (NTRS)

    Albright, Alan E.; Dixon, Charles J.; Hegedus, Martin C.

    1996-01-01

    A conceptual/preliminary design level subsonic aerodynamic prediction code HASC (High Angle of Attack Stability and Control) has been improved in several areas, validated, and documented. The improved code includes improved methodologies for increased accuracy and robustness, and simplified input/output files. An engineering method called VTXCHN (Vortex Chine) for predicting nose vortex shedding from circular and non-circular forebodies with sharp chine edges has been improved and integrated into the HASC code. This report contains a summary of modifications, description of the code, user's guide, and validation of HASC. Appendices include discussion of a new HASC utility code, listings of sample input and output files, and a discussion of the application of HASC to buffet analysis.

  1. Development and initial validity of the in-hand manipulation assessment.

    PubMed

    Klymenko, Gabrielle; Liu, Karen P Y; Bissett, Michelle; Fong, Kenneth N K; Welage, Nandana; Wong, Rebecca S M

    2018-04-01

    A review of the literature related to in-hand manipulation (IHM) revealed that there is no assessment which specifically measures this construct in the adult population. This study reports the face and content validity of an IHM assessment for adults with impaired hand function, based on expert opinion. The definition of IHM skills, assessment tasks and scoring methods identified from the literature was discussed in a focus group (n = 4) to establish face validity. An expert panel (n = 16) reviewed the content validity of the proposed assessment, evaluating the representativeness and relevance with which the proposed assessment tasks encompassed the IHM skills, the clarity of each task and its importance to daily life, and the clarity of the scoring method and its applicability to the clinical environment. Content validity was calculated using the content validity index for both the individual tasks and all tasks together (I-CVI and S-CVI). Feedback was incorporated to create the assessment. The focus group members agreed to include 10 assessment tasks that covered all IHM skills. In the expert panel review, all tasks received an I-CVI above 0.78 and an S-CVI above 0.80 in representativeness and relevance ratings, representing good content validity. Based on the comments from the expert panel, tasks were modified to improve their clarity and importance to daily life. A four-point Likert scale was identified for assessing both the completion of the assessment tasks and the quality of IHM skills within the task performance. Face and content validity were established in this new IHM assessment. Further studies to examine psychometric properties and use within clinical practice are recommended. © 2018 Occupational Therapy Australia.
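    The I-CVI and S-CVI figures follow directly from the expert relevance ratings. The sketch below assumes the common convention (Lynn; Polit and Beck) of counting ratings of 3 or 4 on a 4-point scale as "relevant" and averaging I-CVIs for the scale-level index, which matches the 0.78 and 0.80 cut-offs cited:

```python
def i_cvi(ratings, relevant=(3, 4)):
    """Item-level content validity index: proportion of experts rating the
    item 3 or 4 on a 4-point relevance scale."""
    return sum(r in relevant for r in ratings) / len(ratings)

def s_cvi_ave(items_ratings):
    """Scale-level CVI (averaging approach): mean of the items' I-CVIs."""
    vals = [i_cvi(r) for r in items_ratings]
    return sum(vals) / len(vals)

# With 16 experts, 13 of whom rate an item relevant: I-CVI = 13/16 = 0.8125,
# clearing the conventional 0.78 threshold.
print(i_cvi([4] * 13 + [2] * 3))
```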

  2. A Case Study for Probabilistic Methods Validation (MSFC Center Director's Discretionary Fund, Project No. 94-26)

    NASA Technical Reports Server (NTRS)

    Price, J. M.; Ortega, R.

    1998-01-01

    Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool to estimate structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure mode level.

  3. Analytical method validation to evaluate dithiocarbamates degradation in biobeds in South of Brazil.

    PubMed

    Vareli, Catiucia S; Pizzutti, Ionara R; Gebler, Luciano; Cardoso, Carmem D; Gai, Daniela S H; Fontana, Marlos E Z

    2018-07-01

    In order to evaluate the efficiency of biobeds on DTC degradation, the aim of this study was to apply, optimize and validate a method to determine dithiocarbamate (mancozeb) in biobeds using gas chromatography-tandem mass spectrometry (GC-MS). The DTC pesticide mancozeb was hydrolysed in a tin(II) chloride solution at 1.5% in HCl (4 mol L(-1)) during 1 h in a water bath at 80 °C, and the CS2 formed was extracted in isooctane. After cooling, 1 mL of the organic layer was transferred to an autosampler vial and analyzed by GC-MS. A complete validation study was performed and the following parameters were assessed: linearity of the analytical curve (r(2)), estimated method and instrument limits of detection and limits of quantification (LODm, LODi, LOQm and LOQi, respectively), accuracy (recovery%), precision (RSD%) and matrix effects. Recovery experiments were carried out with a standard spiking solution of the DTC pesticide thiram. Blank biobed (biomixture) samples were spiked at three levels corresponding to CS2 concentrations of 1, 3 and 5 mg kg(-1), with seven replicates each (n = 7). The method presented satisfactory accuracy, with recoveries within the range of 89-96% and RSD ≤ 11%. The analytical curves were linear in the concentration range of 0.05-10 µg CS2 mL(-1) (r(2) > 0.9946). LODm and LOQm were 0.1 and 0.5 mg CS2 kg(-1), respectively, and the calculated matrix effects were not significant (≤ 20%). The validated method was applied to 80 samples (biomixture) from sixteen different biobeds (collected at five sampling times) during fourteen months. Ten percent of samples presented CS2 concentrations below the LOD (0.1 mg CS2 kg(-1)) and 49% of them showed results below the LOQ (0.5 mg CS2 kg(-1)), which demonstrates the biobeds' capability to degrade DTC. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Method validation using weighted linear regression models for quantification of UV filters in water samples.

    PubMed

    da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues

    2015-01-01

    This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L(-1). The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
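    The weighted least-squares calibration described above can be sketched in a few lines. The closed-form estimator below is the textbook formula for a weighted straight-line fit, with weights typically taken as the inverse of the variance estimated from replicates at each calibration level; the data and weights shown are purely illustrative:

```python
def wls_line(x, y, w):
    """Weighted least-squares fit of y = a + b*x.

    w: weights, e.g. 1/s_i**2 from replicate variances at each level,
    so that imprecise high-concentration points dominate the fit less
    than under ordinary (unweighted) least squares.
    """
    sw = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, x)) / sw          # weighted mean of x
    ym = sum(wi * yi for wi, yi in zip(w, y)) / sw          # weighted mean of y
    b = (sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, x)))
    a = ym - b * xm
    return a, b

# Points on the exact line y = 1 + 2x are recovered for any positive weights:
a, b = wls_line([1, 2, 3, 4], [3, 5, 7, 9], [1, 0.5, 0.25, 0.125])
print(a, b)
```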

  5. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods.

    PubMed

    Liu, Boquan; Polce, Evan; Sprott, Julien C; Jiang, Jack J

    2018-05-17

    The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100 Monte Carlo experiments were applied to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos. A diffusive behavior detection-based chaos level test was used to investigate the performances of different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions. Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study. The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.

  6. Development and validation of a UPLC method for the determination of duloxetine hydrochloride residues on pharmaceutical manufacturing equipment surfaces

    PubMed Central

    Kumar, Navneet; Sangeetha, D.; Balakrishna, P.

    2011-01-01

    Background: In pharmaceutical industries, it is very important to remove drug residues from the equipment and areas used. The cleaning procedure must be validated, so special attention must be devoted to the methods used for analysis of trace amounts of drugs. A rapid, sensitive, and specific reverse-phase ultra-performance liquid chromatographic (UPLC) method was developed for the quantitative determination of duloxetine in cleaning validation swab samples. Material and Methods: The method was validated using an Acquity UPLC™ HSS T3 (100 × 2.1 mm) 1.8 μm column with an isocratic mobile phase containing a mixture of 0.01 M potassium dihydrogen orthophosphate, pH adjusted to 3.0 with orthophosphoric acid, and acetonitrile (60:40 v/v). The flow rate of the mobile phase was 0.4 ml/min with a column temperature of 40°C and detection wavelength at 230 nm. Cotton swabs, moistened with extraction solution (90% methanol and 10% water), were used to remove any residue of drug from stainless steel, glass and silica surfaces, and gave recoveries >80% at four concentration levels. Results: The precision of the results, reported as the relative standard deviation, was below 1.5%. The calibration curve was linear over a concentration range from 0.02 to 5.0 μg/ml with a correlation coefficient of 0.999. The detection limit and quantitation limit were 0.006 and 0.02 μg/ml, respectively. The method was validated over a concentration range of 0.05–5.0 μg/ml. Conclusion: The developed method was validated with respect to specificity, linearity, limit of detection and quantification, accuracy, precision, and robustness. PMID:23781449

  7. Simultaneous quantification of paracetamol, acetylsalicylic acid and papaverine with a validated HPLC method.

    PubMed

    Kalmár, Eva; Gyuricza, Anett; Kunos-Tóth, Erika; Szakonyi, Gerda; Dombi, György

    2014-01-01

Combined drug products have the advantages of better patient compliance and possible synergic effects. The simultaneous application of several active ingredients at a time is therefore frequently chosen. However, the quantitative analysis of such medicines can be challenging. The aim of this study is to provide a validated method for the investigation of a multidose packed oral powder that contained acetylsalicylic acid, paracetamol and papaverine-HCl. Reversed-phase high-pressure liquid chromatography was used. The Agilent Zorbax SB-C18 column was found to be the most suitable of the three different stationary phases tested for the separation of the components of this sample. The key parameters in the method development (apart from the nature of the column) were the pH of the aqueous phase (set to 3.4) and the ratio of the organic (acetonitrile) and the aqueous (25 mM phosphate buffer) phases, which was varied from 7:93 (v/v) to 25:75 (v/v) in a linear gradient, preceded by an initial hold. The method was validated: linearity, precision (repeatability and intermediate precision), accuracy, specificity and robustness were all tested, and the results met the ICH guidelines.

  8. A photographic method to measure food item intake. Validation in geriatric institutions.

    PubMed

    Pouyet, Virginie; Cuvelier, Gérard; Benattar, Linda; Giboreau, Agnès

    2015-01-01

    From both a clinical and research perspective, measuring food intake is an important issue in geriatric institutions. However, weighing food in this context can be complex, particularly when the items remaining on a plate (side dish, meat or fish and sauce) need to be weighed separately following consumption. A method based on photography that involves taking photographs after a meal to determine food intake consequently seems to be a good alternative. This method enables the storage of raw data so that unhurried analyses can be performed to distinguish the food items present in the images. Therefore, the aim of this paper was to validate a photographic method to measure food intake in terms of differentiating food item intake in the context of a geriatric institution. Sixty-six elderly residents took part in this study, which was performed in four French nursing homes. Four dishes of standardized portions were offered to the residents during 16 different lunchtimes. Three non-trained assessors then independently estimated both the total and specific food item intakes of the participants using images of their plates taken after the meal (photographic method) and a reference image of one plate taken before the meal. Total food intakes were also recorded by weighing the food. To test the reliability of the photographic method, agreements between different assessors and agreements among various estimates made by the same assessor were evaluated. To test the accuracy and specificity of this method, food intake estimates for the four dishes were compared with the food intakes determined using the weighed food method. To illustrate the added value of the photographic method, food consumption differences between the dishes were explained by investigating the intakes of specific food items. 
Although the assessors were not specifically trained for this purpose, the results demonstrated that estimates agreed both between assessors and among repeated estimates made by the same assessor.

  9. Method of fabricating vertically aligned group III-V nanowires

    DOEpatents

    Wang, George T; Li, Qiming

    2014-11-25

    A top-down method of fabricating vertically aligned Group III-V micro- and nanowires uses a two-step etch process that adds a selective anisotropic wet etch after an initial plasma etch to remove the dry etch damage while enabling micro/nanowires with straight and smooth faceted sidewalls and controllable diameters independent of pitch. The method enables the fabrication of nanowire lasers, LEDs, and solar cells.

  10. Degradation Kinetics Study of Alogliptin Benzoate in Alkaline Medium by Validated Stability-Indicating HPTLC Method.

    PubMed

    Bodiwala, Kunjan Bharatkumar; Shah, Shailesh; Thakor, Jeenal; Marolia, Bhavin; Prajapati, Pintu

    2016-11-01

    A rapid, sensitive, and stability-indicating high-performance thin-layer chromatographic method was developed and validated to study degradation kinetics of Alogliptin benzoate (ALG) in an alkaline medium. ALG was degraded under acidic, alkaline, oxidative, and thermal stress conditions. The degraded samples were chromatographed on silica gel 60F254-TLC plates, developed using a quaternary-solvent system (chloroform-methanol-ethyl acetate-triethyl amine, 9+1+1+0.5, v/v/v/v), and scanned at 278 nm. The developed method was validated per International Conference on Harmonization guidelines using validation parameters such as specificity, linearity and range, precision, accuracy, LOD, and LOQ. The linearity range for ALG was 100-500 ng/band (correlation coefficient = 0.9997) with an average recovery of 99.47%. The LOD and LOQ for ALG were 9.8 and 32.7 ng/band, respectively. The developed method was successfully applied for the quantitative estimation of ALG in its synthetic mixture with common excipients. Degradation kinetics of ALG in an alkaline medium was studied by degrading it under three different temperatures and three different concentrations of alkali. Degradation of ALG in the alkaline medium was found to follow first-order kinetics. Contour plots have been generated to predict degradation rate constant, half-life, and shelf life of ALG in various combinations of temperature and concentration of alkali using Design Expert software.
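For first-order degradation, ln C falls linearly with time, so the rate constant can be read off a straight-line fit of ln C versus t; half-life and shelf life then follow as t1/2 = ln 2 / k and t90 = ln(10/9) / k. A hedged sketch of this standard calculation (synthetic data, not the paper's measurements):

```python
import numpy as np

def first_order_kinetics(t, conc):
    """Fit ln(C) = ln(C0) - k*t by least squares; return the rate constant k,
    half-life t1/2 = ln2/k, and shelf life t90 = ln(10/9)/k (time to reach
    90% of the initial concentration)."""
    t = np.asarray(t, float)
    lnc = np.log(np.asarray(conc, float))
    slope, _ln_c0 = np.polyfit(t, lnc, 1)
    k = -slope
    return {"k": k, "t_half": np.log(2) / k, "t90": np.log(10 / 9) / k}

# Synthetic decay with k = 0.3 per hour, C0 = 5:
times = [0, 1, 2, 4, 8]
res = first_order_kinetics(times, [5 * np.exp(-0.3 * x) for x in times])
```

The paper's contour plots over temperature and alkali concentration would come from repeating this fit at each condition; that outer loop is not shown here.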

  11. IEEE/NASA Workshop on Leveraging Applications of Formal Methods, Verification, and Validation

    NASA Technical Reports Server (NTRS)

Margaria, Tiziana (Editor); Steffen, Bernhard (Editor); Hinchey, Michael G.

    2005-01-01

    This volume contains the Preliminary Proceedings of the 2005 IEEE ISoLA Workshop on Leveraging Applications of Formal Methods, Verification, and Validation, with a special track on the theme of Formal Methods in Human and Robotic Space Exploration. The workshop was held on 23-24 September 2005 at the Loyola College Graduate Center, Columbia, MD, USA. The idea behind the Workshop arose from the experience and feedback of ISoLA 2004, the 1st International Symposium on Leveraging Applications of Formal Methods held in Paphos (Cyprus) last October-November. ISoLA 2004 served the need of providing a forum for developers, users, and researchers to discuss issues related to the adoption and use of rigorous tools and methods for the specification, analysis, verification, certification, construction, test, and maintenance of systems from the point of view of their different application domains.

  12. Formal methods and their role in digital systems validation for airborne systems

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1995-01-01

    This report is based on one prepared as a chapter for the FAA Digital Systems Validation Handbook (a guide to assist FAA certification specialists with advanced technology issues). Its purpose is to explain the use of formal methods in the specification and verification of software and hardware requirements, designs, and implementations; to identify the benefits, weaknesses, and difficulties in applying these methods to digital systems used in critical applications; and to suggest factors for consideration when formal methods are offered in support of certification. The presentation concentrates on the rationale for formal methods and on their contribution to assurance for critical applications within a context such as that provided by DO-178B (the guidelines for software used on board civil aircraft); it is intended as an introduction for those to whom these topics are new.

  13. A method for sensitivity analysis to assess the effects of measurement error in multiple exposure variables using external validation data.

    PubMed

    Agogo, George O; van der Voet, Hilko; van 't Veer, Pieter; Ferrari, Pietro; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek C

    2016-10-13

    Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest such as risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for the bias in the association when there is no internal validation data. We proposed a method to adjust for the bias in the diet-disease association (hereafter, association), due to measurement error in dietary intake and a mismeasured confounder, when there is no internal validation data. The method combines prior information on the validity of the self-report instrument with the observed data to adjust for the bias in the association. We compared the proposed method with the method that ignores the confounder effect, and with the method that ignores measurement errors completely. We assessed the sensitivity of the estimates to various magnitudes of measurement error, error correlations and uncertainty in the literature-reported validation data. We applied the methods to fruits and vegetables (FV) intakes, cigarette smoking (confounder) and all-cause mortality data from the European Prospective Investigation into Cancer and Nutrition study. Using the proposed method resulted in about four times increase in the strength of association between FV intake and mortality. For weakly correlated errors, measurement error in the confounder minimally affected the hazard ratio estimate for FV intake. The effect was more pronounced for strong error correlations. The proposed method permits sensitivity analysis on measurement error structures and accounts for uncertainties in the reported validity coefficients. The method is useful in assessing the direction and quantifying the magnitude of bias in the association due to measurement errors in the confounders.
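A one-variable illustration of why this matters: classical measurement error attenuates an observed regression coefficient by a factor λ, so an externally estimated λ lets one de-attenuate it. This is only a sketch of the underlying intuition; the paper's method additionally handles a mismeasured confounder and propagates uncertainty in the literature-reported validity coefficients, which this does not:

```python
def deattenuate(beta_obs, lam):
    """Correct an observed regression coefficient for classical measurement
    error in a single exposure. lam is the attenuation factor in (0, 1],
    e.g. derived from a validity coefficient in an external validation study."""
    if not 0 < lam <= 1:
        raise ValueError("attenuation factor must be in (0, 1]")
    return beta_obs / lam

# e.g. an observed log-hazard of -0.05 with lam = 0.25 corresponds to a
# corrected coefficient of -0.20 -- a four-fold stronger association,
# the same order of change the abstract reports for FV intake.
corrected = deattenuate(-0.05, 0.25)
```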

  14. Evaluating user reputation in online rating systems via an iterative group-based ranking method

    NASA Astrophysics Data System (ADS)

    Gao, Jian; Zhou, Tao

    2017-05-01

Reputation is a valuable asset in online social lives and it has drawn increased attention. Due to the existence of noisy ratings and spamming attacks, how to evaluate user reputation in online rating systems is especially significant. However, most of the previous ranking-based methods either follow a debatable assumption or have unsatisfactory robustness. In this paper, we propose an iterative group-based ranking method by introducing an iterative reputation-allocation process into the original group-based ranking method. More specifically, the reputation of users is calculated based on the weighted sizes of the user rating groups after grouping all users by their rating similarities, and the ratings of high-reputation users have larger weights in dominating the corresponding user rating groups. The reputation of users and the user rating group sizes are iteratively updated until they become stable. Results on two real data sets with artificial spammers suggest that the proposed method outperforms the state-of-the-art methods and that its robustness is considerably improved compared with the original group-based ranking method. Our work highlights the positive role of considering users' grouping behaviors in better online user reputation evaluation.
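A much-simplified, generic iterative reputation-allocation sketch (not the authors' group-based algorithm, which weights user rating groups formed by rating similarity): item quality is estimated as a reputation-weighted mean, each user's reputation is then set inversely to their squared deviation from those estimates, and the two are iterated to a fixed point:

```python
import numpy as np

def iterative_reputation(ratings, n_iter=50, eps=1e-8):
    """ratings: (n_users, n_items) matrix. Alternate between (a) item quality
    as a reputation-weighted mean of ratings and (b) user reputation inversely
    proportional to the user's mean squared deviation from those estimates.
    Reputations are normalised to max 1 each round."""
    ratings = np.asarray(ratings, float)
    n_users, _ = ratings.shape
    rep = np.ones(n_users)
    for _ in range(n_iter):
        quality = (rep @ ratings) / rep.sum()        # weighted item estimates
        err = ((ratings - quality) ** 2).mean(axis=1)
        new_rep = 1.0 / (err + eps)                  # low error -> high reputation
        new_rep /= new_rep.max()
        if np.allclose(new_rep, rep, atol=1e-10):
            break
        rep = new_rep
    return rep

# Three broadly agreeing raters and one "spammer" who rates against consensus:
rep = iterative_reputation([[5, 3, 4], [5, 3, 4], [4.8, 3.2, 4.1], [1, 5, 1]])
```

The iteration down-weights the dissenting user, which is the robustness mechanism the abstract describes; the grouping step that defines the paper's actual method is omitted here.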

  15. Development of an Itemwise Efficiency Scoring Method: Concurrent, Convergent, Discriminant, and Neuroimaging-Based Predictive Validity Assessed in a Large Community Sample

    PubMed Central

    Moore, Tyler M.; Reise, Steven P.; Roalf, David R.; Satterthwaite, Theodore D.; Davatzikos, Christos; Bilker, Warren B.; Port, Allison M.; Jackson, Chad T.; Ruparel, Kosha; Savitt, Adam P.; Baron, Robert B.; Gur, Raquel E.; Gur, Ruben C.

    2016-01-01

    Traditional “paper-and-pencil” testing is imprecise in measuring speed and hence limited in assessing performance efficiency, but computerized testing permits precision in measuring itemwise response time. We present a method of scoring performance efficiency (combining information from accuracy and speed) at the item level. Using a community sample of 9,498 youths age 8-21, we calculated item-level efficiency scores on four neurocognitive tests, and compared the concurrent, convergent, discriminant, and predictive validity of these scores to simple averaging of standardized speed and accuracy-summed scores. Concurrent validity was measured by the scores' abilities to distinguish men from women and their correlations with age; convergent and discriminant validity were measured by correlations with other scores inside and outside of their neurocognitive domains; predictive validity was measured by correlations with brain volume in regions associated with the specific neurocognitive abilities. Results provide support for the ability of itemwise efficiency scoring to detect signals as strong as those detected by standard efficiency scoring methods. We find no evidence of superior validity of the itemwise scores over traditional scores, but point out several advantages of the former. The itemwise efficiency scoring method shows promise as an alternative to standard efficiency scoring methods, with overall moderate support from tests of four different types of validity. This method allows the use of existing item analysis methods and provides the convenient ability to adjust the overall emphasis of accuracy versus speed in the efficiency score, thus adjusting the scoring to the real-world demands the test is aiming to fulfill. PMID:26866796
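The core idea of an efficiency score, combining standardized accuracy with (sign-flipped) standardized speed under an adjustable weight, can be sketched as follows. This is an illustrative formula consistent with the description above, not the authors' exact itemwise scoring model:

```python
import numpy as np

def efficiency_scores(accuracy, rt, w_acc=0.5):
    """Combine standardized accuracy and standardized log response time into a
    single efficiency score per participant. w_acc in [0, 1] sets the relative
    emphasis on accuracy versus speed (slower responses lower the score).
    Assumes non-constant inputs so the z-scores are defined."""
    def z(x):
        x = np.asarray(x, float)
        return (x - x.mean()) / x.std(ddof=0)
    return w_acc * z(accuracy) - (1 - w_acc) * z(np.log(rt))

# Hypothetical per-participant accuracy and mean response time (seconds):
scores = efficiency_scores([0.9, 0.5, 0.7], [1.0, 2.0, 1.5])
```

The tunable `w_acc` corresponds to the abstract's point that the accuracy-versus-speed emphasis can be adjusted to the real-world demands the test is meant to serve.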

  16. A validation method for near-infrared spectroscopy based tissue oximeters for cerebral and somatic tissue oxygen saturation measurements.

    PubMed

    Benni, Paul B; MacLeod, David; Ikeda, Keita; Lin, Hung-Mo

    2018-04-01

We describe the validation methodology for the NIRS-based FORE-SIGHT ELITE® (CAS Medical Systems, Inc., Branford, CT, USA) tissue oximeter for cerebral and somatic tissue oxygen saturation (StO2) measurements in adult subjects, submitted to the United States Food and Drug Administration (FDA) to obtain clearance for clinical use. This validation methodology evolved from a history of NIRS validations in the literature and the FDA-recommended use of Deming regression and bootstrapping statistical validation methods. For cerebral validation, forehead cerebral StO2 measurements were compared to a weighted 70:30 reference (REFCXB) of co-oximeter internal jugular venous and arterial blood saturation of healthy adult subjects during a controlled hypoxia sequence, with a sensor placed on the forehead. For somatic validation, somatic StO2 measurements were compared to a weighted 70:30 reference (REFCXS) of co-oximetry central venous and arterial saturation values following a similar protocol, with sensors placed on the flank, quadriceps muscle, and calf muscle. With informed consent, 25 subjects successfully completed the cerebral validation study. The bias and precision (1 SD) of cerebral StO2 compared to REFCXB was -0.14 ± 3.07%. With informed consent, 24 subjects successfully completed the somatic validation study. The bias and precision of somatic StO2 compared to REFCXS was 0.04 ± 4.22%, using the average of the flank, quadriceps, and calf StO2 measurements to best represent the global whole-body REFCXS. The NIRS validation methods presented potentially provide a reliable means to test NIRS monitors and qualify them for clinical use.
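The bias and precision figures against a 70:30 weighted venous/arterial co-oximetry reference reduce to a simple calculation; a sketch with made-up saturation values (the study's statistics additionally used Deming regression and bootstrapping, not shown):

```python
import numpy as np

def bias_precision(sto2, venous, arterial, w_venous=0.7):
    """Compare oximeter StO2 readings against a weighted venous/arterial
    co-oximetry reference (70:30 by default). Returns (bias, precision) as the
    mean and sample SD of the device-minus-reference differences."""
    ref = (w_venous * np.asarray(venous, float)
           + (1 - w_venous) * np.asarray(arterial, float))
    diff = np.asarray(sto2, float) - ref
    return diff.mean(), diff.std(ddof=1)

# Hypothetical paired readings (%):
bias, prec = bias_precision([70, 75, 78], [60, 65, 70], [95, 96, 97])
```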

  17. Validation Thin Layer Chromatography for the Determination of Acetaminophen in Tablets and Comparison with a Pharmacopeial Method

    PubMed Central

    Pyka, Alina; Budzisz, Marika; Dołowy, Małgorzata

    2013-01-01

Adsorption thin layer chromatography (NP-TLC) with densitometry has been established for the identification and quantification of acetaminophen in three leading commercial products of pharmaceutical tablets coded as brands P1 (Product no. 1), P2 (Product no. 2), and P3 (Product no. 3). The applied chromatographic conditions separated acetaminophen from its related substances, namely 4-aminophenol and 4′-chloroacetanilide. UV densitometry was performed in absorbance mode at 248 nm. The presented method was validated for specificity, range, linearity, accuracy, precision, detection limit, quantitative limit, and robustness. The TLC-densitometric method was also compared with a pharmacopeial UV-spectrophotometric method for the assay of acetaminophen, and the results confirmed statistically that the NP-TLC-densitometric method can be used as a substitute method. The validated NP-TLC-densitometric method is suitable for the routine analysis of acetaminophen in quality control laboratories. PMID:24063006

  18. QUANTIFICATION OF GLYCYRRHIZIN BIOMARKER IN GLYCYRRHIZA GLABRA RHIZOME AND BABY HERBAL FORMULATIONS BY VALIDATED RP-HPTLC METHODS

    PubMed Central

    Alam, Prawez; Foudah, Ahmed I.; Zaatout, Hala H.; T, Kamal Y; Abdel-Kader, Maged S.

    2017-01-01

Background: A simple and sensitive reverse-phase HPTLC method has been established and validated for the quantification of glycyrrhizin in Glycyrrhiza glabra rhizome and baby herbal formulations. Materials and Methods: The RP-HPTLC method was carried out on glass-backed RP-18 silica gel 60 F254S HPTLC plates using methanol-water (7:3 v/v) as mobile phase. Results: The developed plate was scanned and quantified densitometrically at 256 nm. Glycyrrhizin peaks from Glycyrrhiza glabra rhizome and baby herbal formulations were identified by their single spot at Rf = 0.63 ± 0.01. Linear regression analysis revealed a good linear relationship between peak area and amount of glycyrrhizin in the range of 2000-7000 ng/band. Conclusion: The method was validated in accordance with ICH guidelines for precision, accuracy, and robustness. The proposed method will be useful for determining the therapeutic dose of glycyrrhizin in herbal formulations as well as in bulk drug. PMID:28573236

  19. The time-dependent density matrix renormalisation group method

    NASA Astrophysics Data System (ADS)

    Ma, Haibo; Luo, Zhen; Yao, Yao

    2018-04-01

Substantial progress of the time-dependent density matrix renormalisation group (t-DMRG) method over the past 15 years is reviewed in this paper. By integrating time evolution with the sweep procedures of the density matrix renormalisation group (DMRG), t-DMRG provides an efficient tool for real-time simulations of quantum dynamics in one-dimensional (1D) or quasi-1D strongly correlated systems with a large number of degrees of freedom. In the illustrative applications, the t-DMRG approach is applied to investigate nonadiabatic processes in realistic chemical systems, including exciton dissociation and triplet fission in polymers and molecular aggregates as well as internal conversion in the pyrazine molecule.

  20. Validated spectrofluorimetric methods for the determination of apixaban and tirofiban hydrochloride in pharmaceutical formulations.

    PubMed

    El-Bagary, Ramzia I; Elkady, Ehab F; Farid, Naira A; Youssef, Nadia F

    2017-03-05

Apixaban and tirofiban hydrochloride are low-molecular-weight anticoagulants. The two drugs exhibit native fluorescence, which allows the development of simple and valid spectrofluorimetric methods for the determination of apixaban at λex/λem = 284/450 nm and tirofiban HCl at λex/λem = 227/300 nm in aqueous media. Different experimental parameters affecting fluorescence intensities were carefully studied and optimized. The fluorescence intensity-concentration plots were linear over the ranges of 0.2-6 μg/ml for apixaban and 0.2-5 μg/ml for tirofiban HCl. The limits of detection were 0.017 and 0.019 μg/ml, and the quantification limits were 0.057 and 0.066 μg/ml, for apixaban and tirofiban HCl, respectively. The fluorescence quantum yields of apixaban and tirofiban were 0.43 and 0.49, respectively. Method validation was evaluated for linearity, specificity, accuracy, precision and robustness as per ICH guidelines. The proposed spectrofluorimetric methods were successfully applied for the determination of apixaban in Eliquis tablets and tirofiban HCl in Aggrastat intravenous infusion. Tolerance ratios were tested to study the effect of interference from dosage-form excipients. Student's t- and F-tests revealed no statistically significant difference in accuracy and precision between the developed spectrofluorimetric methods and the comparison methods, so they can be used as alternative methods for the analysis of apixaban and tirofiban HCl in QC laboratories.

  1. Validated spectrofluorometric methods for determination of amlodipine besylate in tablets

    NASA Astrophysics Data System (ADS)

    Abdel-Wadood, Hanaa M.; Mohamed, Niveen A.; Mahmoud, Ashraf M.

    2008-08-01

Two simple and sensitive spectrofluorometric methods have been developed and validated for determination of amlodipine besylate (AML) in tablets. The first method was based on the condensation reaction of AML with ninhydrin and phenylacetaldehyde in buffered medium (pH 7.0), resulting in formation of a green fluorescent product, which exhibits excitation and emission maxima at 375 and 480 nm, respectively. The second method was based on the reaction of AML with 7-chloro-4-nitro-2,1,3-benzoxadiazole (NBD-Cl) in a buffered medium (pH 8.6), resulting in formation of a highly fluorescent product, which was measured fluorometrically at 535 nm (λex 480 nm). The factors affecting the reactions were studied and optimized. Under the optimum reaction conditions, linear relationships with good correlation coefficients (0.9949-0.9997) were found between the fluorescence intensity and the concentration of AML in the ranges of 0.35-1.8 and 0.55-3.0 μg/ml for the ninhydrin and NBD-Cl methods, respectively. The limits of detection were 0.09 and 0.16 μg/ml for the first and second method, respectively. The precisions of the methods were satisfactory; the relative standard deviations ranged from 1.69 to 1.98%. The proposed methods were successfully applied to the analysis of AML in pure and pharmaceutical dosage forms with good accuracy; the recovery percentages ranged from 100.4-100.8 ± 1.70-2.32%. The results compared favorably with those of the reported method.

  2. Validity and applicability of a new recording method for hypertension.

    PubMed

    Mas-Heredia, Minerva; Molés-Moliner, Eloisa; González-de Paz, Luis; Kostov, Belchin; Ortiz-Molina, Jacinto; Mauri-Vázquez, Vanesa; Menacho-Pascual, Ignacio; Cararach-Salami, Daniel; Sierra-Benito, Cristina; Sisó-Almirall, Antoni

    2014-09-01

Blood pressure measurement methods and conditions are determinants of hypertension diagnosis. A recent British guideline recommends systematic 24-h ambulatory blood pressure monitoring. However, these devices are not available at all health centers and can only be used by one patient per day. The aim of this study was to test a new blood pressure recording method to see whether it gave the same diagnostic results as 24-h blood pressure monitoring. One-hour blood pressure monitoring under routine clinical practice conditions was compared with the standard method of daytime recording by analyzing the coefficient of correlation and Bland-Altman plots. The kappa index was used to calculate the degree of agreement. Method sensitivity and specificity were also analyzed. Of the 102 participants, 89 (87.3%) obtained the same diagnosis regardless of method, with high between-method agreement (κ = 0.81; 95% confidence interval, 0.71-0.91). We observed robust correlations between diastolic (r = 0.85) and systolic blood pressure (r = 0.76) readings. The sensitivity and specificity of the new method for diagnosing white coat hypertension were 85.2% (95% confidence interval, 67.5%-94.1%) and 92% (95% confidence interval, 83.6%-96.3%), respectively. One-hour blood pressure monitoring is a valid and reliable method for diagnosing hypertension and for classifying hypertension subpopulations, especially in white coat hypertension and refractory hypertension. It also leads to more productive use of monitoring instruments.
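The agreement and diagnostic statistics used in this study (Cohen's kappa, sensitivity, specificity) all come from a 2×2 table; a small sketch with illustrative counts rather than the study's raw data:

```python
def kappa_2x2(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table between two methods:
    a = both positive, b = method1+/method2-, c = method1-/method2+,
    d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts chosen to reproduce percentages of the same order as
# those reported above (23/27 = 85.2% sensitivity, 69/75 = 92% specificity):
sens, spec = sens_spec(23, 4, 69, 6)
```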

  3. Lesson 6: Signature Validation

    EPA Pesticide Factsheets

    Checklist items 13 through 17 are grouped under the Signature Validation Process, and represent CROMERR requirements that the system must satisfy as part of ensuring that electronic signatures it receives are valid.

  4. A new method for assessing content validity in model-based creation and iteration of eHealth interventions.

    PubMed

    Kassam-Adams, Nancy; Marsac, Meghan L; Kohser, Kristen L; Kenardy, Justin A; March, Sonja; Winston, Flaura K

    2015-04-15

    The advent of eHealth interventions to address psychological concerns and health behaviors has created new opportunities, including the ability to optimize the effectiveness of intervention activities and then deliver these activities consistently to a large number of individuals in need. Given that eHealth interventions grounded in a well-delineated theoretical model for change are more likely to be effective and that eHealth interventions can be costly to develop, assuring the match of final intervention content and activities to the underlying model is a key step. We propose to apply the concept of "content validity" as a crucial checkpoint to evaluate the extent to which proposed intervention activities in an eHealth intervention program are valid (eg, relevant and likely to be effective) for the specific mechanism of change that each is intended to target and the intended target population for the intervention. The aims of this paper are to define content validity as it applies to model-based eHealth intervention development, to present a feasible method for assessing content validity in this context, and to describe the implementation of this new method during the development of a Web-based intervention for children. We designed a practical 5-step method for assessing content validity in eHealth interventions that includes defining key intervention targets, delineating intervention activity-target pairings, identifying experts and using a survey tool to gather expert ratings of the relevance of each activity to its intended target, its likely effectiveness in achieving the intended target, and its appropriateness with a specific intended audience, and then using quantitative and qualitative results to identify intervention activities that may need modification. We applied this method during our development of the Coping Coach Web-based intervention for school-age children. In the evaluation of Coping Coach content validity, 15 experts from five countries
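Expert relevance ratings of activity-target pairings, as gathered in step four above, are commonly summarized as a content validity index: the proportion of experts rating an item relevant. A generic sketch of that aggregation, not necessarily the exact metric the authors used:

```python
def content_validity_index(ratings, relevant=(3, 4)):
    """Item-level content validity index: the proportion of expert ratings
    that fall in the 'relevant' categories (e.g. 3 or 4 on a 4-point scale).
    ratings: one expert score per rater for a single intervention activity."""
    return sum(r in relevant for r in ratings) / len(ratings)

# Five hypothetical expert ratings of one activity-target pairing:
cvi = content_validity_index([4, 4, 3, 2, 4])
```

Activities falling below a chosen CVI threshold would then be flagged for modification, matching the paper's final step of using quantitative results to identify activities needing revision.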

  5. MRPrimer: a MapReduce-based method for the thorough design of valid and ranked primers for PCR.

    PubMed

    Kim, Hyerin; Kang, NaNa; Chon, Kang-Wook; Kim, Seonho; Lee, NaHye; Koo, JaeHyung; Kim, Min-Soo

    2015-11-16

Primer design is a fundamental technique that is widely used for polymerase chain reaction (PCR). Although many methods have been proposed for primer design, they require a great deal of manual effort to generate feasible and valid primers, including homology tests on off-target sequences using BLAST-like tools. That approach becomes impractical when many target sequences must be handled under the same stringent and allele-invariant constraints, as in quantitative PCR (qPCR). To address this issue, we propose an entirely new method called MRPrimer that can design all feasible and valid primer pairs existing in a DNA database at once, while simultaneously checking a multitude of filtering constraints and validating primer specificity. Furthermore, MRPrimer suggests the best primer pair for each target sequence, based on a ranking method. Through qPCR analysis using 343 primer pairs and the corresponding sequencing and comparative analyses, we showed that the primer pairs designed by MRPrimer are very stable and effective for qPCR. In addition, MRPrimer is computationally efficient and scalable, and therefore useful for quickly constructing an entire collection of feasible and valid primers for frequently updated databases like RefSeq. Furthermore, we suggest that MRPrimer can be used conveniently for experiments requiring primer design, especially real-time qPCR.
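The filter-then-check-specificity idea can be illustrated with a toy, single-machine sketch: enumerate candidate k-mers, apply simple constraints (GC fraction and a Wallace-rule melting temperature), then discard candidates occurring in more than one target as a crude specificity check. This is not MRPrimer itself, which runs as a MapReduce pipeline with many more filtering constraints and a ranking step:

```python
from collections import defaultdict

def wallace_tm(seq):
    """Wallace rule: Tm = 2*(A+T) + 4*(G+C), a rough melting-temperature
    estimate for short oligonucleotides."""
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

def candidate_primers(targets, k=20, gc=(0.4, 0.6), tm=(50, 62)):
    """targets: dict name -> sequence. 'Map' step: emit (k-mer, target) pairs
    passing the GC and Tm filters. 'Reduce' step: keep only k-mers seen in
    exactly one target, i.e. a naive specificity filter."""
    seen = defaultdict(set)
    for name, seq in targets.items():
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            frac_gc = (kmer.count("G") + kmer.count("C")) / k
            if gc[0] <= frac_gc <= gc[1] and tm[0] <= wallace_tm(kmer) <= tm[1]:
                seen[kmer].add(name)
    return {kmer: next(iter(names))
            for kmer, names in seen.items() if len(names) == 1}

# Two short toy targets sharing a common region; shared k-mers are rejected:
res = candidate_primers({"t1": "ATGCGTACGTTAGC", "t2": "GGATGCGTACGTTA"},
                        k=8, gc=(0.3, 0.7), tm=(20, 30))
```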

  6. An automation-assisted generic approach for biological sample preparation and LC-MS/MS method validation.

    PubMed

    Zhang, Jie; Wei, Shimin; Ayres, David W; Smith, Harold T; Tse, Francis L S

    2011-09-01

Although it is well known that automation can provide significant improvements in the efficiency of biological sample preparation in quantitative LC-MS/MS analysis, it has not been widely implemented in bioanalytical laboratories throughout the industry. This can be attributed to the lack of a sound strategy and practical procedures for working with robotic liquid-handling systems. Several comprehensive automation-assisted procedures for biological sample preparation and method validation were developed and qualified using two types of Hamilton Microlab liquid-handling robots. The procedures developed were generic, user-friendly, and covered the majority of steps involved in routine sample preparation and method validation. Generic automation procedures were established as a practical approach to the wide implementation of automation into the routine bioanalysis of samples in support of drug-development programs.

  7. Validation of the 10/66 Dementia Research Group diagnostic assessment for dementia in Arabic: a study in Lebanon

    PubMed Central

    Phung, Kieu T. T.; Chaaya, Monique; Waldemar, Gunhild; Atweh, Samir; Asmar, Khalil; Ghusn, Husam; Karam, Georges; Sawaya, Raja; Khoury, Rose Mary; Zeinaty, Ibrahim; Salman, Sandrine; Hammoud, Salem; Radwan, Wael; Bassil, Nazem; Prince, Martin

    2014-01-01

    Objectives In the North Africa and Middle East region, the illiteracy rates among older people are high, posing a great challenge to cognitive assessment. Validated diagnostic instruments for dementia in Arabic are lacking, hampering the development of dementia research in the region. The study aimed at validating the Arabic version of the 10/66 Dementia Research Group (DRG) diagnostic assessment for dementia to determine if it is suitable for case ascertainment in epidemiological research. Methods 244 participants older than 65 years were included, 100 with normal cognition and 144 with mild to moderate dementia. Dementia was diagnosed by clinicians according to DSM-IV criteria. Depression was diagnosed using the Geriatric Mental State. Trained interviewers blind to the cognitive status of the participants administered the 10/66 DRG diagnostic assessment to the participants and interviewed the caregivers. The discriminatory ability of the 10/66 DRG assessment and its subcomponents were evaluated against the clinical diagnoses. Results Half of the participants had no formal education and 49% of them were depressed. The 10/66 DRG diagnostic assessment showed excellent sensitivity (92.0%), specificity (95.1%), positive predictive value (PPV, 92.9%), and low false positive rates (FPR) among controls with no formal education (8.1%) and depression (5.6%). Each subcomponent of the 10/66 DRG diagnostic assessment independently predicted dementia diagnosis. The predictive ability of the 10/66 DRG assessment was superior to that of its subcomponents. Conclusion 10/66 DRG diagnostic assessment for dementia is well suited for case ascertainment in epidemiological studies among Arabic speaking older population with high prevalence of illiteracy. PMID:24771602

  8. Fatty acid ethyl esters (FAEEs) as markers for alcohol in meconium: method validation and implementation of a screening program for prenatal drug exposure.

    PubMed

    Hastedt, Martin; Krumbiegel, Franziska; Gapert, René; Tsokos, Michael; Hartwig, Sven

    2013-09-01

    Alcohol consumption during pregnancy is a widespread problem and can cause severe fetal damage. As the diagnosis of fetal alcohol syndrome is difficult, the implementation of a reliable marker for alcohol consumption during pregnancy into meconium drug screening programs would be invaluable. A previously published gas chromatography mass spectrometry method for the detection of fatty acid ethyl esters (FAEEs) as alcohol markers in meconium was optimized and newly validated for a sample size of 50 mg. This method was applied to 122 cases from a drug-using population. The meconium samples were also tested for common drugs of abuse. In 73% of the cases, one or more drugs were found. Twenty percent of the samples tested positive for FAEEs at levels indicating significant alcohol exposure. Consequently, alcohol was found to be the third most frequently abused substance within the study group. This re-validated method increases testing sensitivity and is reliable and easily applicable as part of a drug screening program. It can be used as a non-invasive tool to detect high alcohol consumption in the last trimester of pregnancy. The introduction of FAEEs testing in meconium screening was found to be of particular use in a drug-using population.

  9. Methyl group dynamics in paracetamol and acetanilide: probing the static properties of intermolecular hydrogen bonds formed by peptide groups

    NASA Astrophysics Data System (ADS)

    Johnson, M. R.; Prager, M.; Grimm, H.; Neumann, M. A.; Kearley, G. J.; Wilson, C. C.

    1999-06-01

    Measurements of tunnelling and librational excitations for the methyl group in paracetamol and tunnelling excitations for the methyl group in acetanilide are reported. In both cases, results are compared with molecular mechanics calculations, based on the measured low temperature crystal structures, which follow an established recipe. Agreement between calculated and measured methyl group observables is not as good as expected and this is attributed to the presence of comprehensive hydrogen bond networks formed by the peptide groups. Good agreement is obtained with a periodic quantum chemistry calculation which uses density functional methods, these calculations confirming the validity of the one-dimensional rotational model used and the crystal structures. A correction to the Coulomb contribution to the rotational potential in the established recipe using semi-empirical quantum chemistry methods, which accommodates the modified charge distribution due to the hydrogen bonds, is investigated.

  10. Validation of Ion Chromatographic Method for Determination of Standard Inorganic Anions in Treated and Untreated Drinking Water

    NASA Astrophysics Data System (ADS)

    Ivanova, V.; Surleva, A.; Koleva, B.

    2018-06-01

    An ion chromatographic method for the determination of fluoride, chloride, nitrate and sulphate in untreated and treated drinking waters is described. An automated Metrohm 850 IC Professional system equipped with a conductivity detector and a Metrosep A Supp 7-250 (250 × 4 mm) column was used. The method was validated for simultaneous determination of all studied analytes, and the results showed that the validated method fits the requirements of the current water legislation. The main analytical characteristics were estimated for each of the studied analytes: limits of detection, limits of quantification, working and linear ranges, repeatability and intermediate precision, and recovery. The trueness of the method was assessed by analysis of a certified reference material for soft drinking water, and a recovery test was performed on spiked drinking water samples. Measurement uncertainty was estimated. The method was applied to the analysis of drinking waters before and after chlorination.
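
The validation characteristics listed above are typically derived from the calibration line. A minimal sketch of the common ICH Q2(R1)-style estimates LOD = 3.3σ/S and LOQ = 10σ/S, using invented calibration data rather than the study's measurements:

```python
# Illustrative ICH Q2(R1)-style LOD/LOQ estimate from a calibration line.
# Concentrations (mg/L) and instrument responses are made-up example values,
# not data from the study.

def linear_fit(x, y):
    """Ordinary least-squares slope, intercept, and residual std deviation."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    residuals = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
    s_res = (sum(r * r for r in residuals) / (n - 2)) ** 0.5
    return slope, intercept, s_res

conc = [0.5, 1.0, 2.0, 5.0, 10.0]          # calibration standards, mg/L
area = [0.52, 1.01, 2.05, 4.98, 10.02]     # peak areas (arbitrary units)

slope, intercept, s_res = linear_fit(conc, area)
lod = 3.3 * s_res / slope                   # ICH Q2(R1): LOD = 3.3 * sigma / S
loq = 10.0 * s_res / slope                  # ICH Q2(R1): LOQ = 10 * sigma / S
print(f"LOD = {lod:.3f} mg/L, LOQ = {loq:.3f} mg/L")
```

The residual standard deviation of the regression stands in for the blank standard deviation; either choice is sanctioned by the guideline.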

  11. Experimental validation of finite element and boundary element methods for predicting structural vibration and radiated noise

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Wu, T. W.; Wu, X. F.

    1994-01-01

    This research report is presented in three parts. In the first part, acoustical analyses were performed on modes of vibration of the housing of a transmission of a gear test rig developed by NASA. The modes of vibration of the transmission housing were measured using experimental modal analysis. The boundary element method (BEM) was used to calculate the sound pressure and sound intensity on the surface of the housing and the radiation efficiency of each mode. The radiation efficiency of each of the transmission housing modes was then compared to theoretical results for a finite baffled plate. In the second part, analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise level radiated from the box. The FEM was used to predict the vibration, while the BEM was used to predict the sound intensity and total radiated sound power using surface vibration as the input data. Vibration predicted by the FEM model was validated by experimental modal analysis; noise predicted by the BEM was validated by measurements of sound intensity. Three types of results are presented for the total radiated sound power: sound power predicted by the BEM model using vibration data measured on the surface of the box; sound power predicted by the FEM/BEM model; and sound power measured by an acoustic intensity scan. In the third part, the structure used in part two was modified. A rib was attached to the top plate of the structure. The FEM and BEM were then used to predict structural vibration and radiated noise respectively. The predicted vibration and radiated noise were then validated through experimentation.

  12. PSI-Center Simulations of Validation Platform Experiments

    NASA Astrophysics Data System (ADS)

    Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.

    2013-10-01

    The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with extended MHD simulations. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), FRX-L (Los Alamos National Laboratory), HIT-SI (U Wash - UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), PHD/ELF (UW/MSNW), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). Modifications have been made to the NIMROD, HiFi, and PSI-Tet codes to specifically model these experiments, including mesh generation/refinement, non-local closures, appropriate boundary conditions (external fields, insulating BCs, etc.), and kinetic and neutral particle interactions. The PSI-Center is exploring application of validation metrics between experimental data and simulations results. Biorthogonal decomposition is proving to be a powerful method to compare global temporal and spatial structures for validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status will be presented.
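
Biorthogonal decomposition of a space-time data matrix is mathematically a singular value decomposition: the left singular vectors are spatial modes ("topos") and the right singular vectors are temporal modes ("chronos"). A minimal sketch of that kind of mode-by-mode comparison on synthetic probe data — not the PSI-Center's actual validation pipeline:

```python
import numpy as np

# Toy biorthogonal decomposition (BOD): for a space-time matrix X
# (rows = spatial probes, columns = time samples), the BOD modes are the
# singular vectors of X. "Experiment" and "simulation" below are synthetic
# signals sharing one dominant structure, with independent noise.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
x = np.linspace(0.0, 1.0, 16)

mode = np.sin(np.pi * x)[:, None] * np.cos(2 * np.pi * 5 * t)[None, :]
X_exp = mode + 0.01 * rng.standard_normal((16, 200))
X_sim = mode + 0.01 * rng.standard_normal((16, 200))

# BOD via SVD: spatial modes = columns of U, temporal modes = rows of Vt.
U_e, s_e, Vt_e = np.linalg.svd(X_exp, full_matrices=False)
U_s, s_s, Vt_s = np.linalg.svd(X_sim, full_matrices=False)

# Energy fraction of the dominant mode, and overlap of the dominant
# spatial modes between "experiment" and "simulation".
energy_frac = s_e[0] ** 2 / np.sum(s_e ** 2)
overlap = abs(np.dot(U_e[:, 0], U_s[:, 0]))
print(f"dominant-mode energy: {energy_frac:.3f}, spatial overlap: {overlap:.3f}")
```

Mode-overlap metrics of this form give a quantitative, structure-by-structure comparison rather than a single global error number.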

  13. Validating for Use and Interpretation: A Mixed Methods Contribution Illustrated

    ERIC Educational Resources Information Center

    Morell, Linda; Tan, Rachael Jin Bee

    2009-01-01

    Researchers in the areas of psychology and education strive to understand the intersections among validity, educational measurement, and cognitive theory. Guided by a mixed model conceptual framework, this study investigates how respondents' opinions inform the validation argument. Validity evidence for a science assessment was collected through…

  14. Validation of the Five-Phase Method for Simulating Complex Fenestration Systems with Radiance against Field Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geisler-Moroder, David; Lee, Eleanor S.; Ward, Gregory J.

    2016-08-29

    The Five-Phase Method (5-pm) for simulating complex fenestration systems with Radiance is validated against field measurements. The capability of the method to predict workplane illuminances, vertical sensor illuminances, and glare indices derived from captured and rendered high dynamic range (HDR) images is investigated. To be able to accurately represent the direct sun part of the daylight not only in sensor point simulations, but also in renderings of interior scenes, the 5-pm calculation procedure was extended. The validation shows that the 5-pm is superior to the Three-Phase Method for predicting horizontal and vertical illuminance sensor values as well as glare indices derived from rendered images. Even with input data from global and diffuse horizontal irradiance measurements only, daylight glare probability (DGP) values can be predicted within 10% error of measured values for most situations.

  15. Validation of an in-vivo proton beam range check method in an anthropomorphic pelvic phantom using dose measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bentefour, El H., E-mail: hassan.bentefour@iba-group.com; Prieels, Damien; Tang, Shikui

    Purpose: In-vivo dosimetry and beam range verification in proton therapy could play a significant role in validating and improving proton treatments. In-vivo beam range verification, in particular, could enable new treatment techniques, one of which could be the use of anterior fields for prostate treatment instead of the opposed lateral fields used in current practice. This paper reports a validation study of an in-vivo range verification method which can reduce the range uncertainty to submillimeter levels and potentially allow for in-vivo dosimetry. Methods: An anthropomorphic pelvic phantom is used to validate the clinical potential of the time-resolved dose method for range verification in the case of prostate treatment using range-modulated anterior proton beams. The method uses a 3 × 4 matrix of 1 mm diodes mounted in a water balloon and read by an ADC system at 100 kHz. The method is first validated against beam ranges determined by dose extinction measurements. The validation is completed first in a water phantom and then in the pelvic phantom for both open field and treatment field configurations. The beam range results are then compared with the water equivalent path length (WEPL) values computed from the treatment planning system XIO. Results: Beam range measurements from the time-resolved dose method and the dose extinction method agree with submillimeter precision in the water phantom. For the pelvic phantom, when discarding two of the diodes that show signs of significant range mixing, the two methods agree within ±1 mm. A dose of only 7 mGy is sufficient to achieve this result. The comparison with the WEPL computed by the treatment planning system (XIO) shows that XIO underestimates the proton beam range. Quantifying the exact XIO range underestimation depends on the strategy used to evaluate the WEPL results; by our best evaluation, XIO underestimates the treatment beam range by between 1.7% and 4.1%. Conclusions: Time

  16. Validation of an improved abnormality insertion method for medical image perception investigations

    NASA Astrophysics Data System (ADS)

    Madsen, Mark T.; Durst, Gregory R.; Caldwell, Robert T.; Schartz, Kevin M.; Thompson, Brad H.; Berbaum, Kevin S.

    2009-02-01

    The ability to insert abnormalities in clinical tomographic images makes image perception studies with medical images practical. We describe a new insertion technique and its experimental validation that uses complementary image masks to select an abnormality from a library and place it at a desired location. The method was validated using a 4-alternative forced-choice experiment. For each case, four quadrants were simultaneously displayed consisting of 5 consecutive frames of a chest CT with a pulmonary nodule. One quadrant was unaltered, while the other 3 had the nodule from the unaltered quadrant artificially inserted. 26 different sets were generated and repeated with order scrambling for a total of 52 cases. The cases were viewed by radiology staff and residents who ranked each quadrant by realistic appearance. On average, the observers were able to correctly identify the unaltered quadrant in 42% of cases, and identify the unaltered quadrant both times it appeared in 25% of cases. Consensus, defined by a majority of readers, correctly identified the unaltered quadrant in only 29% of 52 cases. For repeats, the consensus observer successfully identified the unaltered quadrant only once. We conclude that the insertion method can be used to reliably place abnormalities in perception experiments.
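
With four quadrants, pure guessing identifies the unaltered quadrant 25% of the time, so the reported 42% average can be compared against chance with an exact binomial tail. A hedged sketch, treating the 52 cases as independent trials (the study's raw per-observer counts are not given in the abstract):

```python
from math import comb

# Chance-level check for the 4-alternative forced-choice task: with four
# quadrants, guessing picks the unaltered quadrant with p = 0.25. The trial
# counts below are reconstructed from the abstract's averages, not raw data.

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed as an exact tail sum."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_cases = 52
chance = 0.25
observed_rate = 0.42                          # average reported in the abstract
k_observed = round(observed_rate * n_cases)   # roughly 22 correct of 52

p_value = binom_sf(k_observed, n_cases, chance)
print(f"{k_observed}/{n_cases} correct; P(>=k | guessing) = {p_value:.4f}")
```

A small tail probability indicates that observers performed above chance, i.e., the inserted nodules were detectable slightly more often than guessing alone would produce.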

  17. Development and Validation of Stability Indicating Spectroscopic Method for Content Analysis of Ceftriaxone Sodium in Pharmaceuticals

    PubMed Central

    Ethiraj, Revathi; Thiruvengadam, Ethiraj; Sampath, Venkattapuram Saravanan; Vahid, Abdul; Raj, Jithin

    2014-01-01

    A simple, selective, and stability-indicating spectroscopic method has been selected and validated for the assay of ceftriaxone sodium in powder-for-injection dosage forms. The proposed method is based on the measurement of the absorbance of ceftriaxone sodium in aqueous medium at 241 nm. The method obeys Beer's law in the range of 5–50 μg/mL with a correlation coefficient of 0.9983. Apparent molar absorptivity and Sandell's sensitivity were found to be 2.046 × 10³ L mol⁻¹ cm⁻¹ and 0.02732 μg/cm² per 0.001 absorbance unit, respectively. This study indicated that ceftriaxone sodium was degraded in acid medium and also underwent oxidative degradation. The percent relative standard deviation associated with all the validation parameters was less than 2, showing compliance with the acceptance criteria of the International Conference on Harmonization Q2(R1) guidelines (2005). The proposed method was then successfully applied to the determination of ceftriaxone sodium in a sterile preparation, and the results were comparable with reported methods. PMID:27355020
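
The assay rests on the Beer-Lambert law, A = εbc, so content is read off a linear calibration of absorbance against concentration. A minimal sketch with an invented calibration slope, not the paper's data:

```python
# Beer-Lambert assay arithmetic: A = epsilon * b * c, so a linear calibration
# A = slope * c (+ intercept) converts a measured absorbance to concentration.
# The slope and sample absorbance below are illustrative values only.

def conc_from_absorbance(absorbance, slope, intercept=0.0):
    """Concentration (ug/mL) from a linear calibration A = slope*c + intercept."""
    return (absorbance - intercept) / slope

# Suppose a calibration over 5-50 ug/mL gave a slope of 0.0205 AU per ug/mL.
slope = 0.0205
a_sample = 0.492                 # measured absorbance at 241 nm
c_sample = conc_from_absorbance(a_sample, slope)
print(f"ceftriaxone sodium: {c_sample:.1f} ug/mL")
```

In practice the intercept from the regression is carried through rather than assumed zero, and the result is multiplied by any dilution factor applied during sample preparation.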

  18. TIME-DEPENDENT MULTI-GROUP MULTI-DIMENSIONAL RELATIVISTIC RADIATIVE TRANSFER CODE BASED ON SPHERICAL HARMONIC DISCRETE ORDINATE METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I., E-mail: tominaga@konan-u.ac.jp, E-mail: sshibata@post.kek.jp, E-mail: Sergei.Blinnikov@itep.ru

    We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM) which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering in the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the zeroth and second moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.

  19. A New Group-Formation Method for Student Projects

    ERIC Educational Resources Information Center

    Borges, Jose; Dias, Teresa Galvao; Cunha, Joao Falcao E.

    2009-01-01

    In BSc/MSc engineering programmes at Faculty of Engineering of the University of Porto (FEUP), the need to provide students with teamwork experiences close to a real world environment was identified as an important issue. A new group-formation method that aims to provide an enriching teamwork experience is proposed. Students are asked to answer a…

  20. The convergent validity between two objective methods for quantifying training load in young taekwondo athletes.

    PubMed

    Haddad, Monoem; Chaouachi, Anis; Castagna, Carlo; Wong, Del P; Chamari, Karim

    2012-01-01

    Various studies have used objective heart rate (HR)-based methods to assess training load (TL). The common methods are Banister's Training Impulse (TRIMP), which weights exercise duration by an intensity-based factor, and Edwards' TL, a summated HR-zone score. Both methods use the direct physiological measure of HR as a fundamental part of the calculation. To eliminate the redundancy of using various methods to quantify the same construct (i.e., TL), we have to verify whether these methods are strongly convergent and interchangeable. Therefore, the aim of this study was to investigate the convergent validity between Banister's TRIMP and Edwards' TL used for the assessment of internal TL. The HRs were recorded and analyzed during 10 training weeks of the preseason period in 10 male Taekwondo (TKD) athletes. The TL was calculated using Banister's TRIMP and Edwards' TL. The Pearson product moment correlation coefficient was used to evaluate the convergent validity between the 2 methods for assessing TL. Very large to nearly perfect relationships were found between individual Banister's TRIMP and Edwards' TL (r values from 0.80 to 0.99; p < 0.001). Pooled Banister's TRIMP and pooled Edwards' TL (pooled data n = 284) were very largely correlated (r = 0.89; p < 0.05; 95% confidence interval: 0.86-0.91). In conclusion, these findings suggest that these 2 objective methods, measuring a similar construct, are interchangeable.
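
The two measures can be sketched directly from their definitions: Edwards' TL sums minutes in five heart-rate zones weighted 1-5, while Banister's TRIMP weights duration by an exponential function of the heart-rate reserve fraction (0.64 and 1.92 are the commonly cited male constants; the sessions below are invented for illustration):

```python
import math

# Hedged sketch of the two HR-based training-load measures compared in the
# study. Zone boundaries and the male TRIMP constants (0.64, 1.92) follow the
# common literature forms; the session data are invented, not the study's.

def edwards_tl(minutes_in_zone):
    """Edwards' TL: minutes in HR zones 1-5 (50-100% HRmax), weighted 1-5."""
    return sum(z * m for z, m in enumerate(minutes_in_zone, start=1))

def banister_trimp(duration_min, hr_avg, hr_rest, hr_max):
    """Banister's TRIMP with the commonly used male weighting factor."""
    dhr = (hr_avg - hr_rest) / (hr_max - hr_rest)   # HR reserve fraction
    return duration_min * dhr * 0.64 * math.exp(1.92 * dhr)

def pearson_r(xs, ys):
    """Pearson product moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

# Invented sessions: (minutes per Edwards zone, duration in min, mean HR).
sessions = [
    ([10, 20, 15, 10, 5], 60, 152),
    ([5, 10, 20, 15, 10], 60, 163),
    ([20, 20, 10, 5, 0], 55, 140),
    ([2, 8, 15, 20, 15], 60, 171),
]
hr_rest, hr_max = 60, 200

edwards = [edwards_tl(z) for z, _, _ in sessions]
trimps = [banister_trimp(d, hr, hr_rest, hr_max) for _, d, hr in sessions]
print(f"convergent validity r = {pearson_r(edwards, trimps):.2f}")
```

Because both scores grow monotonically with time spent at high heart rates, a strong positive correlation across sessions is exactly the convergence the study set out to test.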

  1. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software in Young People with Down Syndrome.

    PubMed

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Rey-Abella, Ferran; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2016-05-01

    People with Down syndrome present skeletal abnormalities in their feet that can be analyzed by commonly used gold standard indices (the Hernández-Corvo index, the Chippaux-Smirak index, the Staheli arch index, and the Clarke angle) based on footprint measurements. The use of Photoshop CS5 software (Adobe Systems Software Ireland Ltd, Dublin, Ireland) to measure footprints has been validated in the general population. The present study aimed to assess the reliability and validity of this footprint assessment technique in the population with Down syndrome. Using optical podography and photography, 44 footprints from 22 patients with Down syndrome (11 men [mean ± SD age, 23.82 ± 3.12 years] and 11 women [mean ± SD age, 24.82 ± 6.81 years]) were recorded in a static bipedal standing position. A blinded observer performed the measurements using a validated manual method three times during the 4-month study, with 2 months between measurements. Test-retest was used to check the reliability of the Photoshop CS5 software measurements. Validity and reliability were obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed very good values for the Photoshop CS5 method (ICC, 0.982-0.995). Validity testing also found no differences between the techniques (ICC, 0.988-0.999). The Photoshop CS5 software method is reliable and valid for the study of footprints in young people with Down syndrome.
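
The ICC used for reliability and validity here is commonly the two-way random-effects, absolute-agreement, single-measure form, ICC(2,1). A generic textbook sketch of that computation (not the authors' statistical software), with invented repeated measurements:

```python
# Hedged sketch of ICC(2,1): two-way random effects, absolute agreement,
# single measure, as often used for test-retest reliability. Generic textbook
# ANOVA formulas; the measurement values below are invented.

def icc_2_1(ratings):
    """ratings: list of subjects, each a list of k repeated measurements."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]

    ss_rows = k * sum((m - grand) ** 2 for m in row_means)    # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)    # sessions
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Three repeated angle-style measurements per foot (invented numbers).
measurements = [
    [38.1, 38.4, 38.2],
    [42.0, 41.7, 41.9],
    [35.5, 35.8, 35.6],
    [40.2, 40.1, 40.4],
    [33.9, 34.2, 34.0],
]
print(f"ICC(2,1) = {icc_2_1(measurements):.3f}")
```

Values approaching 1 indicate that between-subject differences dominate measurement noise, which is the pattern the reported ICCs of 0.982-0.999 reflect.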

  2. Risk-based criteria to support validation of detection methods for drinking water and air.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonell, M.; Bhattacharyya, M.; Finster, M.

    2009-02-18

    This report was prepared to support the validation of analytical methods for threat contaminants under the U.S. Environmental Protection Agency (EPA) National Homeland Security Research Center (NHSRC) program. It is designed to serve as a resource for certain applications of benchmark and fate information for homeland security threat contaminants. The report identifies risk-based criteria from existing health benchmarks for drinking water and air for potential use as validation targets. The focus is on benchmarks for chronic public exposures. The priority sources are standard EPA concentration limits for drinking water and air, along with oral and inhalation toxicity values. Many contaminants identified as homeland security threats to drinking water or air would convert to other chemicals within minutes to hours of being released. For this reason, a fate analysis has been performed to identify potential transformation products and removal half-lives in air and water so appropriate forms can be targeted for detection over time. The risk-based criteria presented in this report to frame method validation are expected to be lower than actual operational targets based on realistic exposures following a release. Note that many target criteria provided in this report are taken from available benchmarks without assessing the underlying toxicological details. That is, although the relevance of the chemical form and analogues are evaluated, the toxicological interpretations and extrapolations conducted by the authoring organizations are not. It is also important to emphasize that such targets in the current analysis are not health-based advisory levels to guide homeland security responses. This integrated evaluation of chronic public benchmarks and contaminant fate has identified more than 200 risk-based criteria as method validation targets across numerous contaminants and fate products in drinking water and air combined. The gap in directly applicable values is

  3. Validity and reliability of Patient-Reported Outcomes Measurement Information System (PROMIS) Instruments in Osteoarthritis

    PubMed Central

    Broderick, Joan E.; Schneider, Stefan; Junghaenel, Doerte U.; Schwartz, Joseph E.; Stone, Arthur A.

    2013-01-01

    Objective: Evaluation of known-group validity, ecological validity, and test-retest reliability of four domain instruments from the Patient-Reported Outcomes Measurement Information System (PROMIS) in osteoarthritis (OA) patients. Methods: Recruitment of an osteoarthritis sample and a comparison general population (GP) through an Internet survey panel. Pain intensity, pain interference, physical functioning, and fatigue were assessed for 4 consecutive weeks with PROMIS short forms on a daily basis and compared with same-domain Computer Adaptive Test (CAT) instruments that use a 7-day recall. Known-group validity (comparison of OA and GP), ecological validity (comparison of aggregated daily measures with CATs), and test-retest reliability were evaluated. Results: The recruited samples matched (age, sex, race, ethnicity) the demographic characteristics of the U.S. sample for arthritis and the 2009 Census for the GP. Compliance with repeated measurements was excellent: > 95%. Known-group validity for CATs was demonstrated with large effect sizes (pain intensity: 1.42, pain interference: 1.25, and fatigue: 0.85). Ecological validity was also established through high correlations between aggregated daily measures and weekly CATs (≥ 0.86). Test-retest reliability (7-day) was very good (≥ 0.80). Conclusion: PROMIS CAT instruments demonstrated known-group and ecological validity in a comparison of osteoarthritis patients with a general population sample. Adequate test-retest reliability was also observed. These results provide encouraging initial evidence of the utility of these PROMIS instruments for clinical and research outcomes in osteoarthritis patients. PMID:23592494
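
The known-group effect sizes reported (e.g., 1.42 for pain intensity) are conventionally Cohen's d with a pooled standard deviation. A minimal sketch on invented scores, not the study's OA and GP data:

```python
# Known-group validity is quantified with an effect size; a standard choice
# is Cohen's d using a pooled standard deviation. The two samples below are
# invented for illustration only.

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, pooled-SD form."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (ma - mb) / pooled_sd

oa_scores = [62, 58, 65, 60, 57, 63, 59, 61]   # e.g., pain-intensity T-scores
gp_scores = [48, 51, 46, 50, 49, 47, 52, 45]
print(f"Cohen's d = {cohens_d(oa_scores, gp_scores):.2f}")
```

By the usual convention, d near 0.8 or above counts as a large effect, which is how the abstract's values of 0.85-1.42 support the known-group claim.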

  4. Validation of newly developed and redesigned key indicator methods for assessment of different working conditions with physical workloads based on mixed-methods design: a study protocol.

    PubMed

    Klussmann, Andre; Liebers, Falk; Brandstädt, Felix; Schust, Marianne; Serafin, Patrick; Schäfer, Andreas; Gebhardt, Hansjürgen; Hartmann, Bernd; Steinberg, Ulf

    2017-08-21

    The impact of work-related musculoskeletal disorders is considerable. The assessment of work tasks with physical workloads is crucial to estimate the work-related health risks of exposed employees. Three key indicator methods (KIMs) are available for risk assessment regarding manual lifting, holding and carrying of loads; manual pulling and pushing of loads; and manual handling operations. Three further KIMs, for risk assessment regarding whole-body forces, awkward body postures and body movement, have been developed de novo. In addition, the development of a newly drafted combined method for mixed exposures is planned. All methods will be validated regarding face validity, reliability, convergent validity, criterion validity and further aspects of utility under practical conditions. As part of the joint project MEGAPHYS (multilevel risk assessment of physical workloads), a mixed-methods study is being designed for the validation of KIMs and conducted in companies of different sizes and branches in Germany. Workplaces are documented and analysed by observations, applying KIMs, interviews and assessment of environmental conditions. Furthermore, a survey among the employees at the respective workplaces takes place with standardised questionnaires, interviews and physical examinations. It is intended to include 1200 employees at 120 different workplaces. For analysis of the quality criteria, recommendations of the COSMIN checklist (COnsensus-based Standards for the selection of health Measurement INstruments) will be taken into account. The study was planned and conducted in accordance with the German Medical Professional Code and the Declaration of Helsinki as well as the German Federal Data Protection Act. The design of the study was approved by ethics committees. We intend to publish the validated KIMs in 2018. Results will be published in peer-reviewed journals, presented at international meetings and disseminated to actual users for practical application.

  5. Method for Improving Mg Doping During Group-III Nitride MOCVD

    DOEpatents

    Creighton, J. Randall; Wang, George T.

    2008-11-11

    A method for improving Mg doping of Group III-N materials grown by MOCVD. Condensation of magnesocene-ammonia adducts in the gas phase or on reactor surfaces is prevented by suitably heating the reactor surfaces between the location where the magnesocene and ammonia reactants are mixed and the Group III-nitride surface whereon growth is to occur.

  6. Software phantom with realistic speckle modeling for validation of image analysis methods in echocardiography

    NASA Astrophysics Data System (ADS)

    Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten

    2014-03-01

    Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard methods from image analysis non-optimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community, and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy of the resulting textures.

  7. Validity and consistency assessment of accident analysis methods in the petroleum industry.

    PubMed

    Ahmadi, Omran; Mortazavi, Seyed Bagher; Khavanin, Ali; Mokarami, Hamidreza

    2017-11-17

    Accident analysis is the main aspect of accident investigation; it involves connecting different causes in a procedural way. It is therefore important to use valid and reliable methods to investigate the different causal factors of accidents, especially the noteworthy ones. This study aimed to assess the accuracy (sensitivity index [SI]) and consistency of the six accident analysis methods most commonly used in the petroleum industry. In order to evaluate the methods of accident analysis, two real case studies (a process safety accident and a personal accident) from the petroleum industry were analyzed by 10 assessors. The accuracy and consistency of these methods were then evaluated. The assessors were trained in a workshop on accident analysis methods. The systematic cause analysis technique and bowtie methods gained the greatest SI scores for the personal and process safety accidents, respectively. The best average results for the consistency of a single method (based on 10 independent assessors) were in the region of 70%. This study confirmed that the application of methods with pre-defined causes and a logic tree could enhance the sensitivity and consistency of accident analysis.

  8. Statistical Calibration and Validation of a Homogeneous Ventilated Wall-Interference Correction Method for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2005-01-01

    Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. 
The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less
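The comparison logic described here — independently corrected cases that are expected to produce the same result, judged against the uncertainty of the correction process — can be sketched as a simple agreement check. The coefficients and uncertainties below are hypothetical illustrations, not the actual TWICS validation procedure:

```python
import math

def agree_within_uncertainty(x1, u1, x2, u2, k=2.0):
    """Check whether two independently corrected results agree to
    within their combined expanded uncertainty (coverage factor k).
    Illustrative sketch of the comparison idea only."""
    combined = math.sqrt(u1 ** 2 + u2 ** 2)
    return abs(x1 - x2) <= k * combined

# Hypothetical corrected drag coefficients from two independent cases:
print(agree_within_uncertainty(0.0251, 0.0004, 0.0256, 0.0005))  # True
```

Cases that fail this check for a given flow condition fall outside the method's domain of applicability with respect to the stated accuracy requirement.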

  9. Quantitative Tagless Copurification: A Method to Validate and Identify Protein-Protein Interactions

    DOE PAGES

    Shatsky, Maxim; Dong, Ming; Liu, Haichuan; ...

    2016-04-20

Identifying protein-protein interactions (PPIs) at an acceptable false discovery rate (FDR) is challenging. Previously we identified several hundred PPIs from affinity purification - mass spectrometry (AP-MS) data for the bacteria Escherichia coli and Desulfovibrio vulgaris. These two interactomes have lower FDRs than any of the nine interactomes proposed previously for bacteria and are more enriched in PPIs validated by other data than the nine earlier interactomes. To more thoroughly determine the accuracy of ours or other interactomes and to discover further PPIs de novo, here we present a quantitative tagless method that employs iTRAQ MS to measure the copurification of endogenous proteins through orthogonal chromatography steps. 5273 fractions from a four-step fractionation of a D. vulgaris protein extract were assayed, resulting in the detection of 1242 proteins. Protein partners from our D. vulgaris and E. coli AP-MS interactomes copurify as frequently as pairs belonging to three benchmark data sets of well-characterized PPIs. In contrast, the protein pairs from the nine other bacterial interactomes copurify two- to 20-fold less often. We also identify 200 high confidence D. vulgaris PPIs based on tagless copurification and colocalization in the genome. These PPIs are as strongly validated by other data as our AP-MS interactomes and overlap with our AP-MS interactome for D. vulgaris within 3% of expectation, once FDRs and false negative rates are taken into account. Finally, we reanalyzed data from two quantitative tagless screens of human cell extracts. We estimate that the novel PPIs reported in these studies have an FDR of at least 85% and find that less than 7% of the novel PPIs identified in each screen overlap. Our results establish that a quantitative tagless method can be used to validate and identify PPIs, but that such data must be analyzed carefully to minimize the FDR.

  11. Development and Validation of GC-ECD Method for the Determination of Metamitron in Soil

    PubMed Central

    Tandon, Shishir; Kumar, Satyendra; Sand, N. K.

    2015-01-01

This paper aims at developing and validating a convenient, rapid, and sensitive method for the estimation of metamitron in soil samples. Determination and quantification were carried out by gas chromatography on a microcapillary column with an electron capture detector. The compound was extracted from soil using methanol, with cleanup by C-18 SPE. After optimization, the method was validated by evaluating the analytical curves, linearity, limits of detection and quantification, precision (repeatability and intermediate precision), and accuracy (recovery). Recovery values ranged from 89 to 93.5% within 0.05-2.0 µg L−1, with an average RSD of 1.80%. The precision (repeatability) ranged from 1.7034 to 1.9144% and intermediate precision from 1.5685 to 2.1323%. Retention time was 6.3 minutes, and the minimum detectable and quantifiable limits were 0.02 ng mL−1 and 0.05 ng g−1, respectively. Good linearity (R² = 0.998) of the calibration curves was obtained over the range from 0.05 to 2.0 µg L−1. Results indicated that the developed method is rapid and easy to perform, making it applicable for analysis in large pesticide monitoring programmes. PMID:25733978
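The validation figures reported here (recovery, RSD, calibration linearity) come from standard arithmetic that can be sketched as follows; the spike level and replicate values in the example are hypothetical, not the paper's data:

```python
import statistics

def recovery_percent(measured, spiked):
    """Recovery of a spiked analyte, in percent."""
    return 100.0 * measured / spiked

def rsd_percent(values):
    """Relative standard deviation (precision), in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def r_squared(x, y):
    """Coefficient of determination for a simple linear calibration."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - sy / n) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical replicate measurements at a 0.5 ug/L spike level:
recs = [recovery_percent(m, 0.5) for m in (0.46, 0.465, 0.45)]
print(round(rsd_percent(recs), 2))  # → 1.67
```

A perfectly linear calibration gives `r_squared` of 1.0; values near 0.998, as reported, indicate a very good linear fit over the working range.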

  12. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for detection of genotoxic carcinogens: II. Summary of definitive validation study results.

    PubMed

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Beevers, Carol; De Boeck, Marlies; Burlinson, Brian; Hobbs, Cheryl A; Kitamoto, Sachiko; Kraynak, Andrew R; McNamee, James; Nakagawa, Yuzuki; Pant, Kamala; Plappert-Helbig, Ulla; Priestley, Catherine; Takasawa, Hironao; Wada, Kunio; Wirnitzer, Uta; Asano, Norihide; Escobar, Patricia A; Lovell, David; Morita, Takeshi; Nakajima, Madoka; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this exercise was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The study protocol was optimized in the pre-validation studies, and then the definitive (4th phase) validation study was conducted in two steps. In the 1st step, assay reproducibility was confirmed among laboratories using four coded reference chemicals and the positive control ethyl methanesulfonate. In the 2nd step, the predictive capability was investigated using 40 coded chemicals with known genotoxic and carcinogenic activity (i.e., genotoxic carcinogens, genotoxic non-carcinogens, non-genotoxic carcinogens, and non-genotoxic non-carcinogens). Based on the results obtained, the in vivo comet assay is concluded to be highly capable of identifying genotoxic chemicals and therefore can serve as a reliable predictor of rodent carcinogenicity. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Tree regeneration response to the group selection method in southern Indiana

    Treesearch

    Dale R. Weigel; George R. Parker

    1997-01-01

    Tree regeneration response following the use of the group selection method was studied within 36 group openings on the Naval Surface Warfare Center, Crane Division in south central Indiana. Two different aspects and three time periods since cutting were examined. The objectives were to determine whether aspect, age, species group, location within the opening, or their...

  14. Cross-validation and Peeling Strategies for Survival Bump Hunting using Recursive Peeling Methods

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

We introduce a framework to build a survival/risk bump hunting model with a censored time-to-event response. Our Survival Bump Hunting (SBH) method is based on a recursive peeling procedure that uses a specific survival peeling criterion derived from non/semi-parametric statistics such as the hazard ratio, the log-rank test or the Nelson-Aalen estimator. To optimize the tuning parameter of the model and validate it, we introduce an objective function based on survival or prediction-error statistics, such as the log-rank test and the concordance error rate. We also describe two alternative cross-validation techniques adapted to the joint task of decision-rule making by recursive peeling and survival estimation. Numerical analyses show the importance of replicated cross-validation and the differences between criteria and techniques in both low and high-dimensional settings. Although several non-parametric survival models exist, none addresses the problem of directly identifying local extrema. We show how SBH efficiently estimates extreme survival/risk subgroups unlike other models. This provides an insight into the behavior of commonly used models and suggests alternatives to be adopted in practice. Finally, our SBH framework was applied to a clinical dataset. In it, we identified subsets of patients characterized by clinical and demographic covariates with a distinct extreme survival outcome, for which tailored medical interventions could be made. An R package PRIMsrc (Patient Rule Induction Method in Survival, Regression and Classification settings) is available on CRAN (Comprehensive R Archive Network) and GitHub. PMID:27034730
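One of the prediction-error statistics mentioned, the concordance error rate, is one minus the concordance index. A minimal pure-Python sketch of Harrell's concordance index for right-censored data follows (an illustration of the statistic only; the PRIMsrc implementation may differ in detail):

```python
def concordance_index(times, events, risk):
    """Harrell's C-index: among usable pairs, the fraction where the
    subject with the shorter observed event time also has the higher
    predicted risk. Ties in risk count as half-concordant."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is usable if subject i has an observed event
            # before subject j's observed follow-up time.
            if events[i] and times[i] < times[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable

times = [2, 4, 6, 8]
events = [1, 1, 0, 1]          # 0 = censored observation
risk = [0.9, 0.7, 0.5, 0.2]    # higher predicted risk fails earlier
print(concordance_index(times, events, risk))  # → 1.0
```

The concordance error rate used as an objective function is then simply `1 - concordance_index(...)`.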

  15. Triacylglycerol secretion in rats: validation of a tracer method employing radioactive glycerol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bird, M.; Williams, M.A.; Baker, N.

    1984-10-01

A two-compartment model was developed to analyze the temporal changes in plasma triacylglycerol (TG)-specific radioactivity after injection of [2-³H]glycerol into rats. The analysis, which yielded fractional rate constants of TG secretion, was tested in rats fed diets either adequate or deficient in essential fatty acids (EFA) and containing either glucose, fructose or sucrose as the dietary carbohydrate. The method of analysis appeared valid, first, because of a close agreement between experimental and computer-fitted TG-specific radioactivity curves, and second, because the fractional rate constants obtained were quite similar to fractional rate constants determined previously by the Triton WR-1339 technique in rats maintained on identical diets. The results show that EFA deficiency increased the fractional rate constant of TG secretion 1.7-, 1.8- and 3.3-fold and the rate of TG secretion 1.8-, 1.6- and 1.4-fold when the dietary carbohydrate was glucose, sucrose and fructose, respectively, in comparison with control rats fed diets supplying these same carbohydrates but adequate in EFA. In the latter groups, the rates of plasma TG secretion were in the range of 0.14-0.17 mg/min per 100 g body weight, and the rate of secretion in the fructose-fed rats was only 20% higher than in the glucose-fed rats.
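The core of this kind of tracer analysis is fitting the decline of specific radioactivity over time to recover a fractional rate constant. As a simplified sketch, the example below fits a single exponential by log-linear least squares on synthetic data (the paper itself fits a two-compartment model, which requires nonlinear fitting of a sum of exponentials):

```python
import math

def fractional_rate_constant(times, specific_activity):
    """Estimate a fractional turnover rate constant (per unit time)
    from a mono-exponential decline of tracer specific activity,
    SA(t) = SA0 * exp(-k t), via least squares on ln(SA) vs time."""
    x = times
    y = [math.log(sa) for sa in specific_activity]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -slope  # decay constant k

# Synthetic specific-activity data generated with k = 0.05 per minute:
t = [0, 10, 20, 40, 60]
sa = [100.0 * math.exp(-0.05 * ti) for ti in t]
print(round(fractional_rate_constant(t, sa), 3))  # → 0.05
```

Agreement between such fitted curves and the measured specific-activity data is one of the two validity checks the authors describe.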

  16. The Analysis Performance of Naive Bayes and SSVM Methods to Determine Pattern Groups of Disease

    NASA Astrophysics Data System (ADS)

    Sitanggang, Rianto; Tulus; Situmorang, Zakarias

    2017-12-01

    Information is a vital element of daily life, yet obtaining precise and accurate information is not easy; this research can assist decision makers by providing a comparison. The researchers apply data mining techniques to analyze the performance of the naïve Bayes method and the Smooth Support Vector Machine (SSVM) algorithm in grouping diseases. Patterns of disease frequently suffered by people in an area can be detected from the information contained in medical records, in which each patient's disease is coded according to the WHO standard. The medical record data were processed to find patterns of disease groups that frequently occur in the community, using the attributes address, sex, type of disease, and age; the analysis then groups records on these four attributes. In experiments conducted on a dataset of fever and diabetes mellitus cases, the naïve Bayes method produced an average accuracy of 99% and the SSVM method an average accuracy of 93%.
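A categorical naive Bayes classifier of the kind compared here can be sketched in a few lines. The attribute values and disease labels below are hypothetical stand-ins for the medical-record fields (address/area, sex, age group), not the study's data:

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(rows, labels):
    """Fit a categorical naive Bayes classifier with Laplace smoothing
    and return a predict() function."""
    classes = Counter(labels)
    total = sum(classes.values())
    counts = defaultdict(Counter)   # (attribute index, class) -> value counts
    vocab = defaultdict(set)        # attribute index -> values seen
    for row, label in zip(rows, labels):
        for i, value in enumerate(row):
            counts[(i, label)][value] += 1
            vocab[i].add(value)

    def predict(row):
        def score(c):
            s = math.log(classes[c] / total)   # class prior
            for i, value in enumerate(row):    # smoothed conditional likelihoods
                s += math.log((counts[(i, c)][value] + 1) /
                              (classes[c] + len(vocab[i])))
            return s
        return max(classes, key=score)

    return predict

# Hypothetical records: (sex, age group, area) -> diagnosed disease group
rows = [("M", "old", "urban"), ("F", "old", "urban"),
        ("M", "young", "rural"), ("F", "young", "rural")]
labels = ["diabetes", "diabetes", "fever", "fever"]
predict = train_naive_bayes(rows, labels)
print(predict(("F", "old", "urban")))  # → diabetes
```

Accuracy comparisons like the 99% vs. 93% reported above are then obtained by scoring such predictions against held-out labeled records.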

  17. Reliability and Validity Testing of the Physical Resilience Measure

    ERIC Educational Resources Information Center

    Resnick, Barbara; Galik, Elizabeth; Dorsey, Susan; Scheve, Ann; Gutkin, Susan

    2011-01-01

    Objective: The purpose of this study was to test reliability and validity of the Physical Resilience Scale. Methods: A single-group repeated measure design was used and 130 older adults from three different housing sites participated. Participants completed the Physical Resilience Scale, Hardy-Gill Resilience Scale, 14-item Resilience Scale,…

  18. HPLC determination of plasma dimethylarginines: method validation and preliminary clinical application.

    PubMed

    Ivanova, Mariela; Artusi, Carlo; Boffa, Giovanni Maria; Zaninotto, Martina; Plebani, Mario

    2010-11-11

    Asymmetric dimethylarginine (ADMA) has been suggested as a possible marker of endothelial dysfunction, and interest in its use in clinical practice is increasing. However, the potential role of symmetric dimethylarginine (SDMA) as an endogenous marker of renal function has been less widely investigated. The aims of the present study were therefore to determine reference values for dimethylarginines in plasma after method validation, and to ascertain ADMA plasma concentrations in patients with disorders characterized by endothelial dysfunction; a further end-point was to investigate the relationship between SDMA plasma concentrations and estimated GFR (eGFR) as well as plasma creatinine in patients with chronic kidney disease (CKD). HPLC with fluorescence detection was used for the determination of plasma dimethylarginines. To verify the clinical usefulness of ADMA and SDMA, values from 4 groups of patients at a high risk of cardiovascular complications as well as renal dysfunction (chronic heart failure n=126; type II diabetes n=43; pulmonary arterial hypertension n=17; chronic kidney disease n=42) were evaluated and compared with the reference values, obtained from 225 blood donors. The intra- and inter-assay CVs (<5.2%) and the absolute and relative recoveries (96-106%) were highly satisfactory. ADMA levels were significantly elevated in all groups of patients compared with controls (p<0.001), with the exception of samples from patients with type II diabetes. SDMA levels were significantly elevated both in the patients with chronic kidney disease and in the patients with type II diabetes complicated by renal insufficiency, the values being closely correlated with both eGFR (R=0.740) and plasma creatinine (R=0.700). The findings made in the present study show that ADMA levels are significantly increased in patients with diseases associated with endothelial dysfunction. This molecule might, therefore, be used as a biochemical marker for the evaluation of

  19. The Bland-Altman Method Should Not Be Used in Regression Cross-Validation Studies

    ERIC Educational Resources Information Center

    O'Connor, Daniel P.; Mahar, Matthew T.; Laughlin, Mitzi S.; Jackson, Andrew S.

    2011-01-01

    The purpose of this study was to demonstrate the bias in the Bland-Altman (BA) limits of agreement method when it is used to validate regression models. Data from 1,158 men were used to develop three regression equations to estimate maximum oxygen uptake (R[superscript 2] = 0.40, 0.61, and 0.82, respectively). The equations were evaluated in a…
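For reference, the Bland-Altman computation that the paper argues is biased in this setting is itself simple: the bias is the mean of the between-method differences, and the 95% limits of agreement are bias ± 1.96 SD of those differences. A sketch with hypothetical measured vs. regression-estimated values (not the study's data):

```python
import statistics

def bland_altman_limits(measured, estimated):
    """Bland-Altman bias and 95% limits of agreement between two
    measurement methods. Illustrative sketch of the statistic only."""
    diffs = [m - e for m, e in zip(measured, estimated)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical measured vs. regression-estimated VO2max (mL/kg/min):
measured = [42.0, 50.5, 38.2, 55.1, 47.3]
estimated = [43.1, 49.0, 40.0, 53.0, 46.5]
bias, lower, upper = bland_altman_limits(measured, estimated)
print(round(bias, 2), round(lower, 2), round(upper, 2))  # → 0.3 -2.99 3.59
```

The study's point is that when `estimated` comes from a regression model, the differences are correlated with the magnitude of the measurement (regression to the mean), so these limits give a misleading picture of cross-validation agreement.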

  20. Development and validation of an HPLC–MS/MS method to determine clopidogrel in human plasma

    PubMed Central

    Liu, Gangyi; Dong, Chunxia; Shen, Weiwei; Lu, Xiaopei; Zhang, Mengqi; Gui, Yuzhou; Zhou, Qinyi; Yu, Chen

    2015-01-01

    A quantitative method for clopidogrel using online-SPE tandem LC–MS/MS was developed and fully validated according to the well-established FDA guidelines. The method achieves adequate sensitivity for pharmacokinetic studies, with a lower limit of quantification (LLOQ) as low as 10 pg/mL. Chromatographic separations were performed on reversed phase columns Kromasil Eternity-2.5-C18-UHPLC for both methods. Positive electrospray ionization in multiple reaction monitoring (MRM) mode was employed for signal detection, and a deuterated analogue (clopidogrel-d4) was used as internal standard (IS). Adjustments in sample preparation, including the introduction of an online-SPE system, proved to be the most effective way to solve analyte back-conversion in clinical samples. Pooled clinical samples (two levels) were prepared and successfully used as real-sample quality control (QC) in the validation of back-conversion testing under different conditions. The results showed that the real samples were stable at room temperature for 24 h. Linearity, precision, extraction recovery, matrix effect on spiked QC samples and stability tests on both spiked QCs and real-sample QCs stored in different conditions met the acceptance criteria. This online-SPE method was successfully applied to a bioequivalence study of 75 mg single dose clopidogrel tablets in 48 healthy male subjects. PMID:26904399

  1. Validated spectrophotometric method for the determination, spectroscopic characterization and thermal structural analysis of duloxetine with 1,2-naphthoquinone-4-sulphonate

    NASA Astrophysics Data System (ADS)

    Ulu, Sevgi Tatar; Elmali, Fikriye Tuncel

    2012-03-01

    A novel, selective, sensitive and simple spectrophotometric method was developed and validated for the determination of the antidepressant duloxetine hydrochloride in pharmaceutical preparation. The method was based on the reaction of duloxetine hydrochloride with 1,2-naphthoquinone-4-sulphonate (NQS) in alkaline media to yield an orange colored product. The formation of this complex was also confirmed by UV-visible, FTIR, 1H NMR and mass spectral techniques and thermal analysis. The method was validated for various parameters according to ICH guidelines. Beer's law is obeyed in a range of 5.0-60 μg/mL at the maximum absorption wavelength of 480 nm. The detection limit is 0.99 μg/mL and the recovery rate is in a range of 98.10-99.57%. The proposed method was validated and applied to the determination of duloxetine hydrochloride in pharmaceutical preparation. The results were statistically analyzed and compared to those of a reference UV spectrophotometric method.

  2. Patterns of Cognitive Strengths and Weaknesses: Identification Rates, Agreement, and Validity for Learning Disabilities Identification

    PubMed Central

    Miciak, Jeremy; Fletcher, Jack M.; Stuebing, Karla; Vaughn, Sharon; Tolar, Tammy D.

    2014-01-01

    Purpose Few empirical investigations have evaluated LD identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability and validity of two proposed PSW methods: the concordance/discordance method (C/DM) and cross battery assessment (XBA) method. Methods Cognitive assessment data for 139 adolescents demonstrating inadequate response to intervention were utilized to empirically classify participants as meeting or not meeting PSW LD identification criteria using the two approaches, permitting an analysis of: (1) LD identification rates; (2) agreement between methods; and (3) external validity. Results LD identification rates varied between the two methods depending upon the cut point for low achievement, with low agreement for LD identification decisions. Comparisons of groups that met and did not meet LD identification criteria on external academic variables were largely null, raising questions of external validity. Conclusions This study found low agreement and little evidence of validity for LD identification decisions based on PSW methods. An alternative may be to use multiple measures of academic achievement to guide intervention. PMID:24274155

  3. [Validity of expired carbon monoxide and urine cotinine using dipstick method to assess smoking status].

    PubMed

    Park, Su San; Lee, Ju Yul; Cho, Sung-Il

    2007-07-01

    We investigated the validity of the dipstick method (Mossman Associates Inc. USA) and the expired CO method to distinguish between smokers and nonsmokers. We also elucidated the related factors of the two methods. This study included 244 smokers and 50 ex-smokers, recruited from smoking cessation clinics at 4 local public health centers, who had quit for over 4 weeks. We calculated the sensitivity, specificity and Kappa coefficient of each method for validity. We obtained the ROC curve, predictive values, and agreement to determine the cutoff of the expired CO method. Finally, we elucidated the related factors and compared their effect powers using the standardized regression coefficient. The dipstick method showed a sensitivity of 92.6%, specificity of 96.0% and Kappa coefficient of 0.79. The best cutoff value to distinguish smokers was 5-6 ppm. At 5 ppm, the expired CO method showed a sensitivity of 94.3%, specificity of 82.0% and Kappa coefficient of 0.73. And at 6 ppm, sensitivity, specificity and Kappa coefficient were 88.5%, 86.0% and 0.64, respectively. Therefore, the dipstick method had higher sensitivity and specificity than the expired CO method. Values from both the dipstick and expired CO methods increased significantly with increasing smoking amount. With longer time since the last smoking, expired CO showed a rapid decrease after 4 hours, whereas the dipstick method showed relatively stable levels for more than 4 hours. The dipstick and expired CO methods were both good indicators for assessing smoking status. However, the former showed higher sensitivity and specificity and stable levels over longer hours after smoking, compared to the expired CO method.
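Validity statistics like these can be reproduced from a 2×2 table of test result against true smoking status. In the sketch below the cell counts are hypothetical, chosen only to roughly match the reported dipstick sensitivity, specificity, and Kappa:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, and Cohen's kappa from a 2x2 table
    (tp/fn among true smokers, fp/tn among true non-smokers)."""
    n = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    observed = (tp + tn) / n
    # Agreement expected by chance, from the marginal totals:
    expected = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / (n * n)
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, specificity, kappa

# Hypothetical counts for 244 smokers and 50 ex-smokers:
sens, spec, kappa = diagnostic_metrics(tp=226, fn=18, fp=2, tn=48)
print(round(sens, 3), round(spec, 2), round(kappa, 2))  # → 0.926 0.96 0.79
```

Kappa corrects the raw agreement for the agreement expected by chance, which is why it can be modest (e.g. 0.64) even when sensitivity and specificity both look high.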

  4. Development and validation of stability indicating HPLC methods for related substances and assay analyses of amoxicillin and potassium clavulanate mixtures.

    PubMed

    Bellur Atici, Esen; Yazar, Yücel; Ağtaş, Çağan; Ridvanoğlu, Nurten; Karlığa, Bekir

    2017-03-20

    Antibacterial combinations consisting of the semisynthetic antibiotic amoxicillin (amox) and the β-lactamase inhibitor potassium clavulanate (clav) are commonly used, and several chromatographic methods have been reported for their quantification in mixtures. In the present work, a single HPLC method for related substances analyses of amoxicillin and potassium clavulanate mixtures was developed and validated according to International Conference on Harmonisation (ICH) guidelines. Eighteen amoxicillin and six potassium clavulanate impurities were successfully separated from each other by triple gradient elution using a Thermo Hypersil Zorbax BDS C18 (250 mm × 4.6 mm, 3 μm) column with 50 μL injection volumes at a wavelength of 215 nm. Commercially unavailable impurities were formed by degradation of amoxicillin and potassium clavulanate, identified by LC-MS studies, and used during analytical method development and validation studies. Also, the process-related amoxicillin impurity-P was synthesized and characterized by using nuclear magnetic resonance (NMR) and mass spectroscopy (MS) for the first time. Complementary to this work, an assay method for amoxicillin and potassium clavulanate mixtures was developed and validated; stress-testing and stability studies of amox/clav mixtures were carried out under specified conditions according to ICH and analyzed by using the validated stability-indicating assay and related substances methods. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Bridging the Gap Between Validation and Implementation of Non-Animal Veterinary Vaccine Potency Testing Methods

    PubMed Central

    Dozier, Samantha; Brown, Jeffrey; Currie, Alistair

    2011-01-01

    Simple Summary Many vaccines are tested for quality in experiments that require the use of large numbers of animals in procedures that often cause significant pain and distress. Newer technologies have fostered the development of vaccine quality control tests that reduce or eliminate the use of animals, but the availability of these newer methods has not guaranteed their acceptance by regulators or use by manufacturers. We discuss a strategic approach that has been used to assess and ultimately increase the use of non-animal vaccine quality tests in the U.S. and U.K. Abstract In recent years, technologically advanced high-throughput techniques have been developed that replace, reduce or refine animal use in vaccine quality control tests. Following validation, these tests are slowly being accepted for use by international regulatory authorities. Because regulatory acceptance itself has not guaranteed that approved humane methods are adopted by manufacturers, various organizations have sought to foster the preferential use of validated non-animal methods by interfacing with industry and regulatory authorities. After noticing this gap between regulation and uptake by industry, we began developing a paradigm that seeks to narrow the gap and quicken implementation of new replacement, refinement or reduction guidance. A systematic analysis of our experience in promoting the transparent implementation of validated non-animal vaccine potency assays has led to the refinement of our paradigmatic process, presented here, by which interested parties can assess the local regulatory acceptance of methods that reduce animal use and integrate them into quality control testing protocols, or ensure the elimination of peripheral barriers to their use, particularly for potency and other tests carried out on production batches. PMID:26486625

  6. Validity test of the IPD-Work consortium approach for creating comparable job strain groups between Job Content Questionnaire and Demand-Control Questionnaire.

    PubMed

    Choi, Bongkyoo; Ko, Sangbaek; Ostergren, Per-Olof

    2015-01-01

    This study aims to test the validity of the IPD-Work Consortium approach for creating comparable job strain groups between the Job Content Questionnaire (JCQ) and the Demand-Control Questionnaire (DCQ). A random population sample (N = 682) of all middle-aged Malmö males and females was given a questionnaire with the 14-item JCQ and 11-item DCQ for job control and job demands. The JCQ job control and job demands scores were calculated in 3 different ways: using the 14-item JCQ standard scale formulas (method 1); dropping 3 job control items and using the 11-item JCQ standard scale formulas with additional scale weights (method 2); and the approach of the IPD Group (method 3), dropping 3 job control items but using the simple 11-item summation-based scale formulas. High job strain was defined as a combination of high demands and low control. Between the 2 questionnaires, false negatives for high job strain were much greater than false positives (37-49% vs. 7-13%). When method 3 was applied, the sensitivity of the JCQ for high job strain against the DCQ was lowest (0.51 vs. 0.60-0.63 when methods 1 and 2 were applied), although the specificity was highest (0.93 vs. 0.87-0.89 when methods 1 and 2 were applied). The prevalence of high job strain with the JCQ when method 3 was applied was considerably lower (4-7%) than with the JCQ under methods 1 and 2 and with the DCQ. The number of congruent cases for high job strain between the 2 questionnaires was smallest when method 3 was applied. The IPD-Work Consortium approach showed 2 major weaknesses when used for epidemiological studies on high job strain and health outcomes, as compared to the standard JCQ methods: greater misclassification of high job strain and lower prevalence of high job strain. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  7. Development plan for the External Hazards Experimental Group. Light Water Reactor Sustainability Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, Justin Leigh; Smith, Curtis Lee; Burns, Douglas Edward

    This report describes the development plan for a new multi-partner External Hazards Experimental Group (EHEG) coordinated by Idaho National Laboratory (INL) within the Risk-Informed Safety Margin Characterization (RISMC) technical pathway of the Light Water Reactor Sustainability Program. Currently, there is limited data available for development and validation of the tools and methods being developed in the RISMC Toolkit. The EHEG is being developed to obtain high-quality, small- and large-scale experimental data for validation of RISMC tools and methods in a timely and cost-effective way. The group of universities and national laboratories that will eventually form the EHEG (which is ultimately expected to include both the initial participants and other universities and national laboratories that have been identified) have the expertise and experimental capabilities needed to both obtain and compile existing data archives and perform additional seismic and flooding experiments. The data developed by EHEG will be stored in databases for use within RISMC. These databases will be used to validate the advanced external hazard tools and methods.

  8. Evaluation of a multiresidue method for measuring fourteen chemical groups of pesticides in water by use of LC-MS-MS.

    PubMed

    Carvalho, J J; Jerónimo, P C A; Gonçalves, C; Alpendurada, M F

    2008-11-01

    European Council Directive 98/83/EC on the quality of water intended for human consumption brought a new challenge for water-quality control routine laboratories, mainly on pesticides analysis. Under the guidelines of ISO/IEC 17025:2005, a multiresidue method was developed, validated, implemented in routine, and studied with real samples during a one-year period. The proposed method enables routine laboratories to handle a large number of samples, since 28 pesticides of 14 different chemical groups can be quantitated in a single procedure. The method comprises a solid-phase extraction step and subsequent analysis by liquid chromatography-mass spectrometry (LC-MS-MS). The accuracy was established on the basis of participation in interlaboratory proficiency tests, with encouraging results (majority |z-score| <2), and the precision was consistently analysed over one year. The limits of quantitation (below 0.050 microg L(-1)) are in agreement with the enforced threshold value for pesticides of 0.10 microg L(-1). Overall method performance is suitable for routine use according to accreditation rules, taking into account the data collected over one year.

  9. Meta-analysis and The Cochrane Collaboration: 20 years of the Cochrane Statistical Methods Group

    PubMed Central

    2013-01-01

    The Statistical Methods Group has played a pivotal role in The Cochrane Collaboration over the past 20 years. The Statistical Methods Group has determined the direction of statistical methods used within Cochrane reviews, developed guidance for these methods, provided training, and continued to discuss and consider new and controversial issues in meta-analysis. The contribution of Statistical Methods Group members to the meta-analysis literature has been extensive and has helped to shape the wider meta-analysis landscape. In this paper, marking the 20th anniversary of The Cochrane Collaboration, we reflect on the history of the Statistical Methods Group, beginning in 1993 with the identification of aspects of statistical synthesis for which consensus was lacking about the best approach. We highlight some landmark methodological developments that Statistical Methods Group members have contributed to in the field of meta-analysis. We discuss how the Group implements and disseminates statistical methods within The Cochrane Collaboration. Finally, we consider the importance of robust statistical methodology for Cochrane systematic reviews, note research gaps, and reflect on the challenges that the Statistical Methods Group faces in its future direction. PMID:24280020

  10. Development and validation of a stability-indicating capillary zone electrophoretic method for the assessment of entecavir and its correlation with liquid chromatographic methods.

    PubMed

    Dalmora, Sergio Luiz; Nogueira, Daniele Rubert; D'Avila, Felipe Bianchini; Souto, Ricardo Bizogne; Leal, Diogo Paim

    2011-01-01

    A stability-indicating capillary zone electrophoresis (CZE) method was validated for the analysis of entecavir in pharmaceutical formulations, using nimesulide as an internal standard. A fused-silica capillary (50 µm i.d.; effective length, 40 cm) was used while being maintained at 25°C; the applied voltage was 25 kV. The background electrolyte solution consisted of a 20 mM sodium tetraborate solution at pH 10. Injections were performed using a pressure mode at 50 mbar for 5 s, with detection at 216 nm. The specificity and stability-indicating capability were proven through forced degradation studies, which also evaluated the in vitro cytotoxicity of the degradation products. The method was linear over the concentration range of 1-200 µg mL(-1) (r(2) = 0.9999), and was applied for the analysis of entecavir in tablet dosage forms. The results were correlated to those of validated conventional and fast LC methods, showing non-significant differences (p > 0.05).

  11. Validation of a method to detect cocaine and its metabolites in nails by gas chromatography-mass spectrometry.

    PubMed

    Valente-Campos, Simone; Yonamine, Mauricio; de Moraes Moreau, Regina Lucia; Silva, Ovandir Alves

    2006-06-02

    The objective of the present work was to compare previously published methods and provide validation data to detect cocaine (COC), benzoylecgonine (BE) and norcocaine (NCOC) simultaneously in nail. Finger and toenail samples (5 mg) were cut into very small pieces and submitted to an initial procedure for external decontamination. Methanol (3 ml) was used to release analytes from the matrix. A cleanup step was performed simultaneously by solid-phase extraction (SPE) and the residue was derivatized with pentafluoropropionic anhydride/pentafluoropropanol (PFPA/PFP). Gas chromatography-mass spectrometry (GC-MS) was used to detect the analytes in selected ion monitoring (SIM) mode. The validation parameters of the method were: recovery, intra- and inter-assay precision, and limit of detection (LOD) of the analytes. The limits of detection were: 3.5 ng/mg for NCOC and 3.0 ng/mg for COC and BE. Good intra-assay precision was observed for all detected substances (coefficient of variation (CV) <11%). The inter-assay precision for norcocaine and benzoylecgonine was <4%. For intra- and inter-assay precision, deuterated internal standards were used. Toenail and fingernail samples from eight declared cocaine users were submitted to the validated method.
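The precision criterion used here (CV < 11%) is the relative standard deviation of replicate measurements. A minimal sketch with hypothetical replicate concentrations (the values are invented for illustration only):

```python
import statistics

def coefficient_of_variation(replicates):
    """CV (%) = 100 * sample standard deviation / mean of replicates."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical replicate concentrations (ng/mg) for one analyte
replicates = [3.1, 3.3, 3.0, 3.2, 3.15]
cv = coefficient_of_variation(replicates)
acceptable = cv < 11.0  # intra-assay criterion reported in the abstract
```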

  12. A new validated method for the simultaneous determination of benzocaine, propylparaben and benzyl alcohol in a bioadhesive gel by HPLC.

    PubMed

    Pérez-Lozano, P; García-Montoya, E; Orriols, A; Miñarro, M; Ticó, J R; Suñé-Negre, J M

    2005-10-04

    A new HPLC-RP method has been developed and validated for the simultaneous determination of benzocaine, two preservatives (propylparaben (nipasol) and benzyl alcohol) and degradation products of benzocaine in a semisolid pharmaceutical dosage form (benzocaine gel). The method uses a Nucleosil 120 C18 column and gradient elution. The mobile phase consisted of a mixture of methanol and glacial acetic acid (10%, v/v) at different proportions according to a time-schedule programme, pumped at a flow rate of 2.0 ml min(-1). The DAD detector was set at 258 nm. The validation study was carried out in accordance with ICH guidelines to demonstrate that the new analytical method meets the fundamental validation criteria (selectivity, linearity, precision, accuracy and sensitivity) and maintains these characteristics over time. The method was applied during the quality control of benzocaine gel to quantify the drug (benzocaine), preservatives and degradation products, and proved suitable as a rapid and reliable quality control method.

  13. The convergent and discriminant validity of burnout measures in sport: a multi-trait/multi-method analysis.

    PubMed

    Cresswell, Scott L; Eklund, Robert C

    2006-02-01

    Athlete burnout research has been hampered by the lack of an adequate measurement tool. The Athlete Burnout Questionnaire (ABQ) and the Maslach Burnout Inventory General Survey (MBI-GS) are two recently developed self-report instruments designed to assess burnout. The convergent and discriminant validity of the ABQ and MBI-GS were assessed through multi-trait/multi-method analysis with a sporting population. Overall, the ABQ and the MBI-GS displayed acceptable convergent validity with matching subscales highly correlated, and satisfactory internal discriminant validity with lower correlations between non-matching subscales. Both scales also indicated an adequate discrimination between the concepts of burnout and depression. These findings add support to previous findings in non-sporting populations that depression and burnout are separate constructs. Based on the psychometric results, construct validity analysis and practical considerations, the results support the use of the ABQ to assess athlete burnout.

  14. PREFACE: XXXth International Colloquium on Group Theoretical Methods in Physics (ICGTMP) (Group30)

    NASA Astrophysics Data System (ADS)

    Brackx, Fred; De Schepper, Hennie; Van der Jeugt, Joris

    2015-04-01

    The XXXth International Colloquium on Group Theoretical Methods in Physics (ICGTMP), also known as the Group30 conference, took place in Ghent (Belgium) from Monday 14 to Friday 18 July 2014. The conference was organised by Ghent University (Department of Applied Mathematics, Computer Science and Statistics, and Department of Mathematical Analysis). The website http://www.group30.ugent.be is still available. The ICGTMP is one of the traditional conference series covering the most important topics of symmetry which are relevant to the interplay of present-day mathematics and physics. More than 40 years ago a group of enthusiasts, headed by H. Bacry of Marseille and A. Janner of Nijmegen, initiated a series of annual meetings with the aim of providing a common forum for scientists interested in group theoretical methods. At that time most of the participants belonged to two important communities: on the one hand solid state specialists, elementary particle theorists and phenomenologists, and on the other mathematicians eager to apply newly-discovered group and algebraic structures. The conference series has become a meeting point for scientists working at modelling physical phenomena through mathematical and numerical methods based on geometry and symmetry. It is considered the oldest of the conference series devoted to geometry and physics. It has been further broadened and diversified due to the successful applications of geometric and algebraic methods in life sciences and other areas. The first four meetings took place alternately in Marseille and Nijmegen. Soon after, the conference acquired an international standing, especially following the 1975 colloquium in Nijmegen and the 1976 colloquium in Montreal. Since then it has been organized in many places around the world. It has been a bi-annual colloquium since 1990, the year it was organized in Moscow. This was the first time the colloquium took place in Belgium. There were 246 registered

  15. Validation of Field Methods to Assess Body Fat Percentage in Elite Youth Soccer Players.

    PubMed

    Munguia-Izquierdo, Diego; Suarez-Arrones, Luis; Di Salvo, Valter; Paredes-Hernandez, Victor; Alcazar, Julian; Ara, Ignacio; Kreider, Richard; Mendez-Villanueva, Alberto

    2018-05-01

    This study determined the most effective field method for quantifying body fat percentage in male elite youth soccer players and developed prediction equations based on anthropometric variables. Forty-four male elite-standard youth soccer players aged 16.3-18.0 years underwent body fat percentage assessments, including bioelectrical impedance analysis and the calculation of various skinfold-based prediction equations. Dual X-ray absorptiometry provided a criterion measure of body fat percentage. Correlation coefficients, bias, limits of agreement, and differences were used as validity measures, and regression analyses were used to develop soccer-specific prediction equations. The equations from Sarria et al. (1998) and Durnin & Rahaman (1967) reached very large correlations and the lowest biases, and their differences from the criterion reached neither the practically worthwhile nor the substantial threshold. The new youth soccer-specific skinfold equation included a combination of triceps and supraspinale skinfolds. None of the practical methods compared in this study are adequate for estimating body fat percentage in male elite youth soccer players, except for the equations from Sarria et al. (1998) and Durnin & Rahaman (1967). The new youth soccer-specific equation calculated in this investigation is the only field method specifically developed and validated in elite male players, and it shows potentially good predictive power. © Georg Thieme Verlag KG Stuttgart · New York.
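The bias and limits of agreement used as validity measures can be computed as in a standard Bland-Altman analysis: the mean of the paired differences and that mean plus or minus 1.96 standard deviations. A minimal sketch with hypothetical paired body-fat estimates (field method vs. criterion; the numbers are invented for illustration):

```python
import statistics

def bland_altman(method_a, method_b):
    """Return (bias, lower, upper): the mean paired difference and
    the 95% limits of agreement (bias +/- 1.96 * SD of differences)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical body-fat percentages: skinfold equation vs. DXA criterion
skinfold = [12.1, 14.3, 10.8, 15.0, 13.2]
dxa      = [12.5, 14.0, 11.5, 15.4, 13.0]
bias, lower, upper = bland_altman(skinfold, dxa)
```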

  16. Validating the Vocabulary Levels Test with Fourth and Fifth Graders to Identify Students At-Risk in Vocabulary Development Using a Quasiexperimental Single Group Design

    ERIC Educational Resources Information Center

    Dunn, Suzanna

    2012-01-01

    This quasiexperimental single group design study investigated the validity of the Vocabulary Levels Test (VLT) to identify fourth and fifth grade students who are at-risk in vocabulary development. The subjects of the study were 88 fourth and fifth grade students at one elementary school in Washington State. The Group Reading Assessment and…

  17. A Known Group Analysis Validity Study of the Vanderbilt Assessment of Leadership in Education in US Elementary and Secondary Schools

    ERIC Educational Resources Information Center

    Covay Minor, Elizabeth; Porter, Andrew C.; Murphy, Joseph; Goldring, Ellen B.; Cravens, Xiu; Elliott, Stephen N.

    2014-01-01

    The Vanderbilt Assessment of Leadership in Education (VAL-ED) provides educators with a tool for principal evaluation based on principal, teacher, and supervisor reports of principals' learning-centered leadership. In this study, we conduct a known group analysis as part of a larger argument for the validity of the VAL-ED in US elementary and…

  18. Validated modified Lycopodium spore method development for standardisation of ingredients of an ayurvedic powdered formulation Shatavaryadi churna.

    PubMed

    Kumar, Puspendra; Jha, Shivesh; Naved, Tanveer

    2013-01-01

    A validated modified Lycopodium spore method has been developed for simple and rapid quantification of herbal powdered drugs. The Lycopodium spore method was performed on the ingredients of Shatavaryadi churna, an ayurvedic formulation used as an immunomodulator, galactagogue, aphrodisiac and rejuvenator. The diagnostic characters of each ingredient of Shatavaryadi churna were estimated individually. Microscopic determination, counting of identifying numbers, and measurement of the area, length and breadth of identifying characters were performed using a Leica DMLS-2 microscope. The method was validated for intraday precision, linearity, specificity, repeatability, accuracy and system suitability. The method is simple, precise, sensitive and accurate, and can be used for routine standardisation of raw materials of herbal drugs. It gives the ratio of individual ingredients in the powdered drug, so that any adulteration of the genuine drug with its adulterant can be detected. The method shows very good linearity (0.988-0.999) for the number of identifying characters and the area of identifying characters. The percentage purity of a sample drug can be determined using the linear equation of the standard genuine drug.
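Reading the percentage purity of a sample off the standard's linear calibration, as the abstract describes, amounts to fitting a line y = a + b·x to the genuine drug's data and inverting it for the sample's observed value. A minimal sketch with hypothetical calibration numbers (the counts and percentages are invented for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical calibration: % genuine drug vs. count of identifying characters
purity = [25, 50, 75, 100]
counts = [13, 24, 37, 48]
a, b = fit_line(purity, counts)

# A sample showing 30 identifying characters: invert the fitted line
estimated_purity = (30 - a) / b
```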

  19. Raman fiber-optical method for colon cancer detection: Cross-validation and outlier identification approach

    NASA Astrophysics Data System (ADS)

    Petersen, D.; Naveed, P.; Ragheb, A.; Niedieker, D.; El-Mashtoly, S. F.; Brechmann, T.; Kötting, C.; Schmiegel, W. H.; Freier, E.; Pox, C.; Gerwert, K.

    2017-06-01

    Endoscopy plays a major role in the early recognition of cancers that are not externally accessible, and thereby in increasing the survival rate. Raman spectroscopic fiber-optical approaches can help to decrease the impact on the patient, increase objectivity in tissue characterization, reduce expenses and provide a significant time advantage in endoscopy. In gastroenterology an early recognition of malignant and precursor lesions is relevant. Instantaneous and precise differentiation between adenomas, as precursor lesions for cancer, and hyperplastic polyps on the one hand, and between high- and low-risk alterations on the other, is important. Raman fiber-optical measurements of colon biopsy samples taken during colonoscopy were carried out during a clinical study, and samples of adenocarcinoma (22), tubular adenomas (141), hyperplastic polyps (79) and normal tissue (101) from 151 patients were analyzed. This allows us to focus on the bioinformatic analysis and to set the stage for Raman endoscopic measurements. Since spectral differences between normal and cancerous biopsy samples are small, special care has to be taken in data analysis. Using a leave-one-patient-out cross-validation scheme, three different outlier identification methods were investigated to decrease the influence of systematic errors, such as a residual risk of misplacement of the sample and spectral dilution of marker bands (especially in cancerous tissue), and thereby optimize the experimental design. Furthermore, other validation methods, such as leave-one-sample-out and leave-one-spectrum-out cross-validation schemes, were compared with leave-one-patient-out cross-validation. High-risk lesions were differentiated from low-risk lesions with a sensitivity of 79%, specificity of 74% and an accuracy of 77%, and cancer from normal tissue with a sensitivity of 79%, specificity of 83% and an accuracy of 81%. Additionally, the applied outlier identification enabled us to improve the recognition of neoplastic biopsy samples.
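Leave-one-patient-out cross-validation assigns all spectra from one patient to the same held-out fold, so spectra from a single patient never appear in both the training and test sets. A minimal pure-Python sketch of the fold construction (the patient IDs are hypothetical; a real pipeline would train a classifier on each training split):

```python
from collections import defaultdict

def leave_one_patient_out(patient_ids):
    """Yield (train_indices, test_indices) pairs, holding out all
    spectra belonging to one patient per fold."""
    by_patient = defaultdict(list)
    for i, pid in enumerate(patient_ids):
        by_patient[pid].append(i)
    for pid, test_idx in by_patient.items():
        train_idx = [i for i, p in enumerate(patient_ids) if p != pid]
        yield train_idx, test_idx

# Hypothetical: six spectra from three patients
patients = ["p1", "p1", "p2", "p3", "p3", "p3"]
folds = list(leave_one_patient_out(patients))
# One fold per patient; no spectrum index is in both halves of a fold
```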

  20. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

    The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models to accurately predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.