Science.gov

Sample records for purpose validated method

  1. The validation of methods for regulatory purposes in the control of residues.

    PubMed

    Gowik, Petra

    2009-11-13

    The topic of validation is diverse. This review outlines the validation strategies found in national, international and supranational regulations, compares them with one another and elaborates on their main principles. European regulations and legislation, Codex Alimentarius guidelines, the official methods program of the AOAC and, naturally, the relevant ISO standards, particularly the ISO 5725 series, are taken into consideration. The objective of every validation is to demonstrate fitness for purpose, which of course varies in its characteristics across the diverse uses. All approaches nevertheless share the objective of harmonising food control by using effective and reliable methods. To this end, criteria are determined and validation models developed and made compulsory. ISO 5725, with its validation specifications based on collaborative studies, is the central basis for the validation of quantitative methods. By contrast, there are no uniform international method specifications for qualitative methods. Collaborative studies and single-laboratory validations differ in their dominant sources of error: laboratory errors predominate in collaborative studies, whereas single-laboratory (in-house) validation concentrates particularly on time and processing errors (intermediate precision). Newer statistical models for in-house validation also consider the matrix mismatch error. The validation models presented here are of a general nature and can in principle be used for all analytical methods. Correct and appropriate statistical modelling is very important.
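
    The ISO 5725 collaborative-study statistics mentioned here reduce to a one-way ANOVA across laboratories. The sketch below shows the standard computation of the repeatability (s_r) and reproducibility (s_R) standard deviations; the lab layout and numbers are hypothetical, not taken from the review.

    ```python
    # Repeatability (s_r) and reproducibility (s_R) from a balanced
    # collaborative study, following the one-way ANOVA decomposition of
    # ISO 5725-2. Layout: p laboratories x n replicates per laboratory.
    import numpy as np

    # Hypothetical results (mg/kg) from 5 labs, 3 replicates each.
    data = np.array([
        [10.2, 10.4, 10.3],
        [10.8, 10.6, 10.7],
        [ 9.9, 10.1, 10.0],
        [10.5, 10.3, 10.4],
        [10.1, 10.2, 10.0],
    ])
    p, n = data.shape
    lab_means = data.mean(axis=1)
    grand_mean = data.mean()

    ms_within = ((data - lab_means[:, None]) ** 2).sum() / (p * (n - 1))
    ms_between = n * ((lab_means - grand_mean) ** 2).sum() / (p - 1)

    s_r2 = ms_within                               # repeatability variance
    s_L2 = max((ms_between - ms_within) / n, 0.0)  # between-laboratory variance
    s_R2 = s_r2 + s_L2                             # reproducibility variance

    print(f"s_r = {s_r2 ** 0.5:.3f} mg/kg, s_R = {s_R2 ** 0.5:.3f} mg/kg")
    ```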

  2. Validity for What Purpose?

    ERIC Educational Resources Information Center

    Shepard, Lorrie A.

    2013-01-01

    Background/Context: The evolution of validity understandings from mid-century to now has emphasized that test validity depends on test purpose--adding consequence considerations to issues of interpretation and evidentiary warrants. Purpose: To consider the tensions created by multiple purposes for assessment and sketch briefly how we got to where…

  3. Critical analysis of several analytical method validation strategies in the framework of the fit for purpose concept.

    PubMed

    Bouabidi, A; Rozet, E; Fillet, M; Ziemons, E; Chapuzet, E; Mertens, B; Klinkenberg, R; Ceccato, A; Talbi, M; Streel, B; Bouklouze, A; Boulanger, B; Hubert, Ph

    2010-05-07

    Analytical method validation is a mandatory step at the end of development in all analytical laboratories, and a highly regulated step of the life cycle of a quantitative analytical method. However, even though several documents have been published, there is a lack of clear guidance on the methodology to follow to decide adequately when a method can be considered valid. This situation has led to the availability of several methodological approaches, and it is therefore the responsibility of the analyst to choose the most appropriate one. The classical decision processes encountered during the evaluation of method validation are compared, namely the descriptive, difference and equivalence approaches. Furthermore, a validation approach using an accuracy profile, computed by means of the beta-expectation tolerance interval and total measurement error, is also available. In the present paper all of these validation approaches were applied to the validation of two analytical methods. The producer and consumer risks were also evaluated by Monte Carlo simulation in order to compare the appropriateness of the various approaches. The classical methodologies give rise to inadequate and contradictory conclusions, which prevent them from meeting the objective of method validation, i.e. to give sufficient guarantees that each future result generated by the method during routine use will be close enough to the true value. The validation methodology found to give the most guarantees with regard to the reliability of the decision to consider a method valid is the one based on the accuracy profile.
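
    The accuracy-profile decision rule can be sketched compactly: at each validation level, compute the bias and a β-expectation tolerance interval on the relative error, and accept the level when the interval lies entirely within the acceptance limits. The sketch below uses a simplified one-variance-component interval (a prediction-type interval) rather than the paper's full intermediate-precision treatment; all data are hypothetical.

    ```python
    # Simplified accuracy profile: per concentration level, relative bias and
    # a beta-expectation tolerance interval on the relative error (%), accepted
    # when the interval stays within +/-lambda. A full treatment would use
    # between-series and within-series variance components, not shown here.
    import numpy as np
    from scipy import stats

    beta, lam = 0.95, 15.0  # expectation level and acceptance limit (%)

    # Hypothetical back-calculated recoveries (% of nominal) per level.
    levels = {
        1.0:  [98.1, 101.3, 99.5, 100.9, 97.8, 100.2],
        10.0: [99.0, 100.4, 99.8, 101.1, 100.6, 99.3],
        50.0: [100.2, 99.7, 100.9, 99.4, 100.1, 100.5],
    }

    for conc, recov in levels.items():
        x = np.asarray(recov) - 100.0        # relative error (%)
        n, m, s = x.size, x.mean(), x.std(ddof=1)
        k = stats.t.ppf((1 + beta) / 2, n - 1) * np.sqrt(1 + 1 / n)
        lo, hi = m - k * s, m + k * s
        ok = (-lam < lo) and (hi < lam)
        print(f"{conc:5.1f}: bias {m:+.2f}%  TI [{lo:+.2f}, {hi:+.2f}]  valid={ok}")
    ```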

  4. Validity of a simple videogrammetric method to measure the movement of all hand segments for clinical purposes.

    PubMed

    Sancho-Bru, Joaquín L; Jarque-Bou, Néstor J; Vergara, Margarita; Pérez-González, Antonio

    2014-02-01

    Hand movement measurement is important in the clinical, ergonomics and biomechanics fields. Videogrammetric techniques allow the measurement of hand movement without interfering with natural hand behaviour. However, an accurate measurement of hand movement requires a high number of markers, which limits its applicability in clinical practice (60 markers would be needed for the hand and wrist). In this work, a simple method that uses a reduced number of markers (29), based on a simplified kinematic model of the hand, is proposed and evaluated. A set of experiments has been performed to evaluate the errors associated with the kinematic simplification, together with an evaluation of accuracy, repeatability and reproducibility. The global error attributed to the kinematic simplification was 6.68°. The method has small errors in repeatability and reproducibility (3.43° and 4.23°, respectively) and shows no statistically significant difference from electronic goniometers. The relevance of the work lies in the ability to measure all degrees of freedom of the hand with a reduced number of markers, without interfering with natural hand behaviour, which makes it suitable for clinical applications as well as for ergonomic and biomechanical purposes.

  5. Quality Control Analytical Methods: Method Validation.

    PubMed

    Klang, Mark G; Williams, LaVonn A

    2016-01-01

    To properly determine the accuracy of a pharmaceutical product or compounded preparation, tests must be designed specifically for that evaluation. The procedures selected must be verified through a process referred to as method validation, an integral part of any good analytical practice. The results from a method validation procedure can be used to judge the quality, reliability and consistency of analytical results. The purpose of this article is to convey the importance of validating pharmaceutical products and compounded preparations, and to briefly discuss the consequences of a lack of such validation.

  6. Fit for purpose validated method for the determination of the strontium isotopic signature in mineral water samples by multi-collector inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Brach-Papa, Christophe; Van Bocxstaele, Marleen; Ponzevera, Emmanuel; Quétel, Christophe R.

    2009-03-01

    A robust method allowing the routine determination of n(87Sr)/n(86Sr) with at least five significant decimal digits for large sets of mineral water samples is described. It is based on two consecutive chromatographic separations of Sr combined with multi-collector inductively coupled plasma mass spectrometry (MC-ICPMS) measurements. Separations are performed using commercial pre-packed columns filled with "Sr resin" to overcome isobaric interferences affecting the determination of strontium isotope ratios. The careful method validation scheme applied is described. It included investigations of all parameters influencing both the chromatographic separations and the MC-ICPMS measurements, as well as a test on a synthetic sample made of an aliquot of the NIST SRM 987 certified reference material dispersed in a saline matrix to mimic complex samples. Correction for mass discrimination was done internally using the n(88Sr)/n(86Sr) ratio. For comparing mineral waters originating from different geological backgrounds, or for identifying counterfeits, calculations took the well-known consensus value 1/0.1194, with zero uncertainty, as reference. The typical uncertainty budget estimated for these results was 40 ppm relative (k = 2). It increased to 150 ppm (k = 2) for the establishment of stand-alone results, taking into account a relative difference of about 126 ppm systematically observed between measured and certified values of the NIST SRM 987. In case of a suspected deviation of the n(88Sr)/n(86Sr) ratio (worst-case scenario), our proposal was to use the NIST SRM 987 value 8.37861 ± 0.00325 (k = 2) as reference and assign a typical relative uncertainty budget of 300 ppm (k = 2). The method is thus fit for purpose and was applied to eleven French samples.
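
    Internal mass-discrimination correction with the n(88Sr)/n(86Sr) ratio is commonly implemented with the exponential fractionation law; the sketch below shows that arithmetic with illustrative raw ratios (it is not the authors' exact implementation, and the measured values are invented).

    ```python
    # Exponential-law mass-bias correction of 87Sr/86Sr using the measured
    # 88Sr/86Sr ratio and the consensus reference value 1/0.1194.
    import math

    M86, M87, M88 = 85.909261, 86.908877, 87.905612  # atomic masses (u)
    R88_86_REF = 1 / 0.1194                          # consensus 88Sr/86Sr

    r88_86_meas = 8.4521    # hypothetical raw ratios from MC-ICPMS
    r87_86_meas = 0.71032

    f = math.log(R88_86_REF / r88_86_meas) / math.log(M88 / M86)
    r87_86_corr = r87_86_meas * (M87 / M86) ** f

    print(f"fractionation exponent f = {f:+.5f}")
    print(f"corrected 87Sr/86Sr      = {r87_86_corr:.5f}")
    ```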

  7. Validating Analytical Methods

    ERIC Educational Resources Information Center

    Ember, Lois R.

    1977-01-01

    The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)

  8. External Validity in Policy Evaluations that Choose Sites Purposively.

    PubMed

    Olsen, Robert B; Orr, Larry L; Bell, Stephen H; Stuart, Elizabeth A

    2013-01-01

    Evaluations of the impact of social programs are often carried out in multiple "sites," such as school districts, housing authorities, local TANF offices, or One-Stop Career Centers. Most evaluations select sites purposively following a process that is nonrandom. Unfortunately, purposive site selection can produce a sample of sites that is not representative of the population of interest for the program. In this paper, we propose a conceptual model of purposive site selection. We begin with the proposition that a purposive sample of sites can usefully be conceptualized as a random sample of sites from some well-defined population, for which the sampling probabilities are unknown and vary across sites. This proposition allows us to derive a formal, yet intuitive, mathematical expression for the bias in the pooled impact estimate when sites are selected purposively. This formula helps us to better understand the consequences of selecting sites purposively, and the factors that contribute to the bias. Additional research is needed to obtain evidence on how large the bias tends to be in actual studies that select sites purposively, and to develop methods to increase the external validity of these studies.
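
    The paper's bias expression is not reproduced in the abstract; one plausible reconstruction of the idea, treating purposive selection as sampling with unknown, site-varying probabilities, is sketched below (a reading of the abstract, not the authors' exact formula).

    ```latex
    % Let $\Delta_i$ be the impact in site $i$ and $p_i$ its unknown selection
    % probability. The pooled estimate converges to a $p$-weighted mean of the
    % site impacts, so its bias relative to the population-average impact is
    \[
      \mathrm{Bias}
        = \frac{\mathbb{E}[p_i \Delta_i]}{\mathbb{E}[p_i]} - \mathbb{E}[\Delta_i]
        = \frac{\mathrm{Cov}(p_i, \Delta_i)}{\mathbb{E}[p_i]},
    \]
    % which vanishes when selection probabilities are unrelated to site-level
    % impacts and grows with their covariance.
    ```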

  9. Construct Validity in Formative Assessment: Purpose and Practices

    ERIC Educational Resources Information Center

    Rix, Samantha

    2012-01-01

    This paper examines the utilization of construct validity in formative assessment for classroom-based purposes. Construct validity pertains to the notion that interpretations are made by educators who analyze test scores during formative assessment. The purpose of this paper is to note the challenges that educators face when interpreting these…

  10. Simple validated LC-MS/MS method for the determination of atropine and scopolamine in plasma for clinical and forensic toxicological purposes.

    PubMed

    Koželj, Gordana; Perharič, Lucija; Stanovnik, Lovro; Prosen, Helena

    2014-08-05

    A liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for the determination of atropine and scopolamine in 100 μL of human plasma was developed and validated. Sample pretreatment consisted of protein precipitation with acetonitrile followed by a concentration step. The analytes and levobupivacaine (internal standard) were separated on a Zorbax XDB-CN column (75 mm × 4.6 mm i.d., 3.5 μm) with gradient elution (purified water, acetonitrile, formic acid). The triple quadrupole MS was operated in positive ESI mode. The matrix effect was estimated for deproteinised plasma samples. Selected reaction monitoring (SRM) was used for quantification in the range 0.10-50.00 ng/mL. Interday precision for both tropanes and intraday precision for atropine were <10%; intraday precision for scopolamine was <14%, and <18% at the lower limit of quantification (LLOQ). Mean interday and intraday accuracies were within ±7% for atropine and within ±11% for scopolamine. The method can be used for the determination of therapeutic and toxic levels of both compounds and has been successfully applied to a study of the pharmacodynamic and pharmacokinetic properties of tropanes, in which plasma samples of volunteers were collected at fixed time intervals after ingestion of a buckwheat meal spiked with five low doses of tropanes.
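
    Precision and accuracy figures of the kind reported here are simple functions of replicate QC results; a minimal sketch follows (the replicates and QC level are hypothetical, and interday precision is computed from daily means rather than the full ANOVA treatment).

    ```python
    # Intraday precision (worst within-day CV%), interday precision (CV% of
    # daily means) and accuracy (% of nominal) from QC replicates.
    import numpy as np

    nominal = 5.0    # ng/mL, hypothetical QC level
    qc = np.array([  # rows = days, columns = replicates within a day
        [4.9, 5.1, 5.0, 4.8, 5.2],
        [5.3, 5.0, 5.1, 4.9, 5.2],
        [4.8, 4.7, 5.0, 4.9, 5.1],
    ])

    intraday_cv = (qc.std(axis=1, ddof=1) / qc.mean(axis=1) * 100).max()
    interday_cv = qc.mean(axis=1).std(ddof=1) / qc.mean() * 100
    accuracy = qc.mean() / nominal * 100

    print(f"intraday CV <= {intraday_cv:.1f}%")
    print(f"interday CV  = {interday_cv:.1f}%")
    print(f"accuracy     = {accuracy:.1f}% of nominal")
    ```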

  11. Homework Purpose Scale for Middle School Students: A Validation Study

    ERIC Educational Resources Information Center

    Xu, Jianzhong

    2011-01-01

    The purpose of the present study is to test the validity of scores on the Homework Purpose Scale (HPS) for middle school students. The participants were 1,181 eighth graders in the southeastern United States, including (a) 699 students in urban school districts and (b) 482 students in rural school districts. First, confirmatory factor analysis was…

  12. VAN method lacks validity

    NASA Astrophysics Data System (ADS)

    Jackson, David D.; Kagan, Yan Y.

    Varotsos and colleagues (the VAN group) claim to have successfully predicted many earthquakes in Greece. Several authors have refuted these claims, as reported in the May 27, 1996, special issue of Geophysical Research Letters and a recent book, A Critical Review of VAN [Lighthill, 1996]. Nevertheless, the myth persists. Here we summarize why the VAN group's claims lack validity. The VAN group observes electrical potential differences that they call "seismic electric signals" (SES) weeks before and hundreds of kilometers away from some earthquakes, claiming that SES are somehow premonitory. This would require that increases in stress or decreases in strength cause the electrical variations, or that some regional process first causes the electrical signals and then helps trigger the earthquakes. Here we adopt their notation SES to refer to the electrical variations, without accepting any link to the quakes.

  13. Validation of rapid microbiological methods.

    PubMed

    Peris-Vicente, Juan; Carda-Broch, Samuel; Esteve-Romero, Josep

    2015-06-01

    Classical microbiological methods currently have unacceptably long cycle times. Rapid microbiological methods have been available on the market for decades and have been applied in the clinical and food industries. However, their implementation in the pharmaceutical industry has been hampered by stringent regulations on validation and comparison with classical methods. To encourage the implementation of these methodologies, they must be validated to demonstrate that the results are reliable, and a comparison with traditional methods should also be performed. In this review, information about the validation of rapid microbiological methods reported in the literature is provided, together with an explanation of why validation of these methods is difficult. A comparison with traditional methods is also discussed. This information is useful for industries and laboratories that can potentially implement these methods.

  14. Toward a Unified Validation Framework in Mixed Methods Research

    ERIC Educational Resources Information Center

    Dellinger, Amy B.; Leech, Nancy L.

    2007-01-01

    The primary purpose of this article is to further discussions of validity in mixed methods research by introducing a validation framework to guide thinking about validity in this area. To justify the use of this framework, the authors discuss traditional terminology and validity criteria for quantitative and qualitative research, as well as…

  15. Quality by design compliant analytical method validation.

    PubMed

    Rozet, E; Ziemons, E; Marini, R D; Boulanger, B; Hubert, Ph

    2012-01-03

    The concept of quality by design (QbD) has recently been adopted for the development of pharmaceutical processes to ensure a predefined product quality. Focus on applying the QbD concept to analytical methods has increased, as they are fully integrated within pharmaceutical processes and especially within the process control strategy. In addition, there is a need to switch from the traditional checklist implementation of method validation requirements to a validation approach that provides a high level of assurance of method reliability, in order to adequately measure the critical quality attributes (CQAs) of the drug product. The intended purpose of analytical methods is directly related to the final decision that will be made with the results they generate. For quantitative impurity assays, the final aim is to correctly declare a substance or product compliant with the corresponding specifications. For content assays the aim is similar: making the correct decision about product compliance with respect to the specification limits. It is for these reasons that the fitness of these methods should be defined, as they are key elements of the analytical target profile (ATP). Therefore, validation criteria, the corresponding acceptance limits and the method validation decision approach should be set in accordance with the final use of the analytical procedure. This work proposes a general methodology to achieve this, aligning method validation with the QbD framework and philosophy. β-Expectation tolerance intervals are implemented to decide on the validity of analytical methods. The proposed methodology is applied to the validation of analytical procedures dedicated to the quantification of impurities or active pharmaceutical ingredients (API) in drug substances or drug products, and its applicability is illustrated with two case studies.

  16. Random Qualitative Validation: A Mixed-Methods Approach to Survey Validation

    ERIC Educational Resources Information Center

    Van Duzer, Eric

    2012-01-01

    The purpose of this paper is to introduce the process and value of Random Qualitative Validation (RQV) in the development and interpretation of survey data. RQV is a method of gathering clarifying qualitative data that improves the validity of the quantitative analysis. This paper is concerned with validity in relation to the participants'…

  17. External Validity in Policy Evaluations That Choose Sites Purposively

    ERIC Educational Resources Information Center

    Olsen, Robert B.; Orr, Larry L.; Bell, Stephen H.; Stuart, Elizabeth A.

    2013-01-01

    Evaluations of the impact of social programs are often carried out in multiple sites, such as school districts, housing authorities, local TANF offices, or One-Stop Career Centers. Most evaluations select sites purposively following a process that is nonrandom. Unfortunately, purposive site selection can produce a sample of sites that is not…

  18. Are analysts doing method validation in liquid chromatography?

    PubMed

    Ruiz-Angel, M J; García-Alvarez-Coque, M C; Berthod, A; Carda-Broch, S

    2014-08-01

    Method validation has been applied to reported analytical methods for decades. Even before the protocol was formally defined, authors already validated their methods, at least in part, to assure the quality of their work. Validation is an applied approach to verify that a method is suitable and rugged enough to function as a quality control tool in different locations and at different times. The performance parameters and statistical protocols followed throughout a validation study vary with the source of guidelines. Before single-laboratory validation, an analytical method should be fully developed and optimized. The purpose of validation is to confirm the performance parameters determined during method development, and it should provide information on how the method will perform under routine use. An unstable method may require re-validation. Further method development and optimization will be needed if validation results do not meet the accepted performance standards. When possible, the validation protocol should also be conducted as a collaborative study by multiple laboratories, on different instruments, with different reagents and standards. At this point, it is interesting to know how people are actually validating their methods. Are they evaluating all defined validation parameters? Are they indicating the guidelines followed? Is re-validation really used? Is validation performed by a single laboratory, or is it collaborative work by several laboratories? Is it an evolving discipline? In this survey, we try to answer these questions with a focus on the field of liquid chromatography.

  19. Advances in validation, risk and uncertainty assessment of bioanalytical methods.

    PubMed

    Rozet, E; Marini, R D; Ziemons, E; Boulanger, B; Hubert, Ph

    2011-06-25

    Bioanalytical method validation is a mandatory step to evaluate the ability of developed methods to provide accurate results for their routine application, in order to trust the critical decisions that will be made with them. Even though several guidelines exist to help perform bioanalytical method validations, there is still a need to clarify the meaning and interpretation of the validation criteria and methodology. Different interpretations can be made of the guidelines, as well as of the definitions of the validation criteria, leading to diverse experimental designs implemented to try to fulfil these criteria. Finally, different decision methodologies can also be read into these guidelines. The risk that a validated bioanalytical method may be unfit for its future purpose therefore depends on the analyst's personal interpretation of the guidelines. The objective of this review is thus to discuss and highlight several essential aspects of method validation, not restricted to chromatographic methods but also covering ligand-binding assays, owing to their increasing role in the biopharmaceutical industry. The points reviewed are the common validation criteria: selectivity, standard curve, trueness, precision, accuracy, limits of quantification and range, dilutional integrity and analyte stability. Definitions, methodology, experimental design and decision criteria are reviewed. Two other points closely connected to method validation are also examined, incurred sample reproducibility testing and measurement uncertainty, as they are highly linked to the reliability of bioanalytical results. Their additional implementation is foreseen to strongly reduce the risk of having validated a bioanalytical method that is unfit for its purpose.

  20. Expert validation of fit-for-purpose guidelines for designing programmes of assessment

    PubMed Central

    2012-01-01

    Background An assessment programme, a purposeful mix of assessment activities, is necessary to achieve a complete picture of assessee competence. High-quality assessment programmes exist; however, design requirements for such programmes are still unclear. We developed guidelines for design based on an earlier developed framework which identified the areas to be covered. A fitness-for-purpose approach to defining quality was adopted to develop and validate the guidelines. Methods First, ideas were generated in a brainstorm, followed by structured interviews with 9 international assessment experts. Then, the guidelines were fine-tuned through analysis of the interviews. Finally, validation was based on expert consensus via member checking. Results In total 72 guidelines were developed, and in this paper the most salient ones are discussed. The guidelines are related and grouped per layer of the framework. Some guidelines are so generic that they are applicable in any design consideration: the principle of proportionality, the requirement that rationales underpin each decision, and the requirement of expertise. Logically, many guidelines focus on practical aspects of assessment. Some guidelines were found to be clear and concrete; others were less straightforward and are phrased more as issues for contemplation. Conclusions The set of guidelines is comprehensive and not bound to a specific context or educational approach. Following the fitness-for-purpose principle, the guidelines are eclectic, requiring expert judgement to use them appropriately in different contexts. Further validation studies to test practicality are required. PMID:22510502

  21. Validation Methods for Direct Writing Assessment.

    ERIC Educational Resources Information Center

    Miller, M. David; Crocker, Linda

    1990-01-01

    This review of methods for validating writing assessments was conceptualized within a framework suggested by S. Messick (1989) that included five operational components of construct validation: (1) content representativeness; (2) structural fidelity; (3) nomological validity; (4) criterion-related validity; and (5) nomothetic span. (SLD)

  22. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as the development and maintenance of expertise, the maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in import and export testing of food require such changes to be managed in a context that includes quality assurance, accreditation and method validation considerations. Decisions as to when a change in a method requires re-validation, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of a change or modification, whether that change may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision and recovery. Any change which may affect the method scope or any performance parameter will require re-validation. Some typical situations involving changes in methods are discussed, and a decision process is proposed for selecting appropriate validation measures.
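
    The decision process itself is not spelled out in the abstract; the toy sketch below only illustrates the general shape of such logic, mapping a change to the measures it may trigger. The change categories and responses are invented for illustration and are not MacNeil's scheme.

    ```python
    # Toy re-validation decision helper. Categories and responses are
    # illustrative only; real decisions follow the relevant guidance
    # documents and the laboratory's quality system.
    REVALIDATION_MAP = {
        "new_matrix":         ["recovery", "precision", "selectivity"],
        "new_analyte":        ["full_validation"],
        "calibration_change": ["linearity", "loq_verification"],
        "instrument_change":  ["precision", "recovery"],
        "minor_reagent_swap": ["system_suitability_check"],
    }

    def required_measures(change: str) -> list[str]:
        """Unknown changes are treated conservatively: full re-validation."""
        return REVALIDATION_MAP.get(change, ["full_validation"])

    print(required_measures("new_matrix"))
    print(required_measures("column_geometry_change"))  # unknown -> conservative
    ```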

  23. Development of a systematic computer vision-based method to analyse and compare images of false identity documents for forensic intelligence purposes-Part I: Acquisition, calibration and validation issues.

    PubMed

    Auberson, Marie; Baechler, Simon; Zasso, Michaël; Genessay, Thibault; Patiny, Luc; Esseiva, Pierre

    2016-03-01

    Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, to highlight links between documents produced by the same modus operandi or the same source, and thus to support forensic intelligence efforts. Inspired by previous research on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from the images. Acquisition conditions were fine-tuned to optimise the reproducibility and comparability of images. Different filters and comparison metrics were evaluated, and the performance of the method was assessed using two calibration and validation sets of documents, made up of 101 Italian driving licences and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that the use of Hue and Edge filters, or their combination, to extract profiles from images, followed by comparison of the profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can be easily operated from remote locations and shared amongst different organisations, which makes it very convenient for future operational applications. The method could serve as a fast first triage that helps target more resource-intensive profiling methods (based on a visual, physical or chemical examination of documents, for instance). Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be…
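
    Profile comparison with a Canberra distance-based metric is a one-liner with SciPy; in the sketch below the profiles stand in for the Hue/Edge features the paper extracts, and the linkage threshold is invented.

    ```python
    # Link documents whose image profiles are close under the Canberra
    # distance, echoing the metric the paper found most accurate.
    import numpy as np
    from scipy.spatial.distance import canberra

    profiles = {  # hypothetical 1-D feature profiles (e.g. hue histograms)
        "doc_A": np.array([0.12, 0.30, 0.25, 0.18, 0.15]),
        "doc_B": np.array([0.11, 0.31, 0.24, 0.19, 0.15]),
        "doc_C": np.array([0.40, 0.10, 0.05, 0.25, 0.20]),
    }
    THRESHOLD = 0.2  # hypothetical, to be set from calibration documents

    names = list(profiles)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            d = canberra(profiles[a], profiles[b])
            print(f"{a} vs {b}: d = {d:.3f}  linked = {d < THRESHOLD}")
    ```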

  24. Design for validation, based on formal methods

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1990-01-01

    Validation of ultra-reliable systems decomposes into two subproblems: (1) quantification of the probability of system failure due to physical failure; and (2) establishing that design errors are not present. Methods for the design, testing and analysis of ultra-reliable software are discussed. It is concluded that a design-for-validation approach based on formal methods is needed for the digital flight control systems problem, and that formal methods will play a major role in the development of future high-reliability digital systems.

  25. [Validation and verification of microbiology methods].

    PubMed

    Camaró-Sala, María Luisa; Martínez-García, Rosana; Olmos-Martínez, Piedad; Catalá-Cuenca, Vicente; Ocete-Mochón, María Dolores; Gimeno-Cardona, Concepción

    2015-01-01

    Clinical microbiologists should ensure, to the maximum level allowed by scientific and technical development, the reliability of results. This implies that, in addition to meeting the technical criteria that ensure their validity, tests must be performed under conditions that allow comparable results to be obtained regardless of the laboratory performing them. In this sense, the use of recognized and accepted reference methods is the most effective tool for providing these guarantees. Activities related to the verification and validation of analytical methods have become very important, given the continuous development and updating of techniques, increasingly complex analytical equipment, and the interest of professionals in ensuring the quality of processes and results. The definitions of validation and verification are described, along with the different types of validation/verification, the types of methods, and the level of validation necessary depending on the degree of standardization. The situations in which validation/verification is mandatory and/or recommended are discussed, including those particularly related to validation in microbiology. The paper stresses the importance of promoting the use of reference strains as controls in microbiology and the use of standard controls, as well as the importance of participation in external quality assessment programmes to demonstrate technical competence. Emphasis is placed on how to calculate some of the parameters required for validation/verification, such as accuracy and precision. The development of these concepts can be found in SEIMC microbiological procedure number 48: «Validation and verification of microbiological methods» www.seimc.org/protocols/microbiology.

  26. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

    The topic of validating structural optimization methods by use of experimental results is addressed. The need to validate the methods, as a way of effecting greater and accelerated acceptance of formal optimization methods by practicing engineering designers, is described. The range of validation strategies is defined, including comparison of optimization results with more traditional design approaches, establishing the accuracy of the analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described, including experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum-weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum-weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low-vibration helicopter rotor.

  27. ASTM Validates Air Pollution Test Methods

    ERIC Educational Resources Information Center

    Chemical and Engineering News, 1973

    1973-01-01

    The American Society for Testing and Materials (ASTM) has validated six basic methods for measuring pollutants in ambient air as the first part of its Project Threshold. Aim of the project is to establish nationwide consistency in measuring pollutants; determining precision, accuracy and reproducibility of 35 standard measuring methods. (BL)

  28. A Practical Guide to Immunoassay Method Validation.

    PubMed

    Andreasson, Ulf; Perret-Liaudet, Armand; van Waalwijk van Doorn, Linda J C; Blennow, Kaj; Chiasserini, Davide; Engelborghs, Sebastiaan; Fladby, Tormod; Genc, Sermin; Kruse, Niels; Kuiperij, H Bea; Kulic, Luka; Lewczuk, Piotr; Mollenhauer, Brit; Mroczko, Barbara; Parnetti, Lucilla; Vanmechelen, Eugeen; Verbeek, Marcel M; Winblad, Bengt; Zetterberg, Henrik; Koel-Simmelink, Marleen; Teunissen, Charlotte E

    2015-01-01

    Biochemical markers have a central position in the diagnosis and management of patients in clinical medicine, as well as in clinical research and drug development, including for brain disorders such as Alzheimer's disease. The enzyme-linked immunosorbent assay (ELISA) is frequently used for the measurement of low-abundance biomarkers. However, the quality of ELISA methods varies, which may introduce both systematic and random errors. This urges the need for more rigorous control of assay performance, regardless of whether the assay is used in a research setting, in clinical routine, or in drug development. The aim of a method validation is to present objective evidence that a method fulfils the requirements for its intended use. Although much has been published on which parameters to investigate in a method validation, less is available, at a detailed level, on how to perform the corresponding experiments. To remedy this, standard operating procedures (SOPs) with step-by-step instructions for a number of different validation parameters are included in the present work, together with a validation report template, which allows for a well-ordered presentation of the results. Even though the SOPs were developed with immunochemical methods and multicenter evaluations in mind, most of them are generic and can be used for other technologies as well.

  29. Determination of lead and cadmium in seawater by differential pulse anodic stripping voltammetry: fit-for-purpose partial validation and internal quality aspects.

    PubMed

    Bisetty, K; Gumede, N J; Escuder-Gilabert, L; Sagrado, S

    2008-09-01

    The main thrust of this work involves method validation, quality control and sample uncertainty estimation related to the determination of cadmium and lead in marine water by anodic stripping voltammetry. We have followed a step-by-step protocol to evaluate and harmonize the internal quality aspects of this method. The protocol involves a statement of the method's scope (analytes, matrices, concentration level) and requisites (external and/or internal); selection of the method's (fit-for-purpose) features; prevalidation and validation of the intermediate accuracy (under intermediate precision conditions) and its assessment by Monte Carlo simulation; validation of other required features of the method (if applicable); and a validity statement in terms of a "fit-for-purpose" decision, harmonized validation-control-uncertainty statistics (the "u-approach") and short-term routine work (with the aim of proposing virtually "ready-to-use" methods).
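
    Assessing intermediate accuracy by Monte Carlo simulation amounts to drawing future results from the estimated bias and variance components and counting how often they fall within the acceptance limits; the sketch below uses invented parameter values.

    ```python
    # Monte Carlo check of intermediate accuracy: simulate measurements from
    # estimated bias, between-run and within-run components, then estimate
    # the probability that a future result lies within +/-lambda of the truth.
    import numpy as np

    rng = np.random.default_rng(0)
    bias, s_between, s_within = 1.0, 2.0, 3.0  # % of true value (hypothetical)
    lam, n_sim = 10.0, 100_000                 # acceptance limit (%), draws

    rel_error = (bias
                 + rng.normal(0.0, s_between, n_sim)   # run-to-run effect
                 + rng.normal(0.0, s_within, n_sim))   # within-run noise
    p_ok = np.mean(np.abs(rel_error) < lam)

    print(f"P(|relative error| < {lam}%) = {p_ok:.3f}")
    ```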

  30. Biomarker method validation in anticancer drug development.

    PubMed

    Cummings, J; Ward, T H; Greystoke, A; Ranson, M; Dive, C

    2008-02-01

    Over recent years the role of biomarkers in anticancer drug development has expanded across a spectrum of applications, ranging from research tool during early discovery to surrogate endpoint in the clinic. However, in Europe, when biomarker measurements are performed on samples collected from subjects entered into clinical trials of new investigational agents, the laboratories conducting these analyses become subject to the Clinical Trials Regulations. While these regulations are not specific in their requirements for research laboratories, quality assurance, and in particular assay validation, are essential. This review therefore focuses on current thinking in biomarker assay validation. Five categories define the majority of biomarker assays, from 'absolute quantitation' to 'categorical'. Validation must therefore take account of both the position of the biomarker in the spectrum towards clinical endpoint and the level of quantitation inherent in the methodology. Biomarker assay validation should ideally be performed in stages on a 'fit-for-purpose' basis, avoiding unnecessarily dogmatic adherence to rigid guidelines but with careful monitoring of progress at the end of each stage. These principles are illustrated with two specific examples: (a) absolute quantitation of protein biomarkers by mass spectrometry and (b) the M30 and M65 ELISA assays as surrogate endpoints of cell death.

  31. Validation of qualitative microbiological test methods.

    PubMed

    IJzerman-Boon, Pieta C; van den Heuvel, Edwin R

    2015-01-01

    This paper considers a statistical model for the detection mechanism of qualitative microbiological test methods, with a parameter for the detection proportion (the probability of detecting a single organism) and a parameter for the false positive rate. It is demonstrated that the detection proportion and the bacterial density cannot be estimated separately, not even in a multiple-dilution experiment; only their product can be estimated, which changes the interpretation of the most probable number estimator. The asymptotic power of the likelihood ratio statistic for comparing an alternative method with the compendial method is optimal for a single-dilution experiment. The bacterial density should either be close to two CFU per test unit or equal to zero, depending on the differences in the model parameters between the two test methods. The proposed strategy for method validation is to use these two dilutions and test for differences in the two model parameters, addressing the validation parameters specificity and accuracy. Robustness of these two parameters may still be required, but all other validation parameters can be omitted. A confidence-interval-based approach for the ratio of the detection proportions of the two methods is recommended, since it is most informative and close to the power of the likelihood ratio test.
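
    The identifiability point can be made concrete: writing the detection model down shows that the positive-test probability depends on the detection proportion p and the density λ only through their product, so only p·λ is estimable. A sketch under that model follows (false positive rate assumed known; numbers hypothetical).

    ```python
    # Detection model for a qualitative test: with detection proportion p,
    # Poisson density lam (CFU per test unit) and false positive rate fp,
    #   P(positive) = 1 - (1 - fp) * exp(-p * lam),
    # so p and lam enter only through p*lam. A single-dilution MLE of p*lam
    # then follows from the observed fraction of negative tubes.
    import math

    def p_positive(p: float, lam: float, fp: float) -> float:
        return 1.0 - (1.0 - fp) * math.exp(-p * lam)

    # Same product p*lam -> identical positive probability:
    print(p_positive(0.5, 4.0, fp=0.01), p_positive(1.0, 2.0, fp=0.01))

    def mle_p_lam(k_pos: int, n: int, fp: float) -> float:
        """MLE of p*lam from k_pos positives out of n tubes at one dilution."""
        frac_neg = (n - k_pos) / n                # requires k_pos < n
        return -math.log(frac_neg / (1.0 - fp))

    print(mle_p_lam(k_pos=14, n=20, fp=0.01))
    ```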

  32. Validation of an alternative microbiological method for tissue products.

    PubMed

    Suessner, Susanne; Hennerbichler, Simone; Schreiberhuber, Stefanie; Stuebl, Doris; Gabriel, Christian

    2014-06-01

    According to the European Pharmacopoeia, sterility testing of products includes an incubation time of 14 days in thioglycollate medium and soya-bean casein medium, so a long period of time is needed for product testing. We therefore designed a study to evaluate an alternative method for sterility testing. The aim of this study was to reduce the incubation time for the routinely produced products in our tissue bank (cornea and amnion grafts) while obtaining the same detection limit, accuracy and recovery rates as the reference method described in the European Pharmacopoeia. The study included two steps of validation. The primary validation compared the reference method with the alternative method, testing eight bacterial and two fungal strains at their preferred milieu, using a geometric dilution series from 10 to 0.625 colony-forming units per 10 mL of culture medium. Subsequently, the second part of the study included validation of the fertility of the culture media and parallel testing of the two methods on products; for this purpose, two product batches were tested in three independent runs. We found no deviation between the alternative and the reference method, and the recovery rate of each microorganism was between 83.33 and 100 %. The alternative method showed non-inferiority to the reference method regarding accuracy. On the basis of this study, we reduced sterility testing for cornea and amnion grafts to 9 days.

  33. Expert validation of fit-for-purpose guidelines for designing programmes of assessment.

    PubMed

    Dijkstra, Joost; Galbraith, Robert; Hodges, Brian D; McAvoy, Pauline A; McCrorie, Peter; Southgate, Lesley J; Van der Vleuten, Cees P M; Wass, Val; Schuwirth, Lambert W T

    2012-04-17

    An assessment programme, a purposeful mix of assessment activities, is necessary to achieve a complete picture of assessee competence. High-quality assessment programmes exist; however, design requirements for such programmes are still unclear. We developed guidelines for design based on an earlier developed framework which identified the areas to be covered. A fitness-for-purpose approach to defining quality was adopted to develop and validate the guidelines. First, ideas were generated in a brainstorm, followed by structured interviews with 9 international assessment experts. Then, the guidelines were fine-tuned through analysis of the interviews. Finally, validation was based on expert consensus via member checking. In total 72 guidelines were developed, and in this paper the most salient ones are discussed. The guidelines are related and grouped per layer of the framework. Some guidelines are so generic that they are applicable in any design consideration: the principle of proportionality, the requirement that rationales underpin each decision, and the requirement of expertise. Logically, many guidelines focus on practical aspects of assessment. Some guidelines were found to be clear and concrete; others were less straightforward and are phrased more as issues for contemplation. The set of guidelines is comprehensive and not bound to a specific context or educational approach. Following the fitness-for-purpose principle, the guidelines are eclectic, requiring expert judgement to use them appropriately in different contexts. Further validation studies to test practicality are required.

  34. Validation and verification of measurement methods in clinical chemistry.

    PubMed

    Theodorsson, Elvar

    2012-02-01

    The present overview of validation and verification procedures in clinical chemistry focuses on the use of harmonized concepts and nomenclature, fitness-for-purpose evaluations and procedures for minimizing overall measurement and diagnostic uncertainty. The need for mutually accepted validation procedures in all fields of bioanalysis becomes obvious when laboratories implement international accreditation and certification standards or their equivalents. The guide on bioanalytical method validation published by the US FDA in 2001 represents a sensible compromise between thoroughness and cost-effectiveness. Lacking comprehensive international agreements in the field, this document has also been successfully adapted in other fields of bioanalysis. European and international efforts aiming for consensus in the entire field of bioanalysis are currently being made. Manufacturers of highly automated in vitro diagnostic methods provide the majority of the measurement methods used, unmodified, in clinical chemistry. Validated by the manufacturers for their intended use and fitness for purpose, these methods need to be verified under the circumstances of the end-users. As yet, there is unfortunately no general agreement on the extent of the verification procedures needed.

  35. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Such methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. It builds on the concept of analytical redundancy relations (ARRs).
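
    Fault isolation from analytical redundancy relations reduces to matching the pattern of violated relations against each sensor's fault signature; the toy system below (two relations over three sensors, with an exoneration assumption) is invented to show the shape of the inference, not the paper's algorithm.

    ```python
    # Toy ARR-based sensor validation. Sensors a, b, c should satisfy
    # ARR1: a + b = c and ARR2: b = 2a. A residual above its threshold
    # "fires"; under the exoneration assumption, a faulty sensor fires
    # every ARR it appears in, so candidates match the fired pattern exactly.
    SIGNATURES = {
        "a": (True, True),    # a appears in ARR1 and ARR2
        "b": (True, True),
        "c": (True, False),   # c appears only in ARR1
    }

    def diagnose(a: float, b: float, c: float, tol: float = 0.5) -> list[str]:
        fired = (abs(a + b - c) > tol, abs(b - 2 * a) > tol)
        if not any(fired):
            return []         # readings consistent: no faulty sensor inferred
        return [s for s, sig in SIGNATURES.items() if sig == fired]

    print(diagnose(1.0, 2.0, 3.0))  # consistent -> []
    print(diagnose(1.0, 2.0, 4.2))  # only ARR1 fires -> ['c']
    ```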

  36. Alternative methods for ocular toxicology testing: validation, applications and troubleshooting.

    PubMed

    Dholakiya, Sanjay L; Barile, Frank A

    2013-06-01

    Humanitarian concern, scientific progress and legislative action have led to the development, validation and regulatory acceptance of alternative in vitro ocular models. However, to date not a single in vitro alternative ocular toxicity test has been validated as a full replacement for the in vivo Draize rabbit eye test for all classes of chemicals across whole irritancy ranges. Since the 1990s, alternative ocular methods have been validated, but few have been accepted for regulatory purposes. These assays include organotypic models, such as the bovine corneal opacity and permeability (BCOP) assay and the isolated chicken eye (ICE) test method, and cell function-based in vitro assays, such as the cytosensor microphysiometer (CM) and the fluorescein leakage (FL) test methods. Some refinements to in vivo testing methods have been accepted by regulatory agencies, including humane endpoints to avoid or minimize pain and distress. The authors review the background, protocol overview, applications and validation status of the tier-testing approach. Furthermore, the authors provide expert analysis of this approach and their perspective on potential future developments. In the search for a battery of methods to replace the in vivo Draize test, it is necessary to prioritize techniques, define the related mechanisms and justify the statistical approaches. Overall, only when the reliability and relevance of a method are unequivocally supported will any technique be ready for regulatory acceptance.

  37. Fundamentals of population pharmacokinetic modelling: validation methods.

    PubMed

    Sherwin, Catherine M T; Kiang, Tony K L; Spigarelli, Michael G; Ensom, Mary H H

    2012-09-01

    Population pharmacokinetic modelling is widely used within the field of clinical pharmacology, as it helps to define the sources and correlates of pharmacokinetic variability in target patient populations and their impact upon drug disposition, and it provides estimates of drug pharmacokinetic parameters. The approach aims to understand how participants in population pharmacokinetic studies represent the target population, as opposed to the healthy volunteers or highly selected patients of traditional pharmacokinetic studies. This review focuses on the fundamentals of population pharmacokinetic modelling and how the results are evaluated and validated. It defines the common aspects of population pharmacokinetic modelling through a discussion of the literature describing the techniques, placing them in the appropriate context. The concept of validation, as applied to population pharmacokinetic models, is explored with a focus on the lack of consensus regarding both terminology and the concept of validation itself. Population pharmacokinetic modelling is a powerful approach in which pharmacokinetic variability can be identified in a target patient population receiving a pharmacological agent. Given the lack of consensus on the best approaches to model building and validation, sound fundamentals are required to ensure the selected methodology is suitable for the particular data type and/or patient population. There is a need to further standardize and establish the best approaches in modelling, so that any model created can be systematically evaluated and its results relied upon.

  38. Validation methods for flight crucial systems

    NASA Technical Reports Server (NTRS)

    Holt, H. M.

    1983-01-01

    Research to develop techniques that can aid in determining the reliability and performance of digital electronic fault-tolerant systems, which must have a probability of catastrophic system failure on the order of 10^-9 at 10 hours, is reviewed. The computer-aided reliability estimation program (CARE III) provides general-purpose reliability analysis and a design tool for fault-tolerant systems, a large reduction of state size, and a fault-handling model based on a probabilistic description of detection, isolation, and recovery mechanisms. The application of design proof techniques as part of the design and development of the software-implemented fault-tolerance (SIFT) computer is mentioned. Emulation techniques and experimental procedures are verified using specimens of fault-tolerant computers and the capabilities of the validation research laboratory, AIRLAB.

  39. A special purpose knowledge-based face localization method

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad; Jassim, Sabah

    2008-04-01

    This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are an essential pre-processing step in many techniques that deal with faces (e.g. age, face, gender, race and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices, under wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps, starting with image pre-processing, followed by a special-purpose edge detection and an image refinement step. The output image is then passed through a discrete wavelet decomposition, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned with a special template to select a number of candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by colour information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We present results of a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a high level of accuracy, outperforming existing general-purpose face detection methods.
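
    The wavelet stage can be sketched with PyWavelets: decompose to the LL sub-band, binarize, then scan for dense regions as face candidates. The global threshold and fixed scanning window below are crude stand-ins for the paper's template and fuzzy score fusion.

    ```python
    # Sketch of the wavelet step: 2-level DWT, keep the LL sub-band,
    # binarize, scan with a fixed window and keep windows that are
    # mostly "on" as candidate face locations.
    import numpy as np
    import pywt

    def candidate_locations(gray: np.ndarray, win: int = 8) -> list[tuple[int, int]]:
        ll = gray.astype(float)
        for _ in range(2):                  # two decomposition levels
            ll, _ = pywt.dwt2(ll, "haar")   # keep approximation (LL) only
        binary = ll > ll.mean()             # crude global threshold
        cands = []
        for i in range(0, binary.shape[0] - win, win):
            for j in range(0, binary.shape[1] - win, win):
                if binary[i:i + win, j:j + win].mean() > 0.6:
                    cands.append((i, j))    # coordinates in LL space
        return cands

    frame = np.random.default_rng(1).integers(0, 256, (128, 128))
    print(len(candidate_locations(frame)))
    ```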

  40. Fit for purpose? Validation of a conceptual framework for personal recovery with current mental health consumers.

    PubMed

    Bird, Victoria; Leamy, Mary; Tew, Jerry; Le Boutillier, Clair; Williams, Julie; Slade, Mike

    2014-07-01

    Mental health services in the UK, Australia and other Anglophone countries have moved towards supporting personal recovery as a primary orientation. To provide an empirically grounded foundation for identifying and evaluating recovery-oriented interventions, we previously published a conceptual framework of personal recovery based on a systematic review and narrative synthesis of existing models. Our objective was to test the validity and relevance of this framework for people currently using mental health services. Seven focus groups were conducted with 48 current mental health consumers in three NHS trusts across England, as part of the REFOCUS Trial. Consumers were asked about the meaning and their experience of personal recovery. Deductive and inductive thematic analysis, applying a constant-comparison approach, was used to analyse the data. The analysis aimed to explore the validity of the categories within the conceptual framework and to highlight any areas of difference between the framework and the themes generated from the new focus group data. Both the inductive and deductive analyses broadly validated the conceptual framework, with the super-ordinate categories Connectedness, Hope and optimism, Identity, Meaning and purpose, and Empowerment (CHIME) evident in the analysis. Three areas of difference were, however, apparent in the inductive analysis: practical support; a greater emphasis on issues around diagnosis and medication; and scepticism surrounding recovery. This study suggests that the conceptual framework of personal recovery provides a defensible theoretical base for clinical and research purposes which is valid for use with current consumers. However, the three areas of difference further stress the individual nature of recovery and the need for an understanding of the population and context under investigation.

  7. Establishing the Content Validity of Tests Designed To Serve Multiple Purposes: Bridging Secondary-Postsecondary Mathematics.

    ERIC Educational Resources Information Center

    Burstein, Leigh; And Others

    A method is presented for determining the content validity of a series of secondary school mathematics tests. These tests are part of the Mathematics Diagnostic Testing Project (MDTP), a collaborative effort by California university systems to develop placement examinations and a means to document student preparation in mathematics. Content…

  8. Validation of analytical methods involved in dissolution assays: acceptance limits and decision methodologies.

    PubMed

    Rozet, E; Ziemons, E; Marini, R D; Boulanger, B; Hubert, Ph

    2012-11-02

    Dissolution tests are key elements to ensure continuing product quality and performance. The ultimate goal of these tests is to assure consistent product quality within a defined set of specification criteria. Validation of an analytical method aimed at assessing the dissolution profile of products or at verifying pharmacopoeial compliance should demonstrate that this analytical method is able to correctly declare two dissolution profiles as similar or drug products as compliant with their specifications. It is essential to ensure that these analytical methods are fit for their purpose, and method validation is aimed at providing this guarantee. However, even the ICH Q2 guideline gives no information explaining how to decide whether the method under validation is valid for its final purpose or not. Are all the validation criteria needed to ensure that a quality control (QC) analytical method for dissolution testing is valid? What acceptance limits should be set on these criteria? How should a method's validity be decided? These are the questions that this work aims to answer. The focus is on complying with the current implementation of the Quality by Design (QbD) principles in the pharmaceutical industry, in order to correctly define the Analytical Target Profile (ATP) of analytical methods involved in dissolution tests. Analytical method validation is then the natural demonstration that the developed methods are fit for their intended purpose, and no longer the unthinking checklist exercise still generally performed to complete the filing required for product marketing authorization.
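
    As a concrete illustration of such a fit-for-purpose decision rule (a generic sketch, not the decision methodology proposed in the paper), the snippet below computes a β-expectation tolerance interval for the relative error observed during validation and accepts the method when that interval falls within symmetric acceptance limits. The ±10% limit and β = 0.9 are hypothetical, and a full accuracy profile would pool intermediate precision across several series rather than use a single series.

      import numpy as np
      from scipy import stats

      def fit_for_purpose(relative_errors_pct, beta=0.90, acceptance_pct=10.0):
          # Accept the method if the beta-expectation tolerance interval of the
          # relative error (%) lies inside +/- acceptance_pct (single-series case).
          x = np.asarray(relative_errors_pct, dtype=float)
          n = x.size
          m, s = x.mean(), x.std(ddof=1)
          t = stats.t.ppf((1 + beta) / 2, df=n - 1)
          half_width = t * s * np.sqrt(1 + 1 / n)
          lo, hi = m - half_width, m + half_width
          return (lo, hi), bool(lo >= -acceptance_pct and hi <= acceptance_pct)

      # e.g. fit_for_purpose([-1.2, 0.5, 2.1, -0.8, 1.4, 0.3])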

  9. Space Suit Joint Torque Measurement Method Validation

    NASA Technical Reports Server (NTRS)

    Valish, Dana; Eversley, Karina

    2012-01-01

    In 2009 and early 2010, a test method was developed and performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits. This was done in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design met the requirements. However, because the original test was set up and conducted by a single test operator there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future development programs. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data were compared using graphical and statistical analysis; the results indicated a significant variance in values reported for a subset of the re-tested joints. Potential variables that could have affected the data were identified and a third round of testing was conducted in an attempt to eliminate and/or quantify the effects of these variables. The results of the third test effort will be used to determine whether or not the proposed joint torque methodology can be applied to future space suit development contracts.

  10. Formal methods and digital systems validation for airborne systems

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1993-01-01

    This report has been prepared to supplement a forthcoming chapter on formal methods in the FAA Digital Systems Validation Handbook. Its purpose is as follows: to outline the technical basis for formal methods in computer science; to explain the use of formal methods in the specification and verification of software and hardware requirements, designs, and implementations; to identify the benefits, weaknesses, and difficulties in applying these methods to digital systems used on board aircraft; and to suggest factors for consideration when formal methods are offered in support of certification. These latter factors assume the context for software development and assurance described in RTCA document DO-178B, 'Software Considerations in Airborne Systems and Equipment Certification,' Dec. 1992.

  11. Purpose and methods of a Pollution Prevention Awareness Program

    SciTech Connect

    Flowers, P.A.; Irwin, E.F.; Poligone, S.E.

    1994-08-15

    The purpose of the Pollution Prevention Awareness Program (PPAP), which is required by DOE Order 5400.1, is to foster the philosophy that prevention is superior to remediation. The goal of the program is to incorporate pollution prevention into the decision-making process at every level throughout the organization. The objectives are to instill awareness, disseminate information, provide training and rewards for identifying the true source or cause of wastes, and encourage employee participation in solving environmental issues and preventing pollution. PPAP at the Oak Ridge Y-12 Plant was created several years ago and continues to grow. We believe that we have implemented several unique methods of communicating environmental awareness to promote a more active work force in identifying ways of reducing pollution.

  12. Evaluating regional vulnerability to climate change: purposes and methods

    SciTech Connect

    Malone, Elizabeth L.; Engle, Nathan L.

    2011-03-15

    As the emphasis in climate change research, international negotiations, and developing-country activities has shifted from mitigation to adaptation, vulnerability has emerged as a bridge between impacts on one side and the need for adaptive changes on the other. Still, the term vulnerability remains abstract, its meaning changing with the scale, focus, and purpose of each assessment. Understanding regional vulnerability has advanced over the past several decades, with studies using a combination of indicators, case studies and analogues, stakeholder-driven processes, and scenario-building methodologies. As regions become increasingly relevant scales of inquiry for bridging the aggregate and local, for every analysis, it is perhaps most appropriate to ask three “what” questions: “What/who is vulnerable?,” “What is vulnerability?,” and “Vulnerable to what?” The answers to these questions will yield different definitions of vulnerability as well as different methods for assessing it.

  13. Implementing a tiered approach to bioanalytical method validation for large-molecule ligand-binding assay methods in pharmacokinetic assessments.

    PubMed

    Watson, Rebecca G; Clements-Egan, Adrienne; Schantz, Allen; Ware, Mark; Wu, Bonnie; Yang, Tong-Yuan; Shankar, Gopi; Marini, Joseph C

    2017-09-01

    Bioanalytical methods must enable the delivery of data that meet sound, scientifically justified, fit-for-purpose criteria. At early phases of biotherapeutic drug development, suitable criteria of a ligand-binding assay could be met for pharmacokinetic (PK) in-study sample testing without a full validation defined by regulatory guidelines. To ensure fit-for-purpose methods support PK testing through all phases of biotherapeutic development, three tiers of method validation - regulatory, scientific and research validations - are proposed. The three-tiered framework for method validation outlines the differences in the parameters that should be assessed, the acceptance criteria that may be applied, and the documentation necessary at each level. The criteria for selecting the appropriate application of each of these PK method validation workflows are discussed.

  14. Softcopy quality ruler method: implementation and validation

    NASA Astrophysics Data System (ADS)

    Jin, Elaine W.; Keelan, Brian W.; Chen, Junqing; Phillips, Jonathan B.; Chen, Ying

    2009-01-01

    A softcopy quality ruler method was implemented for the International Imaging Industry Association (I3A) Camera Phone Image Quality (CPIQ) Initiative. This work extends ISO 20462 Part 3 by virtue of creating reference digital images of known subjective image quality, complementing the hardcopy Standard Reference Stimuli (SRS). The softcopy ruler method was developed using images from a Canon EOS 1Ds Mark II D-SLR digital still camera (DSC) and a Kodak P880 point-and-shoot DSC. Images were viewed on an Apple 30in Cinema Display at a viewing distance of 34 inches. Ruler images were made for 16 scenes. Thirty ruler images were generated for each scene, representing ISO 20462 Standard Quality Scale (SQS) values of approximately 2 to 31 at an increment of one just noticeable difference (JND) by adjusting the system modulation transfer function (MTF). A Matlab GUI was developed to display the ruler and test images side-by-side with a user-adjustable ruler level controlled by a slider. A validation study was performed at Kodak, Vista Point Technology, and Aptina Imaging in which all three companies set up a similar viewing lab to run the softcopy ruler method. The results show that the three sets of data are in reasonable agreement with each other, with the differences within the range expected from observer variability. Compared to previous implementations of the quality ruler, the slider-based user interface allows approximately 2x faster assessments with 21.6% better precision.

  15. Validation of the total organic carbon (TOC) swab sampling and test method.

    PubMed

    Glover, Chris

    2006-01-01

    For cleaning validation purposes, the combination of swab sampling and the total organic carbon (TOC) test method provides a useful mechanism to monitor the cleanliness of equipment surfaces. The TOC test method is an ideal choice for monitoring carbon-containing residuals. The sample and test method validation ("TOC Swabbing Method Validation", BV-000-BC-078-01, Bayer Healthcare) demonstrated quantifiable recovery of albumin down to 25 µg. The validation characteristics included accuracy, repeatability and intermediate precision, specificity, linearity, assay range, detection and quantitation limit, and robustness.

  16. Validation of NUBF correction method for DPR

    NASA Astrophysics Data System (ADS)

    Hayashi, S.; Seto, S.; Shimozuma, T.

    2015-12-01

    The Dual-frequency Precipitation Radar (DPR) is currently operational, and we aim to realize higher observation accuracy than the TRMM/PR. Many research issues remain in the DPR algorithm; among them, non-uniform beam filling (NUBF) appears to be an important one for improving observation accuracy. According to Iguchi et al. (2009), in the NUBF correction introduced into the TRMM/PR algorithm, the path-integrated attenuation (PIA) is assumed to follow a gamma distribution, from which the correction parameters are calculated. In this study, the validity of the NUBF correction using the gamma distribution was verified with ground-based radar data. In addition, an NUBF correction method using the KaPR footprints that have been added in the DPR algorithm was also verified. The verification used ground-based radar data of higher resolution than the TRMM/PR and DPR, in order to obtain detailed precipitation echo data within the observation footprint. Using these data, we calculated the coefficient of variation sn of the PIA within the observation footprint; the sn value calculated from the observation footprint was taken as the true value in this study. Two estimation methods for sn were compared: one calculates sn from the 3 × 3 pixels containing the observation footprint, as in the TRMM/PR algorithm; the other uses the KaPR footprints closer to the observation footprint (2 × 2 pixels). The verification showed that the TRMM/PR-algorithm NUBF correction (3 × 3 pixels) overestimates sn compared with the true value, whereas the sn estimate using the KaPR footprints (2 × 2 pixels) shows a smaller difference from the true value and performs better than the TRMM/PR correction method. Based on these results, we propose the use of the KaPR footprints (2 × 2 pixels) in the NUBF correction technique for the DPR.
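
    The statistic at the heart of this comparison is simple: sn is the coefficient of variation of the PIA within a footprint. A minimal sketch, assuming PIA values are available as arrays (the shapes and variable names are illustrative):

      import numpy as np

      def coeff_of_variation(pia):
          # sn: standard deviation of PIA normalized by its mean within a footprint.
          pia = np.asarray(pia, dtype=float)
          return pia.std(ddof=1) / pia.mean()

      # "True" sn from high-resolution ground-radar pixels inside one footprint:
      #   sn_true = coeff_of_variation(high_res_pia_in_footprint)
      # TRMM/PR-style estimate from the 3 x 3 coarse pixels around the beam:
      #   sn_3x3 = coeff_of_variation(coarse_pia_3x3.ravel())
      # KaPR-footprint estimate from the 2 x 2 nearest KaPR values:
      #   sn_2x2 = coeff_of_variation(kapr_pia_2x2.ravel())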

  17. Some useful statistical methods for model validation.

    PubMed Central

    Marcus, A H; Elias, R W

    1998-01-01

    Although formal hypothesis tests provide a convenient framework for displaying the statistical results of empirical comparisons, standard tests should not be used without consideration of underlying measurement error structure. As part of the validation process, predictions of individual blood lead concentrations from models with site-specific input parameters are often compared with blood lead concentrations measured in field studies that also report lead concentrations in environmental media (soil, dust, water, paint) as surrogates for exposure. Measurements of these environmental media are subject to several sources of variability, including temporal and spatial sampling, sample preparation and chemical analysis, and data entry or recording. Adjustments for measurement error must be made before statistical tests can be used to empirically compare environmental data with model predictions. This report illustrates the effect of measurement error correction using a real dataset of child blood lead concentrations for an undisclosed midwestern community. We illustrate both the apparent failure of some standard regression tests and the success of adjustment of such tests for measurement error using the SIMEX (simulation-extrapolation) procedure. This procedure adds simulated measurement error to model predictions and then subtracts the total measurement error, analogous to the method of standard additions used by analytical chemists. PMID:9860913
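
    The SIMEX idea can be illustrated compactly for the simplest case of one predictor measured with additive noise of known variance; the bootstrap size, the grid of λ values and the quadratic extrapolation back to λ = -1 below are conventional choices, not details taken from the report.

      import numpy as np

      def simex_slope(x_obs, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_boot=200, seed=0):
          # SIMEX estimate of a simple regression slope when x carries additive
          # measurement error of known variance sigma_u**2 (illustrative sketch).
          rng = np.random.default_rng(seed)
          x_obs = np.asarray(x_obs, dtype=float)
          y = np.asarray(y, dtype=float)
          lams = np.array((0.0,) + tuple(lambdas))
          slopes = []
          for lam in lams:
              if lam == 0.0:
                  slopes.append(np.polyfit(x_obs, y, 1)[0])  # naive fit on observed data
                  continue
              boot = [np.polyfit(x_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u, x_obs.size),
                                 y, 1)[0] for _ in range(n_boot)]
              slopes.append(np.mean(boot))  # mean slope at error level (1 + lam) * sigma_u**2
          # Fit slope(lambda) with a quadratic and extrapolate to lambda = -1,
          # the hypothetical case of zero measurement error.
          return np.polyval(np.polyfit(lams, slopes, 2), -1.0)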

  18. FDIR Strategy Validation with the B Method

    NASA Astrophysics Data System (ADS)

    Sabatier, D.; Dellandrea, B.; Chemouil, D.

    2008-08-01

    In a formation-flying satellite system, the FDIR strategy (Failure Detection, Isolation and Recovery) is paramount. When a failure occurs, satellites should be able to take appropriate reconfiguration actions to obtain the best possible outcome given the failure, ranging from avoiding satellite-to-satellite collision to continuing the mission without disturbance if possible. To achieve this goal, each satellite in the formation has an implemented FDIR strategy that governs how it detects failures (from tests or by deduction) and how it reacts (reconfiguration using redundant equipment, avoidance manoeuvres, etc.). The goal is to protect the satellites first and the mission as much as possible. In a project initiated by CNES, ClearSy is experimenting with the B Method to validate the FDIR strategies, developed by Thales Alenia Space, of the inter-satellite positioning and communication devices that will be used for the SIMBOL-X (two-satellite configuration) and PEGASE (three-satellite configuration) missions, and potentially for other missions afterwards. These radio-frequency metrology sensor devices provide satellite positioning and inter-satellite communication in formation flying. This article presents the results of this experiment.

  19. Validation of a Cost-Efficient Multi-Purpose SNP Panel for Disease Based Research

    PubMed Central

    Hou, Liping; Phillips, Christopher; Azaro, Marco; Brzustowicz, Linda M.; Bartlett, Christopher W.

    2011-01-01

    Background Here we present convergent methodologies using theoretical calculations, empirical assessment on in-house and publicly available datasets, and in silico simulations that validate a panel of SNPs for a variety of necessary tasks in human genetics disease research before resources are committed to larger-scale genotyping studies on those samples. While large-scale, well-funded human genetic studies routinely have up to a million SNP genotypes, samples in a human genetics laboratory that are not yet part of such studies may be productively utilized in pilot projects or as part of targeted follow-up work. Such smaller-scale applications nevertheless require at least some genome-wide genotype data for quality-control purposes, such as DNA "barcoding" to detect swaps or contamination, determining familial relationships between samples, and correcting biases due to population effects such as population stratification in pilot studies. Principal Findings Empirical performance in classification of relative types for any two given DNA samples (e.g., full siblings, parental, etc.) indicated that for outbred populations the panel performs well enough to classify relationships in extended families, and therefore also for smaller structures such as trios and for twin zygosity testing. Additionally, familial relationships do not significantly diminish the (mean match) probability of sharing SNP genotypes in pedigrees, further indicating the uniqueness of the "barcode." Simulation using these SNPs for an African American case-control disease association study demonstrated that population stratification, even in complex admixed samples, can be adequately corrected under a range of disease models using the SNP panel. Conclusion The panel has been validated for use in a variety of human disease genetics research tasks including sample barcoding, relationship verification, population substructure detection and statistical correction. Given the ease of genotyping

  20. Methods for developing and validating survivability distributions

    SciTech Connect

    Williams, R.L.

    1993-10-01

    A previous report explored and discussed statistical methods and procedures that may be applied to validate the survivability of a complex system of systems that cannot be tested as an entity. It described a methodology where Monte Carlo simulation was used to develop the system survivability distribution from the component distributions using a system model that registers the logical interactions of the components to perform system functions. This paper discusses methods that can be used to develop the required survivability distributions based upon three sources of knowledge. These are (1) available test results; (2) little or no available test data, but a good understanding of the physical laws and phenomena which can be applied by computer simulation; and (3) neither test data nor adequate knowledge of the physics are known, in which case, one must rely upon, and quantify, the judgement of experts. This paper describes the relationship between the confidence bounds that can be placed on survivability and the number of tests conducted. It discusses the procedure for developing system level survivability distributions from the distributions for lower levels of integration. It demonstrates application of these techniques by defining a communications network for a Hypothetical System Architecture. A logic model for the performance of this communications network is developed, as well as the survivability distributions for the nodes and links based on two alternate data sets, reflecting the effects of increased testing of all elements. It then shows how this additional testing could be optimized by concentrating only on those elements contained in the low-order fault sets which the methodology identifies.
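
    The Monte Carlo construction described above can be shown on a toy system; here the logic model, the component structure (a node plus two redundant links) and the beta distributions standing in for test-based component survivabilities are all invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)

      def draw_system_survivability(n_inner=2000):
          # Draw component survivabilities (e.g., beta posteriors from test results)...
          p_a, p_1, p_2 = rng.beta(9, 1), rng.beta(7, 3), rng.beta(7, 3)
          # ...then evaluate the logic model: node A must survive AND either link.
          a = rng.random(n_inner) < p_a
          l1 = rng.random(n_inner) < p_1
          l2 = rng.random(n_inner) < p_2
          return np.mean(a & (l1 | l2))

      dist = np.array([draw_system_survivability() for _ in range(500)])
      print(np.percentile(dist, [5, 50, 95]))  # bounds on system survivability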

  1. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: the reference model and the archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This makes it possible to combine the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, the two largest publicly available repositories, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around one fifth of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for serious efforts to improve archetype design processes.
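
    Stripped to its core, the validation step is a description-logic consistency check over the OWL representation of an archetype. The sketch below shows the general pattern using the owlready2 library and its bundled HermiT reasoner; the ontology file name is hypothetical, and neither Archeck nor the authors' archetype-to-OWL translation is reproduced here.

      from owlready2 import get_ontology, sync_reasoner, default_world

      # Load an OWL representation of an archetype (file name is illustrative).
      onto = get_ontology("file://archetype_blood_pressure.owl").load()

      with onto:
          sync_reasoner()  # run the bundled HermiT reasoner

      # Unsatisfiable classes point at modeling errors such as restrictions that
      # violate the reference model or non-conformant specializations.
      for cls in default_world.inconsistent_classes():
          print("Modeling error:", cls)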

  2. Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI

    ERIC Educational Resources Information Center

    Forer, Barry; Zumbo, Bruno D.

    2011-01-01

    The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…

  4. Validation of Alternative In Vitro Methods to Animal Testing: Concepts, Challenges, Processes and Tools.

    PubMed

    Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie

    This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, and the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems and software, up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools, and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods and non-testing approaches such as predictive computer models, up to entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. the EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised at the international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities. Validation of alternative methods is conducted through scientific studies assessing two key hypotheses, reliability and relevance of the

  5. Validation of a Theoretical Model of Diagnostic Classroom Assessment: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Koh, Nancy

    2012-01-01

    The purpose of the study was to validate a theoretical model of diagnostic, formative classroom assessment called, "Proximal Assessment for Learner Diagnosis" (PALD). To achieve its purpose, the study employed a two-stage, mixed-methods design. The study utilized multiple data sources from 11 elementary level mathematics teachers who…

  6. Is it really necessary to validate an analytical method or not? That is the question.

    PubMed

    Rambla-Alegre, Maria; Esteve-Romero, Josep; Carda-Broch, Samuel

    2012-04-06

    Method validation is an important requirement in the practice of chemical analysis. However, awareness of its importance, why it should be done and when, and exactly what needs to be done, seems to be poor amongst analytical chemists. Much advice related to method validation already exists in the literature, especially related to particular methods, but more often than not is underused. Some analysts see method validation as something that can only be done by collaborating with other laboratories and therefore do not go about it. In addition, analysts' understanding of method validation is inhibited by the fact that many of the technical terms used in the processes for evaluating methods vary in different sectors of analytical measurement, both in terms of their meaning and the way they are determined. Validation applies to a defined protocol, for the determination of a specified analyte and range of concentrations in a particular type of test material, used for a specified purpose. In general, validation should check that the method performs adequately for the purpose throughout the range of analyte concentrations and test materials to which it is applied. It follows that these features, together with a statement of any fitness-for-purpose criteria, should be completely specified before any validation takes place.

  7. Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.

    PubMed

    Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier

    2017-02-01

    The data to which the authors refer throughout this article are likelihood ratios (LR) computed from the comparison of fingermarks containing 5-12 minutiae with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. They constitute a necessary asset for conducting validation experiments and setting up validation reports when validating LR methods used in forensic evidence evaluation. These data can also be used as a baseline for comparing fingermark evidence in the same minutiae configurations as presented in (D. Meuwly, D. Ramos, R. Haraksim) [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports of forensic methods, according to [1]. Alongside the data, a justification and motivation for the methods used is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validating the LR methods, which are used to calculate the LR values from the data and to produce the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared; however, these images do not constitute the core data for the validation, unlike the LRs, which are shared.
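
    A metric commonly reported when validating LR methods of this kind is the log-likelihood-ratio cost (Cllr); the snippet below shows its standard computation from two arrays of LRs, though the specific validation criteria used in the report are not reproduced here and the example values are made up.

      import numpy as np

      def cllr(lr_same_source, lr_diff_source):
          # Log-likelihood-ratio cost: lower is better; 1.0 marks an uninformative system.
          ss = np.asarray(lr_same_source, dtype=float)
          ds = np.asarray(lr_diff_source, dtype=float)
          penalty_ss = np.mean(np.log2(1.0 + 1.0 / ss))  # penalizes low LRs for true matches
          penalty_ds = np.mean(np.log2(1.0 + ds))        # penalizes high LRs for non-matches
          return 0.5 * (penalty_ss + penalty_ds)

      # e.g. cllr([20.0, 150.0, 8.0], [0.05, 0.8, 0.01])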

  8. [Validation and regulatory acceptance of alternative methods for toxicity evaluation].

    PubMed

    Ohno, Yasuo

    2004-01-01

    For regulatory acceptance of alternative methods (AMs) to animal toxicity tests, their reproducibility and relevance should be determined by intra- and inter-laboratory validation. Appropriate procedures for the validation and regulatory acceptance of AMs were recommended by the OECD in 1996. According to those principles, several in vitro methods, such as skin corrosivity tests and phototoxicity tests, have been evaluated and accepted by ECVAM (European Center for the Validation of Alternative Methods), ICCVAM (the Interagency Coordinating Committee on the Validation of Alternative Methods) and the OECD. Because of the difficulties in conducting inter-laboratory validation and the relatively short period remaining until the EU's ban on animal experiments for the safety evaluation of cosmetics, ECVAM and ICCVAM have recently started to cooperate in the validation and evaluation of AMs. It is also necessary to establish JaCVAM (Japanese Center for the Validation of Alternative Methods) to contribute to this work and to the evaluation of new toxicity tests originating in Japan.

  9. A clinical method for identifying scapular dyskinesis, part 2: validity.

    PubMed

    Tate, Angela R; McClure, Philip; Kareha, Stephen; Irwin, Dominic; Barbe, Mary F

    2009-01-01

    Context: Although clinical methods for detecting scapular dyskinesis have been described, evidence supporting the validity of these methods is lacking. Objective: To determine the validity of the scapular dyskinesis test, a visually based method of identifying abnormal scapular motion; a secondary purpose was to explore the relationship between scapular dyskinesis and shoulder symptoms. Design: Validation study comparing 3-dimensional measures of scapular motion among participants clinically judged as having either normal motion or scapular dyskinesis. Setting: University athletic training facilities. Participants: A sample of 142 collegiate athletes (National Collegiate Athletic Association Division I and Division III) participating in sports requiring overhead use of the arm was rated, and 66 of these underwent 3-dimensional testing. Procedures: Volunteers were viewed by 2 raters while performing weighted shoulder flexion and abduction. The right and left sides were rated independently as normal, subtle dyskinesis, or obvious dyskinesis using the scapular dyskinesis test. Symptoms were assessed using the Penn Shoulder Score. Athletes judged as having either normal motion or obvious dyskinesis underwent 3-dimensional electromagnetic kinematic testing while performing the same movements. The kinematic data from both groups were compared via multifactor analysis of variance with post hoc testing using the least significant difference procedure. The relationship between symptoms and scapular dyskinesis was evaluated by odds ratios. Results: Differences were found between the normal and obvious dyskinesis groups. Participants with obvious dyskinesis showed less scapular upward rotation (P < .001), less clavicular elevation (P < .001), and greater clavicular protraction (P = .044). The presence of shoulder symptoms did not differ between the normal and obvious dyskinesis volunteers (odds ratio = 0.79, 95% confidence interval = 0.33, 1.89). Shoulders visually judged as having dyskinesis showed distinct alterations in 3-dimensional

  10. The Value of Qualitative Methods in Social Validity Research

    ERIC Educational Resources Information Center

    Leko, Melinda M.

    2014-01-01

    One quality indicator of intervention research is the extent to which the intervention has a high degree of social validity, or practicality. In this study, I drew on Wolf's framework for social validity and used qualitative methods to ascertain five middle schoolteachers' perceptions of the social validity of System 44®--a phonics-based reading…

  12. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

    The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models to accurately predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.

  13. Estimates of External Validity Bias When Impact Evaluations Select Sites Purposively

    ERIC Educational Resources Information Center

    Stuart, Elizabeth A.; Olsen, Robert B.; Bell, Stephen H.; Orr, Larry L.

    2012-01-01

    While there has been some increasing interest in external validity, most work to this point has been in assessing the similarity of a randomized trial sample and a population of interest (e.g., Stuart et al., 2010; Tipton, 2011). The goal of this research is to calculate empirical estimates of the external validity bias in educational intervention…

  14. Development and Validation of a Reading-Related Assessment Battery in Malay for the Purpose of Dyslexia Assessment

    ERIC Educational Resources Information Center

    Lee, Lay Wah

    2008-01-01

    Malay is an alphabetic language with transparent orthography. A Malay reading-related assessment battery which was conceptualised based on the International Dyslexia Association definition of dyslexia was developed and validated for the purpose of dyslexia assessment. The battery consisted of ten tests: Letter Naming, Word Reading, Non-word…

  15. Purpose in Life in Emerging Adulthood: Development and Validation of a New Brief Measure.

    PubMed

    Hill, Patrick L; Edmonds, Grant W; Peterson, Missy; Luyckx, Koen; Andrews, Judy A

    2016-05-01

    Accruing evidence points to the value of studying purpose in life across adolescence and emerging adulthood. Research is needed, though, to understand the unique role of purpose in life in predicting well-being and developmentally relevant outcomes during emerging adulthood. The current studies (total n = 669) found support for the development of a new brief measure of purpose in life using data from American and Canadian samples, while demonstrating evidence for two important findings. First, purpose in life predicted well-being during emerging adulthood, even when controlling for the Big Five personality traits. Second, purpose in life was positively associated with self-image and negatively associated with delinquency, again controlling for personality traits. Findings are discussed with respect to how studying purpose in life can help understand which individuals are more likely to experience positive transitions into adulthood.

  16. Construct validity comparisons of three methods for measuring patient compliance.

    PubMed Central

    Cummings, K M; Kirscht, J P; Becker, M H; Levin, N W

    1984-01-01

    A multitrait-multimethod design was employed to assess the construct validity of three commonly used methods for assessing patient compliance: physiological assessments (e.g., blood chemistries), ratings by health professionals, and patient self-reports. Subjects were patients receiving ambulatory hemodialysis treatments for end-stage renal disease, whose regimen required them to take medications, to follow dietary restrictions, and to limit fluid intake. Study findings indicated that of the three methods examined, the nurse rating approach was the most valid (although it contained only about 50 percent valid variance). Measures derived from physiological assessments contained a substantial proportion of residual error (over 70 percent), and the patient self-report method contained only about 12 percent valid variance (with about 18 percent method-effects variance, and 68 percent residual-error variance). These results make clear the need for additional research directed at developing valid methods for evaluating patient compliance behaviors. PMID:6724950

  17. Hidden Populations, Online Purposive Sampling, and External Validity: Taking off the Blindfold

    ERIC Educational Resources Information Center

    Barratt, Monica J.; Ferris, Jason A.; Lenton, Simon

    2015-01-01

    Online purposive samples have unknown biases and may not strictly be used to make inferences about wider populations, yet such inferences continue to occur. We compared the demographic and drug use characteristics of Australian ecstasy users from a probability (National Drug Strategy Household Survey, n = 726) and purposive sample (online survey…

  18. Exploring valid and reliable assessment methods for care management education.

    PubMed

    Gennissen, Lokke; Stammen, Lorette; Bueno-de-Mesquita, Jolien; Wieringa, Sietse; Busari, Jamiu

    2016-07-04

    Purpose It is assumed that the use of valid and reliable assessment methods can facilitate the development of medical residents' management and leadership competencies. To justify this assertion, the perceptions of an expert panel of health care leaders were explored on assessment methods used for evaluating care management (CM) development in Dutch residency programs. This paper aims to investigate how assessors and trainees value these methods and examine for any inherent benefits or shortcomings when they are applied in practice. Design/methodology/approach A Delphi survey was conducted among members of the platform for medical leadership in The Netherlands. This panel of experts was made up of clinical educators, practitioners and residents interested in CM education. Findings Of the respondents, 40 (55.6 per cent) and 31 (43 per cent) participated in the first and second rounds of the Delphi survey, respectively. The respondents agreed that assessment methods currently being used to measure residents' CM competencies were weak, though feasible for use in many residency programs. Multi-source feedback (MSF, 92.1 per cent), portfolio/e-portfolio (86.8 per cent) and knowledge testing (76.3 per cent) were identified as the most commonly known assessment methods with familiarity rates exceeding 75 per cent. Practical implications The findings suggested that an "assessment framework" comprising MSF, portfolios, individual process improvement projects or self-reflections and observations in clinical practice should be used to measure CM competencies in residents. Originality/value This study reaffirms the need for objective methods to assess CM skills in post-graduate medical education, as there was not a single assessment method that stood out as the best instrument.

  19. Bioanalytical method validation: An updated review

    PubMed Central

    Tiwari, Gaurav; Tiwari, Ruchi

    2010-01-01

    The development of sound bioanalytical method(s) is of paramount importance during the process of drug discovery and development, culminating in a marketing approval. The objective of this paper is to review the sample preparation of drug in biological matrix and to provide practical approaches for determining selectivity, specificity, limit of detection, lower limit of quantitation, linearity, range, accuracy, precision, recovery, stability, ruggedness, and robustness of liquid chromatographic methods to support pharmacokinetic (PK), toxicokinetic, bioavailability, and bioequivalence studies. Bioanalysis, employed for the quantitative determination of drugs and their metabolites in biological fluids, plays a significant role in the evaluation and interpretation of bioequivalence, PK, and toxicokinetic studies. Selective and sensitive analytical methods for quantitative evaluation of drugs and their metabolites are critical for the successful conduct of pre-clinical and/or biopharmaceutics and clinical pharmacology studies. PMID:23781413

  20. Bioanalytical method validation: An updated review.

    PubMed

    Tiwari, Gaurav; Tiwari, Ruchi

    2010-10-01

    The development of sound bioanalytical method(s) is of paramount importance during the process of drug discovery and development, culminating in a marketing approval. The objective of this paper is to review the sample preparation of drug in biological matrix and to provide practical approaches for determining selectivity, specificity, limit of detection, lower limit of quantitation, linearity, range, accuracy, precision, recovery, stability, ruggedness, and robustness of liquid chromatographic methods to support pharmacokinetic (PK), toxicokinetic, bioavailability, and bioequivalence studies. Bioanalysis, employed for the quantitative determination of drugs and their metabolites in biological fluids, plays a significant role in the evaluation and interpretation of bioequivalence, PK, and toxicokinetic studies. Selective and sensitive analytical methods for quantitative evaluation of drugs and their metabolites are critical for the successful conduct of pre-clinical and/or biopharmaceutics and clinical pharmacology studies.
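
    Two of the characteristics listed above, accuracy and precision, are routinely summarized from replicate quality-control samples as percent bias and percent coefficient of variation. A minimal sketch (not from the review itself; the nominal concentration and replicate values are made up):

      import numpy as np

      def accuracy_precision(measured, nominal):
          # %bias (accuracy) and %CV (precision) at one QC concentration level.
          x = np.asarray(measured, dtype=float)
          bias_pct = 100.0 * (x.mean() - nominal) / nominal
          cv_pct = 100.0 * x.std(ddof=1) / x.mean()
          return bias_pct, cv_pct

      # e.g. accuracy_precision([48.7, 51.2, 49.9, 50.6, 47.8], nominal=50.0)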

  1. External validation of a Cox prognostic model: principles and methods

    PubMed Central

    2013-01-01

    Background A prognostic model should not enter clinical practice unless it has been demonstrated that it performs a useful role. External validation denotes evaluation of model performance in a sample independent of that used to develop the model. Unlike for logistic regression models, external validation of Cox models is sparsely treated in the literature. Successful validation of a model means achieving satisfactory discrimination and calibration (prediction accuracy) in the validation sample. Validating Cox models is not straightforward because event probabilities are estimated relative to an unspecified baseline function. Methods We describe statistical approaches to external validation of a published Cox model according to the level of published information, specifically (1) the prognostic index only, (2) the prognostic index together with Kaplan-Meier curves for risk groups, and (3) the first two plus the baseline survival curve (the estimated survival function at the mean prognostic index across the sample). The most challenging task, requiring level 3 information, is assessing calibration, for which we suggest a method of approximating the baseline survival function. Results We apply the methods to two comparable datasets in primary breast cancer, treating one as derivation and the other as validation sample. Results are presented for discrimination and calibration. We demonstrate plots of survival probabilities that can assist model evaluation. Conclusions Our validation methods are applicable to a wide range of prognostic studies and provide researchers with a toolkit for external validation of a published Cox model. PMID:23496923

  2. AOAC validation of qualitative and quantitative methods for microbiology in foods. Association of Official Agricultural Chemists.

    PubMed

    De Smedt, J M

    1998-11-24

    The purpose of AOAC International is to promote quality measurements and methods validation in the analytical sciences. The actual work of developing and testing methods is done by a network of AOAC members and volunteers. Validation of the methods is established by the AOAC Official Methods Program. The objective of this program is to provide analytical methods whose performance characteristics have been validated to the highest degree of confidence through an independent, multiple-laboratory collaborative study. The performance characteristics for quantitative microbiological methods include repeatability, reproducibility and critical relative difference, while the characteristics for qualitative methods are sensitivity and specificity. The Official Methods Program is illustrated by a practical example of a collaborative study through which Salmonella detection by motility enrichment on Modified Semi-solid Rappaport-Vassiliadis (MSRV) medium was adopted as an Official Method.
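
    For qualitative methods, the performance characteristics named above reduce to counts of correct and incorrect detections collected across the collaborating laboratories. A minimal calculation, with made-up counts:

      def qualitative_performance(tp, fn, tn, fp):
          # Sensitivity and specificity for a detect/non-detect method.
          sensitivity = tp / (tp + fn)  # fraction of truly positive samples detected
          specificity = tn / (tn + fp)  # fraction of truly negative samples cleared
          return sensitivity, specificity

      # e.g. 118 of 120 inoculated samples detected, 2 false positives among 60 blanks:
      # qualitative_performance(tp=118, fn=2, tn=58, fp=2)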

  3. Modified cross-validation as a method for estimating parameter

    NASA Astrophysics Data System (ADS)

    Shi, Chye Rou; Adnan, Robiah

    2014-12-01

    Best subsets regression is an effective approach for identifying models that achieve the objectives with as few predictors as is prudent. Subset models may estimate the regression coefficients and predict future responses with smaller variance than the full model using all predictors. The question of how to pick the subset size λ involves a trade-off between bias and variance. There are various methods for picking the subset size λ; a common rule is to pick the smallest model that minimizes an estimate of the expected prediction error. Since datasets are often small, repeated K-fold cross-validation is the most widely used method to estimate prediction error and select the model; the data are reshuffled and re-stratified before each round. However, the "one-standard-error" rule of repeated K-fold cross-validation always picks the most parsimonious model. The objective of this research is to modify the existing cross-validation method so as to avoid both overfitting and underfitting, and a modified cross-validation method is proposed. This paper compares existing cross-validation with the modified cross-validation. Our results indicate that the modified cross-validation method is better at submodel selection and evaluation than other methods.
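
    The baseline being modified here is repeated K-fold cross-validation combined with the one-standard-error rule. A sketch of that baseline, assuming scikit-learn and assuming the columns of X are pre-ordered so that the first k columns stand in for the best subset of size k (a simplification of a real best-subsets search):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import RepeatedKFold, cross_val_score

      def pick_subset_size_1se(X, y, max_size):
          # Mean CV error and its standard error for each subset size, then the
          # one-standard-error rule: smallest size within 1 SE of the best error.
          cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
          means, ses = [], []
          for k in range(1, max_size + 1):
              scores = cross_val_score(LinearRegression(), X[:, :k], y,
                                       scoring="neg_mean_squared_error", cv=cv)
              err = -scores
              means.append(err.mean())
              ses.append(err.std(ddof=1) / np.sqrt(err.size))
          means, ses = np.array(means), np.array(ses)
          best = means.argmin()
          return int(np.argmax(means <= means[best] + ses[best])) + 1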

  4. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current
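
    For context, the β-content comparator mentioned above can be written down directly for normally distributed results; the sketch below uses Howe's classical approximation to the two-sided tolerance factor and hypothetical ±15% acceptance limits. It is a generic illustration, not the authors' generalized-pivotal-quantity procedure.

      import numpy as np
      from scipy import stats

      def beta_content_interval(x, beta=0.90, gamma=0.90):
          # Two-sided (beta-content, gamma-confidence) normal tolerance interval,
          # with Howe's approximation for the k factor.
          x = np.asarray(x, dtype=float)
          n = x.size
          z = stats.norm.ppf((1 + beta) / 2)
          chi2_low = stats.chi2.ppf(1 - gamma, df=n - 1)  # lower chi-square quantile
          k = z * np.sqrt((n - 1) * (1 + 1 / n) / chi2_low)
          return x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)

      def method_is_valid(relative_errors_pct, limit_pct=15.0):
          lo, hi = beta_content_interval(relative_errors_pct)
          return lo >= -limit_pct and hi <= limit_pct  # acceptance limits are illustrative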

  5. Comparative assessment of bioanalytical method validation guidelines for pharmaceutical industry.

    PubMed

    Kadian, Naveen; Raju, Kanumuri Siva Rama; Rashid, Mamunur; Malik, Mohd Yaseen; Taneja, Isha; Wahajuddin, Muhammad

    2016-07-15

    The concepts, importance and application of bioanalytical method validation have been discussed for a long time, and validation of bioanalytical methods is widely accepted as pivotal before they are taken into routine use. The United States Food and Drug Administration (USFDA) guidelines issued in 2001 have been the reference for every guideline released since, be it the European Medicines Agency (EMA) in Europe, the National Health Surveillance Agency (ANVISA) in Brazil, the Ministry of Health, Labour and Welfare (MHLW) in Japan, or any other guideline concerning bioanalytical method validation. After 12 years, the USFDA released its new draft guideline for comments in 2013, which covers the latest parameters and topics encountered in bioanalytical method validation and moves towards the harmonization of bioanalytical method validation across the globe. Even though the regulatory agencies are in general agreement, significant variations exist in acceptance criteria and methodology. The present review highlights the variations, similarities and comparisons between the bioanalytical method validation guidelines issued by major regulatory authorities worldwide. Additionally, other evaluation parameters such as matrix effect and incurred sample reanalysis, including other stability aspects, are discussed to provide ease of access for designing a bioanalytical method and its validation complying with the majority of drug authority guidelines.

  6. External validation of a Cox prognostic model: principles and methods.

    PubMed

    Royston, Patrick; Altman, Douglas G

    2013-03-06

    A prognostic model should not enter clinical practice unless it has been demonstrated that it performs a useful role. External validation denotes evaluation of model performance in a sample independent of that used to develop the model. Unlike for logistic regression models, external validation of Cox models is sparsely treated in the literature. Successful validation of a model means achieving satisfactory discrimination and calibration (prediction accuracy) in the validation sample. Validating Cox models is not straightforward because event probabilities are estimated relative to an unspecified baseline function. We describe statistical approaches to external validation of a published Cox model according to the level of published information, specifically (1) the prognostic index only, (2) the prognostic index together with Kaplan-Meier curves for risk groups, and (3) the first two plus the baseline survival curve (the estimated survival function at the mean prognostic index across the sample). The most challenging task, requiring level 3 information, is assessing calibration, for which we suggest a method of approximating the baseline survival function. We apply the methods to two comparable datasets in primary breast cancer, treating one as derivation and the other as validation sample. Results are presented for discrimination and calibration. We demonstrate plots of survival probabilities that can assist model evaluation. Our validation methods are applicable to a wide range of prognostic studies and provide researchers with a toolkit for external validation of a published Cox model.
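
    Level 1 validation (discrimination from the prognostic index alone) is straightforward to sketch; the snippet below assumes the lifelines library, and the column names and published coefficients are placeholders rather than values from the paper.

      from lifelines.utils import concordance_index

      # Published Cox coefficients (placeholders) applied to the validation sample.
      coefs = {"age": 0.03, "tumor_size": 0.15, "nodes": 0.08}

      def prognostic_index(df, coefs):
          # PI = sum of beta_j * x_j from the published model; df is a pandas DataFrame.
          return sum(beta * df[col] for col, beta in coefs.items())

      def validate_discrimination(validation_df):
          pi = prognostic_index(validation_df, coefs)
          # Harrell's C: pass -PI because higher PI means higher risk (shorter survival).
          return concordance_index(validation_df["time"], -pi, validation_df["event"])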

  7. Method validation for chemical composition determination by electron microprobe with wavelength dispersive spectrometer

    NASA Astrophysics Data System (ADS)

    Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.

    2016-07-01

    The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One of the advantages of analytical method validation is the level of confidence it provides in the measurement results reported to satisfy a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis has been used in extremely wide-ranging areas, mainly in the field of materials science and in impurity determinations in geological, biological and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house method validation for elemental composition determination by WDS are shown. SRM 482, a binary Cu-Au alloy of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently requested to obtain accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed, including systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.
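
    The uncertainty model mentioned at the end typically combines the random and systematic standard uncertainties in quadrature and expands the result with a coverage factor, in the style of the GUM; the component values in the usage comment are invented.

      import numpy as np

      def combined_uncertainty(u_random, u_systematic, k=2.0):
          # Combine independent standard uncertainties in quadrature and expand
          # with coverage factor k (k = 2 for roughly 95% coverage).
          u_c = np.sqrt(u_random**2 + u_systematic**2)
          return u_c, k * u_c

      # e.g. 0.15 wt% repeatability and a 0.10 wt% SRM/calibration contribution:
      # u_c, U = combined_uncertainty(0.15, 0.10)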

  8. Valid methods: the quality assurance of test method development, validation, approval, and transfer for veterinary testing laboratories.

    PubMed

    Wiegers, Ann L

    2003-07-01

    Third-party accreditation is a valuable tool to demonstrate a laboratory's competence to conduct testing. Accreditation, internationally and in the United States, has been discussed previously. However, accreditation is only one part of establishing data credibility. A validated test method is the first component of a valid measurement system. Validation is defined as confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled. The international and national standard ISO/IEC 17025 recognizes the importance of validated methods and requires that laboratory-developed methods or methods adopted by the laboratory be appropriate for the intended use. Validated methods are therefore required and their use agreed to by the client (i.e., end users of the test results such as veterinarians, animal health programs, and owners). ISO/IEC 17025 also requires that the introduction of methods developed by the laboratory for its own use be a planned activity conducted by qualified personnel with adequate resources. This article discusses considerations and recommendations for the conduct of veterinary diagnostic test method development, validation, evaluation, approval, and transfer to the user laboratory in the ISO/IEC 17025 environment. These recommendations are based on those of nationally and internationally accepted standards and guidelines, as well as those of reputable and experienced technical bodies. They are also based on the author's experience in the evaluation of method development and transfer projects, validation data, and the implementation of quality management systems in the area of method development.

  9. Validation of Rapid Radiochemical Method for Californium ...

    EPA Pesticide Factsheets

    Technical Brief In the event of a radiological/nuclear contamination event, the response community would need tools and methodologies to rapidly assess the nature and the extent of contamination. To characterize a radiologically contaminated outdoor area and to inform risk assessment, large numbers of environmental samples would be collected and analyzed over a short period of time. To address the challenge of quickly providing analytical results to the field, the U.S. EPA developed a robust analytical method. This method allows response officials to characterize contaminated areas and to assess the effectiveness of remediation efforts, both rapidly and accurately, in the intermediate and late phases of environmental cleanup. Improvement in sample processing and analysis leads to increased laboratory capacity to handle the analysis of a large number of samples following the intentional or unintentional release of a radiological/nuclear contaminant.

  10. Methods for validating chronometry of computerized tests.

    PubMed

    Salmon, Joshua P; Jones, Stephanie A H; Wright, Chris P; Butler, Beverly C; Klein, Raymond M; Eskes, Gail A

    2017-03-01

    Determining the speed at which a task is performed (i.e., reaction time) can be a valuable tool in both research and clinical assessments. However, standard computer hardware employed for measuring reaction times (e.g., computer monitor, keyboard, or mouse) can add nonrepresentative noise to the data, potentially compromising the accuracy of measurements and the conclusions drawn from the data. Therefore, an assessment of the accuracy and precision of measurement should be included in the development of computerized tests and assessment batteries that rely on reaction times as the dependent variable. This manuscript outlines three methods for assessing the temporal accuracy of reaction time data (one employing external chronometry). Using example data collected from the Dalhousie Computerized Attention Battery (DalCAB), we discuss the detection, measurement, and correction of nonrepresentative noise in reaction time measurement. The details presented in this manuscript should act as a cautionary tale to any researchers or clinicians gathering reaction time data who have not yet considered methods for verifying the internal chronometry of the software and/or hardware being used.
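    As a minimal illustration of the external-chronometry approach, the sketch below compares software-logged reaction times against an independent timing device and splits the discrepancy into a constant bias plus trial-to-trial jitter. The timing values are invented; this is not the DalCAB procedure itself.

```python
import statistics

# Hypothetical paired timings (ms): reaction times logged by the software
# vs. the same trials measured by an external chronometer (e.g., photodiode
# on the screen plus a sensor on the response key).
software_rt = [312.4, 298.7, 305.1, 321.9, 310.2, 295.8]
external_rt = [289.1, 276.0, 282.3, 298.5, 287.0, 273.2]

offsets = [s - e for s, e in zip(software_rt, external_rt)]
bias = statistics.mean(offsets)     # constant latency added by the measurement chain
jitter = statistics.stdev(offsets)  # trial-to-trial variability of that latency

print(f"mean added latency: {bias:.1f} ms, jitter (SD): {jitter:.1f} ms")
# A constant bias can be subtracted post hoc; large jitter cannot, and it
# indicates the measurement chain is too noisy for fine reaction time contrasts.
```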

  11. Unexpected but Most Welcome: Mixed Methods for the Validation and Revision of the Participatory Evaluation Measurement Instrument

    ERIC Educational Resources Information Center

    Daigneault, Pierre-Marc; Jacob, Steve

    2014-01-01

    Although combining methods is nothing new, more contributions about why and how to mix methods for validation purposes are needed. This article presents a case of validating the inferences drawn from the Participatory Evaluation Measurement Instrument, an instrument that purports to measure stakeholder participation in evaluation. Although the…

  13. Validation of population-based disease simulation models: a review of concepts and methods

    PubMed Central

    2010-01-01

    Background Computer simulation models are used increasingly to support public health research and policy, but questions about their quality persist. The purpose of this article is to review the principles and methods for validation of population-based disease simulation models. Methods We developed a comprehensive framework for validating population-based chronic disease simulation models and used this framework in a review of published model validation guidelines. Based on the review, we formulated a set of recommendations for gathering evidence of model credibility. Results Evidence of model credibility derives from examining: 1) the process of model development, 2) the performance of a model, and 3) the quality of decisions based on the model. Many important issues in model validation are insufficiently addressed by current guidelines. These issues include a detailed evaluation of different data sources, graphical representation of models, computer programming, model calibration, between-model comparisons, sensitivity analysis, and predictive validity. The role of external data in model validation depends on the purpose of the model (e.g., decision analysis versus prediction). More research is needed on the methods of comparing the quality of decisions based on different models. Conclusion As the role of simulation modeling in population health is increasing and models are becoming more complex, there is a need for further improvements in model validation methodology and common standards for evaluating model credibility. PMID:21087466

  14. Validation of analytic methods for biomarkers used in drug development.

    PubMed

    Chau, Cindy H; Rixe, Olivier; McLeod, Howard; Figg, William D

    2008-10-01

    The role of biomarkers in drug discovery and development has gained precedence over the years. As biomarkers become integrated into drug development and clinical trials, quality assurance and, in particular, assay validation become essential with the need to establish standardized guidelines for analytic methods used in biomarker measurements. New biomarkers can revolutionize both the development and use of therapeutics but are contingent on the establishment of a concrete validation process that addresses technology integration and method validation as well as regulatory pathways for efficient biomarker development. This perspective focuses on the general principles of the biomarker validation process with an emphasis on assay validation and the collaborative efforts undertaken by various sectors to promote the standardization of this procedure for efficient biomarker development.

  15. Validation of Analytical Methods for Biomarkers Employed in Drug Development

    PubMed Central

    Chau, Cindy H.; Rixe, Olivier; McLeod, Howard; Figg, William D.

    2008-01-01

    The role of biomarkers in drug discovery and development has gained precedence over the years. As biomarkers become integrated into drug development and clinical trials, quality assurance and in particular assay validation become essential, with the need to establish standardized guidelines for analytical methods used in biomarker measurements. New biomarkers can revolutionize both the development and use of therapeutics, but this is contingent upon the establishment of a concrete validation process that addresses technology integration and method validation as well as regulatory pathways for efficient biomarker development. This perspective focuses on the general principles of the biomarker validation process with an emphasis on assay validation and the collaborative efforts undertaken by various sectors to promote the standardization of this procedure for efficient biomarker development. PMID:18829475

  16. A Model-Based Method for Content Validation of Automatically Generated Test Items

    ERIC Educational Resources Information Center

    Zhang, Xinxin; Gierl, Mark

    2016-01-01

    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  17. Validity Argument for Assessing L2 Pragmatics in Interaction Using Mixed Methods

    ERIC Educational Resources Information Center

    Youn, Soo Jung

    2015-01-01

    This study investigates the validity of assessing L2 pragmatics in interaction using mixed methods, focusing on the evaluation inference. Open role-plays that are meaningful and relevant to the stakeholders in an English for Academic Purposes context were developed for classroom assessment. For meaningful score interpretations and accurate…

  19. Development and Fit-for-Purpose Validation of a Soluble Human Programmed Death-1 Protein Assay.

    PubMed

    Ni, Yan G; Yuan, Xiling; Newitt, John A; Peterson, Jon E; Gleason, Carol R; Haulenbeek, Jonathan; Santockyte, Rasa; Lafont, Virginie; Marsilio, Frank; Neely, Robert J; DeSilva, Binodh; Piccoli, Steven P

    2015-07-01

    Programmed death-1 (PD-1) protein is a co-inhibitory receptor that negatively regulates immune cell activation and permits tumors to evade normal immune defense. Anti-PD-1 antibodies have been shown to restore immune cell activation and effector function, an exciting breakthrough in cancer immunotherapy. Recent reports have documented a soluble form of PD-1 (sPD-1) in the circulation of normal and disease state individuals. A clinical assay to quantify sPD-1 would contribute to the understanding of sPD-1 function and facilitate the development of anti-PD-1 drugs. Here, we report the development and validation of an sPD-1 protein assay. The assay validation followed the framework for full validation of a biotherapeutic pharmacokinetic assay. A purified recombinant human PD-1 protein was characterized extensively and was identified as the assay reference material, which mimics the endogenous analyte in structure and function. The lower limit of quantitation (LLOQ) was determined to be 100 pg/mL, with a dynamic range spanning three logs to 10,000 pg/mL. The intra- and inter-assay imprecision were ≤15%, and the assay bias (percent deviation) was ≤10%. Potential matrix effects were investigated in sera from both normal healthy volunteers and selected cancer patients. Bulk-prepared frozen standards and pre-coated streptavidin plates were used in the assay to ensure consistency in assay performance over time. This assay appears to specifically measure total sPD-1 protein, since the human anti-PD-1 antibody nivolumab and the endogenous ligands of PD-1 protein, PD-L1 and PD-L2, do not interfere with the assay.
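    The abstract gives acceptance criteria but not the calibration model; ligand binding assays of this kind commonly use a four-parameter logistic (4PL) curve, so the sketch below fits a hypothetical 4PL calibration spanning the reported 100-10,000 pg/mL range and back-calculates a mid-level standard to check relative bias against the ±10% criterion. All concentrations and responses are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a = response at zero dose, d = response at infinite dose,
    c = inflection concentration, b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def back_calc(y, a, b, c, d):
    """Invert the 4PL to back-calculate a concentration from a response."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Hypothetical calibrators (pg/mL) across the reported 100-10,000 pg/mL range.
conc = np.array([100.0, 300.0, 1000.0, 3000.0, 10000.0])
resp = np.array([0.12, 0.35, 0.98, 2.10, 3.40])

popt, _ = curve_fit(four_pl, conc, resp, p0=[0.05, 1.0, 1000.0, 4.0], maxfev=10000)

nominal = conc[2]
measured = back_calc(resp[2], *popt)
bias_pct = 100.0 * (measured - nominal) / nominal
print(f"mid-level back-calculated bias: {bias_pct:+.1f}% (accept if within +/-10%)")
```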

  20. The external validity of results derived from ecstasy users recruited using purposive sampling strategies.

    PubMed

    Topp, Libby; Barker, Bridget; Degenhardt, Louisa

    2004-01-07

    This study sought to compare the patterns and correlates of 'recent' and 'regular' ecstasy use estimated on the basis of two datasets generated in 2001 in New South Wales, Australia, from a probability and a non-probability sample. The first was the National Drug Strategy Household Survey (NDSHS), a multistage probability sample of the general population; and the second was the Illicit Drug Reporting System (IDRS) Party Drugs Module, for which regular ecstasy users were recruited using purposive sampling strategies. NDSHS recent ecstasy users (any use in the preceding 12 months) were compared on a range of demographic and drug use variables to NDSHS regular ecstasy users (at least monthly use in the preceding 12 months) and purposively sampled regular ecstasy users (at least monthly use in the preceding 6 months). The demographic characteristics of the three samples were consistent. Among all three, the mean age was approximately 25 years, and a majority (60%) of subjects were male, relatively well-educated, and currently employed or studying. Patterns of ecstasy use were similar among the three samples, although compared to recent users, regular users were likely to report more frequent use of ecstasy. All samples were characterised by extensive polydrug use, although the two samples of regular ecstasy users reported higher rates of other illicit drug use than the sample of recent users. The similarities between the demographic and drug use characteristics of the samples are striking, and suggest that, at least in NSW, purposive sampling that seeks to draw from a wide cross-section of users and to sample a relatively large number of individuals, can give rise to samples of ecstasy users that may be considered sufficiently representative to reasonably warrant the drawing of inferences relating to the entire population. These findings may partially offset concerns that purposive samples of ecstasy users are likely to remain a primary source of ecstasy

  1. Development and validation of a new fallout transport method using variable spectral winds. Doctoral thesis

    SciTech Connect

    Hopkins, A.T.

    1984-09-01

    The purpose of this research was to develop and validate a fallout prediction method using variable transport calculations. The new method uses National Meteorological Center (NMC) spectral coefficients to compute wind vectors along the space- and time-varying trajectories of falling particles. The method was validated by comparing computed and actual cloud trajectories from a Mount St. Helens volcanic eruption and a high dust cloud. In summary, this research demonstrated the feasibility of using spectral coefficients for fallout transport calculations, developed a two-step smearing model to treat variable winds, and showed that uncertainties in spectral winds do not contribute significantly to the error in computed dose rate.
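    The thesis itself is not reproduced here, but the core of variable transport can be sketched: integrate a falling particle's horizontal drift through a wind field that varies with position and time, sampling the wind along the trajectory. In the actual method the wind would be evaluated from NMC spectral coefficients; the toy wind function, settling speed, and time step below are assumptions.

```python
import numpy as np

def advect_particle(x0, z0, fall_speed, wind_fn, dt=60.0):
    """Integrate a falling particle's horizontal drift through a wind field.
    wind_fn(x, z, t) returns the horizontal wind vector (m/s) at that point;
    in the thesis this would be evaluated from NMC spectral coefficients."""
    x, z, t = np.asarray(x0, dtype=float), float(z0), 0.0
    while z > 0.0:
        x = x + wind_fn(x, z, t) * dt  # horizontal advection (forward Euler)
        z -= fall_speed * dt           # constant settling speed (a simplification)
        t += dt
    return x  # ground-impact position

# Toy wind field: eastward/northward components decaying linearly with altitude.
wind = lambda x, z, t: np.array([20.0, 5.0]) * (z / 10000.0)
print("impact offset (m):", advect_particle([0.0, 0.0], 10000.0, 2.0, wind))
```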

  2. Reliability and validity of optoelectronic method for biophotonical measurements

    NASA Astrophysics Data System (ADS)

    Karpienko, Katarzyna; Wróbel, Maciej S.; Urniaż, Rafał

    2013-11-01

    Reliability and validity of measurements is of utmost importance when assessing the measuring capability of instruments developed for research. In order to perform a legitimate experiment, the instruments used must be both reliable and valid. Reliability estimates the degree of precision of measurement, the extent to which a measurement is internally consistent. Validity is the usefulness of an instrument to perform accurate measurements of the quantities it was designed to measure. A statistical analysis for reliability and validity control of a low-coherence interferometry method for refractive index measurements of biological fluids is presented. The low-coherence interferometer is sensitive to the optical path difference between interfering beams. This difference depends on the refractive index of the measured material. To assess the validity and reliability of the proposed method for blood measurements, the statistical analysis was performed on several substances with known refractive indices. Analysis of the low-coherence interferograms considered the mean distances between fringes. The statistical analysis for validity and reliability consisted of Grubbs' test for outliers, the Shapiro-Wilk test for normal distribution, Student's t-test, the standard deviation, the coefficient of determination and Pearson's r correlation. Overall, the tests demonstrated the high statistical significance of the measurement method, at a significance level < 0.0001.
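    A minimal sketch of such a test battery, assuming SciPy: Grubbs' test implemented by hand (SciPy does not ship one), the Shapiro-Wilk test, and a one-sample t-test against a known reference index. The fringe-derived values are invented stand-ins for real interferogram measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical mean fringe distances (converted to refractive index units)
# from repeated interferograms of one reference liquid.
x = np.array([1.4312, 1.4308, 1.4315, 1.4310, 1.4309, 1.4313, 1.4311])
known_n = 1.4310  # reference refractive index of the test liquid

# Grubbs' test for a single two-sided outlier, implemented by hand.
n = len(x)
G = np.max(np.abs(x - x.mean())) / x.std(ddof=1)
t = stats.t.ppf(1 - 0.05 / (2 * n), n - 2)
G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t ** 2 / (n - 2 + t ** 2))
print(f"Grubbs G = {G:.2f} vs critical {G_crit:.2f} -> outlier: {G > G_crit}")

# Shapiro-Wilk normality check, then a one-sample t-test against the known value.
print(f"Shapiro-Wilk p = {stats.shapiro(x).pvalue:.3f}")
print(f"t-test vs known index p = {stats.ttest_1samp(x, known_n).pvalue:.3f}")
```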

  3. International Harmonization and Cooperation in the Validation of Alternative Methods.

    PubMed

    Barroso, João; Ahn, Il Young; Caldeira, Cristiane; Carmichael, Paul L; Casey, Warren; Coecke, Sandra; Curren, Rodger; Desprez, Bertrand; Eskes, Chantra; Griesinger, Claudius; Guo, Jiabin; Hill, Erin; Roi, Annett Janusch; Kojima, Hajime; Li, Jin; Lim, Chae Hyung; Moura, Wlamir; Nishikawa, Akiyoshi; Park, HyeKyung; Peng, Shuangqing; Presgrave, Octavio; Singer, Tim; Sohn, Soo Jung; Westmoreland, Carl; Whelan, Maurice; Yang, Xingfen; Yang, Ying; Zuang, Valérie

    The development and validation of scientific alternatives to animal testing is important not only from an ethical perspective (implementation of the 3Rs), but also to improve safety assessment decision making with the use of mechanistic information of higher relevance to humans. To be effective in these efforts, it is however imperative that validation centres, industry, regulatory bodies, academia and other interested parties ensure strong international cooperation, cross-sector collaboration and intense communication in the design, execution, and peer review of validation studies. Such an approach is critical to achieve harmonized and more transparent approaches to method validation, peer review and recommendation, which will ultimately expedite the international acceptance of valid alternative methods or strategies by regulatory authorities and their implementation and use by stakeholders. It also allows achieving greater efficiency and effectiveness by avoiding duplication of effort and leveraging limited resources. In view of achieving these goals, the International Cooperation on Alternative Test Methods (ICATM) was established in 2009 by validation centres from Europe, the USA, Canada and Japan. ICATM was later joined by Korea in 2011 and currently also counts Brazil and China as observers. This chapter describes the existing differences across world regions and the major efforts carried out to achieve consistent international cooperation and harmonization in the validation and adoption of alternative approaches to animal testing.

  4. [Data validation methods and discussion on Chinese materia medica resource survey].

    PubMed

    Zhang, Yue; Ma, Wei-Feng; Zhang, Xiao-Bo; Zhu, Shou-Dong; Guo, Lan-Ping; Wang, Xing-Xing

    2013-07-01

    Since the beginning of the fourth national survey of Chinese materia medica resources, 22 provinces have conducted pilot surveys. The survey teams have reported an immense amount of data, which places very high demands on the construction of the database system. In order to ensure quality, it is necessary to check and validate the data in the database system. Data validation is an important method to ensure the validity, integrity and accuracy of census data. This paper comprehensively introduces the data validation system of the database for the fourth national survey of Chinese materia medica resources, and further improves the design ideas and programs of data validation. The purpose of this study is to help the survey work proceed smoothly.

  5. Terrestrial gastropods (Helix spp) as sentinels of primary DNA damage for biomonitoring purposes: a validation study.

    PubMed

    Angeletti, Dario; Sebbio, Claudia; Carere, Claudio; Cimmaruta, Roberta; Nascetti, Giuseppe; Pepe, Gaetano; Mosesso, Pasquale

    2013-04-01

    We validated the alkaline comet assay in two species of land snail (Helix aspersa and Helix vermiculata) to test their suitability as sentinels for primary DNA damage in polluted environments. The study was conducted under the framework of a biomonitoring program for a power station in Central Italy that had recently been converted from an oil- to a coal-fired plant. After optimizing test conditions, the comet assay was used to measure the % Tail DNA induced by in vitro exposure of hemocytes to different concentrations of a reactive oxygen species (H2O2). The treatment induced significant increases in this parameter with a concentration effect, indicating the effectiveness of the assay in snail hemocytes. After evaluating possible differences between the two species, we sampled them in three field sites at different distances from the power station, and in two reference sites assumed to have low or no levels of pollution. No species differences emerged. Percent Tail DNA values in snails from the sites near the power station were higher than those from control sites. An inverse correlation emerged between % Tail DNA and distance from the power station, suggesting that the primary DNA damage decreased as distance increased away from the pollution source. Detection of a gradient of heavy metal concentration in snail tissues suggests that these pollutants are a potential cause of the observed pattern. The comet assay appears to be a suitable assay and Helix spp. populations suitable sentinels to detect the genotoxic impact of pollutants. Copyright © 2013 Wiley Periodicals, Inc.

  6. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  7. Development and validation of a reading-related assessment battery in Malay for the purpose of dyslexia assessment.

    PubMed

    Lee, Lay Wah

    2008-06-01

    Malay is an alphabetic language with transparent orthography. A Malay reading-related assessment battery which was conceptualised based on the International Dyslexia Association definition of dyslexia was developed and validated for the purpose of dyslexia assessment. The battery consisted of ten tests: Letter Naming, Word Reading, Non-word Reading, Spelling, Passage Reading, Reading Comprehension, Listening Comprehension, Elision, Rapid Letter Naming and Digit Span. Content validity was established by expert judgment. Concurrent validity was obtained using the schools' language tests as criterion. Evidence of predictive and construct validity was obtained through regression analyses and factor analyses. Phonological awareness was the most significant predictor of word-level literacy skills in Malay, with rapid naming making independent secondary contributions. Decoding and listening comprehension made separate contributions to reading comprehension, with decoding as the more prominent predictor. Factor analysis revealed four factors: phonological decoding, phonological naming, comprehension and verbal short-term memory. In conclusion, despite differences in orthography, there are striking similarities in the theoretical constructs of reading-related tasks in Malay and in English.

  8. Triangulation, Respondent Validation, and Democratic Participation in Mixed Methods Research

    ERIC Educational Resources Information Center

    Torrance, Harry

    2012-01-01

    Over the past 10 years or so the "Field" of "Mixed Methods Research" (MMR) has increasingly been exerting itself as something separate, novel, and significant, with some advocates claiming paradigmatic status. Triangulation is an important component of mixed methods designs. Triangulation has its origins in attempts to validate research findings…

  10. Beyond Correctness: Development and Validation of Concept-Based Categorical Scoring Rubrics for Diagnostic Purposes

    ERIC Educational Resources Information Center

    Arieli-Attali, Meirav; Liu, Ying

    2016-01-01

    Diagnostic assessment approaches intend to provide fine-grained reports of what students know and can do, focusing on their areas of strengths and weaknesses. However, current application of such diagnostic approaches is limited by the scoring method for item responses; important diagnostic information, such as type of errors and strategy use is…

  12. Adapting CEF-Descriptors for Rating Purposes: Validation by a Combined Rater Training and Scale Revision Approach

    ERIC Educational Resources Information Center

    Harsch, Claudia; Martin, Guido

    2012-01-01

    We explore how a local rating scale can be based on the Common European Framework CEF-proficiency scales. As part of the scale validation (Alderson, 1991; Lumley, 2002), we examine which adaptations are needed to turn CEF-proficiency descriptors into a rating scale for a local context, and to establish a practicable method to revise the initial…

  13. Quantitative assessment of gene expression network module-validation methods.

    PubMed

    Li, Bing; Zhang, Yingying; Yu, Yanan; Wang, Pengqian; Wang, Yongcheng; Wang, Zhong; Wang, Yongyan

    2015-10-16

    Validation of pluripotent modules in diverse networks holds enormous potential for systems biology and network pharmacology. An arising challenge is how to assess the accuracy of discovering all potential modules from multi-omic networks and validating their architectural characteristics based on innovative computational methods beyond function enrichment and biological validation. To display the framework progress in this domain, we systematically divided the existing Computational Validation Approaches based on Modular Architecture (CVAMA) into topology-based approaches (TBA) and statistics-based approaches (SBA). We compared the available module validation methods based on 11 gene expression datasets; partially consistent results in the form of homogeneous models were obtained with each individual approach, whereas discrepant, contradictory results were found between TBA and SBA. The TBA of the Zsummary value had a higher Validation Success Ratio (VSR) (51%) and a higher Fluctuation Ratio (FR) (80.92%), whereas the SBA of the approximately unbiased (AU) p-value had a lower VSR (12.3%) and a lower FR (45.84%). The gray-area simulation study revealed a consistent result for these two models and indicated a lower Variation Ratio (VR) (8.10%) of TBA at 6 simulated levels. Despite facing many novel challenges and evidence limitations, CVAMA may offer novel insights into modular networks.

  14. Quantitative assessment of gene expression network module-validation methods

    PubMed Central

    Li, Bing; Zhang, Yingying; Yu, Yanan; Wang, Pengqian; Wang, Yongcheng; Wang, Zhong; Wang, Yongyan

    2015-01-01

    Validation of pluripotent modules in diverse networks holds enormous potential for systems biology and network pharmacology. An arising challenge is how to assess the accuracy of discovering all potential modules from multi-omic networks and validating their architectural characteristics based on innovative computational methods beyond function enrichment and biological validation. To display the framework progress in this domain, we systematically divided the existing Computational Validation Approaches based on Modular Architecture (CVAMA) into topology-based approaches (TBA) and statistics-based approaches (SBA). We compared the available module validation methods based on 11 gene expression datasets; partially consistent results in the form of homogeneous models were obtained with each individual approach, whereas discrepant, contradictory results were found between TBA and SBA. The TBA of the Zsummary value had a higher Validation Success Ratio (VSR) (51%) and a higher Fluctuation Ratio (FR) (80.92%), whereas the SBA of the approximately unbiased (AU) p-value had a lower VSR (12.3%) and a lower FR (45.84%). The gray-area simulation study revealed a consistent result for these two models and indicated a lower Variation Ratio (VR) (8.10%) of TBA at 6 simulated levels. Despite facing many novel challenges and evidence limitations, CVAMA may offer novel insights into modular networks. PMID:26470848

  15. Visualization of vasculature with convolution surfaces: method, validation and evaluation.

    PubMed

    Oeltze, Steffen; Preim, Bernhard

    2005-04-01

    We present a method for visualizing vasculature based on clinical computed tomography or magnetic resonance data. The vessel skeleton as well as the diameter information per voxel serve as input. Our method adheres to these data, while producing smooth transitions at branchings and closed, rounded ends by means of convolution surfaces. We examine the filter design with respect to irritating bulges, unwanted blending and the correct visualization of the vessel diameter. The method has been applied to a large variety of anatomic trees. We discuss the validation of the method by means of a comparison to other visualization methods. Surface distance measures are carried out to perform a quantitative validation. Furthermore, we present the evaluation of the method which has been accomplished on the basis of a survey by 11 radiologists and surgeons.
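    The abstract mentions quantitative surface distance measures without detailing them; one common choice is the symmetric mean distance between sampled surface points, sketched below with a KD-tree. The point clouds and noise level are invented.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(points_a, points_b):
    """Symmetric mean distance between two surfaces sampled as point sets."""
    d_ab = cKDTree(points_b).query(points_a)[0]  # each A point -> nearest B point
    d_ba = cKDTree(points_a).query(points_b)[0]  # each B point -> nearest A point
    return (d_ab.mean() + d_ba.mean()) / 2.0

# Hypothetical vertex samples from a convolution surface and from a reference
# visualization (e.g., truncated cones) of the same vessel segment.
rng = np.random.default_rng(0)
surface_conv = rng.normal(size=(1000, 3))
surface_ref = surface_conv + rng.normal(scale=0.05, size=(1000, 3))
print(f"mean symmetric distance: {mean_surface_distance(surface_conv, surface_ref):.3f}")
```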

  16. Update on validation of microbiological methods by AOAC International.

    PubMed

    Andrews, W H

    1994-01-01

    An update is presented of the microbiological methods validated by AOAC since 1983. A sequential listing of the microbiological methods adopted first action between 1939 and 1993 gives the permanent new numbers of each, as introduced in Official Methods of Analysis, 15th Edition. Consideration is given to the expanded applicability of approved methods with respect to food matrix; the predominance of methods for the detection, identification, and serological testing of Salmonella spp.; and the procedures for coliforms and Escherichia coli, which were most frequently approved between 1973 and 1993. The substantial increase in validation of test kits is discussed, and categories of methods for the modification of test kits already approved are defined.

  17. Validation of a previous day recall for measuring the location and purpose of active and sedentary behaviors compared to direct observation

    PubMed Central

    2014-01-01

    Purpose Gathering contextual information (i.e., location and purpose) about active and sedentary behaviors is an advantage of self-report tools such as previous day recalls (PDRs). However, the validity of PDRs for measuring context has not been empirically tested. The purpose of this paper was to compare PDR estimates of location and purpose to direct observation (DO). Methods Fifteen adult (18–75 y) and fifteen adolescent (12–17 y) participants were directly observed during at least one segment of the day (i.e., morning, afternoon or evening). Participants completed their normal daily routine while trained observers recorded the location (i.e., home, community, work/school), the purpose (e.g., leisure, transportation) and whether the behavior was sedentary or active. The day following the observation, participants completed an unannounced PDR. Estimates of time in each context were compared between PDR and DO. Intra-class correlations (ICCs), percent agreement and kappa statistics were calculated. Results For adults, percent agreement was 85% or greater for each location and ICC values ranged from 0.71 to 0.96. The PDR-reported purposes of adults' behaviors were highly correlated with DO for household activities and work (ICCs of 0.84 and 0.88, respectively). Transportation was not significantly correlated with DO (ICC = -0.08). For adolescents, reported classification of activity location was 80.8% or greater. The ICCs for the purpose of adolescents' behaviors ranged from 0.46 to 0.78. Participants were most accurate in classifying the location and purpose of the behaviors in which they spent the most time. Conclusions This study suggests that adults and adolescents can accurately report where and why they spend time in behaviors using a PDR. This information on behavioral context is essential for translating the evidence for specific behavior-disease associations to health interventions and public policy. PMID:24490619
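    Agreement between a recall and direct observation on categorical codes is typically summarized as above with percent agreement plus a chance-corrected kappa; a minimal sketch follows (the minute-by-minute location codes are invented, not study data).

```python
from collections import Counter

def agreement_and_kappa(rater_a, rater_b):
    """Observed percent agreement and Cohen's kappa for two categorical codings."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in set(count_a) | set(count_b)) / n ** 2
    return p_o, (p_o - p_e) / (1 - p_e)

# Hypothetical minute-by-minute location codes: recall (PDR) vs. observer (DO).
pdr = ["home", "home", "work", "work", "community", "home", "work", "home"]
obs = ["home", "home", "work", "community", "community", "home", "work", "work"]
p_o, kappa = agreement_and_kappa(pdr, obs)
print(f"percent agreement = {100 * p_o:.0f}%, Cohen's kappa = {kappa:.2f}")
```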

  18. Recommendations for Use and Fit-for-Purpose Validation of Biomarker Multiplex Ligand Binding Assays in Drug Development.

    PubMed

    Jani, Darshana; Allinson, John; Berisha, Flora; Cowan, Kyra J; Devanarayan, Viswanath; Gleason, Carol; Jeromin, Andreas; Keller, Steve; Khan, Masood U; Nowatzke, Bill; Rhyne, Paul; Stephen, Laurie

    2016-01-01

    Multiplex ligand binding assays (LBAs) are increasingly being used to support many stages of drug development. The complexity of multiplex assays creates many unique challenges in comparison to single-plexed assays leading to various adjustments for validation and potentially during sample analysis to accommodate all of the analytes being measured. This often requires a compromise in decision making with respect to choosing final assay conditions and acceptance criteria of some key assay parameters, depending on the intended use of the assay. The critical parameters that are impacted due to the added challenges associated with multiplexing include the minimum required dilution (MRD), quality control samples that span the range of all analytes being measured, quantitative ranges which can be compromised for certain targets, achieving parallelism for all analytes of interest, cross-talk across assays, freeze-thaw stability across analytes, among many others. Thus, these challenges also increase the complexity of validating the performance of the assay for its intended use. This paper describes the challenges encountered with multiplex LBAs, discusses the underlying causes, and provides solutions to help overcome these challenges. Finally, we provide recommendations on how to perform a fit-for-purpose-based validation, emphasizing issues that are unique to multiplex kit assays.

  19. Validating the strategies analysis diagram: assessing the reliability and validity of a formative method.

    PubMed

    Cornelissen, Miranda; McClure, Roderick; Salmon, Paul M; Stanton, Neville A

    2014-11-01

    The Strategies Analysis Diagram (SAD) is a recently developed method to model the range of possible strategies available for activities in complex sociotechnical systems. Previous applications of the new method have shown that it can effectively identify a comprehensive range of strategies available to humans performing activity within a particular system. A recurring criticism of Ergonomics methods is, however, that substantive evidence regarding their performance is lacking. For a method to be widely used by other practitioners, such evaluations are necessary. This article presents an evaluation of the criterion-referenced validity and test-retest reliability of the SAD method when used by novice analysts. The findings show that individual analyst performance was average. However, pooling the individual analyst outputs into a group model increased the reliability and validity of the method. It is concluded that the SAD method's reliability and validity can be assured through the use of a structured process in which analysts first construct an individual model, followed by either another analyst pooling the individual results or a group process pooling individual models into an agreed group model. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  20. Validation of three-dimensional Euler methods for vibrating cascade aerodynamics

    SciTech Connect

    Gerolymos, G.A.; Vallet, I.

    1996-10-01

    The purpose of this work is to validate a time-nonlinear three-dimensional Euler solver for vibrating cascade aerodynamics by comparison with available semi-analytical theoretical results for flat-plate cascades. First, the method is validated with respect to the purely two-dimensional theory of Verdon (for supersonic flow) by computing two-dimensional (spanwise-constant) vibration in linear three-dimensional cascades. Then the method is validated by comparison with the theoretical results of Namba and the computational results of He and Denton for subsonic flow in a linear three-dimensional cascade with a three-dimensional vibratory mode. Finally, the method is compared with results of Chi for two subsonic rotating annular cascades of helicoidal flat plates. Quite satisfactory agreement is obtained for all the cases studied. A first code-to-code comparison is also presented.

  1. The Relationship between Method and Validity in Social Science Research.

    ERIC Educational Resources Information Center

    MacKinnon, David; And Others

    An endless debate in social science research focuses on whether or not there is a philosophical basis for justifying the application of scientific methods to social inquiry. A review of the philosophies of various scholars in the field indicates that there is no single procedure for arriving at a valid statement in a scientific inquiry. Natural…

  2. ePortfolios: The Method of Choice for Validation

    ERIC Educational Resources Information Center

    Scott, Ken; Kim, Jichul

    2015-01-01

    Community colleges have long been institutions of higher education in the arenas of technical education and training, as well as preparing students for transfer to universities. While students are engaged in their student learning outcomes, projects, research, and community service, how have these students validated their work? One method of…

  4. Youden test application in robustness assays during method validation.

    PubMed

    Karageorgou, Eftichia; Samanidou, Victoria

    2014-08-01

    Analytical method validation is a vital step following method development for ensuring reliable and accurate method performance. Among the examined figures of merit, the robustness/ruggedness study allows us to test the performance characteristics of the analytical process when operating conditions are altered, either deliberately or not. This study yields useful information and is a fundamental part of method validation. Since many experiments are required, this step is highly demanding in time and consumables. In order to avoid the difficult task of performing too many experiments, the Youden test, which makes use of fractional factorial designs, has proved to be a very effective approach. Its main advantage is that it keeps the required time and effort to a minimum, since only a limited number of determinations have to be made, using combinations of the chosen investigated factors. Typical applications of this robustness test found in the literature, covering a wide variety of sample matrices, are briefly discussed in this review.
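    For reference, the classic Youden-Steiner ruggedness design examines seven factors in only eight runs; the sketch below builds that design matrix and estimates each factor's effect as the difference between the mean results at its two levels. The recovery values are invented for illustration.

```python
import numpy as np

# Youden-Steiner ruggedness design: 7 factors (A-G) in 8 runs; +1 = nominal
# level, -1 = deliberately altered level (a saturated 2^(7-4) fractional
# factorial, so every column is balanced and mutually orthogonal).
design = np.array([
    [+1, +1, +1, +1, +1, +1, +1],
    [+1, +1, -1, +1, -1, -1, -1],
    [+1, -1, +1, -1, +1, -1, -1],
    [+1, -1, -1, -1, -1, +1, +1],
    [-1, +1, +1, -1, -1, +1, -1],
    [-1, +1, -1, -1, +1, -1, +1],
    [-1, -1, +1, +1, -1, -1, +1],
    [-1, -1, -1, +1, +1, +1, -1],
])

# Hypothetical assay results (% recovery) for the eight runs.
y = np.array([99.8, 100.4, 99.1, 100.0, 99.5, 100.2, 99.7, 99.9])

# Effect of each factor = mean result at its (+) level minus mean at its (-) level.
effects = design.T @ y / 4.0
for name, effect in zip("ABCDEFG", effects):
    print(f"factor {name}: effect = {effect:+.2f}")
# Effects that are large relative to the method's precision flag the
# operating parameters to which the method is not robust.
```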

  5. Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method Implementation Research.

    PubMed

    Palinkas, Lawrence A; Horwitz, Sarah M; Green, Carla A; Wisdom, Jennifer P; Duan, Naihua; Hoagwood, Kimberly

    2015-09-01

    Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.

  6. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research

    PubMed Central

    Palinkas, Lawrence A.; Horwitz, Sarah M.; Green, Carla A.; Wisdom, Jennifer P.; Duan, Naihua; Hoagwood, Kimberly

    2013-01-01

    Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research. PMID:24193818

  7. Validation of cleaning method for various parts fabricated at a Beryllium facility

    SciTech Connect

    Davis, Cynthia M.

    2015-12-15

    This study evaluated and documented a cleaning process that is used to clean parts that are fabricated at a beryllium facility at Los Alamos National Laboratory. The purpose of evaluating this cleaning process was to validate and approve it for future use to assure beryllium surface levels are below the Department of Energy’s release limits without the need to sample all parts leaving the facility. Inhaling or coming in contact with beryllium can cause an immune response that can result in an individual becoming sensitized to beryllium, which can then lead to a disease of the lungs called chronic beryllium disease, and possibly lung cancer. Thirty aluminum and thirty stainless steel parts were fabricated on a lathe in the beryllium facility, as well as thirty-two beryllium parts, for the purpose of testing a parts cleaning method that involved the use of ultrasonic cleaners. A cleaning method was created, documented, validated, and approved, to reduce beryllium contamination.

  8. How to develop and validate a total organic carbon method for cleaning applications.

    PubMed

    Clark, K

    2001-01-01

    Good Manufacturing Practices require that the cleaning of drug manufacturing equipment be validated. Common analytical techniques used in the validation process include HPLC, UV/Vis, and Total Organic Carbon (TOC). HPLC and UV/Vis are classified as specific methods that identify and measure appropriate active substances. TOC is classified as a non-specific method and can detect all carbon-containing compounds, including active substances, excipients, and cleaning agents. The disadvantage of specific methods is that a new procedure must be developed for every active drug substance that is manufactured. This development process can be very time consuming and tedious. In contrast, one TOC method can potentially be used for all products. A TOC method is sensitive to the ppb range and is less time consuming than HPLC or UV/Vis. USP TOC methods are standard for Water for Injection and Purified Water, and simple modifications of these methods can be used for cleaning validation. The purpose of this study is to demonstrate how to develop and validate a TOC method for cleaning applications. Performance parameters evaluated in this study include linearity, MDL, LOQ, accuracy, precision, and swab recovery.
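    A minimal sketch of the performance parameters named above, under common conventions (EPA-style MDL from replicate low-level spikes, and LOQ taken as ten times the replicate standard deviation; laboratories may define these differently). All concentrations and responses are invented.

```python
import numpy as np
from scipy import stats

# Hypothetical TOC calibration: carbon standards (ppb) vs. instrument response.
conc = np.array([0.0, 50.0, 100.0, 250.0, 500.0, 1000.0])
resp = np.array([0.8, 51.2, 99.5, 251.3, 498.0, 1003.1])
fit = stats.linregress(conc, resp)
print(f"linearity: r^2 = {fit.rvalue ** 2:.5f}, slope = {fit.slope:.4f}")

# MDL from replicate low-level spikes: t(n-1, 0.99) * s of the replicates.
spikes = np.array([24.1, 25.3, 23.8, 24.9, 25.6, 24.4, 23.9])  # ppb, 7 replicates
s = spikes.std(ddof=1)
mdl = stats.t.ppf(0.99, len(spikes) - 1) * s
loq = 10.0 * s
print(f"MDL = {mdl:.2f} ppb, LOQ = {loq:.2f} ppb")
```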

  9. Cleaning validation of ofloxacin on pharmaceutical manufacturing equipment and validation of desired HPLC method.

    PubMed

    Arayne, M Saeed; Sultana, Najma; Sajid, S Shahnawaz; Ali, S Shahid

    2008-01-01

    Inadequate cleaning of a pharmaceutical manufacturing plant, inadequate purging of the individual pieces of equipment used in multi-product manufacturing, or equipment not dedicated to individual products may lead to contamination of the next batch of pharmaceutics manufactured using the same equipment. Challenges for cleaning validation are encountered especially when developing sensitive analytical methods capable of detecting traces of active pharmaceutical ingredients that are likely to remain on the surface of the pharmaceutical equipment after cleaning. A method's inability to detect some residuals could mean either that the method is not sensitive enough to the residue in question or that the sampling procedure is inadequate. A sensitive and reproducible reversed-phase, high-performance liquid chromatographic method was developed for the determination of ofloxacin in swab samples. The method for determining ofloxacin residues on manufacturing equipment surfaces was validated with regard to precision, linearity, accuracy, specificity, limit of quantification, and percent recovery from the equipment surface, as well as the stability of a potential contaminant in a cleaning validation process. The active compound was selectively quantified in a sample matrix and swab material in amounts as low as 0.55 ng/mL. The swabbing procedure using cotton swabs was validated. A mean recovery from stainless steel plate of close to 85% was obtained. Chromatography was carried out on a pre-packed Merck (Darmstadt, Germany) LiChrospher 100 RP-18 (5.0 microm, 250 mm X 4.0 mm) column using a mixture of sodium lauryl sulfate (0.024% aqueous solution), acetonitrile, and glacial acetic acid (500:480:20, v/v/v) as the mobile phase at a flow rate of 1.5 mL/min with a column temperature of 35 degrees C and detection at 294 nm. The assay was linear over the concentration range of 2 ng/mL to 2000 ng/mL (R approximately 0.99998). The method was validated for accuracy and precision. The

  10. The Bland-Altman Method Should Not Be Used in Regression Cross-Validation Studies

    ERIC Educational Resources Information Center

    O'Connor, Daniel P.; Mahar, Matthew T.; Laughlin, Mitzi S.; Jackson, Andrew S.

    2011-01-01

    The purpose of this study was to demonstrate the bias in the Bland-Altman (BA) limits of agreement method when it is used to validate regression models. Data from 1,158 men were used to develop three regression equations to estimate maximum oxygen uptake (R² = 0.40, 0.61, and 0.82, respectively). The equations were evaluated in a…

  12. LC-MS quantification of protein drugs: validating protein LC-MS methods with predigestion immunocapture.

    PubMed

    Duggan, Jeffrey; Ren, Bailuo; Mao, Yan; Chen, Lin-Zhi; Philip, Elsy

    2016-09-01

    A refinement of protein LC-MS bioanalysis is to use predigestion immunoaffinity capture to extract the drug from matrix prior to digestion. Because of their increased sensitivity, such hybrid assays have been successfully validated and applied to a number of clinical studies; however, they can also be subject to potential interferences from antidrug antibodies, circulating ligands or other matrix components specific to patient populations and/or dosed subjects. The purpose of this paper is to describe validation experiments that measure immunocapture efficiency, digestion efficiency, matrix effect and selectivity/specificity that can be used during method optimization and validation to test the resistance of the method to these potential interferences. The designs and benefits of these experiments are discussed in this report using an actual assay case study.

  13. Flexibility and applicability of β-expectation tolerance interval approach to assess the fitness of purpose of pharmaceutical analytical methods.

    PubMed

    Bouabidi, A; Talbi, M; Bourichi, H; Bouklouze, A; El Karbane, M; Boulanger, B; Brik, Y; Hubert, Ph; Rozet, E

    2012-12-01

    An innovative, versatile strategy using total error has been proposed to decide on a method's validity; it controls the risk of accepting an unsuitable assay together with the ability to predict the reliability of future results. This strategy is based on the simultaneous combination of the systematic (bias) and random (imprecision) errors of analytical methods. Using validation standards, both types of error are combined through the use of a prediction interval or β-expectation tolerance interval. Finally, an accuracy profile is built by connecting, on one hand, all the upper tolerance limits, and on the other hand, all the lower tolerance limits. This profile, combined with pre-specified acceptance limits, allows the evaluation of the validity of any quantitative analytical method and thus its fitness for its intended purpose. In this work, the accuracy profile approach was evaluated on several types of analytical methods encountered in the pharmaceutical industrial field, covering different pharmaceutical matrices. The four studied examples depict the flexibility and applicability of this approach for different matrices ranging from tablets to syrups, different techniques such as liquid chromatography or UV spectrophotometry, and different categories of assays commonly encountered in the pharmaceutical industry, i.e., content assays, dissolution assays, and quantitative impurity assays. The accuracy profile approach assesses the fitness for purpose of these methods for their future routine application. It also allows the selection of the most suitable calibration curve, the adequate evaluation of a potential matrix effect together with the proposal of efficient solutions, and the correct definition of the limits of quantification of the studied analytical procedures.
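    A simplified sketch of the accuracy profile computation at one concentration level is given below. The published approach uses Mee's tolerance factor with Satterthwaite degrees of freedom; here a plain Student quantile approximates the β-expectation limits, and the validation-standard results are invented.

```python
import numpy as np
from scipy import stats

def accuracy_profile_level(results, nominal, beta=0.95):
    """beta-expectation tolerance interval at one concentration level.
    `results` has shape (p series, n replicates). Simplified total-error
    version: relative bias +/- k * intermediate-precision SD."""
    p, n = results.shape
    s_w2 = results.var(axis=1, ddof=1).mean()                   # within-series variance
    s_b2 = max(results.mean(axis=1).var(ddof=1) - s_w2 / n, 0)  # between-series variance
    s_ip = np.sqrt(s_w2 + s_b2)                                 # intermediate precision SD
    bias = results.mean() - nominal
    k = stats.t.ppf((1 + beta) / 2, df=p * n - 1) * np.sqrt(1 + 1 / (p * n))
    return tuple(100 * v / nominal for v in (bias, bias - k * s_ip, bias + k * s_ip))

# Hypothetical validation standards: 3 series x 3 replicates at 80 ug/mL.
level = np.array([[79.1, 80.4, 79.8], [80.9, 81.2, 80.1], [78.8, 79.5, 79.6]])
rel_bias, low, high = accuracy_profile_level(level, nominal=80.0)
print(f"relative bias {rel_bias:+.2f}%, tolerance limits [{low:+.2f}%, {high:+.2f}%]")
# The level passes if [low, high] lies inside the acceptance limits (e.g. +/-5%).
```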

  14. Estimation of Glomerular Podocyte Number: A Selection of Valid Methods

    PubMed Central

    Bertram, John F.; Nicholas, Susanne B.; White, Kathryn

    2013-01-01

    The podocyte depletion hypothesis has emerged as an important unifying concept in glomerular pathology. The estimation of podocyte number is therefore often a critical component of studies of progressive renal diseases. Despite this, there is little uniformity in the biomedical literature with regard to the methods used to estimate this important parameter. Here we review a selection of valid methods for estimating podocyte number: exhaustive enumeration method, Weibel and Gomez method, disector/Cavalieri combination, disector/fractionator combination, and thick-and-thin section method. We propose the use of the disector/fractionator method for studies in which controlled sectioning of tissue is feasible, reserving the Weibel and Gomez method for studies based on archival or routine pathology material. PMID:23833256

  15. Design of experiment and data analysis by JMP (SAS institute) in analytical method validation.

    PubMed

    Ye, C; Liu, J; Ren, F; Okafo, N

    2000-08-15

    Validation of an analytical method through a series of experiments demonstrates that the method is suitable for its intended purpose. Because multiple parameters must be examined and a large number of experiments are involved in validation, it is important to design the experiments scientifically, so that appropriate validation parameters can be examined simultaneously to provide sound, overall knowledge of the capabilities of the analytical method. A statistical method based on design of experiments (DOE) was applied to the validation of an HPLC analytical method for the quantitation of a small molecule in a drug product, in terms of intermediate precision and robustness. The data were analyzed in JMP (SAS Institute) software using the analysis of variance method. Confidence intervals for outcomes and control limits for individual parameters were determined. It was demonstrated that the experimental design and statistical analysis used in this study provided an efficient and systematic approach to evaluating intermediate precision and robustness for an HPLC analytical method for small molecule quantitation.
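    The variance-component arithmetic behind such an intermediate-precision ANOVA can be sketched in a few lines (in plain Python rather than JMP; the day-by-replicate assay values are invented):

```python
import numpy as np

# Hypothetical intermediate-precision design: assay results (%) on 4 days,
# 3 replicate preparations per day.
data = np.array([[99.2, 99.6, 99.0],
                 [100.4, 100.1, 99.8],
                 [99.5, 99.9, 99.3],
                 [100.0, 99.7, 100.2]])
n = data.shape[1]
grand_mean = data.mean()

ms_within = data.var(axis=1, ddof=1).mean()       # repeatability mean square
ms_between = n * data.mean(axis=1).var(ddof=1)    # between-day mean square
var_repeat = ms_within
var_day = max((ms_between - ms_within) / n, 0.0)  # day-to-day variance component

rsd_r = 100 * np.sqrt(var_repeat) / grand_mean
rsd_ip = 100 * np.sqrt(var_repeat + var_day) / grand_mean
print(f"repeatability RSD = {rsd_r:.2f}%, intermediate-precision RSD = {rsd_ip:.2f}%")
```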

  16. An individual and dynamic Body Segment Inertial Parameter validation method using ground reaction forces.

    PubMed

    Hansen, Clint; Venture, Gentiane; Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice

    2014-05-07

    Over the last decades, a variety of research has been conducted with the goal of improving Body Segment Inertial Parameter (BSIP) estimations, but to our knowledge a real validation has never been completely successful, because no ground truth is available. The aim of this paper is to propose a validation method for a BSIP identification method (IM) and to confirm the results by comparing contact forces recalculated using inverse dynamics with those obtained from a force plate. Furthermore, the results are compared with the estimation method recently proposed by Dumas et al. (2007). Additionally, the results are cross-validated with a high-velocity overarm throwing movement. Across all conditions, higher correlations, smaller error metrics and smaller RMSE values were found for the proposed BSIP identification method (IM), which shows its advantage over recently proposed methods such as that of Dumas et al. (2007). The purpose of the paper is to validate an already proposed method and to show that this method can be of significant advantage compared to conventional methods.

  17. Methods for Geometric Data Validation of 3d City Models

    NASA Astrophysics Data System (ADS)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence its quality, and the notion of quality itself is application dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point is to have correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. watertightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon-level checks to validate the correctness of each polygon, i.e. closeness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges
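    Two of the polygon-level checks named above, ring closure and planarity against an explicit tolerance, are easy to sketch; the tolerance values and the sample roof polygon below are assumptions, not CityDoctor's actual defaults.

```python
import numpy as np

def ring_is_closed(ring, tol=1e-6):
    """A valid linear ring must end exactly where it starts."""
    return np.linalg.norm(np.asarray(ring[0]) - np.asarray(ring[-1])) < tol

def polygon_is_planar(ring, tol=0.01):
    """Fit a least-squares plane through the vertices (via SVD) and require
    that no vertex deviate from it by more than `tol` model units."""
    pts = np.asarray(ring[:-1], dtype=float)  # drop the duplicated closing vertex
    centred = pts - pts.mean(axis=0)
    normal = np.linalg.svd(centred)[2][-1]    # direction of smallest extent
    return float(np.max(np.abs(centred @ normal))) < tol

# Hypothetical roof polygon with one warped corner: closed, but it fails a
# 1 cm planarity tolerance.
ring = [(0, 0, 10.0), (5, 0, 10.0), (5, 4, 10.05), (0, 4, 10.0), (0, 0, 10.0)]
print("closed:", ring_is_closed(ring), "planar:", polygon_is_planar(ring))
```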

  18. Validity of body composition methods across ethnic population groups.

    PubMed

    Deurenberg, P; Deurenberg-Yap, M

    2003-10-01

    Most in vivo body composition methods rely on assumptions that may vary among different population groups as well as within the same population group. The assumptions are based on in vitro body composition (carcass) analyses. The majority of body composition studies were performed on Caucasians and much of the information on validity methods and assumptions were available only for this ethnic group. It is assumed that these assumptions are also valid for other ethnic groups. However, if apparent differences across ethnic groups in body composition 'constants' and body composition 'rules' are not taken into account, biased information on body composition will be the result. This in turn may lead to misclassification of obesity or underweight at an individual as well as a population level. There is a need for more cross-ethnic population studies on body composition. Those studies should be carried out carefully, with adequate methodology and standardization for the obtained information to be valuable.

  19. Validation of chemistry models employed in a particle simulation method

    NASA Technical Reports Server (NTRS)

    Haas, Brian L.; Mcdonald, Jeffrey D.

    1991-01-01

    The chemistry models employed in a statistical particle simulation method, as implemented on the Intel iPSC/860 multiprocessor computer, are validated and applied. Chemical relaxation of five-species air in gas reservoirs involves 34 simultaneous dissociation, recombination, and atomic-exchange reactions. The reaction rates employed in the analytic solutions are obtained from Arrhenius experimental correlations as functions of temperature for adiabatic gas reservoirs in thermal equilibrium. Favorable agreement with the analytic solutions validates the simulation when applied to relaxation of O2 toward equilibrium in reservoirs dominated by dissociation and recombination, respectively, and when applied to relaxation of air in the temperature range 5000 to 30,000 K. A flow of O2 over a circular cylinder at high Mach number is simulated to demonstrate application of the method to multidimensional reactive flows.
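
    For readers unfamiliar with the form of rate correlation mentioned above, the sketch below evaluates a modified Arrhenius rate coefficient over the stated temperature range. The coefficients are merely illustrative (loosely of the magnitude used for O2 dissociation) and are not taken from the paper.

    ```python
    import numpy as np

    def arrhenius_rate(T, A, n, Ea_over_k):
        """Modified Arrhenius rate coefficient k(T) = A * T**n * exp(-Ea / (k*T))."""
        return A * T**n * np.exp(-Ea_over_k / T)

    # Illustrative O2-dissociation-like parameters (cm^3 mol^-1 s^-1, K)
    A, n, Ea_over_k = 2.0e21, -1.5, 59500.0
    for T in (5000.0, 10000.0, 30000.0):
        print(T, arrhenius_rate(T, A, n, Ea_over_k))
    ```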

  20. Validating silicon polytrodes with paired juxtacellular recordings: method and dataset.

    PubMed

    Neto, Joana P; Lopes, Gonçalo; Frazão, João; Nogueira, Joana; Lacerda, Pedro; Baião, Pedro; Aarts, Arno; Andrei, Alexandru; Musa, Silke; Fortunato, Elvira; Barquinha, Pedro; Kampff, Adam R

    2016-08-01

    Cross-validating new methods for recording neural activity is necessary to accurately interpret and compare the signals they measure. Here we describe a procedure for precisely aligning two probes for in vivo "paired-recordings" such that the spiking activity of a single neuron is monitored with both a dense extracellular silicon polytrode and a juxtacellular micropipette. Our new method allows for efficient, reliable, and automated guidance of both probes to the same neural structure with micrometer resolution. We also describe a new dataset of paired-recordings, which is available online. We propose that our novel targeting system, and ever expanding cross-validation dataset, will be vital to the development of new algorithms for automatically detecting/sorting single-units, characterizing new electrode materials/designs, and resolving nagging questions regarding the origin and nature of extracellular neural signals. Copyright © 2016 the American Physiological Society.

  2. Validating silicon polytrodes with paired juxtacellular recordings: method and dataset

    PubMed Central

    Lopes, Gonçalo; Frazão, João; Nogueira, Joana; Lacerda, Pedro; Baião, Pedro; Aarts, Arno; Andrei, Alexandru; Musa, Silke; Fortunato, Elvira; Barquinha, Pedro; Kampff, Adam R.

    2016-01-01

    Cross-validating new methods for recording neural activity is necessary to accurately interpret and compare the signals they measure. Here we describe a procedure for precisely aligning two probes for in vivo “paired-recordings” such that the spiking activity of a single neuron is monitored with both a dense extracellular silicon polytrode and a juxtacellular micropipette. Our new method allows for efficient, reliable, and automated guidance of both probes to the same neural structure with micrometer resolution. We also describe a new dataset of paired-recordings, which is available online. We propose that our novel targeting system, and ever expanding cross-validation dataset, will be vital to the development of new algorithms for automatically detecting/sorting single-units, characterizing new electrode materials/designs, and resolving nagging questions regarding the origin and nature of extracellular neural signals. PMID:27306671

  3. Prognostics of Power Electronics, Methods and Validation Experiments

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.; Celaya, Jose R.; Biswas, Gautam; Goebel, Kai

    2012-01-01

    Failure of electronic devices is a concern for future electric aircraft, which will see an increase in electronics used to drive and control safety-critical equipment throughout the aircraft. As a result, investigation of precursors to failure in electronics and prediction of the remaining life of electronic components are of key importance. DC-DC power converters are power electronics systems typically employed as sourcing elements for avionics equipment. Current research efforts in prognostics for these power systems focus on the identification of failure mechanisms and the development of accelerated aging methodologies and systems that accelerate the aging process of test devices while continuously measuring key electrical and thermal parameters. Preliminary model-based prognostics algorithms have been developed making use of empirical degradation models and physics-inspired degradation models, with a focus on key components like electrolytic capacitors and power MOSFETs (metal-oxide-semiconductor field-effect transistors). This paper presents current results on the development of validation methods for prognostics algorithms for power electrolytic capacitors, particularly the use of accelerated aging systems for algorithm validation. Validation of prognostics algorithms presents difficulties in practice due to the lack of run-to-failure experiments in deployed systems. By using accelerated experiments, we circumvent this problem in order to define initial validation activities.

  4. Basic overview of method validation in the clinical virology laboratory.

    PubMed

    Newman, Howard; Maritz, Jean

    2017-08-30

    Diagnostic virology laboratories are an essential part of the health system and are often relied upon to provide information to clinicians that will inform clinical decision making. It is therefore imperative that diagnostic results produced in the laboratory are reliable. One way of ensuring quality results is to ensure that all tests are either validated (for tests developed in-house) or verified (for commercial assays that are FDA-approved or CE-labeled). In the diagnostic virology laboratory, these processes can be complex, as both qualitative and quantitative measurements for serological and molecular tests are routinely offered. While there are numerous guidelines governing quality assurance in the virology laboratory, all accrediting agencies insist on tests being validated or verified prior to implementation without providing explicit guidance on the process. As there is no universal guideline on the optimal way to perform validation/verification experiments, this review provides a basic overview of method validation/verification, specific to clinical virology laboratories, and includes an explanation of statistical analysis and acceptance/rejection criteria. Copyright © 2017 John Wiley & Sons, Ltd.

  5. Knowledge Transmission versus Social Transformation: A Critical Analysis of Purpose in Elementary Social Studies Methods Textbooks

    ERIC Educational Resources Information Center

    Butler, Brandon M.; Suh, Yonghee; Scott, Wendy

    2015-01-01

    In this article, the authors investigate the extent to which 9 elementary social studies methods textbooks present the purpose of teaching and learning social studies. Using Stanley's three perspectives of teaching social studies for knowledge transmission, method of intelligence, and social transformation, we analyze how these texts prepare…

  7. Calculation spreadsheet for uncertainty estimation of measurement results in gamma-ray spectrometry and its validation for quality assurance purpose.

    PubMed

    Ceccatelli, Alessia; Dybdal, Ashild; Fajgelj, Ales; Pitois, Aurelien

    2017-03-03

    An Excel calculation spreadsheet has been developed to estimate the uncertainty of measurement results in γ-ray spectrometry. It considers all relevant uncertainty components and calculates the combined standard uncertainty of the measurement result. The calculation spreadsheet has been validated against two independent open-access software packages and is available for download free of charge at: https://nucleus.iaea.org/rpst/ReferenceProducts/Analytical_Methods/index.htm. It provides a simple and easy-to-use template for estimating the uncertainty of γ-ray spectrometry measurement results and supports radioanalytical laboratories seeking accreditation for their measurements using γ-ray spectrometry.
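
    The core operation behind such a spreadsheet is the propagation of independent relative uncertainty components in quadrature. A minimal sketch follows; the component names and values are hypothetical, and the actual spreadsheet covers a fuller set of components.

    ```python
    from math import sqrt

    def combined_standard_uncertainty(value, relative_components):
        """Combine independent relative standard uncertainties in quadrature.

        relative_components are fractional uncertainties (e.g. 0.02 for 2%),
        such as counting statistics or efficiency calibration.
        """
        u_rel = sqrt(sum(u**2 for u in relative_components))
        return value * u_rel

    # Hypothetical activity result (Bq) with components from counting
    # statistics, detector efficiency, emission probability and sample mass
    activity = 152.0
    components = [0.015, 0.030, 0.005, 0.002]
    print(combined_standard_uncertainty(activity, components))  # ~5.2 Bq
    ```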

  8. Survey and assessment of conventional software verification and validation methods

    SciTech Connect

    Miller, L.A.; Groundwater, E.; Mirsky, S.M.

    1993-04-01

    By means of a literature survey, a comprehensive set of methods was identified for the verification and validation of conventional software. The 134 methods so identified were classified according to their appropriateness for various phases of a developmental lifecycle -- requirements, design, and implementation; the last category was subdivided into two, static testing and dynamic testing methods. The methods were then characterized in terms of eight rating factors, four concerning ease-of-use of the methods and four concerning the methods' power to detect defects. Based on these factors, two measurements were developed to permit quantitative comparisons among methods, a Cost-Benefit Metric and an Effectiveness Metric. The Effectiveness Metric was further refined to provide three different estimates for each method, depending on three classes of needed stringency of V&V (determined by ratings of a system's complexity and required integrity). Methods were then rank-ordered for each of the three classes in terms of their overall cost-benefits and effectiveness. The applicability of each method was then assessed for the four identified components of knowledge-based and expert systems, as well as the system as a whole.

  10. Validation method training: nurses' experiences and ratings of work climate.

    PubMed

    Söderlund, Mona; Norberg, Astrid; Hansebo, Görel

    2014-03-01

    Training nursing staff in communication skills can impact on the quality of care for residents with dementia and contributes to nurses' job satisfaction. Changing attitudes and practices takes time and energy and can affect the entire nursing staff, not just the nurses directly involved in a training programme. Therefore, it seems important to study nurses' experiences of a training programme and any influence of the programme on work climate among the entire nursing staff. To explore nurses' experiences of a 1-year validation method training programme conducted in a nursing home for residents with dementia and to describe ratings of work climate before and after the programme. A mixed-methods approach. Twelve nurses participated in the training and were interviewed afterwards. These individual interviews were tape-recorded and transcribed, then analysed using qualitative content analysis. The Creative Climate Questionnaire was administered before (n = 53) and after (n = 56) the programme to the entire nursing staff in the participating nursing home wards and analysed with descriptive statistics. Analysis of the interviews resulted in four categories: being under extra strain, sharing experiences, improving confidence in care situations and feeling uncertain about continuing the validation method. The results of the questionnaire on work climate showed higher mean values in the assessment after the programme had ended. The training strengthened the participating nurses in caring for residents with dementia, but posed an extra strain on them. These nurses also described an extra strain on the entire nursing staff that was not reflected in the results from the questionnaire. The work climate at the nursing home wards might have made it easier to conduct this extensive training programme. Training in the validation method could develop nurses' communication skills and improve their handling of complex care situations. © 2013 Blackwell Publishing Ltd.

  11. Method validation strategies involved in non-targeted metabolomics.

    PubMed

    Naz, Shama; Vallejo, Maria; García, Antonia; Barbas, Coral

    2014-08-01

    Non-targeted metabolomics is the hypothesis-generating, globally unbiased analysis of all the small-molecule metabolites present within a biological system under a given set of conditions. It includes several common steps such as selection of biological samples, sample pre-treatment, set-up of analytical conditions, data acquisition, data analysis by chemometrics, database searching and biological interpretation. Non-targeted metabolomics offers the potential for a holistic approach in biomedical research, in order to improve disease diagnosis and to understand pathological mechanisms. Various analytical methods have been developed based on nuclear magnetic resonance spectroscopy (NMR) and mass spectrometry (MS) coupled with different separation techniques. The key point in any analytical method development is the validation of every step to obtain reliable and reproducible results, and non-targeted metabolomics is no exception, although its analytical challenges are completely new and different from those of targeted methods. This review paper describes the validation strategies currently in use and recommends steps to consider during the development of a non-targeted metabolomics analytical method. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Total organic carbon method for aspirin cleaning validation.

    PubMed

    Holmes, A J; Vanderwielen, A J

    1997-01-01

    Cleaning validation is the process of assuring that cleaning procedures effectively remove the residue from manufacturing equipment/facilities below a predetermined level. This is necessary to assure the quality of future products using the equipment, to prevent cross-contamination, and as a World Health Organization Good Manufacturing Practices requirement. We have applied the Total Organic Carbon (TOC) analysis method to a number of pharmaceutical products. In this article we discuss the TOC method that we developed for measuring residual aspirin on aluminum, stainless steel, painted carbon steel, and plexiglass. These are all surfaces that are commonly found as part of pharmaceutical production equipment. The method offers low detection capability (parts per million levels) and rapid sample analysis time. The recovery values ranged from 25% for aluminum to about 75% for plexiglass with a precision of 13% or less. The results for the plexiglass tended to vary with the age of the surface making the determination of an accurate recovery value difficult for this type of surface. We found that the TOC method is applicable for determining residual aspirin on pharmaceutical surfaces and will be useful for cleaning validation.

  13. Validation of an Impedance Eduction Method in Flow

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Jones, Michael G.; Parrott, Tony L.

    2004-01-01

    This paper reports results of a research effort to validate a method for educing the normal incidence impedance of a locally reacting liner, located in a grazing incidence, nonprogressive acoustic wave environment with flow. The results presented in this paper test the ability of the method to reproduce the measured normal incidence impedance of a solid steel plate and two soft test liners in a uniform flow. The test liners are known to be locally reacting and exhibit no measurable amplitude-dependent impedance nonlinearities or flow effects. Baseline impedance spectra for these liners were therefore established from measurements in a conventional normal incidence impedance tube. A key feature of the method is the expansion of the unknown impedance function as a piecewise continuous polynomial with undetermined coefficients. Stewart's adaptation of the Davidon-Fletcher-Powell optimization algorithm is used to educe the normal incidence impedance at each Mach number by optimizing an objective function. The method is shown to reproduce the measured normal incidence impedance spectrum for each of the test liners, thus validating its usefulness for determining the normal incidence impedance of test liners for a broad range of source frequencies and flow Mach numbers.
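
    The eduction idea — parameterize the unknown impedance as a polynomial and minimize a mismatch objective — can be illustrated with a toy problem. In the sketch below, model_pressure is a deliberately simple stand-in for the duct propagation model, and a generic quasi-Newton optimizer (BFGS) stands in for the Davidon-Fletcher-Powell algorithm; nothing here reproduces the paper's acoustic code.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    freqs = np.linspace(500, 3000, 20)  # source frequencies (Hz), hypothetical

    def model_pressure(imp_coeffs, f):
        """Stand-in forward model: a response driven by an impedance
        described as a polynomial in (scaled) frequency."""
        z = np.polyval(imp_coeffs, f / 1000.0)
        return 1.0 / (1.0 + z**2)

    # Synthesize "measured" data from a known impedance polynomial
    true_coeffs = [0.3, -0.5, 1.2]
    measured = model_pressure(true_coeffs, freqs)

    def objective(coeffs):
        # Least-squares mismatch between modeled and measured pressures
        return np.sum((model_pressure(coeffs, freqs) - measured) ** 2)

    result = minimize(objective, x0=[0.0, 0.0, 1.0], method="BFGS")
    # Should approximately recover true_coeffs (up to sign, since this
    # toy model depends only on z**2)
    print(result.x)
    ```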

  14. Method validation for preparing urine samples for downstream proteomic and metabolomic applications.

    PubMed

    Ammerlaan, Wim; Trezzi, Jean-Pierre; Mathay, Conny; Hiller, Karsten; Betsou, Fay

    2014-10-01

    Formal validation of methods for biospecimen processing in the context of accreditation in laboratories and biobanks is lacking. A protocol for processing of a biospecimen (urine) was validated for fitness-for-purpose in terms of key downstream endpoints. Urine processing was optimized for centrifugation conditions on the basis of microparticle counts at room temperature (RT) and at 4°C. The optimal protocol was validated for performance (microparticle counts), and for reproducibility and robustness for centrifugation temperature (4°C vs. RT) and brake speed (soft, medium, hard). Acceptance criteria were based on microparticle counts, cystatin C and creatinine concentrations, and the metabolomic profile. The optimal protocol was a 20-min, 12,000 g centrifugation at 4°C, and was validated for urine collection in terms of microparticle counts. All reproducibility acceptance criteria were met. The protocol was robust for centrifugation at 4°C versus RT for all parameters. The protocol was considered robust overall in terms of brake speeds, although a hard brake gave significantly fewer microparticles than a soft brake. We validated a urine processing method suitable for downstream proteomic and metabolomic applications. Temperature and brake speed can influence analytic results, with 4°C and high brake speed considered optimal. Laboratories and biobanks should ensure these conditions are systematically recorded in the scope of accreditation.

  15. Methodology for the validation of analytical methods involved in uniformity of dosage units tests.

    PubMed

    Rozet, E; Ziemons, E; Marini, R D; Boulanger, B; Hubert, Ph

    2013-01-14

    Validation of analytical methods is required prior to their routine use. In addition, the current implementation of the Quality by Design (QbD) framework in the pharmaceutical industry aims at improving the quality of end products starting from their early design stage. However, neither regulatory guidelines nor the published methodologies for assessing method validation propose decision methodologies that effectively take into account the final purpose of the developed analytical method. In this work a solution is proposed for the specific case of validating analytical methods involved in the assessment of content uniformity or uniformity of dosage units of a batch of pharmaceutical drug products, as proposed in the European or US pharmacopoeias. This methodology uses statistical tolerance intervals as decision tools. Moreover, it adequately defines the Analytical Target Profile of analytical methods in order to obtain methods that allow correct decisions to be made about content uniformity or uniformity of dosage units with high probability. The applicability of the proposed methodology is further illustrated using an HPLC-UV assay as well as a near-infrared spectrophotometric method.
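
    As an illustration of the decision tool named above, the sketch below computes one common form of a beta-expectation tolerance interval under a normality assumption; the paper's methodology is more elaborate, and the recovery data and acceptance limits here are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    def beta_expectation_tolerance_interval(x, beta=0.95):
        """Interval whose expected coverage of a future single result
        equals beta, assuming normally distributed results."""
        x = np.asarray(x, dtype=float)
        n = x.size
        mean, sd = x.mean(), x.std(ddof=1)
        k = stats.t.ppf((1 + beta) / 2, df=n - 1) * np.sqrt(1 + 1 / n)
        return mean - k * sd, mean + k * sd

    # Hypothetical recoveries (%) from validation runs of a dosage-unit assay
    recoveries = np.array([99.1, 100.4, 98.7, 101.2, 99.8, 100.9, 99.5, 100.1])
    low, high = beta_expectation_tolerance_interval(recoveries, beta=0.95)
    # Compare the interval against pre-set acceptance limits, e.g. 85-115%
    print(f"[{low:.1f}%, {high:.1f}%]")
    ```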

  16. Characterization and validation of a Portuguese natural reference soil to be used as substrate for ecotoxicological purposes.

    PubMed

    Caetano, A L; Gonçalves, F; Sousa, J P; Cachada, A; Pereira, E; Duarte, A C; Ferreira da Silva, E; Pereira, R

    2012-03-01

    This study describes the first attempt to validate a Portuguese natural soil (PTRS1) to be used as a reference soil for ecotoxicological purposes, with two aims: (i) to obtain ecotoxicological data for the derivation of Soil Screening Values (SSVs) with regional relevance, the soil acting as a substrate to be spiked with ranges of concentrations of the chemicals under evaluation; and (ii) to act as a control and as a substrate for the dilution of contaminated soils in ecotoxicological assays performed to evaluate the ecotoxicity of contaminated soils, in tier 2 of risk assessment frameworks applied to contaminated lands. The PTRS1 is a cambisol from a granitic area integrated in the Central Iberian Zone. After chemical characterization of the soil in terms of pseudo-total metals, PAHs, PCBs and pesticide contents, it was apparent that some metals (Ba, Be, Co, Cr and V) surpass the Dutch Target Values (DTVs) corrected for the percentage of organic matter and clay of the PTRS1. Nevertheless, these metals displayed total concentrations below the background concentrations described for Portuguese soils in general. The same was observed for aldrin, endosulfan I, endosulfan II, heptachlor epoxide, and heptachlor; however, the exceedances of the corrected DTVs were negligible. The performance of invertebrate and plant species commonly used in standard ecotoxicological assays was compromised neither by the soil properties nor by the soil metal contents. The results obtained suggest that the PTRS1 can be used as a natural reference soil in ecotoxicological assays carried out under the scope of ecological risk assessment.

  17. Determination of Al in cake mix: Method validation and estimation of measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Andrade, G.; Rocha, O.; Junqueira, R.

    2016-07-01

    An analytical method for the determination of Al in cake mix was developed. Acceptable values were obtained for the following parameters: linearity, detection limit - LOD (5.00 mg kg-1), quantification limit - LOQ (12.5 mg kg-1), recovery assay values (between 91 and 102%), relative standard deviation under repeatability and within-reproducibility conditions (<20.0%), and measurement uncertainty (<10.0%). The results of the validation process showed that the proposed method is fit for purpose.
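
    To show how parameters of this kind are typically derived, the sketch below estimates LOD and LOQ from calibration residuals using the common ICH-style 3.3σ/S and 10σ/S conventions, plus a spike recovery. All numbers are hypothetical, not the study's data.

    ```python
    import numpy as np

    # Hypothetical calibration data: Al standards (mg/kg) vs instrument signal
    conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
    signal = np.array([0.002, 0.110, 0.221, 0.438, 0.881])

    slope, intercept = np.polyfit(conc, signal, 1)
    residual_sd = np.std(signal - (slope * conc + intercept), ddof=2)

    # Calibration-based estimates: LOD = 3.3*s/slope, LOQ = 10*s/slope
    lod = 3.3 * residual_sd / slope
    loq = 10.0 * residual_sd / slope

    # Recovery from a spiked sample (hypothetical values, mg/kg)
    spiked_found, spike_added, unspiked_found = 48.7, 50.0, 0.0
    recovery = 100.0 * (spiked_found - unspiked_found) / spike_added
    print(f"LOD={lod:.2f} mg/kg, LOQ={loq:.2f} mg/kg, recovery={recovery:.1f}%")
    ```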

  18. Validation of a hybrid life-cycle inventory analysis method.

    PubMed

    Crawford, Robert H

    2008-08-01

    The life-cycle inventory analysis step of a life-cycle assessment (LCA) may currently suffer from several limitations, mainly concerned with the use of incomplete and unreliable data sources and methods of assessment. Many past LCA studies have used traditional inventory analysis methods, namely process analysis and input-output analysis. More recently, hybrid inventory analysis methods have been developed, combining these two traditional methods in an attempt to minimise their limitations. In light of recent improvements, these hybrid methods need to be compared and validated, as they too have been considered to have several limitations. This paper evaluates a recently developed hybrid inventory analysis method which aims to overcome the limitations of previous methods. It was found that the truncation associated with process analysis can be up to 87%, reflecting the considerable shortcomings in the quantity of process data currently available. Capital inputs were found to account for up to 22% of the total inputs to a particular product. These findings suggest that current best-practice methods are sufficiently accurate for most typical applications, but this is heavily dependent upon data quality and availability. The use of input-output data assists in improving the system boundary completeness of life-cycle inventories. However, the use of input-output analysis alone does not always provide an accurate model for replacing process data. Further improvements in the quantity of process data currently available are needed to increase the reliability of life-cycle inventories.

  19. Validation of spectrophotometric method for lactulose assay in syrup preparation

    NASA Astrophysics Data System (ADS)

    Mahardhika, Andhika Bintang; Novelynda, Yoshella; Damayanti, Sophi

    2015-09-01

    Lactulose is a synthetic disaccharide widely used in the food and pharmaceutical fields. In the pharmaceutical field, lactulose is used as an osmotic laxative in a syrup dosage form. This research aimed to validate a spectrophotometric method to determine the levels of lactulose in syrup preparations and in a commercial sample. Lactulose is hydrolyzed by hydrochloric acid to form fructose and galactose. The fructose is reacted with resorcinol reagent, forming a compound that gives an absorption peak at 485 nm. The analytical method was validated, after which the lactulose content in syrup preparations was determined. The calibration curve was linear in the range of 30-100 μg/mL with a correlation coefficient (r) of 0.9996, a coefficient of variance (Vxo) of 1.1%, a limit of detection of 2.32 μg/mL, and a limit of quantitation of 7.04 μg/mL. The accuracy test for the lactulose assay in the syrup preparation showed recoveries of 96.6 to 100.8%. The repeatability test of the lactulose assay in standard lactulose solution and in the syrup sample preparation showed coefficients of variation (CV) of 0.75% and 0.7%. The intermediate precision (interday) test resulted in coefficients of variation of 1.06% on the first day, 0.99% on the second day, and 0.95% on the third day. This research yielded a valid analysis method; the levels of lactulose in syrup preparations of samples A, B, and C were 101.6, 100.5, and 100.6%, respectively.

  20. Validation of two methods for fatty acids analysis in eggs.

    PubMed

    Mazalli, Mônica R; Bragagnolo, Neura

    2007-05-01

    A comparative study of two methods (lipid extraction followed by saponification and methylation, and direct methylation) to determine the fatty acids in egg yolk was carried out. Direct methylation of the samples resulted in lower fatty acid content and greater variation in the results than lipid extraction followed by saponification and methylation. The low repeatability observed for the direct HCl methylation method was probably due to less efficient extraction and conversion of the fatty acids into their methyl esters as compared to the same procedure starting with the lipid extract. As lipid extraction followed by esterification was shown to be more precise, it was validated using powdered egg certified as reference material (RM 8415, NIST) and applied to samples of egg, egg enriched with polyunsaturated omega-3 fatty acids (n-3 PUFA), and commercial spray-dried whole egg powder.

  1. Forward Modeling of Electromagnetic Methods Using General Purpose Finite Element Software

    NASA Astrophysics Data System (ADS)

    Butler, S. L.

    2015-12-01

    Electromagnetic methods are widely used in mineral exploration and environmental applications and are increasingly being used in hydrocarbon exploration. Forward modeling of electromagnetic methods remains challenging and is mostly carried out using purpose-built research software. General purpose commercial modeling software has become increasingly flexible and powerful in recent years and is now capable of modeling field geophysical electromagnetic techniques. In this contribution, I will show examples of the use of commercial finite element modeling software Comsol Multiphysics for modeling frequency and time-domain electromagnetic techniques as well as for modeling the Very Low Frequency technique and magnetometric resistivity. Comparisons are made with analytical solutions, benchmark numerical solutions, analog experiments and field data. Although some calculations take too long to be practical as part of an inversion scheme, I suggest that modeling of this type will be useful for modeling novel techniques and for educational purposes.

  2. Method validation for methanol quantification present in working places

    NASA Astrophysics Data System (ADS)

    Muna, E. D. M.; Bizarri, C. H. B.; Maciel, J. R. M.; da Rocha, G. P.; de Araújo, I. O.

    2015-01-01

    Given the widespread use of methanol by different industry sectors and the high toxicity associated with this substance, an analytical method is needed that can determine levels of methanol in the air of working environments in a sensitive, precise and accurate way. Based on the methodology established by the National Institute for Occupational Safety and Health (NIOSH), a methodology for the determination of methanol collected in silica gel tubes was validated; its effectiveness was demonstrated through participation in the international collaborative program sponsored by the American Industrial Hygiene Association (AIHA).

  3. Brazilian Center for the Validation of Alternative Methods (BraCVAM) and the process of validation in Brazil.

    PubMed

    Presgrave, Octavio; Moura, Wlamir; Caldeira, Cristiane; Pereira, Elisabete; Bôas, Maria H Villas; Eskes, Chantra

    2016-03-01

    The need for the creation of a Brazilian centre for the validation of alternative methods was recognised in 2008, and members of academia, industry and existing international validation centres immediately engaged with the idea. In 2012, co-operation between the Oswaldo Cruz Foundation (FIOCRUZ) and the Brazilian Health Surveillance Agency (ANVISA) instigated the establishment of the Brazilian Center for the Validation of Alternative Methods (BraCVAM), which was officially launched in 2013. The Brazilian validation process follows OECD Guidance Document No. 34, where BraCVAM functions as the focal point to identify and/or receive requests from parties interested in submitting tests for validation. BraCVAM then informs the Brazilian National Network on Alternative Methods (RENaMA) of promising assays, which helps with prioritisation and contributes to the validation studies of selected assays. A Validation Management Group supervises the validation study, and the results obtained are peer-reviewed by an ad hoc Scientific Review Committee, organised under the auspices of BraCVAM. Based on the peer-review outcome, BraCVAM will prepare recommendations on the validated test method, which will be sent to the National Council for the Control of Animal Experimentation (CONCEA). CONCEA is in charge of the regulatory adoption of all validated test methods in Brazil, following an open public consultation.

  4. Systematic method for the validation of long-term temperature measurements

    NASA Astrophysics Data System (ADS)

    Abdel-Jaber, H.; Glisic, B.

    2016-12-01

    Structural health monitoring (SHM) is the process of collecting and analyzing measurements of various structural and environmental parameters on a structure for the purpose of formulating conclusions on the performance and condition of the structure. Accurate long-term temperature data are critical for SHM applications, as they are often used to compensate other measurements (e.g., strain) or to understand the thermal behavior of the structure. Despite the need for accurate long-term temperature data, there are currently no validation methods to ensure the accuracy of collected data. This paper presents a novel method for the validation of long-term temperature measurements from any type of sensor. The method relies on modeling the dependence of temperature measurements inside a structure on ambient temperature measurements collected from a reliable nearby weather tower. The model is then used to predict future measurements and assess whether or not future measurements conform to predictions. The paper presents both the model selection process and the sensor malfunction detection process. To illustrate and validate the method, it is applied to data from a monitoring system installed on a real structure, Streicker Bridge on the Princeton University campus. Application of the method to data collected from about forty sensors over five years showed the potential of the method to confirm normal sensor function, as well as to characterize sensor defects and minor drift.
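
    A heavily simplified sketch of the two stages described above — fit a model of the in-structure temperature against ambient temperature, then flag readings falling outside a prediction band. A plain linear model and a 3-sigma band stand in for the paper's model selection procedure; the data are synthetic.

    ```python
    import numpy as np

    def fit_ambient_model(t_ambient, t_sensor):
        """Fit a linear dependence of in-structure temperature on
        ambient temperature; return coefficients and residual sigma."""
        coeffs = np.polyfit(t_ambient, t_sensor, 1)
        residuals = t_sensor - np.polyval(coeffs, t_ambient)
        return coeffs, residuals.std(ddof=2)

    def flag_malfunction(coeffs, sigma, t_ambient_new, t_sensor_new, k=3.0):
        """Flag readings outside the k-sigma prediction band."""
        predicted = np.polyval(coeffs, t_ambient_new)
        return np.abs(t_sensor_new - predicted) > k * sigma

    # Synthetic training data (deg C)
    rng = np.random.default_rng(0)
    amb = rng.uniform(-10, 30, 200)
    sens = 0.8 * amb + 5 + rng.normal(0, 0.5, amb.size)
    coeffs, sigma = fit_ambient_model(amb, sens)
    # A sensor reading of 40 C at 20 C ambient is clearly anomalous
    print(flag_malfunction(coeffs, sigma, np.array([20.0]), np.array([40.0])))
    ```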

  5. Establishing survey validity and reliability for American Indians through "think aloud" and test-retest methods.

    PubMed

    Hauge, Cindy Horst; Jacobs-Knight, Jacque; Jensen, Jamie L; Burgess, Katherine M; Puumala, Susan E; Wilton, Georgiana; Hanson, Jessica D

    2015-06-01

    The purpose of this study was to use a mixed-methods approach to determine the validity and reliability of measurements used within an alcohol-exposed pregnancy prevention program for American Indian women. To establish validity, content experts provided input into the survey measures, and a "think aloud" methodology was conducted with 23 American Indian women. After revising the measurements based on this input, a test-retest was conducted with 79 American Indian women who were randomized to complete either the original measurements or the new, modified measurements. The test-retest revealed that some of the questions performed better in the modified version, whereas others appeared to be more reliable in the original version. The mixed-methods approach was a useful methodology for gathering feedback on survey measurements from American Indian participants and for indicating specific survey questions that needed to be modified for this population. © The Author(s) 2015.

  6. The finite cell method for bone simulations: verification and validation.

    PubMed

    Ruess, Martin; Tal, David; Trabelsi, Nir; Yosibash, Zohar; Rank, Ernst

    2012-03-01

    Standard methods for predicting bone's mechanical response from quantitative computer tomography (qCT) scans are mainly based on classical h-version finite element methods (FEMs). Due to the low-order polynomial approximation, the need for segmentation, and the simplified approach of assigning a constant material property to each element, h-FE models often compromise the accuracy and efficiency of the solution. Herein, a non-standard method, the finite cell method (FCM), is proposed for predicting the mechanical response of the human femur. The FCM is free of the above limitations associated with h-FEMs and is orders of magnitude more efficient, allowing its use in the setting of computational steering. This non-standard method applies a fictitious domain approach to simplify the modeling of a complex bone geometry obtained directly from a qCT scan, and easily takes into consideration the heterogeneous material distribution of the various bone regions of the femur. The fundamental principles and properties of the FCM are briefly described in relation to bone analysis, providing a theoretical basis for the comparison with the p-FEM as a high-quality reference analysis and simulation method. Both p-FEM and FCM results are validated by comparison with an in vitro experiment on a fresh-frozen femur.

  7. Experimental validation of boundary element methods for noise prediction

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Oswald, Fred B.

    1992-01-01

    Experimental validation of methods to predict radiated noise is presented. A combined finite element and boundary element model was used to predict the vibration and noise of a rectangular box excited by a mechanical shaker. The predicted noise was compared to sound power measured by the acoustic intensity method. Inaccuracies in the finite element model shifted the resonance frequencies by about 5 percent. The predicted and measured sound power levels agree within about 2.5 dB. In a second experiment, measured vibration data was used with a boundary element model to predict noise radiation from the top of an operating gearbox. The predicted and measured sound power for the gearbox agree within about 3 dB.

  8. Bioanalytical method development and validation: Critical concepts and strategies.

    PubMed

    Moein, Mohammad Mahdi; El Beqqali, Aziza; Abdel-Rehim, Mohamed

    2017-02-01

    Bioanalysis is an essential part of drug discovery and development. Bioanalysis concerns the analysis of analytes (drugs, metabolites, biomarkers) in biological samples and involves several steps, from sample collection to sample analysis and data reporting. The first step is sample collection from clinical or preclinical studies, after which the samples are sent to the laboratory for analysis. The second step is sample clean-up (sample preparation), a very important step in bioanalysis. In order to reach reliable results, a robust and stable sample preparation method should be applied. The role of sample preparation is to remove interferences from the sample matrix and improve analytical system performance. Sample preparation is often labor intensive and time consuming. The last step is sample analysis and detection. For separation and detection, liquid chromatography-tandem mass spectrometry (LC-MS/MS) is the method of choice in bioanalytical laboratories, owing to the high selectivity and high sensitivity of the technique. In addition, the analyte's chemical structure and chemical properties must be known before bioanalytical work begins. This review provides an overview of bioanalytical method development and validation. The main principles of method validation are discussed, GLP and regulated bioanalysis are described, and commonly used sample preparation techniques are presented, along with the role of LC-MS/MS in modern bioanalysis. The present review focuses on the bioanalysis of small molecules. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Validated spectrophotometric methods for determination of some oral hypoglycemic drugs.

    PubMed

    Farouk, M; Abdel-Satar, O; Abdel-Aziz, O; Shaaban, M

    2011-02-01

    Four accurate, precise, rapid, reproducible, and simple spectrophotometric methods were validated for determination of repaglinide (RPG), pioglitazone hydrochloride (PGL) and rosiglitazone maleate (RGL). The first two methods were based on the formation of a charge-transfer purple-colored complex of chloranilic acid with RPG and RGL, with molar absorptivities of 1.23 × 10³ and 8.67 × 10² l·mol⁻¹·cm⁻¹ and Sandell's sensitivities of 0.367 and 0.412 μg·cm⁻², respectively, and of an ion-pair yellow-colored complex of bromophenol blue with RPG, PGL and RGL, with molar absorptivities of 8.86 × 10³, 6.95 × 10³, and 7.06 × 10³ l·mol⁻¹·cm⁻¹, respectively, and a Sandell's sensitivity of 0.051 μg·cm⁻² for all ion-pair complexes. The influence of different parameters on color formation was studied to determine optimum conditions for the visible spectrophotometric methods. The other spectrophotometric methods were adopted for determination of the studied drugs in the presence of their acid-, alkaline- and oxidative-degradation products, by computing derivative and pH-induced difference spectrophotometry, as stability-indicating techniques. All the proposed methods were validated according to the International Conference on Harmonization guidelines and successfully applied to determination of the studied drugs in pure form and in pharmaceutical preparations, with good extraction recovery ranges of 98.7-101.4%, 98.2-101.3%, and 99.9-101.4% for RPG, PGL, and RGL, respectively. Relative standard deviations did not exceed 1.6%, indicating that the proposed methods have good repeatability and reproducibility. All the obtained results were statistically compared to the official method used for RPG analysis and the manufacturers' methods used for PGL and RGL analysis, respectively, and no significant differences were found.

  10. Validation and applications of an expedited tablet friability method.

    PubMed

    Osei-Yeboah, Frederick; Sun, Changquan Calvin

    2015-04-30

    The harmonized monograph on the tablet friability test in the United States Pharmacopeia (USP), European Pharmacopeia (Pharm. Eur.), and Japanese Pharmacopeia (JP) is designed to assess the adequacy of the mechanical strength of a batch of tablets. Currently, its potential applications in formulation development have been limited by the batch requirement, which is both labor and material intensive. To this end, we have developed an expedited tablet friability test method using the existing USP test apparatus. The validity of the expedited friability method is established by showing that friability data from the expedited method are not statistically different from those from the standard pharmacopeial method, using materials of very different mechanical properties, i.e., microcrystalline cellulose and dibasic calcium phosphate dihydrate. Using the expedited friability method, we have shown that the relationship between tablet friability and tablet mechanical strength follows a power-law expression. Furthermore, potential applications of this expedited friability test in facilitating systematic and efficient tablet formulation and tooling design are demonstrated with examples.
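
    A power-law relationship of the kind mentioned above, F = a·σ^b, is linear in log-log coordinates and can be fitted in two lines. The paired values below are hypothetical, not the study's measurements.

    ```python
    import numpy as np

    # Hypothetical pairs: tablet tensile strength (MPa) vs friability (%)
    strength = np.array([0.5, 0.8, 1.2, 1.8, 2.5])
    friability = np.array([2.40, 1.10, 0.55, 0.27, 0.15])

    # A power law F = a * sigma**b becomes a straight line in log-log space
    b, log_a = np.polyfit(np.log(strength), np.log(friability), 1)
    a = np.exp(log_a)
    print(f"friability ~ {a:.2f} * strength^{b:.2f}")
    ```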

  11. The ICVSIE: A General Purpose Integral Equation Method for Bio-Electromagnetic Analysis.

    PubMed

    Gomez, Luis J; Yucel, Abdulkadir C; Michielssen, Eric

    2017-05-16

    An internally combined volume surface integral equation (ICVSIE) for analyzing electromagnetic (EM) interactions with biological tissue, with wide-ranging diagnostic, therapeutic, and research applications, is proposed. The ICVSIE is a system of integral equations in terms of volume and surface equivalent currents in biological tissue subject to fields produced by externally or internally positioned devices. The system is created by using equivalence principles and solved numerically; the resulting current values are used to evaluate scattered and total electric fields, specific absorption rates, and related quantities. The validity, applicability, and efficiency of the ICVSIE are demonstrated by EM analysis of transcranial magnetic stimulation, magnetic resonance imaging, and neuromuscular electrical stimulation. Unlike previous integral equations, the ICVSIE is stable regardless of the electric permittivities of the tissue or the frequency of operation, providing an application-agnostic computational framework for EM-biomedical analysis. Use of the general-purpose and robust ICVSIE permits streamlining the development, deployment, and safety analysis of EM-biomedical technologies.

  12. Examining the Content Validity of the WHOQOL-BRF from Respondents' Perspective by Quantitative Methods

    ERIC Educational Resources Information Center

    Yao, Grace; Wu, Chia-Huei; Yang, Cheng-Ta

    2008-01-01

    Content validity, the extent to which a measurement reflects the specific intended domain of content, is a basic type of validity for a valid measurement. It is usually examined qualitatively and relies on experts' subjective judgments rather than on respondents' responses. Therefore, the purpose of this study was to introduce and demonstrate how to use…

  13. Methods for detecting residues of cleaning agents during cleaning validation.

    PubMed

    Westman, L; Karlsson, G

    2000-01-01

    Cleaning validation procedures are carried out in order to assure that residues of cleaning agents are within acceptable limits after the cleaning process. Cleaning agents often consist of a mixture of various surfactants which are in a highly diluted state after the water rinsing procedure has been completed. This makes it difficult to find appropriate analytical methods that are sensitive enough to detect the cleaning agents. In addition, it is advantageous for the analytical methods to be simple to perform and to give results quickly. In this study, four different analytical methods are compared: visual detection of foam, pH, conductivity measurements, and analysis of total organic carbon (TOC). TOC was used as a reference method when evaluating the other three potential methods. The analyses were performed on different dilutions of the cleaning agents Vips Neutral, RBS-25, Debisan and Perform. The results demonstrated that the most sensitive method for analysis of Vips Neutral, Debisan and Perform is visual detection of foam, by which it is possible to detect concentrations of cleaning agents down to 10 micrograms/mL. RBS-25 was not detected below 200 micrograms/mL, probably because it is formulated with low-foaming surfactants. TOC analysis is less sensitive but has the advantage of being a quantitative analysis, while visual detection of foam is a semi-quantitative method. Visual detection of foam is easy to perform, gives a quick result, and requires no expensive instrumentation. The sensitivity of each method was found to be dependent upon the type of cleaning agent that was analyzed.

  14. Validity and applicability of a new recording method for hypertension.

    PubMed

    Mas-Heredia, Minerva; Molés-Moliner, Eloisa; González-de Paz, Luis; Kostov, Belchin; Ortiz-Molina, Jacinto; Mauri-Vázquez, Vanesa; Menacho-Pascual, Ignacio; Cararach-Salami, Daniel; Sierra-Benito, Cristina; Sisó-Almirall, Antoni

    2014-09-01

    Blood pressure measurement methods and conditions are determinants of hypertension diagnosis. A recent British guideline recommends systematic 24-h ambulatory blood pressure monitoring. However, these devices are not available at all health centers, and each can only be used by one patient per day. The aim of this study was to test a new blood pressure recording method to see if it gave the same diagnostic results as 24-h blood pressure monitoring. One-hour blood pressure monitoring under routine clinical practice conditions was compared with the standard method of daytime recording by analyzing the coefficient of correlation and Bland-Altman plots. The kappa index was used to calculate the degree of agreement. Method sensitivity and specificity were also analyzed. Of the 102 participants, 89 (87.3%) obtained the same diagnosis regardless of method, with high between-method agreement (κ = 0.81; 95% confidence interval, 0.71-0.91). We observed robust correlations between diastolic (r = 0.85) and systolic blood pressure (r = 0.76) readings. The sensitivity and specificity of the new method for diagnosing white coat hypertension were 85.2% (95% confidence interval, 67.5%-94.1%) and 92% (95% confidence interval, 83.6%-96.3%), respectively. One-hour blood pressure monitoring is a valid and reliable method for diagnosing hypertension and for classifying hypertension subpopulations, especially white coat hypertension and refractory hypertension. It also leads to more productive use of monitoring instruments. Copyright © 2013 Sociedad Española de Cardiología. Published by Elsevier Espana. All rights reserved.
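
    The two agreement statistics used in this study are straightforward to compute. The sketch below implements Cohen's kappa and Bland-Altman limits of agreement; the readings and the 135 mmHg diagnostic cut-off are hypothetical illustrations, not the study's data.

    ```python
    import numpy as np

    def cohens_kappa(a, b):
        """Cohen's kappa for two binary diagnostic classifications."""
        a, b = np.asarray(a), np.asarray(b)
        po = np.mean(a == b)                                # observed agreement
        pe = np.mean(a) * np.mean(b) + (1 - np.mean(a)) * (1 - np.mean(b))
        return (po - pe) / (1 - pe)                         # chance-corrected

    def bland_altman_limits(x, y):
        """Bias and 95% limits of agreement between two measurement methods."""
        d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
        bias = d.mean()
        loa = 1.96 * d.std(ddof=1)
        return bias, bias - loa, bias + loa

    # Hypothetical systolic readings: 1-h monitoring vs daytime reference
    one_hour = np.array([128, 142, 135, 150, 122, 138])
    daytime = np.array([130, 140, 137, 148, 125, 136])
    print(bland_altman_limits(one_hour, daytime))
    print(cohens_kappa(one_hour >= 135, daytime >= 135))
    ```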

  15. Validated spectrofluorometric methods for determination of amlodipine besylate in tablets

    NASA Astrophysics Data System (ADS)

    Abdel-Wadood, Hanaa M.; Mohamed, Niveen A.; Mahmoud, Ashraf M.

    2008-08-01

    Two simple and sensitive spectrofluorometric methods have been developed and validated for determination of amlodipine besylate (AML) in tablets. The first method was based on the condensation reaction of AML with ninhydrin and phenylacetaldehyde in buffered medium (pH 7.0), resulting in the formation of a green fluorescent product which exhibits excitation and emission maxima at 375 and 480 nm, respectively. The second method was based on the reaction of AML with 7-chloro-4-nitro-2,1,3-benzoxadiazole (NBD-Cl) in a buffered medium (pH 8.6), resulting in the formation of a highly fluorescent product which was measured fluorometrically at 535 nm (λex, 480 nm). The factors affecting the reactions were studied and optimized. Under the optimum reaction conditions, linear relationships with good correlation coefficients (0.9949-0.9997) were found between the fluorescence intensity and the concentration of AML in the ranges of 0.35-1.8 and 0.55-3.0 μg ml⁻¹ for the ninhydrin and NBD-Cl methods, respectively. The limits of detection were 0.09 and 0.16 μg ml⁻¹ for the first and second methods, respectively. The precision of the methods was satisfactory; relative standard deviations ranged from 1.69 to 1.98%. The proposed methods were successfully applied to the analysis of AML in pure and pharmaceutical dosage forms with good accuracy; recovery percentages ranged from 100.4 to 100.8 ± 1.70-2.32%. The results compared favorably with those of the reported method.

  16. Validating an efficient method to quantify motion sickness.

    PubMed

    Keshavarz, Behrang; Hecht, Heiko

    2011-08-01

    Motion sickness (MS) can be a debilitating side effect associated with motion in real or virtual environments. We analyzed the effect of expectancy on MS and propose and validate a fast and simple MS measure. Several questionnaires measure MS before or after stimulus presentation, but no satisfactory tool has been established to quickly capture MS data during exposure. To fill this gap, we introduce the Fast MS Scale (FMS), a verbal rating scale ranging from zero (no sickness at all) to 20 (frank sickness). Also, little is known about the role of expectancy effects in MS studies. We conducted an experiment that addressed this issue. For this study, 126 volunteers participated in two experiments. During stimulus presentation, participants had to verbally rate the severity of MS every minute before filling in the Simulator Sickness Questionnaire (SSQ). To measure expectancy effects, participants were separated into three groups with either positive, negative, or neutral expectations. We compared the verbal ratings with the SSQ scores. Pearson correlations were high for both the SSQ total score (r = .785) and the nausea subscore (r = .828). No expectancy effects were found. The FMS is a fast and valid method to obtain MS data. It offers the possibility to record MS during stimulus presentation and to capture its time course. We found expectancy not to play a crucial role in MS. However, the FMS has some limitations. The FMS offers improved MS measurement. It is fast and efficient and can be performed online in environments such as virtual reality.

  17. Using formal methods for content validation of medical procedure documents.

    PubMed

    Cota, Érika; Ribeiro, Leila; Bezerra, Jonas Santos; Costa, Andrei; da Silva, Rosiana Estefane; Cota, Gláucia

    2017-08-01

    We propose the use of a formal approach to support content validation of a standard operating procedure (SOP) for a therapeutic intervention. Such an approach provides a useful tool to identify ambiguities, omissions and inconsistencies, and improves the applicability and efficacy of documents in health settings. We apply and evaluate a methodology originally proposed for the verification of software specification documents to a specific SOP. The verification methodology uses the graph formalism to model the document. Semi-automatic analysis identifies possible problems in the model and in the original document. The verification is an iterative process that identifies possible faults in the original text that should be revised by its authors and/or specialists. The proposed method was able to identify 23 possible issues in the original document (ambiguities, omissions, redundant information, and inaccuracies, among others). The formal verification process helped the specialists to consider a wider range of usage scenarios and to identify which instructions form the kernel of the proposed SOP and which represent additional or required knowledge that is mandatory for the correct application of the medical document. By using the proposed verification process, a simpler and yet more complete SOP could be produced. As a consequence, during the validation process the experts received a more mature document and could focus on the technical aspects of the procedure itself. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Spectrum-transformed sequential testing method for signal validation applications

    SciTech Connect

    Gross, K.C.; Hoyer, K.K.

    1992-06-01

    The Sequential Probability Ratio Test (SPRT) has proven to be a valuable tool in a variety of reactor applications for signal validation and for sensor and equipment operability surveillance. One drawback of the conventional SPRT method is that its domain of application is limited to signals that are contaminated by Gaussian white noise. Non-Gaussian process variables contaminated by serial correlation can produce higher-than-specified rates of false alarms and missed alarms for SPRT-based surveillance systems. To overcome this difficulty we present here the development and computer implementation of a new technique, the spectrum-transformed sequential testing method. This method retains the excellent surveillance advantage of the SPRT (extremely high sensitivity for very early annunciation of the onset of disturbances in monitored signals), and its false-alarm and missed-alarm probabilities are unaffected by the presence of serial correlation in the data. Example applications of the new method to serially-correlated reactor variables are demonstrated using data recorded from EBR-II.
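
    For reference, here is a minimal sketch of the conventional SPRT that the abstract takes as its starting point (the spectrum-transformed variant is not reproduced here). The test is restarted after each decision, as is common in continuous surveillance; thresholds follow Wald's classical approximations, and all data are synthetic.

    ```python
    import numpy as np

    def sprt_monitor(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
        """Run the SPRT for a Gaussian mean shift, restarting after each decision.

        Accumulates the log-likelihood ratio and records a decision whenever
        it crosses one of Wald's two boundaries.
        """
        lower = np.log(beta / (1 - alpha))
        upper = np.log((1 - beta) / alpha)
        decisions, llr = [], 0.0
        for i, x in enumerate(samples):
            # LLR increment for H1: mean mu1 vs H0: mean mu0, known sigma
            llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
            if llr >= upper:
                decisions.append((i, "alarm"))
                llr = 0.0
            elif llr <= lower:
                decisions.append((i, "normal"))
                llr = 0.0
        return decisions

    # Synthetic sensor residuals: a mean shift appears halfway through
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(0, 1, 50), rng.normal(1, 1, 50)])
    print(sprt_monitor(data))  # "normal" decisions early, "alarm" after the shift
    ```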

  20. The Dmax method is a valid procedure to estimate physical working capacity at fatigue threshold.

    PubMed

    Riffe, Joshua J; Stout, Jeffrey R; Fukuda, David H; Robinson, Edward H; Miramonti, Amelia A; Beyer, Kyle S; Wang, Ran; Church, David D; Muddle, Tyler W D; Hoffman, Jay R

    2017-03-01

    The purpose of this study was to determine the validity of the maximal distance-electromyography (Dmax-EMG) method for estimating physical working capacity at fatigue threshold (PWCFT). Twenty-one men and women (age 22.9 ± 3.0 years) volunteered to perform 12 sessions of high-intensity interval training (HIIT) over 4 weeks. Before and after HIIT, a graded exercise test (GXT) was used to estimate PWCFT using the Dmax method and the original (ORG) method. There was a significant increase in PWCFT for both the ORG (+10.6%) and Dmax (+12.1%) methods, but no significant difference in the change values between methods. Further, Bland-Altman analyses resulted in non-significant biases (ORG-Dmax) between methods at pre-HIIT (-6.4 ± 32.5 W; P > 0.05) and post-HIIT (-4.2 ± 33.1 W; P > 0.05). The Dmax method is sensitive to training and is a valid method for estimating PWCFT in young men and women. Muscle Nerve 55: 344-349, 2017. © 2016 Wiley Periodicals, Inc.
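
    The Dmax criterion itself is straightforward to sketch: it selects the point on a response curve with maximal perpendicular distance from the chord joining the curve's endpoints. The Python fragment below is a generic illustration; the axes (e.g., EMG amplitude against power output) and any prior curve fitting or normalization used in practice are assumptions left out here.

    ```python
    import numpy as np

    # Generic Dmax sketch: index of the point farthest (perpendicular
    # distance) from the chord joining the first and last curve points.
    def dmax_index(x, y):
        x, y = np.asarray(x, float), np.asarray(y, float)
        p0 = np.array([x[0], y[0]])
        chord = np.array([x[-1], y[-1]]) - p0
        chord = chord / np.linalg.norm(chord)
        rel = np.column_stack([x, y]) - p0
        # |2-D cross product| with the unit chord = perpendicular distance
        dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])
        return int(np.argmax(dist))

    power = [50, 100, 150, 200, 250, 300]        # illustrative GXT stages
    emg   = [0.10, 0.12, 0.16, 0.24, 0.45, 0.90] # illustrative amplitudes
    print("breakpoint at", power[dmax_index(power, emg)], "W")
    ```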

  1. Key aspects of analytical method validation and linearity evaluation.

    PubMed

    Araujo, Pedro

    2009-08-01

    Method validation may be regarded as one of the best-known areas in analytical chemistry, as is reflected in the substantial number of articles submitted and published in peer-reviewed journals every year. However, some of the relevant parameters recommended by regulatory bodies are often used interchangeably and incorrectly, or are miscalculated, owing to the scarcity of references evaluating some of the terms as well as to wrong application of the mathematical and statistical approaches used in their estimation. These mistakes have led to misinterpretation and ambiguity in the terminology and, in some instances, to wrong scientific conclusions. In this article, the definitions of various relevant performance indicators such as selectivity, specificity, accuracy, precision, linearity, range, limit of detection, limit of quantitation, ruggedness, and robustness are critically discussed with a view to preventing their erroneous usage and ensuring scientific correctness and consistency among publications.
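
    As one concrete example of the kind of calculation the article wants done correctly, the Python sketch below derives a limit of detection and limit of quantitation from a calibration line using the common ICH convention LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the residual standard deviation and S the calibration slope. The calibration data are invented illustration values.

    ```python
    import numpy as np

    # Made-up calibration data (concentration vs detector response).
    conc   = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # e.g. microg/ml
    signal = np.array([10.2, 19.8, 41.1, 79.5, 160.3])

    S, intercept = np.polyfit(conc, signal, 1)          # calibration line
    residuals = signal - (S * conc + intercept)
    sigma = residuals.std(ddof=2)      # n-2 dof for a two-parameter fit

    print(f"LOD = {3.3 * sigma / S:.3f}, LOQ = {10 * sigma / S:.3f}")
    ```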

  2. Validated HPLC method for quantifying permethrin in pharmaceutical formulations.

    PubMed

    García, E; García, A; Barbas, C

    2001-03-01

    An isocratic HPLC method for permethrin determination in raw material and in pharmaceutical presentations such as lotion and shampoo has been developed and validated following ICH recommendations. Cis- and trans-isomers, impurities and degradation products are well separated. The chromatographic analyses were performed on a 4 μm particle C-18 Nova-Pak (Waters, Madrid, Spain) column (15 x 0.39 cm) kept in a Biorad column oven at 35 degrees C. The mobile phase consisted of methanol-water (78:22, v/v) at a flow rate of 1 ml/min. UV detection was performed at 272 nm, and peaks were identified by retention times compared with standards and confirmed by their characteristic spectra using the photodiode array detector.

  3. Validating for Use and Interpretation: A Mixed Methods Contribution Illustrated

    ERIC Educational Resources Information Center

    Morell, Linda; Tan, Rachael Jin Bee

    2009-01-01

    Researchers in the areas of psychology and education strive to understand the intersections among validity, educational measurement, and cognitive theory. Guided by a mixed model conceptual framework, this study investigates how respondents' opinions inform the validation argument. Validity evidence for a science assessment was collected through…

  5. Validation of a digital PCR method for quantification of DNA copy number concentrations by using a certified reference material.

    PubMed

    Deprez, Liesbet; Corbisier, Philippe; Kortekaas, Anne-Marie; Mazoua, Stéphane; Beaz Hidalgo, Roxana; Trapmann, Stefanie; Emons, Hendrik

    2016-09-01

    Digital PCR has become the emerging technique for the sequence-specific detection and quantification of nucleic acids for various applications. During the past years, numerous reports on the development of new digital PCR methods have been published. Maturation of these developments into reliable analytical methods suitable for diagnostic or other routine testing purposes requires their validation for the intended use. Here, the results of an in-house validation of a droplet digital PCR method are presented. This method is intended for the quantification of the absolute copy number concentration of a purified linearized plasmid in solution with a nucleic acid background. It has been investigated which factors within the measurement process have a significant effect on the measurement results, and the contribution to the overall measurement uncertainty has been estimated. A comprehensive overview is provided on all the aspects that should be investigated when performing an in-house method validation of a digital PCR method.
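
    The core estimate behind droplet digital PCR quantification is a standard Poisson correction, sketched below in Python. The droplet volume and the counts are illustrative assumptions (0.85 nL is a typical droplet volume on common ddPCR platforms), not the certified values from the study.

    ```python
    import math

    # Poisson estimate behind droplet digital PCR: from the fraction of
    # positive droplets to a copy-number concentration.
    def ddpcr_copies_per_ul(positive, total, droplet_volume_ul=0.85e-3):
        """Copies per microliter; droplet volume is an assumed typical value."""
        p = positive / total
        lam = -math.log(1.0 - p)        # mean copies per droplet (Poisson)
        return lam / droplet_volume_ul

    # Placeholder counts, not data from the study.
    print(f"{ddpcr_copies_per_ul(positive=12000, total=20000):.0f} copies/uL")
    ```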

  6. Testing and Validation of the Dynamic Inertia Measurement Method

    NASA Technical Reports Server (NTRS)

    Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David

    2015-01-01

    The Dynamic Inertia Measurement (DIM) method uses a ground vibration test setup to determine the mass properties of an object using information from frequency response functions. Most conventional mass properties testing involves using spin tables or pendulum-based swing tests, which for large aerospace vehicles becomes increasingly difficult and time-consuming, and therefore expensive, to perform. The DIM method has been validated on small test articles but has not been successfully proven on large aerospace vehicles. In response, the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) conducted mass properties testing on an "iron bird" test article that is comparable in mass and scale to a fighter-type aircraft. The simple two-I-beam design of the "iron bird" was selected to ensure accurate analytical mass properties. Traditional swing testing was also performed to compare the level of effort, amount of resources, and quality of data with the DIM method. The DIM test showed favorable results for the center of gravity and moments of inertia; however, the products of inertia showed disagreement with analytical predictions.

  7. A validated stability indicating LC method for oxcarbazepine.

    PubMed

    Pathare, D B; Jadhav, A S; Shingare, M S

    2007-04-11

    The present paper describes the development of a stability-indicating reversed-phase liquid chromatographic (RPLC) method for oxcarbazepine in the presence of its impurities and degradation products generated from forced decomposition studies. The drug substance was subjected to stress conditions of hydrolysis, oxidation, photolysis and thermal degradation. Degradation of oxcarbazepine was observed under base hydrolysis; the drug was found to be stable under the other stress conditions attempted. Successful separation of the drug from the synthetic impurities and the degradation product formed under stress conditions was achieved on a C18 column using a mixture of aqueous 0.02 M potassium dihydrogen phosphate-acetonitrile-methanol (45:35:20, v/v/v) as mobile phase. The developed HPLC method was validated with respect to linearity, accuracy, precision, specificity and robustness. The method can be used for related-substance determination and assay of oxcarbazepine to evaluate the quality of regular production samples, and also to test stability samples of oxcarbazepine.

  8. [Validation of a prediction method for indoor air pollution].

    PubMed

    Qu, Hong-juan; Qin, Hua-peng; Yao, Ting-ting; Luan, Sheng-ji

    2008-02-01

    The validation study of the prediction method for indoor air pollution was carried out by comparing the results of emission models, based on data obtained in a large and a small emission chamber, with actual measured concentrations. A newly decorated room was studied as a case. Emissions of complex objects and of simple surface-layer materials were studied in the large and small chamber, respectively, and emission models were developed. Those models were based on the assumption of mass conservation of substances and the hypothesis that pollutants were well mixed. The emissions of formaldehyde and TVOC (total volatile organic compounds) in the studied room were predicted by the method. The predicted concentration trend of the pollutants was in accordance with the measured trend when some air exchange (0.03 ACH, air changes per hour) was taken into account. The normalized standard errors of the formaldehyde and TVOC pollution predictions were 2.8% and 1.6%, respectively. Modeling analysis shows that the contribution to the total formaldehyde pollution of the studied room was: furniture > paint > floor; the contribution to total TVOC pollution was: paint > floor > furniture. The results lead to the conclusion that this prediction method describes the pollution trend well, can assess the contribution of different sources, can guide the choice of building materials, and is an effective tool for indoor air pollution assessment and control.
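
    The well-mixed mass-balance assumption described above corresponds to the single-zone model dC/dt = E/V - ACH·C, which has a closed-form solution for a constant source. The Python sketch below illustrates it; the emission rate and room volume are placeholders, and only the 0.03 ACH figure comes from the abstract. Real emission models are usually time-dependent (decaying sources).

    ```python
    import math

    # Single-zone, well-mixed mass balance: dC/dt = E/V - ACH*C,
    # solved analytically for a constant emission source.
    def concentration(t_h, E_mg_h=0.5, V_m3=40.0, ach=0.03, c0=0.0):
        """Concentration in mg/m3 after t_h hours (placeholder E and V)."""
        c_ss = E_mg_h / (V_m3 * ach)                # steady state
        return c_ss + (c0 - c_ss) * math.exp(-ach * t_h)

    for t in (1, 6, 24, 72):
        print(f"{t:>3} h: {concentration(t):.3f} mg/m3")
    ```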

  9. Formal methods and their role in digital systems validation for airborne systems

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1995-01-01

    This report is based on one prepared as a chapter for the FAA Digital Systems Validation Handbook (a guide to assist FAA certification specialists with advanced technology issues). Its purpose is to explain the use of formal methods in the specification and verification of software and hardware requirements, designs, and implementations; to identify the benefits, weaknesses, and difficulties in applying these methods to digital systems used in critical applications; and to suggest factors for consideration when formal methods are offered in support of certification. The presentation concentrates on the rationale for formal methods and on their contribution to assurance for critical applications within a context such as that provided by DO-178B (the guidelines for software used on board civil aircraft); it is intended as an introduction for those to whom these topics are new.

  10. Should the AOAC use-dilution method be continued for regulatory purposes?

    PubMed

    Omidbakhsh, Navid

    2012-01-01

    Despite its very poor reproducibility, AOAC INTERNATIONAL's use-dilution method (UDM) for bactericidal activity (AOAC Methods 964.02, 955.14, and 955.15) has been required by the U.S. Environmental Protection Agency (EPA) since 1953 for regulatory purposes, while methods with better reproducibility have been adopted in Canada and Australia. This study reviews the UDM from a statistical perspective. Additionally, the test's expected results were compared to those obtained from actual evaluation of several formulations. Significant gaps have been identified between the reproducibility of the test data as predicted by statistical analysis and that of the data presented to the EPA for product registration. The UDM's poor reproducibility, along with its qualitative nature, requires the concentration of the active ingredient to be high enough to ensure that all or most carriers are free of any viable organisms. This is not in accord with current trends towards sustainability, human safety, and environmental protection. It is recommended that the use of the method for regulatory purposes be phased out as soon as possible, and that methods with better design and reproducibility be adopted instead.
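
    The statistical point is easy to illustrate: a qualitative pass/fail test in which every carrier must be negative has a pass probability that collapses rapidly as the per-carrier survival probability rises. The Python sketch below assumes a 60-carrier design with a zero-positives pass criterion; both are stated assumptions for illustration, not details taken from the abstract.

    ```python
    # Pass probability of an all-carriers-negative criterion as a function
    # of the per-carrier probability p that an organism survives.
    n_carriers = 60   # assumed design, for illustration only
    for p in (0.001, 0.01, 0.05, 0.10):
        print(f"p={p}: pass probability = {(1 - p) ** n_carriers:.3f}")
    ```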

  11. A Toxocara cati eggs concentration method from cats' faeces, for experimental and diagnostic purposes.

    PubMed

    Cardillo, N; Sommerfelt, I; Fariña, F; Pasqualetti, M; Pérez, M; Ercole, M; Rosa, A; Ribicich, M

    2014-09-01

    Toxocariosis is a zoonotic parasitic infection distributed worldwide, now considered a neglected disease associated with poverty. For experimental infection in animals and for developing diagnostics in humans, it is necessary to obtain large numbers of Toxocara spp. larvated eggs. The percentage of Toxocara cati eggs recovered from the faeces of infected cats was determined using a novel egg concentration method. The McMaster egg counting technique and the concentration method were applied to 20 positive faecal samples obtained from naturally infected cats. The mean percentage of eggs recovered by the concentration method was 24.37% higher than the count obtained by the McMaster egg counting technique. The main advantage of this method is that a small final volume with a high number of recovered eggs can be obtained, providing a good-quality inoculum for experimental and diagnostic purposes. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. A chart-based method for identification of delirium: validation compared with interviewer ratings using the confusion assessment method.

    PubMed

    Inouye, Sharon K; Leo-Summers, Linda; Zhang, Ying; Bogardus, Sidney T; Leslie, Douglas L; Agostini, Joseph V

    2005-02-01

    To validate a chart-based method for identification of delirium and compare it with direct interviewer assessment using the Confusion Assessment Method (CAM). Prospective validation study. Teaching hospital. Nine hundred nineteen older hospitalized patients. A chart-based instrument for identification of delirium was created and compared with the reference standard interviewer ratings, which used direct cognitive assessment to complete the CAM for delirium. Trained nurse chart abstractors were blinded to all interview data, including cognitive and CAM ratings. Factors influencing the correct identification of delirium in the chart were examined. Delirium was present in 115 (12.5%) patients according to the CAM. Sensitivity of the chart-based instrument was 74%, specificity was 83%, and likelihood ratio for a positive result was 4.4. Overall agreement between chart and interviewer ratings was 82%, kappa=0.41. By contrast, using International Classification of Diseases, Ninth Revision, Clinical Modification, administrative codes, the sensitivity for delirium was 3%, and specificity was 99%. Independent factors associated with incorrect chart identification of delirium were dementia, severe illness, and high baseline delirium risk. With all three factors present, the chart instrument was three times more likely to identify patients incorrectly than with none of the factors present. A chart-based instrument for delirium, which should be useful for patient safety and quality-improvement programs in older persons, was validated. Because of potential misclassification, the chart-based instrument is not recommended for individual patient care or diagnostic purposes.
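
    The agreement statistics quoted above can be reproduced from a 2x2 table. In the Python sketch below, the counts are approximate reconstructions from the abstract's figures (115 delirium cases among 919 patients, 74% sensitivity, 83% specificity), not the published cell values.

    ```python
    # Approximate 2x2 counts reconstructed from the abstract's percentages.
    tp, fn = 85, 30     # chart positive / negative among CAM-positive
    fp, tn = 137, 667   # chart positive / negative among CAM-negative

    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)                      # likelihood ratio (+)
    po = (tp + tn) / n                              # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (po - pe) / (1 - pe)                    # Cohen's kappa

    print(f"sens={sens:.2f} spec={spec:.2f} LR+={lr_pos:.1f} "
          f"agreement={po:.2f} kappa={kappa:.2f}")
    ```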

  13. 76 FR 28664 - Method 301-Field Validation of Pollutant Measurement Methods From Various Waste Media

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... . The TTN provides information and technology exchange in various areas of air pollution control. A... rules that limit air pollution emission limits. K. Congressional Review Act The Congressional Review Act... protection, Alternative test method, Air pollution control, Field validation, Hazardous air...

  14. Teaching Analytical Method Transfer through Developing and Validating Then Transferring Dissolution Testing Methods for Pharmaceuticals

    ERIC Educational Resources Information Center

    Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette

    Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…

  15. HPLC method development and validation of chromafenozide in paddy.

    PubMed

    Ditya, Papia; Das, S P; Bhattacharyya, Anjan

    2012-12-01

    A simple and efficient HPLC-UV method was developed and validated for the determination of chromafenozide in paddy, as there was no previous report on record in this regard. The residue analysis method for chromafenozide, its dissipation and its final residue in paddy and soil were also studied after field treatment. Residues of chromafenozide were extracted and purified from paddy and soil by liquid/liquid partitioning and column chromatography, and determined by HPLC equipped with a PDA detector. The separation was performed on a Phenomenex Luna RP C(18) (250 × 4.6 mm i.d., 5 μm particle size) column at room temperature. The mean accuracies of the analytical method were 94.92%, 95.38%, 94.67% and 96.90% in straw, grain, soil and field water, respectively. The precision (repeatability) was in the range of 1.30%-9.25% for straw/grain, 1.27%-11.19% for soil and 1.0%-9.25% for field water. The precision (reproducibility) ranged from 2.2% to 12.1% in straw/grain and from 2.0% to 11.7% in soil. The minimum detectable concentration was 0.01 mg kg(-1). The degradation of the chromafenozide formulation in rice, soil and water was determined, and the results showed that chromafenozide as a wettable powder formulation degraded with half-lives of about 4.4 and 2.9 days in paddy plants and soil, respectively, at the double recommended dose. The results indicate that the developed method is simple and fast and meets the requirements for the determination of chromafenozide in paddy.

  16. Determination of formaldehyde in food and feed by an in-house validated HPLC method.

    PubMed

    Wahed, P; Razzaq, Md A; Dharmapuri, S; Corrales, M

    2016-07-01

    Formalin is carcinogenic and is detrimental to public health. The illegal addition of formalin (37% formaldehyde and 14% methanol) to foods to extend their shelf-life is considered to be a common practice in Bangladesh. The lack of accurate methods and the ubiquitous presence of formaldehyde in foods make the detection of illegally added formalin challenging. With the aim of helping regulatory authorities, a sensitive high performance liquid chromatography method was validated for the quantitative determination of formaldehyde in mango, fish and milk. The method was fit-for-purpose and showed good analytical performance in terms of specificity, linearity, precision, recovery and robustness. The expanded uncertainty was <35%. The validated method was applied to screen samples of fruits, vegetables, fresh fish, milk and fish feed collected from different local markets in Dhaka, Bangladesh. Levels of formaldehyde in food samples were compared with published data. The applicability of the method in different food matrices might mean it has potential as a reference standard method. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. New validated method for piracetam HPLC determination in human plasma.

    PubMed

    Curticapean, Augustin; Imre, Silvia

    2007-01-10

    A new method for the HPLC determination of piracetam in human plasma was developed and validated by a new approach. The simple determination with UV detection was performed on the supernatant obtained from plasma after protein precipitation with perchloric acid. The chromatographic separation of piracetam under gradient elution was achieved at room temperature with an RP-18 LiChroSpher 100 column and an aqueous mobile phase containing acetonitrile and methanol. The quantitative determination of piracetam was performed at 200 nm with a lower limit of quantification LLQ = 2 microg/ml. At this limit, the calculated coefficient of variation and the difference between the mean and the nominal concentration were CV% = 9.7 and bias% = 0.9 for the intra-day assay, and CV% = 19.1 and bias% = -7.45 for the between-days assay. Precision ranged from CV% = 1.8 to 11.6 and accuracy from bias% = 2.3 to 14.9 across the intra-day and between-days assays. In addition, the stability of piracetam under different conditions was verified. Piracetam proved to be stable in plasma for 4 weeks at -20 degrees C and for 36 h at 20 degrees C in the supernatant after protein precipitation. The new proposed method was used for a bioequivalence study of two medicines containing 800 mg piracetam.

  18. Comparison of manual and automated nucleic acid extraction methods from clinical specimens for microbial diagnosis purposes.

    PubMed

    Wozniak, Aniela; Geoffroy, Enrique; Miranda, Carolina; Castillo, Claudia; Sanhueza, Francia; García, Patricia

    2016-11-01

    The choice of nucleic acid (NA) extraction method for molecular diagnosis in microbiology is of major importance because of the low microbial load and the diverse nature of microorganisms and clinical specimens. The NA yield of different extraction methods has mostly been studied using spiked samples. However, information from real human clinical specimens is scarce. The purpose of this study was to compare the performance of a manual low-cost extraction method (Qiagen kit or salting-out extraction method) with the automated high-cost MagNAPure Compact method. According to cycle threshold values for different pathogens, MagNAPure is as efficient as Qiagen for NA extraction from noncomplex clinical specimens (nasopharyngeal swab, skin swab, plasma, respiratory specimens). In contrast, according to cycle threshold values for RNAseP, the MagNAPure method may not be appropriate for NA extraction from blood. We believe that the MagNAPure's versatility, reduced risk of cross-contamination and reduced hands-on time compensate for its high cost. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. RELIABILITY AND VALIDITY OF A BIOMECHANICALLY BASED ANALYSIS METHOD FOR THE TENNIS SERVE

    PubMed Central

    Kibler, W. Ben; Lamborn, Leah; Smith, Belinda J.; English, Tony; Jacobs, Cale; Uhl, Tim L.

    2017-01-01

    Background An observational tennis serve analysis (OTSA) tool was developed using previously established body positions from three-dimensional kinematic motion analysis studies. These positions, defined as nodes, have been associated with efficient force production and minimal joint loading. However, the tool had yet to be examined scientifically. Purpose The primary purpose of this investigation was to determine the inter-observer reliability for each node between the two health care professionals (HCPs) who developed the OTSA, and secondarily to investigate the validity of the OTSA. Methods Two separate studies were performed to meet these objectives. An inter-observer reliability study preceded the validity study and examined 28 videos of players serving. Two HCPs graded each video and scored the presence or absence of each node. Discriminant validity was determined in 33 tennis players using videotaped records of three first serves. Serve mechanics were graded using the OTSA, and players were categorized into those with good (≥5) and poor (≤4) mechanics. Participants performed a series of field tests to evaluate trunk flexibility, lower extremity and trunk power, and dynamic balance. Results The group with good mechanics demonstrated greater backward trunk flexibility (p=0.02), greater rotational power (p=0.02), and a higher single-leg countermovement jump (p=0.05). Reliability of the OTSA ranged from K = 0.36-1.0, with the majority of the nodes displaying substantial reliability (K>0.61). Conclusion This study provides HCPs with a valid and reliable field tool for assessing serve mechanics. Physical characteristics of trunk mobility and power appear to discriminate serve mechanics between players. Future intervention studies are needed to determine whether improvements in physical function contribute to improved serve mechanics. Level of Evidence 3 PMID:28593098

  20. Validation of a method to measure total spontaneous physical activity of sedentary and voluntary running mice.

    PubMed

    Silvennoinen, M; Rantalainen, T; Kainulainen, H

    2014-09-30

    Running wheels are commonly used to stimulate physical activity in mice. To control for the effects of physical activity on study results, it is important to measure the total activity (all movements) of both sedentary and running-wheel-stimulated mice. Because there was a lack of a validated system, we built a force-plate-based system specifically for this purpose. The validity of the system and its variables (activity index, activity time and distance) was tested in calibration measurements and in situ by measuring the activity of eight mice both with and without running wheels. Four mice served as sedentary controls. The activity index sums changes in the vertical reaction forces induced by moving mice. The system records all activity simultaneously; thus, wheel running is not distinguished from other activity. There were very strong associations between the measured activity variables and their true values (R(2)=1, p<0.01). The mean differences from the true values were: activity index -9.7% (95% limits of agreement (LOA), -28.7 to 9.4%), activity time +0.9% (LOA, -1.3 to 3.0%) and distance +0.7% (LOA, -4.7 to 6.1%). The running wheels increased the activity index by 211 ± 40% (mean ± SE), activity time by 39 ± 3% and activity intensity by 94 ± 16%. Activity index (R(2)=0.982, p<0.01), activity time (R(2)=0.618, p<0.01) and intensity (R(2)=0.920, p<0.01) were positively associated with running distance. To our knowledge, this is the first method properly validated for this purpose. The system is valid for the quantitation of the total physical activity of mice housed in cages with or without running wheels. Copyright © 2014 Elsevier B.V. All rights reserved.
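
    The validity figures above are Bland-Altman-style 95% limits of agreement on percent differences (bias ± 1.96 SD). A minimal Python sketch follows; the measured and reference arrays are invented placeholders, not study data.

    ```python
    import numpy as np

    # Bland-Altman-style bias and 95% limits of agreement on percent
    # differences, matching the form of the figures quoted above.
    def percent_limits_of_agreement(measured, reference):
        m = np.asarray(measured, float)
        r = np.asarray(reference, float)
        diff = 100.0 * (m - r) / r        # percent difference to reference
        bias = diff.mean()
        spread = 1.96 * diff.std(ddof=1)
        return bias, bias - spread, bias + spread

    # Placeholder values, not the study's calibration data.
    print(percent_limits_of_agreement([98, 105, 91, 102],
                                      [100, 100, 100, 100]))
    ```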

  2. An Investigation of Pre-Service Middle School Mathematics Teachers' Ability to Conduct Valid Proofs, Methods Used, and Reasons for Invalid Arguments

    ERIC Educational Resources Information Center

    Demiray, Esra; Isiksal Bostan, Mine

    2017-01-01

    The purposes of this study are to investigate Turkish pre-service middle school mathematics teachers' ability in conducting valid proofs for statements regarding numbers and algebra in terms of their year of enrollment in a teacher education program, to determine the proof methods used in their valid proofs, and to examine the reasons for their…

  3. Validation of the Benefit Forecasting Method: Organization Development Program to Increase Health Organization Membership. Training and Development Research Center, Project Number Eleven.

    ERIC Educational Resources Information Center

    Sleezer, Catherine M.; And Others

    This project is the sixth in a series of studies designed to validate the Training and Development Benefit Forecasting Method (BFM) sponsored by the Training and Development Research Center (TDRC) at the University of Minnesota. The purpose of this study was to validate the BFM's ability to forecast the benefits of an organization development…

  4. Methods to validate tooth-supporting regenerative therapies.

    PubMed

    Padial-Molina, Miguel; Marchesan, Julie T; Taut, Andrei D; Jin, Qiming; Giannobile, William V; Rios, Hector F

    2012-01-01

    In humans, microbially induced inflammatory periodontal diseases are the primary initiators that disrupt the functional and structural integrity of the periodontium (i.e., the alveolar bone, the periodontal ligament, and the cementum). The reestablishment of its original structure, properties, and function constitutes a significant challenge in the development of new therapies to regenerate tooth-supporting defects. Preclinical models represent an important in vivo tool to critically evaluate and analyze the key aspects of novel regenerative therapies, including (1) safety, (2) effectiveness, (3) practicality, and (4) functional and structural stability over time. Therefore, these models provide foundational data that supports the clinical validation and the development of novel innovative regenerative periodontal technologies. Steps are provided on the use of the root fenestration animal model for the proper evaluation of periodontal outcome measures using the following parameters: descriptive histology, histomorphometry, immunostaining techniques, three-dimensional imaging, electron microscopy, gene expression analyses, and safety assessments. These methods will prepare investigators and assist them in identifying the key end points that can then be adapted to later stage human clinical trials.

  5. How to qualify and validate wear simulation devices and methods.

    PubMed

    Heintze, S D

    2006-08-01

    The clinical significance of increased wear can mainly be attributed to impaired aesthetic appearance and/or functional restrictions. Little is known about the systemic effects of swallowed or inhaled worn particles that derive from restorations. As wear measurements in vivo are complicated and time-consuming, wear simulation devices and methods had been developed without, however, systematically looking at the factors that influence important wear parameters. Wear simulation devices shall simulate processes that occur in the oral cavity during mastication, namely force, force profile, contact time, sliding movement, clearance of worn material, etc. Different devices that use different force actuator principles are available. Those with the highest citation frequency in the literature are - in descending order - the Alabama, ACTA, OHSU, Zurich and MTS wear simulators. When following the FDA guidelines on good laboratory practice (GLP) only the expensive MTS wear simulator is a qualified machine to test wear in vitro; the force exerted by the hydraulic actuator is controlled and regulated during all movements of the stylus. All the other simulators lack control and regulation of force development during dynamic loading of the flat specimens. This may be an explanation for the high coefficient of variation of the results in some wear simulators (28-40%) and the poor reproducibility of wear results if dental databases are searched for wear results of specific dental materials (difference of 22-72% for the same material). As most of the machines are not qualifiable, wear methods applying the machine may have a sound concept but cannot be validated. Only with the MTS method have wear parameters and influencing factors been documented and verified. A good compromise with regard to costs, practicability and robustness is the Willytec chewing simulator, which uses weights as force actuator and step motors for vertical and lateral movements. The Ivoclar wear method run on

  6. An Automatic Method for Geometric Segmentation of Masonry Arch Bridges for Structural Engineering Purposes

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; DeJong, M.; Conde, B.

    2016-06-01

    Despite the tremendous advantages of laser scanning technology for the geometric characterization of built constructions, there are important limitations preventing more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information to perform structural assessment and health monitoring, many people are resistant to the technology due to the processing times involved. Thus, new methods that can automatically process LiDAR data and subsequently provide an automatic and organized interpretation are required. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related and organized point clouds, each of which contains the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of the different vertical walls, after which image-processing tools adapted to voxel structures allow the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.
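
    The voxel structure mentioned above is the step that lets image-processing operators run on LiDAR data. The Python fragment below is a minimal illustration of that step only, assuming an (N, 3) point array and an arbitrary 5 cm voxel size; it is not the segmentation pipeline of the paper.

    ```python
    import numpy as np

    # Snap an (N, 3) point cloud to a regular grid and return the set of
    # occupied voxel indices; image-processing operators can then act on
    # this occupancy structure. Voxel size is an illustrative choice.
    def voxelize(points, voxel=0.05):
        idx = np.floor(points / voxel).astype(int)
        return set(map(tuple, idx))

    points = np.random.default_rng(1).uniform(0, 2, size=(1000, 3))
    print(len(voxelize(points)), "occupied voxels")
    ```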

  7. In pursuit of a valid Information Assessment Method for continuing education: a mixed methods study.

    PubMed

    Bindiganavile Sridhar, Soumya; Pluye, Pierre; Grad, Roland

    2013-10-07

    The Information Assessment Method (IAM) is a popular tool for continuing education and knowledge translation. After a search for information, the IAM allows the health professional to report the search objective, its cognitive impact, as well as any use and patient health benefit associated with the retrieved health information. In continuing education programs, professionals read health information, rate it using the IAM, and earn continuing education credit for this brief individual reflective learning activity. IAM items have been iteratively developed using literature reviews and qualitative studies. Thus, our research question was: what is the content validity of IAM items from the users' perspective? A two-step content validation study was conducted. In Step 1, we followed a mixed methods research design and assessed the relevance and representativeness of IAM items. In this step, data from a longitudinal quantitative study and a qualitative multiple case study involving 40 family physicians were analyzed. In Step 2, IAM items were analyzed and modified based on a set of guiding principles by a multi-disciplinary expert panel. The content validity of 16 IAM items was supported, and these items were not changed. Nine other items were modified. Three new items were added, including two that were extensions of an existing item. A content-validated version of the IAM (IAM 2011) is available for the continuing education of health professionals.

  8. When Educational Material Is Delivered: A Mixed Methods Content Validation Study of the Information Assessment Method.

    PubMed

    Badran, Hani; Pluye, Pierre; Grad, Roland

    2017-03-14

    The Information Assessment Method (IAM) allows clinicians to report the cognitive impact, clinical relevance, intention to use, and expected patient health benefits associated with clinical information received by email. More than 15,000 Canadian physicians and pharmacists use the IAM in continuing education programs. In addition, information providers can use IAM ratings and feedback comments from clinicians to improve their products. Our general objective was to validate the IAM questionnaire for the delivery of educational material (ecological and logical content validity). Our specific objectives were to measure the relevance and evaluate the representativeness of IAM items for assessing information received by email. A 3-part mixed methods study was conducted (convergent design). In part 1 (quantitative longitudinal study), the relevance of IAM items was measured. Participants were 5596 physician members of the Canadian Medical Association who used the IAM. A total of 234,196 ratings were collected in 2012. The relevance of IAM items with respect to their main construct was calculated using descriptive statistics (relevance ratio R). In part 2 (qualitative descriptive study), the representativeness of IAM items was evaluated. A total of 15 family physicians completed semistructured face-to-face interviews. For each construct, we evaluated the representativeness of IAM items using a deductive-inductive thematic qualitative data analysis. In part 3 (mixing quantitative and qualitative parts), results from quantitative and qualitative analyses were reviewed, juxtaposed in a table, discussed with experts, and integrated. Thus, our final results are derived from the views of users (ecological content validation) and experts (logical content validation). Of the 23 IAM items, 21 were validated for content, while 2 were removed. In part 1 (quantitative results), 21 items were deemed relevant, while 2 items were deemed not relevant (R=4.86% [N=234,196] and R=3.04% [n

  9. A simple method to generate adipose stem cell-derived neurons for screening purposes.

    PubMed

    Bossio, Caterina; Mastrangelo, Rosa; Morini, Raffaella; Tonna, Noemi; Coco, Silvia; Verderio, Claudia; Matteoli, Michela; Bianco, Fabio

    2013-10-01

    Strategies for differentiating mesenchymal stem cells (MSCs) toward neuronal cells for screening purposes face both quality and quantity issues. Differentiated cells are often scarce with respect to the starting undifferentiated population, and the differentiation process is usually quite long, with a high risk of contamination and low yield efficiency. Here, we describe a novel, simple method to induce direct differentiation of MSCs into neuronal cells without neurosphere formation. Differentiated cells are characterized by clear morphological changes and expression of neuron-specific markers, and they show functional responses to depolarizing stimuli and electrophysiological properties similar to those of developing neurons. The method described here represents a valuable tool for future strategies aimed at personalized screening of therapeutic agents in vitro.

  10. Method development and validation for pharmaceutical tablets analysis using transmission Raman spectroscopy.

    PubMed

    Li, Yi; Igne, Benoît; Drennen, James K; Anderson, Carl A

    2016-02-10

    The objective of the study is to demonstrate the development and validation of a transmission Raman spectroscopic method using the ICH-Q2 Guidance as a template. Specifically, Raman spectroscopy was used to determine niacinamide content in tablet cores. A 3-level, 2-factor full factorial design was utilized to generate a partial least-squares model for active pharmaceutical ingredient quantification. Validation of the transmission Raman model was focused on figures of merit from three independent batches manufactured at pilot scale. The resultant model statistics were evaluated along with the linearity, accuracy, precision and robustness assessments. Method specificity was demonstrated by accurate determination of niacinamide in the presence of niacin (an expected related substance). The method was demonstrated as fit for purpose and had the desirable characteristics of very short analysis times (∼2.5 s per tablet). The resulting method was used for routine content uniformity analysis of single dosage units in a stability study. Copyright © 2015. Published by Elsevier B.V.
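
    The calibration step described above, partial least-squares regression of spectra against content, can be sketched in a few lines of Python. The shapes, number of latent variables and cross-validated error metric below are illustrative choices, not the published method parameters; random numbers stand in for spectra.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Stand-ins for Raman spectra (tablets x wavenumbers) and the
    # niacinamide content (% label claim); purely illustrative data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(27, 500))
    y = rng.uniform(90, 110, size=27)

    # PLS calibration with a cross-validated figure of merit.
    pls = PLSRegression(n_components=3)
    y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
    rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
    print(f"RMSECV = {rmsecv:.2f}")
    ```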

  11. Making clinical trials more relevant: improving and validating the PRECIS tool for matching trial design decisions to trial purpose.

    PubMed

    Loudon, Kirsty; Zwarenstein, Merrick; Sullivan, Frank; Donnan, Peter; Treweek, Shaun

    2013-04-27

    If you want to know which of two or more healthcare interventions is most effective, the randomised controlled trial is the design of choice. Randomisation, however, does not itself promote the applicability of the results to situations other than the one in which the trial was done. A tool published in 2009, PRECIS (PRagmatic Explanatory Continuum Indicator Summaries), aimed to help trialists design trials that produced results matched to the aim of the trial, be that supporting clinical decision-making or increasing knowledge of how an intervention works. Though generally positive, groups evaluating the tool have also found weaknesses, mainly that its inter-rater reliability is not clear, that it needs a scoring system and that some new domains might be needed. The aims of the study are to produce an improved and validated version of the PRECIS tool, and to use this tool to compare the internal validity of, and effect estimates from, a set of explanatory and pragmatic trials matched by intervention. The study has four phases. Phase 1 involves brainstorming and a two-round Delphi survey of authors who cited PRECIS. In Phase 2, the Delphi results will then be discussed and alternative versions of PRECIS-2 developed and user-tested by experienced trialists. Phase 3 will evaluate the validity and reliability of the most promising PRECIS-2 candidate using a sample of 15 to 20 trials rated by 15 international trialists. We will assess inter-rater reliability, and raters' subjective global ratings of pragmatism compared to PRECIS-2 to assess convergent and face validity. Phase 4, to determine if pragmatic trials sacrifice internal validity in order to achieve applicability, will compare the internal validity and effect estimates of matched explanatory and pragmatic trials of the same intervention, condition and participants. Effect sizes for the trials will then be compared in a meta-regression. The Cochrane Risk of Bias scores will be compared with the PRECIS-2 scores of

  12. An Anatomically Validated Brachial Plexus Contouring Method for Intensity Modulated Radiation Therapy Planning

    SciTech Connect

    Van de Velde, Joris; Audenaert, Emmanuel; Speleers, Bruno; Vercauteren, Tom; Mulliez, Thomas; Vandemaele, Pieter; Achten, Eric; Kerckaert, Ingrid; D'Herde, Katharina; De Neve, Wilfried; Van Hoof, Tom

    2013-11-15

    Purpose: To develop contouring guidelines for the brachial plexus (BP) using anatomically validated cadaver datasets. Magnetic resonance imaging (MRI) and computed tomography (CT) were used to obtain detailed visualizations of the BP region, with the goal of achieving maximal inclusion of the actual BP in a small contoured volume while also accommodating for anatomic variations. Methods and Materials: CT and MRI were obtained for 8 cadavers positioned for intensity modulated radiation therapy. 3-dimensional reconstructions of soft tissue (from MRI) and bone (from CT) were combined to create 8 separate enhanced CT project files. Dissection of the corresponding cadavers anatomically validated the reconstructions created. Seven enhanced CT project files were then automatically fitted, separately in different regions, to obtain a single dataset of superimposed BP regions that incorporated anatomic variations. From this dataset, improved BP contouring guidelines were developed. These guidelines were then applied to the 7 original CT project files and also to 1 additional file, left out from the superimposing procedure. The percentage of BP inclusion was compared with the published guidelines. Results: The anatomic validation procedure showed a high level of conformity for the BP regions examined between the 3-dimensional reconstructions generated and the dissected counterparts. Accurate and detailed BP contouring guidelines were developed, which provided corresponding guidance for each level in a clinical dataset. An average margin of 4.7 mm around the anatomically validated BP contour is sufficient to accommodate for anatomic variations. Using the new guidelines, 100% inclusion of the BP was achieved, compared with a mean inclusion of 37.75% when published guidelines were applied. Conclusion: Improved guidelines for BP delineation were developed using combined MRI and CT imaging with validation by anatomic dissection.

  13. STANDARDIZATION AND VALIDATION OF MICROBIOLOGICAL METHODS FOR EXAMINATION OF BIOSOLIDS

    EPA Science Inventory

    The objective of this presentation is to discuss pathogens of concern in biosolids, the analytical techniques used to evaluate microorganisms in biosolids, and to discuss standardization and validation of analytical protocols for microbes within a complex matrix. Implications of ...

  15. MICROORGANISMS IN BIOSOLIDS: ANALYTICAL METHODS DEVELOPMENT, STANDARDIZATION, AND VALIDATION

    EPA Science Inventory

    The objective of this presentation is to discuss pathogens of concern in biosolids, the analytical techniques used to evaluate microorganisms in biosolids, and to discuss standardization and validation of analytical protocols for microbes within such a complex matrix. Implicatio...

  16. Effects of Predictor Weighting Methods on Incremental Validity.

    PubMed

    Sackett, Paul R; Dahlke, Jeffrey A; Shewach, Oren R; Kuncel, Nathan R

    2017-05-22

    It is common to add an additional predictor to a selection system with the goal of increasing criterion-related validity. Research on the incremental validity of a second predictor is generally based on forming a regression-weighted composite of the predictors. However, in practice predictors are commonly used in ways other than regression-weighted composites, and we examine the robustness of incremental validity findings to other ways of using predictors, namely, unit weighting and multiple hurdles. We show that there are settings in which the incremental value of a second predictor disappears, and can even produce lower validity than the first predictor alone, when these alternatives to regression weighting are used. First, we examine conditions under which unit weighting will negate gain in predictive power attainable via regression weights. Second, we revisit Schmidt and Hunter's (1998) summary of incremental validity of predictors over cognitive ability, evaluating whether the reported incremental value of a second predictor is different when predictors are unit weighted rather than regression weighted. Third, we analyze data reported in the published literature to discern the frequency with which unit weighting might affect conclusions about whether there is value in adding a second predictor to a first. Finally, we shift from unit weighting to multiple hurdle selection, examining conditions under which conclusions about incremental validity differ when regression weighting is replaced by multiple-hurdle selection. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
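
    The contrast the article studies is easy to compute for two standardized predictors: the multiple R from regression weights versus the validity of a simple unit-weighted sum. The Python sketch below uses illustrative correlations; the formulas are the standard two-predictor results.

    ```python
    import math

    # Multiple R from optimal regression weights (two predictors).
    def regression_R(r1y, r2y, r12):
        return math.sqrt((r1y**2 + r2y**2 - 2 * r1y * r2y * r12)
                         / (1 - r12**2))

    # Validity of the unit-weighted composite x1 + x2 (standardized).
    def unit_weight_validity(r1y, r2y, r12):
        return (r1y + r2y) / math.sqrt(2 + 2 * r12)

    # Illustrative correlations: predictor-criterion and intercorrelation.
    r1y, r2y, r12 = 0.50, 0.30, 0.40
    print(f"regression R = {regression_R(r1y, r2y, r12):.3f}, "
          f"unit-weighted validity = {unit_weight_validity(r1y, r2y, r12):.3f}")
    ```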

  17. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    SciTech Connect

    Ekechukwu, A

    2009-05-27

    Method validation is the process of evaluating whether an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP), International Conference on Harmonisation (ICH), and the United States Food and Drug Administration (USFDA) provide a framework for performing such validations. In general, methods for regulatory compliance must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Elements of these guidelines are readily adapted to the issue of validation for beryllium sampling and analysis. This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers and books reviewed is given in the Appendix. Available validation documents and guides are listed therein; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of the validation process at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all referenced documents were published in English.

  18. Use of Validation by Enterprises for Human Resource and Career Development Purposes. Cedefop Reference Series No 96

    ERIC Educational Resources Information Center

    Cedefop - European Centre for the Development of Vocational Training, 2014

    2014-01-01

    European enterprises give high priority to assessing skills and competences, seeing this as crucial for recruitment and human resource management. Based on a survey of 400 enterprises, 20 in-depth case studies and interviews with human resource experts in 10 countries, this report analyses the main purposes of competence assessment, the standards…

  19. Guidelines for the validation of qualitative multi-residue methods used to detect pesticides in food.

    PubMed

    Mol, H G J; Reynolds, S L; Fussell, R J; Stajnbaher, D

    2012-08-01

    There is a current trend for many laboratories to develop and use qualitative gas chromatography-mass spectrometry (GC-MS) and liquid chromatography-mass spectrometry (LC-MS) based multi-residue methods (MRMs) in order to greatly increase the number of pesticides that they can target. Before these qualitative MRMs can be used for the monitoring of pesticide residues in food, their fitness-for-purpose needs to be established by initial method validation. This paper sets out to assess the performances of two such qualitative MRMs against a set of parameters and criteria that might be suitable for their effective validation. As expected, the ease of detection was often dependent on the particular pesticide/commodity combinations that were targeted, especially at the lowest concentrations tested (0.01 mg/kg). The two examples also clearly demonstrated that the percentage of pesticides detected was dependent on many factors, but particularly on the capabilities of the automated software/library packages and the parameters and threshold settings selected for operation. Another very important consideration was the condition of the chromatographic system and detector at the time of analysis. If the system was relatively clean, the detection rate was much higher than if it had become contaminated over time from previous injections of sample extracts. The parameters and criteria suggested for method validation of qualitative MRMs are aimed at achieving a 95% confidence level of pesticide detection. However, any pesticide that is 'detected' will need subsequent analysis for quantification and, depending on the qualitative method used, further evidence of identity. © 2012 John Wiley & Sons, Ltd.

  20. Cleaning validation 2: development and validation of an ion chromatographic method for the detection of traces of CIP-100 detergent.

    PubMed

    Resto, Wilfredo; Hernández, Darimar; Rey, Rosamil; Colón, Héctor; Zayas, José

    2007-05-09

    An ion chromatographic method with conductivity detection was developed and validated as a cleaning validation method for the determination of traces of a clean-in-place (CIP) detergent. It was shown to be linear, with a squared correlation coefficient (r(2)) of 0.9999, and gave average recoveries of 71.4% (area response factor) from stainless steel surfaces and 101% from cotton. The repeatability was found to be 2.17% and the intermediate precision 1.88% across the range. The method was also shown to be sensitive, with a detection limit (DL) of 0.13 ppm and a quantitation limit (QL) of 0.39 ppm for EDTA, which translates to less than 1 microL of CIP diluted in 100 mL of diluent in both cases. The EDTA signal was well resolved from typical ions encountered in water samples and from any other interference presented by swabs and surfaces. The method can be applied to cleaning validation samples and is suitable for inclusion in a rapid and reliable cleaning validation program.

  1. An evaluation of alternate production methods for Pu-238 general purpose heat source pellets

    SciTech Connect

    Mark Borland; Steve Frank

    2009-06-01

    For the past half century, the National Aeronautics and Space Administration (NASA) has used Radioisotope Thermoelectric Generators (RTGs) to power deep space satellites. Fabricating heat sources for RTGs, specifically General Purpose Heat Sources (GPHSs), has remained essentially unchanged since their development in the 1970s. Meanwhile, 30 years of technological advancements have been made in the applicable fields of chemistry, manufacturing and control systems. This paper evaluates alternative processes that could be used to produce Pu-238-fueled heat sources. Specifically, this paper discusses the production of the plutonium-oxide granules, which are the input stream to the ceramic pressing and sintering processes. Alternative chemical processes are compared to current methods to determine whether alternative fabrication processes could reduce the hazards, especially the production of respirable fines, while producing an equivalent GPHS product.

  2. Argentopotentiometric method of determination of chloride ions in special-purpose fluids

    SciTech Connect

    Klopov, B.N.; Makarova, N.K.

    1984-07-01

    This article proposes a new method of monitoring the chloride ion content in all special-purpose fluids (SPF), based on the precipitation of chloride ions by silver nitrate and potentiometric registration of the equivalence point. Nonaqueous solvents, such as ethyl alcohol, are used to lower the solubility of AgCl and to increase the magnitude of the potential jump. A weighed sample of the SPF is dissolved in the solvent and titrated with 0.01 N AgNO3 solution until the potential jump is observed. The chloride ion content is calculated from the formula X = (V × T × 100)/g, where X is the chloride ion content, V is the volume of 0.01 N AgNO3 solution consumed in titration, T is the titer of the 0.01 N AgNO3 solution, and g is the fluid sample weight.
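
    A direct transcription of the formula above, with the variables as defined in the record:

```python
def chloride_content(v_ml: float, titer: float, sample_g: float) -> float:
    """X = (V * T * 100) / g.

    v_ml: volume of 0.01 N AgNO3 consumed in titration (mL)
    titer: titer of the 0.01 N AgNO3 solution (mass of chloride per mL of titrant)
    sample_g: fluid sample weight (g)
    """
    return v_ml * titer * 100.0 / sample_g
```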

  3. The reported validity and reliability of methods for evaluating continuing medical education: a systematic review.

    PubMed

    Ratanawongsa, Neda; Thomas, Patricia A; Marinopoulos, Spyridon S; Dorman, Todd; Wilson, Lisa M; Ashar, Bimal H; Magaziner, Jeffrey L; Miller, Redonda G; Prokopowicz, Gregory P; Qayyum, Rehan; Bass, Eric B

    2008-03-01

    To appraise the reported validity and reliability of evaluation methods used in high-quality trials of continuing medical education (CME). The authors conducted a systematic review (1981 to February 2006) by hand-searching key journals and searching electronic databases. Eligible articles studied CME effectiveness using randomized controlled trials or historic/concurrent comparison designs, were conducted in the United States or Canada, were written in English, and involved at least 15 physicians. Sequential double review was conducted for data abstraction, using a traditional approach to validity and reliability. Of 136 eligible articles, 47 (34.6%) reported the validity or reliability of at least one evaluation method, for a total of 62 methods; 31 methods were drawn from previous sources. The most common targeted outcome was practice behavior (21 methods). Validity was reported for 31 evaluation methods, including content (16), concurrent criterion (8), predictive criterion (1), and construct (5) validity. Reliability was reported for 44 evaluation methods, including internal consistency (20), interrater (16), intrarater (2), equivalence (4), and test-retest (5) reliability. When reported, statistical tests yielded modest evidence of validity and reliability. Translated to the contemporary classification approach, our data indicate that reporting about internal structure validity exceeded reporting about other categories of validity evidence. The evidence for CME effectiveness is limited by weaknesses in the reported validity and reliability of evaluation methods. Educators should devote more attention to the development and reporting of high-quality CME evaluation methods and to emerging guidelines for establishing the validity of CME evaluation methods.

  4. Comparison of Machine Learning Methods for the Purpose Of Human Fall Detection

    NASA Astrophysics Data System (ADS)

    Strémy, Maximilián; Peterková, Andrea

    2014-12-01

    According to several studies, the European population has been aging rapidly over recent years. It is therefore important to ensure that the aging population is able to live independently without the support of the working-age population. According to these studies, falls are the most dangerous and most frequent accidents in the everyday life of the aging population. In our paper, we present a system to detect human falls visually, i.e. using no wearable equipment. For this purpose, we used a Kinect sensor, which provides the human body position in Cartesian coordinates. It is possible to capture a human body directly because the Kinect sensor has a depth camera as well as an infrared camera. The first step in our research was to detect postures and classify the fall accident. We compared selected machine learning methods, including Naive Bayes, decision trees and SVM, on their performance in recognizing human postures (standing, sitting and lying). The highest classification accuracy, over 93.3%, was achieved by the decision tree method.
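
    A minimal sketch (not the authors' exact pipeline) of comparing the three classifier families named above on posture recognition. Synthetic features stand in for Kinect skeleton joint coordinates; the real feature layout is an assumption:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# 300 frames x 60 features (x, y, z for 20 joints), three posture classes
X = rng.normal(size=(300, 60)) + np.repeat(np.arange(3), 100)[:, None] * 0.5
y = np.repeat([0, 1, 2], 100)            # 0=standing, 1=sitting, 2=lying

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision tree", DecisionTreeClassifier(max_depth=8)),
                  ("SVM", SVC(kernel="rbf"))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```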

  5. Determination of validation threshold for coordinate measuring methods using a metrological compatibility model

    NASA Astrophysics Data System (ADS)

    Gromczak, Kamila; Gąska, Adam; Kowalski, Marek; Ostrowska, Ksenia; Sładek, Jerzy; Gruza, Maciej; Gąska, Piotr

    2017-01-01

    The following paper presents a practical approach to the validation process of coordinate measuring methods at an accredited laboratory, using a statistical model of metrological compatibility. The statistical analysis of measurement results obtained using a highly accurate system was intended to determine the permissible validation threshold values. The threshold value constitutes the primary criterion for the acceptance or rejection of the validated method, and depends on both the differences between measurement results with corresponding uncertainties and the individual correlation coefficient. The article specifies and explains the types of measuring methods that were subject to validation and defines the criterion value governing their acceptance or rejection in the validation process.
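
    The paper's exact statistical model is not reproduced in the record, but the standard metrological-compatibility criterion it builds on (difference between results relative to its combined standard uncertainty, allowing for correlation) can be sketched as follows; the coverage factor k used as the threshold is an assumption:

```python
import math

def metrologically_compatible(y1: float, u1: float, y2: float, u2: float,
                              r: float = 0.0, k: float = 2.0) -> bool:
    """Two results are compatible if |y1 - y2| does not exceed k times the
    standard uncertainty of their difference. r is the correlation
    coefficient between the results; k is the coverage factor (assumed)."""
    u_diff = math.sqrt(u1**2 + u2**2 - 2.0 * r * u1 * u2)
    return abs(y1 - y2) <= k * u_diff

# Example: two coordinate measurements of the same feature
print(metrologically_compatible(10.003, 0.002, 10.006, 0.003, r=0.2))
```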

  6. fMRI capture of auditory hallucinations: Validation of the two-steps method.

    PubMed

    Leroy, Arnaud; Foucher, Jack R; Pins, Delphine; Delmaire, Christine; Thomas, Pierre; Roser, Mathilde M; Lefebvre, Stéphanie; Amad, Ali; Fovet, Thomas; Jaafari, Nemat; Jardri, Renaud

    2017-10-01

    Our purpose was to validate a reliable method to capture brain activity concomitant with hallucinatory events, which constitute frequent and disabling experiences in schizophrenia. Capturing hallucinations using functional magnetic resonance imaging (fMRI) remains very challenging. We previously developed a method based on a two-steps strategy including (1) multivariate data-driven analysis of per-hallucinatory fMRI recording and (2) selection of the components of interest based on a post-fMRI interview. However, two tests still need to be conducted to rule out critical pitfalls of conventional fMRI capture methods before this two-steps strategy can be adopted in hallucination research: replication of these findings on an independent sample and assessment of the reliability of the hallucination-related patterns at the subject level. To do so, we recruited a sample of 45 schizophrenia patients suffering from frequent hallucinations, 20 schizophrenia patients without hallucinations and 20 matched healthy volunteers; all participants underwent four different experiments. The main findings are (1) high accuracy in reporting unexpected sensory stimuli in an MRI setting; (2) good detection concordance between hypothesis-driven and data-driven analysis methods (as used in the two-steps strategy) when controlled unexpected sensory stimuli are presented; (3) good agreement of the two-steps method with the online button-press approach to capture hallucinatory events; (4) high spatial consistency of hallucinatory-related networks detected using the two-steps method on two independent samples. By validating the two-steps method, we advance toward the possible transfer of such technology to new image-based therapies for hallucinations. Hum Brain Mapp 38:4966-4979, 2017. © 2017 Wiley Periodicals, Inc.

  7. Principles and Methods to Guide Education for Purpose: A Brazilian Experience

    ERIC Educational Resources Information Center

    Araujo, Ulisses F.; Arantes, Valeria Amorim; Danza, Hanna Cebel; Pinheiro, Viviane Potenza Guimarães; Garbin, Monica

    2016-01-01

    This article presents a Brazilian experience in training teachers to educate for purpose. Understanding that purpose is a value to be constructed through real-world and contextualised experiences, the authors discuss some psychological processes that underlie purpose development. Then the authors show how these processes are used in a purpose…

  9. Choice of rating method for assessing occupational asbestos exposure: study for compensation purposes in France.

    PubMed

    Gramond, Celine; Rolland, Patrick; Lacourt, Aude; Ducamp, Stephane; Chamming's, Soizick; Creau, Yvon; Hery, Michel; Laureillard, Jacques; Mohammed-Brahim, Brahim; Orlowski, Ewa; Paris, Christophe; Pairon, Jean-Claude; Goldberg, Marcel; Brochard, Patrick

    2012-05-01

    In the course of setting up the National Mesothelioma Surveillance Program (PNSM), established in France in 1998, the question arose as to the most suitable method of assessing occupational exposure. The aim of this study was to define the most suitable rating method for assessing occupational asbestos exposure for the purpose of medico-social care. The study included 100 subjects (50 cases of mesothelioma and 50 controls), randomly selected and representing 457 jobs held. Asbestos exposure was assessed by a six-expert panel using two methods: "by job" rating, where all the jobs were assessed independently of the subjects; and "by subject" rating, where all the jobs of a subject were assessed at the same time. Consensus was obtained and subjects' exposure was calculated for each rating. Then, two internal experts assessed job asbestos exposure with the "by subject" rating. Kappa coefficients were used to measure agreement between the ratings. Agreement between "by job" and "by subject" ratings was very good for subject probability of exposure (kappa = 0.84) and cumulative exposure index (kappa = 0.80). Agreement between the six-expert panel and the two internal experts was good for subject exposure (kappa for probability = 0.71; kappa for cumulative exposure index = 0.68). This study shows that the two rating systems have good or very good agreement. These results validate the routine use in the PNSM of the "by subject" rating, which has the advantage of being convenient and quick in providing feedback on occupational asbestos exposure to mesothelioma cases for compensation. Copyright © 2012 Wiley Periodicals, Inc.
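
    A minimal sketch of the agreement statistic used above, Cohen's kappa between two exposure ratings; the labels and data below are illustrative, not the study's:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical probability-of-exposure classes assigned to the same ten jobs
by_job =     [1, 1, 0, 2, 1, 0, 2, 2, 1, 0]
by_subject = [1, 1, 0, 2, 1, 0, 2, 1, 1, 0]

print(cohen_kappa_score(by_job, by_subject))   # 1.0 would be perfect agreement
```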

  10. An image-based method to measure all-terrain vehicle dimensions for engineering safety purposes.

    PubMed

    Jennissen, Charles A; Miller, Nathan S; Tang, Kaiyang; Denning, Gerene M

    2014-04-01

    All-terrain vehicle (ATV) crashes are a serious public health and safety concern. Engineering approaches that address ATV injury prevention are critically needed. Avenues to pursue include evidence-based seat design that decreases risky behaviours, such as carrying passengers and operation of adult-size vehicles by children. The goal of this study was to create and validate an image-based method to measure ATV seat length and placement. Publicly available ATV images were downloaded. Adobe Photoshop was then used to generate a vertical grid through the centre of the vehicle, to define the grid scale using the manufacturer's reported wheelbase, and to determine seat length and placement relative to the front and rear axles using this scale. Images that yielded a difference greater than 5% between the calculated and the manufacturer's reported ATV lengths were excluded from further analysis. For the 77 images that met inclusion criteria, the mean±SD for the difference in calculated versus reported vehicle length was 1.8%±1.2%. The Pearson correlation coefficient for comparing image-based seat lengths determined by two independent measurers (20 models) and image-based lengths versus lengths measured at dealerships (12 models) were 0.95 and 0.96, respectively. The image-based method provides accurate and reproducible results for determining ATV measurements, including seat length and placement. This method greatly expands the number of ATV models that can be studied, and may be generalisable to other motor vehicle types. These measurements can be used to guide engineering approaches that improve ATV safety design.
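
    A minimal sketch of the scaling step described above: the manufacturer's wheelbase fixes the image scale, which converts annotated pixel distances to real lengths. The pixel coordinates are assumed to come from manual annotation of the image:

```python
def seat_length_mm(wheelbase_mm: float, axle_front_px: float, axle_rear_px: float,
                   seat_front_px: float, seat_rear_px: float) -> float:
    """Convert a pixel-space seat measurement to millimetres using the
    reported wheelbase as the scale reference."""
    scale = wheelbase_mm / abs(axle_rear_px - axle_front_px)   # mm per pixel
    return abs(seat_rear_px - seat_front_px) * scale

# Example: a 1270 mm wheelbase spanning 508 px gives 2.5 mm/px
print(seat_length_mm(1270, 120, 628, 380, 720))   # -> 850.0 mm
```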

  11. Validation of Transcriptomics-Based In Vitro Methods.

    PubMed

    Corvi, Raffaella; Vilardell, Mireia; Aubrecht, Jiri; Piersma, Aldert

    The field of transcriptomics has expanded rapidly during the last decades. This methodology provides an exceptional framework to study not only molecular changes underlying the adverse effects of a given compound, but also to understand its Mode of Action (MoA). However, the implementation of transcriptomics-based tests within the regulatory arena is not a straightforward process. One of the major obstacles in their regulatory implementation is still the interpretation of this new class of data and the judgment of the level of confidence of these tests. A key element in the regulatory acceptance of transcriptomics-based tests is validation, which still represents a major challenge. Although important advances have been made in the development and standardisation of such tests, to date there is limited experience with their validation. Taking into account the experience acquired so far, this chapter describes those aspects that were identified as important in the validation process of transcriptomics-based tests, including the assessment of standardisation, reliability and relevance. It also critically discusses the challenges posed to validation in relation to the specific characteristics of these approaches and their application in the wider context of testing strategies.

  12. Validated UPLC method for the fast and sensitive determination of steroid residues in support of cleaning validation in formulation area.

    PubMed

    Fekete, Szabolcs; Fekete, Jeno; Ganzler, Katalin

    2009-04-05

    An ultra performance liquid chromatographic (UPLC) method was developed for simultaneous determination of seven steroid (dienogest, finasteride, gestodene, levonorgestrel, estradiol, ethinylestradiol, and norethisterone acetate) active pharmaceutical ingredient (API) residues. A new, generic method is presented, with which it is possible to verify the cleaning process of a steroid-producing equipment line used for the production of various pharmaceuticals. The UPLC method was validated using an UPLC BEH C18 column with a particle size of 1.7 microm (50 mm x 2.1 mm) and acetonitrile-water (48:52, v/v) as mobile phase at a flow rate of 0.55 ml/min. Method development and method validation for cleaning control analysis are described. The rapid UPLC method is suitable for cleaning control assays within good manufacturing practices (GMP) of the pharmaceutical industry.

  13. Independent data validation of an in vitro method for ...

    EPA Pesticide Factsheets

    In vitro bioaccessibility assays (IVBA) estimate arsenic (As) relative bioavailability (RBA) in contaminated soils to improve the accuracy of site-specific human exposure assessments and risk calculations. For an IVBA assay to gain acceptance for use in risk assessment, it must be shown to reliably predict in vivo RBA that is determined in an established animal model. Previous studies correlating soil As IVBA with RBA have been limited by the use of few soil types as the source of As. Furthermore, the predictive value of As IVBA assays has not been validated using an independent set of As-contaminated soils. Therefore, the current study was undertaken to develop a robust linear model to predict As RBA in mice using an IVBA assay and to independently validate the predictive capability of this assay using a unique set of As-contaminated soils. Thirty-six As-contaminated soils varying in soil type, As contaminant source, and As concentration were included in this study, with 27 soils used for initial model development and nine soils used for independent model validation. The initial model reliably predicted As RBA values in the independent data set, with a mean As RBA prediction error of 5.3% (range 2.4 to 8.4%). Following validation, all 36 soils were used for final model development, resulting in a linear model with the equation: RBA = 0.59 * IVBA + 9.8 and R2 of 0.78. The in vivo-in vitro correlation and independent data validation presented here provide
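
    The final model is given explicitly above, so it can be transcribed directly; inputs and outputs are percentages:

```python
def predict_rba(ivba: float) -> float:
    """Predicted arsenic relative bioavailability (%) from the paper's
    final linear model: RBA = 0.59 * IVBA + 9.8 (R^2 = 0.78)."""
    return 0.59 * ivba + 9.8

print(predict_rba(50.0))   # -> 39.3 % predicted relative bioavailability
```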

  15. Production of general purpose heat source (GPHS) using advanced manufacturing methods

    NASA Astrophysics Data System (ADS)

    Miller, Roger G.

    1996-03-01

    Mankind will continue to explore the stars through the use of unmanned spacecraft until the technology and costs are compatible with sending travelers to the outer planets of our solar system and beyond. Unmanned probes of the present and future will be necessary to develop the required technologies and obtain the information that will make this travel possible. Because of the significant costs incurred, modern manufacturing technologies must be used to lower the investment needed, even when it is shared by international partnerships. For over 30 years, radioisotopes have provided the heat from which electrical power is extracted. Electric power for future spacecraft will be provided by either Radioisotope Thermoelectric Generators (RTGs), Radioisotope Thermophotovoltaic systems (RTPV), radioisotope Stirling systems, or a combination of these. All of these systems will be thermally driven by General Purpose Heat Source (GPHS) fueled clads in some configuration. The GPHS clad contains a 238PuO2 pellet encapsulated in an iridium alloy container. Historically, the fabrication of the iridium alloy shells has been performed at EG&G Mound and Oak Ridge National Laboratory (ORNL), and girth welding at Westinghouse Savannah River Corporation (WSRC) and Los Alamos National Laboratory (LANL). This paper describes the use of laser processing for welding, drilling, cutting, and machining, together with other manufacturing methods, to reduce the costs of producing GPHS fueled clad components and completed assemblies. Incorporation of new quality technologies will complement these manufacturing methods to reduce cost.

  16. Modified method for enhanced recovery of Toxocara cati larvae for diagnostic and therapeutic purposes.

    PubMed

    Zibaei, Mohammad; Uga, Shoji

    2016-10-01

    Human toxocariasis, the extraintestinal migration of Toxocara species, is a worldwide helminthic zoonosis, particularly in developing countries. Toxocara cati is one of the common helminths in cats, and toxocariasis is a potentially preventable disease. Its diagnosis and treatment depend on the immunological demonstration of specific antibodies against excretory-secretory antigens from Toxocara larvae. This study provides a simple manual technique, which can be performed in any laboratory, for recovering a large number of Toxocara cati larvae from the thick-shelled eggs. The required equipment comprises a manual homogenizer and a 40 μm mesh filter membrane; the remaining materials and solutions are standard laboratory ware. With the modified method the larval yield was 2.7 times higher (3000 larvae/ml) and the procedure was shorter (75 min). Further benefits over existing techniques include ease and repeatability, inexpensive and convenient materials, simplicity of performance, and less time required for recovery of Toxocara cati larvae for subsequent cultivation and harvest of the larval excretory-secretory antigens for diagnostic or therapeutic purposes. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Cost-Benefit Considerations in Choosing among Cross-Validation Methods.

    ERIC Educational Resources Information Center

    Murphy, Kevin R.

    There are two general methods of cross-validation: empirical estimation, and formula estimation. In choosing a specific cross-validation procedure, one should consider both costs (e.g., inefficient use of available data in estimating regression parameters) and benefits (e.g., accuracy in estimating population cross-validity). Empirical…

  18. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
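
    A minimal sketch of the least-squares information criteria named above (corrected AIC and BIC) for ranking alternative models; n is the number of observations, k the number of estimated parameters, sse the sum of squared residuals. The data values are illustrative:

```python
import math

def aicc(sse: float, n: int, k: int) -> float:
    """Corrected Akaike information criterion for least-squares fits."""
    aic = n * math.log(sse / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction

def bic(sse: float, n: int, k: int) -> float:
    """Bayesian information criterion for least-squares fits."""
    return n * math.log(sse / n) + k * math.log(n)

# Rank two alternative models of the same observations (lower is better)
for name, sse, k in [("homogeneous K", 12.4, 3), ("zoned K", 9.8, 6)]:
    print(name, round(aicc(sse, n=60, k=k), 2), round(bic(sse, n=60, k=k), 2))
```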

  19. The establishment of tocopherol reference intervals for Hungarian adult population using a validated HPLC method.

    PubMed

    Veres, Gábor; Szpisjak, László; Bajtai, Attila; Siska, Andrea; Klivényi, Péter; Ilisz, István; Földesi, Imre; Vécsei, László; Zádori, Dénes

    2017-02-09

    Evidence suggests that a decreased α-tocopherol (the most biologically active substance in the vitamin E group) level can cause neurological symptoms, most notably ataxia. The aim of the current study was to provide the first reference intervals for serum tocopherols in the adult Hungarian population based on an appropriate sample size, recruiting healthy control subjects and neurological patients suffering from conditions without symptoms of ataxia, myopathy or cognitive deficiency. A validated HPLC method applying a diode array detector and rac-tocol as internal standard was utilized for that purpose. Furthermore, serum cholesterol levels were determined as well for data normalization. The calculated 2.5-97.5% reference intervals for α-, β/γ- and δ-tocopherols were 24.62-54.67, 0.81-3.69 and 0.29-1.07 μM, respectively, whereas the tocopherol/cholesterol ratios were 5.11-11.27, 0.14-0.72 and 0.06-0.22 μmol/mmol, respectively. The establishment of these reference intervals may improve the diagnostic accuracy of tocopherol measurements in certain neurological conditions with decreased tocopherol levels. Moreover, the current study draws special attention to possible pitfalls in the complex process of determining reference intervals, including the selection of the study population, the application of an internal standard, method validation, and the calculation of tocopherol/cholesterol ratios.
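
    A minimal sketch of the nonparametric 2.5-97.5% reference interval computation described above; the serum values are synthetic, for illustration only:

```python
import numpy as np

# Synthetic stand-in for a cohort's serum alpha-tocopherol values (uM)
alpha_tocopherol_um = np.random.default_rng(0).normal(35, 7, 240)

low, high = np.percentile(alpha_tocopherol_um, [2.5, 97.5])
print(f"reference interval: {low:.2f}-{high:.2f} uM")
```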

  20. Validation of a high performance liquid chromatography method for the stabilization of epigallocatechin gallate.

    PubMed

    Fangueiro, Joana F; Parra, Alexander; Silva, Amélia M; Egea, Maria A; Souto, Eliana B; Garcia, Maria L; Calpena, Ana C

    2014-11-20

    Epigallocatechin gallate (EGCG) is a green tea catechin with potential health benefits, such as anti-oxidant, anti-carcinogenic and anti-inflammatory effects. In general, EGCG is highly susceptible to degradation and therefore presents stability problems. The present paper focused on the stability of EGCG in HEPES (N-2-hydroxyethylpiperazine-N'-2-ethanesulfonic acid) medium with regard to pH dependency, storage temperature and the presence of ascorbic acid as a reducing agent. The evaluation of EGCG in HEPES buffer demonstrated that this molecule is not able to maintain its physicochemical properties and potential beneficial effects, since it is partially or completely degraded, depending on the EGCG concentration. The storage temperatures most suitable for maintaining its structure were the lower values (4 or -20 °C). pH 3.5 provided greater stability than pH 7.4. However, the presence of a reducing agent (i.e., ascorbic acid) provided the greatest protection against degradation of EGCG. A validation method based on RP-HPLC with UV-vis detection was carried out for two media: water and a biocompatible physiological medium composed of Transcutol®P, ethanol and ascorbic acid. The quantification of EGCG using the pure compound requires a validated HPLC method that can be applied in pharmacokinetic and pharmacodynamic studies.

  1. Rodgers' evolutionary concept analysis--a valid method for developing knowledge in nursing science.

    PubMed

    Tofthagen, Randi; Fagerstrøm, Lisbeth M

    2010-12-01

    In nursing science, concept development is a necessary prerequisite for meaningful basic research. Rodgers' evolutionary concept analysis is a method for developing knowledge in nursing science. The purpose of this article is to present Rodgers' evolutionary concept analysis as a valid scientific method. A brief description of the evolutionary process, from data collection to data analysis, with the concepts' context, surrogate and related terms, antecedents, attributes, examples and consequences, is presented. The phases used in evolutionary concept analysis are illustrated with eight actual studies (1999-2009) from nursing research. The strength of the method is that it is systematic, with a focus on clear-cut phases during the analysis process, and that it can contribute to clarifying, describing and explaining concepts central to nursing science by analysing how a chosen concept has been used both within the discipline itself and in other health sciences. While an interdisciplinary perspective which stresses the similarities and dissimilarities of how a concept is used in various disciplines can increase knowledge of a concept, it is important to clarify the specific use of the concept within the discipline itself. Nursing research should focus on the unambiguous use of concepts, for which Rodgers' method constitutes a possible approach. The importance of using quality criteria to determine the inclusion of material should, however, be emphasised in the continued development of the method.

  2. Optimal combining of ground-based sensors for the purpose of validating satellite-based rainfall estimates

    NASA Technical Reports Server (NTRS)

    Krajewski, Witold F.; Rexroth, David T.; Kiriaki, Kiriakie

    1991-01-01

    Two problems related to radar rainfall estimation are described. The first part is a description of a preliminary data analysis for the purpose of statistical estimation of rainfall from multiple (radar and raingage) sensors. Raingage, radar, and joint radar-raingage estimation is described, and some results are given. Statistical parameters of rainfall spatial dependence are calculated and discussed in the context of optimal estimation. Quality control of radar data is also described. The second part describes radar scattering by ellipsoidal raindrops. An analytical solution is derived for the Rayleigh scattering regime. Single and volume scattering are presented. Comparison calculations with the known results for spheres and oblate spheroids are shown.

  3. Validity in Mixed Methods Research in Education: The Application of Habermas' Critical Theory

    ERIC Educational Resources Information Center

    Long, Haiying

    2017-01-01

    Mixed methods approach has developed into the third methodological movement in educational research. Validity in mixed methods research as an important issue, however, has not been examined as extensively as that of quantitative and qualitative research. Additionally, the previous discussions of validity in mixed methods research focus on research…

  5. Validation Methods for Fault-Tolerant avionics and control systems, working group meeting 1

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The proceedings of the first working group meeting on validation methods for fault-tolerant computer design are presented. The state of the art in fault-tolerant computer validation was examined in order to provide a framework for future discussions concerning research issues for the validation of fault-tolerant avionics and flight control systems. The development of positions concerning critical aspects of the validation process is presented.

  6. Application of neural networks and geomorphometry method for purposes of urban planning (Kazan, Russia)

    NASA Astrophysics Data System (ADS)

    Yermolaev, Oleg; Selivanov, Renat

    2013-04-01

    The landscape structure of a territory imposes serious limitations on the adoption of certain decisions. Differentiation of the relief into separate elementary geomorphological sections yields the basis for the most adequate determination of the boundaries of urban geosystems. This paper presents the results of testing relief classification methods based on Artificial Neural Networks (ANN), specifically Kohonen's Self-Organizing Maps (SOM), for the automated zoning of a modern city's territory, using the city of Kazan as an example. The developed model of the restored landscapes represents the city territory as a system of geomorphologically homogeneous terrains. The main research objectives were: development of a digital elevation model of the city of Kazan; testing of relief classification methods based on ANN and expert estimations; creation of a SOM-based map of urban geosystems; verification of the classification results, with clarification and enlargement of landscape units; and determination of the applicability of the method for zoning of large cities' territories, identifying its strengths and weaknesses. The first stage was the analysis and digitization of a detailed large-scale topographic map of Kazan, from which a digital elevation model with a grid size of 10 m was produced. These data were used to build analytical maps of morphometric characteristics of the relief: height, slope, aspect, and profile and plan curvature. The calculated morphometric values were transformed into a data matrix. The SOM training algorithm is unsupervised; weight coefficients are redistributed for each operational-territorial unit, and after several iterations of training the network gradually clusters operational-territorial units with similar sets of morphometric parameters. 81 classes have been distinguished. Such atomism
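
    A minimal sketch (assumed details) of the SOM classification step using the third-party minisom package: a 9 x 9 Kohonen map yields the 81 terrain classes mentioned above. The morphometric feature matrix is synthetic here; its real layout (one row per grid cell with height, slope, aspect, and curvatures) is an assumption:

```python
import numpy as np
from minisom import MiniSom   # third-party package: pip install minisom

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 5))   # stand-in for per-cell morphometry
features = (features - features.mean(axis=0)) / features.std(axis=0)

som = MiniSom(9, 9, features.shape[1], sigma=1.5, learning_rate=0.5)
som.random_weights_init(features)
som.train_random(features, num_iteration=10000)   # unsupervised training

# Assign each grid cell to its best-matching node: a 9 x 9 map -> 81 classes
winners = np.array([som.winner(row) for row in features])
class_id = winners[:, 0] * 9 + winners[:, 1]      # class indices 0..80
```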

  7. A photographic method to measure food item intake. Validation in geriatric institutions.

    PubMed

    Pouyet, Virginie; Cuvelier, Gérard; Benattar, Linda; Giboreau, Agnès

    2015-01-01

    From both a clinical and research perspective, measuring food intake is an important issue in geriatric institutions. However, weighing food in this context can be complex, particularly when the items remaining on a plate (side dish, meat or fish and sauce) need to be weighed separately following consumption. A method based on photography that involves taking photographs after a meal to determine food intake consequently seems to be a good alternative. This method enables the storage of raw data so that unhurried analyses can be performed to distinguish the food items present in the images. Therefore, the aim of this paper was to validate a photographic method to measure food intake in terms of differentiating food item intake in the context of a geriatric institution. Sixty-six elderly residents took part in this study, which was performed in four French nursing homes. Four dishes of standardized portions were offered to the residents during 16 different lunchtimes. Three non-trained assessors then independently estimated both the total and specific food item intakes of the participants using images of their plates taken after the meal (photographic method) and a reference image of one plate taken before the meal. Total food intakes were also recorded by weighing the food. To test the reliability of the photographic method, agreements between different assessors and agreements among various estimates made by the same assessor were evaluated. To test the accuracy and specificity of this method, food intake estimates for the four dishes were compared with the food intakes determined using the weighed food method. To illustrate the added value of the photographic method, food consumption differences between the dishes were explained by investigating the intakes of specific food items. Although they were not specifically trained for this purpose, the results demonstrated that the assessor estimates agreed between assessors and among various estimates made by the same

  8. 3D Microchannel Co-Culture: Method and Biological Validation

    PubMed Central

    Bauer, Maret; Su, Gui; Beebe, David J.; Friedl, Andreas

    2010-01-01

    Conventional 3D culture is typically performed in multi-well plates (e.g. 12 wells). The volumes and dimensions necessitate relatively large numbers of cells and fluid exchange steps are not easily automated limiting throughput. 3D microchannel culture can overcome these challenges simplifying 3D culture processes. However, the adaptation of immunocytochemical endpoint measurements and the validation of microchannel 3D culture with conventional 3D culture are needed before widespread adoption can occur. Here we use a breast carcinoma growth model governed by complex and reciprocal interactions between epithelial carcinoma cells and mesenchymal fibroblasts to validate the 3D microculture system. Specifically, we report the use of a 3D microchannel co-culture assay platform to interrogate paracrine signalling pathways in breast cancer. Using a previously validated 3D co-culture of human mammary fibroblasts and T47D breast carcinoma cells, we demonstrate the use of arrayed microchannels to analyze paracrine signalling pathways and screen for inhibitors. Results in both conventional format (multiwell plate) and microchannels were comparable. This technology represents a significant advancement for high-throughput screening in individual patients and for drug discovery by enabling the use of 3D co-culture models via smaller sample requirements and compatibility with existing HTS infrastructure (e.g. automated liquid handlers, scanners). PMID:20577680

  9. Validation of a modified early warning score-linked Situation-Background-Assessment-Recommendation communication tool: A mixed methods study.

    PubMed

    Burger, Debora; Jordan, Sue; Kyriacos, Una

    2017-09-01

    To develop and validate a modified Situation-Background-Assessment-Recommendation communication tool incorporating components of the Cape Town modified early warning score vital signs chart for reporting early signs of clinical deterioration. Reporting early signs of physiological and clinical deterioration could prevent "failure to rescue" or unexpected intensive care admission, cardiac arrest or death. A structured communication tool incorporating physiological and clinical parameters allows nurses to provide pertinent information about a deteriorating patient in a logical order. Mixed methods instrument development and validation. We used a sequential three-phase method: cognitive interviews, content validation and inter-rater reliability testing to validate a self-designed communication tool. Participants were purposively selected expert nurses and doctors in government-sector hospitals in Cape Town. Cognitive interviews with five experts prompted most changes to the communication tool: 15/42 (35.71%) items were modified. Content validity of the revised tool was rated highly, meeting a predetermined agreement threshold of ≥70% among 18 experts: 4/49 (8.2%) items were modified. Inter-rater reliability testing by two nurses indicated substantial to full agreement (Cohen's kappa .61-1) on 37/45 (82%) items. The one item achieving slight agreement (Cohen's kappa .20) indicated a difference in clinical judgement. The high overall percentage agreement (82%) suggests that the modified items are sound. Overall, 45 items remained on the validated tool. The first modified early warning score-linked Situation-Background-Assessment-Recommendation communication tool developed in South Africa was found to be valid and reliable in a local context. Nurses in South Africa can use the validated tool to provide doctors with pertinent information about a deteriorating patient in a logical order to prevent a serious adverse event. Our findings provide a reference for other African countries to develop and validate

  10. Design, development and method validation of a novel multi-resonance microwave sensor for moisture measurement.

    PubMed

    Peters, Johanna; Taute, Wolfgang; Bartscher, Kathrin; Döscher, Claas; Höft, Michael; Knöchel, Reinhard; Breitkreutz, Jörg

    2017-04-08

    Microwave sensor systems using resonance technology at a single resonance in the range of 2-3 GHz have been shown to be a rapid and reliable tool for moisture determination in solid materials, including pharmaceutical granules. So far, their application has been limited to lower moisture ranges, or limitations above certain moisture contents had to be accepted. The aim of the present study was to develop a novel multi-resonance sensor system in order to expand the measurement range. Therefore, a novel sensor using additional resonances over a wide frequency band was designed and used to investigate inherent limitations of first-generation sensor systems and material-related limits. Using granule samples with different moisture contents, an experimental protocol for calibration and validation of the method was established. Pursuant to this protocol, a multiple linear regression (MLR) prediction model, built by correlating microwave moisture values to the moisture determined by Karl Fischer titration, was chosen and rated using conventional criteria such as the coefficient of determination (R²) and the root mean square error of calibration (RMSEC). Using different operators, different analysis dates and different ambient conditions, the method was fully validated following the guidance of ICH Q2(R1). The study clearly showed explanations for the measurement uncertainties of first-generation sensor systems, which confirmed the approach of overcoming these by using additional resonances. The established prediction model could be validated in the range of 7.6-19.6%, demonstrating its fitness for its future purpose, the determination of moisture content during wet granulations. Copyright © 2017 Elsevier B.V. All rights reserved.
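
    A minimal sketch of the calibration step described above: a multiple linear regression from multi-resonance readings to Karl Fischer moisture, rated by R² and RMSEC. The feature layout and data are synthetic assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 4))   # stand-in for per-resonance readings (4 resonances)
y = 12.0 + X @ np.array([2.0, 1.2, 0.6, 0.3]) + rng.normal(0, 0.3, 40)  # moisture %

model = LinearRegression().fit(X, y)   # MLR calibration against Karl Fischer values
pred = model.predict(X)
print(f"R^2 = {r2_score(y, pred):.3f}, "
      f"RMSEC = {mean_squared_error(y, pred) ** 0.5:.3f} %")
```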

  11. Convergent validity of a novel method for quantifying rowing training loads.

    PubMed

    Tran, Jacqueline; Rice, Anthony J; Main, Luana C; Gastin, Paul B

    2015-01-01

    Elite rowers complete rowing-specific and non-specific training, incorporating continuous and interval-like efforts spanning the intensity spectrum. However, established training load measures are unsuitable for use in some modes and intensities. Consequently, a new measure known as the T2minute method was created. The method quantifies load as the time spent in a range of training zones (time-in-zone), multiplied by intensity- and mode-specific weighting factors that scale the relative stress of different intensities and modes to the demands of on-water rowing. The purpose of this study was to examine the convergent validity of the T2minute method with Banister's training impulse (TRIMP), Lucia's TRIMP and Session-RPE when quantifying elite rowing training. Fourteen elite rowers (12 males, 2 females) were monitored during four weeks of routine training. Unadjusted T2minute loads (using coaches' estimates of time-in-zone) demonstrated moderate-to-strong correlations with Banister's TRIMP, Lucia's TRIMP and Session-RPE (rho: 0.58, 0.55 and 0.42, respectively). Adjusting T2minute loads by using actual time-in-zone data resulted in stronger correlations between the T2minute method and Banister's TRIMP and Lucia's TRIMP (rho: 0.85 and 0.81, respectively). The T2minute method is an appropriate in-field measure of elite rowing training loads, particularly when actual time-in-zone values are used to quantify load.
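
    As described above, the T2minute load is time-in-zone multiplied by intensity- and mode-specific weighting factors and summed. A minimal sketch, with placeholder weighting values (the published factors are not given in the record):

```python
def t2minute_load(time_in_zone: dict[str, float],
                  weights: dict[str, float]) -> float:
    """time_in_zone: minutes per (mode, zone) key; weights: factor per key
    scaling the effort to on-water-rowing-equivalent minutes."""
    return sum(minutes * weights[key] for key, minutes in time_in_zone.items())

# Hypothetical session: 40 min rowing in zone T2, 20 min cycling in zone T3
session = {"row_T2": 40.0, "bike_T3": 20.0}
weights = {"row_T2": 1.0, "bike_T3": 0.8}   # placeholder weighting factors
print(t2minute_load(session, weights))       # -> 56.0 equivalent minutes
```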

  12. 25 CFR 309.8 - For marketing purposes, what is the recommended method of identifying authentic Indian products?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    For marketing purposes, what is the recommended method of identifying authentic Indian products? 309.8 Section 309.8 Indians INDIAN ARTS AND CRAFTS BOARD, DEPARTMENT OF THE INTERIOR PROTECTION OF INDIAN ARTS AND CRAFTS PRODUCTS § 309.8 For marketing purposes, what is...

  13. 25 CFR 309.8 - For marketing purposes, what is the recommended method of identifying authentic Indian products?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    For marketing purposes, what is the recommended method of identifying authentic Indian products? 309.8 Section 309.8 Indians INDIAN ARTS AND CRAFTS BOARD, DEPARTMENT OF THE INTERIOR PROTECTION OF INDIAN ARTS AND CRAFTS PRODUCTS § 309.8 For marketing purposes, what is...

  14. 25 CFR 309.8 - For marketing purposes, what is the recommended method of identifying authentic Indian products?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    For marketing purposes, what is the recommended method of identifying authentic Indian products? 309.8 Section 309.8 Indians INDIAN ARTS AND CRAFTS BOARD, DEPARTMENT OF THE INTERIOR PROTECTION OF INDIAN ARTS AND CRAFTS PRODUCTS § 309.8 For marketing purposes, what is...

  15. 25 CFR 309.8 - For marketing purposes, what is the recommended method of identifying authentic Indian products?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    For marketing purposes, what is the recommended method of identifying authentic Indian products? 309.8 Section 309.8 Indians INDIAN ARTS AND CRAFTS BOARD, DEPARTMENT OF THE INTERIOR PROTECTION OF INDIAN ARTS AND CRAFTS PRODUCTS § 309.8 For marketing purposes, what is...

  16. 25 CFR 309.8 - For marketing purposes, what is the recommended method of identifying authentic Indian products?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    For marketing purposes, what is the recommended method of identifying authentic Indian products? 309.8 Section 309.8 Indians INDIAN ARTS AND CRAFTS BOARD, DEPARTMENT OF THE INTERIOR PROTECTION OF INDIAN ARTS AND CRAFTS PRODUCTS § 309.8 For marketing purposes, what is...

  17. Development and validation of simple titrimetric method for the determination of magnesium content in esomeprazole magnesium.

    PubMed

    Haddadin, R N; Issa, A Y

    2011-07-01

    A simple and inexpensive titrimetric method for the determination of magnesium ion in esomeprazole magnesium raw material was developed and validated according to International Conference on Harmonization guidelines and the United States Pharmacopoeia. The method depends on complex formation between EDTA and magnesium ion. The method was proven to be valid, equivalent and useful as an alternative method to the current pharmacopeial methods that are based on atomic absorption spectrometry.
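
    The record states the method rests on 1:1 EDTA-magnesium complex formation. A minimal sketch of the standard complexometric calculation that follows from this (the paper's exact procedure and figures are not given; all inputs below are examples):

```python
MG_MOLAR_MASS = 24.305  # g/mol

def magnesium_percent(v_edta_ml: float, m_edta: float, sample_mg: float) -> float:
    """EDTA binds Mg2+ 1:1, so moles Mg = V(L) * M(mol/L); return % w/w."""
    mg_mass_mg = v_edta_ml / 1000.0 * m_edta * MG_MOLAR_MASS * 1000.0
    return 100.0 * mg_mass_mg / sample_mg

# Example: 10.0 mL of 0.01 M EDTA against an 80 mg sample -> ~3.04 % Mg
print(magnesium_percent(v_edta_ml=10.0, m_edta=0.01, sample_mg=80.0))
```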

  18. Development and Validation of Simple Titrimetric Method for the Determination of Magnesium Content in Esomeprazole Magnesium

    PubMed Central

    Haddadin, R. N.; Issa, A. Y.

    2011-01-01

    A simple and inexpensive titrimetric method for the determination of magnesium ion in esomeprazole magnesium raw material was developed and validated according to International Conference on Harmonization guidelines and the United States Pharmacopoeia. The method depends on complex formation between EDTA and magnesium ion. The method was proven to be valid, equivalent and useful as an alternative method to the current pharmacopeial methods that are based on atomic absorption spectrometry. PMID:22707837

  19. Essential validation methods for E. coli strains created by chromosome engineering.

    PubMed

    Tiruvadi Krishnan, Sriram; Moolman, M Charl; van Laar, Theo; Meyer, Anne S; Dekker, Nynke H

    2015-01-01

    Chromosome engineering encompasses a collection of homologous recombination-based techniques that are employed to modify the genome of a model organism in a controlled fashion. Such techniques are widely used in both fundamental and industrial research to introduce multiple insertions in the same Escherichia coli strain. To date, λ-Red recombination (also known as recombineering) and P1 phage transduction are the most successfully implemented chromosome engineering techniques in E. coli. However, due to errors that can occur during the strain creation process, reliable validation methods are essential upon alteration of a strain's chromosome. Polymerase chain reaction (PCR)-based methods and DNA sequence analysis are rapid and powerful methods to verify successful integration of DNA sequences into a chromosome. Even though these verification methods are necessary, they may not be sufficient in detecting all errors, imposing the requirement of additional validation methods. For example, as extraneous insertions may occur during recombineering, we highlight the use of Southern blotting to detect their presence. These unwanted mutations can be removed via transducing the region of interest into the wild type chromosome using P1 phages. However, in doing so one must verify that both the P1 lysate and the strains utilized are free from contamination with temperate phages, as these can lysogenize inside a cell as a large plasmid. Thus, we illustrate various methods to probe for temperate phage contamination, including cross-streak agar and Evans Blue-Uranine (EBU) plate assays, whereby the latter is a newly reported technique for this purpose in E. coli. Lastly, we discuss methodologies for detecting defects in cell growth and shape characteristics, which should be employed as an additional check. The simple, yet crucial validation techniques discussed here can be used to reliably verify any chromosomally engineered E. coli strains for errors such as non

  20. Validation and Continued Development of Methods for Spheromak Simulation

    NASA Astrophysics Data System (ADS)

    Benedett, Thomas

    2016-10-01

    The HIT-SI experiment has demonstrated stable sustainment of spheromaks. Determining how the underlying physics extrapolate to larger, higher-temperature regimes is of prime importance in determining the viability of the inductively-driven spheromak. It is thus prudent to develop and validate a computational model that can be used to study current results and study the effect of possible design choices on plasma behavior. A zero-beta Hall-MHD model has shown good agreement with experimental data at 14.5 kHz injector operation. Experimental observations at higher frequency, where the best performance is achieved, indicate pressure effects are important and likely required to attain quantitative agreement with simulations. Efforts to extend the existing validation to high frequency (36-68 kHz) using an extended MHD model implemented in the PSI-TET arbitrary-geometry 3D MHD code will be presented. An implementation of anisotropic viscosity, a feature observed to improve agreement between NIMROD simulations and experiment, will also be presented, along with investigations of flux conserver features and their impact on density control for future SIHI experiments. Work supported by DoE.

  1. FIELD VALIDATION OF SEDIMENT TOXICITY IDENTIFICATION AND EVALUATION METHODS

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  2. Voltammetric determination of copper in selected pharmaceutical preparations--validation of the method.

    PubMed

    Lutka, Anna; Maruszewska, Małgorzata

    2011-01-01

    The conditions for the voltammetric determination of copper in pharmaceutical preparations were established and validated. The three selected preparations, Zincuprim (A), Wapń, cynk, miedź z wit. C (B) and Vigor complete (V), contained different salts and quantities of copper(II) and an increasing number of accompanying ingredients. To transfer the copper into solution, samples of powdered tablets of the first and second preparations underwent extraction, and those of the third underwent mineralization. The concentration of copper in solution was determined by differential pulse (DP) voltammetry using the standard comparison technique. In the validation process, the selectivity, accuracy, precision and linearity of the DP determination of copper in the three preparations were estimated. Copper was determined within the concentration range of 1-9 ppm (1-9 microg/mL): the mean recoveries approached 102% (A), 100% (B) and 102% (V); the relative standard deviations (RSD) of the determinations were 0.79-1.59% (A), 0.62-0.85% (B) and 1.68-2.28% (V), respectively. The mean recoveries and RSDs satisfied the requirements for analyte concentrations at the 1-10 ppm level. Statistical verification confirmed that the tested voltammetric method is suitable for the determination of copper in pharmaceutical preparations.

  3. Development and Validation of an HPLC Method for the Analysis of Sirolimus in Drug Products

    PubMed Central

    Islambulchilar, Ziba; Ghanbarzadeh, Saeed; Emami, Shahram; Valizadeh, Hadi; Zakeri-Milani, Parvin

    2012-01-01

    Purpose: The aim of this study was to develop a simple, rapid and sensitive reverse phase high performance liquid chromatography (RP-HPLC) method for quantification of sirolimus (SRL) in pharmaceutical dosage forms. Methods: The chromatographic system employs isocratic elution using a Knauer C18 column, 5 μm, 4.6 × 150 mm, with a mobile phase consisting of acetonitrile and ammonium acetate buffer at a flow rate of 1.5 ml/min. The analyte was detected and quantified at 278 nm using an ultraviolet detector. The method was validated as per ICH guidelines. Results: The standard curve was found to have a linear relationship (r² > 0.99) over the analytical range of 125-2000 ng/ml. For all quality control (QC) standards in intraday and interday assays, the accuracy and precision ranges were -0.96 to 6.30 and 0.86 to 13.74, respectively, demonstrating precision and accuracy over the analytical range. Samples were stable during the preparation and analysis procedure. Conclusion: The rapid and sensitive method developed can therefore be used for the routine analysis of sirolimus, such as in dissolution and stability assays of pre- and post-marketed dosage forms. PMID:24312784

  4. Polymorphism in nimodipine raw materials: development and validation of a quantitative method through differential scanning calorimetry.

    PubMed

    Riekes, Manoela Klüppel; Pereira, Rafael Nicolay; Rauber, Gabriela Schneider; Cuffini, Silvia Lucia; de Campos, Carlos Eduardo Maduro; Silva, Marcos Antonio Segatto; Stulzer, Hellen Karine

    2012-11-01

    Due to the physical-chemical and therapeutic impacts of polymorphism, its monitoring in raw materials is necessary. The purpose of this study was to develop and validate a quantitative method to determine the polymorphic content of nimodipine (NMP) raw materials based on differential scanning calorimetry (DSC). The polymorphs required for the development of the method were characterized through DSC, X-ray powder diffraction (XRPD) and Raman spectroscopy and their polymorphic identity was confirmed. The developed method was found to be linear, robust, precise, accurate and specific. Three different samples obtained from distinct suppliers (NMP 1, NMP 2 and NMP 3) were firstly characterized through XRPD and DSC as polymorphic mixtures. The determination of their polymorphic identity revealed that all samples presented the Modification I (Mod I) or metastable form in greatest proportion. Since the commercial polymorph is Mod I, the polymorphic characteristic of the samples analyzed needs to be investigated. Thus, the proposed method provides a useful tool for the monitoring of the polymorphic content of NMP raw materials.

  5. Sample size considerations of prediction-validation methods in high-dimensional data for survival outcomes.

    PubMed

    Pang, Herbert; Jung, Sin-Ho

    2013-04-01

    A variety of prediction methods are used to relate high-dimensional genome data with a clinical outcome using a prediction model. Once a prediction model is developed from a data set, it should be validated using a resampling method or an independent data set. Although the existing prediction methods have been intensively evaluated by many investigators, there has not been a comprehensive study investigating the performance of the validation methods, especially with a survival clinical outcome. Understanding the properties of the various validation methods can allow researchers to perform more powerful validations while controlling for type I error. In addition, sample size calculation strategy based on these validation methods is lacking. We conduct extensive simulations to examine the statistical properties of these validation strategies. In both simulations and a real data example, we have found that 10-fold cross-validation with permutation gave the best power while controlling type I error close to the nominal level. Based on this, we have also developed a sample size calculation method that will be used to design a validation study with a user-chosen combination of prediction. Microarray and genome-wide association studies data are used as illustrations. The power calculation method in this presentation can be used for the design of any biomedical studies involving high-dimensional data and survival outcomes.
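
    A minimal sketch of the best-performing strategy reported above, 10-fold cross-validation with a permutation test, using scikit-learn. A classifier on synthetic high-dimensional data stands in for the survival setting, which scikit-learn does not handle directly:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score

# Toy high-dimensional data: 100 samples, 500 features, few informative
X, y = make_classification(n_samples=100, n_features=500, n_informative=10,
                           random_state=0)

score, perm_scores, p_value = permutation_test_score(
    LogisticRegression(max_iter=1000), X, y,
    cv=10, n_permutations=200, random_state=0)

print(f"CV accuracy {score:.3f}, permutation p-value {p_value:.4f}")
```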

  6. Sample Size Considerations of Prediction-Validation Methods in High-Dimensional Data for Survival Outcomes

    PubMed Central

    Pang, Herbert; Jung, Sin-Ho

    2013-01-01

    A variety of prediction methods are used to relate high-dimensional genome data with a clinical outcome using a prediction model. Once a prediction model is developed from a data set, it should be validated using a resampling method or an independent data set. Although the existing prediction methods have been intensively evaluated by many investigators, there has not been a comprehensive study investigating the performance of the validation methods, especially with a survival clinical outcome. Understanding the properties of the various validation methods can allow researchers to perform more powerful validations while controlling for type I error. In addition, a sample size calculation strategy based on these validation methods is lacking. We conduct extensive simulations to examine the statistical properties of these validation strategies. In both simulations and a real data example, we have found that 10-fold cross-validation with permutation gave the best power while controlling type I error close to the nominal level. Based on this, we have also developed a sample size calculation method for designing a validation study with a user-chosen combination of prediction and validation methods. Microarray and genome-wide association studies data are used as illustrations. The power calculation method presented here can be used for the design of any biomedical studies involving high-dimensional data and survival outcomes. PMID:23471879

  7. ECVAM's approach to intellectual property rights in the validation of alternative methods.

    PubMed

    Linge, Jens P; Hartung, Thomas

    2007-08-01

    In this article, we discuss how intellectual property rights affect the validation of alternative methods at ECVAM. We point out recent cases and summarise relevant EU and OECD documents. Finally, we discuss guidelines for dealing with intellectual property rights during the validation of alternative methods at ECVAM.

  8. Critical Values for Lawshe's Content Validity Ratio: Revisiting the Original Methods of Calculation

    ERIC Educational Resources Information Center

    Ayre, Colin; Scally, Andrew John

    2014-01-01

    The content validity ratio originally proposed by Lawshe is widely used to quantify content validity and yet methods used to calculate the original critical values were never reported. Methods for original calculation of critical values are suggested along with tables of exact binomial probabilities.
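
    The record above does not reproduce the calculations, but Lawshe's ratio and the exact-binomial logic it mentions are standard and can be sketched briefly (the one-tailed alpha of 0.05 is an assumption; Ayre and Scally discuss the appropriate choice):

    ```python
    from scipy.stats import binom

    def cvr(n_essential, n_panel):
        # Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2)
        return (n_essential - n_panel / 2) / (n_panel / 2)

    def cvr_critical(n_panel, alpha=0.05):
        """Smallest CVR whose 'essential' count could not plausibly arise by
        chance agreement, using exact binomial probabilities with p = 0.5."""
        for n_e in range(n_panel // 2, n_panel + 1):
            if binom.sf(n_e - 1, n_panel, 0.5) <= alpha:  # P(X >= n_e)
                return cvr(n_e, n_panel)
        return 1.0

    print(cvr_critical(10))  # a panel of 10 experts -> critical CVR of 0.8
    ```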

  9. VALIDATION OF A METHOD FOR ESTIMATING LONG-TERM EXPOSURES BASED ON SHORT-TERM MEASUREMENTS

    EPA Science Inventory

    A method for estimating long-term exposures from short-term measurements is validated using data from a recent EPA study of exposure to fine particles. The method was developed a decade ago but long-term exposure data to validate it did not exist until recently. In this paper, ...

  10. VALIDATION OF A METHOD FOR ESTIMATING LONG-TERM EXPOSURES BASED ON SHORT-TERM MEASUREMENTS

    EPA Science Inventory

    A method for estimating long-term exposures from short-term measurements is validated using data from a recent EPA study of exposure to fine particles. The method was developed a decade ago but long-term exposure data to validate it did not exist until recently. In this paper, ...

  11. VALIDATION OF A METHOD FOR ESTIMATING LONG-TERM EXPOSURES BASED ON SHORT-TERM MEASUREMENTS

    EPA Science Inventory

    A method for estimating long-term exposures from short-term measurements is validated using data from a recent EPA study of exposure to fine particles. The method was developed a decade ago but data to validate it did not exist until recently. In this paper, data from repeated ...

  12. A Model Incorporating the Rationale and Purpose for Conducting Mixed-Methods Research in Special Education and beyond

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Sutton, Ida L.

    2006-01-01

    This article provides a typology of reasons for conducting mixed-methods research in special education. The mixed-methods research process is described along with the role of the rationale and purpose of study. The reasons given in the literature for utilizing mixed-methods research are explicated, and the limitations of these reason frameworks…

  13. A Model Incorporating the Rationale and Purpose for Conducting Mixed-Methods Research in Special Education and beyond

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Sutton, Ida L.

    2006-01-01

    This article provides a typology of reasons for conducting mixed-methods research in special education. The mixed-methods research process is described along with the role of the rationale and purpose of study. The reasons given in the literature for utilizing mixed-methods research are explicated, and the limitations of these reason frameworks…

  14. General purpose nonlinear system solver based on Newton-Krylov method.

    SciTech Connect

    2013-12-01

    KINSOL is part of a software family called SUNDIALS: SUite of Nonlinear and Differential/Algebraic equation Solvers [1]. KINSOL is a general-purpose nonlinear system solver based on Newton-Krylov and fixed-point solver technologies [2].
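
    KINSOL itself is a C library, but the underlying idea — Newton iteration whose linear step is solved inexactly with a Krylov method, so the Jacobian is never formed explicitly — can be illustrated with SciPy's analogous solver (the toy system below is invented):

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    def residual(u):
        # a small nonlinear system F(u) = 0
        return np.array([u[0] + 0.5 * u[1] ** 2 - 1.0,
                         np.sin(u[0]) + u[1] - 0.5])

    # Jacobian-vector products are approximated by finite differences, and the
    # Newton step is solved with a Krylov method (LGMRES by default)
    sol = newton_krylov(residual, np.zeros(2), f_tol=1e-10)
    print(sol, residual(sol))
    ```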

  15. Validation of two ribosomal RNA removal methods for microbial metatranscriptomics

    SciTech Connect

    He, Shaomei; Wurtzel, Omri; Singh, Kanwar; Froula, Jeff L; Yilmaz, Suzan; Tringe, Susannah G; Wang, Zhong; Chen, Feng; Lindquist, Erika A; Sorek, Rotem; Hugenholtz, Philip

    2010-10-01

    The predominance of rRNAs in the transcriptome is a major technical challenge in sequence-based analysis of cDNAs from microbial isolates and communities. Several approaches have been applied to deplete rRNAs from (meta)transcriptomes, but no systematic investigation of potential biases introduced by any of these approaches has been reported. Here we validated the effectiveness and fidelity of the two most commonly used approaches, subtractive hybridization and exonuclease digestion, as well as combinations of these treatments, on two synthetic five-microorganism metatranscriptomes using massively parallel sequencing. We found that the effectiveness of rRNA removal was a function of community composition and RNA integrity for these treatments. Subtractive hybridization alone introduced the least bias in relative transcript abundance, whereas exonuclease and in particular combined treatments greatly compromised mRNA abundance fidelity. Illumina sequencing itself can also compromise quantitative data analysis by introducing a G+C bias between runs.

  16. AOAC International methods committee guidelines for validation of qualitative and quantitative food microbiological official methods of analysis.

    PubMed

    Feldsine, Philip; Abeyta, Carlos; Andrews, Wallace H

    2002-01-01

    Responding to a need for a guide for conducting Official Method validation studies of microbiological methods, AOAC utilized the experience of three microbiologists who have been active in the field of method validation. In collaboration, a document was prepared which covered the following areas: terms and their definitions associated with the Official Methods program (e.g., reference methods, alternative methods, and ruggedness testing), protocols and validation requirements for qualitative methods versus those for quantitative methods, the concept of the precollaborative study, ruggedness testing, tests for significant differences, performance indicators, and the approval process. After its preparation, this document was reviewed by the members of the Methods Committee on Microbiology and Extraneous Materials and by members of the Official Methods Board. Herein is presented the approved version of that document.

  17. How valid and applicable are current diagnostic criteria and assessment methods for dentin hypersensitivity? An overview.

    PubMed

    Gernhardt, Christian R

    2013-03-01

    Although dentin hypersensitivity is a common clinical condition, generally reported by the patient after experiencing a sharp, short pain caused by one of several different external stimuli, it is often inadequately understood. The purpose of this paper is to discuss the different available diagnostic approaches and assessment methods in order to suggest a basis for properly diagnosing, monitoring, and measuring these challenging painful conditions related to dentin hypersensitivity in daily practice and scientific projects. A PubMed literature search strategy was used, including the following MeSH terms: "dentin sensitivity"[MeSH Terms] OR ("dentin"[All Fields] AND "sensitivity"[All Fields]) OR "dentin sensitivity"[All Fields] OR ("dentin"[All Fields] AND "hypersensitivity"[All Fields]) OR "dentin hypersensitivity"[All Fields] AND ("diagnosis"[Subheading] OR "diagnosis"[All Fields] OR "diagnosis"[MeSH Terms]) AND "assessment"[All Fields] AND ("methods"[Subheading] OR "methods"[All Fields] OR "methods"[MeSH Terms]). Furthermore, alternative terms such as "validity," "reliability," "root," "cervical," "diagnostic criteria," and "hypersensitivities" were additionally evaluated. The literature search, also including the alternative terms and journals, revealed only a small number of specific papers related to valid diagnosis, diagnostic criteria, and assessment methods of dentin hypersensitivity. Outcomes from these publications showed that the response to different stimuli varies substantially from one person to another and is, due to individual factors, often difficult to assess correctly. Furthermore, the cause of the reported pain can vary, and the patient's description of the history, symptoms, and discomfort might differ from one patient to another, not allowing a reliable and valid diagnosis. The dental practitioner, using a variety of diagnostic and measurement techniques each day, will often have difficulties in differentiating dentin hypersensitivity from

  18. A multivariate model and statistical method for validating tree grade lumber yield equations

    Treesearch

    Donald W. Seegrist

    1975-01-01

    Lumber yields within lumber grades can be described by a multivariate linear model. A method for validating lumber yield prediction equations when there are several tree grades is presented. The method is based on multivariate simultaneous test procedures.

  19. STATISTICAL VALIDATION OF SULFATE QUANTIFICATION METHODS USED FOR ANALYSIS OF ACID MINE DRAINAGE

    EPA Science Inventory

    Turbidimetric method (TM), ion chromatography (IC) and inductively coupled plasma atomic emission spectrometry (ICP-AES) with and without acid digestion have been compared and validated for the determination of sulfate in mining wastewater. Analytical methods were chosen to compa...

  20. A Systematic Review of Validated Methods for Identifying Cerebrovascular Accident or Transient Ischemic Attack Using Administrative Data

    PubMed Central

    Andrade, Susan E.; Harrold, Leslie R.; Tjia, Jennifer; Cutrona, Sarah L.; Saczynski, Jane S.; Dodd, Katherine S.; Goldberg, Robert J.; Gurwitz, Jerry H.

    2012-01-01

    Purpose To perform a systematic review of the validity of algorithms for identifying cerebrovascular accidents (CVAs) or transient ischemic attacks (TIAs) using administrative and claims data. Methods PubMed and Iowa Drug Information Service (IDIS) searches of the English language literature were performed to identify studies published between 1990 and 2010 that evaluated the validity of algorithms for identifying CVAs (ischemic and hemorrhagic strokes, intracranial hemorrhage and subarachnoid hemorrhage) and/or TIAs in administrative data. Two study investigators independently reviewed the abstracts and articles to determine relevant studies according to pre-specified criteria. Results A total of 35 articles met the criteria for evaluation. Of these, 26 articles provided data to evaluate the validity of stroke, 7 reported the validity of TIA, 5 reported the validity of intracranial bleeds (intracerebral hemorrhage and subarachnoid hemorrhage), and 10 studies reported the validity of algorithms to identify the composite endpoints of stroke/TIA or cerebrovascular disease. Positive predictive values (PPVs) varied depending on the specific outcomes and algorithms evaluated. Specific algorithms to evaluate the presence of stroke and intracranial bleeds were found to have high PPVs (80% or greater). Algorithms to evaluate TIAs in adult populations were generally found to have PPVs of 70% or greater. Conclusions The algorithms and definitions to identify CVAs and TIAs using administrative and claims data differ greatly in the published literature. The choice of the algorithm employed should be determined by the stroke subtype of interest. PMID:22262598

  1. Testing and Validation of the Dynamic Inertia Measurement Method

    NASA Technical Reports Server (NTRS)

    Chin, Alexander; Herrera, Claudia; Spivey, Natalie; Fladung, William; Cloutier, David

    2015-01-01

    This presentation describes the Dynamic Inertia Measurement (DIM) method and how it measures the inertia properties of an object by analyzing the frequency response functions measured during a ground vibration test (GVT). The DIM method has been in development at the University of Cincinnati and has shown success on a variety of small-scale test articles. The NASA AFRC version was modified for larger applications.

  2. A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation.

    PubMed

    Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf

    2016-04-26

    This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the Likelihood Ratio framework as defined within the Bayes' inference model. In the context of the inference of identity of source, the Likelihood Ratio is used to evaluate the strength of the evidence for a trace specimen, e.g. a fingermark, and a reference specimen, e.g. a fingerprint, to originate from common or different sources. Some theoretical aspects of probabilities necessary for this Guideline were discussed prior to its elaboration, which started after a workshop of forensic researchers and practitioners involved in this topic. In the workshop, the following questions were addressed: "which aspects of a forensic evaluation scenario need to be validated?", "what is the role of the LR as part of a decision process?" and "how to deal with uncertainty in the LR calculation?". The question "what to validate?" focuses on the validation methods and criteria, while "how to validate?" deals with the implementation of the validation protocol. Answers to these questions were deemed necessary for several objectives. First, concepts typical of validation standards [1], such as performance characteristics, performance metrics and validation criteria, will be adapted or applied by analogy to the LR framework. Second, a validation strategy will be defined. Third, validation methods will be described. Finally, a validation protocol and an example of a validation report will be proposed, which can be applied to the forensic fields developing and validating LR methods for the evaluation of the strength of evidence at source level under the following propositions.

  3. Guidelines for the verification and validation of expert system software and conventional software. Volume 3: Survey and documentation of expert system verification and validation methods. Final report

    SciTech Connect

    Groundwater, E.H.; Miller, L.A.; Mirsky, S.M.

    1995-05-01

    This report is the third volume in the final report for the Expert System Verification and Validation (V&V) project which was jointly sponsored by the Nuclear Regulatory Commission and the Electric Power Research Institute. The ultimate objective is the formulation of guidelines for the V&V of expert systems for use in nuclear power applications. The purpose of this activity was to survey and document techniques presently in use for expert system V&V. The survey effort included an extensive telephone interviewing program, site visits, and a thorough bibliographic search and compilation. The major finding was that V&V of expert systems is not nearly as established or prevalent as V&V of conventional software systems. When V&V was used for expert systems, it was almost always at the system validation stage after full implementation and integration usually employing the non-systematic dynamic method of "ad hoc testing." There were few examples of employing V&V in the early phases of development and only weak sporadic mention of the possibilities in the literature. There is, however, a very active research area concerning the development of methods and tools to detect problems with, particularly, rule-based expert systems. Four such static-testing methods were identified which were not discovered in a comprehensive review of conventional V&V methods in an earlier task.

  4. Validation of a generic quantitative (1)H NMR method for natural products analysis.

    PubMed

    Gödecke, Tanja; Napolitano, José G; Rodríguez-Brasco, María F; Chen, Shao-Nong; Jaki, Birgit U; Lankin, David C; Pauli, Guido F

    2013-01-01

    Nuclear magnetic resonance (NMR) spectroscopy is increasingly employed in the quantitative analysis and quality control (QC) of natural products (NP), including botanical dietary supplements (BDS). The establishment of QC protocols based on quantitative (1)H NMR (qHNMR) requires method validation. The aim was to develop and validate a generic qHNMR method, optimizing acquisition and processing parameters with specific attention to the requirements for the analysis of complex NP samples, including botanicals and purity assessment of NP isolates. In order to establish the validated qHNMR method, samples containing two highly pure reference materials were used. The influence of acquisition and processing parameters on the method validation was examined, and general aspects of method validation of qHNMR methods are discussed. Subsequently, the established method was applied to the analysis of two NP samples: a purified reference compound and a crude mixture. The accuracy and precision of qHNMR using internal or external calibration were compared, using a validated method suitable for complex samples. The impact of post-acquisition processing on method validation was examined using three software packages: TopSpin, Mnova and NUTS. The dynamic range of the qHNMR method developed was 5000:1 with a limit of detection (LOD) of better than 10 µM. The limit of quantification (LOQ) depends on the desired level of accuracy and the experiment time spent. This study revealed that acquisition parameters, processing parameters and processing software all contribute to qHNMR method validation. A validated method with a high dynamic range and a general workflow for qHNMR analysis of NP is proposed. Copyright © 2013 John Wiley & Sons, Ltd.

  5. Cortisol and cortisone ratio in urine: LC-MS/MS method validation and preliminary clinical application.

    PubMed

    Antonelli, Giorgia; Artusi, Carlo; Marinova, Mariela; Brugnolo, Laura; Zaninotto, Martina; Scaroni, Carla; Gatti, Rosalba; Mantero, Franco; Plebani, Mario

    2014-02-01

    The determination of the urinary cortisol/cortisone ratio is of clinical utility in cases of Cushing's syndrome and apparent mineralocorticoid excess, and also provides information on 11β-hydroxysteroid dehydrogenase (11β-HSD) type 2 activity. It is therefore of utmost importance to ensure accurate cortisol and cortisone measurement and establish appropriate reference ranges. After isotopic dilution of the urine, sample cleanup was performed with on-line solid-phase extraction; cortisol and cortisone, separated using a Zorbax Eclipse XDB-C18 HPLC analytical column, were analyzed by tandem mass spectrometry with an electrospray ionization source operated in positive ion mode. The method was linear up to concentrations of 625 and 1125 nmol/L, with lower limits of quantitation (LLOQ) of 5 and 6 nmol/L, for cortisol and cortisone, respectively. Within-run and between-run coefficients of variation were <5% and 6% for cortisol and 6% and 8% for cortisone, respectively. No ion suppression was observed. The non-parametric reference range for the cortisol/cortisone ratio was 0.14-1.09. A simple and sensitive liquid chromatography tandem mass spectrometry method was developed and validated for the measurement of cortisol and cortisone in urine. Our findings indicate that the proposed analytical method is suitable for routine purposes and useful in many pathological conditions.

  6. Flight critical system design guidelines and validation methods

    NASA Technical Reports Server (NTRS)

    Holt, H. M.; Lupton, A. O.; Holden, D. G.

    1984-01-01

    Efforts being expended at NASA-Langley to define a validation methodology, techniques for comparing advanced systems concepts, and design guidelines for characterizing fault tolerant digital avionics are described with an emphasis on the capabilities of AIRLAB, an environmentally controlled laboratory. AIRLAB has VAX 11/750 and 11/780 computers with an aggregate of 22 Mb memory and over 650 Mb storage, interconnected at 256 kbaud. An additional computer is programmed to emulate digital devices. Ongoing work is easily accessed at user stations by either chronological or key word indexing. The CARE III program aids in analyzing the capabilities of test systems to recover from faults. An additional code, the semi-Markov unreliability program (SURE) generates upper and lower reliability bounds. The AIRLAB facility is mainly dedicated to research on designs of digital flight-critical systems which must have acceptable reliability before incorporation into aircraft control systems. The digital systems would be too costly to submit to a full battery of flight tests and must be initially examined with the AIRLAB simulation capabilities.

  7. Flight critical system design guidelines and validation methods

    NASA Technical Reports Server (NTRS)

    Holt, H. M.; Lupton, A. O.; Holden, D. G.

    1984-01-01

    Efforts being expended at NASA-Langley to define a validation methodology, techniques for comparing advanced systems concepts, and design guidelines for characterizing fault tolerant digital avionics are described with an emphasis on the capabilities of AIRLAB, an environmentally controlled laboratory. AIRLAB has VAX 11/750 and 11/780 computers with an aggregate of 22 Mb memory and over 650 Mb storage, interconnected at 256 kbaud. An additional computer is programmed to emulate digital devices. Ongoing work is easily accessed at user stations by either chronological or key word indexing. The CARE III program aids in analyzing the capabilities of test systems to recover from faults. An additional code, the semi-Markov unreliability program (SURE) generates upper and lower reliability bounds. The AIRLAB facility is mainly dedicated to research on designs of digital flight-critical systems which must have acceptable reliability before incorporation into aircraft control systems. The digital systems would be too costly to submit to a full battery of flight tests and must be initially examined with the AIRLAB simulation capabilities.

  8. [Genetic tests: definition, methods, validity and clinical utility].

    PubMed

    Lagos L, Marcela; Poggi M, Helena

    2010-01-01

    The knowledge of the human genome has led to an explosion of genetic tests available for clinical use. The methodologies used in these tests vary widely, allowing the study of everything from chromosomes to a single nucleotide. Prior to use in the clinical setting, these tests should undergo an evaluation that includes analytical and clinical validation and determination of clinical utility, as for any other test, including requirements for quality assurance. Recently, the CDC (Centers for Disease Control and Prevention, USA) published a guideline for Good Laboratory Practices for Molecular Genetic Testing for Heritable Diseases and Conditions, covering the pre-analytical, analytical and post-analytical phases of the tests. The document covers the importance of proper selection of tests, the availability of information on the performance of the techniques used, the quality control practices, the training of personnel involved and the reporting of results, to allow adequate interpretation, including sensitivity and specificity. Considering that recent advances in genetics have changed and will continue to affect clinical practice, genetic tests must meet quality and safety requirements to enable their optimal use.

  9. Validation of a laboratory method of measuring postpartum blood loss.

    PubMed

    Chua, S; Ho, L M; Vanaja, K; Nordstrom, L; Roy, A C; Arulkumaran, S

    1998-01-01

    Laboratory methods give more accurate measurement of blood loss in the postpartum period than visual estimation. In order to evaluate a laboratory method used to quantify blood loss postpartum, blood lost at gynecological operations was collected in a measuring bottle. The measured amount of blood (50-1,000 ml) was then poured onto absorbent paper towels and sanitary pads, in order to mimic conditions when measuring blood loss in clinical trials in the postpartum period. The amount of blood absorbed onto the absorbent paper and sanitary pads was measured by a rapid method of automatic extraction and photometric measurement of alkaline hematin. The study shows that the method provides a reliable and accurate means of measuring blood loss. The error in each case was less than 10% with an intraclass correlation coefficient of almost 1.

  10. State of the art in the validation of screening methods for the control of antibiotic residues: is there a need for further development?

    PubMed

    Gaudin, Valérie

    2017-09-01

    Screening methods are used as a first-line approach to detect the presence of antibiotic residues in food of animal origin. The validation process guarantees that the method is fit-for-purpose, suited to regulatory requirements, and provides evidence of its performance. This article is focused on intra-laboratory validation. The first step in validation is characterisation of performance, and the second step is the validation itself with regard to pre-established criteria. The validation approaches can be absolute (a single method) or relative (comparison of methods), overall (combination of several characteristics in one) or criterion-by-criterion. Various approaches to validation, in the form of regulations, guidelines or standards, are presented and discussed to draw conclusions on their potential application for different residue screening methods, and to determine whether or not they reach the same conclusions. The approach by comparison of methods is not suitable for screening methods for antibiotic residues. The overall approaches, such as probability of detection (POD) and accuracy profile, are increasingly used in other fields of application. They may be of interest for screening methods for antibiotic residues. Finally, the criterion-by-criterion approach (Decision 2002/657/EC and of European guideline for the validation of screening methods), usually applied to the screening methods for antibiotic residues, introduced a major characteristic and an improvement in the validation, i.e. the detection capability (CCβ). In conclusion, screening methods are constantly evolving, thanks to the development of new biosensors or liquid chromatography coupled to tandem-mass spectrometry (LC-MS/MS) methods. There have been clear changes in validation approaches these last 20 years. Continued progress is required and perspectives for future development of guidelines, regulations and standards for validation are presented here.

  11. Analytical and clinical validation of the new Abbott Architect 25(OH)D assay: fit for purpose?

    PubMed

    Cavalier, Etienne; Lukas, Pierre; Bekaert, Anne-Catherine; Carlisi, Agnès; Le Goff, Caroline; Delanaye, Pierre; Souberbielle, Jean-Claude

    2017-03-01

    We provide a clinical and analytical evaluation of the reformulated version of the Abbott Architect 25-hydroxyvitamin D assay. We compared this assay with three commercial automated immunoassays and against a VDSP-traceable liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) method in six different populations. We also supplemented 40 healthy volunteers with either 600,000 IU of vitamin D2 or 100,000 IU of vitamin D3 to evaluate the performance of the immunoassays vs. the LC-MS/MS. Precision and limit of quantification were assessed, and 25(OH)D2 and C3-epimer recovery were calculated. Two hundred and forty samples obtained from healthy Caucasians and Africans, osteoporotic, hemodialyzed and intensive care patients, and third-trimester pregnant women were analyzed by all methods. Correlation was studied using Passing-Bablok and Bland-Altman analysis. The concordance correlation coefficient (CCC) was calculated to evaluate agreement between the immunoassays and LC-MS/MS. We verified whether patients were classified consistently by the immunoassays when they took vitamin D2 or vitamin D3, after 1, 7 and 28 days. We observed excellent analytical features and a very good correlation with the LC-MS/MS results in the overall population. Compared to the other immunoassays, the concordance of the new Abbott assay with the LC-MS/MS was at least similar, and often better, in diseased populations. Although the cross-reactivity with 25(OH)D2 was not 100%, there was no significant difference in the classification of the patients, whether supplemented with D2 or D3, after 7 or 28 days. This modified version of the Abbott Architect assay is clearly improved compared with the previous one and presents better agreement with the LC-MS/MS.
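
    For readers unfamiliar with the agreement statistics named in this record, the sketch below computes Bland-Altman limits of agreement and Lin's concordance correlation coefficient (CCC) on invented paired measurements; it is not the authors' analysis:

    ```python
    import numpy as np

    def bland_altman(x, y):
        # bias and 95% limits of agreement between two methods
        d = y - x
        bias, sd = d.mean(), d.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    def concordance_cc(x, y):
        # Lin's concordance correlation coefficient
        sxy = np.cov(x, y, ddof=1)[0, 1]
        return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

    rng = np.random.default_rng(3)
    lcms = rng.uniform(10, 60, 240)               # hypothetical LC-MS/MS values, ng/mL
    immuno = 1.05 * lcms + rng.normal(0, 3, 240)  # hypothetical immunoassay readings
    print(bland_altman(lcms, immuno))
    print(concordance_cc(lcms, immuno))
    ```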

  12. Validated spectrofluorimetric method for determination of selected aminoglycosides

    NASA Astrophysics Data System (ADS)

    Omar, Mahmoud A.; Ahmed, Hytham M.; Hammad, Mohamed A.; Derayea, Sayed M.

    2015-01-01

    A new, sensitive, and selective spectrofluorimetric method was developed for the determination of three aminoglycoside drugs in different dosage forms, namely neomycin sulfate (NEO), tobramycin (TOB) and kanamycin sulfate (KAN). The method is based on the Hantzsch condensation reaction between the primary amino group of the aminoglycosides and acetylacetone and formaldehyde at pH 2.7, yielding highly fluorescent yellow derivatives measured at emission and excitation wavelengths of 471 nm and 410 nm, respectively. The fluorescence intensity was directly proportional to the concentration over the ranges 10-60, 40-100 and 5-50 ng/mL for NEO, TOB and KAN, respectively. The proposed method was applied successfully to the determination of these drugs in their pharmaceutical dosage forms.

  13. Validation of a Numerical Method for Determining Liner Impedance

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Jones, Michael G.; Tanner, Sharon E.; Parrott, Tony L.

    1996-01-01

    This paper reports the initial results of a test series to evaluate a method for determining the normal incidence impedance of a locally reacting acoustically absorbing liner, located on the lower wall of a duct in a grazing incidence, multi-modal, non-progressive acoustic wave environment without flow. This initial evaluation is accomplished by testing the method's ability to converge to the known normal incidence impedance of a solid steel plate, and to the normal incidence impedance of an absorbing test specimen whose impedance was measured in a conventional normal incidence tube. The method is shown to converge to the normal incidence impedance values and thus to be an adequate tool for determining the impedance of specimens in a grazing incidence, multi-modal, non-progressive acoustic wave environment for a broad range of source frequencies.

  14. Validation of ESR analyzer using Westergren ESR method.

    PubMed

    Sikka, Meera; Tandon, Rajesh; Rusia, Usha; Madan, Nishi

    2007-07-01

    Erythrocyte sedimentation rate (ESR) is one of the most frequently ordered laboratory tests. ESR analyzers were developed to provide a quick and efficient measure of ESR. We compared the results of ESR obtained by an ESR analyzer with those obtained by the Westergren method in a group of 75 patients. Linear regression analysis showed a good correlation between the two results (r = 0.818, p < 0.01). The intra-class correlation was 0.82. The analyzer method had the advantages of safety, decreased technician time and improved patient care by providing quick results.

  15. School Discipline: Have We Lost Our Sense of Purpose in Our Search for a Good Method?

    ERIC Educational Resources Information Center

    Burton, Mary Alice Blanford

    The general economic and psychological evolution in America from a producer society to a consumer society has resulted in a conflict of purposes for American educators regarding school discipline. Consequently, contemporary American educators, unlike their forerunners, have ignored the long-term social goals of classroom discipline. They have,…

  16. Maladjustment of Bully-Victims: Validation with Three Identification Methods

    ERIC Educational Resources Information Center

    Yang, An; Li, Xiang; Salmivalli, Christina

    2016-01-01

    Although knowledge on the psychosocial (mal)adjustment of bully-victims, children who bully others and are victimised by others, has been increasing, the findings have been principally gained utilising a single method to identify bully-victims. The present study examined the psychosocial adjustment of bully-victims (as compared with pure bullies…

  17. Maladjustment of Bully-Victims: Validation with Three Identification Methods

    ERIC Educational Resources Information Center

    Yang, An; Li, Xiang; Salmivalli, Christina

    2016-01-01

    Although knowledge on the psychosocial (mal)adjustment of bully-victims, children who bully others and are victimised by others, has been increasing, the findings have been principally gained utilising a single method to identify bully-victims. The present study examined the psychosocial adjustment of bully-victims (as compared with pure bullies…

  18. The Language Teaching Methods Scale: Reliability and Validity Studies

    ERIC Educational Resources Information Center

    Okmen, Burcu; Kilic, Abdurrahman

    2016-01-01

    The aim of this research is to develop a scale to determine the language teaching methods used by English teachers. The research sample consisted of 300 English teachers who taught at Duzce University and in primary schools, secondary schools and high schools in the Provincial Management of National Education in the city of Duzce in 2013-2014…

  19. Obtaining Valid Response Rates: Considerations beyond the Tailored Design Method.

    ERIC Educational Resources Information Center

    Huang, Judy Y.; Hubbard, Susan M.; Mulvey, Kevin P.

    2003-01-01

    Reports on the use of the tailored design method (TDM) to achieve high survey response rates in two separate studies of the dissemination of Treatment Improvement Protocols (TIPs). Findings from these two studies identify six factors that may have influenced nonresponse, and show that use of the TDM does not, in itself, guarantee a high response rate. (SLD)

  20. Establishing Survey Validity and Reliability for American Indians Through “Think Aloud” and Test–Retest Methods

    PubMed Central

    Hauge, Cindy Horst; Jacobs-Knight, Jacque; Jensen, Jamie L.; Burgess, Katherine M.; Puumala, Susan E.; Wilton, Georgiana; Hanson, Jessica D.

    2015-01-01

    The purpose of this study was to use a mixed-methods approach to determine the validity and reliability of measurements used within an alcohol-exposed pregnancy prevention program for American Indian women. To develop validity, content experts provided input into the survey measures, and a “think aloud” methodology was conducted with 23 American Indian women. After revising the measurements based on this input, a test–retest was conducted with 79 American Indian women who were randomized to complete either the original measurements or the new, modified measurements. The test–retest revealed that some of the questions performed better for the modified version, whereas others appeared to be more reliable for the original version. The mixed-methods approach was a useful methodology for gathering feedback on survey measurements from American Indian participants and in indicating specific survey questions that needed to be modified for this population. PMID:25888693

  1. Use of multiple cluster analysis methods to explore the validity of a community outcomes concept map.

    PubMed

    Orsi, Rebecca

    2017-02-01

    Concept mapping is now a commonly-used technique for articulating and evaluating programmatic outcomes. However, research regarding validity of knowledge and outcomes produced with concept mapping is sparse. The current study describes quantitative validity analyses using a concept mapping dataset. We sought to increase the validity of concept mapping evaluation results by running multiple cluster analysis methods and then using several metrics to choose from among solutions. We present four different clustering methods based on analyses using the R statistical software package: partitioning around medoids (PAM), fuzzy analysis (FANNY), agglomerative nesting (AGNES) and divisive analysis (DIANA). We then used the Dunn and Davies-Bouldin indices to assist in choosing a valid cluster solution for a concept mapping outcomes evaluation. We conclude that the validity of the outcomes map is high, based on the analyses described. Finally, we discuss areas for further concept mapping methods research.
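
    The study used R's cluster package; as a rough Python analog (invented 2-D points standing in for concept-map statement coordinates), agglomerative solutions can be compared across cluster counts with the same two indices, where a higher Dunn index and a lower Davies-Bouldin index indicate a more valid partition:

    ```python
    import numpy as np
    from itertools import combinations
    from scipy.spatial.distance import cdist, pdist
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.metrics import davies_bouldin_score

    def dunn_index(X, labels):
        # smallest between-cluster gap divided by largest within-cluster diameter
        clusters = [X[labels == k] for k in np.unique(labels)]
        min_sep = min(cdist(a, b).min() for a, b in combinations(clusters, 2))
        max_diam = max(pdist(c).max() for c in clusters if len(c) > 1)
        return min_sep / max_diam

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=c, scale=0.5, size=(20, 2))
                   for c in ((0, 0), (3, 3), (0, 4))])  # stand-in coordinates

    for k in range(2, 7):
        labels = AgglomerativeClustering(n_clusters=k).fit_predict(X)
        print(k, round(dunn_index(X, labels), 3),
              round(davies_bouldin_score(X, labels), 3))
    ```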

  2. Validation of a new ELISA method for in vitro potency testing of hepatitis A vaccines.

    PubMed

    Morgeaux, S; Variot, P; Daas, A; Costanzo, A

    2013-01-01

    The goal of the project was to standardise a new in vitro method to replace the existing standard method for the determination of hepatitis A virus antigen content in hepatitis A vaccines (HAV) marketed in Europe. This became necessary due to issues with the previously used method, which required the use of commercial test kits. The selected candidate method, not based on commercial kits, had already been used for many years by an Official Medicines Control Laboratory (OMCL) for routine testing and batch release of HAV. After a pre-qualification phase (Phase 1) that showed the suitability of the commercially available critical ELISA reagents for the determination of antigen content in HAV marketed in Europe, an international collaborative study (Phase 2) was carried out in order to fully validate the method. Eleven laboratories took part in the collaborative study. They performed assays with the candidate standard method and, in parallel, for comparison purposes, with their own in-house validated methods where these were available. The study demonstrated that the new assay provides a more reliable and reproducible method than the existing standard method. A good correlation of the candidate standard method with the in vivo immunogenicity assay in mice was shown previously for both potent and sub-potent (stressed) vaccines. Thus, the new standard method validated during the collaborative study may be readily implemented by manufacturers and OMCLs for routine batch release, but also for in-process control or consistency testing. The new method was approved in October 2012 by Group of Experts 15 of the European Pharmacopoeia (Ph. Eur.) as the standard method for in vitro potency testing of HAV. The relevant texts will be revised accordingly. Critical reagents such as the coating reagent and detection antibodies have been adopted by the Ph. Eur. Commission and are available from the EDQM as Ph. Eur. Biological Reference Reagents (BRRs).

  3. Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    In multiple regression analysis, where the effectiveness of the resulting predictive equation is subject to shrinkage, it is especially important to evaluate the replicability of results. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…
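
    A minimal sketch of the double cross-validation idea (all data invented): fit the equation on each random half of the sample, predict the held-out half, and compare the two cross-validated correlations:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n, p = 120, 4
    X = rng.normal(size=(n, p))                       # hypothetical predictors
    y = X @ np.array([0.5, -0.3, 0.2, 0.0]) + rng.normal(size=n)

    idx = rng.permutation(n)
    a, b = idx[: n // 2], idx[n // 2 :]

    # fit on each half, predict the other half, correlate predictions with outcomes
    r_ab = np.corrcoef(LinearRegression().fit(X[a], y[a]).predict(X[b]), y[b])[0, 1]
    r_ba = np.corrcoef(LinearRegression().fit(X[b], y[b]).predict(X[a]), y[a])[0, 1]
    print(r_ab, r_ba)  # similar values suggest the equation's weights are stable
    ```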

  4. Cost-Benefit Considerations in Choosing among Cross-Validation Methods.

    ERIC Educational Resources Information Center

    Murphy, Kevin R.

    1984-01-01

    Outlines costs and benefits associated with different cross-validation strategies; in particular the way in which the study design affects the cost and benefits of different types of cross-validation. Suggests that the choice between empirical estimation methods and formula estimates involves a trade-off between accuracy and simplicity. (JAC)
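
    The record does not name the formula estimates it has in mind; the classic formula-based alternative to an empirical cross-validation study is a shrinkage correction such as Wherry's adjusted R², shown here for n observations and k predictors:

    ```latex
    R^2_{\mathrm{adj}} = 1 - \left(1 - R^2\right)\frac{n - 1}{n - k - 1}
    ```

    Formulas of this kind cost nothing beyond the original sample but estimate shrinkage less directly than an empirical holdout, which is the accuracy-simplicity trade-off the abstract describes.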

  5. A Validation of Elements, Methods, and Barriers to Inclusive High School Service-Learning Programs

    ERIC Educational Resources Information Center

    Dymond, Stacy K.; Chun, Eul Jung; Kim, Rah Kyung; Renzaglia, Adelle

    2013-01-01

    A statewide survey of coordinators of inclusive high school service-learning programs was conducted to validate elements, methods, and barriers to including students with and without disabilities in service-learning. Surveys were mailed to 655 service-learning coordinators; 190 (29%) returned a completed survey. Findings support the validity of…

  6. Evaluating the Social Validity of the Early Start Denver Model: A Convergent Mixed Methods Study

    ERIC Educational Resources Information Center

    Ogilvie, Emily; McCrudden, Matthew T.

    2017-01-01

    An intervention has social validity to the extent that it is socially acceptable to participants and stakeholders. This pilot convergent mixed methods study evaluated parents' perceptions of the social validity of the Early Start Denver Model (ESDM), a naturalistic behavioral intervention for children with autism. It focused on whether the parents…

  7. A Framework for Mixing Methods in Quantitative Measurement Development, Validation, and Revision: A Case Study

    ERIC Educational Resources Information Center

    Luyt, Russell

    2012-01-01

    A framework for quantitative measurement development, validation, and revision that incorporates both qualitative and quantitative methods is introduced. It extends and adapts Adcock and Collier's work, and thus, facilitates understanding of quantitative measurement development, validation, and revision as an integrated and cyclical set of…

  8. A Validation of Elements, Methods, and Barriers to Inclusive High School Service-Learning Programs

    ERIC Educational Resources Information Center

    Dymond, Stacy K.; Chun, Eul Jung; Kim, Rah Kyung; Renzaglia, Adelle

    2013-01-01

    A statewide survey of coordinators of inclusive high school service-learning programs was conducted to validate elements, methods, and barriers to including students with and without disabilities in service-learning. Surveys were mailed to 655 service-learning coordinators; 190 (29%) returned a completed survey. Findings support the validity of…

  9. A Framework for Mixing Methods in Quantitative Measurement Development, Validation, and Revision: A Case Study

    ERIC Educational Resources Information Center

    Luyt, Russell

    2012-01-01

    A framework for quantitative measurement development, validation, and revision that incorporates both qualitative and quantitative methods is introduced. It extends and adapts Adcock and Collier's work, and thus, facilitates understanding of quantitative measurement development, validation, and revision as an integrated and cyclical set of…

  10. Validity of a digital diet estimation method for use with preschool children

    USDA-ARS?s Scientific Manuscript database

    The validity of using the Remote Food Photography Method (RFPM) for measuring minority preschool children's food intake is not well documented. The aim of the study was to determine the validity of intake estimations made by human raters using the RFPM compared with those obtained by weigh...

  11. Assessment and validation of a simple automated method for the detection of gait events and intervals.

    PubMed

    Ghoussayni, Salim; Stevens, Christopher; Durham, Sally; Ewins, David

    2004-12-01

    A simple and rapid automatic method for detecting gait events at the foot could speed up, and possibly increase the repeatability of, gait analysis and evaluations of treatments for pathological gaits. The aim of this study was to compare and validate a kinematic-based algorithm used in the detection of four gait events: heel contact, heel rise, toe contact and toe off. Force platform data are often used to obtain the start and end of contact phases, but not usually heel rise and toe contact events. For this purpose, synchronised kinematic, kinetic and video data were captured from 12 healthy adult subjects walking both barefoot and shod at slow and normal self-selected speeds. The data were used to determine the gait events using three methods: force, visual inspection and algorithm. Ninety percent of all timings given by the algorithm were within one frame (16.7 ms) of those from visual inspection. There were no statistically significant differences between the visual and algorithm timings. For both heel and toe contact, the differences between the three methods were within 1.5 frames, whereas for heel rise and toe off the differences between the force method on one side and the visual and algorithm methods on the other were larger and more varied (up to 175 ms). In addition, the algorithm method provided the duration of three intervals, heel contact to toe contact, toe contact to heel rise and heel rise to toe off, which are not readily available from force platform data. The ability to automatically and reliably detect the timings of these four gait events and three intervals using kinematic data alone is an asset to clinical gait analysis.
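
    The abstract does not spell out the algorithm's rules, so the following is only a generic kinematic threshold sketch for one of the four events (heel contact); the sampling rate matches the study's 16.7 ms frames, but the velocity and height thresholds and the toy trace are assumptions:

    ```python
    import numpy as np

    FS = 60.0        # Hz; one frame = 16.7 ms, matching the study
    V_THRESH = 0.05  # m/s vertical-velocity threshold (assumed, not from the paper)
    Z_BAND = 0.01    # m height band above the marker's minimum (assumed)

    def heel_contact_frames(heel_z):
        """Flag heel-contact candidates: frames where the heel marker is near
        its lowest height and its vertical velocity has dropped near zero."""
        vz = np.gradient(heel_z) * FS
        candidates = (heel_z < heel_z.min() + Z_BAND) & (np.abs(vz) < V_THRESH)
        # keep only the first frame of each contiguous candidate run
        return np.flatnonzero(np.diff(candidates.astype(int)) == 1) + 1

    # toy trace: heel height over two simulated gait cycles
    t = np.arange(0, 2, 1 / FS)
    heel_z = 0.05 + 0.04 * np.clip(np.sin(2 * np.pi * t), 0, None)
    print(heel_contact_frames(heel_z))
    ```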

  12. Stress degradation studies on betahistine and development of a validated stability-indicating assay method.

    PubMed

    Khedr, Alaa; Sheha, Mahmoud

    2008-06-15

    The purpose of this work was to study the stability of betahistine (BET) under different stress conditions and to develop a sensitive stability-indicating high-performance liquid chromatographic (HPLC) assay method. The stress conditions applied included heat, moisture, acid and base, and ultraviolet (UV) light. Betahistine and its decomposition products were derivatized by reaction with dansyl chloride (Dan-Cl) and analyzed by HPLC equipped with a fluorescence detector (FL) set at 336 and 531 nm as excitation and emission wavelengths, respectively. The drug was particularly labile under UV light and in oxygen-rich media. Two potential degradation products could be separated and identified by spectral methods. The chromatographic method involved a Zorbax Eclipse XDB-C(18) column kept at 30+/-2 degrees C and gradient elution with a mobile phase composed of acetonitrile and 0.02 mol L(-1) sodium acetate. The response factor of dansylated BET monitored by fluorescence detection was 32 times that of its UV response. The calibration curve of BET in bulk form was linear from 0.005 to 4.2 ng microL(-1). Intraday and interday precision were less than 0.04% (CV), and accuracy was between 99.2% and 100.9% over 2.0 ng microL(-1). The limit of detection was 0.002 ng microL(-1). The method was also validated for sample stability during reaction, robustness and selectivity. The method was applied to purity testing of betahistine in tablet form.

  13. System identification methods for aircraft flight control development and validation

    NASA Technical Reports Server (NTRS)

    Tischler, Mark B.

    1995-01-01

    System-identification methods compose a mathematical model, or series of models, from measurements of inputs and outputs of dynamic systems. The extracted models allow the characterization of the response of the overall aircraft or component subsystem behavior, such as actuators and on-board signal processing algorithms. This paper discusses the use of frequency-domain system-identification methods for the development and integration of aircraft flight-control systems. The extraction and analysis of models of varying complexity from nonparametric frequency-responses to transfer-functions and high-order state-space representations is illustrated using the Comprehensive Identification from FrEquency Responses (CIFER) system-identification facility. Results are presented for test data of numerous flight and simulation programs at the Ames Research Center including rotorcraft, fixed-wing aircraft, advanced short takeoff and vertical landing (ASTOVL), vertical/short takeoff and landing (V/STOL), tiltrotor aircraft, and rotor experiments in the wind tunnel. Excellent system characterization and dynamic response prediction is achieved for this wide class of systems. Examples illustrate the role of system-identification technology in providing an integrated flow of dynamic response data around the entire life-cycle of aircraft development from initial specifications, through simulation and bench testing, and into flight-test optimization.
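
    CIFER is a NASA facility and is not reproduced here; the nonparametric first step it automates — estimating a frequency response H(f) from measured input-output data via spectral densities, with coherence as a quality check — can be sketched with SciPy on a simulated second-order plant (all values invented):

    ```python
    import numpy as np
    from scipy import signal

    fs = 100.0
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(2)
    u = rng.normal(size=t.size)                 # broadband input (sweep stand-in)

    # a hypothetical second-order vehicle response, plus measurement noise
    _, y, _ = signal.lsim(([4.0], [1.0, 0.8, 4.0]), u, t)
    y += 0.05 * rng.normal(size=t.size)

    f, Puu = signal.welch(u, fs=fs, nperseg=1024)
    _, Puy = signal.csd(u, y, fs=fs, nperseg=1024)
    H = Puy / Puu                               # nonparametric frequency response
    _, coh = signal.coherence(u, y, fs=fs, nperseg=1024)  # estimate quality check
    print(np.abs(H[:5]), coh[:5])
    ```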

  14. Measuring Under-Five Mortality: Validation of New Low-Cost Methods

    PubMed Central

    Rajaratnam, Julie Knoll; Tran, Linda N.; Lopez, Alan D.; Murray, Christopher J. L.

    2010-01-01

    Background There has been increasing interest in measuring under-five mortality as a health indicator and as a critical measure of human development. In countries with complete vital registration systems that capture all births and deaths, under-five mortality can be directly calculated. In the absence of a complete vital registration system, however, child mortality must be estimated using surveys that ask women to report the births and deaths of their children. Two survey methods exist for capturing this information: summary birth histories and complete birth histories. A summary birth history requires a minimum of only two questions: how many live births has each mother had and how many of them have survived. Indirect methods are then applied using the information from these two questions and the age of the mother to estimate under-five mortality going back in time prior to the survey. Estimates generated from complete birth histories are viewed as the most accurate when surveys are required to estimate under-five mortality, especially for the most recent time periods. However, it is much more costly and labor intensive to collect these detailed data, especially for the purpose of generating small area estimates. As a result, there is a demand for improvement of the methods employing summary birth history data to produce more accurate as well as subnational estimates of child mortality. Methods and Findings We used data from 166 Demographic and Health Surveys (DHS) to develop new empirically based methods of estimating under-five mortality using children ever born and children dead data. We then validated them using both in- and out-of-sample analyses. We developed a range of methods on the basis of three dimensions of the problem: (1) approximating the average length of exposure to mortality from a mother's set of children using either maternal age or time since first birth; (2) using cohort and period measures of the fraction of children ever born that are

  15. Validity assessment of the detection method of maize event Bt10 through investigation of its molecular structure.

    PubMed

    Milcamps, Anne; Rabe, Scott; Cade, Rebecca; De Framond, Anic J; Henriksson, Peter; Kramer, Vance; Lisboa, Duarte; Pastor-Benito, Susana; Willits, Michael G; Lawrence, David; Van den Eede, Guy

    2009-04-22

    In March 2005, U.S. authorities informed the European Commission of the inadvertent release of unauthorized maize GM event Bt10 in their market and subsequently the grain channel. In the United States measures were taken to eliminate Bt10 from seed and grain supplies; in the European Union an embargo for maize gluten and brewer's grain import was implemented unless certified of Bt10 absence with a Bt10-specific PCR detection method. With the aim of assessing the validity of the Bt10 detection method, an in-depth analysis of the molecular organization of the genetic modification of this event was carried out by both the company Syngenta, who produced the event, and the European Commission Joint Research Centre, who validated the detection method. Using a variety of molecular analytical tools, both organizations found the genetic modification of event Bt10 to be very complex in structure, with rearrangements, inversions, and multiple copies of the structural elements (cry1Ab, pat, and the amp gene), interspersed with small genomic maize fragments. Southern blot analyses demonstrated that all Bt10 elements were found tightly linked on one large fragment, including the region that would generate the event-specific PCR amplicon of the Bt10 detection method. This study proposes a hypothetical map of the insert of event Bt10 and concludes that the validated detection method for event Bt10 is fit for its purpose.

  16. Validation of an evacuated canister method for measuring part-per-billion levels of chemical warfare agent simulants.

    PubMed

    Coffey, Christopher C; LeBouf, Ryan F; Calvert, Catherine A; Slaven, James E

    2011-08-01

    The National Institute for Occupational Safety and Health (NIOSH) research on direct-reading instruments (DRIs) needed an instantaneous sampling method to provide independent confirmation of the concentrations of chemical warfare agent (CWA) simulants. It was determined that evacuated canisters would be the method of choice. There is no method specifically validated for volatile organic compounds (VOCs) in the NIOSH Manual of Analytical Methods. The purpose of this study was to validate an evacuated canister method for sampling seven specific VOCs that can be used as a simulant for CWA agents (cyclohexane) or influence the DRI measurement of CWA agents (acetone, chloroform, methylene chloride, methyl ethyl ketone, hexane, and carbon tetrachloride [CCl4]). The method used 6-L evacuated stainless-steel fused silica-lined canisters to sample the atmosphere containing VOCs. The contents of the canisters were then introduced into an autosampler/preconcentrator using a microscale purge and trap (MPT) method. The MPT method trapped and concentrated the VOCs in the air sample and removed most of the carbon dioxide and water vapor. After preconcentration, the samples were analyzed using a gas chromatograph with a mass selective detector. The method was tested, evaluated, and validated using the NIOSH recommended guidelines. The evaluation consisted of determining the optimum concentration range for the method; the sample stability over 30 days; and the accuracy, precision, and bias of the method. This method meets the NIOSH guidelines for six of the seven compounds (excluding acetone) tested in the range of 2.3-50 parts per billion (ppb), making it suitable for sampling of these VOCs at the ppb level.

  17. Experimental Validation for Hot Stamping Process by Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Fawzi Zamri, Mohd; Lim, Syh Kai; Razlan Yusoff, Ahmad

    2016-02-01

    The demand for reduced gas emissions, energy savings, and safer vehicles has driven the development of Ultra High Strength Steel (UHSS) materials. To strengthen a UHSS material such as boron steel, it must undergo a hot stamping process with heating at a certain temperature and time. In this paper, the Taguchi method is applied to determine the appropriate parameters of thickness, heating temperature and heating time to achieve optimum strength of boron steel. The experiment was conducted using a flat square hot stamping tool with a tensile dog-bone specimen as the blank product. The tensile strength and hardness were then measured as responses. The results showed that lower thickness and higher heating temperature and heating time give higher strength and hardness in the final product. In conclusion, boron steel blanks are able to achieve up to 1200 MPa tensile strength and a hardness of 650 HV.
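
    As a sketch of the Taguchi bookkeeping this record relies on (the strengths below are invented, and an L9 array is assumed for three factors at three levels), a larger-the-better signal-to-noise analysis looks like:

    ```python
    import numpy as np

    # L9 orthogonal array: 3 factors (thickness, heating temperature, heating time)
    # at 3 levels each; rows are the nine experimental runs
    L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
                   [1, 0, 1], [1, 1, 2], [1, 2, 0],
                   [2, 0, 2], [2, 1, 0], [2, 2, 1]])

    # hypothetical measured tensile strengths (MPa) for the nine runs
    strength = np.array([980.0, 1050, 1120, 1010, 1150, 1040, 1060, 1000, 1190])

    # larger-the-better signal-to-noise ratio: -10 log10(mean(1/y^2))
    sn = -10 * np.log10(1 / strength ** 2)

    for factor, name in enumerate(("thickness", "temperature", "time")):
        level_means = [sn[L9[:, factor] == lvl].mean() for lvl in range(3)]
        print(name, "-> best level", int(np.argmax(level_means)))
    ```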

  18. Validation of a spectrophotometric method for quantification of carboxyhemoglobin.

    PubMed

    Luchini, Paulo D; Leyton, Jaime F; Strombech, Maria de Lourdes C; Ponce, Julio C; Jesus, Maria das Graças S; Leyton, Vilma

    2009-10-01

    The measurement of carboxyhemoglobin (COHb) levels in blood is a valuable procedure to confirm exposure to carbon monoxide (CO) for either forensic or occupational matters. A previously described method using spectrophotometric readings at 420 and 432 nm after reduction of oxyhemoglobin (O(2)Hb) and methemoglobin with sodium hydrosulfite solution leads to an exponential curve. This curve, used with pre-established factors, serves well for lower concentrations (1-7%) or for high concentrations (>20%), but very rarely for both. The authors have observed that small variations in the previously described factors F1, F2, and F3, obtained from readings for 100% COHb and 100% O(2)Hb, translate into significant changes in COHb% results, and they propose that these factors should be determined every time COHb is measured, by reading CO- and O(2)-saturated samples. This practice leads to an increase in accuracy and precision.
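
    The abstract does not reproduce the exact expressions for F1, F2, and F3, so the sketch below only illustrates the underlying idea of day-of-assay calibration: read the 420/432 nm absorbances of fully O(2)- and CO-saturated aliquots each time, and treat the sample spectrum as a linear mixture of the two references. All absorbance values are placeholders.

```python
import numpy as np

# Day-of-assay references (placeholder absorbances at [420 nm, 432 nm]).
A_o2hb = np.array([0.310, 0.520])   # 100% O(2)Hb (0% COHb) reference
A_cohb = np.array([0.640, 0.395])   # 100% COHb reference
A_samp = np.array([0.402, 0.486])   # patient sample

# Solve A_samp = f*A_cohb + (1-f)*A_o2hb for the COHb fraction f,
# in a least-squares sense over the two wavelengths.
f, *_ = np.linalg.lstsq((A_cohb - A_o2hb).reshape(-1, 1),
                        A_samp - A_o2hb, rcond=None)
print(f"COHb: {float(f[0]) * 100:.1f}%")
```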

  19. [Ambulatory blood pressure: methods, equipment, technical problems, validations].

    PubMed

    Carré, A; Petetin, N; Fouquoire, B; Mounier-Vehier, C; Poncelet, P

    1991-09-01

    The measurement of ambulatory blood pressure provides a discontinuous recording which reflects the pressure load over a 24 hour period. The latest recorders allow the patient relative autonomy, owing to discontinuous but programmable recording, the miniaturisation of the recorder, and relative silence during inflation of the cuff. The main disadvantage of the technique is the need to interrupt the patient's physical activity at the moment of recording, indicated by an audible "beep". The concept of "active pressure load" is therefore illusory. The traditional controversy between supporters of the auscultatory method and those of the oscillometric method is far from settled, and these discussions do not resolve the problem. The use of finger plethysmographic techniques (Finapres, Ohmeda) is an interesting approach, but it is limited for the time being by the need for confinement to a laboratory and by short recording durations requiring strict control of ambient temperature. Future developments using ultrasonic techniques may provide a solution to these problems.

  20. Validity of Eye Movement Methods and Indices for Capturing Semantic (Associative) Priming Effects

    ERIC Educational Resources Information Center

    Odekar, Anshula; Hallowell, Brooke; Kruse, Hans; Moates, Danny; Lee, Chao-Yang

    2009-01-01

    Purpose: The purpose of this investigation was to evaluate the usefulness of eye movement methods and indices as a tool for studying priming effects by verifying whether eye movement indices capture semantic (associative) priming effects in a visual cross-format (written word to semantically related picture) priming paradigm. Method: In the…

  1. Reliability-targeted HPLC-UV method validation--a protocol enrichment perspective.

    PubMed

    Dharuman, Joghee Gowder; Vasudevan, Mahalingam

    2014-02-01

    Method validation is important in analytical chemistry to ensure the reliability of an analytical method. Guidelines provided by the regulatory bodies can be used as a general framework to assess the validity of a method. Since these guidelines do not focus exclusively on the reliability of analytical results, this study aimed to combine a few recently evolved strategies that may render analytical method validation more reliable and trustworthy. In this research, the analytical error function was determined by appropriate polynomial regression statistics, yielding the range of analyte concentrations that leads to the most accurate measurements by producing the least possible total error in the assay; this can be regarded as a reliable weighting method. The reliability of the analytical results over a particular concentration range was then assessed by a Bayesian probability study. To demonstrate the applicability of this approach, it was applied to the validation of an HPLC-UV assay method dedicated to the quantification of cefepime and tazobactam in human plasma. A comparison between the newer approach and the usual method validation revealed that the application of the analytical error function and Bayesian analysis at the end of the validation process can produce significant improvements in the analytical results.
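
    As a rough illustration of the idea of an analytical error function, the sketch below fits a polynomial to the precision profile of placeholder validation data and screens for the concentration with the smallest relative total error. The data, the polynomial order, and the |bias| + 2*SD total-error estimate are all assumptions, not the paper's model.

```python
import numpy as np

# Placeholder validation data: nominal concentrations (ug/mL) with the
# observed intermediate-precision SD and absolute bias at each level.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
sd   = np.array([0.09, 0.10, 0.12, 0.20, 0.30, 0.60, 1.80])
bias = np.array([0.06, 0.05, 0.05, 0.10, 0.20, 0.50, 1.50])

# Analytical error function: SD modelled as a polynomial in concentration.
sd_hat = np.polyval(np.polyfit(conc, sd, 2), conc)

# Crude total error (|bias| + 2*SD) and the level that minimizes it relatively.
total_error = bias + 2.0 * sd_hat
best = conc[np.argmin(total_error / conc)]
print(f"least relative total error at {best} ug/mL")
```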

  2. Content validity across methods of malnutrition assessment in patients with cancer is limited.

    PubMed

    Sealy, Martine J; Nijholt, Willemke; Stuiver, Martijn M; van der Berg, Marit M; Roodenburg, Jan L N; van der Schans, Cees P; Ottery, Faith D; Jager-Wittenaar, Harriët

    2016-08-01

    To identify malnutrition assessment methods in cancer patients and assess their content validity based on internationally accepted definitions for malnutrition. Systematic review of studies in cancer patients that operationalized malnutrition as a variable, published since 1998. Eleven key concepts, within the three domains reflected by the malnutrition definitions acknowledged by the European Society for Clinical Nutrition and Metabolism (ESPEN) and the American Society for Parenteral and Enteral Nutrition (ASPEN) (A: nutrient balance; B: changes in body shape, body area and body composition; C: function), were used to classify the content validity of methods to assess malnutrition. Content validity indices (M-CVIA-C) were calculated per assessment method. Acceptable content validity was defined as M-CVIA-C ≥ 0.80. Thirty-seven assessment methods were identified in the 160 included articles. The Mini Nutritional Assessment (M-CVIA-C = 0.72), Scored Patient-Generated Subjective Global Assessment (M-CVIA-C = 0.61), and Subjective Global Assessment (M-CVIA-C = 0.53) had the highest M-CVIA-C scores. A large number of malnutrition assessment methods are used in cancer research, and their content validity varies widely. None of these assessment methods has acceptable content validity when compared against a construct based on the ESPEN and ASPEN definitions of malnutrition. Copyright © 2016 Elsevier Inc. All rights reserved.
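
    A minimal sketch of the content-validity bookkeeping implied above: score each method by the fraction of key concepts it covers and compare against the 0.80 cut-off. The concept list and the item set attributed to the method are illustrative assumptions; the real M-CVIA-C aggregates expert item-level ratings.

```python
# Eleven placeholder key concepts spanning the three ESPEN/ASPEN domains.
CONCEPTS = ["energy intake", "protein intake", "weight loss", "BMI",
            "muscle mass", "fat mass", "fluid balance", "body composition",
            "muscle strength", "physical function", "inflammation"]

def content_validity_index(items_covered):
    """Average item-level coverage of the key concepts (M-CVIA-C style)."""
    return sum(c in items_covered for c in CONCEPTS) / len(CONCEPTS)

# Hypothetical coverage for one assessment method.
method_items = {"energy intake", "protein intake", "weight loss", "BMI",
                "muscle mass", "fluid balance", "body composition",
                "physical function"}
score = content_validity_index(method_items)
print(f"M-CVIA-C = {score:.2f}; acceptable: {score >= 0.80}")
```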

  3. Development and validity of a method for the evaluation of printed education material

    PubMed Central

    Castro, Mauro Silveira; Pilger, Diogo; Fuchs, Flávio Danni; Ferreira, Maria Beatriz Cardoso

    Objectives To develop and study the validity of an instrument for evaluation of Printed Education Materials (PEM); to evaluate the use of acceptability indices; to identify possible influences of professional aspects. Methods An instrument for PEM evaluation was developed which included three steps: domain identification, item generation and instrument design. An easy-to-read PEM was developed for the education of patients with systemic hypertension and its treatment with hydrochlorothiazide. Construct validity was measured based on previously established errors purposely introduced into the PEM, which served as extreme groups. An acceptability index was applied taking into account the rate of professionals who should approve each item. Participants were 10 physicians (9 men) and 5 nurses (all women). Results Many professionals identified intentional errors of a crude character. Few participants identified errors that needed more careful evaluation, and no one detected the intentional error that required literature analysis. Physicians considered 95.8% of the items of the PEM acceptable, and nurses 29.2%. The differences between the scores were statistically significant for 27% of the items. In the overall evaluation, 66.6% of the items were considered acceptable. The analysis of each item revealed a behavioral pattern for each professional group. Conclusions The use of instruments for evaluation of printed education materials is required and may improve the quality of the PEM available for patients. The acceptability indices are not always totally correct, nor do they necessarily represent high quality of information. The professional experience, the practice pattern, and perhaps the gender of the reviewers may influence their evaluation. An analysis of the PEM by professionals in communication and drug information, and by patients, should be carried out to improve the quality of the proposed material. PMID:25214924

  4. Convergent Validity of Three Methods for Measuring Postoperative Complications

    PubMed Central

    Fritz, Bradley A.; Escallier, Krisztina E.; Abdallah, Arbi Ben; Oberhaus, Jordan; Becker, Jennifer; Geczi, Kristin; McKinnon, Sherry; Helsten, Dan L.; Sharma, Anshuman; Wildes, Troy S.; Avidan, Michael S.

    2016-01-01

    Background Anesthesiologists need tools to accurately track postoperative outcomes. The accuracy of patient report in identifying a wide variety of postoperative complications after diverse surgical procedures has not previously been investigated. Methods In this cohort study, 1,578 adult surgical patients completed a survey at least 30 days after their procedure asking if they had experienced any of 18 complications while in the hospital after surgery. Patient responses were compared to the results of an automated electronic chart review and (for a random subset of 750 patients) to a manual chart review. Results from automated chart review were also compared to those from manual chart review. Forty-two randomly selected patients were contacted by telephone to explore reasons for discrepancies between patient report and manual chart review. Results Comparisons between patient report, automated chart review, and manual chart review demonstrated poor-to-moderate positive agreement (range, 0 to 58%) and excellent negative agreement (range, 82 to 100%). Discordance between patient report and manual chart review was frequently explicable by patients reporting events that happened outside the time period of interest. Conclusions Patient report can provide information about subjective experiences or events that happen after hospital discharge, but often yields different results from chart review for specific in-hospital complications. Effective in-hospital communication with patients and thoughtful survey design may increase the quality of patient-reported complication data. PMID:27028469
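
    The positive/negative agreement figures quoted above can be computed from a simple 2x2 table of the two raters' answers. Below is a minimal sketch for one complication; the cell counts are invented for illustration.

```python
# 2x2 agreement table for one complication:
#   a = both sources report it, d = neither does,
#   b = patient report only, c = chart review only.
a, b, c, d = 12, 30, 18, 690

# Proportions of specific agreement (Dice-style indices).
positive_agreement = 2 * a / (2 * a + b + c)
negative_agreement = 2 * d / (2 * d + b + c)
print(f"positive agreement: {positive_agreement:.2f}")
print(f"negative agreement: {negative_agreement:.2f}")
```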

  5. The Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM): a review of the ICCVAM test method evaluation process and current international collaborations with the European Centre for the Validation of Alternative Methods (ECVAM).

    PubMed

    Stokes, William S; Schechtman, Leonard M; Hill, Richard N

    2002-12-01

    Over the last decade, national authorities in the USA and Europe have launched initiatives to validate new and improved toxicological test methods. In the USA, the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) and its supporting National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM) were established by the Federal Government to work with test developers and Federal agencies to facilitate the validation, review, and adoption of new scientifically sound test methods, including alternatives that can refine, reduce, and replace animal use. In Europe, the European Centre for the Validation of Alternative Methods (ECVAM) was established to conduct validation studies on alternative test methods. Despite differences in organisational structure and processes, both organisations seek to achieve the adoption and use of alternative test methods. Accordingly, both have adopted similar validation and regulatory acceptance criteria. Collaborations and processes have also evolved to facilitate the international adoption of new test methods recommended by ECVAM and ICCVAM. These collaborations involve the sharing of expertise and data for test-method workshops and independent scientific peer reviews, and the adoption of processes to expedite the consideration of test methods already reviewed by the other organisation. More recently, NICEATM and ECVAM initiated a joint international validation study on in vitro methods for assessing acute systemic toxicity. These collaborations are expected to contribute to accelerated international adoption of harmonised new test methods that will support improved public health and provide for reduced and more-humane use of laboratory animals.

  6. Single Lab Validation of a LC/UV/FLD/MS Method for Simultaneous Determination of Water-soluble Vitamins in Multi-Vitamin Dietary Supplements

    USDA-ARS?s Scientific Manuscript database

    The purpose of this study was to develop a Single-Lab Validated Method using high-performance liquid chromatography (HPLC) with different detectors (diode array detector - DAD, fluorescence detector - FLD, and mass spectrometer - MS) for determination of seven B-complex vitamins (B1 - thiamin, B2 – ...

  7. Development and Validation of a Cultural Method for the Detection and Isolation of Salmonella in Cloves.

    PubMed

    Zhang, Guodong; Ali, Laila; Gill, Vikas; Tatavarthy, Aparna; Deng, Xiaohong; Hu, Lijun; Brown, Eric W; Hammack, Thomas S

    2017-03-01

    Detection of Salmonella in some spices, such as cloves, remains a challenge due to their inherent antimicrobial properties. The purpose of this study was to develop an effective detection method for Salmonella from spices, using cloves as a model. Two clove varieties, Ceylon and Madagascar, were used in the study. Cloves were inoculated with Salmonella enterica subsp. enterica serotypes Montevideo, Typhimurium, or Weltevreden at about 1, 3, or 6 log CFU/25 g. Two test portion sizes, 10 and 25 g, were compared. After adding Trypticase soy broth (TSB) to the weighed cloves for preenrichment, three preenrichment methods were compared: cloves were left in the TSB for 24 h during preenrichment (PreE1), or the cloves-TSB mixture was shaken vigorously for 30 s (PreE2) or 60 s (PreE3), and the decanted material was transferred to a new bag for 24 h of preenrichment. The rest of the procedures were carried out according to the U.S. Food and Drug Administration Bacteriological Analytical Manual (BAM). At the low inoculation level (<1 log CFU/25 g), the detection rate was low across the three preenrichment methods, with the highest for PreE3 and lowest for PreE1. At the medium and high inoculation levels (3 and 6 log CFU/25 g), all samples from PreE2 and PreE3 were positive for Salmonella, whereas PreE1 produced only 12 positive samples from the 48 samples at the medium inoculation level and 38 positive samples from the 48 samples at the high inoculation level. Therefore, PreE3 with 25 g of cloves per sample was more effective than the other two tested methods. This newly designed method was then validated by comparing it with the BAM method in six trials, with each trial consisting of 40 test samples. The results showed that PreE3 detected Salmonella in 88 of 120 inoculated test samples compared with only 31 positives from 120 test samples with the BAM method. Thus, our newly designed method PreE3 was more sensitive and easier to operate than the current BAM method for

  8. Bioanalytical method validation: concepts, expectations and challenges in small molecule and macromolecule--a report of PITTCON 2013 symposium.

    PubMed

    Bashaw, Edward D; DeSilva, Binodh; Rose, Mark J; Wang, Yow-Ming C; Shukla, Chinmay

    2014-05-01

    The concepts, importance, and implications of bioanalytical method validation have been discussed and debated for a long time. The recent high-profile issues related to bioanalytical method validation at both Cetero Houston and the former MDS Canada have brought this topic back into the limelight. Hence, a symposium on bioanalytical method validation, with the aim of revisiting the building blocks as well as discussing the challenges and implications for the bioanalysis of both small molecules and macromolecules, was featured at the PITTCON 2013 Conference and Expo. This symposium was cosponsored by the American Chemical Society (ACS)-Division of Analytical Chemistry and the Analysis and Pharmaceutical Quality (APQ) Section of the American Association of Pharmaceutical Scientists (AAPS) and featured leading speakers from the Food & Drug Administration (FDA), academia, and industry. In this symposium, the speakers shared several unique examples, and the session also provided a platform to discuss the need for continuous vigilance of bioanalytical methods during drug discovery and development. The purpose of this article is to provide a concise report on the materials that were presented.

  9. Development and assessment of disinfectant efficacy test methods for regulatory purposes.

    PubMed

    Tomasino, Stephen F

    2013-05-01

    The United States Environmental Protection Agency regulates pesticidal products, including products with antimicrobial activity. Test guidelines have been established to inform manufacturers of which methodology is appropriate to support a specific efficacy claim. This paper highlights efforts designed to improve current methods and the development and assessment of new test methods.

  10. Validation of laboratory-scale recycling test method of paper PSA label products

    Treesearch

    Carl Houtman; Karen Scallon; Richard Oldack

    2008-01-01

    Starting with test methods and a specification developed by the U.S. Postal Service (USPS) Environmentally Benign Pressure Sensitive Adhesive Postage Stamp Program, a laboratory-scale test method and a specification were developed and validated for pressure-sensitive adhesive labels. By comparing results from this new test method and pilot-scale tests, which have been...

  11. Alternative Methods for Validating Admissions and Course Placement Criteria. AIR 1995 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Noble, Julie; Sawyer, Richard

    Correlational methods are compared to an alternative method based on decision theory and logistic regression for providing validity evidence for college admissions and course placement criteria. The advantages and limitations of both methods are examined. The correlation coefficient measures the strength of the linear statistical relationship…

  12. Defining the knee joint flexion-extension axis for purposes of quantitative gait analysis: an evaluation of methods.

    PubMed

    Schache, Anthony G; Baker, Richard; Lamoreux, Larry W

    2006-08-01

    Minimising measurement variability associated with hip axial rotation and avoiding knee joint angle cross-talk are two fundamental objectives of any method used to define the knee joint flexion-extension axis for purposes of quantitative gait analysis. The aim of this experiment was to compare three different methods of defining this axis: the knee alignment device (KAD) method, a method based on the transepicondylar axis (TEA) and an alternative numerical method (Dynamic). The former two methods are common approaches that have been applied clinically in many quantitative gait analysis laboratories; the latter is an optimisation procedure. A cohort of 20 subjects performed three different functional tasks (normal gait; squat; non-weight bearing knee flexion) on repeated occasions. Three-dimensional hip and knee angles were computed using the three alternative methods of defining the knee joint flexion-extension axis. The repeatability of hip axial rotation measurements during normal gait was found to be significantly better for the Dynamic method (p<0.01). Furthermore, both the variance in the knee varus-valgus kinematic profile and the degree of knee joint angle cross-talk were smallest for the Dynamic method across all functional tasks. The Dynamic method therefore provided superior results in comparison to the KAD and TEA-based methods and thus represents an attractive solution for orientating the knee joint flexion-extension axis for purposes of quantitative gait analysis.

  13. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    PubMed

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
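
    The instability described above is easy to reproduce. The sketch below repeats 10-fold cross-validated Lasso fits on one simulated sparse, weak-signal data set (scikit-learn's LassoCV stands in here for the oracle methods, which the paper notes behave similarly in this respect); only the fold assignment changes between runs, yet the number of selected variables varies noticeably.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Simulated sparse regression with weak signals (n=200, p=500).
rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:10] = 0.3          # ten true predictors, small effects
y = X @ beta + rng.standard_normal(n)

# Re-run 10-fold CV on reshuffled rows; count selected variables each time.
counts = []
for _ in range(20):
    idx = rng.permutation(n)          # changes only the fold assignment
    model = LassoCV(cv=10).fit(X[idx], y[idx])
    counts.append(int(np.sum(model.coef_ != 0)))
print("selected-variable counts:", counts)
```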

  14. Evaluation of the quantitative performances of supercritical fluid chromatography: from method development to validation.

    PubMed

    Dispas, Amandine; Lebrun, Pierre; Ziemons, Eric; Marini, Roland; Rozet, Eric; Hubert, Philippe

    2014-08-01

    Recently, the number of papers about SFC has increased drastically, but scientists have not truly focused their work on the quantitative performance of this technique. In order to prove the potential of UHPSFC, the present work discusses the different steps of the analytical life cycle of a method: from development to validation and application. Moreover, the quantitative performance of UHPSFC was evaluated in comparison with UHPLC, which is the main technique used for quality control in the pharmaceutical industry and can therefore be considered a reference. The methods were developed using the Design Space strategy, leading to the optimization of a robust method. In this context, when the Design Space optimization shows a guarantee of quality, no further robustness study is required prior to validation. The methods were then geometrically transferred in order to reduce the analysis time. The UHPSFC and UHPLC methods were validated based on the total error approach using accuracy profiles. Even if UHPLC showed better precision and sensitivity, the UHPSFC method is able to give accurate results over a dosing range larger than the 80-120% range required by the European Medicines Agency. Consequently, UHPSFC results are valid and could be used for the control of the active substance in a finished pharmaceutical product. Finally, the validated UHPSFC method was used to analyse real samples and gave similar results to the reference method (UHPLC). Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Validation of Spectrophotometric Methods for the Determination of Total Polyphenol and Total Flavonoid Content.

    PubMed

    Matić, Petra; Sabljić, Marija; Jakobek, Lidija

    2017-07-20

    The aim of this study was to validate spectrophotometric methods for the measurement of total polyphenol (TP; via the Folin-Ciocalteu method) and total flavonoid (TF) content [via the aluminum chloride (AlCl₃) method]. Validation parameters of these methods were determined, including linearity, sensitivity, precision (intra-assay and intermediate), accuracy, LOD, and LOQ. For the validation process, groups of polyphenol standards were used, including phenolic acids (gallic, p-coumaric, caffeic, and chlorogenic acids), flavan-3-ols [(+)-catechin and procyanidins B1 and B2], flavonols (quercetin and quercetin-3-rutinoside), and dihydrochalcones (phloretin and phloretin-2-glucoside). The obtained validation parameters were within acceptable ranges, with high determination coefficients, reasonably low LODs and LOQs, and high slopes in the calibration curves for both methods, except for phloretin and phloretin-2-glucoside, for which there were low slopes in the calibration curves for the AlCl₃ method. To evaluate differences in polyphenol content, the validated spectrophotometric methods were used to determine TP and TF content in wines (Plavac, Graševina, and Vranac) and juices (blueberry, strawberry, and blackcurrant) according to the polyphenol calibration curves. Polyphenol contents differed between the two methods for all wines and juices.
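
    For readers unfamiliar with how the listed parameters are obtained, here is a minimal sketch of the calibration-curve portion of such a validation, using ICH-style LOD/LOQ estimates (3.3σ/S and 10σ/S). The gallic acid concentrations and absorbances are placeholders.

```python
import numpy as np

# Placeholder gallic acid calibration data for the Folin-Ciocalteu assay.
conc = np.array([10, 25, 50, 100, 200, 400])              # mg/L
absorbance = np.array([0.051, 0.13, 0.26, 0.52, 1.02, 2.05])

slope, intercept = np.polyfit(conc, absorbance, 1)
residuals = absorbance - (slope * conc + intercept)
sigma = residuals.std(ddof=2)        # residual standard error of the fit

r = np.corrcoef(conc, absorbance)[0, 1]
print(f"R^2  = {r**2:.4f}")
print(f"LOD  = {3.3 * sigma / slope:.1f} mg/L")
print(f"LOQ  = {10 * sigma / slope:.1f} mg/L")
```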

  16. From the Bronx to Bengifunda (and Other Lines of Flight): Deterritorializing Purposes and Methods in Science Education Research

    ERIC Educational Resources Information Center

    Gough, Noel

    2011-01-01

    In this essay I explore a number of questions about purposes and methods in science education research prompted by my reading of Wesley Pitts' ethnographic study of interactions among four students and their teacher in a chemistry classroom in the Bronx, New York City. I commence three "lines of flight" (small acts of Deleuzo-Guattarian…

  18. 27 CFR 46.261 - Purpose of an alternate method or procedure.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... TOBACCO PRODUCTS AND CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette Tubes Held for Sale on April 1, 2009 Alternate Methods Or Procedures § 46.261...

  19. 27 CFR 46.261 - Purpose of an alternate method or procedure.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... TOBACCO PRODUCTS AND CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette Tubes Held for Sale on April 1, 2009 Alternate Methods Or Procedures § 46.261...

  20. 27 CFR 46.261 - Purpose of an alternate method or procedure.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... TOBACCO PRODUCTS AND CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette Tubes Held for Sale on April 1, 2009 Alternate Methods Or Procedures §...

  1. 27 CFR 46.261 - Purpose of an alternate method or procedure.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... TOBACCO PRODUCTS AND CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette Tubes Held for Sale on April 1, 2009 Alternate Methods Or Procedures §...

  2. 27 CFR 46.261 - Purpose of an alternate method or procedure.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... TOBACCO PRODUCTS AND CIGARETTE PAPERS AND TUBES Floor Stocks Tax on Certain Tobacco Products, Cigarette Papers, and Cigarette Tubes Held for Sale on April 1, 2009 Alternate Methods Or Procedures §...

  3. Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.

    PubMed

    Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy

    2010-02-01

    This paper illustrates the advantages that a fuzzy-based aggregation method can bring to the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for the validation of chemical, biochemical, pharmaceutical and genetic methods have been developed, and ad hoc validation statistics are available and routinely used for in-house and inter-laboratory testing and decision-making. Fuzzy logic allows the information obtained from independent validation statistics to be summarised into one synthetic indicator of overall method performance. The microarray technology, introduced for simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper and applied to validation data for different genetically modified elements, and observations on the analytical results are discussed. The fuzzy-logic-based rules were shown to be applicable to improve the interpretation of results and facilitate the overall evaluation of the multiplex method.
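
    A minimal sketch of the aggregation idea, assuming simple linear ("ramp") membership functions and invented thresholds and weights; the actual kit validation used its own expert-defined memberships and rules.

```python
import numpy as np

def favorable(x, bad, good):
    """Ramp membership: 0 at or below `bad`, 1 at or above `good`."""
    return float(np.clip((x - bad) / (good - bad), 0.0, 1.0))

# Map each validation statistic onto [0, 1] (thresholds are assumptions).
memberships = {
    "sensitivity": favorable(0.97, bad=0.80, good=0.95),
    "specificity": favorable(0.91, bad=0.80, good=0.95),
    "accuracy":    favorable(0.94, bad=0.85, good=0.98),
}
weights = {"sensitivity": 0.4, "specificity": 0.4, "accuracy": 0.2}

# Weighted aggregation into one synthetic performance indicator.
indicator = sum(weights[k] * v for k, v in memberships.items())
print(f"overall performance indicator: {indicator:.2f}  (1.0 = fully favorable)")
```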

  4. Development and Validation of UV-Visible Spectrophotometric Method for Simultaneous Determination of Eperisone and Paracetamol in Solid Dosage Form

    PubMed Central

    Khanage, Shantaram Gajanan; Mohite, Popat Baban; Jadhav, Sandeep

    2013-01-01

    Purpose: Eperisone Hydrochloride (EPE) is a potent new-generation antispasmodic drug which is used in the treatment of moderate to severe pain in combination with Paracetamol (PAR). Both drugs are available in tablet dosage form in combination, at doses of 50 mg for EPE and 325 mg for PAR. Methods: The method is based upon the Q-absorption ratio approach for the simultaneous determination of EPE and PAR. The absorption ratio method uses the ratio of absorbances at two selected wavelengths, one of which is the iso-absorptive point and the other the λmax of one of the two components. EPE and PAR show their iso-absorptive point at 260 nm in methanol; the second wavelength used is 249 nm, which is the λmax of PAR in methanol. Results: Linearity was obtained in the concentration range of 5-25 μg/mL for EPE and 2-10 μg/mL for PAR. The proposed method was effectively applied to the tablet dosage form for estimation of both drugs. The accuracy and reproducibility results are close to 100% with 2% RSD. Results of the analysis were validated statistically and found to be satisfactory. The results of the proposed method have been validated as per ICH guidelines. Conclusion: A simple, precise and economical spectrophotometric method has been developed for the estimation of EPE and PAR in pharmaceutical formulations. PMID:24312876
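
    Numerically, the two-wavelength determination amounts to solving a pair of Beer-Lambert equations, which is the computation the Q-absorbance ratio formulas rearrange. Here is a minimal sketch with invented absorptivities and absorbances (at the 260 nm iso-absorptive point the two absorptivities are equal by definition):

```python
import numpy as np

# a[i][j]: absorptivity of component j (0=EPE, 1=PAR) at wavelength i,
# for a 1 cm path; row 0 is 260 nm (iso-absorptive), row 1 is 249 nm.
a = np.array([[0.04, 0.04],
              [0.02, 0.09]])
A = np.array([0.84, 0.84])        # measured mixture absorbances (invented)

# Beer-Lambert for the mixture: A = a @ c  ->  solve for concentrations.
c = np.linalg.solve(a, A)
print(f"EPE: {c[0]:.1f} ug/mL, PAR: {c[1]:.1f} ug/mL")
```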

  5. Validation of the greater trochanter method with radiographic measurements of frontal plane hip joint centers and knee mechanical axis angles and two other hip joint center methods.

    PubMed

    Bennett, Hunter J; Shen, Guangping; Weinhandl, Joshua T; Zhang, Songning

    2016-09-06

    Several motion capture methods exist for predicting hip joint centers (HJC). These methods include regression models, functional joints, and projections from the greater trochanters. While regression and functional methods have been compared to imaging techniques, the trochanter (TROCH) method has not been previously validated. The purpose of this study was to compare frontal-plane HJCs and knee mechanical axis angles estimated using the greater trochanter method, a regression method (Bell), and a functional method against those obtained using radiographs. Thirty-five participants underwent a long-standing anteroposterior radiograph and performed static and functional motion capture trials. The Bell, functional, and trochanter HJCs were constructed to predict mechanical axes and compare HJC locations. One-way repeated measures ANOVAs were used to compare mechanical axes and HJC locations estimated by the motion capture methods and measured using radiographs (p<0.05). All methods overestimated mechanical axes compared to radiographs (<2°) but did not differ from one another. Mediolateral HJC locations and inter-HJC widths were similar between methods; however, inter-HJC widths were underestimated (average 3.7%) compared to radiographs. The Bell HJC was more superior and anterior than both the functional and trochanter methods, and the trochanter HJC was more posterior than both. The Bell method outperformed the other methods in leg length predictions compared to radiographs. Although differences existed between methods, all frontal-plane HJC location differences were <1.7 cm. This study validated the trochanter HJC prediction method mediolaterally and vertically (with small respective correction factors). Therefore, all HJC methods appear viable for predicting mechanical axes and frontal-plane HJC locations compared with radiographs.

  6. Retrospective data analysis and proposal of a practical acceptance criterion for inter-laboratory cross-validation of bioanalytical methods using liquid chromatography/tandem mass spectrometry.

    PubMed

    Yoneyama, Tomoki; Kudo, Takashi; Jinno, Fumihiro; Schmidt, Eric R; Kondo, Takahiro

    2014-11-01

    The purpose of this study was to conduct a retrospective data analysis of inter-laboratory cross-validation studies in order to set a reasonable and practical acceptance criterion based on a number of cross-validation results. From the results of cross-validation studies for 16 compounds and their metabolites, analytical bias and variation were evaluated. The accuracy of cross-validation samples was compared with that of quality control (QC) samples, with statistical comparison of the analytical variation. An acceptance criterion was derived with a confidence interval approach. While a larger bias was observed for the cross-validation samples, the bias was not fully explained by analytical variation or bias attributable to the analytical methods. The direction of the deviation between the cross-validation samples and QC samples was random and not concentration-dependent, suggesting that inter-laboratory variability, such as preparation errors, could be a source of bias. The derived acceptance criterion corresponds to the one prescribed in the Guideline on bioanalytical method validation from the Ministry of Health, Labour and Welfare in Japan and is slightly wider than the one from the European Medicines Agency. In conclusion, a thorough retrospective data analysis revealed potential causes of the larger analytical bias in inter-laboratory cross-validation studies, and the derived acceptance criterion would be practical and reasonable for inter-laboratory cross-validation studies.

  7. Dynamic Time Warping compared to established methods for validation of musculoskeletal models.

    PubMed

    Gaspar, Martin; Welke, Bastian; Seehaus, Frank; Hurschler, Christof; Schwarze, Michael

    2017-04-11

    By means of multi-body musculoskeletal simulation, important variables such as internal joint forces and moments can be estimated which cannot be measured directly. Validation can be performed by qualitative or quantitative methods. Especially when comparing time-dependent signals, many methods do not perform well, and validation is often limited to qualitative approaches. The aim of the present study was to investigate the capabilities of the Dynamic Time Warping (DTW) algorithm for comparing time series, which can quantify phase as well as amplitude errors. We contrast the sensitivity of DTW with other established metrics: the Pearson correlation coefficient, cross-correlation, the metric according to Geers, RMSE and normalized RMSE. This study is based on two data sets, where one data set represents direct validation (DV) and the other represents indirect validation. Direct validation was performed in the context of clinical gait analysis on trans-femoral amputees fitted with a 6-component force-moment sensor; measured forces and moments from the amputees' socket prostheses were compared to simulated forces and moments. Indirect validation was performed in the context of surface EMG measurements on a cohort of healthy subjects, with measurements taken of seven muscles of the leg, which were compared to simulated muscle activations. Regarding direct validation, a positive linear relation between the results of RMSE and nRMSE and those of DTW can be seen; for indirect validation, a negative linear relation exists between Pearson correlation and cross-correlation. We propose the DTW algorithm for use in both direct and indirect quantitative validation, as it correlates well with the methods that are most suitable for each of the tasks. However, in direct validation it should be used together with methods that yield a dimensional error value, in order to make the results easier to interpret.
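
    For reference, DTW itself is a short dynamic program. The sketch below implements the classic O(nm) recursion and contrasts it with RMSE on a toy pair of signals that differ only by a phase shift, the situation where RMSE-style metrics are least informative.

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic Time Warping distance between two 1-D signals via the
    classic O(len(x) * len(y)) dynamic program with absolute-error cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j],       # insertion
                                 D[i, j - 1],       # deletion
                                 D[i - 1, j - 1])   # match
    return D[n, m]

# Toy signals: same shape, one phase-shifted. RMSE penalizes the phase
# shift heavily; DTW warps it away and reports a much smaller error.
t = np.linspace(0, 2 * np.pi, 200)
measured = np.sin(t)
simulated = np.sin(t - 0.4)
rmse = np.sqrt(np.mean((measured - simulated) ** 2))
print(f"RMSE: {rmse:.3f}   DTW: {dtw_distance(measured, simulated):.3f}")
```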

  8. A Comparative Study of Information-Based Source Number Estimation Methods and Experimental Validations on Mechanical Systems

    PubMed Central

    Cheng, Wei; Zhang, Zhousuo; Cao, Hongrui; He, Zhengjia; Zhu, Guanwen

    2014-01-01

    This paper investigates one eigenvalue decomposition-based source number estimation method and three information-based source number estimation methods, namely the Akaike Information Criterion (AIC), Minimum Description Length (MDL) and Bayesian Information Criterion (BIC), and improves BIC into an Improved BIC (IBIC) to make it more efficient and easier to calculate. The performances of the abovementioned source number estimation methods are studied comparatively with numerical case studies, which contain a linear superposition case and a case combining linear superposition with nonlinear modulation mixing. A test bed with three sound sources is constructed to test the performances of these methods on mechanical systems, and source separation is carried out to validate the effectiveness of the experimental studies. This work can benefit model order selection, complexity analysis of a system, and applications of source separation to mechanical systems for condition monitoring and fault diagnosis purposes. PMID:24776935

  9. A comparative study of information-based source number estimation methods and experimental validations on mechanical systems.

    PubMed

    Cheng, Wei; Zhang, Zhousuo; Cao, Hongrui; He, Zhengjia; Zhu, Guanwen

    2014-04-25

    This paper investigates one eigenvalue decomposition-based source number estimation method and three information-based source number estimation methods, namely the Akaike Information Criterion (AIC), Minimum Description Length (MDL) and Bayesian Information Criterion (BIC), and improves BIC into an Improved BIC (IBIC) to make it more efficient and easier to calculate. The performances of the abovementioned source number estimation methods are studied comparatively with numerical case studies, which contain a linear superposition case and a case combining linear superposition with nonlinear modulation mixing. A test bed with three sound sources is constructed to test the performances of these methods on mechanical systems, and source separation is carried out to validate the effectiveness of the experimental studies. This work can benefit model order selection, complexity analysis of a system, and applications of source separation to mechanical systems for condition monitoring and fault diagnosis purposes.
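
    A minimal sketch of the eigenvalue-based information criteria (the Wax-Kailath form of AIC/MDL) applied to a toy multichannel signal; the mixing setup is invented, and the paper's improved BIC is not reproduced here.

```python
import numpy as np

def estimate_source_number(X, criterion="MDL"):
    """Wax-Kailath style source number estimation from the eigenvalues of
    the sample covariance matrix. X has shape (sensors p, snapshots N)."""
    p, N = X.shape
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(X)))[::-1]
    scores = []
    for k in range(p):
        tail = eigvals[k:]                      # presumed noise eigenvalues
        # log of (geometric mean / arithmetic mean) of the noise eigenvalues
        log_ratio = np.log(tail).mean() - np.log(tail.mean())
        loglik = N * (p - k) * log_ratio
        penalty = k * (2 * p - k)
        if criterion == "AIC":
            scores.append(-2 * loglik + 2 * penalty)
        else:                                   # MDL
            scores.append(-loglik + 0.5 * penalty * np.log(N))
    return int(np.argmin(scores))

# Toy test: three sinusoidal sources mixed into eight noisy channels.
rng = np.random.default_rng(1)
t = np.arange(2000)
S = np.vstack([np.sin(0.010 * t), np.sin(0.023 * t), np.sin(0.041 * t)])
A = rng.standard_normal((8, 3))
X = A @ S + 0.1 * rng.standard_normal((8, 2000))
print(estimate_source_number(X, "MDL"))   # should recover 3
```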

  10. Drug Target Validation Methods in Malaria - Protein Interference Assay (PIA) as a Tool for Highly Specific Drug Target Validation.

    PubMed

    Meissner, Kamila A; Lunev, Sergey; Wang, Yuan-Ze; Linzke, Marleen; de Assis Batista, Fernando; Wrenger, Carsten; Groves, Matthew R

    2017-01-01

    The validation of drug targets in malaria and other human diseases remains a highly difficult and laborious process. In the vast majority of cases, highly specific small-molecule tools to inhibit a protein's function in vivo are simply not available. Additionally, the use of genetic tools in the analysis of malarial pathways is challenging. These issues result in difficulties in specifically modulating a hypothetical drug target's function in vivo. The current "toolbox" of methods and techniques to identify a protein's function in vivo remains very limited, and there is a pressing need for expansion. New approaches are urgently required to support target validation in the drug discovery process. Oligomerisation is the natural assembly of multiple copies of a single protein into one object, and this self-assembly is present in more than half of all protein structures; thus, oligomerisation plays a central role in the generation of functional biomolecules. A key feature of oligomerisation is that the oligomeric interfaces between the individual parts of the final assembly are highly specific. However, these interfaces have not yet been systematically explored or exploited to dissect biochemical pathways in vivo. This mini-review describes the current state of the antimalarial toolset as well as the potentially druggable malarial pathways. A specific focus is drawn to the initial efforts to exploit oligomerisation surfaces in drug target validation. As an alternative to the conventional methods, the Protein Interference Assay (PIA) can be used for specific distortion of the target protein's function and pathway assessment in vivo. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  11. Measuring step geometry using the Nose-to-Nose method: validity and repeatability.

    PubMed

    Johnson, Daniel; Sloan, Gary

    2012-01-01

    Precise measurement of stairway geometry is important in order to establish whether a stairway meets design goals, standards and codes. The Traditional method of measuring risers and runs (goings) does not measure the stairs in the way that codes define risers and runs, and does not measure the stairs in the way users experience them. The Nose-to-Nose method does. This study was conducted to determine the validity and repeatability of this method. Two researchers used this method to study the risers and runs on two separate stairways, one carpeted and the other not. Results demonstrated a high degree of validity and repeatability.

  12. Application and interpretation of multiple statistical tests to evaluate validity of dietary intake assessment methods.

    PubMed

    Lombard, Martani J; Steyn, Nelia P; Charlton, Karen E; Senekal, Marjanne

    2015-04-22

    Several statistical tests are currently applied to evaluate the validity of dietary intake assessment methods. However, they provide information on different facets of validity. There is also no consensus on the types and combinations of tests that should be applied to reflect acceptable validity for intakes. We aimed to 1) conduct a review to identify the tests and interpretation criteria used where dietary assessment methods were validated against a reference method and 2) illustrate the value of, and challenges that arise in, the interpretation of outcomes of multiple statistical tests in assessment of validity, using a test data set. An in-depth literature review was undertaken to identify the range of statistical tests used in the validation of quantitative food frequency questionnaires (QFFQs). Four databases were accessed to search for statistical methods and interpretation criteria used in papers focusing on relative validity. The identified tests and interpretation criteria were applied to a data set obtained using a QFFQ and four repeated 24-hour recalls from 47 adults (18-65 years) residing in rural Eastern Cape, South Africa. 102 studies were screened and 60 were included. Six statistical tests were identified; five with one set of interpretation criteria and one with two sets of criteria, resulting in seven possible validity interpretation outcomes. Twenty-one different combinations of these tests were identified, with the majority including three or fewer tests. The coefficient of correlation was the most commonly used (as a single test or in combination with one or more tests). Results of our application and interpretation of multiple statistical tests to assess the validity of energy, macronutrient and selected micronutrient estimates illustrate that for most of the nutrients considered, some outcomes support validity, while others do not. One to three statistical tests may not be sufficient to provide comprehensive insights into the various facets of validity. Results of our

  13. A novel method to derive amniotic fluid stem cells for therapeutic purposes

    PubMed Central

    2010-01-01

    Background Human amniotic fluid stem (hAFS) cells have become an attractive stem cell source for medical therapy due to both their ability to propagate as stem cells and the lack of ethical debate that comes with the use of embryonic stem cells. Although techniques to derive stem cells from amniotic fluid are available, these techniques have limitations for clinical use, including the long periods of time required for stem cell production, population heterogeneity and xeno-contamination from the use of animal antibody-coated magnetic beads. Herein we describe a novel isolation method suited to hAFS derivation for cell-based therapy. Methods and Results With our method, single hAFS cells generate colonies in a primary culture of amniotic fluid cells. Individual hAFS colonies are then expanded by subculturing in order to make a clonal hAFS cell line. This method allows derivation of a substantial amount of a pure stem cell population within a short period of time. Indeed, 10(8) cells from a clonal hAFS line can be derived in two weeks using our method, while previous techniques require two months. The resultant hAFS cells show a 2-5 times greater proliferative ability than with previous techniques and a population doubling time of 0.8 days. The hAFS cells exhibit typical hAFS cell characteristics, including the ability to differentiate into adipogenic, osteogenic and neurogenic lineages; expression of specific stem cell markers including Oct4, SSEA4, CD29, CD44, CD73, CD90, CD105 and CD133; and maintenance of a normal karyotype over long culture periods. Conclusions We have created a novel hAFS cell derivation method that can produce a vast amount of high-quality stem cells within a short period of time. Our technique makes it possible to provide autogenic fetal stem cells and allogeneic cells for future cell-based therapy. PMID:20955626

  14. Method development and validation of potent pyrimidine derivative by UV-VIS spectrophotometer.

    PubMed

    Chaudhary, Anshu; Singh, Anoop; Verma, Prabhakar Kumar

    2014-12-01

    A rapid and sensitive ultraviolet-visible (UV-VIS) spectroscopic method was developed for the estimation of the pyrimidine derivative 6-bromo-3-(6-(2,6-dichlorophenyl)-2-(morpholinomethylamino)pyrimidin-4-yl)-2H-chromen-2-one (BT10M) in bulk form. The pyrimidine derivative was monitored at 275 nm with UV detection, and there is no interference from diluents at 275 nm. The method was found to be linear in the range of 50 to 150 μg/ml. The accuracy and precision were determined and validated statistically. The method was validated according to guidelines. The results showed that the proposed method is suitable for the accurate, precise, and rapid determination of the pyrimidine derivative. Graphical Abstract: Method development and validation of a potent pyrimidine derivative by UV spectroscopy.

  15. Development and validation of an ionic chromatography method for the determination of nitrate, nitrite and chloride in meat.

    PubMed

    Lopez-Moreno, Cristina; Perez, Isabel Viera; Urbano, Ana M

    2016-03-01

    The purpose of this study was to develop and validate a method for the analysis of certain preservatives in meat, and to obtain a suitable Certified Reference Material (CRM) for this task. The preservatives studied were NO3(-), NO2(-) and Cl(-), as they serve as important antimicrobial agents in meat, inhibiting the growth of spoilage bacteria. The meat samples were prepared using a treatment that allowed the production of a CRM of known concentration that is highly homogeneous and stable over time. Matrix effects were also studied to evaluate their influence on the analytical signal for the ions of interest, showing that the matrix does not affect the final result. An assessment of the signal variation over time was carried out for the ions. Although the chloride and nitrate signals remained stable for the duration of the study, the nitrite signal decreased appreciably with time; a mathematical treatment of the data gave a stable nitrite signal, yielding a method suitable for the validation of these anions in meat. A statistical study was needed for the validation of the method, in which the precision, accuracy, uncertainty and other mathematical parameters were evaluated, with satisfactory results.

  16. A Classroom-Based Assessment Method to Test Speaking Skills in English for Specific Purposes

    ERIC Educational Resources Information Center

    Alberola Colomar, María Pilar

    2014-01-01

    This article presents and analyses a classroom-based assessment method to test students' speaking skills in a variety of professional settings in tourism. The assessment system has been implemented in the Communication in English for Tourism course, as part of the Tourism Management degree programme, at Florida Universitaria (affiliated to the…

  17. Comparison of different mass transport calculation methods for wind erosion quantification purposes

    USDA-ARS?s Scientific Manuscript database

    Quantitative estimation of the material transported by the wind is essential in the study and control of wind erosion, although methods for its calculation are still controversial. Sampling the dust cloud at discrete heights, fitting an equation to the data, and integrating this equation from the so...

  18. Comparative analysis of rodent tissue preservation methods and nucleic acid extraction techniques for virus screening purposes.

    PubMed

    Yama, Ines N; Garba, Madougou; Britton-Davidian, Janice; Thiberville, Simon-Djamel; Dobigny, Gauthier; Gould, Ernest A; de Lamballerie, Xavier; Charrel, Remi N

    2013-05-01

    The polymerase chain reaction (PCR) has become an essential method for the detection of viruses in tissue specimens. However, it is well known that the presence of PCR inhibitors in tissue samples may cause false-negative results. Hence, the identification of PCR inhibitors and the evaluation and optimization of nucleic acid extraction and preservation methods are of prime concern in virus discovery programs dealing with animal tissues. Accordingly, to monitor and remove inhibitors, we performed comparative analyses of two commonly used tissue storage methods and five RNA purification techniques using a variety of animal tissues containing quantified levels of added MS2 bacteriophage as the indicator of inhibition. The results showed (i) no significant difference between the two methods of sample preservation, viz. direct storage at -80°C or at 4°C in RNAlater; (ii) that rodent lung tissues contained lower levels of inhibitors than liver, kidney and spleen tissues; and (iii) that RNA extraction using the EZ1+PK RNA kit was the most effective procedure for removal of RT-PCR inhibitors.

  19. Citizen Decision Making, Reflective Thinking and Simulation Gaming: A Marriage of Purpose, Method and Strategy.

    ERIC Educational Resources Information Center

    White, Charles S.

    1985-01-01

    A conception of citizen decision making based on participatory democratic theory is most likely to foster effective citizenship. An examination of social studies traditions suggests that reflective thinking as a teaching method is congenial to this conception. Simulation gaming is a potentially powerful instructional strategy for supporting…

  1. A novel method to derive amniotic fluid stem cells for therapeutic purposes.

    PubMed

    Phermthai, Tatsanee; Odglun, Yuparat; Julavijitphong, Suphakde; Titapant, Vitaya; Chuenwattana, Prakong; Vantanasiri, Chanchai; Pattanapanyasat, Kovit

    2010-10-19

    Human amniotic fluid stem (hAFS) cells have become an attractive stem cell source for medical therapy due to both their ability to propagate as stem cells and the lack of ethical debate that comes with the use of embryonic stem cells. Although techniques to derive stem cells from amniotic fluid are available, these techniques have limitations for clinical use, including the long periods of time required for stem cell production, population heterogeneity and xeno-contamination from the use of animal antibody-coated magnetic beads. Herein we describe a novel isolation method suited to hAFS derivation for cell-based therapy. With our method, single hAFS cells generate colonies in a primary culture of amniotic fluid cells. Individual hAFS colonies are then expanded by subculturing in order to make a clonal hAFS cell line. This method allows derivation of a substantial amount of a pure stem cell population within a short period of time. Indeed, 10(8) cells from a clonal hAFS line can be derived in two weeks using our method, while previous techniques require two months. The resultant hAFS cells show a 2-5 times greater proliferative ability than with previous techniques and a population doubling time of 0.8 days. The hAFS cells exhibit typical hAFS cell characteristics, including the ability to differentiate into adipogenic, osteogenic and neurogenic lineages; expression of specific stem cell markers including Oct4, SSEA4, CD29, CD44, CD73, CD90, CD105 and CD133; and maintenance of a normal karyotype over long culture periods. We have created a novel hAFS cell derivation method that can produce a vast amount of high-quality stem cells within a short period of time. Our technique makes it possible to provide autogenic fetal stem cells and allogeneic cells for future cell-based therapy.

  2. Validation of an in-vivo proton beam range check method in an anthropomorphic pelvic phantom using dose measurements

    PubMed Central

    Bentefour, El H.; Tang, Shikui; Cascio, Ethan W.; Testa, Mauro; Samuel, Deepak; Prieels, Damien; Gottschalk, Bernard; Lu, Hsiao-Ming

    2015-01-01

    Purpose: In-vivo dosimetry and beam range verification in proton therapy could play a significant role in proton treatment validation and improvement. In-vivo beam range verification, in particular, could enable new treatment techniques, one of which could be the use of anterior fields for prostate treatment instead of the opposed lateral fields used in current practice. This paper reports a validation study of an in-vivo range verification method which can reduce the range uncertainty to submillimeter levels and potentially allow for in-vivo dosimetry. Methods: An anthropomorphic pelvic phantom is used to validate the clinical potential of the time-resolved dose method for range verification in the case of prostate treatment using range-modulated anterior proton beams. The method uses a 3 × 4 matrix of 1 mm diodes mounted in a water balloon, read by an ADC system at 100 kHz. The method is first validated against beam range measurements by dose extinction measurements. The validation is first completed in a water phantom and then in the pelvic phantom for both open-field and treatment-field configurations. The beam range results are then compared with the water equivalent path length (WEPL) values computed by the treatment planning system XIO. Results: Beam range measurements from the time-resolved dose method and the dose extinction method agree with submillimeter precision in the water phantom. For the pelvic phantom, when discarding two of the diodes that show signs of significant range mixing, the two methods agree within ±1 mm. A dose of only 7 mGy is sufficient to achieve this result. The comparison to the WEPL computed by the treatment planning system (XIO) shows that XIO underestimates the proton beam range. Quantifying the exact XIO range underestimation depends on the strategy used to evaluate the WEPL results; by our best evaluation, XIO underestimates the treatment beam range by between 1.7% and 4.1%. Conclusions: Time-resolved dose

  3. Validation of an in-vivo proton beam range check method in an anthropomorphic pelvic phantom using dose measurements

    SciTech Connect

    Bentefour, El H. Prieels, Damien; Tang, Shikui; Cascio, Ethan W.; Testa, Mauro; Lu, Hsiao-Ming; Samuel, Deepak; Gottschalk, Bernard

    2015-04-15

    Purpose: In-vivo dosimetry and beam range verification in proton therapy could play a significant role in proton treatment validation and improvement. In-vivo beam range verification, in particular, could enable new treatment techniques, one of which could be the use of anterior fields for prostate treatment instead of the opposed lateral fields used in current practice. This paper reports a validation study of an in-vivo range verification method which can reduce the range uncertainty to submillimeter levels and potentially allow for in-vivo dosimetry. Methods: An anthropomorphic pelvic phantom is used to validate the clinical potential of the time-resolved dose method for range verification in the case of prostate treatment using range-modulated anterior proton beams. The method uses a 3 × 4 matrix of 1 mm diodes mounted in a water balloon, read by an ADC system at 100 kHz. The method is first validated against beam range measurements by dose extinction measurements. The validation is first completed in a water phantom and then in the pelvic phantom for both open-field and treatment-field configurations. The beam range results are then compared with the water equivalent path length (WEPL) values computed by the treatment planning system XIO. Results: Beam range measurements from the time-resolved dose method and the dose extinction method agree with submillimeter precision in the water phantom. For the pelvic phantom, when discarding two of the diodes that show signs of significant range mixing, the two methods agree within ±1 mm. A dose of only 7 mGy is sufficient to achieve this result. The comparison to the WEPL computed by the treatment planning system (XIO) shows that XIO underestimates the proton beam range. Quantifying the exact XIO range underestimation depends on the strategy used to evaluate the WEPL results; by our best evaluation, XIO underestimates the treatment beam range by between 1.7% and 4.1%. Conclusions: Time-resolved dose

  4. Validation of high-performance thin-layer chromatographic methods for the identification of botanicals in a cGMP environment.

    PubMed

    Reich, Eike; Schibli, Anne; DeBatt, Alison

    2008-01-01

    Current Good Manufacturing Practices (cGMP) for botanicals stipulate the use of appropriate methods for the identification of raw materials. Due to natural variability, chemical analysis of plant material is a great challenge and requires special approaches. This paper presents a comprehensive proposal for the process of validating qualitative high-performance thin-layer chromatographic (HPTLC) methods, proving that such methods are suitable for the purpose. The steps of the validation process are discussed and illustrated with examples taken from a project aimed at validating methods for the identification of green tea leaf, ginseng root, eleuthero root, echinacea root, black cohosh rhizome, licorice root, kava root, milk thistle aerial parts, feverfew aerial parts, and ginger root. The appendix of the paper, which includes complete documentation and method write-ups for those plants, is available on the J. AOAC Int. Website (http://www.atypon-link.com/AOAC/loi/jaoi).

  5. Validation of High-Performance Thin-Layer Chromatographic Methods for the Identification of Botanicals in a cGMP Environment

    PubMed Central

    REICH, EIKE; SCHIBLI, ANNE; DEBATT, ALISON

    2009-01-01

    Current Good Manufacturing Practices (cGMP) for botanicals stipulate the use of appropriate methods for the identification of raw materials. Due to natural variability, chemical analysis of plant material is a great challenge and requires special approaches. This paper presents a comprehensive proposal for the process of validating qualitative high-performance thin-layer chromatographic (HPTLC) methods, proving that such methods are suitable for the purpose. The steps of the validation process are discussed and illustrated with examples taken from a project aimed at validating methods for the identification of green tea leaf, ginseng root, eleuthero root, echinacea root, black cohosh rhizome, licorice root, kava root, milk thistle aerial parts, feverfew aerial parts, and ginger root. The appendix of the paper, which includes complete documentation and method write-ups for those plants, is available on the J. AOAC Int. Website (http://www.atypon-link.com/AOAC/loi/jaoi). PMID:18376581

  6. Refraction-based X-ray Computed Tomography for Biomedical Purpose Using Dark Field Imaging Method

    NASA Astrophysics Data System (ADS)

    Sunaguchi, Naoki; Yuasa, Tetsuya; Huo, Qingkai; Ichihara, Shu; Ando, Masami

    We have proposed a tomographic x-ray imaging system using DFI (dark field imaging) optics, along with a data-processing method to extract information on refraction from the measured intensities and a reconstruction algorithm to reconstruct a refractive-index field from the projections generated from the extracted refraction information. The DFI imaging system consists of a tandem optical system of Bragg- and Laue-case crystals, a positioning device system for a sample, and two CCD (charge coupled device) cameras. We then developed a software code simulating the data-acquisition, data-processing, and reconstruction methods to investigate the feasibility of the proposed approach. Finally, in order to demonstrate its efficacy, we imaged a sample with DCIS (ductal carcinoma in situ) excised from a breast cancer patient, using a system constructed at the vertical wiggler beamline BL-14C in KEK-PF. The CT images depicted a variety of fine histological structures, such as milk ducts, duct walls, secretions, and adipose and fibrous tissue, and correlated well with histological sections.

  7. Application of EU guidelines for the validation of screening methods for veterinary drugs.

    PubMed

    Stolker, Alida A M Linda

    2012-08-01

    Commission Decision (CD) 2002/657/EC describes detailed rules for method validation within the framework of residue monitoring programmes. The approach described in this CD is criteria-based. For (qualitative) screening methods, the most important criterion is that the CCβ has to be below any regulatory limit. Especially when microbiological or immunochemical methods are involved, the approach described in the CD is not easily applied; with those methods, a large number of analytes (all antibiotics) within several different matrices (meat, milk, fish, eggs, etc.) are detected, and it is not completely clear whether all those analytes and all matrices have to be taken into account during method validation. To clarify this, a working group from the EU Reference Laboratories came up with a practical approach to validating multi-analyte, multi-matrix screening methods. It describes how many analyte/matrix combinations have to be tested and how these combinations are selected. Furthermore, it describes how to determine CCβ for screening methods in relation to a large list of compounds and maximum residue limits (MRLs). First, for each analyte/matrix combination, the 'cut-off' level - i.e. the level at which the method separates blanks from contaminated samples - is established. Validation is preferably performed at a concentration of 50% of the regulatory limit, and a minimum set of 20 different samples has to be tested. Experience with applying these guidelines shows that the validation approach is very practical; however, some caveats apply. One has to be careful when selecting 'representative' analytes and matrices, and it is strongly recommended to collect additional validation data during the routine application of the method. © 2012 RIKILT-Wageningen University and Research. Drug Testing and Analysis © 2012 John Wiley & Sons, Ltd.
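
    The cut-off determination described above lends itself to a short numerical illustration. The sketch below shows one common way to place a screening cut-off from samples spiked at 50% of the regulatory limit so that the false-compliant (β) error stays at or below 5%; the response values, the normality assumption, and the one-sided 1.64 factor are illustrative assumptions, not the working group's exact procedure.

```python
import numpy as np

# Hypothetical instrument responses for 20 blank samples and 20 samples
# spiked at 50% of the regulatory limit (illustrative numbers only).
rng = np.random.default_rng(1)
blanks = rng.normal(loc=5.0, scale=1.0, size=20)
spiked = rng.normal(loc=12.0, scale=1.5, size=20)

# One common approach: place the cut-off so that at most 5% of truly
# contaminated samples fall below it (beta error <= 5%), using the
# one-sided 95% normal factor 1.64 on the spiked-sample distribution.
cutoff = spiked.mean() - 1.64 * spiked.std(ddof=1)

# Sanity check: the fraction of blank responses above the cut-off
# indicates the false-positive risk at this threshold.
false_positive_rate = (blanks > cutoff).mean()
print(f"cut-off: {cutoff:.2f}, blanks above cut-off: {false_positive_rate:.0%}")
```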

  8. Increased efficacy for in-house validation of real-time PCR GMO detection methods.

    PubMed

    Scholtens, I M J; Kok, E J; Hougs, L; Molenaar, B; Thissen, J T N M; van der Voet, H

    2010-03-01

    To improve the efficacy of the in-house validation of GMO detection methods (DNA isolation and real-time PCR, polymerase chain reaction), a study was performed to gain insight into the contribution of the different steps of the GMO detection method to the repeatability and in-house reproducibility. In the present study, 19 methods for (GM) soy, maize, canola and potato were validated in-house, of which 14 on the basis of an 8-day validation scheme using eight different samples and five on the basis of a more concise validation protocol. In this way, data were obtained with respect to the detection limit, accuracy and precision. Decision limits were also calculated for declaring non-conformance (>0.9%) with 95% reliability. In order to estimate the contribution of the different steps in the GMO analysis to the total variation, variance components were estimated using REML (residual maximum likelihood). From these components, relative standard deviations for repeatability and reproducibility (RSD(r) and RSD(R)) were calculated. The results showed that not only the PCR reaction but also the factors 'DNA isolation' and 'PCR day' contribute substantially to the total variance and should therefore be included in the in-house validation. It is proposed to use a statistical model to estimate these factors from a large dataset of initial validations so that, for similar GMO methods in the future, only the PCR step needs to be validated. The resulting data are discussed in the light of agreed European criteria for qualified GMO detection methods.
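
    The abstract estimates variance components by REML across factors such as DNA isolation and PCR day. As a simplified, hedged illustration of how such components translate into RSD(r) and RSD(R), the sketch below uses the classical one-factor ANOVA estimators (ISO 5725 style) with a single 'PCR day' factor; the measurements and factor structure are invented for illustration.

```python
import numpy as np

# Hypothetical GMO content measurements (%), grouped by PCR day:
# p days, n replicates per day (illustrative numbers only).
days = np.array([
    [0.92, 0.95, 0.90],   # day 1
    [1.01, 0.98, 1.03],   # day 2
    [0.88, 0.91, 0.86],   # day 3
    [0.97, 0.99, 0.95],   # day 4
])
p, n = days.shape
grand_mean = days.mean()

# Classical one-factor ANOVA mean squares.
ms_within = ((days - days.mean(axis=1, keepdims=True)) ** 2).sum() / (p * (n - 1))
ms_between = n * ((days.mean(axis=1) - grand_mean) ** 2).sum() / (p - 1)

s2_r = ms_within                               # repeatability variance
s2_day = max((ms_between - ms_within) / n, 0)  # between-day component
s2_R = s2_r + s2_day                           # reproducibility variance

print(f"RSD(r) = {np.sqrt(s2_r) / grand_mean:.1%}")
print(f"RSD(R) = {np.sqrt(s2_R) / grand_mean:.1%}")
```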

  9. Validation of a LC method for the analysis of zafirlukast in a pharmaceutical formulation.

    PubMed

    Ficarra, R; Ficarra, P; Tommasini, S; Melardi, S; Calabrò, M L; Furlanetto, S; Semreen, M

    2000-08-01

    A reversed-phase high-performance liquid chromatographic (HPLC) method was developed and validated for the estimation of zafirlukast in a pharmaceutical formulation. Assay samples were extracted using acetonitrile. The drug and internal standard were chromatographed on reversed-phase C18 columns using acetonitrile/water mixtures, and the eluents were monitored at different wavelengths. The method was validated statistically for linearity, accuracy, robustness and precision. Experimental design was used during validation to evaluate method robustness and to determine intermediate precision. Factors examined in the statistical approaches included laboratory, day, analyst, instrument, percentage of organic modifier, temperature, wavelength and flow-rate. Due to its simplicity and accuracy, the method may be used for routine quality control analysis.

  10. Verification, Validation, and Solution Quality in Computational Physics: CFD Methods Applied to Ice Sheet Physics

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    2005-01-01

    Procedures and methods for verification of coding algebra and for validation of models and calculations used in the aerospace computational fluid dynamics (CFD) community would be efficacious if used by the glacier dynamics modeling community. This paper presents some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modeling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modeling community, and establishes a context for these within an overall solution quality assessment. Finally, a vision of a new information architecture and interactive scientific interface is introduced and advocated.

  11. Titanium oxide thin films obtained with physical and chemical vapour deposition methods for optical biosensing purposes.

    PubMed

    Dominik, M; Leśniewski, A; Janczuk, M; Niedziółka-Jönsson, J; Hołdyński, M; Wachnicki, Ł; Godlewski, M; Bock, W J; Śmietana, M

    2017-07-15

    This work discusses the application of titanium oxide (TiOx) thin films deposited by physical (reactive magnetron sputtering, RMS) and chemical (atomic layer deposition, ALD) vapour deposition methods as a functional coating for label-free optical biosensors. The films were applied as a coating for two types of sensors, based on the localised surface plasmon resonance (LSPR) of gold nanoparticles deposited on a glass plate and on a long-period grating (LPG) induced in an optical fibre. Optical and structural properties of the TiOx thin films were investigated and discussed. It was found that the deposition method has a significant influence on the optical properties and composition of the films, but negligible impact on the effectiveness of TiOx surface silanization. A higher oxygen content and lower Ti content in the ALD films lead to layers with a higher refractive index and a slightly higher extinction coefficient than for the RMS TiOx. Moreover, independently of the deposition method, application of the TiOx film enables not only tuning of the spectral response of the investigated biosensors but also, in the case of LSPR, an enhanced ability for biofunctionalization; i.e., the TiOx film mechanically protects the nanoparticles and changes the biofunctionalization procedure to one typical for oxides. TiOx-coated LSPR and LPG sensors with refractive index sensitivities of close to 30 and 3400 nm/RIU, respectively, were investigated. The ability for molecular recognition was evaluated with the well-known complex formation between avidin and biotin as a model system. The shift in resonance wavelength reached 3 and 13.2 nm for the LSPR and LPG sensors, respectively. Any modification in TiOx properties resulting from the biofunctionalization process can also be clearly detected.

  12. Multiple methods, maps, and management applications: Purpose made seafloor maps in support of ocean management

    NASA Astrophysics Data System (ADS)

    Brown, Craig J.; Sameoto, Jessica A.; Smith, Stephen J.

    2012-08-01

    The establishment of multibeam echosounders (MBES) as a mainstream tool in ocean mapping has facilitated integrative approaches toward nautical charting, benthic habitat mapping, and seafloor geotechnical surveys. The inherent bathymetric and backscatter information generated by MBES enables marine scientists to present highly accurate bathymetric data with a spatial resolution closely matching that of terrestrial mapping. Furthermore, developments in data collection and processing of MBES backscatter, combined with the quality of the co-registered depth information, have resulted in the increasing preferential use of multibeam technology over conventional sidescan sonar for the production of benthic habitat maps. A range of post-processing approaches can generate customized map products to meet multiple ocean management needs, thus extracting maximum value from a single survey data set. Based on recent studies over German Bank off SW Nova Scotia, Canada, we show how primary MBES bathymetric and backscatter data, along with supplementary data (i.e. in situ video and stills), were processed using a variety of methods to generate a series of maps. Methods conventionally used for classification of multi-spectral data were tested for classification of the MBES data set to produce a map summarizing broad bio-physical characteristics of the seafloor (i.e. a benthoscape map), which is of value for use in many aspects of marine spatial planning. A species-specific habitat map for the sea scallop Placopecten magellanicus was also generated from the MBES data by applying a Species Distribution Modeling (SDM) method to spatially predict habitat suitability, which offers tremendous promise for use in fisheries management. In addition, we explore the challenges of incorporating benthic community data into maps based on species information derived from a large number of seafloor photographs. Through the process of applying multiple methods to generate multiple maps for

  13. Assembly for collecting samples for purposes of identification or analysis and method of use

    DOEpatents

    Thompson, Cyril V [Knoxville, TN; Smith, Rob R [Knoxville, TN

    2010-02-02

    An assembly, and an associated method, for collecting a sample of material desired to be characterized with diagnostic equipment includes or utilizes an elongated member having a proximal end, with which the assembly is manipulated by a user, and a distal end. A collection tip, which is capable of being placed into contact with the material to be characterized, is supported upon the distal end. The collection tip includes a body of chemically-inert porous material which binds a sample of the material when the tip is placed into contact with it, and thereby holds the sample for subsequent introduction to the diagnostic equipment.

  14. An experimental validation method for questioning techniques that assess sensitive issues.

    PubMed

    Moshagen, Morten; Hilbig, Benjamin E; Erdfelder, Edgar; Moritz, Annie

    2014-01-01

    Studies addressing sensitive issues often yield distorted prevalence estimates due to socially desirable responding. Several techniques have been proposed to reduce this bias, including indirect questioning, psychophysiological lie detection, and bogus pipeline procedures. However, the increase in resources required by these techniques is warranted only if there is a substantial increase in validity compared to direct questions. Convincing demonstration of superior validity necessitates a criterion reflecting the "true" prevalence of a sensitive attribute. Unfortunately, such criteria are notoriously difficult to obtain, which is why validation studies often proceed indirectly by simply comparing estimates obtained with different methods. Comparative validation studies, however, provide only weak evidence, since the exact increase in validity (if any) remains unknown. To remedy this problem, we propose a simple method that allows the "true" prevalence of a sensitive behavior to be measured experimentally. The basic idea is to elicit normatively problematic behavior in a way that ensures conclusive knowledge of the prevalence rate of this behavior. This prevalence measure can then serve as an external validation criterion in a second step. An empirical demonstration of this method is provided.

  15. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    PubMed

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.

  16. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called “Patient Recursive Survival Peeling” is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called “combined” cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication. PMID:26997922

  17. Validation of an extraction method for Cry1Ab protein from soil.

    PubMed

    Mueting, Sara A; Strain, Katherine E; Lydy, Michael J

    2014-01-01

    Corn expressing insecticidal proteins derived from Bacillus thuringiensis (Bt corn) has increased in usage in the United States from 8% of total corn acreage in 1996 to 67% in 2012. Because of this increase, it is important to be able to monitor the fate and transport of the insecticidal Bt proteins to evaluate environmental exposure and effects. Accurate and validated methods are needed to quantify these proteins in environmental matrices. A method to extract Bt Cry1Ab proteins from 3 soil types using a 10× phosphate-buffered saline with Tween buffer and a commercially available enzyme-linked immunosorbent assay (ELISA) was validated through a series of 6 tests. The validation process for Cry1Ab extractions in soil has not yet been reported in the scientific literature. The extraction buffer and each soil matrix were tested and validated for the ELISA. Extraction efficiencies were 41%, 74%, and 89% for the 3 soil types and were significantly correlated with the organic matter content of the soil. Despite low recoveries, consistent results with low coefficients of variation allowed for accurate measurements. Through validating this method with 3 different soils, a sensitive, specific, precise, and accurate quantification of Bt Cry1Ab was developed. The validation process can be expanded and implemented in other environmental matrices, adding consistency to data across a wide range of samples.

  18. Owner-collected swabs of pets: a method fit for the purpose of zoonoses research.

    PubMed

    Möbius, N; Hille, K; Verspohl, J; Wefstaedt, P; Kreienbrock, L

    2013-09-01

    As part of the preparation of a large cohort study in the entire German population, this study examined the feasibility of cat and dog owners collecting nasal and oral swabs of their animals at home as a method of assessing exposure to zoonoses. In veterinary clinics in Hannover, Germany, 100 pet owners were recruited. Nasal and oral swabs of the pets were taken by a veterinarian at the clinic, and the owners took swabs at home. Swabs were analysed for bacterial growth and compared (owner vs. vet) using Cohen's kappa and McNemar's test. The return rate of kits was 92%, and 77% of owners thought it unnecessary to have veterinary assistance to swab the mouth. For oral swabs there was 78% agreement on Gram-positive bacterial growth and 87% agreement on Gram-negative bacterial growth, with similar results for nasal swabs. Although sample quality differed, this method allowed the receipt of swabs from pets in order to obtain information about colonization with zoonotic pathogens.
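
    For readers who want to reproduce the owner-versus-vet comparison on their own data, the sketch below computes Cohen's kappa and McNemar's test for paired binary culture results; the result vectors are hypothetical, and the scikit-learn/statsmodels calls are one standard way to run these tests.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired results (1 = bacterial growth detected) for swabs
# taken by the vet and by the owner on the same animals (illustrative).
vet   = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1])
owner = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1])

# Agreement beyond chance between the two collectors.
print("Cohen's kappa:", cohen_kappa_score(vet, owner))

# 2x2 table of paired outcomes for McNemar's test of marginal homogeneity.
table = np.array([
    [np.sum((vet == 1) & (owner == 1)), np.sum((vet == 1) & (owner == 0))],
    [np.sum((vet == 0) & (owner == 1)), np.sum((vet == 0) & (owner == 0))],
])
print("McNemar p-value:", mcnemar(table, exact=True).pvalue)
```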

  19. PAH detection in Quercus robur leaves and Pinus pinaster needles: A fast method for biomonitoring purpose.

    PubMed

    De Nicola, F; Concha Graña, E; Aboal, J R; Carballeira, A; Fernández, J Á; López Mahía, P; Prada Rodríguez, D; Muniategui Lorenzo, S

    2016-06-01

    Due to the complexity and heterogeneity of plant matrices, a new procedure should be standardized for each individual biomonitor. Here, a matrix solid-phase dispersion extraction method, previously used for moss samples, is described, improved and modified for the analysis of PAHs in Quercus robur leaves and Pinus pinaster needles, species widely used in biomonitoring studies across Europe. The improvements compared to the previous procedure are the use of Florisil together with further clean-up sorbents, 10% deactivated silica for pine needles and PSA for oak leaves, as these matrices are rich in interfering compounds, as shown by gas chromatography-mass spectrometry analyses acquired in full scan mode. Good trueness, with values in the range 90-120% for most compounds, high precision (intermediate precision between 2% and 12%) and good sensitivity using only 250 mg of sample (limits of quantification lower than 3 and 1.5 ng g(-1) for pine and oak, respectively) were achieved by the selected procedures. These methods proved to be reliable for PAH analysis and, having the advantage of speed, can be used in biomonitoring studies of PAH air contamination. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Bridging the gap between comprehensive extraction protocols in plant metabolomics studies and method validation.

    PubMed

    Bijttebier, Sebastiaan; Van der Auwera, Anastasia; Foubert, Kenn; Voorspoels, Stefan; Pieters, Luc; Apers, Sandra

    2016-09-07

    It is vital to pay close attention to the design of extraction methods developed for plant metabolomics, as any non-extracted or converted metabolites will greatly affect the overall quality of the metabolomics study. Method validation is, however, often omitted in plant metabolome studies, as the well-established methodologies for classical targeted analyses, such as recovery optimization, cannot be strictly applied. The aim of the present study is to thoroughly evaluate state-of-the-art comprehensive extraction protocols for plant metabolomics with liquid chromatography-photodiode array-accurate mass mass spectrometry (LC-PDA-amMS) by bridging the gap with method validation. Validation of an extraction protocol in untargeted plant metabolomics should ideally be accomplished by validating the protocol for all possible outcomes, i.e. for all secondary metabolites potentially present in the plant. In an effort to approach this ideal validation scenario, two plant matrices were selected based on their wide versatility of phytochemicals: meadowsweet (Filipendula ulmaria) for its polyphenol content, and spicy paprika powder (from the genus Capsicum) for its apolar phytochemical content (carotenoids, phytosterols, capsaicinoids). These matrices were extracted with comprehensive extraction protocols adapted from the literature and analysed with a generic LC-PDA-amMS characterization platform previously validated for broad-range phytochemical analysis. The performance of the comprehensive sample preparation protocols was assessed based on extraction efficiency, repeatability and intermediate precision, and on evaluation of ionization suppression/enhancement. The manuscript elaborates on the finding that none of the extraction methods allowed exhaustive extraction of the metabolites. Furthermore, it is shown that, depending on the extraction conditions, enzymatic degradation mechanisms can occur. Investigation of the fractions obtained with the different extraction methods

  1. Validation procedures for quantitative gluten ELISA methods: AOAC allergen community guidance and best practices.

    PubMed

    Koerner, Terry B; Abbott, Michael; Godefroy, Samuel Benrejeb; Popping, Bert; Yeung, Jupiter M; Diaz-Amigo, Carmen; Roberts, James; Taylor, Steve L; Baumert, Joseph L; Ulberth, Franz; Wehling, Paul; Koehler, Peter

    2013-01-01

    The food allergen analytical community is endeavoring to create harmonized guidelines for the validation of food allergen ELISA methodologies to help protect food-sensitive individuals and promote consumer confidence. This document provides additional guidance to existing method validation publications for quantitative food allergen ELISA methods. The gluten-specific criteria provided in this document are divided into sections covering the information required from the method developer about the assay and the information needed to implement the multilaboratory validation study. Many of these recommendations and guidelines are built upon the widely accepted Codex Alimentarius definitions and recommendations for gluten-free foods. The information in this document can be used as the basis of a harmonized validation protocol for any ELISA method for gluten, whether proprietary or nonproprietary, that will be submitted to AOAC and/or regulatory authorities or other bodies for status recognition. Future work is planned for the implementation of this guidance document for the validation of gluten methods and the creation of gluten reference materials.

  2. Evaluation of a Validation Method for MR Imaging-Based Motion Tracking Using Image Simulation

    NASA Astrophysics Data System (ADS)

    Moerman, Kevin M.; Kerskens, Christian M.; Lally, Caitríona; Flamini, Vittoria; Simms, Ciaran K.

    2009-12-01

    Magnetic Resonance (MR) imaging-based motion and deformation tracking techniques combined with finite element (FE) analysis are a powerful method for soft tissue constitutive model parameter identification. However, deriving deformation data from MR images is complex and generally requires validation. In this paper a validation method is presented, based on a silicone gel phantom containing contrasting spherical markers; tracking of these markers provides a direct measure of deformation. Validation of in vivo medical imaging techniques is often challenging due to the lack of appropriate reference data, and a validation method may itself lack an appropriate reference. This paper evaluates a validation method using simulated MR image data. This provided an appropriate reference, allowed different error sources to be studied independently, and allowed evaluation of the method for various signal-to-noise ratios (SNRs). The geometric bias error was between 0-[InlineEquation not available: see fulltext.] voxels, while the noisy magnitude MR image simulations demonstrated errors under 0.1161 voxels (SNR: 5-35).
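
    The noisy magnitude MR simulations mentioned above rely on the fact that noise in magnitude MR images is Rician: Gaussian noise enters the real and imaginary channels before the magnitude operation. The sketch below shows a minimal way to degrade a synthetic image at a chosen SNR; the SNR convention (mean signal over channel sigma) and the phantom are assumptions for illustration, not the paper's exact simulation.

```python
import numpy as np

def add_rician_noise(magnitude_image: np.ndarray, snr: float) -> np.ndarray:
    """Simulate a noisy magnitude MR image at a given SNR.

    Gaussian noise of standard deviation sigma is added to the real
    channel and to an (assumed zero) imaginary channel, then the
    magnitude is taken, yielding Rician-distributed intensities.
    """
    sigma = magnitude_image.mean() / snr
    real = magnitude_image + np.random.normal(0.0, sigma, magnitude_image.shape)
    imag = np.random.normal(0.0, sigma, magnitude_image.shape)
    return np.sqrt(real**2 + imag**2)

# Usage: a flat synthetic slice degraded at SNRs 5-35 as in the study.
phantom = np.ones((64, 64)) * 100.0
for snr in (5, 15, 25, 35):
    noisy = add_rician_noise(phantom, snr)
    print(f"SNR {snr}: measured mean {noisy.mean():.1f}")
```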

  3. External Standards or Standard Additions? Selecting and Validating a Method of Standardization.

    ERIC Educational Resources Information Center

    Harvey, David

    2002-01-01

    Reports an experiment which is suitable for an introductory course in analytical chemistry and which illustrates the importance of matrix effects when selecting a method of standardization. Asserts that students learn how a spike recovery is used to validate an analytical method, and obtain practical experience in the difference between performing…

  4. Inter-laboratory validation of standardized method to determine permeability of plastic films

    USDA-ARS?s Scientific Manuscript database

    To support regulations controlling soil fumigation, we are standardizing the laboratory method we developed to measure the permeability of plastic films to fumigant vapors. The method was validated using an inter-laboratory comparison with 7 participants. Each participant evaluated the mass transfer...

  5. A Thematic Review of Interactive Whiteboard Use in Science Education: Rationales, Purposes, Methods and General Knowledge

    NASA Astrophysics Data System (ADS)

    Ormanci, Ummuhan; Cepni, Salih; Deveci, Isa; Aydin, Ozhan

    2015-10-01

    In Turkey and many other countries, the importance of the interactive whiteboard (IWB) is increasing, and as a result, projects and studies are being conducted regarding the use of the IWB in classrooms. Accordingly, in these countries, many issues are being researched, such as the IWB's contribution to the education process, its use in classroom settings and problems that occur when using the IWB. In this context, the review and analysis of studies regarding the use of the IWB have important implications for educators, researchers and teachers. This study aims to review and analyze studies conducted on the use of the IWB in the field of science. As a thematic review of the research was deemed appropriate, articles available in the literature were analyzed using a matrix consisting of general features (type of journal, year and demographic properties) and content features (rationales, aims, research methods, samples, data collection, results and suggestions). According to the findings, most studies regarding the use of IWBs were motivated by deficiencies in the current literature; studies in which the rationale was connected to the nature of science education were rare. There were also studies that focused on the effects of the IWB on student academic success and learning outcomes. Within this context, there is a clear need for further research on the use of IWBs in science education and for studies regarding the effect of IWBs on students' skills.

  6. Normalizing surface electromyographic measures of the masticatory muscles: Comparison of two different methods for clinical purpose.

    PubMed

    Mapelli, Andrea; Tartaglia, Gianluca Martino; Connelly, Stephen Thaddeus; Ferrario, Virgilio Ferruccio; De Felicio, Claudia Maria; Sforza, Chiarella

    2016-10-01

    To compare a new normalization technique (wax pad, WAX) with the currently utilized cotton roll (COT) method in surface electromyography (sEMG) of the masticatory muscles, sEMG of the masseter and anterior temporalis muscles of 23 subjects was recorded while they performed two repetitions of 5 s maximum voluntary clenches (MVC) on COT and WAX. For each task, the mean value of the sEMG amplitude and its coefficient of variation were calculated, and the differences between the two repetitions computed. The standard error of measurement (SEM) was calculated. For each subject and muscle, the COT-to-WAX maximum activity increment was computed. Participant preference between tasks was also recorded. WAX MVC tasks had larger maximum EMG amplitude than COT MVC tasks (P<0.001), with COT-to-WAX maximum amplitude increments of 61% (temporalis) and 94% (masseter) (P=0.006). WAX MVC had better test-retest repeatability than COT. For both MVC modalities, the mean amplitude (P>0.391) and its coefficient of variation (P>0.180) were unchanged. The WAX task was the more comfortable for 18/23 subjects (P=0.007). WAX normalization ensures the same stability of maximum EMG amplitude as COT normalization, but is more repeatable, elicits larger maximum muscular contraction, and is felt to be more comfortable by subjects. Copyright © 2016 Elsevier Ltd. All rights reserved.
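
    As a hedged illustration of the repeatability statistics reported above, the sketch below computes a within-subject coefficient of variation and a standard error of measurement from two repetitions of a task. The amplitude values are invented, and the SEM-from-differences formula is one common formulation, not necessarily the one used in the paper.

```python
import numpy as np

# Hypothetical maximum sEMG amplitudes (arbitrary units) for two
# repetitions of the same MVC task (illustrative numbers only).
trial1 = np.array([100.0, 96.0, 104.0, 98.0, 101.0, 95.0, 103.0, 99.0])
trial2 = np.array([ 98.0, 97.0, 101.0, 99.0, 103.0, 96.0, 100.0, 98.0])

# Within-subject coefficient of variation across the two repetitions.
pair = np.stack([trial1, trial2])
cv = (pair.std(axis=0, ddof=1) / pair.mean(axis=0)).mean()

# Standard error of measurement from the test-retest differences:
# SEM = SD(differences) / sqrt(2) (one common formulation).
sem = np.std(trial1 - trial2, ddof=1) / np.sqrt(2)

print(f"mean within-subject CV: {cv:.1%}, SEM: {sem:.2f}")
```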

  7. Automated standardized pupillometry with optical method for purposes of clinical practice and research.

    PubMed

    Fotiou, F; Fountoulakis, K N; Goulas, A; Alexopoulos, L; Palikaras, A

    2000-09-01

    The aim of the current study was the introduction and standardization of two experimental conditions for dynamic pupillometry. Pupillometry is a method that can provide valuable data concerning the functioning of the autonomic nervous system. The system for recording the pupil reaction was developed in the Laboratory of Clinical Neurophysiology of the 1st Department of Neurology of Aristotle University of Thessaloniki, in co-operation with the Laboratory of Fluid Mechanics of the Aristotle University of Thessaloniki. This system is fully automated. It includes an infra-red video camera, which has the capacity to record in complete darkness, and an SLE (clinical photic stimulator) lamp. A software application automatically performed all the procedures. During the first experiment, one flash was administered. During the second experiment, a series of 25 flashes (1 Hz frequency) was administered. Fifty physically and mentally healthy subjects aged 23-48 years took part in the study. Means, standard deviations and ranges for all variables characterizing normal subjects during both experimental conditions are reported. Test/re-test results and comparisons of the two eyes are also reported. The combined use of these two experimental conditions in dynamic pupillometry may be a very useful tool in medical research. There are already reports on the usefulness of pupillometry in the research of various diseases, including depression and Alzheimer's disease. It is expected that it will also be a valuable research tool in the study of diabetes, alcoholism, myasthenia gravis, cancer, multiple sclerosis, etc.

  8. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement

    PubMed Central

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-01-01

    Objective The aim of this study was to determine the reliability and validity of the AutoCAD software method for lumbar lordosis measurement. Methods Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken of all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis was measured via the AutoCAD software and flexible ruler methods. The study was accomplished in 2 parts: intratester and intertester evaluations of the reliability, as well as the validity, of the flexible ruler and software methods. Results Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. Conclusions AutoCAD was shown to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis. PMID:22654681

  9. Validation of USP apparatus 4 method for microsphere in vitro release testing using Risperdal Consta.

    PubMed

    Rawat, Archana; Stippler, Erika; Shah, Vinod P; Burgess, Diane J

    2011-11-28

    The current manuscript addresses the need for a validated in vitro release testing method for controlled release parenteral microspheres. A USP apparatus 4 method was validated with the objective of possible compendial adaptation for microsphere in vitro release testing. Commercial microspheres (Risperdal Consta) were used for method validation. Accelerated and real-time release tests were conducted. The accelerated method had a significantly reduced test duration and showed a good correlation with the real-time release profile (with a limited number of sample analyses). Accelerated conditions were used for method validation (robustness and reproducibility). The robustness testing results revealed that release from the microspheres was not flow-rate dependent and was not affected by minor variations in the method (such as cell preparation technique, amount of microspheres, flow-through cell size and size of glass beads). The significant difference in the release profile with small variations (± 0.5°C) in temperature was shown to be due to a change in risperidone-catalyzed PLGA degradation in response to temperature. The accelerated method was reproducible, as changing the system/equipment or the analyst did not affect the release profile. This work establishes the suitability of the modified USP apparatus 4 for possible compendial adaptation for drug release testing of microspheres.

  10. Experimental and statistical approaches in method cross-validation to support pharmacokinetic decisions.

    PubMed

    Thway, Theingi M; Ma, Mark; Lee, Jean; Sloey, Bethlyn; Yu, Steven; Wang, Yow-Ming C; Desilva, Binodh; Graves, Tom

    2009-04-05

    A case study of experimental and statistical approaches for cross-validating and examining the equivalence of two ligand binding assay (LBA) methods employed in pharmacokinetic (PK) studies is presented. The impact of changes in methodology based on the intended use of the methods was assessed. The cross-validation process included an experimental plan, sample size selection, and statistical analysis with a predefined criterion of method equivalence. The two methods were deemed equivalent if the 90% confidence interval for the ratio of mean concentrations fell within (0.80-1.25). Statistical consideration of method imprecision was used to choose the number of incurred samples (collected from study animals) and conformance samples (spiked controls) for the equivalence tests. The difference of log-transformed mean concentrations and the 90% confidence interval for the two methods were computed using analysis of variance. The mean concentration ratios of the two methods for the incurred and spiked conformance samples were 1.63 and 1.57, respectively. The 90% confidence limits were 1.55-1.72 for the incurred samples and 1.54-1.60 for the spiked conformance samples; therefore, the 90% confidence interval was not contained within the (0.80-1.25) equivalence interval. When the PK parameters of two studies using each of these two methods were compared, the therapeutic exposure, AUC(0-168) and Cmax, from Study A/Method 1 was approximately twice that of Study B/Method 2. We concluded that the two methods were not statistically equivalent and that the magnitude of the difference was reflected in the PK parameters of the studies using each method. This paper demonstrates the need for method cross-validation whenever there is a switch in bioanalytical methods, for statistical approaches to designing the cross-validation experiments and assessing results, and for interpretation of the impact on PK data.
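
    The equivalence criterion above (90% CI of the ratio of mean concentrations inside 0.80-1.25) can be illustrated with a minimal computation. The sketch below works on log-transformed concentrations and uses a pooled-variance t interval rather than the paper's full ANOVA; the concentration values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical incurred-sample concentrations measured by two LBA
# methods on the same samples (illustrative values only).
method1 = np.array([12.1, 9.8, 15.3, 11.0, 13.7, 10.4, 14.2, 12.9])
method2 = np.array([ 7.6, 6.1,  9.0,  6.8,  8.4,  6.5,  8.8,  7.9])

# Work on the log scale so the CI back-transforms to a ratio of means.
d1, d2 = np.log(method1), np.log(method2)
n1, n2 = len(d1), len(d2)
diff = d1.mean() - d2.mean()

# Pooled-variance standard error and two-sided 90% t interval.
df = n1 + n2 - 2
sp2 = ((n1 - 1) * d1.var(ddof=1) + (n2 - 1) * d2.var(ddof=1)) / df
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
t90 = stats.t.ppf(0.95, df)

ratio = np.exp(diff)
ci = (np.exp(diff - t90 * se), np.exp(diff + t90 * se))
equivalent = 0.80 <= ci[0] and ci[1] <= 1.25
print(f"ratio {ratio:.2f}, 90% CI ({ci[0]:.2f}, {ci[1]:.2f}), equivalent: {equivalent}")
```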

  11. Method of validating measurement data of a process parameter from a plurality of individual sensor inputs

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1998-01-01

    A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from the scan are stored, and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing the input, including a preset tolerance, against the initial average. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs and deviation checking the good inputs by comparing each good input, including a preset tolerance, against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation, the inputs from all the sensors are compared against the last validated measurement, and the value from the sensor input that deviates least from the last valid measurement is displayed.
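
    The two-pass logic described in this patent abstract maps naturally onto a short routine. The sketch below is a minimal reading of that description, not the patented implementation; the tolerance handling, the fallback rule, and the numbers are illustrative.

```python
import numpy as np

def validate_scan(inputs, tolerance, last_valid):
    """Two-pass validation of redundant sensor inputs, per the abstract above.

    Returns (validated_value, suspect_flags). On a validation fault, falls
    back to the input closest to the last validated measurement.
    """
    inputs = np.asarray(inputs, dtype=float)

    # First pass: deviation-check every input against the initial average.
    first_avg = inputs.mean()
    suspect = np.abs(inputs - first_avg) > tolerance

    good = inputs[~suspect]
    if len(good) >= 2:
        # Second pass: average the good inputs and re-check them.
        second_avg = good.mean()
        if np.all(np.abs(good - second_avg) <= tolerance):
            return second_avg, suspect

    # Validation fault: use the input deviating least from the last
    # validated measurement.
    fallback = inputs[np.argmin(np.abs(inputs - last_valid))]
    return fallback, suspect

# One sensor (512.6) drifts; the other three agree and are averaged.
value, flags = validate_scan([500.2, 499.8, 512.6, 500.1], tolerance=5.0, last_valid=500.0)
print(value, flags)
```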

  12. Best practices for single-laboratory validation of chemical methods for trace elements in foods. Part I--background and general considerations.

    PubMed

    Murphy, Cory J; MacNeil, James D; Capar, Stephen G

    2013-01-01

    The metals subgroup of AOAC INTERNATIONAL's Community on Chemical Contaminants and Residues in Food has been engaged for the past several years in discussions concerning the requirements for the single-laboratory validation (SLV) of methods for the determination of trace elements in foods. This paper reviews the general guidance currently available related to validation of chemical analytical methods and current typical validation practices found in publications on the analysis of elements in food and other matrixes, such as environmental and clinical samples. Based on the available guidance on SLV requirements and a review of current practices in elemental analysis, a general approach based on best practices is proposed for SLV of a method for elements in food to demonstrate the method as "fit-for-purpose."

  13. Accuracy and generalizability of using automated methods for identifying adverse events from electronic health record data: a validation study protocol.

    PubMed

    Rochefort, Christian M; Buckeridge, David L; Tanguay, Andréanne; Biron, Alain; D'Aragon, Frédérick; Wang, Shengrui; Gallix, Benoit; Valiquette, Louis; Audet, Li-Anne; Lee, Todd C; Jayaraman, Dev; Petrucci, Bruno; Lefebvre, Patricia

    2017-02-16

    Adverse events (AEs) in acute care hospitals are frequent and associated with significant morbidity, mortality, and costs. Measuring AEs is necessary for quality improvement and benchmarking purposes, but current detection methods lack accuracy, efficiency, and generalizability. The growing availability of electronic health records (EHR) and the development of natural language processing techniques for encoding narrative data offer an opportunity to develop potentially better methods. The purpose of this study is to determine the accuracy and generalizability of using automated methods for detecting three high-incidence and high-impact AEs from EHR data: a) hospital-acquired pneumonia, b) ventilator-associated event, and c) central line-associated bloodstream infection. This validation study will be conducted among medical, surgical and ICU patients admitted between 2013 and 2016 to the Centre hospitalier universitaire de Sherbrooke (CHUS) and the McGill University Health Centre (MUHC), which has both French and English sites. A random 60% sample of CHUS patients will be used for model development (cohort 1, development set). Using a random sample of these patients, a reference standard assessment of their medical charts will be performed. Multivariate logistic regression and the area under the curve (AUC) will be employed to iteratively develop and optimize three automated AE detection models (i.e., one per AE of interest) using EHR data from the CHUS. These models will then be validated on a random sample of the remaining 40% of CHUS patients (cohort 1, internal validation set), using chart review to assess accuracy. The most accurate models developed and validated at the CHUS will then be applied to EHR data from a random sample of patients admitted to the MUHC French site (cohort 2) and English site (cohort 3), a critical requirement given the use of narrative data, and accuracy will be assessed using chart review. Generalizability will be determined
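
    As a hedged sketch of the modeling step in this protocol (multivariate logistic regression evaluated by AUC on a held-out validation set), the code below trains and validates a detector on synthetic data; the features, labels, and 60/40 split are illustrative stand-ins for the EHR data and chart-review reference standard.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical EHR-derived features (e.g. counts of pneumonia-related
# terms from notes, fever flags, imaging orders) and chart-review
# labels for one AE of interest (all illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = ((X @ np.array([1.2, -0.8, 0.5, 0.0, 0.3]) + rng.normal(size=1000)) > 1.0).astype(int)

# 60/40 development/internal-validation split mirroring the protocol.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, train_size=0.6, random_state=0)

model = LogisticRegression().fit(X_dev, y_dev)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation AUC: {auc:.3f}")
```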

  14. A clinically relevant validation method for implant placement after virtual planning.

    PubMed

    Verhamme, Luc M; Meijer, Gert J; Boumans, Tiny; Schutyser, Filip; Bergé, Stefaan J; Maal, Thomas J J

    2013-11-01

    To design a relevant method for comparing the virtually planned implant position to the ultimately achieved implant position and, in case of discrepancy, to evaluate its cause. Five consecutive edentulous patients with retention problems of the upper denture received four implants in the maxilla. Preoperatively, a cone-beam CT (CBCT) scan was first acquired, followed by virtual implant planning. A surgical template was then designed, and endosseous implants were installed flapless using the template as a guide. To inventory any differences in position, the postoperative CBCT scan was matched to the preoperative scan. The accuracy of implant placement was validated three-dimensionally (3D), and the Implant Position Orthogonal Projection (IPOP) validation method was applied to project the results onto bucco-lingual and mesio-distal planes. Subsequently, errors introduced by the virtual planning, the surgical instruments, and the validation process were evaluated. The bucco-lingual deviations were less pronounced than the mesio-distal deviations. A maximum linear tip deviation of 2.84 mm, shoulder deviation of 2.42 mm, and angular deviation of 3.41° were calculated in the mesio-distal direction. Deviations included errors in the planning software (maximum 0.15 mm), the surgical procedure (maximum 2.94°), and the validation process (maximum 0.10 mm). This study provides the IPOP validation method as an accurate method to evaluate implant positions and to elucidate inaccuracies in virtual implant planning systems. © 2012 John Wiley & Sons A/S.

  15. Comprehensive validation scheme for in situ fiber optics dissolution method for pharmaceutical drug product testing.

    PubMed

    Mirza, Tahseen; Liu, Qian Julie; Vivilecchia, Richard; Joshi, Yatindra

    2009-03-01

    There has been a growing interest during the past decade in the use of fiber optics dissolution testing. Use of this novel technology is mainly confined to research and development laboratories. It has not yet emerged as a tool for end product release testing despite its ability to generate in situ results and efficiency improvement. One potential reason may be the lack of clear validation guidelines that can be applied for the assessment of suitability of fiber optics. This article describes a comprehensive validation scheme and development of a reliable, robust, reproducible and cost-effective dissolution test using fiber optics technology. The test was successfully applied for characterizing the dissolution behavior of a 40-mg immediate-release tablet dosage form that is under development at Novartis Pharmaceuticals, East Hanover, New Jersey. The method was validated for the following parameters: linearity, precision, accuracy, specificity, and robustness. In particular, robustness was evaluated in terms of probe sampling depth and probe orientation. The in situ fiber optic method was found to be comparable to the existing manual sampling dissolution method. Finally, the fiber optic dissolution test was successfully performed by different operators on different days, to further enhance the validity of the method. The results demonstrate that the fiber optics technology can be successfully validated for end product dissolution/release testing.

  16. A total organic carbon analysis method for validating cleaning between products in biopharmaceutical manufacturing.

    PubMed

    Baffi, R; Dolch, G; Garnick, R; Huang, Y F; Mar, B; Matsuhiro, D; Niepelt, B; Parra, C; Stephan, M

    1991-01-01

    The validation of cleaning procedures for biopharmaceutical products produced by recombinant DNA (rDNA) technology presents a diverse analytical challenge. This is because of the need for quantitation of a broad range of potential residual cellular components, including proteins, carbohydrates, and nucleic acids, as well as trace levels of detergents at various manufacturing stages. The validation of a Total Organic Carbon (TOC) analysis method for use in cleaning validation studies is presented. The method has a limit of detection of approximately 0.1 ppm, with a limit of quantitation of 0.5 ppm. TOC analysis has an accuracy of 50 to 70% or better in the 0.5- to 10-ppm range and demonstrates an overall variability of approximately 5%. The method is broadly applicable to a variety of impurities and contaminants that are likely to be encountered following the manufacture of rDNA products.
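
    The detection and quantitation limits quoted above can be estimated from a calibration curve. The sketch below applies the widely used ICH-style formulas LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the residual standard deviation of the calibration line and S its slope; the calibration data are invented, and the paper may have derived its 0.1/0.5 ppm figures differently.

```python
import numpy as np

# Hypothetical TOC calibration: standard concentrations (ppm) and
# instrument responses (illustrative numbers only).
conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])
resp = np.array([0.02, 0.26, 0.51, 1.03, 2.49, 5.05])

# Least-squares calibration line and its residual standard deviation.
slope, intercept = np.polyfit(conc, resp, 1)
residual_sd = np.sqrt(np.sum((resp - (slope * conc + intercept)) ** 2) / (len(conc) - 2))

# ICH-style limit estimates (one common convention).
lod = 3.3 * residual_sd / slope
loq = 10.0 * residual_sd / slope
print(f"LOD ~ {lod:.2f} ppm, LOQ ~ {loq:.2f} ppm")
```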

  17. Statistical methods and software for validation studies on new in vitro toxicity assays.

    PubMed

    Schaarschmidt, Frank; Hothorn, Ludwig A

    2014-11-01

    When a new in vitro assay method is introduced, it should be validated against the best available knowledge or a reference standard assay. For assays resulting in a simple binary outcome, the data can be displayed as a 2×2 table. Based on the estimated sensitivity and specificity, and the assumed prevalence of true positives in the population of interest, the positive and negative predictive values of the new assay can be calculated. We briefly discuss the experimental design of validation experiments and previously published methods for computing confidence intervals for predictive values. The application of the methods is illustrated for two toxicological examples, using tools available in the free software R: confidence intervals for predictive values are computed for a validation study of an in vitro test battery, and sample size calculation is illustrated for an acute toxicity assay. The R code necessary to reproduce the results is given. 2014 FRAME.
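
    The core computation this paper builds on, predictive values from sensitivity, specificity, and assumed prevalence via Bayes' theorem, is compact enough to show directly (here in Python rather than the paper's R). The sketch gives point estimates only, whereas the paper's focus is confidence intervals for these quantities; the input values are illustrative.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Positive/negative predictive values via Bayes' theorem."""
    tp = sensitivity * prevalence                  # true positives
    fp = (1.0 - specificity) * (1.0 - prevalence)  # false positives
    fn = (1.0 - sensitivity) * prevalence          # false negatives
    tn = specificity * (1.0 - prevalence)          # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical assay validated against a reference standard:
# sensitivity 0.90, specificity 0.85, assumed prevalence of true
# positives 0.20 (illustrative numbers only).
ppv, npv = predictive_values(0.90, 0.85, 0.20)
print(f"PPV: {ppv:.2f}, NPV: {npv:.2f}")
```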

  18. Display format, highlight validity, and highlight method: Their effects on search performance

    NASA Technical Reports Server (NTRS)

    Donner, Kimberly A.; Mckay, Tim D.; Obrien, Kevin M.; Rudisill, Marianne

    1991-01-01

    Display format and highlight validity have been shown to affect visual display search performance; however, those studies were conducted on small, artificial displays of alphanumeric stimuli. A study manipulating these variables was conducted using realistic, complex Space Shuttle information displays. A 2x2x3 within-subjects analysis of variance found that search times were faster for items in reformatted displays than in current displays. Responses to valid applications of highlighting were significantly faster than responses to non- or invalidly highlighted applications. The significant format by highlight validity interaction showed that there was little difference in response time between current and reformatted displays when highlighting was validly applied; however, under the non- or invalid highlight conditions, search times were faster with reformatted displays. A separate within-subjects analysis of variance of display format, highlight validity, and several highlight methods did not reveal a main effect of highlight method. In addition, observed display search times were compared to search times predicted by Tullis' Display Analysis Program. The benefits of highlighting and reformatting displays to enhance search, and the necessity of considering highlight validity and format characteristics in tandem when predicting search performance, are discussed.

  19. Validated Method for Measuring Functional Range of Motion in Patients With Ankle Arthritis.

    PubMed

    Thornton, James; Sabah, Shiraz; Segaren, Neil; Cullen, Nicholas; Singh, Dishan; Goldberg, Andrew

    2016-08-01

    Total range of motion between the tibia and the floor is an important outcome measure following ankle surgery. However, there is wide variation in its measurement, from clinical evaluation to radiographic metrics and gait analysis. The purpose of this study was to present and validate a simple, standardized technique for measuring functional total range of motion between the tibia and the floor using a digital goniometer. Institutional review board approval was obtained. Forty-six ankles from 33 participants were recruited into 2 groups. Group 1 (healthy controls) comprised 20 ankles from 10 participants, none of whom had any musculoskeletal or neurologic pathology. Group 2 (ankle osteoarthritis) comprised 25 ankles from 23 patients whose ankle pathology had been treated with ankle arthrodesis (n = 5), total ankle replacement (n = 6), or nonoperative treatment (n = 14). Measurement was performed by 2 testers according to a standardized protocol developed for the Total Ankle Replacement Versus Arthrodesis (TARVA) randomized controlled trial. Intra- and interrater reliability was calculated using intraclass correlation coefficients (ICCs). Group 1 (healthy controls): the median difference for all measurements within an observer was 1.5 (interquartile range [IQR] 0.7-2.5) degrees, and the ICCs for inter- and intrarater total ankle range of motion were excellent: 0.95 (95% confidence interval [CI] 0.91-0.97, P < .001) and 0.942 (95% CI 0.859-0.977, P < .001), respectively. Group 2 (ankle osteoarthritis): the median difference for all measurements within an observer was 0.6 (IQR 0.2-1.3) degrees, and the ICCs for inter- and intrarater total ankle range of motion were excellent: 0.99 (95% CI 0.97-1.0, P < .001) and 0.99 (95% CI 0.96-1.0, P < .001), respectively. This technique provided a reliable, standardized method for measuring total functional range of motion between the tibia and the floor. The technique required no special equipment or
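
    An inter-rater agreement statistic of the kind reported above can be computed as follows. The sketch assumes the Shrout-Fleiss ICC(2,1) form (two-way random effects, absolute agreement, single rater), which may differ from the study's exact choice, and the range-of-motion readings are invented for illustration.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n subjects x k raters) matrix.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    # Two-way ANOVA mean squares: rows (subjects), columns (raters), error.
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    ss_err = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical total range-of-motion readings (degrees) from two testers
# on ten ankles (illustrative numbers only).
tester1 = np.array([52.1, 48.3, 60.5, 45.0, 55.2, 50.7, 58.1, 47.9, 53.3, 49.6])
tester2 = np.array([51.4, 49.0, 59.8, 45.6, 54.9, 51.2, 57.5, 48.4, 52.8, 50.1])
print(f"inter-rater ICC(2,1): {icc_2_1(np.column_stack([tester1, tester2])):.3f}")
```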

  20. Novel validated spectrofluorimetric methods for the determination of taurine in energy drinks and human urine.

    PubMed

    Sharaf El Din, M K; Wahba, M E K

    2015-03-01

    Two sensitive, selective, economical and validated spectrofluorimetric methods were developed for the determination of taurine in energy drinks and spiked human urine. Method I is based on fluorimetric determination of the amino acid through its reaction with Hantzsch reagent to form a highly fluorescent product measured at 490 nm after excitation at 419 nm. Method II is based on the reaction of taurine with tetracyanoethylene, yielding a fluorescent charge transfer complex measured at λex/λem of 360 nm/450 nm. The proposed methods were subjected to detailed validation procedures and were statistically compared with the reference method, with the results obtained in good agreement. Method I was further applied to determine taurine in energy drinks and spiked human urine, giving promising results. Moreover, the stoichiometry of the reactions was studied, and reaction mechanisms were postulated. Copyright © 2014 John Wiley & Sons, Ltd.

  1. Validation of a method for the analysis of biogenic amines: histamine instability during wine sample storage.

    PubMed

    Bach, Benoit; Le Quere, Stephanie; Vuchot, Patrick; Grinbaum, Magali; Barnavon, Laurent

    2012-06-30

    This paper reports on the development of an optimized method for the simultaneous analysis of eight biogenic amines (histamine, methylamine, ethylamine, tyramine, putrescine, cadaverine, phenethylamine, and isoamylamine). The analytical method offers the following advantages: easy derivatization of the wine, quantification of the biogenic amines, and complete degradation of the excess derivatization reagent during sample preparation in order to preserve the column. It consists of reversed-phase HPLC separation and UV-vis detection of the aminoenones formed by the reaction of the amino compounds with the derivatization reagent diethyl ethoxymethylenemalonate (DEEMM). The usefulness of this technique was confirmed following the protocol for validation, quality control and uncertainty assessment of alternative oenological analytical methods (OIV Oeno 10/2005). The method was validated and proposed as a reference method to the International Organization of Vine and Wine (OIV). As a specific application of the proposed method, the biogenic amine content of Rhône valley wines was investigated.

  2. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software.

    PubMed

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2015-05-01

    Several sophisticated methods of footprint analysis currently exist. However, it is sometimes useful to apply standard, well-evidenced measurement methods that are quick and easy to use. We sought to assess the reliability and validity of a new method of footprint assessment in a healthy population using Photoshop CS5 software (Adobe Systems Inc, San Jose, California). Forty-two footprints, corresponding to 21 healthy individuals (11 men with a mean ± SD age of 20.45 ± 2.16 years and 10 women with a mean ± SD age of 20.00 ± 1.70 years), were analyzed. Footprints were recorded in a static bipedal standing position using optical podography and digital photography. Three trials were performed for each participant. The Hernández-Corvo, Chippaux-Smirak, and Staheli indices and the Clarke angle were calculated by a manual method and by a computerized method using Photoshop CS5 software. Test-retest was used to determine reliability. Validity was assessed by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed high values (ICC, 0.98-0.99). Moreover, the validity test clearly showed no difference between techniques (ICC, 0.99-1). The reliability and validity of a method to measure, assess, and record the podometric indices using Photoshop CS5 software have been demonstrated. This provides a quick and accurate tool useful for the digital recording of morphostatic foot study parameters and their control.

  3. Validated high-performance liquid chromatographic method for quantitation of neohesperidine dihydrochalcone in foodstuffs.

    PubMed

    Montijano, H; Borrego, F; Canales, I; Tomás-Barberán, F A

    1997-01-10

    An analytical method to detect and quantitate neohesperidine dihydrochalcone in foodstuffs has been developed and validated in soft-drink applications. The method was shown to be sufficiently precise, accurate, selective and rugged in quantitating neohesperidine DC both at flavouring (1-5 mg/kg) and sweetening (5-50 mg/kg) levels. Application of the method to determine neohesperidine DC in foodstuffs other than soft drinks is also described.

  4. Skeletal age estimation for forensic purposes: A comparison of GP, TW2 and TW3 methods on an Italian sample.

    PubMed

    Pinchi, Vilma; De Luca, Federica; Ricciardi, Federico; Focardi, Martina; Piredda, Valentina; Mazzeo, Elena; Norelli, Gian-Aristide

    2014-05-01

    Paediatricians, radiologists, anthropologists and medico-legal specialists are often called as experts in order to provide age estimation (AE) for forensic purposes. The literature recommends performing X-rays of the left hand and wrist (HW-XR) for skeletal age estimation. The method most frequently employed is the Greulich and Pyle (GP) method. In addition, so-called bone-specific techniques are also applied, including the method of Tanner and Whitehouse (TW) in its latest versions, TW2 and TW3. The aim was to compare skeletal age and chronological age in a large sample of children and adolescents using the GP, TW2 and TW3 methods in order to establish which of these is the most reliable for forensic purposes. The sample consisted of 307 HW-XRs of Italian children or adolescents, 145 females and 162 males, aged between 6 and 20 years. The radiographs were scored according to the GP, TW2RUS and TW3RUS methods by one investigator. The reliability of the results was assessed using the intraclass correlation coefficient. The Wilcoxon signed-rank test and Student t-test were performed to search for significant differences between skeletal and chronological ages. The distributions of the differences between estimated and chronological age, shown by means of boxplots, indicate that the median differences for the TW3 and GP methods are generally very close to 0. Hypothesis tests were performed, by sex, both for the entire group of individuals and for individuals grouped by age. Results show no significant differences between estimated and chronological age for TW3 and, to a lesser extent, GP. TW2 proved to be the worst of the three methods. Our results support the conclusion that the TW2 method is not reliable for AE for forensic purposes. The GP and TW3 methods proved to be reliable in males. For females, the best method was found to be TW3. When performing forensic age estimation in subjects around 14 years of age, it could be advisable to use and associate the TW3 and GP methods.

  5. Interlaboratory Validation of the Leaching Environmental Assessment Framework (LEAF) Method 1313 and Method 1316

    EPA Science Inventory

    This document summarizes the results of an interlaboratory study conducted to generate precision estimates for two parallel batch leaching methods which are part of the Leaching Environmental Assessment Framework (LEAF). These methods are: (1) Method 1313: Liquid-Solid Partition...

  7. Validity and Feasibility of a Digital Diet Estimation Method for Use with Preschool Children: A Pilot Study

    ERIC Educational Resources Information Center

    Nicklas, Theresa A.; O'Neil, Carol E.; Stuff, Janice; Goodell, Lora Suzanne; Liu, Yan; Martin, Corby K.

    2012-01-01

    Objective: The goal of the study was to assess the validity and feasibility of a digital diet estimation method for use with preschool children in "Head Start." Methods: Preschool children and their caregivers participated in validation (n = 22) and feasibility (n = 24) pilot studies. Validity was determined in the metabolic research unit using…

  9. The development, content validity and inter-rater reliability of the SMART-Goal Evaluation Method: A standardised method for evaluating clinical goals.

    PubMed

    Bowman, Julia; Mogensen, Lise; Marsland, Elisabeth; Lannin, Natasha

    2015-12-01

    Goal setting is a complex skill. The use of formal goal writing procedures (including the use of the SMART goal model) has been advocated. However, a standardised method of writing and evaluating SMART goals is currently lacking. This study comprised two phases. The aims of phase one were to (i) develop the SMART Goal Evaluation Method (SMART-GEM) based on a SMART goal model; and (ii) investigate the content validity of the SMART-GEM. The aim of phase two was to test the inter-rater reliability of the SMART-GEM. Development of the SMART-GEM involved defining and constructing evaluation criteria suitable for auditing goal statements. A content validity assessment was conducted using an expert panel of Occupational Therapists (n = 10). Inter-rater reliability of the SMART-GEM was examined using a purposive sample of multiple raters (n = 24). The SMART-GEM was rated as having good content validity (individual item CVIs ranged from 0.90 to 1.00; total SMART-GEM CVI = 0.99, p = 0.05). Agreement between raters on individual items ranged from poor (κ = 0.254) to excellent (κ = 0.965), and agreement on overall grades was fair to good (κ = 0.582). Inter-rater agreement on total scores was found to be very good (ICC = 0.895, 95% CI = 0.743 to 0.986, p = 0.001) with excellent internal consistency (α = 0.995). The SMART-GEM demonstrated good content validity and very good inter-rater reliability on total scores, and shows promise as a standardised method for writing and evaluating clinical goals. © 2015 Occupational Therapy Australia.
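
    Since the abstract leans on κ statistics, a minimal sketch of Cohen's kappa for two raters may be useful; the grade labels and ratings below are invented for illustration and are not the SMART-GEM data.

      import numpy as np

      def cohens_kappa(r1, r2, categories):
          """Cohen's kappa for two raters assigning items to nominal categories."""
          idx = {c: i for i, c in enumerate(categories)}
          table = np.zeros((len(categories), len(categories)))
          for a, b in zip(r1, r2):
              table[idx[a], idx[b]] += 1
          n = table.sum()
          po = np.trace(table) / n                               # observed agreement
          pe = (table.sum(axis=1) @ table.sum(axis=0)) / n ** 2  # chance agreement
          return (po - pe) / (1 - pe)

      # Hypothetical overall grades assigned to 8 goal statements by two raters.
      rater1 = ["met", "met", "partial", "unmet", "met", "partial", "unmet", "met"]
      rater2 = ["met", "partial", "partial", "unmet", "met", "met", "unmet", "met"]
      print(round(cohens_kappa(rater1, rater2, ["met", "partial", "unmet"]), 3))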

  10. Best Practices in Stability Indicating Method Development and Validation for Non-clinical Dose Formulations.

    PubMed

    Henry, Teresa R; Penn, Lara D; Conerty, Jason R; Wright, Francesca E; Gorman, Gregory; Pack, Brian W

    2016-11-01

    Non-clinical dose formulations (also known as pre-clinical or GLP formulations) play a key role in early drug development. These formulations are used to introduce active pharmaceutical ingredients (APIs) into test organisms for both pharmacokinetic and toxicological studies. Since these studies are ultimately used to support dose and safety ranges in human studies, it is important not only to understand the concentration and PK/PD of the active ingredient but also to generate safety data for likely process impurities and degradation products of the active ingredient. As such, many in the industry have chosen to develop and validate methods which can accurately detect and quantify the active ingredient along with impurities and degradation products. Such methods often provide trendable results which are predictive of stability, thus leading to the name: stability-indicating methods. This document provides an overview of best practices for those choosing to include development and validation of such methods as part of their non-clinical drug development program. It is intended to support teams who are either new to stability-indicating method development and validation or who are less familiar with the requirements of validation due to their position within the product development life cycle.

  11. Validation of three new methods for determination of metal emissions using a modified Environmental Protection Agency Method 301.

    PubMed

    Yanca, Catherine A; Barth, Douglas C; Petterson, Krag A; Nakanishi, Michael P; Cooper, John A; Johnsen, Bruce E; Lambert, Richard H; Bivins, Daniel G

    2006-12-01

    Three new methods applicable to the determination of hazardous metal concentrations in stationary source emissions were developed and evaluated for use in U.S. Environmental Protection Agency (EPA) compliance applications. Two of the three independent methods, a continuous emissions monitor-based method (Xact) and an X-ray-based filter method (XFM), are used to measure metal emissions. The third method involves a quantitative aerosol generator (QAG), which produces a reference aerosol used to evaluate the measurement methods. A modification of EPA Method 301 was used to validate the three methods for As, Cd, Cr, Pb, and Hg, representing three hazardous waste combustor Maximum Achievable Control Technology (MACT) metal categories (low volatile, semivolatile, and volatile). The modified procedure tested the methods using more stringent criteria than EPA Method 301; these criteria included accuracy, precision, and linearity. The aerosol generation method was evaluated in the laboratory by comparing actual with theoretical aerosol concentrations. The measurement methods were evaluated at a hazardous waste combustor (HWC) by comparing measured with reference aerosol concentrations. The QAG, Xact, and XFM met the modified Method 301 validation criteria. All three of the methods demonstrated precisions and accuracies on the order of 5%. In addition, correlation coefficients for each method were on the order of 0.99, confirming the methods' linear response and high precision over a wide range of concentrations. The measurement methods should be applicable to emissions from a wide range of sources, and the reference aerosol generator should be applicable to additional analytes. EPA recently approved an alternative monitoring petition for an HWC at Eli Lilly's Tippecanoe site in Lafayette, IN, in which the Xact is used for demonstrating compliance with the HWC MACT metal emissions (low volatile, semivolatile, and volatile). The QAG reference aerosol generator was approved as

  12. Validation of a screening method for rapid control of macrocyclic lactone mycotoxins in maize flour samples.

    PubMed

    Zougagh, Mohammed; Téllez, Helena; Sánchez, Alberto; Chicharro, Manuel; Ríos, Angel

    2008-05-01

    A procedure for the analytical validation of a rapid supercritical fluid extraction amperometric screening method for controlling macrocyclic lactone mycotoxins in maize flour samples has been developed. The limit established by European legislation (0.2 mg kg(-1)) for the mycotoxin zearalenone (ZON) was taken as the reference threshold to validate the proposed method. Natural ZON metabolites were also included in this study to characterize the final screening method. The objective was the reliable classification of samples as positive or negative. The cut-off level was fixed at a global mycotoxin concentration of 0.17 mg kg(-1). An expanded unreliability zone between 0.16 and 0.23 mg kg(-1) characterized the screening method for classifying the samples. A set of 30 samples was used for the final demonstration of the reliability and usefulness of the method.
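
    As a minimal sketch of how a cut-off with an unreliability zone can be operationalized in software (the decision rule is an assumption; only the numeric limits come from the study):

      def screen(conc_mg_per_kg, zone=(0.16, 0.23)):
          """Classify a sample from its global mycotoxin concentration (mg/kg).

          Below the unreliability zone -> negative; above it -> positive;
          inside it -> inconclusive (confirm with a quantitative method).
          """
          lo, hi = zone
          if conc_mg_per_kg < lo:
              return "negative"
          if conc_mg_per_kg > hi:
              return "positive"
          return "inconclusive"

      print(screen(0.05), screen(0.20), screen(0.30))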

  13. UV/VIS spectrophotometric methods for determination of caffeine and phenylephrine hydrochloride in complex pharmaceutical preparations. Validation of the methods.

    PubMed

    Muszalska, I; Zajac, M; Wróbel, G; Nogowska, M

    2000-01-01

    The contents of active substances were determined in the preparation TP-4 (tablets) containing paracetamol, ascorbic acid, caffeine and phenylephrine hydrochloride. For determination of caffeine and phenylephrine hydrochloride, a UV/VIS spectrophotometric method was used. The VIS spectrophotometric method was based on the reaction of phenylephrine with ninhydrin in sulphuric acid (1.127 kg/l). Validation of the methods, performed on model mixtures, proved that the methods were accurate, precise, repeatable and linear in the range from 50% to 150% of the amount declared in the preparation. The content of caffeine and phenylephrine hydrochloride in TP-4, Thompyrin, Panadol Extra and Ring N satisfies the FP V requirements.

  14. Reference method for detection of Pgp mediated multidrug resistance in human hematological malignancies: a method validated by the laboratories of the French Drug Resistance Network.

    PubMed

    Huet, S; Marie, J P; Gualde, N; Robert, J

    1998-12-15

    Multidrug resistance (MDR) associated with overexpression of the MDR1 gene and of its product, P-glycoprotein (Pgp), plays an important role in limiting cancer treatment efficacy. Many studies have investigated Pgp expression in clinical samples of hematological malignancies but have failed to give a definitive conclusion on its usefulness. One convenient method for fluorescent detection of Pgp in malignant cells is flow cytometry, which, however, gives variable results from one laboratory to another, partly due to the lack of a rigorously tested reference method. The purpose of this technical note is to describe each step of a reference flow cytometric method. The guidelines for sample handling, staining and analysis have been established both for Pgp detection with monoclonal antibodies directed against extracellular epitopes (MRK16, UIC2 and 4E3), and for Pgp functional activity measurement with Rhodamine 123 as a fluorescent probe. Both methods have been validated on cultured cell lines and clinical samples by 12 laboratories of the French Drug Resistance Network. This cross-validated multicentric study points out steps crucial for the accuracy and reproducibility of the results, such as cell viability, data analysis and expression.

  15. Validated spectrofluorimetric method for the determination of clonazepam in pharmaceutical preparations.

    PubMed

    Ibrahim, Fawzia; El-Enany, Nahed; Shalan, Shereen; Elsharawy, Rasha

    2016-05-01

    A simple, highly sensitive and validated spectrofluorimetric method was applied in the determination of clonazepam (CLZ). The method is based on reduction of the nitro group of clonazepam with zinc/CaCl2; the product is then reacted with 2-cyanoacetamide (2-CNA) in the presence of ammonia (25%), yielding a highly fluorescent product. The produced fluorophore exhibits strong fluorescence intensity at λem = 383 nm after excitation at λex = 333 nm. The method was rectilinear over a concentration range of 0.1-0.5 ng/mL with a limit of detection (LOD) of 0.0057 ng/mL and a limit of quantification (LOQ) of 0.017 ng/mL. The method was fully validated according to ICH guidelines and successfully applied to the determination of CLZ in its tablets with a mean percentage recovery of 100.10 ± 0.75%. Statistical analysis comparing the results obtained using the proposed method with those obtained using a reference method showed no significant difference between the two methods in terms of accuracy and precision.

  16. Validating methods for estimating endocranial volume in individual red deer (Cervus elaphus).

    PubMed

    Logan, Corina J; Clutton-Brock, Tim H

    2013-01-01

    Comparing brain sizes is a key method in comparative cognition and evolution. Brain sizes are commonly validated by interspecific comparisons involving animals of varying size, which does not provide a realistic index of their accuracy for intraspecific comparisons. Intraspecific validation of methods for measuring brain size should include animals of the same age and sex to ensure that individual differences can be detected in animals of similar size. In this study we compare three methods of measuring the endocranial volume of 33 red deer skulls to investigate the accuracy of each method. The methods were: scanning each skull using computerised tomography (CT) and quantifying the volume with OsiriX software; filling the cranium with glass beads and measuring the bead volume; and taking linear measurements (length, width, and height) of the cranium using callipers. CT scan volumes were highly correlated with results from the bead method, but only moderately correlated with the linear method. This study illustrates the importance of validating intraspecific measurement methods, which allows for the accurate interpretation of results. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Developing and validating the Youth Conduct Problems Scale-Rwanda: a mixed methods approach.

    PubMed

    Ng, Lauren C; Kanyanganzi, Frederick; Munyanah, Morris; Mushashi, Christine; Betancourt, Theresa S

    2014-01-01

    This study developed and validated the Youth Conduct Problems Scale-Rwanda (YCPS-R). Qualitative free listing (n = 74) and key informant interviews (n = 47) identified local conduct problems, which were compared to existing standardized conduct problem scales and used to develop the YCPS-R. The YCPS-R was cognitively tested by 12 youth and caregiver participants, and assessed for test-retest and inter-rater reliability in a sample of 64 youth. Finally, a purposive sample of 389 youth and their caregivers were enrolled in a validity study. Validity was assessed by comparing YCPS-R scores to conduct disorder, which was diagnosed with the Mini International Neuropsychiatric Interview for Children, and to functional impairment scores on the World Health Organization Disability Assessment Schedule Child Version. ROC analyses assessed the YCPS-R's ability to discriminate between youth with and without conduct disorder. Qualitative data identified a local presentation of youth conduct problems that did not match previously standardized measures. Therefore, the YCPS-R was developed solely from local conduct problems. Cognitive testing indicated that the YCPS-R was understandable and required little modification. The YCPS-R demonstrated good reliability, construct, criterion, and discriminant validity, and fair classification accuracy. The YCPS-R is a locally-derived measure of Rwandan youth conduct problems that demonstrated good psychometric properties and could be used for further research.
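
    ROC analysis of a screening scale against a diagnostic criterion is easy to reproduce. The sketch below uses scikit-learn with invented scale scores and diagnoses (not the YCPS-R data) and picks a cut-off by Youden's J.

      import numpy as np
      from sklearn.metrics import roc_auc_score, roc_curve

      # Hypothetical data: 1 = conduct disorder on the diagnostic interview.
      diagnosis = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1])
      scale_score = np.array([3, 5, 2, 9, 4, 8, 7, 6, 10, 1, 5, 6])

      auc = roc_auc_score(diagnosis, scale_score)
      fpr, tpr, thresholds = roc_curve(diagnosis, scale_score)
      j = tpr - fpr                  # Youden's J = sensitivity + specificity - 1
      cutoff = thresholds[np.argmax(j)]
      print(f"AUC = {auc:.2f}, suggested cut-off = {cutoff}")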

  18. Determining an appropriate method for the purpose of land allocation for ecotourism development (case study: Taleghan County, Iran).

    PubMed

    Aliani, H; Kafaky, S Babaie; Saffari, A; Monavari, S M

    2016-11-01

    Appropriate management and planning of suitable areas for the development of ecotourism activities can play an important role in ensuring proper use of the environment. Due to the complexity of nature, applying different tools and models, particularly multi-criteria methods, can be useful in order to achieve these goals. In this study, weighted linear combination (WLC) within a geographical information system (GIS), fuzzy logic, and the analytical network process (ANP) were used to identify suitable areas (land allocation) for ecotourism activities in Taleghan County. To compare the applicability of each of these methods in achieving the goal, the results were compared with the previous model presented by Makhdoum. The results showed that the WLC and ANP methods are more efficient than the Makhdoum model in allocating lands for recreational areas and ecotourism purposes, since the combined use of fuzzy logic and ANP for ranking and weighting the criteria provides more flexible and logical conditions. Furthermore, this approach makes it possible to involve ecological, economic, and social criteria simultaneously in the evaluation process in order to allocate land for ecotourism purposes.
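
    A weighted linear combination over standardized criterion layers reduces to a few lines of numpy. The sketch below assumes three hypothetical raster criteria already rescaled to 0-1 fuzzy suitability scores and ANP-style weights summing to 1; the layer names and weights are illustrative only.

      import numpy as np

      rng = np.random.default_rng(0)
      # Three hypothetical criterion rasters (e.g., slope, vegetation, access),
      # each already rescaled to 0-1 fuzzy suitability scores.
      layers = rng.random((3, 100, 100))
      weights = np.array([0.5, 0.3, 0.2])     # e.g., ANP-derived priorities

      # WLC: suitability = sum_i weight_i * layer_i, computed cell by cell.
      suitability = np.tensordot(weights, layers, axes=1)
      print(suitability.shape, float(suitability.min()), float(suitability.max()))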

  19. Validation of a three-dimensional hand scanning and dimension extraction method with dimension data.

    PubMed

    Li, Zhizhong; Chang, Chien-Chi; Dempsey, Patrick G; Ouyang, Lusha; Duan, Jiyang

    2008-11-01

    A three-level experiment was developed to validate a 3-D hand scanning and dimension extraction method against dimension data. At the first level, a resin hand model of a participant was fabricated to test the repeatability of the dimension data obtained by the 3-D method. At the second level, the actual hand of that participant was measured repeatedly using both the 3-D method and the traditional manual measurement method. The repeatability of both methods was investigated and compared. The influence of posture keeping, surface deformation and other human factors was also examined at the second level. At the third level, a group of participants was recruited and their hands were measured using both methods to examine any differences between the two methods in descriptive statistics. Significant differences, which varied among dimension types (length, depth/breadth, and circumference), were found between the 3-D method and the traditional method. 3-D anthropometric measurement and dimension extraction has become a promising technology. The proposed three-level experiment provides a systematic method for validating the repeatability of a 3-D method and the compatibility between dimension data from a 3-D method and a traditional method.

  20. Factor analysis methods and validity evidence: A systematic review of instrument development across the continuum of medical education

    NASA Astrophysics Data System (ADS)

    Wetzel, Angela Payne

    Previous systematic reviews indicate a lack of reporting of reliability and validity evidence in subsets of the medical education literature. Psychology and general education reviews of factor analysis also indicate gaps between current and best practices; yet, a comprehensive review of exploratory factor analysis in instrument development across the continuum of medical education had not been previously identified. Therefore, the purpose of this study was a critical review of instrument development articles employing exploratory factor or principal component analysis published in medical education (2006-2010) to describe and assess the reporting of methods and validity evidence based on the Standards for Educational and Psychological Testing and factor analysis best practices. Data extraction from 64 articles measuring a variety of constructs published throughout the peer-reviewed medical education literature indicates significant errors in the translation of exploratory factor analysis best practices to current practice. Further, techniques for establishing validity evidence tend to derive from a limited scope of methods, including reliability statistics to support internal structure and support for test content. Instruments reviewed for this study lacked supporting evidence based on relationships with other variables and response process, and evidence based on consequences of testing was not evident. Findings suggest a need for further professional development within the medical education researcher community related to (1) appropriate factor analysis methodology and reporting and (2) the importance of pursuing multiple sources of reliability and validity evidence to construct a well-supported argument for the inferences made from the instrument. Medical education researchers and educators should be cautious in adopting instruments from the literature and carefully review available evidence. Finally, editors and reviewers are encouraged to recognize

  1. 40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... Procedure for EPA Waste and Wastewater Methods 1. Applicability This procedure is to be applied exclusively.... For the purposes of this appendix, “waste” means waste and wastewater. 2. Procedure This...

  2. 40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... Procedure for EPA Waste and Wastewater Methods 1. Applicability This procedure is to be applied exclusively.... For the purposes of this appendix, “waste” means waste and wastewater. 2. Procedure This...

  3. 40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... Procedure for EPA Waste and Wastewater Methods 1. Applicability This procedure is to be applied exclusively.... For the purposes of this appendix, “waste” means waste and wastewater. 2. Procedure This...

  4. 40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... Procedure for EPA Waste and Wastewater Methods 1. Applicability This procedure is to be applied exclusively.... For the purposes of this appendix, “waste” means waste and wastewater. 2. Procedure This...

  5. Development and Validation of a Set of Palliative Medicine Entrustable Professional Activities: Findings from a Mixed Methods Study.

    PubMed

    Myers, Jeff; Krueger, Paul; Webster, Fiona; Downar, James; Herx, Leonie; Jeney, Christa; Oneschuk, Doreen; Schroder, Cori; Sirianni, Giovanna; Seccareccia, Dori; Tucker, Tara; Taniguchi, Alan

    2015-08-01

    Entrustable professional activities (EPAs) are routine tasks considered essential to a professional practice. An EPA can serve as a performance-based outcome that a clinical supervisor would progressively entrust a learner to perform. Our aim was to identify, develop, and validate a set of EPAs for the palliative medicine discipline. The design was a sequential qualitative and quantitative mixed methods study. A working group was convened to develop a set of EPAs. Focus groups and surveys were used for validation purposes. Palliative medicine educators and content experts from across Canada participated in both the working group as well as the focus groups. Attendees of the 2014 Canadian Society of Palliative Care Physicians (CSPCP) annual conference completed surveys. A questionnaire was used to collect survey participant sociodemographic, clinical, and academic information along with ratings of the importance of the EPAs individually and collectively. Cronbach's alpha examined internal consistency of the set of EPAs. Focus group participants strongly endorsed the 12 EPAs. Virtually all survey participants rated the individual EPAs as being "fairly/very important" (range 94% to 100%). Of the participants, 97% agreed that residents able to perform the set of EPAs would be practicing palliative medicine and 87% indicated strong agreement that this collective set of EPAs captures activities that all palliative medicine physicians must be able to perform. A Cronbach's alpha of 0.841 confirmed good internal consistency. Near uniform agreement from a national group of palliative medicine physicians provides strong validation for the set of 12 EPAs.
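
    The internal-consistency check reported above (Cronbach's alpha) is a short computation once the item variances are in hand; the survey ratings below are hypothetical, not the CSPCP data.

      import numpy as np

      def cronbach_alpha(items):
          """Cronbach's alpha. items: (n_respondents, k_items) array of ratings."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1)
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

      # Hypothetical ratings: 6 survey respondents x 4 EPA items on a 1-5 scale.
      ratings = [[5, 4, 5, 4], [4, 4, 5, 5], [5, 5, 5, 4],
                 [3, 4, 4, 4], [5, 5, 4, 5], [4, 3, 4, 4]]
      print(round(cronbach_alpha(ratings), 3))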

  6. Analytical method development for the determination of eight biocides in various environmental compartments and application for monitoring purposes.

    PubMed

    Wluka, Ann-Kathrin; Rüdel, Heinz; Pohl, Korinna; Schwarzbauer, Jan

    2016-11-01

    The main objective of this study was the development of simple multi-parameter methods for the analysis of biocides in various environmental matrices (wastewater, surface water, and sewage sludge) for measurement and monitoring activities. Eight target substances (triclosan, methyltriclosan (a transformation product of triclosan), cybutryne (Irgarol), and the azole fungicides propiconazole, tebuconazole, imazalil, thiabendazole, and cyproconazole) were chosen for determination in selected sample sets. For surface water and wastewater samples a solid-phase extraction (SPE) method was developed, and for sewage sludge samples an accelerated solvent extraction (ASE) method. The extracts were analyzed by gas chromatography-mass spectrometry (GC/MS), and the analytical methods were checked to ensure sufficient sensitivity by comparing the limits of quantification (LOQs) to the predicted no effect concentrations (PNECs) of the selected biocides. For quality control, recovery rates were determined. Finally, the developed methods were checked and validated by application to sample material from various matrices. Sampling took place in seven urban wastewater treatment plants and their corresponding receiving waters. The results revealed that the developed extraction methods are effective and simple and allow the determination of a broad range of biocides in various environmental compartments.

  7. Using non-linear methods to investigate the criterion validity of traffic-psychological test batteries.

    PubMed

    Risser, R; Chaloupka, Ch; Grundler, W; Sommer, M; Häusler, J; Kaufmann, C

    2008-01-01

    In several countries in Europe (among others, Germany and Austria), persons who have lost their driver's licence have to undergo a psychological test in order to regain their licence. The article discusses the validity of two test batteries of the Expert System Traffic using standardized driving tests [Schuhfried, G., 2005. Manual Expert System Traffic (XPSV). Schuhfried GmbH, Mödling]. A global evaluation of the respondents' performance in a standardized driving test was used as the criterion measure to classify drivers as relatively safe or unsafe. Artificial neural networks were used to calculate the criterion validity. This yielded superior classification rates and validity coefficients compared to classical multivariate methods such as logistic regression. The stability and generalizability of the results were empirically demonstrated using a jack-knife validation, an internal bootstrap validation and an independent validation sample which completed the test batteries and the standardized driving test as part of a so-called traffic-psychological assessment, which is compulsory in Austria in all cases where the driver's licence has been withdrawn, e.g., when caught driving under the influence of alcohol. Moreover, the procedure allows calculating incremental validities, which enable the assessment of the relative importance of the individual predictor variables. This contributes to the transparency of the results obtained with artificial neural networks. In summary, the results provide empirical evidence of the validity of the traffic-psychological test batteries used in this study. The practical implications of the results for traffic-psychological assessment are described.
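
    A much-simplified sketch of internal bootstrap validation of a classifier follows, with plain logistic regression standing in for the study's neural networks; the data, sizes and scoring scheme are invented.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      # Hypothetical test-battery scores and criterion (1 = safe in driving test).
      X = rng.normal(size=(120, 5))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=120) > 0).astype(int)

      # Cross-validated classification rate of a classical multivariate method.
      cv_rate = cross_val_score(LogisticRegression(), X, y, cv=5).mean()

      # Internal bootstrap validation: refit on resamples, score on full sample.
      boot_rates = []
      for _ in range(200):
          idx = rng.integers(0, len(y), size=len(y))
          model = LogisticRegression().fit(X[idx], y[idx])
          boot_rates.append(model.score(X, y))
      print(round(cv_rate, 3), round(float(np.mean(boot_rates)), 3))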

  8. Methods for the specification and validation of geolocation accuracy and predicted accuracy

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Carr, Jacqueline

    2017-05-01

    The specification of geolocation accuracy requirements and their validation is essential for the proper performance of a Geolocation System and for trust in the resultant three-dimensional (3D) geolocations. This is also true for predicted accuracy requirements and their validation for a Geolocation System, which assumes that each geolocation produced (extracted) by the system is accompanied by an error covariance matrix that characterizes its specific predicted accuracy. The extracted geolocation and its error covariance matrix are standard outputs of (near) optimal estimators, either associated (internally) with the Geolocation System itself, or with a "downstream" application that inputs a subset of Geolocation System output, such as sensor data/metadata: for example, a set of images and corresponding metadata of the imaging sensor's pose and its predicted accuracy. This output allows for subsequent (near) optimal extraction of geolocations and associated error covariance matrices based on the application's measurements of pixel locations in the images corresponding to objects of interest. This paper presents recommended methods and detailed equations for the specification and validation of both accuracy and predicted accuracy requirements for a general Geolocation System. The specification/validation of accuracy requirements is independent of the specification/validation of predicted accuracy requirements. The methods presented in this paper are theoretically rigorous yet practical.
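
    The paper's own equations are not reproduced here, but one standard consistency check for predicted accuracy can be sketched: each 3D error, normalized by its predicted covariance, is chi-square distributed with 3 degrees of freedom if the covariances are realistic, so their sum can be tested against chi-square bounds. The data below are simulated.

      import numpy as np
      from scipy import stats

      def covariances_consistent(errors, covs, alpha=0.05):
          """Chi-square test that 3D geolocation errors match their predicted
          error covariance matrices. errors: (n, 3); covs: list of (3, 3)."""
          nees = [e @ np.linalg.solve(C, e) for e, C in zip(errors, covs)]
          n = len(nees)
          stat = float(np.sum(nees))       # sum of n chi-square(3) variables
          lo = stats.chi2.ppf(alpha / 2, 3 * n)
          hi = stats.chi2.ppf(1 - alpha / 2, 3 * n)
          return lo <= stat <= hi

      # Hypothetical check: errors drawn from the predicted covariance should pass.
      rng = np.random.default_rng(1)
      C = np.diag([4.0, 4.0, 9.0])         # predicted covariance, metres^2
      errs = rng.multivariate_normal(np.zeros(3), C, size=50)
      print(covariances_consistent(errs, [C] * 50))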

  9. A validation framework for microbial forensic methods based on statistical pattern recognition

    SciTech Connect

    Velsko, S P

    2007-11-12

    This report discusses a general approach to validating microbial forensic methods that attempt to simultaneously distinguish among many hypotheses concerning the manufacture of a questioned biological agent sample. It focuses on the concrete example of determining growth medium from chemical or molecular properties of a bacterial agent to illustrate the concepts involved.

  10. Validation Specimen for Contour Method Extension to Multiple Residual Stress Components

    SciTech Connect

    Pagliaro, Pierluigi; Prime, Michael B; Zuccarello, B; Clausen, Bjorn; Watkins, Thomas R

    2007-01-01

    A new theoretical development of the contour method, which allows the user to measure the three normal residual stress components on cross sections of a generic mechanical part, is presented. To validate this theoretical development, a residual stress test specimen was designed, fabricated and then tested with different experimental techniques.

  11. Validation of Western North America Models based on finite-frequency and ray theory imaging methods

    SciTech Connect

    Larmat, Carene; Maceira, Monica; Porritt, Robert W.; Higdon, David Mitchell; Rowe, Charlotte Anne; Allen, Richard M.

    2015-02-02

    We validate seismic models developed for western North America with a focus on the effect of imaging methods on data fit. We use the DNA09 models, for which our collaborators provide models built with both the body-wave finite-frequency (FF) approach and the ray theory (RT) approach, with the same data selection, processing and reference models.

  12. Alternative method to validate the seasonal land cover regions of the conterminous United States

    Treesearch

    Zhiliang Zhu; Donald O. Ohlen; Raymond L. Czaplewski; Robert E. Burgan

    1996-01-01

    An accuracy assessment method involving double sampling and the multivariate composite estimator has been used to validate the prototype seasonal land cover characteristics database of the conterminous United States. The database consists of 159 land cover classes, classified using time series of 1990 1-km satellite data and augmented with ancillary data including...

  13. An Empirically Based Method of Q-Matrix Validation for the DINA Model: Development and Applications

    ERIC Educational Resources Information Center

    de la Torre, Jimmy

    2008-01-01

    Most model fit analyses in cognitive diagnosis assume that a Q matrix is correct after it has been constructed, without verifying its appropriateness. Consequently, any model misfit attributable to the Q matrix cannot be addressed and remedied. To address this concern, this paper proposes an empirically based method of validating a Q matrix used…

  14. Validity of a Simulation Game as a Method for History Teaching

    ERIC Educational Resources Information Center

    Corbeil, Pierre; Laveault, Dany

    2011-01-01

    The aim of this research is, first, to determine the validity of a simulation game as a method of teaching and an instrument for the development of reasoning and, second, to study the relationship between learning and students' behavior toward games. The participants were college students in a History of International Relations course, with two…

  15. Validated HPTLC Method for Simultaneous Determination of Quinapril Hydrochloride and Hydrochlorothiazide in a Tablet Dosage Form

    PubMed Central

    Bhavar, Girija B.; Chatpalliwar, V. A.; Patil, D. D.; Surana, S. J.

    2008-01-01

    Quinapril hydrochloride and hydrochlorothiazide were simultaneously determined by HPTLC in pharmaceutical formulations. The drugs were separated on silica gel 60 F254 plates using a suitable combination of solvents as the mobile phase. The validation parameters, tested in accordance with the requirements of ICH guidelines, prove the suitability of the method. PMID:20046789

  16. Validation of a new method for quantification of ammonia volatilization from agricultural field plots

    USDA-ARS?s Scientific Manuscript database

    A low-cost method for measuring atmospheric ammonia (NH3) concentration was developed and validated for use in static chambers. This technique utilizes glass tubes coated with oxalic acid to adsorb NH3 from the air. The advantage of this procedure is that it can be used to quantify NH3 emissions from field p...

  17. Assessment of management in general practice: validation of a practice visit method.

    PubMed Central

    van den Hombergh, P; Grol, R; van den Hoogen, H J; van den Bosch, W J

    1998-01-01

    BACKGROUND: Practice management (PM) in general practice is as yet ill-defined; a systematic description of its domain, as well as a valid method to assess it, are necessary for research and assessment. AIM: To develop and validate a method to assess PM of general practitioners (GPs) and practices. METHOD: Relevant and potentially discriminating indicators were selected from a systematic framework of 2410 elements of PM to be used in an assessment method (VIP = visit instrument PM). The method was first tested in a pilot study and, after revision, was evaluated in order to select discriminating indicators and to determine validity of dimensions (factor and reliability analysis, linear regression). RESULTS: One hundred and ten GPs were assessed with the practice visit method using 249 indicators; 208 of these discriminated sufficiently at practice level or at GP level. Factor analysis resulted in 34 dimensions and in a taxonomy of PM. Dimensions and indicators showed marked variation between GPs and practices. Training practices scored higher on five dimensions; single-handed and dispensing practices scored lower on delegated tasks, but higher on accessibility and availability. CONCLUSION: A visit method to assess PM has been developed and its validity studied systematically. The taxonomy and dimensions of PM were in line with other classifications. Selection of a balanced number of useful and relevant indicators was nevertheless difficult. The dimensions could discriminate between groups of GPs and practices, establishing the value of the method for assessment. The VIP method could be an important contribution to the introduction of continuous quality improvement in the profession. PMID:10198481

  18. Validated spectrophotometric methods for the simultaneous determination of telmisartan and atorvastatin in bulk and tablets

    PubMed Central

    Ilango, Kaliappan; Kumar, Pushpangadhan S. Shiji

    2012-01-01

    Aim: Three simple, accurate, and reproducible spectrophotometric methods have been developed and validated for simultaneous estimation of telmisartan (TELM) and atorvastatin (ATV) in a combined tablet dosage form. Materials and Methods: The first method is based on first-order derivative spectroscopy. The sampling wavelengths were 223 nm (zero crossing of TELM), where ATV showed considerable absorbance, and 272 nm (zero crossing of ATV), where TELM showed considerable absorbance. The second method, Q-analysis (absorbance ratio), involves formation of a Q-absorbance equation using the respective absorptivity values at 280.9 nm (isobestic point) and 296.0 nm (λmax of TELM). The third method involves determination using the multicomponent mode method; the sampling wavelengths selected were 296.0 and 246.9 nm. Results: TELM and ATV followed linearity in the concentration ranges of 5–40 and 4–32 μg/ml for method I, and 5–30 μg/ml and 2–24 μg/ml for methods II and III, respectively. Mean recoveries for all three methods were found satisfactory. All methods were validated according to International Conference on Harmonization Q2B guidelines. Conclusion: The developed methods are simple, precise, rugged, and economical. The utility of the methods has been demonstrated by analysis of a commercially available tablet dosage form. PMID:23781490
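
    Simultaneous two-component spectrophotometry ultimately solves Beer's law at two wavelengths, A(λ) = a_TELM(λ)·c_TELM + a_ATV(λ)·c_ATV, as a 2×2 linear system; the absorptivities and absorbances below are invented purely to show the arithmetic.

      import numpy as np

      # Hypothetical absorptivity matrix E[i, j]: component j at wavelength i
      # (arbitrary units); rows correspond to the two sampling wavelengths.
      E = np.array([[0.045, 0.012],
                    [0.008, 0.051]])
      A = np.array([0.62, 0.48])    # measured mixture absorbances (1 cm cell)

      c = np.linalg.solve(E, A)     # Beer-Lambert: A = E @ c
      print(f"c_TELM = {c[0]:.1f}, c_ATV = {c[1]:.1f} (concentration units)")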

  19. Cleaning validation 1: development and validation of a chromatographic method for the detection of traces of LpHse detergent.

    PubMed

    Zayas, José; Colón, Héctor; Garced, Omar; Ramos, Leyda M

    2006-05-03

    A high performance liquid chromatography (HPLC) method for the detection of traces of LpHse (4-tert-amylphenol and 2-phenylphenol) has been developed and validated. The method was shown to be linear in the range from 0.5 to 10.00 ppm in solution. The method was also shown to be accurate, with a recovery of up to 95% by area response for amylphenol and up to 94% by area response for phenylphenol from metal surfaces (4''x4'' un-polished 304 stainless steel plates) by means of swab material. The reproducibility of the method, reported as the pooled relative standard deviation, was determined to be 1.61% by area response and 1.52% by height response for amylphenol, and 5.40% by area response and 13.77% by height response for phenylphenol, from solutions. The developed method was also shown to be rugged by comparisons of different preparations by different analysts. The limit of detection was established to be 0.076 ppm by peak area and 0.079 ppm by peak height for amylphenol, and 0.34 ppm by peak area and 0.82 ppm by peak height for phenylphenol, from solution; and 1.77 ppb by peak area and 1.23 ppm by peak height for amylphenol, and 1.23 ppm by peak area and 1.44 ppm by peak height for phenylphenol, from recovery-from-metal studies. The limit of quantitation was established to be 0.25 ppm by peak area and 0.26 ppm by peak height for amylphenol, and 1.14 ppm by peak area and 2.73 ppm by peak height for phenylphenol, from solution; and 3.89 ppm by peak area and 4.11 ppm by peak height for amylphenol, and 4.11 ppm by peak area and 4.79 ppm by peak height for phenylphenol, from recovery-from-metal-plate studies. This method can be employed to determine the presence of LpHse residues on cleaned equipment where the detergent was used.

  20. Reconstruction of stented coronary arteries from optical coherence tomography images: Feasibility, validation, and repeatability of a segmentation method

    PubMed Central

    Bologna, Marco; Migliori, Susanna; Aurigemma, Cristina; Burzotta, Francesco; Celi, Simona; Dubini, Gabriele; Migliavacca, Francesco; Mainardi, Luca

    2017-01-01

    Optical coherence tomography (OCT) is an established catheter-based imaging modality for the assessment of coronary artery disease and the guidance of stent placement during percutaneous coronary intervention. Manual analysis of large OCT datasets for vessel contours or stent struts detection is time-consuming and unsuitable for real-time applications. In this study, a fully automatic method was developed for detection of both vessel contours and stent struts. The method was applied to in vitro OCT scans of eight stented silicone bifurcation phantoms for validation purposes. The proposed algorithm comprised four main steps, namely pre-processing, lumen border detection, stent strut detection, and three-dimensional point cloud creation. The algorithm was validated against manual segmentation performed by two independent image readers. Linear regression showed good agreement between automatic and manual segmentations in terms of lumen area (r>0.99). No statistically significant differences in the number of detected struts were found between the segmentations. Mean values of similarity indexes were >95% and >85% for the lumen and stent detection, respectively. Stent point clouds of two selected cases, obtained after OCT image processing, were compared to the centerline points of the corresponding stent reconstructions from micro computed tomography, used as ground-truth. Quantitative comparison between the corresponding stent points resulted in median values of ~150 μm and ~40 μm for the total and radial distances of both cases, respectively. The repeatability of the detection method was investigated by calculating the lumen volume and the mean number of detected struts per frame for seven repeated OCT scans of one selected case. Results showed low deviation of values from the median for both analyzed quantities. In conclusion, this study presents a robust automatic method for detection of lumen contours and stent struts from OCT as supported by focused validation

  1. Validity and reliability of an alternative method for measuring power output during six-second all-out cycling.

    PubMed

    Watson, Martin; Bibbo, Daniele; Duffy, Charles R; Riches, Philip E; Conforto, Silvia; Macaluso, Andrea

    2014-08-01

    In a laboratory setting where both a mechanically-braked cycling ergometer and a motion analysis (MA) system are available, flywheel angular displacement can be estimated using MA. The purpose of this investigation was to assess the validity and reliability of an MA method for measuring maximal power output (Pmax) in comparison with a force transducer (FT) method. Eight males and eight females undertook three identical sessions, separated by 4 to 6 days, the first being a familiarization session. Individuals performed three 6-second sprints against 50% of the maximal resistance to complete two pedal revolutions, with a 3-minute rest between trials. Power was determined independently using both MA and FT analyses. Validity: MA recorded significantly higher Pmax than FT (P < .05). Bland-Altman plots showed a systematic bias in the difference between the measures of the two systems; this difference increased as power increased. Repeatability: intraclass correlation coefficients were on average 0.90 ± 0.05 in males and 0.85 ± 0.08 in females. Measuring Pmax by MA is therefore as appropriate for use in exercise physiology research as Pmax measured by FT, provided that the bias between these measurement methods is allowed for.
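
    Bland-Altman bias and 95% limits of agreement, as used above, take only a few lines; the Pmax values below are hypothetical stand-ins for the MA and FT measurements.

      import numpy as np

      def bland_altman(m1, m2):
          """Bias and 95% limits of agreement between two measurement methods."""
          diff = np.asarray(m1, float) - np.asarray(m2, float)
          bias = diff.mean()
          sd = diff.std(ddof=1)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

      # Hypothetical Pmax (W) from motion analysis vs force transducer, 8 riders.
      ma = [812, 790, 905, 630, 1010, 745, 880, 955]
      ft = [798, 775, 880, 624, 975, 730, 860, 925]
      print(bland_altman(ma, ft))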

  2. A multiscale finite element model validation method of composite cable-stayed bridge based on Probability Box theory

    NASA Astrophysics Data System (ADS)

    Zhong, Rumian; Zong, Zhouhong; Niu, Jie; Liu, Qiqi; Zheng, Peijuan

    2016-05-01

    Modeling and simulation are routinely implemented to predict the behavior of complex structures. These tools powerfully unite theoretical foundations, numerical models and experimental data, which include associated uncertainties and errors. A new methodology for multi-scale finite element (FE) model validation is proposed in this paper. The method is based on a two-step updating method, a novel approach to obtain coupling parameters in the gluing sub-regions of a multi-scale FE model, and on Probability Box (P-box) theory, which can provide lower and upper bounds for the purpose of quantifying and transmitting the uncertainty of structural parameters. The structural health monitoring data of Guanhe Bridge, a composite cable-stayed bridge with large span, and Monte Carlo simulation were used to verify the proposed method. The results show satisfactory accuracy: the overlap ratio index of each modal frequency is over 89%, with a small average absolute value of the relative errors, and the CDF of the normal distribution coincides well with the measured frequencies of Guanhe Bridge. The validated multiscale FE model may be further used in structural damage prognosis and safety prognosis.

  3. Development and validation of a novel RP-HPLC method for the analysis of reduced glutathione.

    PubMed

    Sutariya, Vijaykumar; Wehrung, Daniel; Geldenhuys, Werner J

    2012-03-01

    The objective of this study was the development, optimization, and validation of a novel reverse-phase high-pressure liquid chromatography (RP-HPLC) method for the quantification of reduced glutathione in pharmaceutical formulations utilizing simple UV detection. The separation utilized a C18 column at room temperature, and UV absorption was measured at 215 nm. The mobile phase was an isocratic flow of a 50/50 (v/v) mixture of water (pH 7.0) and acetonitrile flowing at 1.0 mL/min. Validation assessed the method's ability in seven categories: linearity, range, limit of detection, limit of quantification, accuracy, precision, and selectivity. Analysis of system suitability showed acceptable levels in all categories. Likewise, the method displayed an acceptable degree of linearity (r(2) = 0.9994) over a concentration range of 2.5-60 µg/mL. The detection limit and quantification limit were 0.6 and 1.8 µg/mL, respectively. The percent recovery of the method was 98.80-100.79%. Following validation, the method was employed in the determination of glutathione in pharmaceutical formulations in the form of a conjugate and a nanoparticle. The proposed method offers a simple, accurate, and inexpensive way to quantify reduced glutathione.
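
    A hedged sketch of the linearity and sensitivity arithmetic behind figures like these, using the ICH residual-standard-deviation approach (LOD = 3.3·σ/S, LOQ = 10·σ/S); the calibration points and QC sample are hypothetical, and the published limits were not necessarily derived this way.

      import numpy as np
      from scipy import stats

      # Hypothetical calibration: glutathione standards (ug/mL) vs peak area.
      conc = np.array([2.5, 5.0, 10.0, 20.0, 40.0, 60.0])
      area = np.array([51.0, 99.0, 204.0, 398.0, 810.0, 1195.0])

      fit = stats.linregress(conc, area)
      pred = fit.intercept + fit.slope * conc
      resid_sd = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))
      lod = 3.3 * resid_sd / fit.slope     # ICH Q2(R1) residual-SD approach
      loq = 10.0 * resid_sd / fit.slope

      spiked_area = 405.0                  # QC sample nominally 20 ug/mL
      found = (spiked_area - fit.intercept) / fit.slope
      print(f"r2={fit.rvalue**2:.4f} LOD={lod:.2f} LOQ={loq:.2f} "
            f"recovery={100 * found / 20:.1f}%")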

  4. Validation of methods used in the Florida Department of Agricultural and Consumer Services' Chemical Residue Laboratory.

    PubMed

    Parker, G A

    1991-01-01

    Very few methods for detecting residues of pesticides in food or agricultural samples have undergone rigorous collaborative study and possess official AOAC status. The Chemical Residue Laboratory has formalized a method validation scheme to use when incorporating or developing new, unofficial methods. These methods are validated by assessing certain performance parameters: scope, specificity, linear range, accuracy, precision, limit of detection (LOD), and limit of quantitation (LOQ). For accuracy and precision assessment, 12 replicate fortifications must yield recoveries within the range of 70-120% with a coefficient of variation (CV) that compares favorably to the Horwitz CV. LOD and LOQ are equivalent to 3 and 10 times, respectively, the background signal contributed by a sample matrix blank. The criterion we use for LOD/LOQ is not universal. In fact, because of differing definitions, we have encountered difficulties in enforcing a tolerance by using a registrant's method. This paper also presents an example of our method validation scheme, using a recent method development project for detecting sulfamethazine in raw milk. The sulfamethazine project also revealed unanticipated personnel problems, underscoring the importance of the human factor in quality assurance.
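
    The laboratory's acceptance rule lends itself to a short sketch: the Horwitz equation predicts CV(%) = 2^(1 - 0.5·log10 C) for an analyte mass fraction C, and recoveries must fall within 70-120%. The replicate recoveries below are hypothetical.

      import math
      import statistics

      def horwitz_cv(mass_fraction):
          """Predicted reproducibility CV (%) from the Horwitz equation."""
          return 2 ** (1 - 0.5 * math.log10(mass_fraction))

      def method_acceptable(recoveries_pct, mass_fraction):
          """All recoveries within 70-120% and CV no worse than the Horwitz CV."""
          cv = 100 * statistics.stdev(recoveries_pct) / statistics.mean(recoveries_pct)
          in_range = all(70 <= r <= 120 for r in recoveries_pct)
          return in_range and cv <= horwitz_cv(mass_fraction)

      # 12 hypothetical replicate recoveries (%) at 0.1 mg/kg (mass fraction 1e-7).
      recoveries = [92, 88, 95, 101, 85, 90, 97, 93, 89, 99, 91, 94]
      print(round(horwitz_cv(1e-7), 1), method_acceptable(recoveries, 1e-7))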

  5. Evaluation Protocol for Review of Method Validation Data by the AOAC Stakeholder Panel on Infant Formula and Adult Nutritionals Expert Review Panel.

    PubMed

    Gill, Brendon D; Indyk, Harvey E; Blake, Christopher J; Konings, Erik J M; Jacobs, Wesley A; Sullivan, Darryl M

    2015-01-01

    Methods under consideration as part of the AOAC Stakeholder Panel on Infant Formula and Adult Nutritionals process are to be evaluated against a set of Standard Method Performance Requirements (SMPRs) via peer review by an expert review panel (ERP). A validation protocol and a checklist have been developed to assist the ERP in evaluating experimental data and comparing multiple candidate methods for each nutrient. Method performance against validation parameters mandated in the SMPRs, as well as additional criteria, is to be scored, with the method selected by the ERP proceeding to a multilaboratory study prior to Final Action approval. These methods are intended to be used by the infant formula industry for the purposes of dispute resolution.

  6. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    PubMed

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It requires only a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.

  7. GC Method Validation for the Analysis of Menthol in Suppository Pharmaceutical Dosage Form.

    PubMed

    Abualhasan, Murad N; Zaid, Abdel Naser; Jaradat, Nidal; Mousa, Ayman

    2017-01-01

    Menthol is widely used as a fragrance and flavor in the food and cosmetic industries. It is also used in the medical and pharmaceutical fields for its various biological effects. Gas chromatography (GC) is considered to be a sensitive method for the analysis of menthol. GC chromatographic separation was developed using a capillary column (VF-624) and a flame ionization detector (FID). The method was validated as per ICH guidelines for various parameters such as precision, linearity, accuracy, solution stability, robustness, limit of detection, and limit of quantification. The tested validation parameters were found to be within acceptable limits. The method was successfully applied for the quantification of menthol in suppository formulations. Quality control departments and official pharmacopeias can use our developed method in the analysis of menthol in pharmaceutical dosage formulations and raw materials.

  8. Determination of phosphine in plant materials: method optimization and validation in interlaboratory comparison tests.

    PubMed

    Amrein, Thomas M; Ringier, Lara; Amstein, Nathalie; Clerc, Laurence; Bernauer, Sabine; Baumgartner, Thomas; Roux, Bernard; Stebler, Thomas; Niederer, Markus

    2014-03-05

    The optimization and validation of a method for the determination of phosphine in plant materials are described. The method is based on headspace sampling over the sample heated in 5% sulfuric acid. Critical factors such as sample amount, equilibration conditions, method of quantitation, and matrix effects are discussed, and validation data are presented. Grinding of coarse samples does not lead to lower results and is a prerequisite for standard addition experiments, which present the most reliable approach for quantitation because of notable matrix effects. Two interlaboratory comparisons showed that results varied considerably and that an uncertainty of measurement of about 50% has to be assessed. Flame photometric and mass spectrometric detection gave similar results. The proposed method is well reproducible within one laboratory, and results from the authors' laboratories using different injection and detection techniques are very close to each other. The considerable variation in the interlaboratory comparison shows that this analysis is still challenging in practice and further proficiency testing is needed.
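
    Quantitation by standard addition, recommended above because of the matrix effects, extrapolates a linear fit of signal versus added analyte back through the intercept; the spiking series below is hypothetical.

      import numpy as np

      # Hypothetical standard-addition series: phosphine added (ng) vs peak area.
      added = np.array([0.0, 5.0, 10.0, 20.0])
      signal = np.array([14.2, 25.1, 35.8, 57.9])

      slope, intercept = np.polyfit(added, signal, 1)
      native = intercept / slope   # amount already present in the unspiked sample
      print(f"native phosphine = {native:.1f} ng")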

  9. [Validation of the HPLC method in the determination of dioxopromethazine and phenylephrine in eye drops].

    PubMed

    Hudecová, T; Hatrík, S; Zimová, N; Havránek, E

    2002-03-01

    The present paper introduces a rapid HPLC method for the determination of dioxopromethazine and phenylephrine in eye drops. The method uses a modified C18 stationary phase optimized for the separation of basic compounds and a methanol/1.5 mM phosphoric acid (60/40 v/v, pH 3.02) mobile phase. The flow rate is 2 ml/min, the sample volume 20 microliters, and compounds are detected at 275 nm. Prior to analysis, the eye drops are diluted with water in a ratio of 1:50. The developed HPLC method and the chromatographic system were validated according to the procedure for the validation of chromatographic systems and methods.

  10. GC Method Validation for the Analysis of Menthol in Suppository Pharmaceutical Dosage Form

    PubMed Central

    Zaid, Abdel Naser; Jaradat, Nidal; Mousa, Ayman

    2017-01-01

    Menthol is widely used as a fragrance and flavor in the food and cosmetic industries. It is also used in the medical and pharmaceutical fields for its various biological effects. Gas chromatography (GC) is considered to be a sensitive method for the analysis of menthol. GC separation was developed using a capillary column (VF-624) and a flame ionization detector (FID). The method was validated as per ICH guidelines for parameters such as precision, linearity, accuracy, solution stability, robustness, limit of detection, and limit of quantification. The tested validation parameters were found to be within acceptable limits. The method was successfully applied to the quantification of menthol in suppository formulations. Quality control departments and official pharmacopeias can use the developed method in the analysis of menthol in pharmaceutical dosage formulations and raw materials. PMID:28367216

  11. Development and validation of stability-indicating HPLC method for determination of cefpirome sulfate.

    PubMed

    Zalewski, Przemysław; Skibiński, Robert; Cielecka-Piontek, Judyta; Bednarek-Rajewska, Katarzyna

    2014-01-01

    A stability-indicating LC assay method was developed and validated for the quantitative determination of cefpirome sulfate (CPS) in the presence of the degradation products formed during forced degradation studies. An isocratic HPLC method was developed with a LiChrospher RP-18 column (5 μm particle size, 125 mm × 4 mm) and 12 mM ammonium acetate-acetonitrile (90:10 v/v) as the mobile phase. The flow rate of the mobile phase was 1.0 mL/min, the detection wavelength 270 nm, and the column temperature 30°C. Cefpirome sulfate, like other cephalosporins, was subjected to stress degradation conditions in aqueous solutions, including hydrolysis, oxidation, photolysis and thermal degradation. The developed method was validated with regard to linearity, accuracy, precision, selectivity and robustness. The method was applied successfully for the identification and determination of cefpirome sulfate in pharmaceuticals and during kinetic studies.

  12. E-Flux2 and SPOT: Validated Methods for Inferring Intracellular Metabolic Flux Distributions from Transcriptomic Data

    PubMed Central

    Kim, Min Kyung; Lane, Anatoliy; Kelley, James J.; Lun, Desmond S.

    2016-01-01

    Background Several methods have been developed to predict system-wide and condition-specific intracellular metabolic fluxes by integrating transcriptomic data with genome-scale metabolic models. While powerful in many settings, existing methods have several shortcomings, and it is unclear which method has the best accuracy in general because of limited validation against experimentally measured intracellular fluxes. Results We present a general optimization strategy for inferring intracellular metabolic flux distributions from transcriptomic data coupled with genome-scale metabolic reconstructions. It consists of two different template models called DC (determined carbon source model) and AC (all possible carbon sources model) and two different new methods called E-Flux2 (E-Flux method combined with minimization of the l2 norm) and SPOT (Simplified Pearson cOrrelation with Transcriptomic data), which can be chosen and combined depending on the availability of knowledge on the carbon source or objective function. This enables us to simulate a broad range of experimental conditions. We examined E. coli and S. cerevisiae as representative prokaryotic and eukaryotic microorganisms, respectively. The predictive accuracy of our algorithm was validated by calculating the uncentered Pearson correlation between predicted and measured fluxes. To this end, we compiled 20 experimental conditions (11 in E. coli and 9 in S. cerevisiae) of transcriptome measurements coupled with corresponding central carbon metabolism intracellular flux measurements determined by 13C metabolic flux analysis (13C-MFA), the largest dataset assembled to date for the purpose of validating inference methods for predicting intracellular fluxes. In both organisms, our method achieves an average correlation coefficient ranging from 0.59 to 0.87, outperforming a representative sample of competing methods. Easy-to-use implementations of E-Flux2 and SPOT are available as part of the open
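
    The validation metric named here, the uncentered Pearson correlation, omits mean-centering and therefore reduces to the cosine similarity of the predicted and measured flux vectors. A minimal sketch with made-up flux values:

```python
import numpy as np

def uncentered_pearson(x, y):
    """Uncentered Pearson correlation: cosine similarity, no mean-centering."""
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

predicted = np.array([1.2, 0.4, 3.3, 0.0, 2.1])  # hypothetical predicted fluxes
measured = np.array([1.0, 0.5, 3.0, 0.2, 2.4])   # hypothetical 13C-MFA fluxes
print(f"r_uncentered = {uncentered_pearson(predicted, measured):.3f}")
```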

  13. Cortisol in Poecilia latipinna: its identification and the validation of methods for its determination in plasma.

    PubMed

    Hargreaves, G; Ball, J N

    1977-09-01

    In vitro studies in which the head kidney of Poecilia latipinna was incubated with labelled precursors have shown that cortisol is the only corticosteroid that could be detected as being produced by this tissue. Cortisol levels have been measured in the plasma of Poecilia latipinna by three methods. The routine use of two rapid and comparatively simple methods, the competitive protein binding assay and the radioimmunoassay, has been validated against the more rigorous double isotope dilution derivative assay.

  14. Validation and Verification (V and V) Testing on Midscale Flame Resistant (FR) Test Method

    DTIC Science & Technology

    2016-12-16

    Validation and Verification (V&V) Testing on Midscale Flame Resistant (FR) Test Method, by Margaret Auerbach, Thomas A. Godfrey, Michael J. Grady, and Gary N. Proulx; report covering January 2015 – April 2015. Abstract: The Midscale Test for FR Performance was developed to complement the capabilities of the ASTM F1930 Standard Test

  15. From the Bronx to Bengifunda (and other lines of flight): deterritorializing purposes and methods in science education research

    NASA Astrophysics Data System (ADS)

    Gough, Noel

    2011-03-01

    In this essay I explore a number of questions about purposes and methods in science education research prompted by my reading of Wesley Pitts' ethnographic study of interactions among four students and their teacher in a chemistry classroom in the Bronx, New York City. I commence three 'lines of flight' (small acts of Deleuzo-Guattarian deterritorialization) that depart from the conceptual territory regulated by science education's dominant systems of signification and make new connections within and beyond that territory. I offer neither a comprehensive review nor a thorough critique of Wesley's paper but, rather, suggest some alternative directions for science education research in the genre he exemplifies.

  16. Reliability and concurrent validity of a novel method allowing for in-shoe measurement of navicular drop

    PubMed Central

    2014-01-01

    Background Increased navicular drop is associated with increased risk of lower extremity overuse injuries, and foot orthoses are often prescribed to reduce navicular drop. For laboratory studies, transparent shoes may be used to monitor the effect of orthoses, but no clinically feasible methods exist. We have developed a stretch-sensor that allows for in-shoe measurement of navicular drop, but its reliability and validity are unknown. The purpose of this study was to investigate: 1) the reliability of the stretch-sensor for measuring navicular drop, and 2) the concurrent validity of the stretch-sensor compared to the static navicular drop test. Methods Intra- and inter-rater reliability was tested on 27 participants walking on a treadmill on two separate days. The stretch-sensor was positioned 20 mm posterior to the tip of the medial malleolus and 20 mm posterior to the navicular tuberosity. The participants walked six minutes on the treadmill before navicular drop was measured. Reliability was quantified by the Intraclass Correlation Coefficient (ICC 2.1) and agreement was quantified by Limits of Agreement (LOA). To assess concurrent validity, static navicular drop was measured with the stretch-sensor and compared with static navicular drop measured with a ruler on 27 new participants. Linear regression was used to measure concurrent validity. Results The reliability of the stretch-sensor was acceptable for barefoot measurement (intra- and inter-rater ICC: 0.76-0.84) but lower for in-shoe measurement (ICC: 0.65). There was a significant association between static navicular drop measured with the stretch-sensor and with a ruler (r = 0.745, p < 0.001). Conclusion This study suggests that the stretch-sensor has acceptable reliability for dynamic barefoot measurement of navicular drop. Furthermore, the stretch-sensor shows concurrent validity compared with the static navicular drop test as performed by Brody. This new simple method may hold promise for both
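
    The ICC (2,1) used above comes from a two-way random-effects ANOVA decomposition (absolute agreement, single measures). A minimal sketch of that computation, assuming a subjects × raters matrix of navicular-drop values (the numbers below are invented, not the study's data):

```python
import numpy as np

def icc_2_1(X):
    """ICC(2,1): two-way random effects, absolute agreement, single measures."""
    n, k = X.shape
    grand = X.mean()
    ms_rows = k * np.sum((X.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    ms_cols = n * np.sum((X.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
    sse = np.sum((X - X.mean(axis=1, keepdims=True)
                    - X.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# Hypothetical navicular drop (mm): 5 participants rated by 2 raters.
X = np.array([[7.1, 7.4], [5.2, 5.0], [9.3, 8.8], [6.0, 6.4], [8.1, 7.9]])
print(f"ICC(2,1) = {icc_2_1(X):.2f}")
```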

  17. A simple liquid chromatography-tandem mass spectrometry method for urinary free cortisol analysis: suitable for routine purpose.

    PubMed

    Persichilli, Silvia; Gervasoni, Jacopo; Iavarone, Federica; Zuppi, Cecilia

    2010-10-01

    The best index of adrenal dysfunction is the urinary free cortisol (UFC) measurement, performed on a 24-h urine collection. This measurement is also useful in the investigation of Cushing's syndrome. In this paper, we report a simple and selective method for the analysis of UFC by liquid chromatography-tandem mass spectrometry (LC-MS/MS), suitable for use in a high-volume clinical laboratory routine. The results were compared to those obtained using the commercial immunoassay method used in our laboratory. Urine samples containing 50 ng of internal standard (cortisol-9,11,12,12-d(4)) were deproteinized using centrifugal filters with a 10,000 Da molecular weight cut-off and injected on a reversed-phase column. Cortisol was analyzed in highly selective reaction monitoring in positive atmospheric pressure chemical ionization mode, at a resolution of 0.4 amu full width at half maximum, following the transitions from the precursor ions at m/z 363.2 for cortisol and m/z 367.2 for deuterated cortisol. The method validation included analysis of precision, linearity, sensitivity, recovery and interference from structurally similar steroids. UFC from 230 subjects was measured using the LC-MS/MS and electrochemiluminescence immunoassay (ECLIA) methods. The calibration curves exhibited linearity and reproducibility in the range 7-10,000 nmol/L. Total imprecision was lower than 10%. The limit of detection and limit of quantification were 2 and 7 nmol/L, respectively. Mean recovery was higher than 90%. Structurally similar steroids do not interfere in the proposed method, but cause a significant change in the ECLIA results. Cortisol values obtained using the ECLIA method were always higher than those obtained using the LC-MS/MS method, with the bias directly proportional to the cortisol concentration. The reference values calculated using 180 normal subjects were 11-70 μg/day. The proposed method is sensitive, simple, free from interferences and reliable for routine use.

  18. Validity and reliability of autofluorescence-based quantification method of dental plaque.

    PubMed

    Han, Sun-Young; Kim, Bo-Ra; Ko, Hae-Youn; Kwon, Ho-Keun; Kim, Baek-Il

    2015-12-01

    The aim of this study was to evaluate the validity and reliability of an autofluorescence-based plaque quantification (APQ) method. The facial surfaces of 600 sound anterior teeth of 50 subjects were examined. The subjects received dental plaque examination using the Turesky-modified Quigley-Hein plaque index (QHI) and the Silness & Löe plaque index (SLI). Autofluorescence images were taken before the plaque examination with Quantitative Light-induced Fluorescence-Digital, and a plaque percent index (PPI) was calculated. Correlations between the two existing plaque indices and the PPI of the APQ method were evaluated to determine which threshold of plaque redness on the tooth (ΔR) yields the highest correlation. Area under the ROC curve (AUC) analysis and intra- and inter-examiner reliability tests were performed. The PPIΔR20 of the APQ method showed a moderate correlation with the two existing plaque indices (rho of 0.48 for QHI and 0.51 for SLI), placing its validity in the fair category, with excellent reliability. The APQ method also showed the potential to detect heavy plaque with fair validity. In summary, the APQ method demonstrated excellent reliability and fair validity compared with the two conventional indices. The plaque quantification described has the potential to be used in the clinical evaluation of oral hygiene procedures. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. HPLC/UV or bioassay: two valid methods for posaconazole quantification in human serum samples.

    PubMed

    Cendejas-Bueno, E; Forastiero, A; Rodriguez-Tudela, J L; Cuenca-Estrella, M; Gomez-Lopez, A

    2012-12-01

    In this paper we report on the development and validation of two different methods for posaconazole quantification from serum samples, HPLC/UV and bioassay. Both methods have been validated according to international guidelines and were also applied to the analysis of 61 trough serum samples from treated patients. A good correlation between both methods was observed. The HPLC method, more laborious and expensive, was demonstrated to be more accurate, precise and faster (analytical range 0.125-16 μg/mL, accuracy between -2.48 and 3.70% and precision between 2.77 and 5.93%, with an analytical run time of 11 min), making it a valuable tool for reference laboratories that centralize high numbers of samples. The microbiological method, however, is simple and offers sufficient precision and accuracy (analytical range 0.125-16 μg/mL, accuracy between -8.10 and 3.77% and a precision between 4.52 and 10.07%), to be used to monitor posaconazole. It may be a valid alternative to chromatographic methods in clinical laboratories without specialized facilities. © 2011 The Authors. Clinical Microbiology and Infection © 2011 European Society of Clinical Microbiology and Infectious Diseases.

  20. Blood Density Is Nearly Equal to Water Density: A Validation Study of the Gravimetric Method of Measuring Intraoperative Blood Loss.

    PubMed

    Vitello, Dominic J; Ripper, Richard M; Fettiplace, Michael R; Weinberg, Guy L; Vitello, Joseph M

    2015-01-01

    Purpose. The gravimetric method of weighing surgical sponges is used to quantify intraoperative blood loss. The wet mass of the gauze minus its dry mass gives the volume of blood lost. This method assumes that the density of blood is equivalent to that of water (1 g/mL). This study's purpose was to validate that assumption and to correlate density with hematocrit. Methods. 50 µL of whole blood was weighed from eighteen rats, with a distilled water control weighed for each blood sample. The mean masses of the blood and water samples were compared using Student's unpaired, one-tailed t-test. The blood sample masses and hematocrits were compared using linear regression. Results. The average mass of the eighteen blood samples was 0.0489 g and that of the distilled water controls was 0.0492 g; the t-test showed no significant difference between the two (P = 0.2269, R² = 0.03154). The hematocrit values ranged from 24% to 48%; the regression of mass on hematocrit gave R² = 0.1767. Conclusions. The masses of the blood and distilled water samples did not differ significantly, and linear regression showed the hematocrit was not proportional to the mass of the blood. The study confirmed that the measured density of blood is similar to that of water.
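
    Both analyses reported here (an unpaired one-tailed t-test of blood versus water masses and a regression of blood mass on hematocrit) are straightforward to reproduce; a minimal sketch with simulated masses, assuming SciPy ≥ 1.6 for the alternative keyword:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
blood = rng.normal(0.0489, 0.0004, 18)  # simulated 50-µL blood masses (g)
water = rng.normal(0.0492, 0.0002, 18)  # simulated distilled-water controls (g)

# Unpaired one-tailed t-test (H1: mean blood mass < mean water mass)
t, p = stats.ttest_ind(blood, water, alternative='less')
print(f"t = {t:.2f}, one-tailed P = {p:.4f}")

# Linear regression of blood mass on hematocrit
hct = rng.uniform(24, 48, 18)           # simulated hematocrit (%)
res = stats.linregress(hct, blood)
print(f"R^2 = {res.rvalue ** 2:.3f}")
```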

  1. Blood Density Is Nearly Equal to Water Density: A Validation Study of the Gravimetric Method of Measuring Intraoperative Blood Loss

    PubMed Central

    Vitello, Dominic J.; Ripper, Richard M.; Fettiplace, Michael R.; Weinberg, Guy L.; Vitello, Joseph M.

    2015-01-01

    Purpose. The gravimetric method of weighing surgical sponges is used to quantify intraoperative blood loss. The wet mass of the gauze minus its dry mass gives the volume of blood lost. This method assumes that the density of blood is equivalent to that of water (1 g/mL). This study's purpose was to validate that assumption and to correlate density with hematocrit. Methods. 50 µL of whole blood was weighed from eighteen rats, with a distilled water control weighed for each blood sample. The mean masses of the blood and water samples were compared using Student's unpaired, one-tailed t-test. The blood sample masses and hematocrits were compared using linear regression. Results. The average mass of the eighteen blood samples was 0.0489 g and that of the distilled water controls was 0.0492 g; the t-test showed no significant difference between the two (P = 0.2269, R² = 0.03154). The hematocrit values ranged from 24% to 48%; the regression of mass on hematocrit gave R² = 0.1767. Conclusions. The masses of the blood and distilled water samples did not differ significantly, and linear regression showed the hematocrit was not proportional to the mass of the blood. The study confirmed that the measured density of blood is similar to that of water. PMID:26464949

  2. Co-validation of three methods for optical characterization of point-focus concentrators

    SciTech Connect

    Wendelin, T.J.; Grossman, J.W.

    1994-10-01

    Three different methods for characterizing point-focus solar concentrator optical performance have been developed for specific applications. These methods include a laser ray trace technique called the Scanning Hartmann Optical Test, a video imaging process called the 2f Technique and actual on-sun testing in conjunction with optical computer modeling. Three concentrator test articles, each of a different design, were characterized using at least two of the methods and, in one case, all three. The results of these tests are compared in order to validate the methods. Excellent agreement is observed in the results, suggesting that the techniques provide consistent and accurate characterizations of solar concentrator optics.

  3. Developing and validating a nutrition knowledge questionnaire: key methods and considerations.

    PubMed

    Trakman, Gina Louise; Forsyth, Adrienne; Hoye, Russell; Belski, Regina

    2017-10-01

    To outline key statistical considerations and detailed methodologies for the development and evaluation of a valid and reliable nutrition knowledge questionnaire. Literature on questionnaire development in a range of fields was reviewed, and a set of evidence-based guidelines specific to the creation of a nutrition knowledge questionnaire has been developed. The recommendations describe key qualitative methods and statistical considerations, and include relevant examples from previous papers and existing nutrition knowledge questionnaires. Where details have been omitted for the sake of brevity, the reader is directed to suitable references. We recommend an eight-step methodology for nutrition knowledge questionnaire development as follows: (i) definition of the construct and development of a test plan; (ii) generation of the item pool; (iii) choice of the scoring system and response format; (iv) assessment of content validity; (v) assessment of face validity; (vi) purification of the scale using item analysis, including item characteristics, difficulty and discrimination; (vii) evaluation of the scale including its factor structure and internal reliability, or Rasch analysis, including assessment of dimensionality and internal reliability; and (viii) gathering of data to re-examine the questionnaire's properties, assess temporal stability and confirm construct validity. Several of these methods have previously been overlooked. The measurement of nutrition knowledge is an important consideration for individuals working in the nutrition field. Improved methods in the development of nutrition knowledge questionnaires, such as the use of factor analysis or Rasch analysis, will enable more confidence in reported measures of nutrition knowledge.

  4. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context

    PubMed Central

    Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan

    2012-01-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720
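
    The instability described here is easy to demonstrate: re-running m-fold cross-validation with different random fold assignments changes how many variables a penalized regression selects. A minimal sketch using scikit-learn's LassoCV on simulated sparse, weak-signal data (the Lasso stands in for SCAD and the Adaptive Lasso, which scikit-learn does not provide):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n, p = 200, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:10] = 0.2                          # sparse, weak signals
y = X @ beta + rng.standard_normal(n)

counts = []
for seed in range(10):                   # different random 10-fold splits
    cv = KFold(n_splits=10, shuffle=True, random_state=seed)
    model = LassoCV(cv=cv).fit(X, y)
    counts.append(int(np.sum(model.coef_ != 0)))
print("selected-variable counts:", counts)  # often varies widely run to run
```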

  5. Development and validation of an HPLC method for quality control of Pueraria lobata flower.

    PubMed

    Bebrevska, Lidiya; Bravo, Luis; Vandervoort, Jo; Pieters, Luc; Vlietinck, Arnold; Apers, Sandra

    2007-12-01

    Pueraria lobata, also known as Kudzu (Japan) or Ge (China), is a medicinal plant widely used in Oriental traditional medicine. In this study the development, optimization and validation of an HPLC method for quality control of Pueraria flower plant material is presented. By means of this analytical method the three major compounds, i.e., the isoflavones tectorigenin 7-O-[beta-D-xylopyranosyl-(1→6)-beta-D-glucopyranoside], tectorigenin 7-O-beta-D-glucopyranoside and tectorigenin, were quantified, using the isoflavones genistin and genistein as external standards. The extraction procedure, the extraction solvent, the extraction yields and the HPLC conditions were evaluated and optimized. The samples were analyzed on an RP C18 column and eluted with a binary system consisting of water and methanol using a linear gradient; detection was at 262 nm. The tectorigenin used in the recovery experiments was isolated and purified in the laboratory. The final method was fully validated according to the ICH guidelines in terms of linearity, precision and accuracy. The validation data showed that the precision (between-days RSD% of 3.1, 2.84 and 1.77 for the three major compounds, respectively) and the accuracy (recovery of 104.2%) were acceptable. These validation results demonstrate the suitability of the method for the quality control of this crude drug.

  6. Potential of accuracy profile for method validation in inductively coupled plasma spectrochemistry

    NASA Astrophysics Data System (ADS)

    Mermet, J. M.; Granier, G.

    2012-10-01

    Method validation is usually performed over a range of concentrations for which analytical criteria must be verified. One important criterion in quantitative analysis is accuracy, i.e. the contribution of both trueness and precision. The study of accuracy over this range is called an accuracy profile and provides experimental tolerance intervals. Comparison with acceptability limits fixed by the end user defines a validity domain. This work describes the computation involved in the building of the tolerance intervals, particularly for the intermediate precision with within-laboratory experiments and for the reproducibility with interlaboratory studies. Computation is based on ISO 5725-4 and on previously published work. Moreover, the bias uncertainty is also computed to verify the bias contribution to accuracy. The various types of accuracy profile behavior are exemplified with results obtained by using ICP-MS and ICP-AES. This procedure allows the analyst to define unambiguously a validity domain for a given accuracy. However, because the experiments are time-consuming, the accuracy profile method is mainly dedicated to method validation.
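
    In the simplest single-level, repeatability-only case, the β-expectation tolerance interval that underlies an accuracy profile coincides with a normal-theory prediction interval; intermediate-precision designs require the fuller ISO 5725-style variance model. A minimal sketch with invented recovery data:

```python
import numpy as np
from scipy import stats

def beta_expectation_ti(x, beta=0.95):
    """One-level beta-expectation tolerance interval (prediction-interval form),
    assuming normal data; intermediate precision needs a fuller variance model."""
    n = len(x)
    m, s = np.mean(x), np.std(x, ddof=1)
    t = stats.t.ppf((1 + beta) / 2, n - 1)
    half = t * s * np.sqrt(1 + 1 / n)
    return m - half, m + half

recoveries = np.array([98.2, 101.5, 99.7, 100.9, 97.8, 100.3])  # % of nominal
lo, hi = beta_expectation_ti(recoveries)
print(f"95% beta-expectation interval: [{lo:.1f}, {hi:.1f}] %")
# Compare against acceptability limits (e.g. 95-105%) to decide validity.
```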

  7. A comparison of a 30-cluster survey method used in India and a purposive method in the estimation of immunization coverages in Tamil Nadu.

    PubMed

    Murthy, B N; Ezhil, R; Venkatasubramanian, S; Ramalingam, N; Periannan, V; Ganesan, R; Ramani, N; Selvaraj, V

    1995-01-01

    A 30-cluster survey method employed by the Government of India (GOI) for estimating immunization coverages was compared with a Purposive method, to investigate whether the likely omission of scheduled castes/scheduled tribes (SC/ST) and backward classes in the former would lead to the reporting of higher coverages. The essential difference between the two methods is the manner in which the first household is selected in the chosen first-stage sampling units (villages). With the GOI method, it is often close to the village centre, whereas with the Purposive method it is always in the periphery or in a pocket consisting of SC/ST or backward classes. A concurrent comparison of the two methods in three districts in Tamil Nadu showed no real differences in coverage with the DPT and BCG vaccines. However, coverage was consistently higher by the GOI method for the polio vaccine (by 1.5%, 3.1% and 5.3% in the three districts) and the measles vaccine (by 4.8%, 13.3% and 13.9%); the average difference was 3.3% for the polio vaccine (p = 0.08) and 7.3% for the measles vaccine (p = 0.01).

  8. Gemifloxacin mesylate (GFM) stability evaluation applying a validated bioassay method and in vitro cytotoxic study.

    PubMed

    Paim, Clésio S; Führ, Fernanda; Barth, Aline B; Gonçalves, Carlos E I; Nardi, Nance; Steppe, Martin; Schapoval, Elfrides E S

    2011-02-15

    The validation of a microbiological assay applying the cylinder-plate method to determine the quinolone gemifloxacin mesylate (GFM) content is described. Using a strain of Staphylococcus epidermidis ATCC 12228 as the test organism, the GFM content in tablets at concentrations ranging from 0.5 to 4.5 μg mL(-1) could be determined. A standard curve was obtained by plotting three values derived from the diameters of the growth inhibition zone. A prospective validation showed that the method developed is linear (r=0.9966), precise (repeatability and intermediate precision), accurate (100.63%), specific and robust. GFM solutions (from the drug product) exposed to direct UVA radiation (352 nm), alkaline hydrolysis, acid hydrolysis, thermal stress, hydrogen peroxide causing oxidation, and a synthetic impurity were used to evaluate the specificity of the bioassay. The bioassay and the previously validated high performance liquid chromatographic (HPLC) method were compared using Student's t test, which indicated that there was no statistically significant difference between these two validated methods. These studies demonstrate the validity of the proposed bioassay, which allows reliable quantification of GFM in tablets and can be used as a useful alternative methodology for GFM analysis in stability studies and routine quality control. The GFM reference standard (RS), photodegraded GFM RS, and synthetic impurity samples were also studied in order to determine the preliminary in vitro cytotoxicity to peripheral blood mononuclear cells. The results indicated that the GFM RS and photodegraded GFM RS were potentially more cytotoxic than the synthetic impurity under the conditions of analysis applied. Crown Copyright © 2010. Published by Elsevier B.V. All rights reserved.

  9. Interlaboratory Validation of the Leaching Environmental Assessment Framework (LEAF) Method 1314 and Method 1315

    EPA Science Inventory

    This report summarizes the results of an interlaboratory study conducted to generate precision estimates for two leaching methods under review by the U.S. EPA’s OSWER for inclusion into the EPA’s SW-846: Method 1314: Liquid-Solid Partitioning as a Function of Liquid...

  10. Interlaboratory Validation of the Leaching Environmental Assessment Framework (LEAF) Method 1314 and Method 1315

    EPA Science Inventory

    This report summarizes the results of an interlaboratory study conducted to generate precision estimates for two leaching methods under review by the U.S. EPA’s OSWER for inclusion into the EPA’s SW-846: Method 1314: Liquid-Solid Partitioning as a Function of Liquid...

  11. A fast and reliable method for GHB quantitation in whole blood by GC-MS/MS (TQD) for forensic purposes.

    PubMed

    Castro, André L; Tarelho, Sónia; Dias, Mário; Reis, Flávio; Teixeira, Helena M

    2016-02-05

    Gamma-hydroxybutyric acid (GHB) is an endogenous compound with a history of clinical use since the 1960s. However, due to its side effects, it has become a controlled substance and has entered the illicit market. A fully validated, sensitive and reproducible method for the quantification of GHB in whole blood by methanolic precipitation and GC-MS/MS (TQD) is presented. Using 100 μL of whole blood, the results included an LOD and LLOQ of 0.1 mg/L and a recovery of 86% over a working range of 0.1-100 mg/L. The method is sensitive and specific enough to detect the presence of GHB in small amounts of whole blood (both ante-mortem and post-mortem) and is, to the authors' knowledge, the first GC-MS/MS (TQD) method that uses different precursor and product ions for the identification of GHB and GHB-D6 (internal standard). Hence, this method may be especially useful for the study of endogenous values in this biological sample. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. SWeRF--A method for estimating the relevant fine particle fraction in bulk materials for classification and labelling purposes.

    PubMed

    Pensis, Ingeborg; Luetzenkirchen, Frank; Friede, Bernd

    2014-05-01

    In accordance with the European regulation for classification, labelling and packaging of substances and mixtures (CLP), as well as the criteria set out in the Globally Harmonized System (GHS), the fine fraction of crystalline silica (CS) has been classified for specific target organ toxicity, the target organ in this case being the lung. Generic cut-off values for products containing a fine fraction of CS trigger the need for a method to quantify the fine fraction of CS in bulk materials. This article describes the so-called SWeRF method, the size-weighted relevant fine fraction. The SWeRF method combines the particle size distribution of a powder with probability factors from the EN 481 standard, allowing the relevant fine fraction of a material to be calculated. The SWeRF method has been validated with a number of industrial minerals. This will enable manufacturers and blenders to apply the CLP and GHS criteria for the classification of mineral products containing a fine fraction of CS.
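
    A minimal sketch of the size-weighting idea, assuming the EN 481 respirable convention (the inhalable convention multiplied by the complement of a cumulative lognormal with median 4.25 µm and GSD 1.5) and an invented particle-size distribution; the published SWeRF procedure should be consulted for the exact weighting:

```python
import numpy as np
from scipy.stats import norm

def respirable_convention(d_um):
    """EN 481 respirable convention: inhalable convention times the complement
    of a cumulative lognormal (median 4.25 um, GSD 1.5)."""
    inhalable = 0.5 * (1 + np.exp(-0.06 * d_um))
    respirable = 1 - norm.cdf(np.log(d_um / 4.25) / np.log(1.5))
    return inhalable * respirable

# Invented particle size distribution: bin midpoints (um) and mass fractions.
d = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
mass_frac = np.array([0.02, 0.05, 0.10, 0.18, 0.25, 0.25, 0.15])

swerf = np.sum(mass_frac * respirable_convention(d))
print(f"size-weighted fine fraction: {swerf:.3f}")
```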

  13. SWeRF—A Method for Estimating the Relevant Fine Particle Fraction in Bulk Materials for Classification and Labelling Purposes

    PubMed Central

    2014-01-01

    In accordance with the European regulation for classification, labelling and packaging of substances and mixtures (CLP), as well as the criteria set out in the Globally Harmonized System (GHS), the fine fraction of crystalline silica (CS) has been classified for specific target organ toxicity, the target organ in this case being the lung. Generic cut-off values for products containing a fine fraction of CS trigger the need for a method to quantify the fine fraction of CS in bulk materials. This article describes the so-called SWeRF method, the size-weighted relevant fine fraction. The SWeRF method combines the particle size distribution of a powder with probability factors from the EN 481 standard, allowing the relevant fine fraction of a material to be calculated. The SWeRF method has been validated with a number of industrial minerals. This will enable manufacturers and blenders to apply the CLP and GHS criteria for the classification of mineral products containing a fine fraction of CS. PMID:24389081

  14. Determination of vitamin C in foods: current state of method validation.

    PubMed

    Spínola, Vítor; Llorent-Martínez, Eulogio J; Castilho, Paula C

    2014-11-21

    Vitamin C is one of the most important vitamins, so reliable information about its content in foodstuffs is a concern to both consumers and quality control agencies. However, the heterogeneity of food matrixes and the potential degradation of this vitamin during analysis create enormous challenges. This review addresses the development and validation of high-performance liquid chromatography methods for vitamin C analysis in food commodities during the period 2000-2014. The main characteristics of vitamin C are described, along with the strategies adopted by most authors during sample preparation (freezing and acidification) to avoid vitamin oxidation. After that, the advantages and drawbacks of the different analytical methods are discussed. Finally, the main aspects of method validation for vitamin C analysis are critically discussed. Parameters such as selectivity, linearity, limit of quantification, and accuracy were studied by most authors. Recovery experiments during accuracy evaluation were in general satisfactory, with usual values between 81 and 109%. However, few methods considered vitamin C stability during the analytical process, and the study of precision was not always clear or complete. Potential future improvements regarding proper method validation are indicated to conclude this review.

  15. A Conventional Method for Valid "Actual Soil pH" Measurement.

    PubMed

    Oman, Srečko F

    2012-12-01

    After recognition of the suspension effect problem in potentiometric measurements in aqueous suspensions, no scientific consensus about its cause and nature was reached. Numerous conventional methods of soil pH measurement were therefore introduced for practical soil pH determination. Most of the results of these methods are not valid with regard to the international pH scale. The method proposed in the present work rejects improper procedures and introduces correct soil sampling and a suitable pH measuring technique, as follows: the indicator glass electrode, substituting for roots in the soil, is inserted in a partly diluted sample suspension of the original soil, and the modified reference electrode contacts the sample in a manner that eliminates the abnormal liquid junction potential. "Actual soil pH values" measured in this way are valid, but the method used is a conventional one. Namely, the irreversible potential of the glass electrode includes the suspension effect of the first kind (SE1) and is a mixed steady-state potential. It is considered by convention as a substitute for and equivalent to the equilibrium potential, which as a rule does not exist in a suspension. The soil pH values measured by the proposed conventional method are reproducible and valid with regard to the international pH scale. They can be considered as the pH values, with an uncertainty of ±0.1 pH unit, to which the roots are exposed.

  16. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    PubMed

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    The aim was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We conducted a comprehensive survey of purity methods and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results, utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.

  17. Validity and feasibility of a digital diet estimation method for use with preschool children: a pilot study

    USDA-ARS?s Scientific Manuscript database

    The goal of the study was to assess the validity and feasibility of a digital diet estimation method for use with preschool children in Head Start. Preschool children and their caregivers participated in validation (n=22) and feasibility (n=24) pilot studies. Validity was determined in the metabolic...

  18. Validation of the Endopep-MS method for qualitative detection of active botulinum neurotoxins in human and chicken serum

    PubMed Central

    Björnstad, Kristian; Åberg, Annica Tevell; Kalb, Suzanne R.; Wang, Dongxia; Barr, John R.; Bondesson, Ulf; Hedeland, Mikael

    2015-01-01

    Botulinum neurotoxins (BoNTs) are highly toxic proteases produced by anaerobic bacteria. Traditionally, a mouse bioassay (MBA) has been used for detection of BoNTs, but for a long time, laboratories have worked with alternative methods for their detection. One of the most promising in vitro methods is a combination of an enzymatic and mass spectrometric assay called Endopep-MS. However, no comprehensive validation of the method has been presented. The main purpose of this work was to perform an in-house validation for the qualitative analysis of BoNT-A, B, C, C/D, D, D/C, E, and F in serum. The limit of detection (LOD), selectivity, precision, stability in matrix and solution, and correlation with the MBA were evaluated. The LOD was equal to or even better than that of the MBA for BoNT-A, B, D/C, E, and F. Furthermore, Endopep-MS was for the first time successfully used to differentiate between BoNT-C, D and their mosaics C/D and D/C by different combinations of antibodies and target peptides. In addition, sequential antibody capture was presented as a new way to multiplex the method when only a small sample volume is available. In the comparison with the MBA, all the samples analyzed were positive for BoNT-C/D with both methods. These results indicate that the Endopep-MS method is a good alternative to the MBA as the gold standard for BoNT detection based on its sensitivity, selectivity, speed, and that it does not require experimental animals. PMID:25228079

  19. Tocopherol and tocotrienol analysis in raw and cooked vegetables: a validated method with emphasis on sample preparation.

    PubMed

    Knecht, Katharina; Sandfuchs, Katja; Kulling, Sabine E; Bunzel, Diana

    2015-02-15

    Vegetables can be important dietary sources of vitamin E. However, data on vitamin E in raw and cooked vegetables are in part conflicting, indicating analytical pitfalls. The purpose of the study was to develop and validate an HPLC-FLD method for tocochromanol (tocopherols and tocotrienols) analysis equally suitable for raw and cooked vegetables. Significant instability of tocochromanols was observed in raw broccoli and carrot homogenates. Tocochromanols could be stabilized by freeze-drying or ascorbic acid addition prior to homogenization. The optimized protocol for tocochromanol analysis included knife and ball milling of freeze-dried vegetable pieces. Direct acetone extraction of vegetable powders allowed for satisfactory recoveries and precisions. A significant decrease of tocochromanols in baked compared to raw vegetables was shown, the extent of which varied largely between vegetables. For some raw vegetables, such as spinach or broccoli, underestimation of vitamin E in nutrient databases cannot be ruled out and should be examined. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Comparison of the quantitative performances and measurement uncertainty estimates obtained during method validation versus routine applications of a novel hydrophilic interaction chromatography method for the determination of cidofovir in human plasma.

    PubMed

    Lecomte, F; Hubert, C; Demarche, S; De Bleye, C; Dispas, A; Jost, M; Frankenne, F; Ceccato, A; Rozet, E; Hubert, Ph

    2012-01-05

    Method validation is essential to ensure that an analytical method is fit for its intended purpose. Additionally, it is advisable to estimate measurement uncertainty in order to allow a correct interpretation of the results generated by analytical methods. Measurement uncertainty can be efficiently estimated during method validation as a top-down approach. However, method-validation predictions of the quantitative performance of an assay, and the corresponding estimates of measurement uncertainty, may be far from the real performance obtained during the routine application of the assay. In this work, the predictions of quantitative performance and the measurement uncertainty estimates obtained from method validation are compared to those obtained during routine applications of a bioanalytical method. For that purpose, a new hydrophilic interaction chromatography (HILIC) method was used, developed for the determination of cidofovir, an antiviral drug, in human plasma. Cidofovir (CDV) is a highly polar molecule presenting three ionizable functions and is therefore an interesting candidate for determination in HILIC mode. CDV is an acyclic cytidine monophosphate analog that has a broad antiviral spectrum and is currently undergoing evaluation in clinical trials as a topical agent for treatment of papillomavirus infections. The analytical conditions were optimized by means of a design-of-experiments approach in order to obtain robust analytical conditions, which were essential to enable the comparisons mentioned above. After sample clean-up by solid phase extraction, the chromatographic analysis was performed on a bare silica stationary phase using a mixture of acetonitrile-ammonium hydrogen carbonate (pH 7.0; 20 mM) (72:28, v/v) as the mobile phase. This newly developed bioanalytical method was then fully validated according to FDA (Food and Drug Administration) requirements using a total error approach that guaranteed that each future

  1. Comparing Thermal Process Validation Methods for Salmonella Inactivation on Almond Kernels.

    PubMed

    Jeong, Sanghyup; Marks, Bradley P; James, Michael K

    2017-01-01

    Ongoing regulatory changes are increasing the need for reliable process validation methods for pathogen reduction processes involving low-moisture products; however, the reliability of various validation methods has not been evaluated. Therefore, the objective was to quantify the accuracy and repeatability of four validation methods (two biologically based and two based on time-temperature models) for thermal pasteurization of almonds. Almond kernels were inoculated with Salmonella Enteritidis phage type 30 or Enterococcus faecium (NRRL B-2354) at ~10^8 CFU/g, equilibrated to 0.24, 0.45, 0.58, or 0.78 water activity (aw), and then heated in a pilot-scale, moist-air impingement oven (dry bulb 121, 149, or 177°C; dew point <33.0, 69.4, 81.6, or 90.6°C; air velocity 2.7 m/s) to a target lethality of ~4 log. Almond surface temperatures were measured in two ways, and those temperatures were used to calculate Salmonella inactivation using a traditional (D, z) model and a modified model accounting for process humidity. Among the process validation methods, both methods based on time-temperature models had better repeatability, with replication errors approximately half those of the surrogate (E. faecium). Additionally, the modified model yielded the lowest root mean squared error in predicting Salmonella inactivation (1.1 to 1.5 log CFU/g); in contrast, E. faecium yielded a root mean squared error of 1.2 to 1.6 log CFU/g, and the traditional model yielded an unacceptably high error (3.4 to 4.4 log CFU/g). Importantly, the surrogate and modified model both yielded lethality predictions that were statistically equivalent (α = 0.05) to actual Salmonella lethality. The results demonstrate the importance of methodology, aw, and process humidity when validating thermal pasteurization processes for low-moisture foods, which should help processors select and interpret validation methods to ensure product safety.
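
    The traditional (D, z) model referenced above integrates the reciprocal of the temperature-dependent D-value over the measured surface-temperature history to obtain cumulative log reductions; the modified model additionally makes D depend on process humidity. A minimal sketch of the traditional calculation with placeholder parameters (not the study's values):

```python
import numpy as np

def log_reduction(t_s, T_C, D_ref_min=0.5, T_ref=80.0, z=10.0):
    """Cumulative log10 reduction for a surface time-temperature history
    using the classical D-z model: D(T) = D_ref * 10**((T_ref - T)/z)."""
    D_min = D_ref_min * 10.0 ** ((T_ref - T_C) / z)  # D-value, minutes
    rate = 1.0 / (D_min * 60.0)                      # log10 reductions per second
    # Trapezoidal integration of the instantaneous rate over time.
    return float(np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t_s)))

t = np.linspace(0.0, 300.0, 301)                     # 5-min process, seconds
T = 25.0 + 70.0 * (1.0 - np.exp(-t / 60.0))          # simulated heating curve (C)
print(f"predicted lethality: {log_reduction(t, T):.2f} log10")
```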

  2. Determination of proline in honey: comparison between official methods, optimization and validation of the analytical methodology.

    PubMed

    Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Scarponi, Giuseppe

    2014-05-01

    The study compares official spectrophotometric methods for the determination of proline content in honey - those of the International Honey Commission (IHC) and the Association of Official Analytical Chemists (AOAC) - with the original Ough method. Results show that the extra time-consuming treatment stages added by the IHC method with respect to the Ough method are pointless. We demonstrate that the AOAC method proves to be the best in terms of accuracy and time saving. The optimized waiting time for the absorbance recording is set at 35 min from the removal of the reaction tubes from the boiling bath used in the sample treatment. The optimized method was validated in the matrix: linearity up to 1800 mg L(-1), limit of detection 20 mg L(-1), limit of quantification 61 mg L(-1). The method was applied to 43 unifloral honey samples from the Marche region, Italy.

  3. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    SciTech Connect

    Melius, J.; Margolis, R.; Ong, S.

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  4. Development and validation of a molecular size distribution method for polysaccharide vaccines.

    PubMed

    Clément, G; Dierick, J-F; Lenfant, C; Giffroy, D

    2014-01-01

    Determination of the molecular size distribution of vaccine products by high performance size exclusion chromatography coupled to refractive index detection is important during the manufacturing process. Partial elution of high molecular weight compounds in the void volume of the chromatographic column is responsible for variation in the results obtained with a reference method using a TSK G5000PWXL chromatographic column. GlaxoSmithKline Vaccines has developed an alternative method relying on the selection of a different chromatographic column with a wider separation range and the generation of a dextran calibration curve to determine the optimal molecular weight cut-off values for all tested products. Validation of this method was performed according to The International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH). The new method detected product degradation with the same sensitivity as that observed for the reference method. All validation parameters were within the pre-specified range. Precision (relative standard deviation (RSD) of mean values) was < 5 per cent (intra-assay) and < 10 per cent (inter-assay). Sample recovery was > 70 per cent for all polysaccharide conjugates and for the Haemophilus influenzae type B final container vaccine. All results obtained for robustness met the acceptance criteria defined in the validation protocol (≤ 2 times (RSD) or ≤ 2 per cent difference between the modified and the reference parameter value if RSD = 0 per cent). The new method was shown to be a suitable quality control method for the release and stability follow-up of polysaccharide-containing vaccines. The new method gave comparable results to the reference method, but with less intra- and inter-assay variability.

  5. A statistical method (cross-validation) for bone loss region detection after spaceflight

    PubMed Central

    Zhao, Qian; Li, Wenjun; Li, Caixia; Chu, Philip W.; Kornak, John; Lang, Thomas F.

    2010-01-01

    Astronauts experience bone loss after long spaceflight missions. Identifying the specific regions that undergo the greatest losses (e.g., the proximal femur) could reveal information about the processes of bone loss in disuse and disease. Detecting such regions, however, remains an open problem. This paper focuses on statistical methods to detect such regions. We perform statistical parametric mapping to obtain t-maps of changes in images, and propose a new cross-validation method to select an optimum suprathreshold for forming clusters of pixels. Once these candidate clusters are formed, we use permutation testing of longitudinal labels to derive significant changes. PMID:20632144
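
    The permutation step described here, shuffling longitudinal labels to build a null distribution for a cluster-level statistic, can be sketched with a simple sign-flip, max-statistic permutation test on simulated pre/post pixel values:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_pix = 12, 100
pre = rng.normal(0.0, 1.0, (n_subj, n_pix))
post = pre - 0.1                  # simulated small uniform bone loss
post[:, :10] -= 0.8               # one region with greater loss

diff = post - pre
obs_t = diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(n_subj))
obs_max = np.abs(obs_t).max()

# Null distribution: randomly flip pre/post labels per subject (sign flips),
# recomputing the maximum |t| across pixels each time.
null_max = []
for _ in range(2000):
    d = diff * rng.choice([-1.0, 1.0], size=(n_subj, 1))
    t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n_subj))
    null_max.append(np.abs(t).max())

p = np.mean(np.array(null_max) >= obs_max)
print(f"max |t| = {obs_max:.2f}, permutation P = {p:.4f}")
```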

  6. Development and Validation of HPTLC Method for the Estimation of Rizatriptan Benzoate in Bulk and Tablets

    PubMed Central

    Sundar, B. Syama; Suneetha, A.

    2010-01-01

    A new, simple high-performance thin-layer chromatographic method has been proposed for the determination of rizatriptan benzoate in a tablet dosage form. The drug was separated on aluminum plates precoated with silica gel 60 F254, with dichloromethane-acetone-acetic acid 3:2:0.2 (v/v/v) as the mobile phase. Quantitative analysis was performed by densitometric scanning at 230 nm. The method was validated for linearity, accuracy, precision and robustness. The calibration plot was linear over the range 200-700 ng/band for rizatriptan benzoate. The method was successfully applied to the analysis of the drug in bulk and in marketed tablets. PMID:21969758

  7. A statistical method (cross-validation) for bone loss region detection after spaceflight.

    PubMed

    Zhao, Qian; Li, Wenjun; Li, Caixia; Chu, Philip W; Kornak, John; Lang, Thomas F; Fang, Jiqian; Lu, Ying

    2010-06-01

    Astronauts experience bone loss after long spaceflight missions. Identifying the specific regions that undergo the greatest losses (e.g., the proximal femur) could reveal information about the processes of bone loss in disuse and disease. Detecting such regions, however, remains an open problem. This paper focuses on statistical methods to detect such regions. We perform statistical parametric mapping to obtain t-maps of changes in images, and propose a new cross-validation method to select an optimum suprathreshold for forming clusters of pixels. Once these candidate clusters are formed, we use permutation testing of longitudinal labels to derive significant changes.

  8. Validity and reliability of the Thai version of the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU)

    PubMed Central

    Pipanmekaporn, Tanyong; Wongpakaran, Nahathai; Mueankwan, Sirirat; Dendumrongkul, Piyawat; Chittawatanarat, Kaweesak; Khongpheng, Nantiya; Duangsoy, Nongnut

    2014-01-01

    Purpose The purpose of this study was to determine the validity and reliability of the Thai version of the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) when compared with diagnoses made by delirium experts. Patients and methods This was a cross-sectional study conducted in both surgical intensive care and subintensive care units in Thailand between February and June 2011. Seventy patients aged 60 years or older who had been admitted to the units were enrolled in the study within the first 48 hours of admission. Each patient was randomly assessed for delirium by a nurse using the Thai version of the CAM-ICU algorithm (Thai CAM-ICU) or by a delirium expert using the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision. Results The prevalence of delirium was found to be 18.6% (n=13) by the delirium experts. The sensitivity of the Thai CAM-ICU's algorithm was 92.3% (95% confidence interval [CI] =64.0%−99.8%), while the specificity was 94.7% (95% CI =85.4%−98.9%). The instrument displayed good interrater reliability (Cohen's κ =0.81; 95% CI =0.64−0.99). The time taken to complete the Thai CAM-ICU was 1 minute (interquartile range, 1−2 minutes). Conclusion The Thai CAM-ICU demonstrated good validity, reliability, and ease of use when diagnosing delirium in a surgical intensive care unit setting. The use of this diagnostic tool should be encouraged for daily, routine use, so as to promote the early detection of delirium and its rapid treatment. PMID:24904208
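
    The reported sensitivity, specificity, and Cohen's κ follow directly from the 2×2 table of Thai CAM-ICU ratings against expert diagnoses; a minimal sketch using counts consistent with the figures above (treated as a reconstruction, not the published raw data):

```python
# 2x2 table consistent with the reported figures: Thai CAM-ICU vs expert
tp, fn, fp, tn = 12, 1, 3, 54   # 13 delirious, 57 non-delirious patients

sens = tp / (tp + fn)
spec = tn / (tn + fp)

n = tp + fn + fp + tn
po = (tp + tn) / n                                            # observed agreement
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
kappa = (po - pe) / (1 - pe)

print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}, kappa = {kappa:.2f}")
```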

  9. Development and Validation of New Discriminative Dissolution Method for Carvedilol Tablets

    PubMed Central

    Raju, V.; Murthy, K. V. R.

    2011-01-01

    The objective of the present study was to develop and validate a discriminative dissolution method for the evaluation of carvedilol tablets. Different conditions, such as the type of dissolution medium, the volume of dissolution medium and the paddle rotation speed, were evaluated. The best in vitro dissolution profile was obtained using Apparatus II (paddle) at 50 rpm with 900 ml of pH 6.8 phosphate buffer as the dissolution medium. Drug release was evaluated by a high-performance liquid chromatographic method. The dissolution method was validated according to current ICH and FDA guidelines for specificity, accuracy, precision and stability, and the results obtained were within the acceptable ranges. The dissolution profiles obtained for three different products were compared using ANOVA-based, model-dependent and model-independent methods; the results showed a significant difference between the products. The dissolution test developed and validated was adequate, given its high capacity to discriminate between the release characteristics of the products tested, and could be applied in the development and quality control of carvedilol tablets. PMID:22923865
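
    The abstract does not name the model-independent metric used, but the most common one in dissolution-profile comparison is the similarity factor f2; a minimal sketch with hypothetical percent-dissolved profiles:

```python
import numpy as np

def f2(ref, test):
    """Similarity factor f2 for % dissolved at matched time points;
    f2 >= 50 is conventionally read as 'similar' profiles."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

reference = [28, 51, 71, 88, 95]   # hypothetical % dissolved profile
candidate = [18, 40, 62, 81, 93]
print(f"f2 = {f2(reference, candidate):.1f}")
```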

  10. Improvement of the validity of the simplified modal method for designing a subwavelength dielectric transmission grating.

    PubMed

    Jing, Xufeng; Zhang, Junchao; Tian, Ying; Jin, Shangzhong

    2014-01-10

    To accurately and easily design the diffraction characteristics of a rectangular transmission grating illuminated in the Littrow mounting, the validity and limitations of the simplified modal method are evaluated by comparing the diffraction efficiencies predicted by the modal approach with exact results calculated with rigorous coupled-wave analysis. The influence of the normalized grating period, the normalized groove depth, and the fill factor on the accuracy of the modal method is quantitatively determined. More importantly, the reflection effect of the two propagating grating modes, described with an optical thin-film model and a nonsymmetrical Fabry-Perot model, is proposed and applied in the modal method to improve the accuracy of the calculated diffraction efficiencies. Generally, it is found that the thin-film model of reflection loss is valid at smaller normalized periods, whereas the Fabry-Perot model can exactly calculate the reflection loss of grating modes at larger normalized periods. Because the validity of the modal approach is determined independently of the incident wavelength, the exact design and analysis of grating diffraction elements can be implemented at different wavelengths by simply scaling the grating parameters. Moreover, the polarization dependence of the limitations of the modal method, without and with the reflection loss of grating modes, is clearly demonstrated.

  11. Development and validation of UFLC-MS/MS method for determination of bosentan in rat plasma.

    PubMed

    Atila, Alptug; Ozturk, Murat; Kadioglu, Yucel; Halici, Zekai; Turkan, Didar; Yayla, Muhammed; Un, Harun

    2014-08-01

    A rapid, simple and sensitive UFLC-MS/MS method was developed and validated for the determination of bosentan in rat plasma, using etodolac as an internal standard (IS), after liquid-liquid extraction with diethyl ether-chloroform (4:1, v/v). Bosentan and the IS were detected using electrospray ionization in positive-ion multiple reaction monitoring mode by monitoring the transitions m/z 551.90→201.90 and 288.20→172.00, respectively. Chromatographic separation was performed on an Inertsil ODS-4 column with a gradient mobile phase consisting of 0.1% acetic acid with 5 mM ammonium acetate in water (solvent A) and 5 mM ammonium acetate in acetonitrile-methanol (50:50, v/v) (solvent B) at a flow rate of 0.3 mL/min. The method was sensitive, with a lower limit of quantitation (LLOQ) of 0.5 ng/mL, and the standard calibration curve for bosentan was linear (r>0.997) over the studied concentration range (0.5-2000 ng/mL). The proposed method was fully validated by determining specificity, linearity, LLOQ, precision and accuracy, recovery, matrix effect and stability. The validated method was successfully applied to plasma samples obtained from rats.
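
    As an illustration of the kind of linearity check reported here, the sketch below fits a weighted calibration curve and back-calculates the standards; the 1/x² weighting is a common convention for such wide bioanalytical ranges, and the response values are hypothetical:

        import numpy as np

        # Weighted (1/x^2) linear calibration: peak-area ratio (analyte/IS)
        # vs. nominal concentration, with back-calculated standards.
        conc = np.array([0.5, 1, 5, 20, 100, 500, 1000, 2000])    # ng/mL
        ratio = np.array([0.0021, 0.0040, 0.0195, 0.081,
                          0.405, 2.01, 4.05, 8.0])                # hypothetical

        # np.polyfit squares its weights, so pass 1/x to get 1/x^2 weighting.
        slope, intercept = np.polyfit(conc, ratio, 1, w=1.0 / conc)

        back = (ratio - intercept) / slope      # back-calculated concentrations
        bias_pct = 100 * (back - conc) / conc   # accept within +/-15% (20% at LLOQ)
        print(np.round(bias_pct, 1))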

  12. HPLC-UV method validation for the identification and quantification of bioactive amines in commercial eggs.

    PubMed

    de Figueiredo, Tadeu Chaves; de Assis, Débora Cristina Sampaio; Menezes, Liliane Denize Miranda; da Silva, Guilherme Resende; Lanza, Isabela Pereira; Heneine, Luiz Guilherme Dias; Cançado, Silvana de Vasconcelos

    2015-09-01

    A quantitative and confirmatory high-performance liquid chromatography with ultraviolet detection (HPLC-UV) method for the determination of bioactive amines in the albumen and yolk of commercial eggs was developed, optimized and validated, using analyte extraction with trichloroacetic acid and pre-column derivatization with dansyl chloride. Phenylethylamine, putrescine, cadaverine, histamine, tyramine, spermidine and spermine standards were used to evaluate the following performance parameters: limit of detection (LoD), limit of quantification (LoQ), selectivity, linearity, precision, recovery and ruggedness. The LoD of the method ranged from 0.2 to 0.3 mg/kg for the yolk matrix and from 0.2 to 0.4 mg/kg for the albumen matrix; the LoQ ranged from 0.7 to 1.0 mg/kg for the yolk matrix and from 0.7 to 1.1 mg/kg for the albumen matrix. The validated method exhibited excellent selectivity and separation of all amines, with coefficients of determination higher than 0.99. Recovery values ranged from 90.5% to 108.3%, and the relative standard deviation (RSD) was lower than 10% under repeatability conditions for the studied analytes. These performance parameters show the validated method to be adequate for the determination of bioactive amines in egg albumen and yolk.
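
    LoD and LoQ figures of this kind are often derived from the calibration curve as in ICH Q2 (LoD = 3.3σ/S, LoQ = 10σ/S); the abstract does not state which estimation approach was used, so the sketch below is only illustrative, with hypothetical data:

        import numpy as np

        # Calibration-curve approach to LoD/LoQ: sigma is the residual standard
        # deviation of the regression and S its slope. Hypothetical data
        # (concentration in mg/kg vs. detector response).
        x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
        y = np.array([10.8, 21.5, 44.0, 86.2, 174.9])

        S, b = np.polyfit(x, y, 1)
        resid = y - (S * x + b)
        sigma = np.sqrt(np.sum(resid**2) / (len(x) - 2))  # n-2 degrees of freedom

        lod = 3.3 * sigma / S
        loq = 10 * sigma / S
        print(f"LoD = {lod:.2f} mg/kg, LoQ = {loq:.2f} mg/kg")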

  13. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors but also while eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is based on analytical expressions of Zernike polynomials and a power spectral density model, it does not introduce the aliasing and interpolation errors produced by conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. This new method also automatically eliminates measurement noise and other measurement errors such as artificial discontinuities. The development cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specifications for the optical quality of the individual optics, before they are fabricated, through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match the gridded data format of a model validation tool, and (2) eliminating surface measurement noise and measurement errors so that the resulting surface height map is continuous and smoothly varying. So far, the preferred method for re-sampling a surface map has been two-dimensional interpolation. The main problem with this method is that the same pixel can take different values depending on the interpolation scheme chosen ("nearest," "linear," "cubic," or "spline" fitting in Matlab). The conventional, FFT-based spatial filtering method used to…
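
    A compressed sketch of the analytic re-sampling idea follows: fit the measured map with a small Zernike expansion, then evaluate the fit on the target grid. Only a few low-order terms are shown, and the PSD-model treatment of the higher-frequency residual described above is omitted:

        import numpy as np

        def zernike_basis(x, y):
            """A few low-order Zernike terms on the unit disk."""
            r2 = x**2 + y**2
            return np.stack([np.ones_like(x),         # piston
                             x, y,                    # tilts
                             2 * r2 - 1,              # defocus
                             x**2 - y**2, 2 * x * y]) # astigmatisms

        def fit_and_resample(zmap, n_out):
            """Least-squares Zernike fit of zmap, evaluated on an n_out grid."""
            n = zmap.shape[0]
            ax = np.linspace(-1, 1, n)
            X, Y = np.meshgrid(ax, ax)
            mask = X**2 + Y**2 <= 1                 # fit inside the unit disk
            B = zernike_basis(X[mask], Y[mask]).T   # design matrix
            coeffs, *_ = np.linalg.lstsq(B, zmap[mask], rcond=None)
            ax2 = np.linspace(-1, 1, n_out)
            X2, Y2 = np.meshgrid(ax2, ax2)
            out = zernike_basis(X2.ravel(), Y2.ravel()).T @ coeffs
            return out.reshape(n_out, n_out)

        # Noisy 64x64 "measured" map, re-sampled to 256x256: the analytic fit
        # introduces no interpolation aliasing and discards the added noise.
        rng = np.random.default_rng(0)
        ax = np.linspace(-1, 1, 64)
        X, Y = np.meshgrid(ax, ax)
        measured = (0.3 * (2 * (X**2 + Y**2) - 1) + 0.1 * (X**2 - Y**2)
                    + 0.01 * rng.standard_normal(X.shape))
        resampled = fit_and_resample(measured, 256)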

  14. Experimental validation of a modal flexibility-based damage detection method for a cyber-physical system

    NASA Astrophysics Data System (ADS)

    Martinez-Castro, Rosana E.; Eskew, Edward L.; Jang, Shinae

    2014-03-01

    The detection and localization of damage in a timely manner is critical in order to avoid the failure of structures. When a structure is subjected to an unscheduled impulsive force, the resulting damage can lead to failure in a very short period of time. As such, a monitoring strategy that can adapt to variability in the environment and that anticipates changes in physical processes has the potential to detect, locate and mitigate damage. These requirements can be met by a cyber-physical system (CPS) equipped with Wireless Smart Sensor Network (WSSN) systems that is capable of measuring and analyzing dynamic responses in real time using on-board, in-network processing. The Eigenparameter Decomposition of Structural Flexibility Change (ED) Method is validated with real data and considered for use in the computational core of this CPS. The condition screening is implemented on a damaged structure and compared to an original baseline calculation, hence providing a supervised learning environment. An experimental laboratory study of a 5-story shear building with three damage conditions, subjected to an impulsive force, was chosen to validate the effectiveness of the proposed method for locating and quantifying the extent of damage. A numerical simulation of the same building subjected to band-limited white noise was also developed for this purpose. The effectiveness of the ED Method in locating damage is compared to that of the Damage Index Method. With some modifications, the ED Method is capable of satisfactorily locating and quantifying damage in a shear building subjected to an excitation with predominantly lower-frequency content.
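
    The modal flexibility matrix underlying flexibility-change methods can be assembled directly from identified modal parameters; the sketch below assumes mass-normalized mode shapes and uses hypothetical data, and it stops at a simple per-story change index rather than reproducing the eigenparameter decomposition itself:

        import numpy as np

        def flexibility(freqs_hz, phi):
            """Modal flexibility F = sum_i phi_i phi_i^T / w_i^2 from natural
            frequencies (Hz) and mass-normalized mode shapes (n_dof x n_modes)."""
            w2 = (2 * np.pi * np.asarray(freqs_hz)) ** 2
            return phi @ np.diag(1.0 / w2) @ phi.T

        # Hypothetical 5-story shear building, first three identified modes.
        rng = np.random.default_rng(1)
        phi_base = np.linalg.qr(rng.standard_normal((5, 3)))[0]
        freqs_base = [2.1, 6.3, 10.2]

        # "Damaged" state: slightly lower frequencies, perturbed mode shapes.
        phi_dmg = phi_base + 0.02 * rng.standard_normal((5, 3))
        freqs_dmg = [1.9, 6.1, 10.0]

        dF = flexibility(freqs_dmg, phi_dmg) - flexibility(freqs_base, phi_base)
        index = np.max(np.abs(dF), axis=0)   # per-story flexibility change
        print("largest flexibility change at story", int(np.argmax(index)) + 1)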

  15. Optimization and validation of a high-performance liquid chromatography method for the analysis of cardiac glycosides in Digitalis lanata.

    PubMed

    Pellati, Federica; Bruni, Renato; Bellardi, Maria Grazia; Bertaccini, Assunta; Benvenuti, Stefania

    2009-04-10

    In this study, a simple and reliable HPLC method for the qualitative and quantitative analysis of cardiac glycosides in Digitalis lanata Ehrh. raw material was developed and applied to healthy and phytoplasma-infected plants. The target analytes cover a broad range of secondary metabolites, including primary, secondary and tertiary glycosides and the corresponding aglycones. Sample preparation was carried out by sonication of the plant material with 70% (v/v) aqueous methanol at room temperature, followed by reversed-phase solid-phase extraction purification to remove interfering pigments. The HPLC analyses were performed on a Symmetry C18 column (75 mm × 4.6 mm I.D., 3.5 µm), with gradient elution using water and acetonitrile at a flow rate of 1.0 mL/min. The column temperature was set at 20 °C and the photodiode array detector monitored the eluent at 220 nm. The method was validated according to ICH guidelines, and the validation parameters were found to be highly satisfactory. Application of the method to the analysis of D. lanata leaves indicated that air-drying was the optimum method for raw material processing when compared with freeze-drying. The analysis of healthy and phytoplasma-infected plants demonstrated that the secondary metabolite most affected by the presence of the pathogen was lanatoside C (153.2 µg/100 mg versus 76.1 µg/100 mg). Considering the importance of D. lanata plant material as a source of cardiac glycosides, the developed method can be considered suitable for the phytochemical analysis and quality assurance of D. lanata used for pharmaceutical purposes.

  16. Developmental and internal validation of a novel 13 loci STR multiplex method for Cannabis sativa DNA profiling.

    PubMed

    Houston, Rachel; Birck, Matthew; Hughes-Stamm, Sheree; Gangitano, David

    2017-05-01

    Marijuana (Cannabis sativa L.) is a plant cultivated and trafficked worldwide as a source of fiber (hemp), medicine, and intoxicant. The development of a validated method using molecular techniques such as short tandem repeats (STRs) could serve as an intelligence tool to link multiple cases by means of genetic individualization or association of cannabis samples. For this purpose, a 13-locus STR multiplex method was developed, optimized, and validated according to the relevant ISFG and SWGDAM guidelines. The STR multiplex consists of 13 previously described C. sativa STR loci: ANUCS501, 9269, 4910, 5159, ANUCS305, 9043, B05, 1528, 3735, CS1, D02, C11, and H06. A sequenced allelic ladder consisting of 56 alleles was designed to accurately genotype 101 C. sativa samples from three seizures provided by a U.S. Customs and Border Protection crime lab. Using an optimal range of DNA (0.5-1.0 ng), validation studies revealed well-balanced electropherograms (inter-locus balance range: 0.500-1.296) and relatively balanced heterozygous peaks (mean peak height ratio of 0.83 across all loci), with minimal artifacts and stutter (mean stutter ratio of 0.021 across all loci). This multi-locus system is relatively sensitive (0.13 ng of template DNA), with a combined power of discrimination of 1 in 55 million. The 13 STR panel was found to be species-specific for C. sativa; however, non-specific peaks were produced with Humulus lupulus. The results of this research demonstrate the robustness and applicability of this 13-locus STR system for forensic DNA profiling of marijuana samples. Copyright © 2017 Elsevier B.V. All rights reserved.
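
    The combined power of discrimination quoted above is the complement of the product of per-locus matching probabilities; the sketch below shows the computation with hypothetical allele frequencies for three loci (the study's frequencies and full 13-locus panel are not reproduced):

        import numpy as np
        from itertools import combinations_with_replacement

        def locus_matching_probability(freqs):
            """PM = sum of squared genotype frequencies (Hardy-Weinberg assumed)."""
            pm = 0.0
            for i, j in combinations_with_replacement(range(len(freqs)), 2):
                g = freqs[i]**2 if i == j else 2 * freqs[i] * freqs[j]
                pm += g * g
            return pm

        # Hypothetical allele frequencies at three loci.
        loci = [np.array([0.40, 0.35, 0.25]),
                np.array([0.50, 0.30, 0.15, 0.05]),
                np.array([0.60, 0.40])]

        # Combined matching probability (independence between loci assumed);
        # the combined power of discrimination is its complement.
        combined_pm = np.prod([locus_matching_probability(f) for f in loci])
        print(f"combined power of discrimination: 1 in {1 / combined_pm:,.0f}")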

  17. Development of a benchtop baking method for chemically leavened crackers. II. Validation of the method

    USDA-ARS?s Scientific Manuscript database

    A benchtop baking method has been developed to predict the contribution of gluten functionality to overall flour performance for chemically leavened crackers. Using a diagnostic formula and procedure, dough rheology was analyzed to evaluate the extent of gluten development during mixing and machinin...

  18. [Validation Study for Analytical Method of Diarrhetic Shellfish Poisons in 9 Kinds of Shellfish].

    PubMed

    Yamaguchi, Mizuka; Yamaguchi, Takahiro; Kakimoto, Kensaku; Nagayoshi, Haruna; Okihashi, Masahiro; Kajimura, Keiji

    2016-01-01

    A method was developed for the simultaneous determination of okadaic acid, dinophysistoxin-1 and dinophysistoxin-2 in shellfish using ultra-performance liquid chromatography with tandem mass spectrometry. Shellfish poisons in spiked samples were extracted with methanol and 90% methanol, and were hydrolyzed with 2.5 mol/L sodium hydroxide solution. Purification was performed on an HLB solid-phase extraction column. The method was validated in accordance with the notification of the Ministry of Health, Labour and Welfare of Japan. In the validation study covering nine kinds of shellfish, the trueness was 79-101%, and the repeatability and within-laboratory reproducibility were less than 12% and less than 16%, respectively. The trueness and precision met the target values of the notification.

  19. Validation of a liquid chromatographic method for determination of tacrolimus in pharmaceutical dosage forms.

    PubMed

    Moyano, María A; Simionato, Laura D; Pizzorno, María T; Segall, Adriana I

    2006-01-01

    An accurate, simple, and reproducible liquid chromatographic method was developed and validated for the determination of tacrolimus in capsules. The analysis is performed at room temperature on a reversed-phase C18 column with UV detection at 210 nm. The mobile phase is methanol-water (90 + 10) at a constant flow rate of 0.8 mL/min. The method was validated in terms of linearity, precision, accuracy, and specificity by forced decomposition of tacrolimus, using acid, base, water, hydrogen peroxide, heat, and light. The response was linear in the range of 0.09-0.24 mg/mL (r2 = 0.9997). The relative standard deviation values for intra- and interday precision studies were 1.28 and 2.91%, respectively. Recoveries ranged from 98.06 to 102.52%.
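
    Precision figures like these are relative standard deviations of replicate assays; a minimal sketch with hypothetical data follows (a variance-components ANOVA is the more rigorous route to interday precision):

        import numpy as np

        # Intra-day (repeatability) and inter-day precision as %RSD.
        # Rows = days, columns = replicate assays (e.g., % of label claim);
        # all values are hypothetical.
        assays = np.array([[ 99.1, 100.4,  98.7, 101.0,  99.8, 100.2],
                           [101.5, 102.3, 100.9,  99.6, 101.8, 100.4],
                           [ 97.9,  99.0,  98.4, 100.1,  98.8,  99.5]])

        intra_rsd = 100 * assays[0].std(ddof=1) / assays[0].mean()  # one day
        inter_rsd = 100 * assays.std(ddof=1) / assays.mean()        # all days pooled
        print(f"intra-day RSD = {intra_rsd:.2f}%, inter-day RSD = {inter_rsd:.2f}%")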

  20. Validity of Standard Measures of Family Planning Service Quality: Findings from the Simulated Client Method

    PubMed Central

    Tumlinson, Katherine; Speizer, Ilene S.; Curtis, Sian L.; Pence, Brian W.

    2014-01-01

    Despite widespread endorsement within the field of international family planning of the importance of quality of care as a reproductive right, the field has yet to develop validated data collection instruments to accurately assess quality in terms of its public health importance. This study, conducted among 19 higher-volume public and private facilities in Kisumu, Kenya, used the simulated client method to test the validity of three standard data collection instruments included in large-scale facility surveys: provider interviews, client interviews, and observation of client-provider interactions. The results showed low specificity and low positive predictive values for a number of quality indicators in each of the three instruments, suggesting that traditional methods may overestimate quality of care. Revised approaches to measuring family planning service quality may be needed to ensure accurate assessment of programs and to better inform quality improvement interventions. PMID:25469929