Sample records for valid reference region

  1. Evaluation of a moderate resolution, satellite-based impervious surface map using an independent, high-resolution validation data set

    USGS Publications Warehouse

    Jones, J.W.; Jarnagin, T.

    2009-01-01

    Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products, high-quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (satellite-predicted area minus reference area) and relative error [(satellite-predicted area minus reference area)/reference area] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment provided for evaluation of both validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. © 2009 ASCE.
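
    The two error measures quoted above are simple per-region differences between the satellite estimate and the high-resolution reference. A minimal sketch of that arithmetic (illustrative numbers, not the study's data):

```python
# Per-region absolute and relative error of satellite-estimated impervious surface area (ISA),
# as defined in the abstract above. Values are illustrative, not the study's measurements.
import numpy as np

predicted = np.array([12.4, 30.1, 5.2])   # NLCD-estimated impervious area per sample region (ha)
reference = np.array([13.0, 32.5, 5.0])   # high-resolution "reference area" per region (ha)

absolute_error = predicted - reference                 # satellite-predicted area minus reference area
relative_error = (predicted - reference) / reference   # signed fraction of the reference area

print("mean absolute error (ha):", absolute_error.mean().round(2))
print("mean relative error (%):", (100 * relative_error).mean().round(1))
```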

  2. Analysis of Carbamate Pesticides: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS666

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Koester, C

    The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method for analysis of aldicarb, bromadiolone, carbofuran, oxamyl, and methomyl in water by high performance liquid chromatography tandem mass spectrometry (HPLC-MS/MS), titled Method EPA MS666. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in MS666 for analysis of carbamate pesticides in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of Method EPA MS666 can be determined.

  3. Analysis of Ethanolamines: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS888

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Vu, A; Koester, C

    The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method titled 'Analysis of Diethanolamine, Triethanolamine, n-Methyldiethanolamine, and n-Ethyldiethanolamine in Water by Single Reaction Monitoring Liquid Chromatography/Tandem Mass Spectrometry (LC/MS/MS): EPA Method MS888'. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in 'EPA Method MS888' for analysis of the listed ethanolamines in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of 'EPA Method MS888' can be determined.

  4. Analysis of Thiodiglycol: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS777

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Koester, C

    The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method for the analysis of thiodiglycol, the breakdown product of the sulfur mustard HD, in water by high performance liquid chromatography tandem mass spectrometry (HPLC-MS/MS), titled Method EPA MS777 (hereafter referred to as EPA CRL SOP MS777). This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to verify the analytical procedures described in MS777 for analysis of thiodiglycol in aqueous samples. The gathered data from this study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of Method EPA MS777 can be determined.

  5. Plans and progress for building a Great Lakes fauna DNA barcode reference library

    EPA Science Inventory

    DNA reference libraries provide researchers with an important tool for assessing regional biodiversity by allowing unknown genetic sequences to be assigned identities, while also providing a means for taxonomists to validate identifications. Expanding the representation of Great...

  6. Analysis of Phosphonic Acids: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS999

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Vu, A; Koester, C

    The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method titled Analysis of Diisopropyl Methylphosphonate, Ethyl Hydrogen Dimethylamidophosphate, Isopropyl Methylphosphonic Acid, Methylphosphonic Acid, and Pinacolyl Methylphosphonic Acid in Water by Multiple Reaction Monitoring Liquid Chromatography/Tandem Mass Spectrometry: EPA Version MS999. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in EPA Method MS999 for analysis of the listed phosphonic acids and surrogates in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of EPA Method MS999 can be determined.

  7. First Definition of Reference Intervals of Liver Function Tests in China: A Large-Population-Based Multi-Center Study about Healthy Adults

    PubMed Central

    Zhang, Chuanbao; Guo, Wei; Huang, Hengjian; Ma, Yueyun; Zhuang, Junhua; Zhang, Jie

    2013-01-01

    Background Reference intervals of liver function tests are very important for the screening, diagnosis, treatment, and monitoring of liver diseases. We aim to establish common reference intervals of liver function tests specifically for the Chinese adult population. Methods A total of 3210 individuals (20–79 years) were enrolled in six representative geographical regions in China. Analytes of ALT, AST, GGT, ALP, total protein, albumin and total bilirubin were measured using three analytical systems mainly used in China. The newly established reference intervals were based on the results of traceability or multiple systems, and then validated in 21 large hospitals located nationwide and qualified by the National External Quality Assessment (EQA) scheme of China. Results We established reference intervals of the seven liver function tests for the Chinese adult population and found apparent variation in reference values across the partitioning variables of gender (ALT, GGT, total bilirubin), age (ALP, albumin) and region (total protein). More than 86% of the 21 laboratories passed the validation in all subgroups of the reference intervals, and overall about 95.3% to 98.8% of the 1220 validation results fell within the range of the new reference interval for each liver function test. In comparison with the currently recommended reference intervals in China, the one-sided observed proportions of out-of-range reference values in our study deviated significantly from the nominal 2.5% for most of the tests, such as total bilirubin (15.2%), ALP (0.2%), and albumin (0.0%). Most of the reference intervals in our study also differed clearly from those reported for other ethnic groups. Conclusion The currently recommended reference intervals are no longer applicable for the Chinese population. We have established common reference intervals of liver function tests that are defined specifically for the Chinese population and can be universally used among EQA-approved laboratories located all over China. PMID:24058449
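
    A common nonparametric way to derive such limits is the central 95% interval (2.5th–97.5th percentiles) of the reference cohort, with independent validation results then checked against those limits; the study's exact statistical procedure may differ, so the following is only a hedged sketch with simulated values.

```python
# Sketch: nonparametric reference interval (2.5th-97.5th percentiles) and the kind of
# external validation check described above. Data are simulated, not the study's results.
import numpy as np

rng = np.random.default_rng(0)
alt_reference = rng.lognormal(mean=3.0, sigma=0.4, size=3210)    # hypothetical ALT values (U/L)
lower, upper = np.percentile(alt_reference, [2.5, 97.5])

alt_validation = rng.lognormal(mean=3.0, sigma=0.4, size=1220)   # hypothetical EQA-laboratory results
within = np.mean((alt_validation >= lower) & (alt_validation <= upper))
print(f"reference interval: {lower:.1f}-{upper:.1f} U/L; "
      f"{100 * within:.1f}% of validation results fall inside")
```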

  8. A hybrid deconvolution approach for estimation of in vivo non-displaceable binding for brain PET targets without a reference region

    PubMed Central

    Mann, J. John; Ogden, R. Todd

    2017-01-01

    Background and aim Estimation of a PET tracer’s non-displaceable distribution volume (VND) is required for quantification of specific binding to its target of interest. VND is generally assumed to be comparable brain-wide and is determined either from a reference region devoid of the target, often not available for many tracers and targets, or by imaging each subject before and after blocking the target with another molecule that has high affinity for the target, which is cumbersome and involves additional radiation exposure. Here we propose, and validate for the tracers [11C]DASB and [11C]CUMI-101, a new data-driven hybrid deconvolution approach (HYDECA) that determines VND at the individual level without requiring either a reference region or a blocking study. Methods HYDECA requires the tracer metabolite-corrected concentration curve in blood plasma and uses a singular value decomposition to estimate the impulse response function across several brain regions from measured time activity curves. HYDECA decomposes each region’s impulse response function into the sum of a parametric non-displaceable component, which is a function of VND, assumed common across regions, and a nonparametric specific component. These two components differentially contribute to each impulse response function. Different regions show different contributions of the two components, and HYDECA examines data across regions to find a suitable common VND. HYDECA implementation requires determination of two tuning parameters, and we propose two strategies for objectively selecting these parameters for a given tracer: using data from blocking studies, and realistic simulations of the tracer. Using available test-retest data, we compare HYDECA estimates of VND and binding potentials to those obtained based on VND estimated using a purported reference region. Results For [11C]DASB and [11C]CUMI-101, we find that regardless of the strategy used to optimize the tuning parameters, HYDECA provides considerably less biased estimates of VND than those obtained, as is commonly done, using a non-ideal reference region. HYDECA test-retest reproducibility is comparable to that obtained using a VND determined from a non-ideal reference region, when considering the binding potentials BPP and BPND. Conclusions HYDECA can provide subject-specific estimates of VND without requiring a blocking study for tracers and targets for which a valid reference region does not exist. PMID:28459878

  9. Sequencing and Validation of Reference Genes to Analyze Endogenous Gene Expression and Quantify Yellow Dwarf Viruses Using RT-qPCR in Viruliferous Rhopalosiphum padi

    PubMed Central

    Wu, Keke; Liu, Wenwen; Mar, Thithi; Liu, Yan; Wu, Yunfeng; Wang, Xifeng

    2014-01-01

    The bird cherry-oat aphid (Rhopalosiphum padi), an important pest of cereal crops, not only directly sucks sap from plants, but also transmits a number of plant viruses, known collectively as the yellow dwarf viruses (YDVs). For quantifying changes in gene expression in vector aphids, reverse transcription-quantitative polymerase chain reaction (RT-qPCR) is a touchstone method, but the selection and validation of housekeeping genes (HKGs) as reference genes to normalize the expression levels of endogenous genes of the vector and of exogenous genes of the virus in the aphids is critical to obtaining valid results. Such an assessment has not been done, however, for R. padi and YDVs. Here, we tested three algorithms (GeNorm, NormFinder and BestKeeper) to assess the suitability of candidate reference genes (EF-1α, ACT1, GAPDH, 18S rRNA) in 6 combinations of YDV and vector aphid morph. EF-1α and ACT1 together or in combination with GAPDH or with GAPDH and 18S rRNA could confidently be used to normalize virus titre and expression levels of endogenous genes in winged or wingless R. padi infected with the Barley yellow dwarf virus (BYDV) isolates BYDV-PAV and BYDV-GAV. The use of only one reference gene, whether the most stably expressed (EF-1α) or the least stably expressed (18S rRNA), was not adequate for obtaining valid relative expression data from the RT-qPCR. Because of discrepancies among values for changes in relative expression obtained using 3 regions of the same gene, different regions of an endogenous aphid gene, including each terminus and the middle, should be analyzed at the same time with RT-qPCR. Our results highlight the necessity of choosing the best reference genes to obtain valid experimental data and provide several HKGs for relative quantification of virus titre in YDV-viruliferous aphids. PMID:24810421
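
    GeNorm, one of the three algorithms named above, ranks candidate reference genes by an average pairwise-variation stability value M (lower is more stable). The study used the published tools; the sketch below only illustrates that measure on toy relative-expression data.

```python
# GeNorm-style stability value M: for each candidate reference gene, the mean standard
# deviation of log2 expression ratios against every other candidate (illustration only).
import numpy as np

def genorm_m(expr):
    """expr: (n_samples, n_genes) array of relative expression values on a linear scale."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        pairwise_sd = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                       for k in range(n_genes) if k != j]
        m[j] = np.mean(pairwise_sd)
    return m  # lower M = more stably expressed candidate

genes = ["EF-1a", "ACT1", "GAPDH", "18S rRNA"]
rng = np.random.default_rng(1)
expr = rng.lognormal(mean=0.0, sigma=[0.10, 0.12, 0.20, 0.50], size=(24, 4))  # toy data
for gene, m in sorted(zip(genes, genorm_m(expr)), key=lambda pair: pair[1]):
    print(f"{gene}: M = {m:.3f}")
```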

  10. Differential DNA Methylation Analysis without a Reference Genome.

    PubMed

    Klughammer, Johanna; Datlinger, Paul; Printz, Dieter; Sheffield, Nathan C; Farlik, Matthias; Hadler, Johanna; Fritsch, Gerhard; Bock, Christoph

    2015-12-22

    Genome-wide DNA methylation mapping uncovers epigenetic changes associated with animal development, environmental adaptation, and species evolution. To address the lack of high-throughput methods for DNA methylation analysis in non-model organisms, we developed an integrated approach for studying DNA methylation differences independent of a reference genome. Experimentally, our method relies on an optimized 96-well protocol for reduced representation bisulfite sequencing (RRBS), which we have validated in nine species (human, mouse, rat, cow, dog, chicken, carp, sea bass, and zebrafish). Bioinformatically, we developed the RefFreeDMA software to deduce ad hoc genomes directly from RRBS reads and to pinpoint differentially methylated regions between samples or groups of individuals (http://RefFreeDMA.computational-epigenetics.org). The identified regions are interpreted using motif enrichment analysis and/or cross-mapping to annotated genomes. We validated our method by reference-free analysis of cell-type-specific DNA methylation in the blood of human, cow, and carp. In summary, we present a cost-effective method for epigenome analysis in ecology and evolution, which enables epigenome-wide association studies in natural populations and species without a reference genome. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
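
    For a single region, a differential-methylation call between two samples reduces to comparing methylated versus unmethylated read counts; a hedged toy version of such a per-region test (not the RefFreeDMA implementation, which first deduces the ad hoc reference) is:

```python
# Toy per-region differential-methylation test on bisulfite read counts using Fisher's
# exact test; an illustration only, not the RefFreeDMA pipeline.
from scipy.stats import fisher_exact

def diff_methylation(meth_a, unmeth_a, meth_b, unmeth_b):
    """Return (methylation-level difference, Fisher p-value) for one region, samples A vs. B."""
    level_a = meth_a / (meth_a + unmeth_a)
    level_b = meth_b / (meth_b + unmeth_b)
    _, p_value = fisher_exact([[meth_a, unmeth_a], [meth_b, unmeth_b]])
    return level_a - level_b, p_value

difference, p = diff_methylation(meth_a=45, unmeth_a=5, meth_b=12, unmeth_b=38)
print(f"methylation difference = {difference:.2f}, p = {p:.2e}")
```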

  11. Field Demonstration and Validation of a New Device for Measuring Water and Solute Fluxes at CFB Borden

    DTIC Science & Technology

    2006-11-01

    All Quality Control Reference Materials are acquired only from authorized vendors or sources commonly used by U.S. EPA Regional Laboratories...Institute of Standards and Technology (NIST) Standard Reference Materials (SRM) or to the U.S. EPA Reference Standards. Working Standards The commercial...contaminants from clothing or equipment by blowing, shaking or any other means that may disperse material into the air is prohibited. 7.1.3. All disposable

  12. Field Demonstration and Validation of a New Device for Measuring Water and Solute Fluxes at Naval Base Ventura County (NBVC), Port Hueneme, CA

    DTIC Science & Technology

    2006-07-01

    All Quality Control Reference Materials are acquired only from authorized vendors or sources commonly used by U.S. EPA Regional Laboratories...are traceable to the National Institute of Standards and Technology (NIST) Standard Reference Materials (SRM) or to the U.S. EPA Reference Standards... clothing or equipment by blowing, shaking or any other means that may disperse material into the air is prohibited. 7.1.3. All disposable personal

  13. Ionospheric foF2 at EIA region: comparison between observations and IRI model

    NASA Astrophysics Data System (ADS)

    Chuo, Y. J.; Lee, C. C.

    We have used data from an equatorial ionization anomaly area station in the western Pacific region to study the monthly variability of foF2. Diurnal, seasonal and solar activity effects were investigated. The data established by this study are proposed as valid input values for the development of URSI and CCIR options for the International Reference Ionosphere.

  14. Cross - Scale Intercomparison of Climate Change Impacts Simulated by Regional and Global Hydrological Models in Eleven Large River Basins

    NASA Technical Reports Server (NTRS)

    Hattermann, F. F.; Krysanova, V.; Gosling, S. N.; Dankers, R.; Daggupati, P.; Donnelly, C.; Florke, M.; Huang, S.; Motovilov, Y.; Buda, S.; et al.

    2017-01-01

    Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity to climate variability and climate change is comparable for impact models designed for either scale. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a better reproduction of reference conditions. However, the sensitivity of the two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases, but have distinct differences in other cases, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability. Whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models calibrated and validated against observed discharge should be used.
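
    The ensemble summaries discussed here (long-term mean monthly discharge per model, the ensemble median and spread, and bias against observations) reduce to simple array operations; a sketch with synthetic numbers for one basin:

```python
# Ensemble summaries for one river basin: per-model long-term mean monthly discharge,
# ensemble median and spread, and percent bias in mean discharge (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
# monthly_q[model, year, month]: simulated mean monthly discharge (m^3/s), 9-model ensemble
monthly_q = rng.gamma(shape=4.0, scale=250.0, size=(9, 30, 12))
obs_climatology = rng.gamma(shape=4.0, scale=260.0, size=12)      # observed mean seasonal cycle

model_climatology = monthly_q.mean(axis=1)                        # (9 models, 12 months)
ensemble_median = np.median(model_climatology, axis=0)
ensemble_spread = model_climatology.max(axis=0) - model_climatology.min(axis=0)
percent_bias = 100 * (model_climatology.mean(axis=1) - obs_climatology.mean()) / obs_climatology.mean()

print("ensemble median, Jan-Dec (m^3/s):", np.round(ensemble_median, 1))
print("per-model bias in mean discharge (%):", np.round(percent_bias, 1))
```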

  15. Cross-scale intercomparison of climate change impacts simulated by regional and global hydrological models in eleven large river basins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hattermann, F. F.; Krysanova, V.; Gosling, S. N.

    Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity of impact models designed for either scale to climate variability and change is comparable. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a much better reproduction of reference conditions. However, the sensitivity of two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases with distinct differences in others, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability, but whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models validated against observed discharge should be used.

  16. The role of observational reference data for climate downscaling: Insights from the VALUE COST Action

    NASA Astrophysics Data System (ADS)

    Kotlarski, Sven; Gutiérrez, José M.; Boberg, Fredrik; Bosshard, Thomas; Cardoso, Rita M.; Herrera, Sixto; Maraun, Douglas; Mezghani, Abdelkader; Pagé, Christian; Räty, Olle; Stepanek, Petr; Soares, Pedro M. M.; Szabo, Peter

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of downscaling methods. Such assessments can be expected to crucially depend on the existence of accurate and reliable observational reference data. In dynamical downscaling, observational data can influence model development itself and, later on, model evaluation, parameter calibration and added value assessment. In empirical-statistical downscaling, observations serve as predictand data and directly influence model calibration with corresponding effects on downscaled climate change projections. We here present a comprehensive assessment of the influence of uncertainties in observational reference data and of scale-related issues on several of the above-mentioned aspects. First, temperature and precipitation characteristics as simulated by a set of reanalysis-driven EURO-CORDEX RCM experiments are validated against three different gridded reference data products, namely (1) the EOBS dataset, (2) the recently developed EURO4M-MESAN regional re-analysis, and (3) several national high-resolution and quality-controlled gridded datasets that recently became available. The analysis reveals a considerable influence of the choice of the reference data on the evaluation results, especially for precipitation. It is also illustrated how differences between the reference data sets influence the ranking of RCMs according to a comprehensive set of performance measures.

  17. Generating Ground Reference Data for a Global Impervious Surface Survey

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; De Colstoun, Eric Brown; Wolfe, Robert E.; Tan, Bin; Huang, Chengquan

    2012-01-01

    We are developing an approach for generating ground reference data in support of a project to produce a 30m impervious cover data set of the entire Earth for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. Since sufficient ground reference data for training and validation is not available from ground surveys, we are developing an interactive tool, called HSegLearn, to facilitate the photo-interpretation of 1 to 2 m spatial resolution imagery data, which we will use to generate the needed ground reference data at 30m. Through the submission of selected region objects and positive or negative examples of impervious surfaces, HSegLearn enables an analyst to automatically select groups of spectrally similar objects from a hierarchical set of image segmentations produced by the HSeg image segmentation program at an appropriate level of segmentation detail, and label these region objects as either impervious or nonimpervious.
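
    Producing 30 m reference values from 1-2 m photo-interpreted labels amounts to block-averaging a binary impervious mask onto the Landsat grid. The sketch below assumes 2 m labels and a 15 x 15 aggregation block; it is not the HSegLearn code.

```python
# Block-average a high-resolution binary impervious mask (assumed 2 m labels) to 30 m
# percent-impervious reference values; a sketch, not the HSegLearn implementation.
import numpy as np

def aggregate_to_30m(mask_2m, block=15):
    """mask_2m: 2-D array of 0/1 impervious labels at 2 m; block=15 yields 30 m cells."""
    rows, cols = mask_2m.shape
    rows -= rows % block                        # drop edge pixels that do not fill a 30 m cell
    cols -= cols % block
    blocks = mask_2m[:rows, :cols].astype(float).reshape(rows // block, block, cols // block, block)
    return 100.0 * blocks.mean(axis=(1, 3))     # percent impervious per 30 m cell

mask = (np.random.default_rng(3).random((300, 300)) < 0.35).astype(np.uint8)  # toy 2 m labels
print(aggregate_to_30m(mask).shape)             # (20, 20) grid of 30 m reference values
```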

  18. Creation of a Multiresolution and Multiaccuracy Dtm: Problems and Solutions for Heli-Dem Case Study

    NASA Astrophysics Data System (ADS)

    Biagi, L.; Carcano, L.; Lucchese, A.; Negretti, M.

    2013-01-01

    The work is part of the "HELI-DEM" (HELvetia-Italy Digital Elevation Model) project, funded by the European Regional Development Fund within the Italy-Switzerland cooperation program. The aim of the project is the creation of a unique DTM for the alpine and subalpine area between Italy (Piedmont, Lombardy) and Switzerland (Ticino and Grisons Cantons); at present, different DTMs, which are in different reference frames and have been obtained with different technologies, accuracies, and resolutions, have been acquired. The final DTM should be correctly georeferenced and produced by validating and integrating the data that are available for the project. DTMs are fundamental in hydrogeological studies, especially in alpine areas where hydrogeological risks may exist. Moreover, when an event, like for example a landslide, happens at the border between countries, a unique and integrated DTM which covers the interest area is useful to analyze the scenario. In this sense, the HELI-DEM project is helpful. To perform analyses along the borders between countries, transnational geographic information is needed: a transnational DTM can be obtained by merging regional low resolution DTMs. Moreover, high resolution local DTMs should be used where they are available. To be merged, low and high resolution DTMs should be in the same three dimensional reference frame, should not present biases and should be consistent in the overlapping areas. Cross-validation between the different DTMs is therefore needed. Two different problems should be solved: the merging of regional, partly overlapping low and medium resolution DTMs into a unique low/medium resolution DTM and the merging with other local high resolution/high accuracy height data. This paper discusses the preliminary processing of the data for the fusion of low and high resolution DTMs in a case-study area within the Lombardy region: the Valtellina valley. In this region the Lombardy regional low resolution DTM is available, with a horizontal resolution of 20 meters; in addition a LiDAR DTM with a horizontal resolution of 1 meter, which covers only the main hydrographic basins, is also available. The two DTMs have been transformed into the same reference frame. The cross-validation of the two datasets has been performed by comparing the low resolution DTM with the local high resolution DTM. Then, where significant differences are present, GPS surveys have been used as external validation. The results are presented. Finally, a possible strategy for the future fusion of the data is briefly summarized at the end of the paper.
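
    The cross-validation step described above essentially compares the two DTMs on a common grid, once they are in the same reference frame, and screens the height differences for biases and outliers; a hedged sketch with synthetic grids:

```python
# Sketch of DTM cross-validation: difference statistics between the 20 m regional DTM and
# the LiDAR DTM aggregated to the same grid; synthetic elevations for illustration.
import numpy as np

rng = np.random.default_rng(4)
dtm_low = rng.normal(1500.0, 300.0, size=(50, 50))                    # 20 m regional DTM (m a.s.l.)
dtm_high_on_low_grid = dtm_low + rng.normal(0.5, 2.0, size=(50, 50))  # LiDAR DTM resampled to 20 m

diff = dtm_high_on_low_grid - dtm_low
threshold = 3 * diff.std(ddof=1)
outliers = np.abs(diff - diff.mean()) > threshold                     # candidates for GPS checking
print(f"bias = {diff.mean():.2f} m, std = {diff.std(ddof=1):.2f} m, outlier cells = {outliers.sum()}")
```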

  19. Estimating effectiveness of HPV vaccination against HPV infection from post-vaccination data in the absence of baseline data.

    PubMed

    Vänskä, Simopekka; Söderlund-Strand, Anna; Uhnoo, Ingrid; Lehtinen, Matti; Dillner, Joakim

    2018-04-28

    HPV vaccination programs have been introduced in large parts of the world, but monitoring of effectiveness is not routinely performed. Many countries introduced vaccination programs without establishing the baseline of HPV prevalences. We developed and validated methods to estimate the protective effectiveness (PE) of vaccination from post-vaccination data alone using references, which are invariant under HPV vaccination. Type-specific HPV prevalence data for 15-39 year-old women were collected from the pre- and post-vaccination era in a region in southern Sweden. In a region in middle Sweden, where no baseline data had been collected, only post-vaccination data was collected. The age-specific baseline prevalences of vaccine HPV types (vtHPV: HPV 6, 11, 16, 18) were reconstructed as Beta distributions from post-vaccination data by applying the reference odds ratios between the target HPV type and non-vaccine-type HPV (nvtHPV) prevalences. Older non-vaccinated age cohorts and the southern Sweden region were used as the references. The methods for baseline reconstruction were validated by computing the Bhattacharyya coefficient (BC), a measure of divergence, between reconstructed and actual observed prevalences for vaccine HPV types in southern Sweden, and in addition, for non-vaccine types in both regions. The PE estimates among 18-21 year-old women were validated by comparing the PE estimates that were based on the reconstructed baseline prevalences against the PE estimates based on the actual baseline prevalences. In southern Sweden the PEs against vtHPV were 52.2% (95% CI: 44.9-58.5) using the reconstructed baseline and 49.6% (43.2-55.5) using the actual baseline, with a high BC of 82.7% between the reconstructed and actual baselines. In the middle Sweden region, where baseline data were missing, the PE was estimated at 40.5% (31.6-48.5). Protective effectiveness of HPV vaccination can be estimated from post-vaccination data alone via reconstructing the baseline using non-vaccine HPV type data. Copyright © 2018 Elsevier Ltd. All rights reserved.
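
    One reading of the reconstruction step is: take the reference odds ratio between vtHPV and nvtHPV (from non-vaccinated cohorts or the reference region), apply it to the nvtHPV odds observed post-vaccination, and treat the resulting prevalence as the baseline against which PE is computed. The sketch below uses point estimates and made-up numbers; the paper models the baseline as a Beta distribution.

```python
# Illustrative reconstruction of baseline vaccine-type HPV (vtHPV) prevalence from a
# reference odds ratio and post-vaccination non-vaccine-type (nvtHPV) prevalence, and the
# resulting protective effectiveness (PE). Numbers are made up; the paper works with
# Beta-distributed baselines rather than point estimates.
def odds(p):
    return p / (1.0 - p)

def prevalence(o):
    return o / (1.0 + o)

or_reference = 1.8        # vtHPV-vs-nvtHPV odds ratio from non-vaccinated reference cohorts
p_nvt_post = 0.20         # observed nvtHPV prevalence post-vaccination (assumed vaccine-invariant)
p_vt_post = 0.06          # observed vtHPV prevalence post-vaccination

p_vt_baseline = prevalence(or_reference * odds(p_nvt_post))   # reconstructed baseline prevalence
pe = 1.0 - p_vt_post / p_vt_baseline
print(f"reconstructed baseline vtHPV prevalence: {p_vt_baseline:.3f}")
print(f"protective effectiveness: {100 * pe:.1f}%")
```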

  20. Validity of administrative data claim-based methods for identifying individuals with diabetes at a population level.

    PubMed

    Southern, Danielle A; Roberts, Barbara; Edwards, Alun; Dean, Stafford; Norton, Peter; Svenson, Lawrence W; Larsen, Erik; Sargious, Peter; Lau, David C W; Ghali, William A

    2010-01-01

    This study assessed the validity of a widely-accepted administrative data surveillance methodology for identifying individuals with diabetes relative to three laboratory data reference standard definitions for diabetes. We used a combination of linked regional data (hospital discharge abstracts and physician data) and laboratory data to test the validity of administrative data surveillance definitions for diabetes relative to a laboratory data reference standard. The administrative discharge data methodology includes two definitions for diabetes: a strict administrative data definition of one hospitalization code or two physician claims indicating diabetes; and a more liberal definition of one hospitalization code or a single physician claim. The laboratory data, meanwhile, produced three reference standard definitions based on glucose levels +/- HbA1c levels. Sensitivities ranged from 68.4% to 86.9% for the administrative data definitions tested relative to the three laboratory data reference standards. Sensitivities were higher for the more liberal administrative data definition. Positive predictive values (PPV), meanwhile, ranged from 53.0% to 88.3%, with the liberal administrative data definition producing lower PPVs. These findings demonstrate the trade-offs of sensitivity and PPV for selecting diabetes surveillance definitions. Centralized laboratory data may be of value to future surveillance initiatives that use combined data sources to optimize case detection.
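
    The headline statistics here are plain confusion-matrix quantities computed against the laboratory reference standard; a minimal sketch with toy counts:

```python
# Sensitivity and positive predictive value (PPV) of an administrative-data case definition
# evaluated against a laboratory reference standard; counts are toy values.
def sensitivity_ppv(true_pos, false_pos, false_neg):
    sensitivity = true_pos / (true_pos + false_neg)   # reference-standard cases that were flagged
    ppv = true_pos / (true_pos + false_pos)           # flagged individuals who truly have diabetes
    return sensitivity, ppv

sens, ppv = sensitivity_ppv(true_pos=8200, false_pos=1100, false_neg=1700)
print(f"sensitivity = {100 * sens:.1f}%, PPV = {100 * ppv:.1f}%")
```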

  1. Local figure-ground cues are valid for natural images.

    PubMed

    Fowlkes, Charless C; Martin, David R; Malik, Jitendra

    2007-06-08

    Figure-ground organization refers to the visual perception that a contour separating two regions belongs to one of the regions. Recent studies have found neural correlates of figure-ground assignment in V2 as early as 10-25 ms after response onset, providing strong support for the role of local bottom-up processing. How much information about figure-ground assignment is available from locally computed cues? Using a large collection of natural images, in which neighboring regions were assigned a figure-ground relation by human observers, we quantified the extent to which figural regions locally tend to be smaller, more convex, and lie below ground regions. Our results suggest that these Gestalt cues are ecologically valid, and we quantify their relative power. We have also developed a simple bottom-up computational model of figure-ground assignment that takes image contours as input. Using parameters fit to natural image statistics, the model is capable of matching human-level performance when scene context is limited.

  2. Assessment of gridded observations used for climate model validation in the Mediterranean region: the HyMeX and MED-CORDEX framework

    NASA Astrophysics Data System (ADS)

    Flaounas, Emmanouil; Drobinski, Philippe; Borga, Marco; Calvet, Jean-Christophe; Delrieu, Guy; Morin, Efrat; Tartari, Gianni; Toffolon, Roberta

    2012-06-01

    This letter assesses the quality of temperature and rainfall daily retrievals of the European Climate Assessment and Dataset (ECA&D) with respect to measurements collected locally in various parts of the Euro-Mediterranean region in the framework of the Hydrological Cycle in the Mediterranean Experiment (HyMeX), endorsed by the Global Energy and Water Cycle Experiment (GEWEX) of the World Climate Research Program (WCRP). The ECA&D, among other gridded datasets, is very often used as a reference for model calibration and evaluation. This is for instance the case in the context of the WCRP Coordinated Regional Downscaling Experiment (CORDEX) and its Mediterranean declination MED-CORDEX. This letter quantifies ECA&D dataset uncertainties associated with temperature and precipitation intra-seasonal variability, seasonal distribution and extremes. Our motivation is to help the interpretation of the results when validating or calibrating downscaling models by the ECA&D dataset in the context of regional climate research in the Euro-Mediterranean region.
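
    Quantifying how well a gridded product reproduces locally collected records typically reduces to paired daily statistics such as bias, RMSE and correlation; a sketch with synthetic temperature series (not HyMeX data):

```python
# Paired daily comparison between a gridded estimate (e.g., one ECA&D grid cell) and a
# co-located station record: bias, RMSE and correlation. Series are synthetic.
import numpy as np

rng = np.random.default_rng(5)
days = np.arange(365)
station = 15 + 8 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, 365)   # station daily T (degC)
gridded = station + rng.normal(0.8, 1.5, 365)                                # gridded daily T (degC)

bias = np.mean(gridded - station)
rmse = np.sqrt(np.mean((gridded - station) ** 2))
corr = np.corrcoef(gridded, station)[0, 1]
print(f"bias = {bias:.2f} degC, RMSE = {rmse:.2f} degC, r = {corr:.3f}")
```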

  3. [Perception scales of validated food insecurity: the experience of the countries in Latin America and the Caribbean].

    PubMed

    Sperandio, Naiara; Morais, Dayane de Castro; Priore, Silvia Eloiza

    2018-02-01

    The scope of this systematic review was to compare the food insecurity scales validated and used in the countries of Latin America and the Caribbean, and to analyze the methods used in validation studies. A search was conducted in the Lilacs, SciELO and Medline electronic databases. The publications were pre-selected by titles and abstracts, and subsequently by a full reading. Of the 16,325 studies reviewed, 14 were selected. Twelve validated scales were identified for the following countries: Venezuela, Brazil, Colombia, Bolivia, Ecuador, Costa Rica, Mexico, Haiti, the Dominican Republic, Argentina and Guatemala. Besides these, there is the Latin American and Caribbean scale, the scope of which is regional. The scales differed in the reference standard used, the number of questions, and the diagnosis of insecurity. The methods used by the studies for internal validation were calculation of Cronbach's alpha and the Rasch model; for external validation the authors calculated association and/or correlation with socioeconomic and food consumption variables. The successful experience of Latin America and the Caribbean in the development of national and regional scales can be an example for other countries that do not have this important indicator capable of measuring the phenomenon of food insecurity.

  4. New technology and regional studies in human ecology: A Papua New Guinea example

    NASA Technical Reports Server (NTRS)

    Morren, George E. B., Jr.

    1991-01-01

    Two key issues in using technologies such as digital image processing and geographic information systems are a conceptually and methodologically valid research design and the exploitation of varied sources of data. With this realized, the new technologies offer anthropologists the opportunity to test hypotheses about spatial and temporal variations in the features of interest within a regionally coherent mosaic of social groups and landscapes. Current research on the Mountain OK of Papua New Guinea is described with reference to these issues.

  5. New Formulation for the Viscosity of n-Butane

    NASA Astrophysics Data System (ADS)

    Herrmann, Sebastian; Vogel, Eckhard

    2018-03-01

    A new viscosity formulation for n-butane, based on the residual quantity concept, uses the reference equation of state by Bücker and Wagner [J. Phys. Chem. Ref. Data 35, 929 (2006)] and is valid in the fluid region from the triple point to 650 K and to 100 MPa. The contributions for the zero-density viscosity and for the initial-density dependence were separately developed, whereas those for the critical enhancement and for the higher-density terms were pretreated. All contributions were given as a function of the reciprocal reduced temperature τ, while the last two contributions were correlated as a function of τ and of the reduced density δ. The different contributions were based on specific primary data sets, whose evaluation and choice were discussed in detail. The final formulation incorporates 13 coefficients derived employing a state-of-the-art linear optimization algorithm. The viscosity at low pressures p ≤ 0.2 MPa is described with an expanded uncertainty of 0.5% (coverage factor k = 2) for temperatures 293 ≤ T/K ≤ 626. The expanded uncertainty in the vapor phase at subcritical temperatures T ≥ 298 K as well as in the supercritical thermodynamic region T ≤ 448 K at pressures p ≤ 30 MPa is estimated to be 1.5%. It is raised to 4.0% in regions where only less reliable primary data sets are available and to 6.0% in ranges without any primary data, but in which the equation of state is valid. A weakness of the reference equation of state in the near-critical region prevents estimation of the expanded uncertainty in this region. Viscosity tables for the new formulation are presented in Appendix B for the single-phase region, for the vapor-liquid phase boundary, and for the near-critical region.

  6. Establishment and validation of analytical reference panels for the standardization of quantitative BCR-ABL1 measurements on the international scale.

    PubMed

    White, Helen E; Hedges, John; Bendit, Israel; Branford, Susan; Colomer, Dolors; Hochhaus, Andreas; Hughes, Timothy; Kamel-Reid, Suzanne; Kim, Dong-Wook; Modur, Vijay; Müller, Martin C; Pagnano, Katia B; Pane, Fabrizio; Radich, Jerry; Cross, Nicholas C P; Labourier, Emmanuel

    2013-06-01

    Current guidelines for managing Philadelphia-positive chronic myeloid leukemia include monitoring the expression of the BCR-ABL1 (breakpoint cluster region/c-abl oncogene 1, non-receptor tyrosine kinase) fusion gene by quantitative reverse-transcription PCR (RT-qPCR). Our goal was to establish and validate reference panels to mitigate the interlaboratory imprecision of quantitative BCR-ABL1 measurements and to facilitate global standardization on the international scale (IS). Four-level secondary reference panels were manufactured under controlled and validated processes with synthetic Armored RNA Quant molecules (Asuragen) calibrated to reference standards from the WHO and the NIST. Performance was evaluated in IS reference laboratories and with non-IS-standardized RT-qPCR methods. For most methods, percent ratios for BCR-ABL1 e13a2 and e14a2 relative to ABL1 or BCR were robust at 4 different levels and linear over 3 logarithms, from 10% to 0.01% on the IS. The intraassay and interassay imprecision was <2-fold overall. Performance was stable across 3 consecutive lots, in multiple laboratories, and over a period of 18 months to date. International field trials demonstrated the commutability of the reagents and their accurate alignment to the IS within the intra- and interlaboratory imprecision of IS-standardized methods. The synthetic calibrator panels are robust, reproducibly manufactured, analytically calibrated to the WHO primary standards, and compatible with most BCR-ABL1 RT-qPCR assay designs. The broad availability of secondary reference reagents will further facilitate interlaboratory comparative studies and independent quality assessment programs, which are of paramount importance for worldwide standardization of BCR-ABL1 monitoring results and the optimization of current and new therapeutic approaches for chronic myeloid leukemia. © 2013 American Association for Clinical Chemistry.
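
    Reporting on the international scale is arithmetically simple: the BCR-ABL1/control-gene percent ratio is multiplied by a laboratory-specific conversion factor. A sketch of that calculation (the conversion factor and copy numbers below are illustrative):

```python
# BCR-ABL1 percent ratio and conversion to the international scale (IS). The conversion
# factor is laboratory-specific; the value used here is illustrative only.
def bcr_abl1_percent_is(bcr_abl1_copies, abl1_copies, conversion_factor):
    percent_ratio = 100.0 * bcr_abl1_copies / abl1_copies   # raw ratio to the control gene
    return percent_ratio * conversion_factor                # aligned to the international scale

result_is = bcr_abl1_percent_is(bcr_abl1_copies=120, abl1_copies=95000, conversion_factor=0.8)
print(f"BCR-ABL1 = {result_is:.3f}% IS")   # values <= 0.1% IS correspond to a major molecular response
```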

  7. The contribution of CEOP data to the understanding and modeling of monsoon systems

    NASA Technical Reports Server (NTRS)

    Lau, William K. M.

    2005-01-01

    CEOP has contributed and will continue to provide integrated data sets from diverse platforms for better understanding of the water and energy cycles, and for validating models. In this talk, I will show examples of how CEOP has contributed to the formulation of a strategy for the study of the monsoon as a system. The CEOP data concept has led to the development of the CEOP Inter-Monsoon Studies (CIMS), which focuses on the identification of model bias and the improvement of model physics such as the diurnal and annual cycles. A multi-model validation project focusing on diurnal variability of the East Asian monsoon, and using CEOP reference site data as well as CEOP integrated satellite data, is now ongoing. Similar validation projects in other monsoon regions are being started. Preliminary studies show that climate models have difficulties in simulating the diurnal signals of total rainfall, rainfall intensity and frequency of occurrence, which have different peak hours depending on location. Furthermore, the model diurnal cycle of rainfall in monsoon regions tends to lead the observed cycle by about 2-3 hours. These model biases offer insight into the lack of, or poor representation of, key components of convective and stratiform rainfall. The CEOP data also stimulated studies to compare and contrast monsoon variability in different parts of the world. It was found that seasonal wind reversal, orographic effects, monsoon depressions, meso-scale convective complexes, SST and land surface influences are common features in all monsoon regions. Strong intraseasonal variability is present in all monsoon regions. While there is a clear demarcation of onset, breaks and withdrawal in the Asian and Australian monsoon region associated with climatological intraseasonal variability, it is less clear in the American and African monsoon regions. The examination of satellite and reference site data in monsoon regions has led to preliminary model experiments to study the impact of aerosols on monsoon variability. I will show examples of how the study of the dynamics of aerosol-water cycle interactions in the monsoon region can best be achieved using the CEOP data and modeling strategy.

  8. A comparative study of multi-focus image fusion validation metrics

    NASA Astrophysics Data System (ADS)

    Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael

    2016-05-01

    Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels can create different visual qualities for different regions in the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit a user through automation, which requires the evaluation of the fused images to determine whether they have properly fused the focused regions of each image. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based measures, have been developed to accomplish such comparisons. However, it is hard to scale an accurate assessment of visual quality, which requires the validation of these metrics for different types of applications. In order to do this, human perception based validation methods have been developed, particularly dealing with the use of receiver operating characteristics (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal to noise ratio (PSNR).
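
    Validating a no-reference metric against binary human judgments via the area under the ROC curve can be done with a rank-based (Mann-Whitney) estimate; a self-contained sketch with toy scores:

```python
# Rank-based AUC (Mann-Whitney estimate) for a no-reference fusion metric scored against
# binary human judgments of fusion quality; scores and labels are toy values.
import numpy as np

def auc(scores, labels):
    """scores: metric values; labels: 1 = fusion judged good by observers, 0 = judged poor."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()     # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()       # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

print(auc(scores=[0.9, 0.8, 0.75, 0.6, 0.4, 0.3], labels=[1, 1, 0, 1, 0, 0]))  # ~0.89
```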

  9. Reference voltage calculation method based on zero-sequence component optimisation for a regional compensation DVR

    NASA Astrophysics Data System (ADS)

    Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang

    2018-04-01

    This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.
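
    The degree of freedom exploited here is that adding one common zero-sequence phasor to all three reference phase voltages leaves the line-to-line voltages unchanged in an ungrounded system, so it can be chosen to limit the phase-to-ground swell. The toy grid search below illustrates that idea only; it is not the paper's optimisation method.

```python
# Toy illustration: search a common zero-sequence phasor V0 that minimises the worst
# phase-to-ground magnitude of the compensated reference voltages. Adding V0 to all three
# phases does not change the line-to-line voltages. Not the paper's optimisation method.
import numpy as np

a = np.exp(2j * np.pi / 3)
v_positive = np.array([1.0 + 0j, a**2, a])          # desired positive-sequence content (p.u.)
v_ref = v_positive + (0.25 - 0.10j)                  # a reference carrying a zero-sequence offset

best_v0, best_peak = 0j, np.max(np.abs(v_ref))
for re in np.linspace(-0.4, 0.4, 81):
    for im in np.linspace(-0.4, 0.4, 81):
        v0 = re + 1j * im
        peak = np.max(np.abs(v_ref + v0))            # worst phase-to-ground magnitude
        if peak < best_peak:
            best_peak, best_v0 = peak, v0

print(f"chosen V0 = {best_v0.real:+.3f}{best_v0.imag:+.3f}j, "
      f"worst phase-to-ground voltage = {best_peak:.3f} p.u.")
```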

  10. Selection of appropriate reference genes for RT-qPCR analysis in a streptozotocin-induced Alzheimer's disease model of cynomolgus monkeys (Macaca fascicularis).

    PubMed

    Park, Sang-Je; Kim, Young-Hyun; Lee, Youngjeon; Kim, Kyoung-Min; Kim, Heui-Soo; Lee, Sang-Rae; Kim, Sun-Uk; Kim, Sang-Hyun; Kim, Ji-Su; Jeong, Kang-Jin; Lee, Kyoung-Min; Huh, Jae-Won; Chang, Kyu-Tae

    2013-01-01

    Reverse transcription quantitative real-time polymerase chain reaction (RT-qPCR) has been widely used to quantify relative gene expression because of the specificity, sensitivity, and accuracy of this technique. In order to obtain reliable gene expression data from RT-qPCR experiments, it is important to utilize optimal reference genes for the normalization of target gene expression under varied experimental conditions. Previously, we developed and validated a novel icv-STZ cynomolgus monkey model for Alzheimer's disease (AD) research. However, in order to enhance the reliability of this disease model, appropriate reference genes must be selected to allow meaningful analysis of the gene expression levels in the icv-STZ cynomolgus monkey brain. In this study, we assessed the expression stability of 9 candidate reference genes in 2 matched-pair brain samples (5 regions) of control cynomolgus monkeys and those who had received intracerebroventricular injection of streptozotocin (icv-STZ). Three well-known analytical programs geNorm, NormFinder, and BestKeeper were used to choose the suitable reference genes from the total sample group, control group, and icv-STZ group. Combination analysis of the 3 different programs clearly indicated that the ideal reference genes are RPS19 and YWHAZ in the total sample group, GAPDH and RPS19 in the control group, and ACTB and GAPDH in the icv-STZ group. Additionally, we validated the normalization accuracy of the most appropriate reference genes (RPS19 and YWHAZ) by comparison with the least stable gene (TBP) using quantification of the APP and MAPT genes in the total sample group. To the best of our knowledge, this research is the first study to identify and validate the appropriate reference genes in cynomolgus monkey brains. These findings provide useful information for future studies involving the expression of target genes in the cynomolgus monkey.

  11. A comprehensive profile of DNA copy number variations in a Korean population: identification of copy number invariant regions among Koreans.

    PubMed

    Jeon, Jae Pil; Shim, Sung Mi; Jung, Jong Sun; Nam, Hye Young; Lee, Hye Jin; Oh, Berm Seok; Kim, Kuchan; Kim, Hyung Lae; Han, Bok Ghee

    2009-09-30

    To examine copy number variations among the Korean population, we compared individual genomes with the Korean reference genome assembly using the publicly available Korean HapMap SNP 50K chip data from 90 individuals. Korean individuals exhibited 123 copy number variation regions (CNVRs) covering 27.2 Mb, equivalent to 1.0% of the genome, in the copy number variation (CNV) analysis using the combined criteria of P value (P < 0.01) and standard deviation of copy numbers (SD ≥ 0.25) among study subjects. In contrast, when compared to the Affymetrix reference genome assembly from multiple ethnic groups, considerably more CNVRs (n = 643) were detected in larger proportions (5.0%) of the genome covering 135.1 Mb even by more stringent criteria (P < 0.001 and SD ≥ 0.25), reflecting ethnic diversity of structural variations between Korean and other populations. Some CNVRs were validated by the quantitative multiplex PCR of short fluorescent fragment (QMPSF) method, and then copy number invariant regions were detected among the study subjects. These copy number invariant regions would be used as good internal controls for further CNV studies. Lastly, we demonstrated that the CNV information could stratify even a single ethnic population with a proper reference genome assembly from multiple heterogeneous populations.
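
    The combined selection criteria quoted above (a P-value threshold plus a minimum copy-number standard deviation across subjects) amount to a simple per-region filter; a sketch with simulated summaries:

```python
# Sketch of the combined CNV-region selection criteria: keep candidate regions whose test
# P value and copy-number standard deviation across subjects pass both thresholds.
# Per-region P values and copy numbers are simulated for illustration.
import numpy as np

rng = np.random.default_rng(6)
copy_numbers = rng.normal(2.0, 0.15, size=(90, 500))          # 90 subjects x 500 candidate regions
copy_numbers[:, :40] += rng.normal(0.0, 0.4, size=(90, 40))   # make some regions truly variable

p_values = rng.uniform(0.0, 1.0, 500)                          # placeholder per-region P values
p_values[:40] = rng.uniform(0.0, 0.005, 40)

sd = copy_numbers.std(axis=0, ddof=1)
selected = (p_values < 0.01) & (sd >= 0.25)                    # criteria used in the study
print(f"{selected.sum()} candidate regions pass both thresholds")
```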

  12. Using CRANID to test the population affinity of known crania.

    PubMed

    Kallenberger, Lauren; Pilbrow, Varsha

    2012-11-01

    CRANID is a statistical program used to infer the source population of a cranium of unknown origin by comparing its cranial dimensions with a worldwide craniometric database. It has great potential for estimating ancestry in archaeological, forensic and repatriation cases. In this paper we test the validity of CRANID in classifying crania of known geographic origin. Twenty-three crania of known geographic origin but unknown sex were selected from the osteological collections of the University of Melbourne. Only 18 crania showed good statistical match with the CRANID database. Without considering accuracy of sex allocation, 11 crania were accurately classified into major geographic regions and nine were correctly classified to geographically closest available reference populations. Four of the five crania with poor statistical match were nonetheless correctly allocated to major geographical regions, although none was accurately assigned to geographically closest reference samples. We conclude that if sex allocations are overlooked, CRANID can accurately assign 39% of specimens to geographically closest matching reference samples and 48% to major geographic regions. Better source population representation may improve goodness of fit, but known sex-differentiated samples are needed to further test the utility of CRANID. © 2012 The Authors Journal of Anatomy © 2012 Anatomical Society.

  13. Dried blood spot testing for seven steroids using liquid chromatography-tandem mass spectrometry with reference interval determination in the Korean population.

    PubMed

    Kim, Borahm; Lee, Mi Na; Park, Hyung Doo; Kim, Jong Won; Chang, Yun Sil; Park, Won Soon; Lee, Soo Youn

    2015-11-01

    Conventional screening for congenital adrenal hyperplasia (CAH) using immunoassays generates a large number of false-positive results. A more specific liquid chromatography-tandem mass spectrometry (LC-MS/MS) method has been introduced to minimize unnecessary follow-ups. However, because of limited data on its use in the Korean population, LC-MS/MS has not yet been incorporated into newborn screening programs in this region. The present study aims to develop and validate an LC-MS/MS method for the simultaneous determination of seven steroids in dried blood spots (DBS) for CAH screening, and to define age-specific reference intervals in the Korean population. We developed and validated an LC-MS/MS method to determine the reference intervals of cortisol, 17-hydroxyprogesterone, 11-deoxycortisol, 21-deoxycortisol, androstenedione, corticosterone, and 11-deoxycorticosterone simultaneously in 453 DBS samples. The samples were from Korean subjects stratified by age group (78 full-term neonates, 76 premature neonates, 89 children, and 100 adults). The accuracy, precision, matrix effects, and extraction recovery were satisfactory for all the steroids at three concentrations; values of intra- and inter-day precision coefficients of variation, bias, and recovery were 0.7-7.7%, -1.5-9.8%, and 49.3-97.5%, respectively. The linearity range was 1-100 ng/mL for cortisol and 0.5-50 ng/mL for other steroids (R²>0.99). The reference intervals were in agreement with the previous reports. This LC-MS/MS method and the reference intervals validated in the Korean population can be successfully applied to analyze seven steroids in DBS for the diagnosis of CAH.
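
    The validation figures quoted (intra- and inter-day CV, bias, extraction recovery) are routine summaries of replicate QC measurements; a hedged sketch of that arithmetic with toy concentrations:

```python
# Method-validation summaries for one QC level: coefficient of variation, bias against the
# nominal spiked concentration, and extraction recovery versus post-extraction spikes.
# Concentrations are toy values, not the study's data.
import numpy as np

nominal = 10.0                                               # spiked concentration (ng/mL)
extracted = np.array([9.7, 10.2, 9.9, 10.4, 9.8])            # replicates spiked before extraction
post_extraction = np.array([10.6, 10.9, 10.7, 11.0, 10.8])   # replicates spiked after extraction

cv = 100 * extracted.std(ddof=1) / extracted.mean()
bias = 100 * (extracted.mean() - nominal) / nominal
recovery = 100 * extracted.mean() / post_extraction.mean()
print(f"CV = {cv:.1f}%, bias = {bias:.1f}%, recovery = {recovery:.1f}%")
```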

  14. The Predictive Validity of Teacher Candidate Letters of Reference

    ERIC Educational Resources Information Center

    Mason, Richard W.; Schroeder, Mark P.

    2014-01-01

    Letters of reference are widely used as an essential part of the hiring process of newly licensed teachers. While the predictive validity of these letters of reference has been called into question it has never been empirically studied. The current study examined the predictive validity of the quality of letters of reference for forty-one student…

  15. Computer-Aided Classification of Visual Ventilation Patterns in Patients with Chronic Obstructive Pulmonary Disease at Two-Phase Xenon-Enhanced CT

    PubMed Central

    Yoon, Soon Ho; Jung, Julip; Hong, Helen; Park, Eun Ah; Lee, Chang Hyun; Lee, Youkyung; Jin, Kwang Nam; Choo, Ji Yung; Lee, Nyoung Keun

    2014-01-01

    Objective To evaluate the technical feasibility, performance, and interobserver agreement of a computer-aided classification (CAC) system for regional ventilation at two-phase xenon-enhanced CT in patients with chronic obstructive pulmonary disease (COPD). Materials and Methods Thirty-eight patients with COPD underwent two-phase xenon ventilation CT with resulting wash-in (WI) and wash-out (WO) xenon images. The regional ventilation in structural abnormalities was visually categorized into four patterns by consensus of two experienced radiologists who compared the xenon attenuation of structural abnormalities with that of adjacent normal parenchyma in the WI and WO images, and it served as the reference. Two series of image datasets of structural abnormalities were randomly extracted for optimization and validation. The proportion of agreement on a per-lesion basis and receiver operating characteristics on a per-pixel basis between CAC and reference were analyzed for optimization. Thereafter, six readers independently categorized the regional ventilation in structural abnormalities in the validation set without and with a CAC map. Interobserver agreement was also compared between assessments without and with CAC maps using multirater κ statistics. Results Computer-aided classification maps were successfully generated in 31 patients (81.5%). The proportion of agreement and the average area under the curve of optimized CAC maps were 94% (75/80) and 0.994, respectively. Multirater κ value was improved from moderate (κ = 0.59; 95% confidence interval [CI], 0.56-0.62) at the initial assessment to excellent (κ = 0.82; 95% CI, 0.79-0.85) with the CAC map. Conclusion Our proposed CAC system demonstrated the potential for regional ventilation pattern analysis and enhanced interobserver agreement on visual classification of regional ventilation. PMID:24843245
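
    Multirater agreement of the kind reported here is often summarised with Fleiss' kappa; the sketch below implements that generic statistic for six readers and four categories on synthetic rating counts (the paper's exact multirater kappa variant may differ).

```python
# Fleiss' kappa for multirater agreement: six readers assigning each lesion to one of four
# ventilation-pattern categories. Rating counts are synthetic; the paper's exact kappa
# variant may differ.
import numpy as np

def fleiss_kappa(counts):
    """counts: (n_items, n_categories) array; counts[i, j] = raters placing item i in category j."""
    counts = np.asarray(counts, dtype=float)
    n_raters = counts.sum(axis=1)[0]                          # assumed constant per item
    p_category = counts.sum(axis=0) / counts.sum()            # overall proportion per category
    p_item = (np.sum(counts**2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_expected = p_item.mean(), np.sum(p_category**2)
    return (p_bar - p_expected) / (1 - p_expected)

rng = np.random.default_rng(7)
ratings = np.zeros((80, 4), dtype=int)                        # 80 lesions, 4 categories
for i, consensus in enumerate(rng.integers(0, 4, size=80)):
    probs = np.full(4, 0.2 / 3)
    probs[consensus] = 0.8                                    # readers agree with consensus 80% of the time
    ratings[i] = rng.multinomial(6, probs)                    # six readers per lesion
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.2f}")
```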

  16. Computer-aided classification of visual ventilation patterns in patients with chronic obstructive pulmonary disease at two-phase xenon-enhanced CT.

    PubMed

    Yoon, Soon Ho; Goo, Jin Mo; Jung, Julip; Hong, Helen; Park, Eun Ah; Lee, Chang Hyun; Lee, Youkyung; Jin, Kwang Nam; Choo, Ji Yung; Lee, Nyoung Keun

    2014-01-01

    To evaluate the technical feasibility, performance, and interobserver agreement of a computer-aided classification (CAC) system for regional ventilation at two-phase xenon-enhanced CT in patients with chronic obstructive pulmonary disease (COPD). Thirty-eight patients with COPD underwent two-phase xenon ventilation CT with resulting wash-in (WI) and wash-out (WO) xenon images. The regional ventilation in structural abnormalities was visually categorized into four patterns by consensus of two experienced radiologists who compared the xenon attenuation of structural abnormalities with that of adjacent normal parenchyma in the WI and WO images, and it served as the reference. Two series of image datasets of structural abnormalities were randomly extracted for optimization and validation. The proportion of agreement on a per-lesion basis and receiver operating characteristics on a per-pixel basis between CAC and reference were analyzed for optimization. Thereafter, six readers independently categorized the regional ventilation in structural abnormalities in the validation set without and with a CAC map. Interobserver agreement was also compared between assessments without and with CAC maps using multirater κ statistics. Computer-aided classification maps were successfully generated in 31 patients (81.5%). The proportion of agreement and the average area under the curve of optimized CAC maps were 94% (75/80) and 0.994, respectively. Multirater κ value was improved from moderate (κ = 0.59; 95% confidence interval [CI], 0.56-0.62) at the initial assessment to excellent (κ = 0.82; 95% CI, 0.79-0.85) with the CAC map. Our proposed CAC system demonstrated the potential for regional ventilation pattern analysis and enhanced interobserver agreement on visual classification of regional ventilation.

  17. Using Ground-Based Measurements and Retrievals to Validate Satellite Data

    NASA Technical Reports Server (NTRS)

    Dong, Xiquan

    2002-01-01

    The proposed research uses the DOE ARM ground-based measurements and retrievals as ground-truth references for validating satellite cloud results and retrieval algorithms. This validation effort proceeds along four lines: (1) cloud properties from different satellites, and therefore different sensors (TRMM VIRS and TERRA MODIS); (2) cloud properties in different climatic regions, such as the DOE ARM SGP, NSA, and TWP sites; (3) different cloud types, i.e., low- and high-level cloud properties; and (4) day and night retrieval algorithms. Validation of satellite-retrieved cloud properties is difficult and a long-term effort because of significant spatial and temporal differences between the surface and satellite observing platforms. The ground-based measurements and retrievals, only when carefully analyzed and validated, can provide a baseline for estimating errors in the satellite products. Although the validation effort is difficult, significant progress has been made during the study period, and the major accomplishments are summarized below.

  18. Calibration test of the temperature and strain sensitivity coefficient in regional reference grating method

    NASA Astrophysics Data System (ADS)

    Wu, Jing; Huang, Junbing; Wu, Hanping; Gu, Hongcan; Tang, Bo

    2014-12-01

    To verify the validity of the regional reference grating method for solving the strain/temperature cross-sensitivity problem in an actual ship structural health monitoring system, and to meet engineering requirements, the sensitivity coefficients of the regional reference grating method were calibrated with national standard measurement equipment: the temperature sensitivity coefficient of the selected FBG temperature sensor and the strain sensitivity coefficient of the FBG strain sensor were calibrated in this model, and the thermal expansion sensitivity coefficient of the ship steel was calibrated with a water bath method. The calibration results show that the temperature sensitivity coefficient of the FBG temperature sensor is 28.16 pm/°C within -10~30°C with a linearity greater than 0.999, the strain sensitivity coefficient of the FBG strain sensor is 1.32 pm/μɛ within -2900~2900 μɛ with a linearity close to 1, and the thermal expansion sensitivity coefficient of the ship steel is 23.438 pm/°C within 30~90°C with a linearity greater than 0.998. Finally, the calibration parameters were applied for temperature compensation in the actual ship structural health monitoring system. The results show that the temperature compensation is effective and the calibration parameters meet the engineering requirements, providing an important reference for the wide engineering use of fiber Bragg grating sensors.
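
    The calibrated coefficients above lend themselves to a simple temperature-compensation calculation. The following is a minimal sketch, assuming the usual linear FBG model in which the reference (strain-isolated) grating senses only temperature and the bonded strain grating responds to mechanical strain, its own temperature sensitivity, and the substrate's thermal expansion; the exact combination of coefficients depends on sensor packaging, so the formula and the helper name are illustrative assumptions rather than the authors' procedure.

      def compensated_strain_ue(d_lambda_strain_pm, d_lambda_ref_pm,
                                k_temp_pm_per_C=28.16,           # FBG temperature sensor sensitivity (calibrated above)
                                k_strain_pm_per_ue=1.32,         # FBG strain sensor sensitivity (calibrated above)
                                k_thermal_exp_pm_per_C=23.438):  # ship-steel thermal expansion sensitivity (calibrated above)
          """Return mechanical strain (microstrain) from wavelength shifts given in picometres."""
          delta_T = d_lambda_ref_pm / k_temp_pm_per_C                           # temperature change from the reference grating
          thermal_part = (k_temp_pm_per_C + k_thermal_exp_pm_per_C) * delta_T   # assumed temperature-induced shift of the strain grating
          return (d_lambda_strain_pm - thermal_part) / k_strain_pm_per_ue

      # Example: a 500 pm shift on the strain grating while the reference grating shifts 281.6 pm (i.e. +10 °C)
      print(round(compensated_strain_ue(500.0, 281.6), 1))  # mechanical strain after removing the thermal contribution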

  19. Measuring specific receptor binding of a PET radioligand in human brain without pharmacological blockade: The genomic plot.

    PubMed

    Veronese, Mattia; Zanotti-Fregonara, Paolo; Rizzo, Gaia; Bertoldo, Alessandra; Innis, Robert B; Turkheimer, Federico E

    2016-04-15

    PET studies allow in vivo imaging of the density of brain receptor species. The PET signal, however, is the sum of the fraction of radioligand that is specifically bound to the target receptor and the non-displaceable fraction (i.e. the non-specifically bound radioligand plus the free ligand in tissue). Therefore, measuring the non-displaceable fraction, which is generally assumed to be constant across the brain, is a necessary step to obtain regional estimates of the specific fractions. The nondisplaceable binding can be directly measured if a reference region, i.e. a region devoid of any specific binding, is available. Many receptors are however widely expressed across the brain, and a true reference region is rarely available. In these cases, the nonspecific binding can be obtained after competitive pharmacological blockade, which is often contraindicated in humans. In this work we introduce the genomic plot for estimating the nondisplaceable fraction using baseline scans only. The genomic plot is a transformation of the Lassen graphical method in which the brain maps of mRNA transcripts of the target receptor obtained from the Allen brain atlas are used as a surrogate measure of the specific binding. Thus, the genomic plot allows the calculation of the specific and nondisplaceable components of radioligand uptake without the need of pharmacological blockade. We first assessed the statistical properties of the method with computer simulations. Then we sought ground-truth validation using human PET datasets of seven different neuroreceptor radioligands, where nonspecific fractions were either obtained separately using drug displacement or available from a true reference region. The population nondisplaceable fractions estimated by the genomic plot were very close to those measured by actual human blocking studies (mean relative difference between 2% and 7%). However, these estimates were valid only when mRNA expressions were predictive of protein levels (i.e. there were no significant post-transcriptional changes). This condition can be readily established a priori by assessing the correlation between PET and mRNA expression. Copyright © 2016 Elsevier Inc. All rights reserved.
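
    A minimal sketch of the core idea follows: if regional specific binding is assumed proportional to regional mRNA expression, then regressing regional total distribution volumes (VT) on the mRNA surrogate and reading off the intercept at zero expression gives an estimate of the nondisplaceable volume (VND). This is an illustrative simplification of the Lassen-plot transformation described above, not the authors' exact parameterisation; the variable names and data are hypothetical.

      import numpy as np

      def genomic_plot_vnd(vt, mrna):
          """Estimate VND as the intercept of a regional VT vs. mRNA-expression regression."""
          A = np.column_stack([mrna, np.ones_like(mrna)])     # design matrix: slope * mRNA + intercept
          slope, vnd = np.linalg.lstsq(A, vt, rcond=None)[0]
          return vnd, slope

      # Hypothetical regional values: VT = VND (2.0) + specific binding proportional to mRNA, plus noise
      rng = np.random.default_rng(1)
      mrna = rng.uniform(0.2, 1.5, size=30)
      vt = 2.0 + 3.0 * mrna + rng.normal(0, 0.1, size=30)
      vnd, slope = genomic_plot_vnd(vt, mrna)
      print(f"estimated VND = {vnd:.2f}")                     # should recover a value close to 2.0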

  20. Measuring specific receptor binding of a PET radioligand in human brain without pharmacological blockade: The genomic plot

    PubMed Central

    Veronese, Mattia; Zanotti-Fregonara, Paolo; Rizzo, Gaia; Bertoldo, Alessandra; Innis, Robert B.; Turkheimer, Federico E.

    2016-01-01

    PET studies allow in vivo imaging of the density of brain receptor species. The PET signal, however, is the sum of the fraction of radioligand that is specifically bound to the target receptor and the non-displaceable fraction (i.e. the non-specifically bound radioligand plus the free ligand in tissue). Therefore, measuring the non-displaceable fraction, which is generally assumed to be constant across the brain, is a necessary step to obtain regional estimates of the specific fractions. The nondisplaceable binding can be directly measured if a reference region, i.e. a region devoid of any specific binding, is available. Many receptors are however widely expressed across the brain, and a true reference region is rarely available. In these cases, the nonspecific binding can be obtained after competitive pharmacological blockade, which is often contraindicated in humans. In this work we introduce the genomic plot for estimating the nondisplaceable fraction using baseline scans only. The genomic plot is a transformation of the Lassen graphical method in which the brain maps of mRNA transcripts of the target receptor obtained from the Allen brain atlas are used as a surrogate measure of the specific binding. Thus, the genomic plot allows the calculation of the specific and nondisplaceable components of radioligand uptake without the need of pharmacological blockade. We first assessed the statistical properties of the method with computer simulations. Then we sought ground-truth validation using human PET datasets of seven different neuroreceptor radioligands, where nonspecific fractions were either obtained separately using drug displacement or available from a true reference region. The population nondisplaceable fractions estimated by the genomic plot were very close to those measured by actual human blocking studies (mean relative difference between 2% and 7%). However, these estimates were valid only when mRNA expressions were predictive of protein levels (i.e. there were no significant post-transcriptional changes). This condition can be readily established a priori by assessing the correlation between PET and mRNA expression. PMID:26850512

  1. Computer-aided Assessment of Regional Abdominal Fat with Food Residue Removal in CT

    PubMed Central

    Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi

    2014-01-01

    Rationale and Objectives Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with interest in food residue removal. Materials and Methods Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. Results We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Conclusions Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. PMID:24119354

  2. Computer-aided assessment of regional abdominal fat with food residue removal in CT.

    PubMed

    Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi

    2013-11-01

    Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with interest in food residue removal. Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. Published by Elsevier Inc.
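
    As a rough illustration of the two ingredients described above (thresholding fat in Hounsfield units, then suppressing food residue with a class-imbalance-aware classifier), the sketch below uses a commonly cited adipose HU window and a bagged, class-weighted decision tree. The HU window, features, and labels are assumptions for illustration; they are not the parameters used in the study.

      import numpy as np
      from sklearn.ensemble import BaggingClassifier
      from sklearn.tree import DecisionTreeClassifier

      def fat_mask(hu, lo=-190, hi=-30):
          """Voxels whose CT attenuation falls in an assumed adipose-tissue HU window."""
          return (hu >= lo) & (hu <= hi)

      # Hypothetical voxel features for the visceral compartment: [HU, local intensity std, distance to abdominal wall]
      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 3))
      y = (rng.random(2000) < 0.1).astype(int)    # 1 = food residue (rare class), 0 = true fat

      # Bagging over class-weighted trees addresses the class imbalance mentioned in the abstract
      clf = BaggingClassifier(DecisionTreeClassifier(class_weight="balanced"),
                              n_estimators=50, random_state=0)
      clf.fit(X, y)
      residue_flags = clf.predict(X)              # voxels flagged as residue would be removed from visceral fat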

  3. Predicting skeletal muscle mass from dual-energy X-ray absorptiometry in Japanese prepubertal children.

    PubMed

    Midorikawa, T; Ohta, M; Hikihara, Y; Torii, S; Sakamoto, S

    2017-10-01

    We aimed to develop regression-based prediction equations for estimating total and regional skeletal muscle mass (SMM) from measurements of lean soft tissue mass (LSTM) using dual-energy X-ray absorptiometry (DXA) and investigate the validity of these equations. In total, 144 healthy Japanese prepubertal children aged 6-12 years were divided into 2 groups: the model development group (62 boys and 38 girls) and the validation group (26 boys and 18 girls). Contiguous MRI images with a 1-cm slice thickness were obtained from the first cervical vertebra to the ankle joints as reference data. The SMM was calculated from the summation of the digitized cross-sectional areas. Total and regional LSTM was measured using DXA. Strong significant correlations were observed between the site-matched SMM (total, arms, trunk and legs) measured by MRI and the LSTM obtained by DXA in the model development group for both boys and girls (R²adj = 0.86-0.97, P < 0.01, standard error of the estimate (SEE) = 0.08-0.44 kg). When these SMM prediction equations were applied to the validation group, the measured total (boys 9.47±2.21 kg; girls 8.18±2.62 kg) and regional SMM were very similar to the predicted values for both boys (total SMM 9.40±2.39 kg) and girls (total SMM 8.17±2.57 kg). The results of the Bland-Altman analysis for the validation group did not indicate any bias for either boys or girls with the exception of the arm region for the girls. These results suggest that the DXA-derived prediction equations are precise and accurate for the estimation of total and regional SMM in Japanese prepubertal boys and girls.
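
    The Bland-Altman agreement check reported above is easy to reproduce for any pair of measured and predicted values; a generic sketch with hypothetical inputs follows.

      import numpy as np

      def bland_altman(measured, predicted):
          """Return bias and 95% limits of agreement between paired measurements."""
          diff = np.asarray(predicted) - np.asarray(measured)
          bias = diff.mean()
          half_width = 1.96 * diff.std(ddof=1)
          return bias, bias - half_width, bias + half_width

      # Hypothetical paired values (e.g. MRI-measured vs. DXA-predicted total SMM, kg)
      measured = np.array([9.5, 8.1, 10.2, 7.8, 9.0])
      predicted = np.array([9.4, 8.3, 10.0, 8.0, 9.1])
      print(bland_altman(measured, predicted))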

  4. Validation of geometric accuracy of Global Land Survey (GLS) 2000 data

    USGS Publications Warehouse

    Rengarajan, Rajagopalan; Sampath, Aparajithan; Storey, James C.; Choate, Michael J.

    2015-01-01

    The Global Land Survey (GLS) 2000 data were generated from Geocover™ 2000 data with the aim of producing a global data set of accuracy better than 25 m Root Mean Square Error (RMSE). An assessment and validation of the accuracy of the GLS 2000 data set, and its co-registration with the Geocover™ 2000 data set, is presented here. Because global data sets with nominal accuracy higher than that of the GLS 2000 are scarce, the data were assessed in three tiers. In the first tier, the data were compared with the Geocover™ 2000 data. This comparison provided a means of localizing regions of higher differences. In the second tier, the GLS 2000 data were compared with systematically corrected Landsat-7 scenes that were obtained in a time period when the spacecraft pointing information was extremely accurate. These comparisons localize regions where the data are consistently off, which may indicate regions of higher errors. The third tier consisted of comparing the GLS 2000 data against higher accuracy reference data. The reference data were the Digital Ortho Quads over the United States, orthorectified SPOT data over Australia, and high accuracy check points obtained using triangulation bundle adjustment of Landsat-7 images over selected sites around the world. The study reveals that the geometric errors in Geocover™ 2000 data have been rectified in GLS 2000 data, and that the accuracy of GLS 2000 data can be expected to be better than 25 m RMSE for most of its constituent scenes.

  5. Video-based respiration monitoring with automatic region of interest detection.

    PubMed

    Janssen, Rik; Wang, Wenjin; Moço, Andreia; de Haan, Gerard

    2016-01-01

    Vital signs monitoring is ubiquitous in clinical environments and emerging in home-based healthcare applications. Still, since current monitoring methods require uncomfortable sensors, respiration rate remains the least measured vital sign. In this paper, we propose a video-based respiration monitoring method that automatically detects a respiratory region of interest (RoI) and signal using a camera. Based on the observation that respiration induced chest/abdomen motion is an independent motion system in a video, our basic idea is to exploit the intrinsic properties of respiration to find the respiratory RoI and extract the respiratory signal via motion factorization. We created a benchmark dataset containing 148 video sequences obtained on adults under challenging conditions and also neonates in the neonatal intensive care unit (NICU). The measurements obtained by the proposed video respiration monitoring (VRM) method are not significantly different from the reference methods (guided breathing or contact-based ECG; p-value  =  0.6), and explain more than 99% of the variance of the reference values with low limits of agreement (-2.67 to 2.81 bpm). VRM seems to provide a valid solution to ECG in confined motion scenarios, though precision may be reduced for neonates. More studies are needed to validate VRM under challenging recording conditions, including upper-body motion types.
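
    The abstract describes finding the respiratory RoI by treating respiration-induced motion as an independent motion system and factorizing it out. The sketch below is one simple way to realize that idea with a PCA-style factorization of block-wise intensity signals and a spectral check against a plausible respiratory band; it is an illustrative stand-in, not the VRM algorithm, and the block size and band limits are assumptions.

      import numpy as np

      def block_signals(frames, block=16):
          """frames: (T, H, W) grayscale video -> (T, n_blocks) mean intensity per spatial block."""
          T, H, W = frames.shape
          Hc, Wc = (H // block) * block, (W // block) * block
          f = frames[:, :Hc, :Wc].reshape(T, Hc // block, block, Wc // block, block)
          return f.mean(axis=(2, 4)).reshape(T, -1)

      def respiratory_signal(frames, fps, band=(0.1, 0.8)):
          """Pick the principal component whose spectral peak lies in the assumed respiratory band."""
          S = block_signals(frames.astype(float))
          S -= S.mean(axis=0)
          U, s, Vt = np.linalg.svd(S, full_matrices=False)        # PCA via SVD: U * s are temporal components
          freqs = np.fft.rfftfreq(S.shape[0], d=1.0 / fps)
          in_band = (freqs >= band[0]) & (freqs <= band[1])
          best, best_score = 0, -np.inf
          for k in range(min(5, len(s))):
              spec = np.abs(np.fft.rfft(U[:, k] * s[k]))
              score = spec[in_band].max() / (spec[1:].sum() + 1e-9)  # spectral peakiness inside the respiratory band
              if score > best_score:
                  best, best_score = k, score
          signal = U[:, best] * s[best]
          rate_bpm = 60.0 * freqs[np.argmax(np.abs(np.fft.rfft(signal)) * in_band)]
          roi_weights = Vt[best]                                   # large-magnitude weights mark the respiratory RoI blocks
          return signal, roi_weights, rate_bpm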

  6. Global methylmercury exposure from seafood consumption and risk of developmental neurotoxicity: a systematic review

    PubMed Central

    Burke, Thomas A; Navas-Acien, Ana; Breysse, Patrick N; McGready, John; Fox, Mary A

    2014-01-01

    Objective To examine biomarkers of methylmercury (MeHg) intake in women and infants from seafood-consuming populations globally and characterize the comparative risk of fetal developmental neurotoxicity. Methods A search was conducted of the published literature reporting total mercury (Hg) in hair and blood in women and infants. These biomarkers are validated proxy measures of MeHg, a neurotoxin found primarily in seafood. Average and high-end biomarkers were extracted, stratified by seafood consumption context, and pooled by category. Medians for average and high-end pooled distributions were compared with the reference level established by a joint expert committee of the Food and Agriculture Organization (FAO) and the World Health Organization (WHO). Findings Selection criteria were met by 164 studies of women and infants from 43 countries. Pooled average biomarkers suggest an intake of MeHg several times over the FAO/WHO reference in fish-consuming riparians living near small-scale gold mining and well over the reference in consumers of marine mammals in Arctic regions. In coastal regions of south-eastern Asia, the western Pacific and the Mediterranean, average biomarkers approach the reference. Although the two former groups have a higher risk of neurotoxicity than the latter, coastal regions are home to the largest number at risk. High-end biomarkers across all categories indicate MeHg intake is in excess of the reference value. Conclusion There is a need for policies to reduce Hg exposure among women and infants and for surveillance in high-risk populations, the majority of which live in low- and middle-income countries. PMID:24700993

  7. The Release 6 reference sequence of the Drosophila melanogaster genome

    DOE PAGES

    Hoskins, Roger A.; Carlson, Joseph W.; Wan, Kenneth H.; ...

    2015-01-14

    Drosophila melanogaster plays an important role in molecular, genetic, and genomic studies of heredity, development, metabolism, behavior, and human disease. The initial reference genome sequence reported more than a decade ago had a profound impact on progress in Drosophila research, and improving the accuracy and completeness of this sequence continues to be important to further progress. We previously described improvement of the 117-Mb sequence in the euchromatic portion of the genome and 21 Mb in the heterochromatic portion, using a whole-genome shotgun assembly, BAC physical mapping, and clone-based finishing. Here, we report an improved reference sequence of the single-copy and middle-repetitive regions of the genome, produced using cytogenetic mapping to mitotic and polytene chromosomes, clone-based finishing and BAC fingerprint verification, ordering of scaffolds by alignment to cDNA sequences, incorporation of other map and sequence data, and validation by whole-genome optical restriction mapping. These data substantially improve the accuracy and completeness of the reference sequence and the order and orientation of sequence scaffolds into chromosome arm assemblies. Representation of the Y chromosome and other heterochromatic regions is particularly improved. The new 143.9-Mb reference sequence, designated Release 6, effectively exhausts clone-based technologies for mapping and sequencing. Highly repeat-rich regions, including large satellite blocks and functional elements such as the ribosomal RNA genes and the centromeres, are largely inaccessible to current sequencing and assembly methods and remain poorly represented. In conclusion, further significant improvements will require sequencing technologies that do not depend on molecular cloning and that produce very long reads.

  8. The Release 6 reference sequence of the Drosophila melanogaster genome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoskins, Roger A.; Carlson, Joseph W.; Wan, Kenneth H.

    Drosophila melanogaster plays an important role in molecular, genetic, and genomic studies of heredity, development, metabolism, behavior, and human disease. The initial reference genome sequence reported more than a decade ago had a profound impact on progress in Drosophila research, and improving the accuracy and completeness of this sequence continues to be important to further progress. We previously described improvement of the 117-Mb sequence in the euchromatic portion of the genome and 21 Mb in the heterochromatic portion, using a whole-genome shotgun assembly, BAC physical mapping, and clone-based finishing. Here, we report an improved reference sequence of the single-copy and middle-repetitive regions of the genome, produced using cytogenetic mapping to mitotic and polytene chromosomes, clone-based finishing and BAC fingerprint verification, ordering of scaffolds by alignment to cDNA sequences, incorporation of other map and sequence data, and validation by whole-genome optical restriction mapping. These data substantially improve the accuracy and completeness of the reference sequence and the order and orientation of sequence scaffolds into chromosome arm assemblies. Representation of the Y chromosome and other heterochromatic regions is particularly improved. The new 143.9-Mb reference sequence, designated Release 6, effectively exhausts clone-based technologies for mapping and sequencing. Highly repeat-rich regions, including large satellite blocks and functional elements such as the ribosomal RNA genes and the centromeres, are largely inaccessible to current sequencing and assembly methods and remain poorly represented. In conclusion, further significant improvements will require sequencing technologies that do not depend on molecular cloning and that produce very long reads.

  9. Global methylmercury exposure from seafood consumption and risk of developmental neurotoxicity: a systematic review.

    PubMed

    Sheehan, Mary C; Burke, Thomas A; Navas-Acien, Ana; Breysse, Patrick N; McGready, John; Fox, Mary A

    2014-04-01

    To examine biomarkers of methylmercury (MeHg) intake in women and infants from seafood-consuming populations globally and characterize the comparative risk of fetal developmental neurotoxicity. A search was conducted of the published literature reporting total mercury (Hg) in hair and blood in women and infants. These biomarkers are validated proxy measures of MeHg, a neurotoxin found primarily in seafood. Average and high-end biomarkers were extracted, stratified by seafood consumption context, and pooled by category. Medians for average and high-end pooled distributions were compared with the reference level established by a joint expert committee of the Food and Agriculture Organization (FAO) and the World Health Organization (WHO). Selection criteria were met by 164 studies of women and infants from 43 countries. Pooled average biomarkers suggest an intake of MeHg several times over the FAO/WHO reference in fish-consuming riparians living near small-scale gold mining and well over the reference in consumers of marine mammals in Arctic regions. In coastal regions of south-eastern Asia, the western Pacific and the Mediterranean, average biomarkers approach the reference. Although the two former groups have a higher risk of neurotoxicity than the latter, coastal regions are home to the largest number at risk. High-end biomarkers across all categories indicate MeHg intake is in excess of the reference value. There is a need for policies to reduce Hg exposure among women and infants and for surveillance in high-risk populations, the majority of which live in low-and middle-income countries.

  10. Validation of psychoanalytic theories: towards a conceptualization of references.

    PubMed

    Zachrisson, Anders; Zachrisson, Henrik Daae

    2005-10-01

    The authors discuss criteria for the validation of psychoanalytic theories and develop a heuristic and normative model of the references needed for this. Their core question in this paper is: can psychoanalytic theories be validated exclusively from within psychoanalytic theory (internal validation), or are references to sources of knowledge other than psychoanalysis also necessary (external validation)? They discuss aspects of the classic truth criteria correspondence and coherence, both from the point of view of contemporary psychoanalysis and of contemporary philosophy of science. The authors present arguments for both external and internal validation. Internal validation has to deal with the problems of subjectivity of observations and circularity of reasoning, external validation with the problem of relevance. They recommend a critical attitude towards psychoanalytic theories, which, by carefully scrutinizing weak points and invalidating observations in the theories, reduces the risk of wishful thinking. The authors conclude by sketching a heuristic model of validation. This model combines correspondence and coherence with internal and external validation into a four-leaf model for references for the process of validating psychoanalytic theories.

  11. Robust Ultraviolet-Visible (UV-Vis) Partial Least-Squares (PLS) Models for Tannin Quantification in Red Wine.

    PubMed

    Aleixandre-Tudo, José Luis; Nieuwoudt, Helené; Aleixandre, José Luis; Du Toit, Wessel J

    2015-02-04

    The validation of ultraviolet-visible (UV-vis) spectroscopy combined with partial least-squares (PLS) regression to quantify red wine tannins is reported. The methylcellulose precipitable (MCP) tannin assay and the bovine serum albumin (BSA) tannin assay were used as reference methods. To take the high variability of wine tannins into account when the calibration models were built, a diverse data set was collected from samples of South African red wines that consisted of 18 different cultivars, from regions spanning the wine grape-growing areas of South Africa with their various sites, climates, and soils, ranging in vintage from 2000 to 2012. A total of 240 wine samples were analyzed, and these were divided into a calibration set (n = 120) and a validation set (n = 120) to evaluate the predictive ability of the models. To test the robustness of the PLS calibration models, the predictive ability of the classifying variables cultivar, vintage year, and experimental versus commercial wines was also tested. In general, the statistics obtained when BSA was used as a reference method were slightly better than those obtained with MCP. Despite this, the MCP tannin assay should also be considered as a valid reference method for developing PLS calibrations. The best calibration statistics for the prediction of new samples were a coefficient of determination (R²val) = 0.89, root mean square error of prediction (RMSEP) = 0.16, and residual predictive deviation (RPD) = 3.49 for MCP and R²val = 0.93, RMSEP = 0.08, and RPD = 4.07 for BSA, when only the UV region (260-310 nm) was selected, which also led to a faster analysis time. In addition, a difference in the results obtained when the predictive ability of the classifying variables vintage, cultivar, or commercial versus experimental wines was studied suggests that tannin composition is highly affected by many factors. This study also discusses the correlations in tannin values between the methylcellulose and protein precipitation methods.
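
    The workflow described above (PLS calibration on a restricted UV window, then RMSEP and RPD on an independent validation set) can be sketched as follows; the data, the number of latent variables, and the variable names are hypothetical placeholders, not the study's settings.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      # Hypothetical absorbance spectra restricted to the 260-310 nm window (51 wavelengths)
      X_cal, X_val = rng.normal(size=(120, 51)), rng.normal(size=(120, 51))
      w = rng.normal(size=51)
      y_cal = X_cal @ w + rng.normal(0, 0.5, 120)   # hypothetical reference tannin values (MCP or BSA assay)
      y_val = X_val @ w + rng.normal(0, 0.5, 120)

      pls = PLSRegression(n_components=8)           # latent variables would normally be chosen by cross-validation
      pls.fit(X_cal, y_cal)
      y_hat = pls.predict(X_val).ravel()

      rmsep = np.sqrt(np.mean((y_val - y_hat) ** 2))   # root mean square error of prediction
      rpd = y_val.std(ddof=1) / rmsep                  # residual predictive deviation
      print(f"RMSEP = {rmsep:.2f}, RPD = {rpd:.2f}")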

  12. Film excerpts shown to specifically elicit various affects lead to overlapping activation foci in a large set of symmetrical brain regions in males.

    PubMed

    Karama, Sherif; Armony, Jorge; Beauregard, Mario

    2011-01-01

    While the limbic system theory continues to be part of common scientific parlance, its validity has been questioned on multiple grounds. Nonetheless, the issue of whether or not there exists a set of brain areas preferentially dedicated to emotional processing remains central within affective neuroscience. Recently, a widespread neural reference space for emotion which includes limbic as well as other regions was characterized in a large meta-analysis. As methodologically heterogeneous studies go into such meta-analyses, showing in an individual study in which all parameters are kept constant, the involvement of overlapping areas for various emotion conditions in keeping with the neural reference space for emotion, would serve as valuable confirmatory evidence. Here, using fMRI, 20 young adult men were scanned while viewing validated neutral and effective emotion-eliciting short film excerpts shown to quickly and specifically elicit disgust, amusement, or sexual arousal. Each emotion-specific run included, in random order, multiple neutral and emotion condition blocks. A stringent conjunction analysis revealed a large overlap across emotion conditions that fit remarkably well with the neural reference space for emotion. This overlap included symmetrical bilateral activation of the medial prefrontal cortex, the anterior cingulate, the temporo-occipital junction, the basal ganglia, the brainstem, the amygdala, the hippocampus, the thalamus, the subthalamic nucleus, the posterior hypothalamus, the cerebellum, as well as the frontal operculum extending towards the anterior insula. This study clearly confirms for the visual modality, that processing emotional stimuli leads to widespread increases in activation that cluster within relatively confined areas, regardless of valence.

  13. Validation analysis of probabilistic models of dietary exposure to food additives.

    PubMed

    Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J

    2003-10-01

    The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group and additive concentration data. Food intake and additive concentration data were entered as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on the per cent brands or the per cent eating occasions within a food group that contained an additive. Since each of the three model components allowed two possible modes of input, the validity of eight (2³) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of full conceptual models. While the distribution of intake estimates from models fell below conservative intakes, which assume that the additive is present at maximum permitted levels (MPLs) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty.
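
    The probabilistic structure described above (food intake × probability the additive is present × concentration) is straightforward to simulate; the sketch below uses arbitrary lognormal parameters and an arbitrary maximum permitted level purely for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000                                                     # simulated consumer-days
      intake_g = rng.lognormal(mean=4.0, sigma=0.6, size=n)           # food-group intake, g/day (assumed lognormal)
      present = rng.random(n) < 0.35                                  # assumed probability the additive is present
      conc_mg_per_g = rng.lognormal(mean=-3.0, sigma=0.5, size=n)     # additive concentration (assumed lognormal)

      exposure_mg = intake_g * present * conc_mg_per_g                # modelled additive intake, mg/day
      conservative_mg = intake_g * 0.20                               # point estimate: additive at an assumed MPL of 0.20 mg/g in all foods

      print("modelled P50/P95/P99:", np.percentile(exposure_mg, [50, 95, 99]).round(2))
      print("conservative mean estimate:", conservative_mg.mean().round(2))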

  14. Self-administered structured food record for measuring individual energy and nutrient intake in large cohorts: Design and validation.

    PubMed

    García, Silvia M; González, Claudio; Rucci, Enzo; Ambrosino, Cintia; Vidal, Julia; Fantuzzi, Gabriel; Prestes, Mariana; Kronsbein, Peter

    2018-06-05

    Several instruments developed to assess dietary intake of groups or populations have strengths and weaknesses that affect their specific application. No self-administered, closed-ended dietary survey was previously used in Argentina to assess current food and nutrient intake on a daily basis. To design and validate a self-administered, structured food record (NutriQuid, NQ) representative of the adult Argentine population's food consumption pattern to measure individual energy and nutrient intake. Records were loaded onto a database using software that checks a regional nutrition information system (SARA program), automatically quantifying energy and nutrient intake. NQ validation included two phases: (1) NQ construct validity comparing records kept simultaneously by healthy volunteers (45-75 years) and a nutritionist who provided meals (reference), and (2) verification of whether NQ reflected target population consumption (calories and nutrients), week consumption differences, respondent acceptability, and ease of data entry/analysis. Data analysis included descriptive statistics, repeated measures ANOVA, intraclass correlation coefficient, nonparametric regression, and cross-classification into quintiles. The first validation (study group vs. reference) showed an underestimation (10%) of carbohydrate, fat, and energy intake. Second validation: 109 volunteers (91% response) completed the NQ for seven consecutive days. Record completion took about 9 min/day, and data entry 3-6 min. Mean calorie intake was 2240±119 kcal/day (42% carbohydrates, 17% protein, and 41% fat). Intake significantly increased on weekends. NQ is a simple and efficient tool to assess dietary intake in large samples. Copyright © 2018 SEEN y SED. Published by Elsevier España, S.L.U. All rights reserved.

  15. Validation of the SMOS-MIRAS Soil Moisture Product (SML2UDP) in the Pampean Region of Argentina

    NASA Astrophysics Data System (ADS)

    Niclòs, Raquel; Rivas, Raúl; Sánchez, Juan Manuel; García-Santos, Vicente; Doña, Carolina; Valor, Enric; Holzman, Mauro; Bayala, Martín Ignacio; Carmona, Facundo; Ocampo, Dora; Soldano, Alvaro; Thibeault, Marc

    2014-05-01

    A validation campaign was carried out to evaluate the SMOS-MIRAS Soil Moisture (SM) SML2UDP product (v5.51) in the Pampean Region of Argentina in February 2013. The study area was selected because it is a vast area of flatlands containing quite homogeneous rainfed croplands, with prevalence of soybean crops, considered SMOS nominal land uses (i.e., crops with vegetation heights not exceeding 1 to 2 m, as opposed to trees). Transects of ground SM measurements were collected by Delta-T ThetaProbe ML2x SM probes within four ISEA-4H9 DGG SMOS nodes. The SM data obtained by each probe transect in each parcel were checked by collecting soil samples in the same parcels at the same time and measuring their masses. The gravimetric method was used to obtain reference values. An uncertainty of ±0.03 m³/m³ was obtained for the ML2x probes. Additionally, they were calibrated in the laboratory for different SMs by saturating and drying a specific and representative variety of soil samples collected from the experimental parcels (loam, clay loam and silt loam samples). This calibration again showed accurate operation of the ML2x probes, which even attain uncertainties of ±0.01 m³/m³, in agreement with the manufacturer. The comparison of the SM transect data collected during the campaign with the SMOS-MIRAS SML2UDP product values showed a negative bias between concurrent SMOS data and ground SM measurements, which means a slight SMOS-MIRAS underestimation, and a standard deviation of ±0.06 m³/m³. The validation sites were selected taking as reference the locations of permanent SM stations belonging to the Argentinean Comisión Nacional de Actividades Espaciales (CONAE, National Commission of Space Activities), Instituto Nacional de Tecnología Agropecuaria (INTA, National Institute of Farming Technology) and Instituto de Hidrología de Llanuras (IHLLA, Plain Hydrology Institute). During the campaign several transects were carried out in the parcels where permanent SM stations were located, mainly in those within one of the nodes (with 5 stations inside). The objective was to evaluate the station SM data reliability at the SMOS spatial resolution with the aim of using station data series as reference for SMOS-MIRAS SM product validations. A linear correlation was obtained between the ground SM values and the SM station data within the node, with a coefficient of determination of 0.98 and a fitting error of ±0.010 m³/m³. Therefore, the station data adjusted to obtain node representative values are being evaluated as reference data to extend the validation of SMOS-retrieved data beyond the campaign results.

  16. Validation of Living Donor Nephrectomy Codes

    PubMed Central

    Lam, Ngan N.; Lentine, Krista L.; Klarenbach, Scott; Sood, Manish M.; Kuwornu, Paul J.; Naylor, Kyla L.; Knoll, Gregory A.; Kim, S. Joseph; Young, Ann; Garg, Amit X.

    2018-01-01

    Background: Use of administrative data for outcomes assessment in living kidney donors is increasing given the rarity of complications and challenges with loss to follow-up. Objective: To assess the validity of living donor nephrectomy in health care administrative databases compared with the reference standard of manual chart review. Design: Retrospective cohort study. Setting: 5 major transplant centers in Ontario, Canada. Patients: Living kidney donors between 2003 and 2010. Measurements: Sensitivity and positive predictive value (PPV). Methods: Using administrative databases, we conducted a retrospective study to determine the validity of diagnostic and procedural codes for living donor nephrectomies. The reference standard was living donor nephrectomies identified through the province’s tissue and organ procurement agency, with verification by manual chart review. Operating characteristics (sensitivity and PPV) of various algorithms using diagnostic, procedural, and physician billing codes were calculated. Results: During the study period, there were a total of 1199 living donor nephrectomies. Overall, the best algorithm for identifying living kidney donors was the presence of 1 diagnostic code for kidney donor (ICD-10 Z52.4) and 1 procedural code for kidney procurement/excision (1PC58, 1PC89, 1PC91). Compared with the reference standard, this algorithm had a sensitivity of 97% and a PPV of 90%. The diagnostic and procedural codes performed better than the physician billing codes (sensitivity 60%, PPV 78%). Limitations: The donor chart review and validation study was performed in Ontario and may not be generalizable to other regions. Conclusions: An algorithm consisting of 1 diagnostic and 1 procedural code can be reliably used to conduct health services research that requires the accurate determination of living kidney donors at the population level. PMID:29662679
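
    Validating a code-based algorithm against a chart-review reference reduces to counting true positives among the flagged records; a generic sketch with hypothetical identifier sets follows.

      def sensitivity_and_ppv(flagged_ids, reference_ids):
          """flagged_ids: records identified by the administrative-code algorithm; reference_ids: chart-review standard."""
          true_pos = len(flagged_ids & reference_ids)
          sensitivity = true_pos / len(reference_ids)
          ppv = true_pos / len(flagged_ids)
          return sensitivity, ppv

      # Hypothetical identifier sets, sized so the output roughly mirrors the reported sensitivity (~97%) and PPV (~90%)
      reference = set(range(1199))
      flagged = set(range(39, 1290 + 39))    # shifted so only part of the flagged set overlaps the reference
      print(sensitivity_and_ppv(flagged, reference))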

  17. A content validity study of signs, symptoms and diseases/health problems expressed in LIBRAS

    PubMed Central

    Aragão, Jamilly da Silva; de França, Inacia Sátiro Xavier; Coura, Alexsandro Silva; de Sousa, Francisco Stélio; Batista, Joana D'arc Lyra; Magalhães, Isabella Medeiros de Oliveira

    2015-01-01

    Objectives: to validate the content of signs, symptoms and diseases/health problems expressed in LIBRAS for people with deafness. Method: methodological development study, which involved 36 people with deafness and three LIBRAS specialists. The study was conducted in three stages: investigation of the signs, symptoms and diseases/health problems, referred to by people with deafness, reported in a questionnaire; video recordings of how people with deafness express, through LIBRAS, the signs, symptoms and diseases/health problems; and validation of the contents of the recordings of the expressions by LIBRAS specialists. Data were processed in a spreadsheet and analyzed using univariate tables, with absolute frequencies and percentages. The validation results were analyzed using the Content Validity Index (CVI). Results: 33 expressions in LIBRAS, of signs, symptoms and diseases/health problems were evaluated, and 28 expressions obtained a satisfactory CVI (1.00). Conclusions: the signs, symptoms and diseases/health problems expressed in LIBRAS presented validity, in the study region, for health professionals, especially nurses, for use in the clinical anamnesis of the nursing consultation for people with deafness. PMID:26625991

  18. A content validity study of signs, symptoms and diseases/health problems expressed in LIBRAS.

    PubMed

    Aragão, Jamilly da Silva; de França, Inacia Sátiro Xavier; Coura, Alexsandro Silva; de Sousa, Francisco Stélio; Batista, Joana D'arc Lyra; Magalhães, Isabella Medeiros de Oliveira

    2015-01-01

    To validate the content of signs, symptoms and diseases/health problems expressed in LIBRAS for people with deafness. Method: Methodological development study, which involved 36 people with deafness and three LIBRAS specialists. The study was conducted in three stages: investigation of the signs, symptoms and diseases/health problems, referred to by people with deafness, reported in a questionnaire; video recordings of how people with deafness express, through LIBRA, the signs, symptoms and diseases/health problems; and validation of the contents of the recordings of the expressions by LIBRAS specialists. Data were processed in a spreadsheet and analyzed using univariate tables, with absolute frequencies and percentages. The validation results were analyzed using the Content Validity Index (CVI). 33 expressions in LIBRAS, of signs, symptoms and diseases/health problems were evaluated, and 28 expressions obtained a satisfactory CVI (1.00). The signs, symptoms and diseases/health problems expressed in LIBRAS presented validity, in the study region, for health professionals, especially nurses, for use in the clinical anamnesis of the nursing consultation for people with deafness.

  19. Theoretical studies of floating-reference method for NIR blood glucose sensing

    NASA Astrophysics Data System (ADS)

    Shi, Zhenzhi; Yang, Yue; Zhao, Huijuan; Chen, Wenliang; Liu, Rong; Xu, Kexin

    2011-03-01

    Non-invasive blood glucose monitoring using NIR light suffers from variability in the optical background that is mainly caused by changes in the human body, such as changes in temperature, water concentration, and so on. To eliminate this internal influence and external interference, a so-called floating-reference method has been proposed to provide an internal reference. From analysis of the diffuse reflectance spectrum, a position has been found where the diffusely reflected light is not sensitive to glucose concentration. Our previous work proved the existence of this reference position using the diffusion equation. However, since glucose monitoring generally uses NIR light in the 1000-2000 nm region, the diffusion equation is not valid because of the high absorption coefficient and small source-detector separations. In this paper, a steady-state high-order approximate model is used to further investigate the existence of the floating-reference position in a semi-infinite medium. Based on analysis of the impact of different optical parameters on the spatially resolved reflectance, we find that the existence of the floating-reference position is the result of the interaction of optical parameters. By comparison with Monte Carlo simulation results, the applicable regions of the diffusion approximation and the higher-order approximation for calculating the floating-reference position are discussed at wavelengths of 1000-1800 nm, using intralipid solutions of different concentrations. The results indicate that when the reduced albedo is greater than 0.93, the diffusion approximation results are closer to the simulation results; otherwise the high-order approximation is more applicable.
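
    To make the idea of a glucose-insensitive radius concrete, the sketch below evaluates the widely used steady-state diffusion-approximation reflectance for a semi-infinite medium (the Farrell-Patterson-Wilson form) at baseline and slightly perturbed optical properties and looks for the separation where the relative change is smallest. The optical parameters, the size of the "glucose" perturbation, and the boundary constant A are arbitrary illustrative assumptions, and, as the abstract notes, the diffusion approximation itself is only valid for sufficiently high reduced albedo.

      import numpy as np

      def diffuse_reflectance(rho, mu_a, mu_s_prime, A=2.0):
          """Farrell-Patterson-Wilson diffusion-approximation reflectance, semi-infinite medium (mm^-1, mm)."""
          mu_t = mu_a + mu_s_prime
          albedo = mu_s_prime / mu_t
          z0 = 1.0 / mu_t                                   # depth of the equivalent isotropic source
          D = 1.0 / (3.0 * mu_t)
          zb = 2.0 * A * D                                  # extrapolated-boundary distance
          mu_eff = np.sqrt(3.0 * mu_a * mu_t)
          r1 = np.sqrt(z0 ** 2 + rho ** 2)
          r2 = np.sqrt((z0 + 2.0 * zb) ** 2 + rho ** 2)
          return albedo / (4.0 * np.pi) * (
              z0 * (mu_eff + 1.0 / r1) * np.exp(-mu_eff * r1) / r1 ** 2
              + (z0 + 2.0 * zb) * (mu_eff + 1.0 / r2) * np.exp(-mu_eff * r2) / r2 ** 2)

      rho = np.linspace(0.2, 5.0, 500)                                     # source-detector separation, mm
      base = diffuse_reflectance(rho, mu_a=0.05, mu_s_prime=1.0)
      perturbed = diffuse_reflectance(rho, mu_a=0.0502, mu_s_prime=0.995)  # illustrative "glucose" perturbation
      rel_change = (perturbed - base) / base
      i = np.argmin(np.abs(rel_change))                                    # radius least sensitive to the perturbation
      print(f"floating-reference radius = {rho[i]:.2f} mm")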

  20. Intercomparison and validation of MODIS and GLASS leaf area index (LAI) products over mountain areas: A case study in southwestern China

    NASA Astrophysics Data System (ADS)

    Jin, Huaan; Li, Ainong; Bian, Jinhu; Nan, Xi; Zhao, Wei; Zhang, Zhengjian; Yin, Gaofei

    2017-03-01

    The validation study of leaf area index (LAI) products over rugged surfaces not only gives additional insights into the data quality of LAI products, but deepens understanding of uncertainties regarding land surface process models that depend on LAI data over complex terrain. This study evaluated the performance of MODIS and GLASS LAI products using intercomparison and direct validation methods over southwestern China. The spatio-temporal consistencies, such as the spatial distributions of LAI products and their statistical relationship as a function of topographic indices, time, and vegetation types, respectively, were investigated through intercomparison between MODIS and GLASS products during the period 2011-2013. The accuracies and change ranges of these two products were evaluated against available LAI reference maps over 10 sampling regions which represented typical vegetation types and topographic gradients in southwestern China. The results show that GLASS LAI exhibits a higher percentage of good quality data (i.e. successful retrievals) and smoother temporal profiles than MODIS LAI. The percentage of successful retrievals for MODIS and GLASS is sensitive to topographic indices, especially to relief amplitude. Besides, the two products do not capture seasonal dynamics of crops, especially in spring over heterogeneously hilly regions. The yearly mean LAI differences between MODIS and GLASS are within ±0.5 for 64.70% of the total retrieval pixels over southwestern China. The spatial distribution of mean differences and temporal profiles of these two products tend to be dominated by vegetation type rather than topographic indices. The spatial and temporal consistency of these two products is good over most areas of grasses/cereal crops; however, it is poor for evergreen broadleaf forest. MODIS presents a more reliable LAI change range than GLASS through comparison with fine resolution reference maps over most sampling regions. The accuracies of direct validation are obtained for GLASS LAI (r = 0.35, RMSE = 1.72, mean bias = -0.71) and MODIS LAI (r = 0.49, RMSE = 1.75, mean bias = -0.67). GLASS performs similarly to MODIS, but may be marginally inferior to MODIS based on our direct validation results. The validation experience demonstrates the necessity and importance of topographic consideration for LAI estimation over mountain areas. Considerable attention will be paid to the improvements of surface reflectance, retrieval algorithm and land cover types so as to enhance the quality of LAI products in topographically complex terrain.
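
    The direct-validation statistics quoted above (r, RMSE, mean bias against reference maps) follow from a few lines of array arithmetic; a generic sketch with hypothetical arrays is shown below.

      import numpy as np

      def validation_stats(product_lai, reference_lai):
          """Correlation, RMSE and mean bias of a LAI product against reference-map values."""
          d = product_lai - reference_lai
          r = np.corrcoef(product_lai, reference_lai)[0, 1]
          rmse = np.sqrt(np.mean(d ** 2))
          bias = d.mean()
          return r, rmse, bias

      # Hypothetical co-located samples from a product and a fine-resolution reference map
      rng = np.random.default_rng(0)
      reference_lai = rng.uniform(0.5, 6.0, 200)
      product_lai = 0.8 * reference_lai + rng.normal(0, 1.0, 200)   # an underestimating, noisy product
      print(validation_stats(product_lai, reference_lai))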

  1. Sharing reference data and including cows in the reference population improve genomic predictions in Danish Jersey.

    PubMed

    Su, G; Ma, P; Nielsen, U S; Aamand, G P; Wiggans, G; Guldbrandtsen, B; Lund, M S

    2016-06-01

    Small reference populations limit the accuracy of genomic prediction in numerically small breeds, such as the Danish Jersey. The objective of this study was to investigate two approaches to improve genomic prediction by increasing the size of the reference population in Danish Jersey. The first approach was to include North American Jersey bulls in the Danish Jersey reference population. The second was to genotype cows and use them as reference animals. The validation of genomic prediction was carried out on bulls and cows, respectively. In validation on bulls, about 300 Danish bulls (depending on traits) born in 2005 and later were used as validation data, and the reference populations were: (1) about 1050 Danish bulls, (2) about 1050 Danish bulls and about 1150 US bulls. In validation on cows, about 3000 Danish cows from 87 young half-sib families were used as validation data, and the reference populations were: (1) about 1250 Danish bulls, (2) about 1250 Danish bulls and about 1150 US bulls, (3) about 1250 Danish bulls and about 4800 cows, (4) about 1250 Danish bulls, 1150 US bulls and 4800 Danish cows. A genomic best linear unbiased prediction (GBLUP) model was used to predict breeding values. De-regressed proofs were used as response variables. In the validation on bulls for eight traits, the joint DK-US bull reference population led to higher reliability of genomic prediction than the DK bull reference population for six traits, but not for fertility and longevity. Averaged over the eight traits, the gain was 3 percentage points. In the validation on cows for six traits (fertility and longevity were not available), the gain from inclusion of US bulls in the reference population was 6.6 percentage points on average over the six traits, and the gain from inclusion of cows was 8.2 percentage points. However, the gains from cows and US bulls were not additive. The total gain of including both US bulls and Danish cows was 10.5 percentage points. The results indicate that sharing reference data and including cows in the reference population are efficient approaches to increase the reliability of genomic prediction. Therefore, genomic selection is promising for numerically small populations.
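
    A minimal GBLUP sketch is given below: a VanRaden-style genomic relationship matrix is built from SNP genotypes, and breeding values for all animals are predicted from the reference animals' de-regressed proofs. The heritability, the synthetic data, and the simple treatment of the mean are illustrative assumptions, not the study's model settings.

      import numpy as np

      def vanraden_g(M):
          """Genomic relationship matrix from genotypes M coded 0/1/2 (animals x SNPs)."""
          p = M.mean(axis=0) / 2.0
          Z = M - 2.0 * p
          return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

      def gblup_predict(y_ref, G, ref_idx, h2=0.30):
          """Predict genomic breeding values for all animals from reference phenotypes (e.g. de-regressed proofs)."""
          lam = (1.0 - h2) / h2                                  # residual-to-genetic variance ratio
          G_rr = G[np.ix_(ref_idx, ref_idx)]
          rhs = np.linalg.solve(G_rr + lam * np.eye(len(ref_idx)), y_ref - y_ref.mean())
          return G[:, ref_idx] @ rhs + y_ref.mean()

      # Tiny synthetic example: 200 reference + 50 validation animals, 1000 SNPs
      rng = np.random.default_rng(0)
      M = rng.binomial(2, 0.3, size=(250, 1000)).astype(float)
      true_bv = (M - M.mean(axis=0)) @ rng.normal(0, 0.05, 1000)
      y = true_bv + rng.normal(0, true_bv.std(), 250)            # noisy phenotypes
      ref_idx = np.arange(200)
      gebv = gblup_predict(y[ref_idx], vanraden_g(M), ref_idx)
      print("validation accuracy:", np.corrcoef(gebv[200:], true_bv[200:])[0, 1].round(2))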

  2. A land-cover map for South and Southeast Asia derived from SPOT-VEGETATION data

    USGS Publications Warehouse

    Stibig, H.-J.; Belward, A.S.; Roy, P.S.; Rosalina-Wasrin, U.; Agrawal, S.; Joshi, P.K.; ,; Beuchle, R.; Fritz, S.; Mubareka, S.; Giri, C.

    2007-01-01

    Aim  Our aim was to produce a uniform ‘regional’ land-cover map of South and Southeast Asia based on ‘sub-regional’ mapping results generated in the context of the Global Land Cover 2000 project.Location  The ‘region’ of tropical and sub-tropical South and Southeast Asia stretches from the Himalayas and the southern border of China in the north, to Sri Lanka and Indonesia in the south, and from Pakistan in the west to the islands of New Guinea in the far east.Methods  The regional land-cover map is based on sub-regional digital mapping results derived from SPOT-VEGETATION satellite data for the years 1998–2000. Image processing, digital classification and thematic mapping were performed separately for the three sub-regions of South Asia, continental Southeast Asia, and insular Southeast Asia. Landsat TM images, field data and existing national maps served as references. We used the FAO (Food and Agriculture Organization) Land Cover Classification System (LCCS) for coding the sub-regional land-cover classes and for aggregating the latter to a uniform regional legend. A validation was performed based on a systematic grid of sample points, referring to visual interpretation from high-resolution Landsat imagery. Regional land-cover area estimates were obtained and compared with FAO statistics for the categories ‘forest’ and ‘cropland’.Results  The regional map displays 26 land-cover classes. The LCCS coding provided a standardized class description, independent from local class names; it also allowed us to maintain the link to the detailed sub-regional land-cover classes. The validation of the map displayed a mapping accuracy of 72% for the dominant classes of ‘forest’ and ‘cropland’; regional area estimates for these classes correspond reasonably well to existing regional statistics.Main conclusions  The land-cover map of South and Southeast Asia provides a synoptic view of the distribution of land cover of tropical and sub-tropical Asia, and it delivers reasonable thematic detail and quantitative estimates of the main land-cover proportions. The map may therefore serve for regional stratification or modelling of vegetation cover, but could also support the implementation of forest policies, watershed management or conservation strategies at regional scales.

  3. Fusion of Three-Dimensional Echocardiographic Regional Myocardial Strain with Cardiac Computed Tomography for Noninvasive Evaluation of the Hemodynamic Impact of Coronary Stenosis in Patients with Chest Pain.

    PubMed

    Mor-Avi, Victor; Patel, Mita B; Maffessanti, Francesco; Singh, Amita; Medvedofsky, Diego; Zaidi, S Javed; Mediratta, Anuj; Narang, Akhil; Nazir, Noreen; Kachenoura, Nadjia; Lang, Roberto M; Patel, Amit R

    2018-06-01

    Combined evaluation of coronary stenosis and the extent of ischemia is essential in patients with chest pain. Intermediate-grade stenosis on computed tomographic coronary angiography (CTCA) frequently triggers downstream nuclear stress testing. Alternative approaches without stress and/or radiation may have important implications. Myocardial strain measured from echocardiographic images can be used to detect subclinical dysfunction. The authors recently tested the feasibility of fusion of three-dimensional (3D) echocardiography-derived regional resting longitudinal strain with coronary arteries from CTCA to determine the hemodynamic significance of stenosis. The aim of the present study was to validate this approach against accepted reference techniques. Seventy-eight patients with chest pain referred for CTCA who also underwent 3D echocardiography and regadenoson stress computed tomography were prospectively studied. Left ventricular longitudinal strain data (TomTec) were used to generate fused 3D displays and detect resting strain abnormalities (RSAs) in each coronary territory. Computed tomographic coronary angiographic images were interpreted for the presence and severity of stenosis. Fused 3D displays of subendocardial x-ray attenuation were created to detect stress perfusion defects (SPDs). In patients with stenosis >25% in at least one artery, fractional flow reserve was quantified (HeartFlow). RSA as a marker of significant stenosis was validated against two different combined references: stenosis >50% on CTCA and SPDs seen in the same territory (reference standard A) and fractional flow reserve < 0.80 and SPDs in the same territory (reference standard B). Of the 99 arteries with no stenosis >50% and no SPDs, considered as normal, 19 (19%) had RSAs. Conversely, with stenosis >50% and SPDs, RSAs were considerably more frequent (17 of 24 [71%]). The sensitivity, specificity, and accuracy of RSA were 0.71, 0.81, and 0.79, respectively, against reference standard A and 0.83, 0.81, and 0.82 against reference standard B. Fusion of CTCA and 3D echocardiography-derived resting myocardial strain provides combined displays, which may be useful in determination of the hemodynamic or functional impact of coronary abnormalities, without additional ionizing radiation or stress testing. Copyright © 2018 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.

  4. CLSI-derived hematology and biochemistry reference intervals for healthy adults in eastern and southern Africa.

    PubMed

    Karita, Etienne; Ketter, Nzeera; Price, Matt A; Kayitenkore, Kayitesi; Kaleebu, Pontiano; Nanvubya, Annet; Anzala, Omu; Jaoko, Walter; Mutua, Gaudensia; Ruzagira, Eugene; Mulenga, Joseph; Sanders, Eduard J; Mwangome, Mary; Allen, Susan; Bwanika, Agnes; Bahemuka, Ubaldo; Awuondo, Ken; Omosa, Gloria; Farah, Bashir; Amornkul, Pauli; Birungi, Josephine; Yates, Sarah; Stoll-Johnson, Lisa; Gilmour, Jill; Stevens, Gwynn; Shutes, Erin; Manigart, Olivier; Hughes, Peter; Dally, Len; Scott, Janet; Stevens, Wendy; Fast, Pat; Kamali, Anatoli

    2009-01-01

    Clinical laboratory reference intervals have not been established in many African countries, and non-local intervals are commonly used in clinical trials to screen and monitor adverse events (AEs) among African participants. Using laboratory reference intervals derived from other populations excludes potential trial volunteers in Africa and makes AE assessment challenging. The objective of this study was to establish clinical laboratory reference intervals for 25 hematology, immunology and biochemistry values among healthy African adults typical of those who might join a clinical trial. Equal proportions of men and women were invited to participate in a cross-sectional study at seven clinical centers (Kigali, Rwanda; Masaka and Entebbe, Uganda; two in Nairobi and one in Kilifi, Kenya; and Lusaka, Zambia). All laboratories used hematology, immunology and biochemistry analyzers validated by an independent clinical laboratory. Clinical and Laboratory Standards Institute guidelines were followed to create study consensus intervals. For comparison, AE grading criteria published by the U.S. National Institute of Allergy and Infectious Diseases Division of AIDS (DAIDS) and other U.S. reference intervals were used. 2,990 potential volunteers were screened, and 2,105 (1,083 men and 1,022 women) were included in the analysis. While some significant gender and regional differences were observed, creating consensus African study intervals from the complete data was possible for 18 of the 25 analytes. Compared to reference intervals from the U.S., we found lower hematocrit and hemoglobin levels, particularly among women, lower white blood cell and neutrophil counts, and lower amylase. Both genders had elevated eosinophil counts, immunoglobulin G, total and direct bilirubin, lactate dehydrogenase and creatine phosphokinase, the latter being more pronounced among women. When graded against U.S.-derived DAIDS AE grading criteria, we observed 774 (35.3%) volunteers with grade one or higher results; 314 (14.9%) had elevated total bilirubin, and 201 (9.6%) had low neutrophil counts. These otherwise healthy volunteers would be excluded or would require special exemption to participate in many clinical trials. To accelerate clinical trials in Africa, and to improve their scientific validity, locally appropriate reference ranges should be used. This study provides ranges that will inform inclusion criteria and evaluation of adverse events for studies in these regions of Africa.

  5. Assessment of the Validity of the Research Diagnostic Criteria for Temporomandibular Disorders: Overview and Methodology

    PubMed Central

    Schiffman, Eric L.; Truelove, Edmond L.; Ohrbach, Richard; Anderson, Gary C.; John, Mike T.; List, Thomas; Look, John O.

    2011-01-01

    AIMS The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. An overview is presented, including Axis I and II methodology and descriptive statistics for the study participant sample. This paper details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. Validity testing for the Axis II biobehavioral instruments was based on previously validated reference standards. METHODS The Axis I reference standards were based on the consensus of 2 criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion exam reliability was also assessed within study sites. RESULTS Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion exam agreement with reference standards was excellent (k ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). CONCLUSION The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods. PMID:20213028

  6. How is the surface Atlantic water inflow through the Gibraltar Strait forecasted? A lagrangian validation of operational oceanographic services in the Alboran Sea and the Western Mediterranean

    NASA Astrophysics Data System (ADS)

    Sotillo, M. G.; Amo-Baladrón, A.; Padorno, E.; Garcia-Ladona, E.; Orfila, A.; Rodríguez-Rubio, P.; Conti, D.; Madrid, J. A. Jiménez; de los Santos, F. J.; Fanjul, E. Alvarez

    2016-11-01

    An exhaustive validation of some of the operational ocean forecast products available in the Gibraltar Strait and the Alboran Sea is here presented. The skill of two ocean model solutions (derived from two Eulerian ocean forecast systems, namely the regional CMEMS IBI and the high-resolution PdE SAMPA) in reproducing the complex surface dynamics in the above areas is evaluated. To this aim, in-situ measurements from the MEDESS-GIB drifter buoy database (comprising the Lagrangian positions, derived velocities and SST values) are used as the observational reference and the temporal coverage for the validation is 3 months (September to December 2014). Two metrics, a Lagrangian separation distance and a skill score, have been applied to evaluate the performance of the modelling systems in reproducing the observed trajectories. Furthermore, in addition to the in-situ comparison, the modelled SST is validated against L3 satellite SST products. The Copernicus regional IBI products are evaluated in an extended domain, beyond the Alboran Sea, and covering western Mediterranean waters. This analysis reveals some strengths of the presented regional solution (i.e. realistic values of the Atlantic Jet in the Strait of Gibraltar area, realistic simulation of the Algerian Current). However, some shortcomings are also identified, with the major one being related to the simulated geographical position and intensity of the Alboran Gyres, particularly the western one. This performance limitation affects the IBI-modelled surface circulation in the entire Alboran Sea. On the other hand, the SAMPA system shows a more accurate model performance and it realistically reproduces the observed surface circulation in the area. The results reflect the effectiveness of the dynamical downscaling performed through the SAMPA system with respect to the regional IBI solution (in which SAMPA is nested), providing an objective measure of the potential added values introduced by the SAMPA downscaling solution in the Alboran Sea.

  7. Sea Temperature Fiducial Reference Measurements for the Validation and Data Gap Bridging of Satellite SST Data Products

    NASA Astrophysics Data System (ADS)

    Wimmer, Werenfrid

    2016-08-01

    The Infrared Sea surface temperature Autonomous Radiometer (ISAR) was developed to provide reference data for the validation of satellite Sea Surface Temperature at the Skin interface (SSTskin) temperature data products, particularly the Advanced Along Track Scanning Radiometer (AATSR). Since March 2004 ISAR instruments have been deployed nearly continuously on ferries crossing the English Channel and the Bay of Biscay, between Portsmouth (UK) and Bilbao/Santander (Spain). The resulting twelve years of ISAR data, including an individual uncertainty estimate for each SST record, are calibrated with traceability to national standards (National Institute of Standards and Technology, USA (NIST) and National Physical Laboratory, Teddington, UK (NPL), Fiducial Reference Measurements for satellite derived surface temperature product validation (FRM4STS)). They provide a unique independent in situ reference dataset against which to validate satellite derived products. We present results of the AATSR validation, and show the use of ISAR fiducial reference measurements as a common traceable validation data source for both AATSR and Sea and Land Surface Temperature Radiometer (SLSTR). ISAR data were also used to review performance of the Operational Sea Surface Temperature and Sea Ice Analysis (OSTIA) Sea Surface Temperature (SST) analysis before and after the demise of ESA Environmental Satellite (Envisat) when AATSR inputs ceased. This demonstrates use of the ISAR reference data set for validating the SST climatologies that will bridge the data gap between AATSR and SLSTR.

  8. Validation of Student and Parent Reported Data on the Basic Grant Application Form, 1978-79 Comprehensive Validation Guide. Procedural Manual for: Validation of Cases Referred by Institutions; Validation of Cases Referred by the Office of Education; Recovery of Overpayments.

    ERIC Educational Resources Information Center

    Smith, Karen; And Others

    Procedures for validating data reported by students and parents on an application for Basic Educational Opportunity Grants were developed in 1978 for the U.S. Office of Education (OE). Validation activities include: validation of flagged Student Eligibility Reports (SERs) for students whose schools are part of the Alternate Disbursement System;…

  9. Balloon Borne Soundings of Water Vapor, Ozone and Temperature in the Upper Tropospheric and Lower Stratosphere as Part of the Second SAGE III Ozone Loss and Validation Experiment (SOLVE-2)

    NASA Technical Reports Server (NTRS)

    Voemel, Holger

    2004-01-01

    The main goal of our work was to provide in situ water vapor and ozone profiles in the upper troposphere and lower stratosphere as reference measurements for the validation of SAGE III water vapor and ozone retrievals. We used the NOAA/CMDL frost point hygrometer and ECC ozone sondes on small research balloons to provide continuous profiles between the surface and the mid stratosphere. The NOAA/CMDL frost point hygrometer is currently the only lightweight balloon borne instrument capable of measuring water vapor between the lower troposphere and middle stratosphere. The validation measurements were based in the arctic region of Scandinavia for northern hemisphere observations and in New Zealand for southern hemisphere observations and timed to coincide with overpasses of the SAGE III instrument. In addition to SAGE III validation we also tried to coordinate launches with other instruments and studied dehydration and transport processes in the Arctic stratospheric vortex.

  10. Impact of the choice of the precipitation reference data set on climate model selection and the resulting climate change signal

    NASA Astrophysics Data System (ADS)

    Gampe, D.; Ludwig, R.

    2017-12-01

    Regional Climate Models (RCMs) that downscale General Circulation Models (GCMs) are the primary tool to project future climate and serve as input to many impact models to assess the related changes and impacts under such climate conditions. Such RCMs are made available through the Coordinated Regional climate Downscaling Experiment (CORDEX). The ensemble of models provides a range of possible future climate changes around the ensemble mean climate change signal. The model outputs, however, are prone to biases compared to regional observations. A bias correction of these deviations is a crucial step in the impact modelling chain to allow the reproduction of historic conditions of, e.g., river discharge. However, the detection and quantification of model biases are highly dependent on the selected regional reference data set. Additionally, in practice due to computational constraints it is usually not feasible to consider the entire ensembles of climate simulations with all members as input for impact models which provide information to support decision-making. Although more and more studies focus on model selection based on the preservation of the climate model spread, a selection based on validity, i.e. the representation of the historic conditions, is still a widely applied approach. In this study, several available reference data sets for precipitation are selected to detect the model bias for the reference period 1989 - 2008 over the alpine catchment of the Adige River located in Northern Italy. The reference data sets originate from various sources, such as station data or reanalysis. These data sets are remapped to the common RCM grid at 0.11° resolution and several indicators, such as dry and wet spells, extreme precipitation and general climatology, are calculated to evaluate the capability of the RCMs to produce the historical conditions. The resulting RCM spread is compared against the spread of the reference data set to determine the related uncertainties and detect potential model biases with respect to each reference data set. The RCMs are then ranked based on various statistical measures for each indicator and a score matrix is derived to select a subset of RCMs. We show the impact and importance of the reference data set with respect to the resulting climate change signal on the catchment scale.
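
    To make the ranking step concrete, the following is a minimal sketch (not the study's actual scoring scheme) of how per-indicator errors against one chosen reference data set could be turned into a rank-based score matrix for selecting a subset of RCMs; the model names and error values are placeholders.

```python
import numpy as np

# Hypothetical error table: rows = RCMs, columns = indicators
# (e.g. wet-spell bias, extreme-precipitation bias, mean climatology bias),
# each entry an absolute error against one chosen reference data set.
rcm_names = ["RCM-A", "RCM-B", "RCM-C", "RCM-D"]
errors = np.array([
    [0.12, 0.30, 0.05],
    [0.40, 0.10, 0.20],
    [0.08, 0.25, 0.15],
    [0.22, 0.18, 0.02],
])

# Rank models per indicator (1 = smallest error), then sum ranks into a score.
ranks = errors.argsort(axis=0).argsort(axis=0) + 1   # per-column ranks
score = ranks.sum(axis=1)                            # total score per model

# The subset with the lowest total score would be retained as impact-model input.
for i in np.argsort(score):
    print(f"{rcm_names[i]}: ranks={ranks[i].tolist()}, total score={score[i]}")
```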

  11. Plans and progress for building a Great Lakes fauna DNA ...

    EPA Pesticide Factsheets

    DNA reference libraries provide researchers with an important tool for assessing regional biodiversity by allowing unknown genetic sequences to be assigned identities, while also providing a means for taxonomists to validate identifications. Expanding the representation of Great Lakes species in such reference libraries is an explicit component of research at EPA’s Mid-Continent Ecology Division. Our DNA reference library building efforts began in 2012 with the goal of providing barcodes for at least 5 specimens of each native and nonindigenous fish and aquatic invertebrate species currently present in the Great Lakes. The approach is to pull taxonomically validated specimens for sequencing from EPA-led sampling efforts of adult/juvenile fish, larval fish, benthic macroinvertebrates, and zooplankton; while also soliciting aid from state and federal agencies for tissue from “shopping list” organisms. The barcodes we generate are made available through the publicly accessible BOLD (Barcode of Life) database, and help inform a planned Great Lakes biodiversity inventory. To date, our submissions to BOLD are limited to fishes; of the 88 fish species listed as being present within Lake Superior, roughly half were successfully barcoded, while only 22 species met the desired quota of 5 barcoded specimens per species. As we continue to generate genomic information from our collections and the taxonomic representations become more complete, we will continue to

  12. Suitability of [18F]altanserin and PET to determine 5-HT2A receptor availability in the rat brain: in vivo and in vitro validation of invasive and non-invasive kinetic models.

    PubMed

    Kroll, Tina; Elmenhorst, David; Matusch, Andreas; Wedekind, Franziska; Weisshaupt, Angela; Beer, Simone; Bauer, Andreas

    2013-08-01

    While the selective 5-hydroxytryptamine type 2a receptor (5-HT2AR) radiotracer [18F]altanserin is well established in humans, the present study evaluated its suitability for quantifying cerebral 5-HT2ARs with positron emission tomography (PET) in albino rats. Ten Sprague Dawley rats underwent 180 min PET scans with arterial blood sampling. Reference tissue methods were evaluated on the basis of invasive kinetic models with metabolite-corrected arterial input functions. In vivo 5-HT2AR quantification with PET was validated by in vitro autoradiographic saturation experiments in the same animals. Overall brain uptake of [18F]altanserin was reliably quantified by invasive and non-invasive models with the cerebellum as reference region shown by linear correlation of outcome parameters. Unlike in humans, no lipophilic metabolites occurred so that brain activity derived solely from parent compound. PET data correlated very well with in vitro autoradiographic data of the same animals. [18F]Altanserin PET is a reliable tool for in vivo quantification of 5-HT2AR availability in albino rats. Models based on both blood input and reference tissue describe radiotracer kinetics adequately. Low cerebral tracer uptake might, however, cause restrictions in experimental usage.

  13. Film Excerpts Shown to Specifically Elicit Various Affects Lead to Overlapping Activation Foci in a Large Set of Symmetrical Brain Regions in Males

    PubMed Central

    Karama, Sherif; Armony, Jorge; Beauregard, Mario

    2011-01-01

    While the limbic system theory continues to be part of common scientific parlance, its validity has been questioned on multiple grounds. Nonetheless, the issue of whether or not there exists a set of brain areas preferentially dedicated to emotional processing remains central within affective neuroscience. Recently, a widespread neural reference space for emotion which includes limbic as well as other regions was characterized in a large meta-analysis. As methodologically heterogeneous studies go into such meta-analyses, showing in an individual study in which all parameters are kept constant, the involvement of overlapping areas for various emotion conditions in keeping with the neural reference space for emotion, would serve as valuable confirmatory evidence. Here, using fMRI, 20 young adult men were scanned while viewing validated neutral and effective emotion-eliciting short film excerpts shown to quickly and specifically elicit disgust, amusement, or sexual arousal. Each emotion-specific run included, in random order, multiple neutral and emotion condition blocks. A stringent conjunction analysis revealed a large overlap across emotion conditions that fit remarkably well with the neural reference space for emotion. This overlap included symmetrical bilateral activation of the medial prefrontal cortex, the anterior cingulate, the temporo-occipital junction, the basal ganglia, the brainstem, the amygdala, the hippocampus, the thalamus, the subthalamic nucleus, the posterior hypothalamus, the cerebellum, as well as the frontal operculum extending towards the anterior insula. This study clearly confirms for the visual modality, that processing emotional stimuli leads to widespread increases in activation that cluster within relatively confined areas, regardless of valence. PMID:21818311

  14. Automated Cervical Screening and Triage, Based on HPV Testing and Computer-Interpreted Cytology.

    PubMed

    Yu, Kai; Hyun, Noorie; Fetterman, Barbara; Lorey, Thomas; Raine-Bennett, Tina R; Zhang, Han; Stamps, Robin E; Poitras, Nancy E; Wheeler, William; Befano, Brian; Gage, Julia C; Castle, Philip E; Wentzensen, Nicolas; Schiffman, Mark

    2018-04-11

    State-of-the-art cervical cancer prevention includes human papillomavirus (HPV) vaccination among adolescents and screening/treatment of cervical precancer (CIN3/AIS and, less strictly, CIN2) among adults. HPV testing provides sensitive detection of precancer but, to reduce overtreatment, secondary "triage" is needed to predict women at highest risk. Those with the highest-risk HPV types or abnormal cytology are commonly referred to colposcopy; however, expert cytology services are critically lacking in many regions. To permit completely automatable cervical screening/triage, we designed and validated a novel triage method, a cytologic risk score algorithm based on computer-scanned liquid-based slide features (FocalPoint, BD, Burlington, NC). We compared it with abnormal cytology in predicting precancer among 1839 women testing HPV positive (HC2, Qiagen, Germantown, MD) in 2010 at Kaiser Permanente Northern California (KPNC). Precancer outcomes were ascertained by record linkage. As additional validation, we compared the algorithm prospectively with cytology results among 243 807 women screened at KPNC (2016-2017). All statistical tests were two-sided. Among HPV-positive women, the algorithm matched the triage performance of abnormal cytology. Combined with HPV16/18/45 typing (Onclarity, BD, Sparks, MD), the automatable strategy referred 91.7% of HPV-positive CIN3/AIS cases to immediate colposcopy while deferring 38.4% of all HPV-positive women to one-year retesting (compared with 89.1% and 37.4%, respectively, for typing and cytology triage). In the 2016-2017 validation, the predicted risk scores strongly correlated with cytology (P < .001). High-quality cervical screening and triage performance is achievable using this completely automated approach. Automated technology could permit extension of high-quality cervical screening/triage coverage to currently underserved regions.

  15. Identification and validation of reference genes for qRT-PCR studies of the obligate aphid pathogenic fungus Pandora neoaphidis during different developmental stages.

    PubMed

    Zhang, Shutao; Chen, Chun; Xie, Tingna; Ye, Sudan

    2017-01-01

    The selection of stable reference genes is a critical step for the accurate quantification of gene expression. To identify and validate the reference genes in Pandora neoaphidis, an obligate aphid pathogenic fungus, the expression of 13 classical candidate reference genes was evaluated by quantitative real-time reverse transcriptase polymerase chain reaction (qPCR) at four developmental stages (conidia, conidia with germ tubes, short hyphae and elongated hyphae). Four statistical algorithms, geNorm, NormFinder, BestKeeper and the Delta Ct method, were used to rank putative reference genes according to their expression stability and indicate the best reference gene or combination of reference genes for accurate normalization. The comprehensive ranking revealed that ACT1 and 18S were the most stably expressed genes throughout the developmental stages. To further validate the suitability of the reference genes identified in this study, the expression of the cell division control protein 25 (CDC25) and Chitinase 1 (CHI1) genes was used to confirm the validated candidate reference genes. Our study presents the first systematic selection of reference gene(s) for P. neoaphidis and provides guidelines to obtain more accurate qPCR results for future developmental efforts.
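
    As an illustration of one of the four algorithms named above, the sketch below computes comparative Delta Ct stability in its usual formulation (mean standard deviation of pairwise Cq differences across samples); the Cq values are invented and this is not the authors' code.

```python
import numpy as np

def delta_ct_stability(ct):
    """ct: 2-D array of Cq values, shape (n_samples, n_genes).
    Returns one stability value per gene: the mean standard deviation of its
    pairwise delta-Ct with every other candidate (lower = more stable)."""
    n_genes = ct.shape[1]
    stability = np.zeros(n_genes)
    for i in range(n_genes):
        sds = [np.std(ct[:, i] - ct[:, j], ddof=1)
               for j in range(n_genes) if j != i]
        stability[i] = np.mean(sds)
    return stability

# Hypothetical Cq values for 4 samples x 3 candidate genes.
ct = np.array([[20.1, 22.3, 25.0],
               [20.4, 22.8, 24.1],
               [19.9, 22.1, 25.6],
               [20.2, 22.5, 24.8]])
print(delta_ct_stability(ct))   # smallest value -> most stable candidate
```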

  16. Preliminary Axial Flow Turbine Design and Off-Design Performance Analysis Methods for Rotary Wing Aircraft Engines. Part 1; Validation

    NASA Technical Reports Server (NTRS)

    Chen, Shu-cheng, S.

    2009-01-01

    For the preliminary design and the off-design performance analysis of axial flow turbines, a pair of intermediate level-of-fidelity computer codes, TD2-2 (design; reference 1) and AXOD (off-design; reference 2), are being evaluated for use in turbine design and performance prediction of the modern high performance aircraft engines. TD2-2 employs a streamline curvature method for design, while AXOD approaches the flow analysis with an equal radius-height domain decomposition strategy. Both methods resolve only the flows in the annulus region while modeling the impact introduced by the blade rows. The mathematical formulations and derivations involved in both methods are documented in references 3 and 4 (for TD2-2) and in reference 5 (for AXOD). The focus of this paper is to discuss the fundamental issues of applicability and compatibility of the two codes as a pair of companion pieces, to perform preliminary design and off-design analysis for modern aircraft engine turbines. Two validation cases for the design and the off-design prediction using TD2-2 and AXOD conducted on two existing high efficiency turbines, developed and tested in the NASA/GE Energy Efficient Engine (GE-E3) Program, the High Pressure Turbine (HPT; two stages, air cooled) and the Low Pressure Turbine (LPT; five stages, un-cooled), are provided in support of the analysis and discussion presented in this paper.

  17. Inter‐station intensity standardization for whole‐body MR data

    PubMed Central

    Staring, Marius; Reijnierse, Monique; Lelieveldt, Boudewijn P. F.; van der Geest, Rob J.

    2016-01-01

    Purpose To develop and validate a method for performing inter‐station intensity standardization in multispectral whole‐body MR data. Methods Different approaches for mapping the intensity of each acquired image stack into the reference intensity space were developed and validated. The registration strategies included: “direct” registration to the reference station (Strategy 1), “progressive” registration to the neighboring stations without (Strategy 2), and with (Strategy 3) using information from the overlap regions of the neighboring stations. For Strategy 3, two regularized modifications were proposed and validated. All methods were tested on two multispectral whole‐body MR data sets: a multiple myeloma patients data set (48 subjects) and a whole‐body MR angiography data set (33 subjects). Results For both data sets, all strategies showed significant improvement of intensity homogeneity with respect to vast majority of the validation measures (P < 0.005). Strategy 1 exhibited the best performance, closely followed by Strategy 2. Strategy 3 and its modifications were performing worse, in majority of the cases significantly (P < 0.05). Conclusions We propose several strategies for performing inter‐station intensity standardization in multispectral whole‐body MR data. All the strategies were successfully applied to two types of whole‐body MR data, and the “direct” registration strategy was concluded to perform the best. Magn Reson Med 77:422–433, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine PMID:26834001
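
    As an illustration of the intensity-mapping idea only (not the registration-based strategies actually compared in the study), a simple least-squares linear intensity map estimated from the overlap region of two neighboring stations could look like the sketch below; all array values are placeholders.

```python
import numpy as np

def linear_intensity_map(overlap_moving, overlap_reference):
    """Fit I_ref ~= a * I_mov + b on voxels from the overlap region of two
    neighbouring stations (a crude stand-in for the registration-based
    intensity standardization strategies evaluated in the paper)."""
    a, b = np.polyfit(overlap_moving.ravel(), overlap_reference.ravel(), deg=1)
    return a, b

# Apply the fitted map to the whole moving station so its intensities live
# in the reference station's intensity space.
rng = np.random.default_rng(0)
ref_overlap = rng.normal(100, 20, size=(8, 8))
mov_overlap = 0.8 * ref_overlap + 10 + rng.normal(0, 2, size=(8, 8))
a, b = linear_intensity_map(mov_overlap, ref_overlap)
moving_station = rng.normal(90, 18, size=(16, 16))
standardized = a * moving_station + b
print(round(a, 2), round(b, 2))
```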

  18. Gravastars in f (G ,T ) gravity

    NASA Astrophysics Data System (ADS)

    Shamir, M. Farasat; Ahmad, Mushtaq

    2018-05-01

    This work proposes a stellar model under Gauss-Bonnet f (G ,T ) gravity with the conjecture theorized by Mazur and Mottola, well known as the gravitational vacuum stars (gravastars). By taking into account the f (G ,T ) stellar model, the structure of the gravastar with its exclusive division of three different regions, namely, (i) the core interior region, (ii) the junction region (shell), and (iii) the exterior region, has been investigated with reference to the existence of energy density, pressure, ultrarelativistic plasma, and repulsive forces. The different physical features, like the equation of state parameter, length of the shell, entropy, and energy-thickness relation of the gravastar shell model, have been discussed. Also, some other physically valid aspects have been presented with the connection to nonsingular and event-horizon-free gravastar solutions, which in contrast to a black hole solution, might be stable without containing any information paradox.

  19. A Comparative Reference Study for the Validation of HLA-Matching Algorithms in the Search for Allogeneic Hematopoietic Stem Cell Donors and Cord Blood Units

    DTIC Science & Technology

    2016-08-15

    A comparative reference study for the validation of HLA-matching algorithms in the search for allogeneic hematopoietic stem cell... from different international donor registries by challenging them with simulated input data and subsequently comparing the output. This experiment...

  20. Five-level emergency triage systems: variation in assessment of validity.

    PubMed

    Kuriyama, Akira; Urushidani, Seigo; Nakayama, Takeo

    2017-11-01

    Triage systems are scales developed to rate the degree of urgency among patients who arrive at EDs. A number of different scales are in use; however, the way in which they have been validated is inconsistent. Also, it is difficult to define a surrogate that accurately predicts urgency. This systematic review described reference standards and measures used in previous validation studies of five-level triage systems. We searched PubMed, EMBASE and CINAHL to identify studies that had assessed the validity of five-level triage systems and described the reference standards and measures applied in these studies. Studies were divided into those using criterion validity (reference standards developed by expert panels or triage systems already in use) and those using construct validity (prognosis, costs and resource use). A total of 57 studies examined criterion and construct validity of 14 five-level triage systems. Criterion validity was examined by evaluating (1) agreement between the assigned degree of urgency with objective standard criteria (12 studies), (2) overtriage and undertriage (9 studies) and (3) sensitivity and specificity of triage systems (7 studies). Construct validity was examined by looking at (4) the associations between the assigned degree of urgency and measures gauged in EDs (48 studies) and (5) the associations between the assigned degree of urgency and measures gauged after hospitalisation (13 studies). Particularly, among 46 validation studies of the most commonly used triages (Canadian Triage and Acuity Scale, Emergency Severity Index and Manchester Triage System), 13 and 39 studies examined criterion and construct validity, respectively. Previous studies applied various reference standards and measures to validate five-level triage systems. They either created their own reference standard or used a combination of severity/resource measures. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  1. Joint Transform Correlation for face tracking: elderly fall detection application

    NASA Astrophysics Data System (ADS)

    Katz, Philippe; Aron, Michael; Alfalou, Ayman

    2013-03-01

    In this paper, an iterative tracking algorithm based on a non-linear JTC (Joint Transform Correlator) architecture and enhanced by a digital image processing method is proposed and validated. This algorithm is based on the computation of a correlation plane where the reference image is updated at each frame. For that purpose, we use the JTC technique in real time to track a patient (target image) in a room fitted with a video camera. The correlation plane is used to localize the target image in the current video frame (frame i). Then, the reference image to be exploited in the next frame (frame i+1) is updated according to the previous one (frame i). In an effort to validate our algorithm, our work is divided into two parts: (i) a large study based on different sequences with several situations and different JTC parameters is carried out in order to quantify their effects on the tracking performances (decimation, non-linearity coefficient, size of the correlation plane, size of the region of interest...). (ii) the tracking algorithm is integrated into an application of elderly fall detection. The first reference image is a face detected by means of Haar descriptors, and then localized into the new video image thanks to our tracking method. In order to avoid a bad update of the reference frame, a method based on a comparison of image intensity histograms is proposed and integrated in our algorithm. This step ensures a robust tracking of the reference frame. This article focuses on the optimisation and evaluation of the face tracking step. A supplementary step of fall detection, based on vertical acceleration and position, will be added and studied in further work.
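
    A minimal numerical sketch of the underlying correlation step, assuming a classical k-th law nonlinear JTC; the exact architecture, nonlinearity coefficient and plane sizes used in the paper may differ.

```python
import numpy as np

def jtc_correlation_plane(reference, target, k=0.5):
    """Minimal nonlinear JTC sketch: reference and target are 2-D arrays of
    equal shape. Returns the correlation plane; the strongest off-centre
    peaks encode the relative position of the target w.r.t. the reference."""
    h, w = reference.shape
    # Joint input plane: reference and target placed side by side.
    joint = np.zeros((h, 2 * w))
    joint[:, :w] = reference
    joint[:, w:] = target
    jps = np.abs(np.fft.fft2(joint)) ** 2   # joint power spectrum
    jps_nl = jps ** k                       # k-th law nonlinearity
    corr = np.abs(np.fft.ifft2(jps_nl))
    return np.fft.fftshift(corr)

# Tracking loop idea: the patch found in frame i becomes the `reference`
# used to correlate against frame i+1, as described in the abstract.
```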

  2. The impact of registration accuracy on imaging validation study design: A novel statistical power calculation.

    PubMed

    Gibson, Eli; Fenster, Aaron; Ward, Aaron D

    2013-10-01

    Novel imaging modalities are pushing the boundaries of what is possible in medical imaging, but their signal properties are not always well understood. The evaluation of these novel imaging modalities is critical to achieving their research and clinical potential. Image registration of novel modalities to accepted reference standard modalities is an important part of characterizing the modalities and elucidating the effect of underlying focal disease on the imaging signal. The strengths of the conclusions drawn from these analyses are limited by statistical power. Based on the observation that in this context, statistical power depends in part on uncertainty arising from registration error, we derive a power calculation formula relating registration error, number of subjects, and the minimum detectable difference between normal and pathologic regions on imaging, for an imaging validation study design that accommodates signal correlations within image regions. Monte Carlo simulations were used to evaluate the derived models and test the strength of their assumptions, showing that the model yielded predictions of the power, the number of subjects, and the minimum detectable difference of simulated experiments accurate to within a maximum error of 1% when the assumptions of the derivation were met, and characterizing sensitivities of the model to violations of the assumptions. The use of these formulae is illustrated through a calculation of the number of subjects required for a case study, modeled closely after a prostate cancer imaging validation study currently taking place at our institution. The power calculation formulae address three central questions in the design of imaging validation studies: (1) What is the maximum acceptable registration error? (2) How many subjects are needed? (3) What is the minimum detectable difference between normal and pathologic image regions? Copyright © 2013 Elsevier B.V. All rights reserved.
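
    The derived formula itself is not reproduced in this record; as an illustration of the trade-off it addresses, the sketch below estimates power by Monte Carlo under the simplifying assumption that registration error acts as an additional independent noise term on the per-subject normal-vs-pathologic difference. It is not the paper's model, only a stand-in showing how power, subject count and minimum detectable difference interact.

```python
import numpy as np
from scipy import stats

def simulated_power(n_subjects, true_diff, signal_sd, reg_error_sd,
                    alpha=0.05, n_sim=5000, seed=0):
    """Crude Monte Carlo: per subject, the measured normal-vs-pathologic
    difference is the true difference plus imaging noise plus an extra term
    standing in for registration error (an assumption, not the derived
    formula). Power = fraction of simulations in which a one-sample t-test
    on the per-subject differences is significant."""
    rng = np.random.default_rng(seed)
    total_sd = np.sqrt(signal_sd ** 2 + reg_error_sd ** 2)
    hits = 0
    for _ in range(n_sim):
        diffs = rng.normal(true_diff, total_sd, size=n_subjects)
        _, p = stats.ttest_1samp(diffs, 0.0)
        hits += p < alpha
    return hits / n_sim

# Larger registration error -> lower power for the same subject count,
# i.e. more subjects or a larger minimum detectable difference are needed.
print(simulated_power(n_subjects=20, true_diff=0.5,
                      signal_sd=1.0, reg_error_sd=0.8))
```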

  3. Verification of GCM-generated regional seasonal precipitation for current climate and of statistical downscaling estimates under changing climate conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busuioc, A.; Storch, H. von; Schnur, R.

    Empirical downscaling procedures relate large-scale atmospheric features with local features such as station rainfall in order to facilitate local scenarios of climate change. The purpose of the present paper is twofold: first, a downscaling technique is used as a diagnostic tool to verify the performance of climate models on the regional scale; second, a technique is proposed for verifying the validity of empirical downscaling procedures in climate change applications. The case considered is regional seasonal precipitation in Romania. The downscaling model is a regression based on canonical correlation analysis between observed station precipitation and European-scale sea level pressure (SLP). The climate models considered here are the T21 and T42 versions of the Hamburg ECHAM3 atmospheric GCM run in time-slice mode. The climate change scenario refers to the expected time of doubled carbon dioxide concentrations around the year 2050. Generally, applications of statistical downscaling to climate change scenarios have been based on the assumption that the empirical link between the large-scale and regional parameters remains valid under a changed climate. In this study, a rationale is proposed for this assumption by showing the consistency of the 2 x CO2 GCM scenarios in winter, derived directly from the gridpoint data, with the regional scenarios obtained through empirical downscaling. Since the skill of the GCMs in regional terms is already established, it is concluded that the downscaling technique is adequate for describing climatically changing regional and local conditions, at least for precipitation in Romania during winter.
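
    A minimal sketch of a CCA-based downscaling regression of station precipitation on large-scale SLP, in the spirit of the model described; sklearn's CCA is used as a stand-in for the original implementation, and the array shapes and values are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical data: 40 winters of large-scale SLP anomalies (gridpoints)
# and station precipitation anomalies (stations); values are placeholders.
rng = np.random.default_rng(1)
slp = rng.normal(size=(40, 200))     # predictor: European-scale SLP field
precip = rng.normal(size=(40, 15))   # predictand: station precipitation

# Regression built on a small number of canonical pairs, in the spirit of a
# CCA-based downscaling model (details of the original model are not given here).
cca = CCA(n_components=3)
cca.fit(slp, precip)

# Downscale: apply the fitted statistical link to a new (e.g. GCM-simulated)
# SLP anomaly field to obtain regional precipitation estimates.
new_slp = rng.normal(size=(1, 200))
print(cca.predict(new_slp).shape)    # -> (1, 15)
```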

  4. Validation of a recombinant protein indirect ELISA for the detection of specific antibodies against Theileria uilenbergi and Theileria luwenshuni in small ruminants.

    PubMed

    Liu, Zhijie; Li, Youquan; Salih, Dia Eldin A; Luo, Jianxun; Ahmed, Jabbar S; Seitzer, Ulrike; Yin, Hong

    2014-08-29

    An enzyme-linked immunosorbent assay (ELISA) based on a recombinant Theileria uilenbergi immunodominant protein (rTuIP) was validated for detection of antibodies in 188 positive and 198 negative reference serum samples, respectively. The cut-off value was determined at 32.7% with 95% and 90% accuracy levels by two-graphic receiver-operating characteristic (TG-ROC). The equal diagnostic sensitivity (Se) and specificity (Sp) were calculated to be 98.4%. Further validation of the repeatability with positive and negative reference samples indicated the reliable performance of the assay. Monitoring the antibody dynamics of sheep experimentally infected with Theileria luwenshuni showed the efficient detection of antibody response against the pathogen at the early infection stage and up until two months post infection. Application of this assay for detection of antibody in field sera from previous unknown Theileria endemic regions in Suizhou and Guiyang showed 17.8% and 11.6% seroprevalence, respectively, and presence of the pathogen was confirmed by identification of the 18S rRNA gene in the corresponding blood of the seropositive animals. These data support that the rTuIP ELISA could be a useful tool to study the epidemiology of theileriosis caused by T. uilenbergi and/or T. luwenshuni. Copyright © 2014 Elsevier B.V. All rights reserved.
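
    The cut-off determination can be illustrated with a small two-graph ROC style threshold sweep; the readings below are simulated placeholders rather than the study's reference sera, and the exact TG-ROC implementation used may differ.

```python
import numpy as np

def tg_roc_cutoff(pos, neg, n_steps=500):
    """Two-graph ROC style sketch: sweep candidate cut-offs and return the
    one where sensitivity and specificity intersect (Se ~= Sp), together
    with Se and Sp at that point. pos/neg are arrays of assay readings
    (e.g. percent positivity) for reference positive/negative sera."""
    lo, hi = min(pos.min(), neg.min()), max(pos.max(), neg.max())
    cutoffs = np.linspace(lo, hi, n_steps)
    se = np.array([(pos >= c).mean() for c in cutoffs])   # sensitivity
    sp = np.array([(neg < c).mean() for c in cutoffs])    # specificity
    i = np.argmin(np.abs(se - sp))
    return cutoffs[i], se[i], sp[i]

# Hypothetical readings for illustration only.
rng = np.random.default_rng(2)
pos = rng.normal(60, 12, 188)
neg = rng.normal(20, 8, 198)
print(tg_roc_cutoff(pos, neg))
```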

  5. Context-dependent logo matching and recognition.

    PubMed

    Sahbi, Hichem; Ballan, Lamberto; Serra, Giuseppe; Del Bimbo, Alberto

    2013-03-01

    In this paper we contribute to the design of a novel variational framework able to match and recognize multiple instances of multiple reference logos in image archives. Reference logos and test images are seen as constellations of local features (interest points, regions, etc.) and matched by minimizing an energy function mixing: 1) a fidelity term that measures the quality of feature matching, 2) a neighborhood criterion that captures feature co-occurrence/geometry, and 3) a regularization term that controls the smoothness of the matching solution. We also introduce a detection/recognition procedure and study its theoretical consistency. Finally, we show the validity of our method through extensive experiments on the challenging MICC-Logos dataset. Our method outperforms baseline as well as state-of-the-art matching/recognition procedures by 20%.

  6. In silico selection of expression reference genes with demonstrated stability in barley among a diverse set of tissues and cultivars

    USDA-ARS?s Scientific Manuscript database

    Premise of the study: Reference genes are selected based on the assumption of temporal and spatial expression stability and on their widespread use in model species. They are often used in new target species without validation, presumed as stable. For barley, reference gene validation is lacking, bu...

  7. A Standardized Reference Data Set for Vertebrate Taxon Name Resolution

    PubMed Central

    Zermoglio, Paula F.; Guralnick, Robert P.; Wieczorek, John R.

    2016-01-01

    Taxonomic names associated with digitized biocollections labels have flooded into repositories such as GBIF, iDigBio and VertNet. The names on these labels are often misspelled, out of date, or present other problems, as they were often captured only once during accessioning of specimens, or have a history of label changes without clear provenance. Before records are reliably usable in research, it is critical that these issues be addressed. However, still missing is an assessment of the scope of the problem, the effort needed to solve it, and a way to improve effectiveness of tools developed to aid the process. We present a carefully human-vetted analysis of 1000 verbatim scientific names taken at random from those published via the data aggregator VertNet, providing the first rigorously reviewed, reference validation data set. In addition to characterizing formatting problems, human vetting focused on detecting misspelling, synonymy, and the incorrect use of Darwin Core. Our results reveal a sobering view of the challenge ahead, as less than 47% of name strings were found to be currently valid. More optimistically, nearly 97% of name combinations could be resolved to a currently valid name, suggesting that computer-aided approaches may provide feasible means to improve digitized content. Finally, we associated names back to biocollections records and fit logistic models to test potential drivers of issues. A set of candidate variables (geographic region, year collected, higher-level clade, and the institutional digitally accessible data volume) and their 2-way interactions all predict the probability of records having taxon name issues, based on model selection approaches. We strongly encourage further experiments to use this reference data set as a means to compare automated or computer-aided taxon name tools for their ability to resolve and improve the existing wealth of legacy data. PMID:26760296

  8. The Research Diagnostic Criteria for Temporomandibular Disorders. I: overview and methodology for assessment of validity.

    PubMed

    Schiffman, Eric L; Truelove, Edmond L; Ohrbach, Richard; Anderson, Gary C; John, Mike T; List, Thomas; Look, John O

    2010-01-01

    The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. The aim of this article is to provide an overview of the project's methodology, descriptive statistics, and data for the study participant sample. This article also details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. The Axis I reference standards were based on the consensus of two criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion examination reliability was also assessed within study sites. Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion examiner agreement with reference standards was excellent (k ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods.
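
    For reference, the agreement statistic quoted throughout is Cohen's kappa; a minimal unweighted implementation, with hypothetical diagnoses standing in for the examiner and reference-standard calls, looks like this:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters assigning the same categorical
    diagnoses (e.g. criterion examiner vs. consensus reference standard)."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    categories = np.union1d(a, b)
    po = (a == b).mean()                        # observed agreement
    pe = sum((a == c).mean() * (b == c).mean()  # agreement expected by chance
             for c in categories)
    return (po - pe) / (1.0 - pe)

# Hypothetical diagnoses for 10 participants (1 = disorder present, 0 = absent).
examiner = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
reference = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(examiner, reference), 2))
```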

  9. CSAR-web: a web server of contig scaffolding using algebraic rearrangements.

    PubMed

    Chen, Kun-Tze; Lu, Chin Lung

    2018-05-04

    CSAR-web is a web-based tool that allows the users to efficiently and accurately scaffold (i.e. order and orient) the contigs of a target draft genome based on a complete or incomplete reference genome from a related organism. It takes as input a target genome in multi-FASTA format and a reference genome in FASTA or multi-FASTA format, depending on whether the reference genome is complete or incomplete, respectively. In addition, it requires the users to choose either 'NUCmer on nucleotides' or 'PROmer on translated amino acids' for CSAR-web to identify conserved genomic markers (i.e. matched sequence regions) between the target and reference genomes, which are used by the rearrangement-based scaffolding algorithm in CSAR-web to order and orient the contigs of the target genome based on the reference genome. In the output page, CSAR-web displays its scaffolding result in a graphical mode (i.e. scalable dotplot) allowing the users to visually validate the correctness of scaffolded contigs and in a tabular mode allowing the users to view the details of scaffolds. CSAR-web is available online at http://genome.cs.nthu.edu.tw/CSAR-web.

  10. Validation of reference genes for RT-qPCR studies of gene expression in banana fruit under different experimental conditions.

    PubMed

    Chen, Lei; Zhong, Hai-ying; Kuang, Jian-fei; Li, Jian-guo; Lu, Wang-jin; Chen, Jian-ye

    2011-08-01

    Reverse transcription quantitative real-time PCR (RT-qPCR) is a sensitive technique for quantifying gene expression, but its success depends on the stability of the reference gene(s) used for data normalization. Only a few studies on validation of reference genes have been conducted in fruit trees and none in banana yet. In the present work, 20 candidate reference genes were selected, and their expression stability in 144 banana samples were evaluated and analyzed using two algorithms, geNorm and NormFinder. The samples consisted of eight sample sets collected under different experimental conditions, including various tissues, developmental stages, postharvest ripening, stresses (chilling, high temperature, and pathogen), and hormone treatments. Our results showed that different suitable reference gene(s) or combination of reference genes for normalization should be selected depending on the experimental conditions. The RPS2 and UBQ2 genes were validated as the most suitable reference genes across all tested samples. More importantly, our data further showed that the widely used reference genes, ACT and GAPDH, were not the most suitable reference genes in many banana sample sets. In addition, the expression of MaEBF1, a gene of interest that plays an important role in regulating fruit ripening, under different experimental conditions was used to further confirm the validated reference genes. Taken together, our results provide guidelines for reference gene(s) selection under different experimental conditions and a foundation for more accurate and widespread use of RT-qPCR in banana.

  11. Mapping by sequencing in cotton (Gossypium hirsutum) line MD52ne identified candidate genes for fiber strength and its related quality attributes.

    PubMed

    Islam, Md S; Zeng, Linghe; Thyssen, Gregory N; Delhom, Christopher D; Kim, Hee Jin; Li, Ping; Fang, David D

    2016-06-01

    Three QTL regions controlling three fiber quality traits were validated and further fine-mapped with 27 new single nucleotide polymorphism (SNP) markers. Transcriptome analysis suggests that receptor-like kinases found within the validated QTLs are potential candidate genes responsible for superior fiber strength in cotton line MD52ne. Fiber strength, length, maturity and fineness determine the market value of cotton fibers and the quality of spun yarn. Cotton fiber strength has been recognized as a critical quality attribute in the modern textile industry. Fine mapping along with quantitative trait loci (QTL) validation and candidate gene prediction can uncover the genetic and molecular basis of fiber quality traits. Four previously-identified QTLs (qFBS-c3, qSFI-c14, qUHML-c14 and qUHML-c24) related to fiber bundle strength, short fiber index and fiber length, respectively, were validated using an F3 population that originated from a cross of MD90ne × MD52ne. A group of 27 new SNP markers generated from mapping-by-sequencing (MBS) were placed in QTL regions to improve and validate earlier maps. Our refined QTL regions spanned 4.4, 1.8 and 3.7 Mb of physical distance in the Gossypium raimondii reference genome. We performed RNA sequencing (RNA-seq) of 15 and 20 days post-anthesis fiber cells from MD52ne and MD90ne and aligned reads to the G. raimondii genome. The QTL regions contained 21 significantly differentially expressed genes (DEGs) between the two near-isogenic parental lines. SNPs that result in non-synonymous substitutions to amino acid sequences of annotated genes were identified within these DEGs, and mapped. Taken together, transcriptome and amino acid mutation analysis indicate that receptor-like kinase pathway genes are likely candidates for superior fiber strength and length in MD52ne. MBS along with RNA-seq demonstrated a powerful strategy to elucidate candidate genes for the QTLs that control complex traits in a complex genome like tetraploid upland cotton.

  12. Examining the validity of the Homework Performance Questionnaire: Multi-informant assessment in elementary and middle school.

    PubMed

    Power, Thomas J; Watkins, Marley W; Mautone, Jennifer A; Walcott, Christy M; Coutts, Michael J; Sheridan, Susan M

    2015-06-01

    Methods for measuring homework performance have been limited primarily to parent reports of homework deficits. The Homework Performance Questionnaire (HPQ) was developed to assess the homework functioning of students in Grades 1 to 8 from the perspective of both teachers and parents. The purpose of this study was to examine the factorial validity of teacher and parent versions of this scale, and to evaluate gender and grade-level differences in factor scores. The HPQ was administered in 4 states from varying regions of the United States. The validation sample consisted of students (n = 511) for whom both parent and teacher ratings were obtained (52% female, mean of 9.5 years of age, 79% non-Hispanic, and 78% White). The cross-validation sample included 1,450 parent ratings and 166 teacher ratings with similar demographic characteristics. The results of confirmatory factor analyses demonstrated that the best-fitting model for teachers was a bifactor solution including a general factor and 2 orthogonal factors, referring to student self-regulation and competence. The best-fitting model for parents was also a bifactor solution, including a general factor and 3 orthogonal factors, referring to student self-regulation, student competence, and teacher support of homework. Gender differences were identified for the general and self-regulation factors of both versions. Overall, the findings provide strong support for the HPQ as a multi-informant, multidimensional measure of homework performance that has utility for the assessment of elementary and middle school students. (c) 2015 APA, all rights reserved.

  13. A user-targeted synthesis of the VALUE perfect predictor experiment

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Gutierrez, Jose; Kotlarski, Sven; Hertig, Elke; Wibig, Joanna; Rössler, Ole; Huth, Radan

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. We consider different aspects: (1) marginal aspects such as mean, variance and extremes; (2) temporal aspects such as spell length characteristics; (3) spatial aspects such as the de-correlation length of precipitation extremes; and (4) multi-variate aspects such as the interplay of temperature and precipitation or scale-interactions. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur. Experiment 1 (perfect predictors): what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Experiment 2 (Global climate model predictors): how good is the overall representation of regional climate, including errors inherited from global climate models? Experiment 3 (pseudo reality): do methods fail in representing regional climate change? Here, we present a user-targeted synthesis of the results of the first VALUE experiment. In this experiment, downscaling methods are driven with ERA-Interim reanalysis data to eliminate global climate model errors, over the period 1979-2008. As reference data we use, depending on the question addressed, (1) observations from 86 meteorological stations distributed across Europe; (2) gridded observations at the corresponding 86 locations or (3) gridded spatially extended observations for selected European regions. With more than 40 contributing methods, this study is the most comprehensive downscaling inter-comparison project so far. The results clearly indicate that for several aspects, the downscaling skill varies considerably between different methods. For specific purposes, some methods can therefore clearly be excluded.

  14. Linearization correction of 99mTc-labeled hexamethyl-propylene amine oxime (HM-PAO) image in terms of regional CBF distribution: comparison to C15O2 inhalation steady-state method measured by positron emission tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inugami, A.; Kanno, I.; Uemura, K.

    1988-12-01

    The radioisotope distribution following intravenous injection of 99mTc-labeled hexamethylpropyleneamine oxime (HM-PAO) in the brain was measured by single photon emission computed tomography (SPECT) and corrected for the nonlinearity caused by differences in net extraction. The linearization correction was based on a three-compartment model, and it required a region of reference to normalize the SPECT image in terms of regional cerebral blood flow distribution. Two different regions of reference, the cerebellum and the whole brain, were tested. The uncorrected and corrected HM-PAO images were compared with the cerebral blood flow (CBF) image measured by the C15O2 inhalation steady-state method and positron emission tomography (PET). The relationship between uncorrected HM-PAO and PET-CBF showed a correlation coefficient of 0.85 but tended to saturate at high CBF values, whereas it was improved to 0.93 after the linearization correction. The whole-brain normalization worked just as well as normalization using the cerebellum. This study constitutes a validation of the linearization correction and it suggests that after linearization the HM-PAO image may be scaled to absolute CBF by employing a global hemispheric CBF value as measured by the nontomographic 133Xe clearance method.
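
    A widely quoted correction of this family is Lassen's linearization; whether it coincides exactly with the three-compartment formulation used in this study is an assumption, but it illustrates the transform from a reference-normalized uptake ratio to a flow ratio.

```python
def lassen_linearization(uptake_ratio, alpha=1.5):
    """Lassen-type linearization of HM-PAO uptake: converts the measured
    uptake ratio R (region / reference region, e.g. cerebellum or whole brain)
    into a flow ratio. alpha is a tracer-dependent constant (1.5 is the value
    commonly quoted for HM-PAO; using it here is an assumption, and this may
    not be the exact model of the cited study)."""
    r = uptake_ratio
    return (alpha * r) / (1.0 + alpha - r)

# A measured ratio of 1.2 maps to a larger flow ratio after correction,
# compensating for the saturation of tracer extraction at high flow.
print(round(lassen_linearization(1.2), 3))   # ~1.385
```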

  15. Validation of an enzyme-linked immunosorbent assay for the quantification of human IgG directed against the repeat region of the circumsporozoite protein of the parasite Plasmodium falciparum

    PubMed Central

    2012-01-01

    Background Several pre-erythrocytic malaria vaccines based on the circumsporozoite protein (CSP) antigen of Plasmodium falciparum are in clinical development. Vaccine immunogenicity is commonly evaluated by the determination of anti-CSP antibody levels using IgG-based assays, but no standard assay is available to allow comparison of the different vaccines. Methods The validation of an anti-CSP repeat region enzyme-linked immunosorbent assay (ELISA) is described. This assay is based on the binding of serum antibodies to R32LR, a recombinant protein composed of the repeat region of P. falciparum CSP. In addition to the original recombinant R32LR, an easy to purify recombinant His-tagged R32LR protein has been constructed to be used as solid phase antigen in the assay. Also, hybridoma cell lines have been generated producing human anti-R32LR monoclonal antibodies to be used as a potential inexhaustible source of anti-CSP repeats standard, instead of a reference serum. Results The anti-CSP repeats ELISA was shown to be robust, specific and linear within the analytical range, and adequately fulfilled all validation criteria as defined in the ICH guidelines. Furthermore, the coefficient of variation for repeatability and intermediate precision did not exceed 23%. Non-interference was demonstrated for R32LR-binding sera, and the assay was shown to be stable over time. Conclusions This ELISA, specific for antibodies directed against the CSP repeat region, can be used as a standard assay for the determination of humoral immunogenicity in the development of any CSP-based P. falciparum malaria vaccine. PMID:23173602

  16. Testing the validity of the phenomenological gravitational waveform models for nonspinning binary black hole searches at low masses

    NASA Astrophysics Data System (ADS)

    Cho, Hee-Suk

    2015-11-01

    The phenomenological gravitational waveform models, which we refer to as PhenomA, PhenomB, and PhenomC, generate full inspiral, merger, and ringdown (IMR) waveforms of coalescing binary black holes (BBHs). These models are defined in the Fourier domain and can thus be used for fast matched filtering in gravitational wave searches. PhenomA has been developed for nonspinning BBH waveforms, while PhenomB and PhenomC were designed to model the waveforms of BBH systems with nonprecessing (aligned) spins, but can also be used for nonspinning systems. In this work, we study the validity of the phenomenological models for nonspinning BBH searches at low masses, m1,2 ≥ 4 M⊙ and m1 + m2 ≡ M ≤ 30 M⊙, with Advanced LIGO. As our complete signal waveform model, we adopt EOBNRv2, which is a time-domain IMR waveform model. To investigate the search efficiency of the phenomenological template models, we calculate fitting factors (FFs) by exploring overlap surfaces. We find that only PhenomC is valid for obtaining FFs better than 0.97 in the mass range M < 15 M⊙. Above 15 M⊙, PhenomA is most efficient in the symmetric mass region, PhenomB is most efficient in the highly asymmetric mass region, and PhenomC is most efficient in the intermediate region. Accordingly, we propose an effective phenomenological template family that can be constructed by employing the phenomenological models in four subregions individually. We find that FFs of the effective templates are better than 0.97 in our entire mass region and mostly greater than 0.99.
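    As a rough illustration of how a fitting factor is evaluated, the sketch below computes the noise-weighted overlap between frequency-domain waveforms and maximizes it over a template bank. The function names are hypothetical, the maximization over coalescence time and phase is omitted, and the waveforms and power spectral density are assumed to share a common frequency grid.

```python
import numpy as np

def overlap(h1, h2, psd, df):
    """Noise-weighted overlap between two frequency-domain waveforms,
    normalized so that overlap(h, h) == 1. The maximization over time and
    phase shifts used in a full fitting-factor calculation is omitted here."""
    inner = lambda a, b: 4.0 * df * np.real(np.sum(a * np.conj(b) / psd))
    return inner(h1, h2) / np.sqrt(inner(h1, h1) * inner(h2, h2))

def fitting_factor(signal, template_bank, psd, df):
    """FF = maximum overlap between the signal and any template in the bank."""
    return max(overlap(signal, h, psd, df) for h in template_bank)
```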

  17. Semi-automatic delineation of the spino-laminar junction curve on lateral x-ray radiographs of the cervical spine

    NASA Astrophysics Data System (ADS)

    Narang, Benjamin; Phillips, Michael; Knapp, Karen; Appelboam, Andy; Reuben, Adam; Slabaugh, Greg

    2015-03-01

    Assessment of the cervical spine using x-ray radiography is an important task when providing emergency room care to trauma patients suspected of a cervical spine injury. In routine clinical practice, a physician will inspect the alignment of the cervical spine vertebrae by mentally tracing three alignment curves along the anterior and posterior sides of the cervical vertebral bodies, as well as one along the spinolaminar junction. In this paper, we propose an algorithm to semi-automatically delineate the spinolaminar junction curve, given a single reference point and the corners of each vertebral body. From the reference point, our method extracts a region of interest and performs template matching using normalized cross-correlation to find matching regions along the spinolaminar junction. Matching points are then fit to a third-order spline, producing an interpolating curve. Experimental results are promising, producing on average a modified Hausdorff distance of 1.8 mm, validated on a dataset of 29 patients including those with degenerative change, retrolisthesis, and fracture.
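    A minimal sketch of the two ingredients named above, normalized cross-correlation matching and a third-order spline fit through the matched points; the function names are hypothetical and the region-of-interest handling is greatly simplified compared with the paper's method.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def ncc(patch, template):
    """Normalized cross-correlation score between two equally sized image patches."""
    p = (patch - patch.mean()) / (patch.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    return float(np.mean(p * t))

def fit_junction_curve(match_points):
    """Fit a third-order (cubic) spline through matched (x, y) points, sorted by x,
    to produce the interpolating spinolaminar junction curve."""
    pts = np.asarray(sorted(match_points), dtype=float)
    return UnivariateSpline(pts[:, 0], pts[:, 1], k=3)
```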

  18. UniPrime2: a web service providing easier Universal Primer design.

    PubMed

    Boutros, Robin; Stokes, Nicola; Bekaert, Michaël; Teeling, Emma C

    2009-07-01

    The UniPrime2 web server is a publicly available online resource which automatically designs large sets of universal primers when given a gene reference ID or Fasta sequence input by a user. UniPrime2 works by automatically retrieving and aligning homologous sequences from GenBank, identifying regions of conservation within the alignment, and generating suitable primers that can be used to amplify variable genomic regions. In essence, UniPrime2 is a suite of publicly available software packages (Blastn, T-Coffee, GramAlign, Primer3), which reduces the laborious process of primer design, by integrating these programs into a single software pipeline. Hence, UniPrime2 differs from previous primer design web services in that all steps are automated, linked, saved and phylogenetically delimited, only requiring a single user-defined gene reference ID or input sequence. We provide an overview of the web service and wet-laboratory validation of the primers generated. The system is freely accessible at: http://uniprime.batlab.eu. UniPrime2 is licenced under a Creative Commons Attribution Noncommercial-Share Alike 3.0 Licence.

  19. Mapping moderate-scale land-cover over very large geographic areas within a collaborative framework: A case study of the Southwest Regional Gap Analysis Project (SWReGAP)

    USGS Publications Warehouse

    Lowry, J.; Ramsey, R.D.; Thomas, K.; Schrupp, D.; Sajwaj, T.; Kirby, J.; Waller, E.; Schrader, S.; Falzarano, S.; Langs, L.; Manis, G.; Wallace, C.; Schulz, K.; Comer, P.; Pohs, K.; Rieth, W.; Velasquez, C.; Wolk, B.; Kepner, W.; Boykin, K.; O'Brien, L.; Bradford, D.; Thompson, B.; Prior-Magee, J.

    2007-01-01

    Land-cover mapping efforts within the USGS Gap Analysis Program have traditionally been state-centered; each state had the responsibility of implementing a project design for the geographic area within its boundaries. The Southwest Regional Gap Analysis Project (SWReGAP) was the first formal GAP project designed at a regional, multi-state scale. The project area comprises the southwestern states of Arizona, Colorado, Nevada, New Mexico, and Utah. The land-cover map/dataset was generated using regionally consistent geospatial data (Landsat ETM+ imagery (1999-2001) and DEM derivatives), similar field data collection protocols, a standardized land-cover legend, and a common modeling approach (decision tree classifier). Partitioning of mapping responsibilities amongst the five collaborating states was organized around ecoregion-based "mapping zones". Over the course of 2.5 field seasons, approximately 93,000 reference samples were collected directly, or obtained from other contemporary projects, for the land-cover modeling effort. The final map was made public in 2004 and contains 125 land-cover classes. An internal validation of 85 of the classes, representing 91% of the land area, was performed. Agreement between withheld samples and the validated dataset was 61% (KHAT = 0.60, n = 17,030). This paper presents an overview of the methodologies used to create the regional land-cover dataset and highlights issues associated with large-area mapping within a coordinated, multi-institutional management framework. © 2006 Elsevier Inc. All rights reserved.
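    The KHAT statistic quoted above is Cohen's kappa computed from an error (confusion) matrix of withheld reference samples against mapped classes; a minimal sketch with a hypothetical function name:

```python
import numpy as np

def khat(confusion):
    """Cohen's kappa (KHAT) from a square confusion matrix
    (rows = reference samples, columns = mapped classes)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    observed = np.trace(cm) / n                               # overall agreement
    expected = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2  # chance agreement
    return (observed - expected) / (1.0 - expected)
```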

  20. Topology optimization based design of unilateral NMR for generating a remote homogeneous field.

    PubMed

    Wang, Qi; Gao, Renjing; Liu, Shutian

    2017-06-01

    This paper presents a topology optimization based design method for unilateral nuclear magnetic resonance (NMR), with which a remote homogeneous field can be obtained. The topology optimization is realized by seeking out the optimal layout of ferromagnetic materials within a given design domain. The design objective is defined as generating a sensitive magnetic field with optimal homogeneity and maximal field strength within a required region of interest (ROI). The sensitivity of the objective function with respect to the design variables is derived and the method for solving the optimization problem is presented. A design example is provided to illustrate the utility of the design method, specifically the ability to improve the quality of the magnetic field over the required ROI by determining the optimal structural topology for the ferromagnetic poles. In both simulations and experiments, the sensitive region of the magnetic field is about two times larger than that of the reference design, validating the feasibility of the design method. Copyright © 2017. Published by Elsevier Inc.

  1. Gene Expression Profile Analysis is Directly Affected by the Selected Reference Gene: The Case of Leaf-Cutting Atta Sexdens

    PubMed Central

    Máximo, Wesley P. F.; Zanetti, Ronald; Paiva, Luciano V.

    2018-01-01

    Although several ant species are important targets for the development of molecular control strategies, only a few studies focus on identifying and validating reference genes for quantitative reverse transcription polymerase chain reaction (RT-qPCR) data normalization. We provide here an extensive study to identify and validate suitable reference genes for gene expression analysis in the ant Atta sexdens, a threatening agricultural pest in South America. The optimal number of reference genes varies according to each sample, and the results generated by RefFinder differed as to which gene is the most suitable reference. Results suggest that the RPS16, NADH and SDHB genes were the best reference genes in the sample pool according to stability values. The SNF7 gene expression pattern was stable in all evaluated sample sets. In contrast, when using less stable reference genes for normalization, a large variability in SNF7 gene expression was recorded. There is no universal reference gene suitable for all conditions under analysis, since these genes can also participate in different cellular functions, thus requiring a systematic validation of possible reference genes for each specific condition. The effect of reference gene choice on SNF7 normalization confirmed that unstable reference genes might drastically change the expression profile analysis of target candidate genes. PMID:29419794

  2. Development and inter-laboratory validation study of an improved new real-time PCR assay with internal control for detection and laboratory diagnosis of African swine fever virus.

    PubMed

    Tignon, Marylène; Gallardo, Carmina; Iscaro, Carmen; Hutet, Evelyne; Van der Stede, Yves; Kolbasov, Denis; De Mia, Gian Mario; Le Potier, Marie-Frédérique; Bishop, Richard P; Arias, Marisa; Koenen, Frank

    2011-12-01

    A real-time polymerase chain reaction (PCR) assay for the rapid detection of African swine fever virus (ASFV), multiplexed for simultaneous detection of swine beta-actin as an endogenous control, has been developed and validated by four National Reference Laboratories of the European Union for African swine fever (ASF) including the European Union Reference Laboratory. Primers and a TaqMan(®) probe specific for ASFV were selected from conserved regions of the p72 gene. The limit of detection of the new real-time PCR assay is 5.7-57 copies of the ASFV genome. High accuracy, reproducibility and robustness of the PCR assay (CV ranging from 0.7 to 5.4%) were demonstrated both within and between laboratories using different real-time PCR equipment. The specificity of virus detection was validated using a panel of 44 isolates collected over many years in various geographical locations in Europe, Africa and America, including recent isolates from the Caucasus region, Sardinia, East and West Africa. Compared to the OIE-prescribed conventional and real-time PCR assays, the sensitivity of the new assay with internal control was improved, as demonstrated by testing 281 field samples collected in recent outbreaks and surveillance areas in Europe and Africa (170 samples) together with samples obtained through experimental infections (111 samples). This is particularly evident in the early days following experimental infection and during the course of the disease in pigs sub-clinically infected with strains of low virulence (from 35 up to 70 dpi). The specificity of the assay was also confirmed on 150 samples from uninfected pigs and wild boar from ASF-free areas. Measured across the total of 431 tested samples, the positive deviation of the new assay reaches 21% or 26%, compared with the conventional and real-time PCR methods recommended by the OIE, respectively. This improved and rigorously validated real-time PCR assay with internal control will provide a rapid, sensitive and reliable molecular tool for ASFV detection in pigs in newly infected areas, control in endemic areas and surveillance in ASF-free areas. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Characterization of [11C]Lu AE92686 as a PET radioligand for phosphodiesterase 10A in the nonhuman primate brain.

    PubMed

    Yang, Kai-Chun; Stepanov, Vladimir; Amini, Nahid; Martinsson, Stefan; Takano, Akihiro; Nielsen, Jacob; Bundgaard, Christoffer; Bang-Andersen, Benny; Grimwood, Sarah; Halldin, Christer; Farde, Lars; Finnema, Sjoerd J

    2017-02-01

    [11C]Lu AE92686 is a positron emission tomography (PET) radioligand that has recently been validated for examining phosphodiesterase 10A (PDE10A) in the human striatum. [11C]Lu AE92686 has high affinity for PDE10A (IC50 = 0.39 nM) and may also be suitable for examination of the substantia nigra, a region with low density of PDE10A. Here, we report characterization of regional [11C]Lu AE92686 binding to PDE10A in the nonhuman primate (NHP) brain. A total of 11 PET measurements, seven baseline and four following pretreatment with unlabeled Lu AE92686 or the structurally unrelated PDE10A inhibitor MP-10, were performed in five NHPs using a high resolution research tomograph (HRRT). [11C]Lu AE92686 binding was quantified using a radiometabolite-corrected arterial input function and compartmental and graphical modeling approaches. Regional time-activity curves were best described with the two-tissue compartment model (2TCM). However, the distribution volume (VT) values for all regions were obtained by the Logan plot analysis, as reliable cerebellar VT values could not be derived by the 2TCM. For cerebellum, a proposed reference region, VT values increased by ∼30% with increasing PET measurement duration from 63 to 123 min, while VT values in target regions remained stable. Both pretreatment drugs significantly decreased [11C]Lu AE92686 binding in target regions, while no significant effect on cerebellum was observed. Binding potential (BPND) values, derived with the simplified reference tissue model (SRTM), were 13-17 in putamen and 3-5 in substantia nigra and correlated well to values from the Logan plot analysis. The method proposed for quantification of [11C]Lu AE92686 binding in applied studies in NHP is based on 63 min PET data and SRTM with cerebellum as a reference region. The study supports that [11C]Lu AE92686 can be used for PET examinations of PDE10A binding also in substantia nigra.
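    A minimal sketch of the Logan graphical analysis used above to obtain VT: beyond a linearization time t*, the slope of normalized integrated tissue activity plotted against normalized integrated plasma activity approaches VT. Function and argument names are illustrative.

```python
import numpy as np

def logan_vt(t, ct, cp, t_star):
    """Distribution volume V_T from the Logan graphical analysis.
    t: frame mid-times; ct: regional tissue time-activity curve;
    cp: metabolite-corrected arterial plasma input; t_star: start of the linear range."""
    t = np.asarray(t, dtype=float)
    ct = np.asarray(ct, dtype=float)
    cp = np.asarray(cp, dtype=float)
    int_ct = np.array([np.trapz(ct[: i + 1], t[: i + 1]) for i in range(len(t))])
    int_cp = np.array([np.trapz(cp[: i + 1], t[: i + 1]) for i in range(len(t))])
    mask = t >= t_star
    x = int_cp[mask] / ct[mask]        # normalized integrated plasma activity
    y = int_ct[mask] / ct[mask]        # normalized integrated tissue activity
    slope, _intercept = np.polyfit(x, y, 1)
    return float(slope)                # slope estimates V_T
```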

  4. Coupling a regional warning system to a semantic engine on online news for enhancing landslide prediction

    NASA Astrophysics Data System (ADS)

    Battistini, Alessandro; Rosi, Ascanio; Segoni, Samuele; Catani, Filippo; Casagli, Nicola

    2017-04-01

    Landslide inventories are basic data for large scale landslide modelling, e.g. they are needed to calibrate and validate rainfall thresholds, physically based models and early warning systems. The setting up of landslide inventories with traditional methods (e.g. remote sensing, field surveys and manual retrieval of data from technical reports and local newspapers) is time consuming. The objective of this work is to automatically set up a landslide inventory using a state-of-the-art semantic engine based on data mining of online news (Battistini et al., 2013) and to evaluate whether the automatically generated inventory can be used to validate a regional scale landslide warning system based on rainfall thresholds. The semantic engine scanned internet news in real time over a 50-month test period. At the end of the process, an inventory of approximately 900 landslides was set up for the Tuscany region (23,000 km2, Italy). The inventory was compared with the outputs of the regional landslide early warning system based on rainfall thresholds, and a good correspondence was found: e.g. 84% of the events reported in the news are correctly identified by the model. In addition, the cases of non-correspondence were forwarded to the rainfall threshold developers, who used these inputs to update some of the thresholds. On the basis of the results obtained, we conclude that automatic validation of landslide models using geolocalized landslide event feedback is possible. The source of data for validation can be obtained directly from the internet channel using an appropriate semantic engine. We also automated the validation procedure, which is based on a comparison between forecasts and reported events. We verified that our approach can be used for a near real time validation of the warning system and for a semi-automatic update of the rainfall thresholds, which could lead to an improvement of the forecasting effectiveness of the warning system. In the near future, the proposed procedure could operate in continuous time and could allow for a periodic update of landslide hazard models and landslide early warning systems with minimum human intervention. References: Battistini, A., Segoni, S., Manzo, G., Catani, F., Casagli, N. (2013). Web data mining for automatic inventory of geohazards at national scale. Applied Geography, 43, 147-158.

  5. Bacterial reference genes for gene expression studies by RT-qPCR: survey and analysis.

    PubMed

    Rocha, Danilo J P; Santos, Carolina S; Pacheco, Luis G C

    2015-09-01

    The appropriate choice of reference genes is essential for accurate normalization of gene expression data obtained by the method of reverse transcription quantitative real-time PCR (RT-qPCR). In 2009, a guideline called the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) highlighted the importance of the selection and validation of more than one suitable reference gene for obtaining reliable RT-qPCR results. Herein, we searched the recent literature in order to identify the bacterial reference genes that have been most commonly validated in gene expression studies by RT-qPCR (in the first 5 years following publication of the MIQE guidelines). Through a combination of different search parameters with the text mining tool MedlineRanker, we identified 145 unique bacterial genes that were recently tested as candidate reference genes. Of these, 45 genes were experimentally validated and, in most of the cases, their expression stabilities were verified using the software tools geNorm and NormFinder. It is noteworthy that only 10 of these reference genes had been validated in two or more of the studies evaluated. An enrichment analysis using Gene Ontology classifications demonstrated that genes belonging to the functional categories of DNA Replication (GO: 0006260) and Transcription (GO: 0006351) rendered a proportionally higher number of validated reference genes. Three genes in the former functional class were also among the top five most stable genes identified through an analysis of gene expression data obtained from the Pathosystems Resource Integration Center. These results may provide a guideline for the initial selection of candidate reference genes for RT-qPCR studies in several different bacterial species.

  6. Using airborne laser scanning profiles to validate marine geoid models

    NASA Astrophysics Data System (ADS)

    Julge, Kalev; Gruno, Anti; Ellmann, Artu; Liibusk, Aive; Oja, Tõnis

    2014-05-01

    Airborne laser scanning (ALS) is a remote sensing method which utilizes LiDAR (Light Detection And Ranging) technology. The datasets collected are important sources for a large range of scientific and engineering applications. ALS is mostly used to measure terrain surfaces for the compilation of Digital Elevation Models, but it can also be used in other applications. This contribution focuses on the use of an ALS system for measuring sea surface heights and validating gravimetric geoid models over marine areas. It is based on the ability of ALS to register echoes of the LiDAR pulse from the water surface. A case study was carried out to analyse the possibilities for validating marine geoid models by using ALS profiles. A test area at the southern shores of the Gulf of Finland was selected for regional geoid validation. ALS measurements were carried out by the Estonian Land Board in spring 2013 at different altitudes and using different scan rates. The single-wavelength Leica ALS50-II laser scanner on board a small aircraft was used to determine the sea level (with respect to the GRS80 reference ellipsoid), which roughly follows the equipotential surface of the Earth's gravity field. For the validation a high-resolution (1'x2') regional gravimetric GRAV-GEOID2011 model was used. This geoid model covers the entire area of Estonia and the surrounding waters of the Baltic Sea. The fit between the geoid model and GNSS/levelling data within the Estonian dry land revealed an RMS of residuals of ±1… ±2 cm. Such a fitting validation cannot proceed over marine areas. Therefore, an ALS observation-based methodology was developed to evaluate the GRAV-GEOID2011 quality over marine areas. The accuracy of the acquired ALS dataset was analyzed, and an optimal width of the nadir corridor containing good-quality ALS data was determined. The impact of ALS scan angle range and flight altitude on the obtainable vertical accuracy was investigated as well. The quality of the point cloud was analysed by cross-validation between overlapping flight lines and by comparison with tide gauge station readings. The comparisons revealed that the ALS-based profiles of sea level heights agree reasonably with the regional geoid model (within the accuracy of the ALS data and after applying corrections due to sea level variations). Thus ALS measurements are suitable for measuring sea surface heights and validating marine geoid models.
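    The marine validation step described above amounts to differencing ALS-derived sea-surface heights (relative to the reference ellipsoid) against the geoid model after removing the instantaneous sea-level signal; a minimal sketch with hypothetical names:

```python
import numpy as np

def geoid_residuals(ssh_ellipsoidal, geoid_height, sea_level_anomaly=0.0):
    """Residuals between ALS-derived sea-surface heights above the reference
    ellipsoid and the gravimetric geoid heights, after subtracting the
    instantaneous sea-level anomaly (e.g. from nearby tide gauges).
    Returns the residuals and their RMS."""
    res = (np.asarray(ssh_ellipsoidal, dtype=float)
           - np.asarray(geoid_height, dtype=float)
           - sea_level_anomaly)
    return res, float(np.sqrt(np.mean(res ** 2)))
```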

  7. FY 2016 Status Report on the Modeling of the M8 Calibration Series using MAMMOTH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Benjamin Allen; Ortensi, Javier; DeHart, Mark David

    2016-09-01

    This report provides a summary of the progress made towards validating the multi-physics reactor analysis application MAMMOTH using data from measurements performed at the Transient Reactor Test facility, TREAT. The work completed consists of a series of comparisons of TREAT element types (standard and control rod assemblies) in small geometries as well as slotted mini-cores to reference Monte Carlo simulations to ascertain the accuracy of cross section preparation techniques. After the successful completion of these smaller problems, a full core model of the half slotted core used in the M8 Calibration series was assembled. Full core MAMMOTH simulations were compared to Serpent reference calculations to assess the cross section preparation process for this larger configuration. As part of the validation process the M8 Calibration series included a steady state wire irradiation experiment and coupling factors for the experiment region. The shape of the power distribution obtained from the MAMMOTH simulation shows excellent agreement with the experiment. Larger differences were encountered in the calculation of the coupling factors, but there is also great uncertainty on how the experimental values were obtained. Future work will focus on resolving some of these differences.

  8. Automated mapping of burned areas in semi-arid ecosystems using modis time-series imagery

    NASA Astrophysics Data System (ADS)

    Hardtke, L. A.; Blanco, P. D.; del Valle, H. F.; Metternicht, G. I.; Sione, W. F.

    2015-04-01

    Understanding spatial and temporal patterns of burned areas at regional scales provides a long-term perspective of fire processes and their effects on ecosystems and vegetation recovery patterns, and it is a key factor in designing prevention and post-fire restoration plans and strategies. Standard satellite burned area and active fire products derived from the 500-m MODIS and SPOT are available to this end. However, prior research cautions against the use of these global-scale products for regional and sub-regional applications. Consequently, we propose a novel algorithm for automated identification and mapping of burned areas at regional scale in semi-arid shrublands. The algorithm uses a set of Normalized Burned Ratio Index products derived from MODIS time series; using a two-phase cycle, it first detects potentially burned pixels while keeping a low commission error (false detection of burned areas) and labels them as seed patches. Region-growing image segmentation algorithms are applied to the seed patches in the second phase to define the perimeter of fire-affected areas while decreasing omission errors (missing real burned areas). Independently derived Landsat ETM+ burned-area reference data were used for validation purposes. The correlation between the size of burnt areas detected by the global fire products and the independently derived Landsat reference data ranged from R2 = 0.01 to 0.28, while our algorithm showed a much stronger correlation (R2 = 0.96). Our findings confirm prior research calling for caution when using the global fire products locally or regionally.
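    To make the two-phase idea concrete, the sketch below thresholds a burn-ratio difference image conservatively to obtain seed patches and then keeps the looser candidate patches that are connected to a seed. The thresholds and function names are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np
from scipy import ndimage

def nbr(nir, swir):
    """Normalized Burn Ratio; burned surfaces show a strong drop in NBR."""
    return (nir - swir) / (nir + swir + 1e-9)

def two_phase_burned_mask(dnbr, seed_thr=0.44, grow_thr=0.25):
    """Phase 1: conservative seed pixels (low commission error).
    Phase 2: region growing, implemented here as keeping every looser
    candidate patch that contains at least one seed (reduces omission error).
    dnbr is a pre-fire minus post-fire NBR image; thresholds are illustrative."""
    seeds = dnbr > seed_thr
    candidates = dnbr > grow_thr
    labels, _num = ndimage.label(candidates)          # connected candidate patches
    keep = np.unique(labels[seeds])
    return np.isin(labels, keep[keep > 0])
```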

  9. DNS/LES Simulations of Separated Flows at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Balakumar, P.

    2015-01-01

    Direct numerical simulations (DNS) and large-eddy simulations (LES) of flow through a periodic channel with a constriction are performed using the dynamic Smagorinsky model at two Reynolds numbers of 2800 and 10595. The LES equations are solved using higher order compact schemes. DNS are performed for the lower Reynolds number case using a fine grid, and the data are used to validate the LES results obtained with coarse and medium-size grids. LES are also performed for the higher Reynolds number case using coarse and medium-size grids. The results are compared with an existing reference data set. The DNS and LES results agree well with the reference data. Reynolds stresses, sub-grid eddy viscosity, and the budgets for the turbulent kinetic energy are also presented. It is found that the turbulent fluctuations in the normal and spanwise directions have the same magnitude. The turbulent kinetic energy budget shows that the production peaks near the separation point region and that the production-to-dissipation ratio is very high, on the order of five, in this region. It is also observed that the production is balanced by the advection, diffusion, and dissipation in the shear layer region. The dominant term is the turbulent diffusion, which is about two times the molecular dissipation.

  10. A Unified Model for BDS Wide Area and Local Area Augmentation Positioning Based on Raw Observations.

    PubMed

    Tu, Rui; Zhang, Rui; Lu, Cuixian; Zhang, Pengfei; Liu, Jinhai; Lu, Xiaochun

    2017-03-03

    In this study, a unified model for BeiDou Navigation Satellite System (BDS) wide area and local area augmentation positioning based on raw observations has been proposed. Applying this model, both the Real-Time Kinematic (RTK) and Precise Point Positioning (PPP) services can be realized by performing different corrections at the user end. This algorithm was assessed and validated with the BDS data collected at four regional stations from Day of Year (DOY) 080 to 083 of 2016. When the users are located within the local reference network, fast and high-precision RTK service can be achieved using the regional observation corrections, revealing a convergence time of a few seconds and a precision of about 2-3 cm. For users outside the regional reference network, the global broadcast State-Space Represented (SSR) corrections can be utilized to realize the global PPP service, which shows a convergence time of about 25 min for achieving an accuracy of 10 cm. This unified model can not only integrate Network RTK (NRTK) and PPP into a seamless positioning service, but also recover the ionosphere Vertical Total Electronic Content (VTEC) and Differential Code Bias (DCB) values that are useful for ionosphere monitoring and modeling.

  11. A Unified Model for BDS Wide Area and Local Area Augmentation Positioning Based on Raw Observations

    PubMed Central

    Tu, Rui; Zhang, Rui; Lu, Cuixian; Zhang, Pengfei; Liu, Jinhai; Lu, Xiaochun

    2017-01-01

    In this study, a unified model for BeiDou Navigation Satellite System (BDS) wide area and local area augmentation positioning based on raw observations has been proposed. Applying this model, both the Real-Time Kinematic (RTK) and Precise Point Positioning (PPP) services can be realized by performing different corrections at the user end. This algorithm was assessed and validated with the BDS data collected at four regional stations from Day of Year (DOY) 080 to 083 of 2016. When the users are located within the local reference network, fast and high-precision RTK service can be achieved using the regional observation corrections, revealing a convergence time of a few seconds and a precision of about 2–3 cm. For users outside the regional reference network, the global broadcast State-Space Represented (SSR) corrections can be utilized to realize the global PPP service, which shows a convergence time of about 25 min for achieving an accuracy of 10 cm. This unified model can not only integrate Network RTK (NRTK) and PPP into a seamless positioning service, but also recover the ionosphere Vertical Total Electronic Content (VTEC) and Differential Code Bias (DCB) values that are useful for ionosphere monitoring and modeling. PMID:28273814

  12. Simulating multi-scale oceanic processes around Taiwan on unstructured grids

    NASA Astrophysics Data System (ADS)

    Yu, Hao-Cheng; Zhang, Yinglong J.; Yu, Jason C. S.; Terng, C.; Sun, Weiling; Ye, Fei; Wang, Harry V.; Wang, Zhengui; Huang, Hai

    2017-11-01

    We validate a 3D unstructured-grid (UG) model for simulating multi-scale processes, as occur in the Northwestern Pacific around Taiwan, using recently developed techniques (Zhang et al., Ocean Modeling, 102, 64-81, 2016) that require no bathymetry smoothing even for this region with prevalent steep bottom slopes and many islands. The focus is on short-term forecasts over several months rather than on long-term variability. Compared with satellite products, the errors for the simulated Sea-surface Height (SSH) and Sea-surface Temperature (SST) are similar to those of a reference data-assimilated global model. In the nearshore region, comparison with 34 tide gauges located around Taiwan indicates an average RMSE of 13 cm for the tidal elevation. The average RMSE for SST at 6 coastal buoys is 1.2 °C. The mean transport and eddy kinetic energy compare reasonably with previously published values and with the reference model used to provide boundary and initial conditions. The model suggests a ∼2-day interruption of the Kuroshio east of Taiwan during a typhoon period. The effect of tidal mixing is shown to be significant nearshore. The multi-scale model is easily extendable to target regions of interest due to its UG framework and a flexible vertical gridding system, which is shown to be superior to terrain-following coordinates.

  13. A concept-based interactive biomedical image retrieval approach using visualness and spatial information

    NASA Astrophysics Data System (ADS)

    Rahman, Md M.; Antani, Sameer K.; Demner-Fushman, Dina; Thoma, George R.

    2015-03-01

    This paper presents a novel approach to biomedical image retrieval that maps image regions to local concepts and represents images in a weighted entropy-based concept feature space. The term concept refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a Region-Of-Interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a post-processing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on a data set of 450 lung CT images extracted from journal articles from four different collections.
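    The visualness weighting described above reduces, at its core, to the Shannon entropy of pixel intensities within a patch; a minimal sketch with a hypothetical function name and bin count:

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy (in bits) of the pixel-intensity distribution of a patch,
    used here as an illustrative 'visualness' weight for a local concept."""
    hist, _edges = np.histogram(np.asarray(patch).ravel(), bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]                       # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))
```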

  14. Distributed Optimal Power Flow of AC/DC Interconnected Power Grid Using Synchronous ADMM

    NASA Astrophysics Data System (ADS)

    Liang, Zijun; Lin, Shunjiang; Liu, Mingbo

    2017-05-01

    Distributed optimal power flow (OPF) is of great importance to, and a challenge for, AC/DC interconnected power grids with different dispatching centres, considering the security and privacy of information transmission. In this paper, a fully distributed algorithm for the OPF problem of AC/DC interconnected power grids, called synchronous ADMM, is proposed; it requires no form of central controller. The algorithm is based on the fundamental alternating direction method of multipliers (ADMM): the average value of the boundary variables of adjacent regions obtained from the current iteration is used as the reference value for both regions in the next iteration, which realizes parallel computation among different regions. The algorithm is tested on the IEEE 11-bus AC/DC interconnected power grid, and a comparison with a centralized algorithm shows nearly no difference in the results, validating its correctness and effectiveness.
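    The boundary-averaging step can be illustrated with a toy scalar consensus problem: each region minimizes a hypothetical quadratic cost plus the ADMM augmented term, the shared boundary value is set to the average of the regional solutions, and the dual variables are updated synchronously. This is a minimal sketch, not the paper's full AC/DC OPF formulation.

```python
import numpy as np

def solve_region(a, b, z, lam, rho):
    """Local subproblem: minimize 0.5*a*x^2 + b*x + (rho/2)*(x - z + lam/rho)^2.
    The closed-form minimizer of this toy quadratic cost is returned."""
    return (rho * z - lam - b) / (a + rho)

def consensus_admm(regions, rho=1.0, iters=100):
    """regions: list of (a, b) pairs, one hypothetical quadratic cost per region,
    all sharing a single boundary variable z. Returns the consensus value of z."""
    z = 0.0
    lams = [0.0] * len(regions)
    for _ in range(iters):
        xs = [solve_region(a, b, z, lam, rho) for (a, b), lam in zip(regions, lams)]
        z = float(np.mean(xs))                               # boundary value = average
        lams = [lam + rho * (x - z) for lam, x in zip(lams, xs)]  # synchronous dual update
    return z
```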

  15. Biomedical image representation approach using visualness and spatial information in a concept feature space for interactive region-of-interest-based retrieval.

    PubMed

    Rahman, Md Mahmudur; Antani, Sameer K; Demner-Fushman, Dina; Thoma, George R

    2015-10-01

    This article presents an approach to biomedical image retrieval by mapping image regions to local concepts where images are represented in a weighted entropy-based concept feature space. The term "concept" refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region-of-interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a postprocessing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on two different data sets, which are collected from open access biomedical literature.

  16. Biomedical image representation approach using visualness and spatial information in a concept feature space for interactive region-of-interest-based retrieval

    PubMed Central

    Rahman, Md. Mahmudur; Antani, Sameer K.; Demner-Fushman, Dina; Thoma, George R.

    2015-01-01

    Abstract. This article presents an approach to biomedical image retrieval by mapping image regions to local concepts where images are represented in a weighted entropy-based concept feature space. The term “concept” refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region-of-interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a postprocessing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on two different data sets, which are collected from open access biomedical literature. PMID:26730398

  17. Size-dependent validation of MODIS MCD64A1 burned area over six vegetation types in boreal Eurasia: Large underestimation in croplands

    NASA Astrophysics Data System (ADS)

    Zhu, C.; Kobayashi, H.; Kanaya, Y.; Saito, M.

    2017-12-01

    Pollutants emitted from wildfires in boreal Eurasia can be transported to the Arctic, and their subsequent deposition could accelerate global warming. The Moderate Resolution Imaging Spectroradiometer (MODIS) MCD64A1 burned area product is used widely for global mapping of burned areas in conjunction with products such as the Global Fire Emission Database version 4, which can estimate pollutant emissions. However, uncertainties due to the "moderate resolution" (500 m) characteristic of the MODIS sensor could be introduced. Here, we present a size-dependent validation of MCD64A1 with reference to higher resolution (better than 30 m) satellite products (Landsat 7 ETM+, RapidEye, WorldView-2, and GeoEye-1) for six ecotypes over 12 regions of boreal Eurasia. We considered the 2012 boreal Eurasia burning season when severe wildfires occurred and when Arctic sea ice extent was historically low. Among the six ecotypes, we found MCD64A1 burned areas comprised only 13% of the reference products in croplands because of inadequate detection of small fires (<100 ha). Our results indicate that over all ecotypes, the actual burned area in boreal Eurasia (15,256 km2) could have been 16% greater than suggested by MCD64A1 (13,187 km2). We suggest applying correction factors of 0.5-8.2 when using emission rates based on MCD64A1 burned areas in chemistry and climate models of the studied regions. This implies the effects of wildfire emissions in boreal Eurasia on Arctic warming could be greater than currently estimated.

  18. Psychometric testing of the short version of the world health organization quality of life (WHOQOL-BREF) questionnaire among pulmonary tuberculosis patients in Taiwan

    PubMed Central

    2012-01-01

    Background Studies on the effects of tuberculosis on a patient’s quality of life (QOL) are scant. The objective of this study was to evaluate the psychometric properties of the Taiwan short version of the World Health Organization Quality of Life (WHOQOL-BREF) questionnaire using patients with tuberculosis in Taiwan and healthy referents. Methods The Taiwanese short version of the WHOQOL-BREF was administered to patients with tuberculosis undergoing treatment and healthy referents from March 2007 to July 2007. Patients with tuberculosis (n = 140) and healthy referents (n = 130), matched by age, sex, and ethnicity, agreed to an interview. All participants lived in eastern Taiwan. Reliability assessments included internal consistency, whereas validity assessments included construct validity, convergent validity, and discriminant validity. Results More than half of these patients and referents were men (70.7% and 66.2%, respectively), and their average ages were 50.1 and 47.9 years, respectively. Approximately 60% of patients and referents were aboriginal Taiwanese (60.7% and 61.1%, respectively). The proportion with low socioeconomic status was greater for these patients. The internal consistency reliability coefficients were .92 and .93 for the patients and healthy referents, respectively. Exploratory factor analysis on the healthy referents displayed a 4-domain model, which was compatible with the original WHOQOL-BREF 4-domain model. However, for the TB patient group, after deleting 3 items, both exploratory and confirmatory factor analysis revealed a 6-domain model. Conclusion Psychometric evaluation of the Taiwan short version of the WHOQOL-BREF indicates that it has adequate reliability for use in research with TB patients in Taiwan. However, the factor structure generated from this TB patient sample differed from the WHO’s original 4-factor model, which raises a concern about the validity of applying the Taiwan short version of the WHOQOL-BREF to Taiwanese TB patients. Future research recruiting another sample should be conducted to revisit this issue and determine the validity of the WHOQOL-BREF TW in patients with TB. PMID:22877305

  19. Validating fatty acid intake as estimated by an FFQ: how does the 24 h recall perform as reference method compared with the duplicate portion?

    PubMed

    Trijsburg, Laura; de Vries, Jeanne Hm; Hollman, Peter Ch; Hulshof, Paul Jm; van 't Veer, Pieter; Boshuizen, Hendriek C; Geelen, Anouk

    2018-05-08

    To compare the performance of the commonly used 24 h recall (24hR) with the more distinct duplicate portion (DP) as reference method for validation of fatty acid intake estimated with an FFQ. Intakes of SFA, MUFA, n-3 fatty acids and linoleic acid (LA) were estimated by chemical analysis of two DP and by on average five 24hR and two FFQ. Plasma n-3 fatty acids and LA were used to objectively compare ranking of individuals based on DP and 24hR. Multivariate measurement error models were used to estimate validity coefficients and attenuation factors for the FFQ with the DP and 24hR as reference methods. Wageningen, the Netherlands. Ninety-two men and 106 women (aged 20-70 years). Validity coefficients for the fatty acid estimates by the FFQ tended to be lower when using the DP as reference method compared with the 24hR. Attenuation factors for the FFQ tended to be slightly higher based on the DP than those based on the 24hR as reference method. Furthermore, when using plasma fatty acids as reference, the DP showed comparable to slightly better ranking of participants according to their intake of n-3 fatty acids (0·33) and n-3:LA (0·34) than the 24hR (0·22 and 0·24, respectively). The 24hR gives only slightly different results compared with the distinctive but less feasible DP, therefore use of the 24hR seems appropriate as the reference method for FFQ validation of fatty acid intake.

  20. In silico search, characterization and validation of new EST-SSR markers in the genus Prunus.

    PubMed

    Sorkheh, Karim; Prudencio, Angela S; Ghebinejad, Azim; Dehkordi, Mehrana Kohei; Erogul, Deniz; Rubio, Manuel; Martínez-Gómez, Pedro

    2016-07-07

    Simple sequence repeats (SSRs) are defined as sequence repeat units between 1 and 6 bp that occur in both coding and non-coding regions abundant in eukaryotic genomes, which may affect the expression of genes. In this study, expressed sequence tags (ESTs) of eight Prunus species were analyzed for in silico mining of EST-SSRs, protein annotation, and open reading frames (ORFs), and the identification of codon repetitions. A total of 316 SSRs were identified using MISA software. Dinucleotide SSR motifs (26.31 %) were found to be the most abundant type of repeats, followed by tri- (14.58 %), tetra- (0.53 %), and penta- (0.27 %) nucleotide motifs. An attempt was made to design primer pairs for 316 identified SSRs but these were successful for only 175 SSR sequences. The positions of SSRs with respect to ORFs were detected, and annotation of sequences containing SSRs was performed to assign function to each sequence. SSRs were also characterized (in terms of position in the reference genome and associated gene) using the two available Prunus reference genomes (mei and peach). Finally, 38 SSR markers were validated across peach, almond, plum, and apricot genotypes. This validation showed a higher transferability level of EST-SSR developed in P. mume (mei) in comparison with the rest of species analyzed. Findings will aid analysis of functionally important molecular markers and facilitate the analysis of genetic diversity.

  1. Lung Reference Set A Application: LaszloTakacs - Biosystems (2010) — EDRN Public Portal

    Cancer.gov

    We would like to access the NCI lung cancer Combined Pre-Validation Reference Set A in order to further validate a lung cancer diagnostic test candidate. Our test is based on a panel of antibodies which have been tested on 4 different cohorts (see below, paragraph “Preliminary Data and Methods”). This Reference Set A, whose clinical setting is “Diagnosis of lung cancer”, will be used to validate the panel of monoclonal antibodies which extensive data analysis has shown to provide the best discrimination between control and lung cancer patient plasma samples; sensitivity and specificity values from ROC analyses exceed 85%.

  2. Gene expression studies of reference genes for quantitative real-time PCR: an overview in insects.

    PubMed

    Shakeel, Muhammad; Rodriguez, Alicia; Tahir, Urfa Bin; Jin, Fengliang

    2018-02-01

    Whenever gene expression is being examined, it is essential that a normalization process is carried out to eliminate non-biological variations. The use of reference genes, such as glyceraldehyde-3-phosphate dehydrogenase, actin, and ribosomal protein genes, is the usual method of choice for normalizing gene expression. Although reference genes are used to normalize target gene expression, a major problem is that the stability of these genes differs among tissues, developmental stages, species, and responses to abiotic factors. Therefore, the use and validation of multiple reference genes are required. This review discusses why RT-qPCR has become the preferred method for validating gene expression profiles, the use of specific and non-specific dyes, and the importance of primer and probe design for qPCR, and it surveys several statistical algorithms developed to help validate potential reference genes. The conflicts arising from the use of classical reference genes in normalization, and their replacement with novel reference genes, are also discussed in light of the high or low stability of classical and novel reference genes under various biotic and abiotic experimental conditions and the various methods applied for reference gene amplification.

  3. Realization of a thermal cloak-concentrator using a metamaterial transformer.

    PubMed

    Liu, Ding-Peng; Chen, Po-Jung; Huang, Hsin-Haou

    2018-02-06

    By combining rotating squares with auxetic properties, we developed a metamaterial transformer capable of realizing metamaterials with tunable functionalities. We investigated the use of a metamaterial transformer-based thermal cloak-concentrator that can change from a cloak to a concentrator when the device configuration is transformed. We established that the proposed dual-functional metamaterial can either thermally protect a region (cloak) or focus heat flux in a small region (concentrator). The dual functionality was verified by finite element simulations and validated by experiments with a specimen composed of copper, epoxy, and rotating squares. This work provides an effective and efficient method for controlling the gradient of heat, in addition to providing a reference for other thermal metamaterials to possess such controllable functionalities by adapting the concept of a metamaterial transformer.

  4. Comparison of Two Predictive Models for Short-Term Mortality in Patients after Severe Traumatic Brain Injury.

    PubMed

    Kesmarky, Klara; Delhumeau, Cecile; Zenobi, Marie; Walder, Bernhard

    2017-07-15

    The Glasgow Coma Scale (GCS) and the Abbreviated Injury Score of the head region (HAIS) are validated prognostic factors in traumatic brain injury (TBI). The aim of this study was to compare the prognostic performance of an alternative predictive model including motor GCS, pupillary reactivity, age, HAIS, and presence of multi-trauma for short-term mortality with a reference predictive model including motor GCS, pupil reaction, and age (IMPACT core model). A secondary analysis of a prospective epidemiological cohort study in Switzerland including patients after severe TBI (HAIS >3) with the outcome death at 14 days was performed. Performance of prediction, accuracy of discrimination (area under the receiver operating characteristic curve [AUROC]), calibration, and validity of the two predictive models were investigated. The cohort included 808 patients (median age, 56; interquartile range, 33-71), median GCS at hospital admission 3 (3-14), abnormal pupil reaction 29%, with a death rate of 29.7% at 14 days. The alternative predictive model had a higher accuracy of discrimination for predicting death at 14 days than the reference predictive model (AUROC 0.852, 95% confidence interval [CI] 0.824-0.880 vs. AUROC 0.826, 95% CI 0.795-0.857; p < 0.0001). The alternative predictive model had calibration equivalent to that of the reference predictive model (Hosmer-Lemeshow Chi2 8.52, p = 0.345 vs. Chi2 8.66, p = 0.372). The optimism-corrected value of AUROC for the alternative predictive model was 0.845. After severe TBI, a higher performance of prediction for short-term mortality was observed with the alternative predictive model, compared with the reference predictive model.
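    The AUROC values compared above can be computed without tracing the ROC curve, via the rank-sum (Mann-Whitney) identity; a minimal sketch with hypothetical argument names:

```python
import numpy as np
from scipy.stats import rankdata

def auroc(risk_scores, died):
    """AUROC via the Mann-Whitney identity: the probability that a randomly chosen
    patient who died received a higher predicted risk than one who survived."""
    scores = np.asarray(risk_scores, dtype=float)
    died = np.asarray(died, dtype=bool)
    ranks = rankdata(scores)                 # average ranks handle tied scores
    n_pos = int(died.sum())
    n_neg = len(scores) - n_pos
    r_pos = ranks[died].sum()
    return float((r_pos - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg))
```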

  5. Accuracy of taxonomy prediction for 16S rRNA and fungal ITS sequences

    PubMed Central

    2018-01-01

    Prediction of taxonomy for marker gene sequences such as 16S ribosomal RNA (rRNA) is a fundamental task in microbiology. Most experimentally observed sequences are diverged from reference sequences of authoritatively named organisms, creating a challenge for prediction methods. I assessed the accuracy of several algorithms using cross-validation by identity, a new benchmark strategy which explicitly models the variation in distances between query sequences and the closest entry in a reference database. When the accuracy of genus predictions was averaged over a representative range of identities with the reference database (100%, 99%, 97%, 95% and 90%), all tested methods had ≤50% accuracy on the currently-popular V4 region of 16S rRNA. Accuracy was found to fall rapidly with identity; for example, better methods were found to have V4 genus prediction accuracy of ∼100% at 100% identity but ∼50% at 97% identity. The relationship between identity and taxonomy was quantified as the probability that a rank is the lowest shared by a pair of sequences with a given pair-wise identity. With the V4 region, 95% identity was found to be a twilight zone where taxonomy is highly ambiguous because the probabilities that the lowest shared rank between pairs of sequences is genus, family, order or class are approximately equal. PMID:29682424

  6. Current Practices of Measuring and Reference Range Reporting of Free and Total Testosterone in the United States.

    PubMed

    Le, Margaret; Flores, David; May, Danica; Gourley, Eric; Nangia, Ajay K

    2016-05-01

    The evaluation and management of male hypogonadism should be based on symptoms and on serum testosterone levels. Diagnostically this relies on accurate testing and reference values. Our objective was to define the distribution of reference values and assays for free and total testosterone by clinical laboratories in the United States. Upper and lower reference values, assay methodology and source of published reference ranges were obtained from laboratories across the country. A standardized survey was reviewed with laboratory staff via telephone. Descriptive statistics were used to tabulate results. We surveyed a total of 120 laboratories in 47 states. Total testosterone was measured in-house at 73% of laboratories. At the remaining laboratories, studies were sent to larger centralized reference facilities. The mean ± SD lower reference value of total testosterone was 231 ± 46 ng/dl (range 160 to 300) and the mean upper limit was 850 ± 141 ng/dl (range 726 to 1,130). Only 9% of laboratories where in-house total testosterone testing was performed created a reference range unique to their region. Others validated the instrument-recommended reference values in a small number of internal test samples. For free testosterone, 82% of laboratories sent testing to larger centralized reference laboratories where equilibrium dialysis and/or liquid chromatography with mass spectrometry was done. The remaining laboratories used published algorithms to calculate serum free testosterone. Reference ranges for testosterone assays vary significantly among laboratories. The ranges are predominantly defined by limited population studies of men with unknown medical and reproductive histories. These poorly defined and variable reference values, especially the lower limit, affect how clinicians determine treatment. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  7. Validation of reference genes for gene expression analysis in olive (Olea europaea) mesocarp tissue by quantitative real-time RT-PCR

    PubMed Central

    2014-01-01

    Background Gene expression analysis using quantitative reverse transcription PCR (qRT-PCR) is a robust method wherein the expression levels of target genes are normalised using internal control genes, known as reference genes, to derive changes in gene expression levels. Although reference genes have recently been suggested for olive tissues, combined/independent analysis on different cultivars has not yet been tested. Therefore, an assessment of reference genes was required to validate the recent findings and select stably expressed genes across different olive cultivars. Results A total of eight candidate reference genes [glyceraldehyde 3-phosphate dehydrogenase (GAPDH), serine/threonine-protein phosphatase catalytic subunit (PP2A), elongation factor 1 alpha (EF1-alpha), polyubiquitin (OUB2), aquaporin tonoplast intrinsic protein (TIP2), tubulin alpha (TUBA), 60S ribosomal protein L18-3 (60S RBP L18-3) and polypyrimidine tract-binding protein homolog 3 (PTB)] were chosen based on their stability in olive tissues as well as in other plants. Expression stability was examined by qRT-PCR across 12 biological samples, representing mesocarp tissues at various developmental stages in three different olive cultivars, Barnea, Frantoio and Picual, independently and together during the 2009 season with two software programs, GeNorm and BestKeeper. Both software packages identified GAPDH, EF1-alpha and PP2A as the three most stable reference genes across the three cultivars and in the cultivar Barnea. GAPDH, EF1-alpha and 60S RBP L18-3 were found to be the most stable reference genes in the cultivar Frantoio, while 60S RBP L18-3, OUB2 and PP2A were found to be the most stable reference genes in the cultivar Picual. Conclusions The analyses of expression stability of reference genes using qRT-PCR revealed that GAPDH, EF1-alpha, PP2A, 60S RBP L18-3 and OUB2 are suitable reference genes for expression analysis in developing Olea europaea mesocarp tissues, displaying the highest level of expression stability across three different olive cultivars, Barnea, Frantoio and Picual; however, the combination of the three most stable reference genes does vary amongst individual cultivars. This study will provide guidance to other researchers in selecting reference genes for the normalization of target genes by qPCR across tissues obtained from the mesocarp region of the olive fruit in the cultivars Barnea, Frantoio and Picual. PMID:24884716

  8. Validation of miRNA genes suitable as reference genes in qPCR analyses of miRNA gene expression in Atlantic salmon (Salmo salar).

    PubMed

    Johansen, Ilona; Andreassen, Rune

    2014-12-23

    MicroRNAs (miRNAs) are an abundant class of endogenous small RNA molecules that downregulate gene expression at the post-transcriptional level. They play important roles by regulating genes that control multiple biological processes, and in recent years there has been an increased interest in studying miRNA genes and miRNA gene expression. The most common method applied to study the expression of single genes is quantitative PCR (qPCR). However, before the expression of mature miRNAs can be studied, robust qPCR methods (miRNA-qPCR) must be developed. This includes the identification and validation of suitable reference genes. We are particularly interested in Atlantic salmon (Salmo salar). This is an economically important aquaculture species, but no reference genes dedicated for use in miRNA-qPCR methods have been validated for this species. Our aim was, therefore, to identify suitable reference genes for miRNA-qPCR methods in Salmo salar. We used a systematic approach where we utilized similar studies in other species, some biological criteria, results from deep sequencing of small RNAs and, finally, experimental validation of candidate reference genes by qPCR to identify the most suitable reference genes. Ssa-miR-25-3p was identified as the most suitable single reference gene. The best combination of two reference genes was ssa-miR-25-3p and ssa-miR-455-5p. These two genes were constitutively and stably expressed across many different tissues. Furthermore, infectious salmon anaemia did not seem to affect their expression levels. These genes were amplified with high specificity and good efficiency, and the qPCR assays showed good linearity when applying a simple SYBR Green miRNA-qPCR method using miRNA gene-specific forward primers. We have identified suitable reference genes for miRNA-qPCR in Atlantic salmon. These results will greatly facilitate further studies on miRNA genes in this species. The reference genes identified are conserved genes that are identical in their mature sequence in many aquaculture species. Therefore, they may also be suitable as reference genes in other teleosts. Finally, the systematic approach used in our study successfully identified suitable reference genes, suggesting that this may be a useful strategy to apply in similar validation studies in other aquaculture species.

  9. [The role of reference laboratories in animal health programmes in South America].

    PubMed

    Bergmann, I E

    2003-08-01

    The contribution of the Panamerican Foot and Mouth Disease (FMD) Centre (PANAFTOSA), as an OIE (World Organisation for Animal Health) regional reference laboratory for the diagnosis of FMD and vesicular stomatitis, and for the control of the FMD vaccine, has been of fundamental importance to the development, implementation and harmonisation of modern laboratory procedures in South America. The significance of the work conducted by PANAFTOSA is particularly obvious when one considers the two pillars on which eradication programmes are based, namely: a well-structured regional laboratory network, and the creation of a system which allows technology and new developments to be transferred to Member Countries as quickly and efficiently as possible. Over the past decade, PANAFTOSA has kept pace with the changing epidemiological situation on the continent, and with developments in the international political and economic situation. This has involved the strengthening of quality policies, and the elaboration and implementation of diagnostic tools that make for more thorough epidemiological analyses. The integration of PANAFTOSA into the network of national laboratories and its cooperation with technical and scientific institutes, universities and the private sector means that local needs can be met, thanks to the design and rapid implementation of methodological tools which are validated using internationally accepted criteria. This collaboration, which ensures harmonisation of laboratory tests and enhances the quality of national Veterinary Services, serves to promote greater equity, a prerequisite for regional eradication strategies, and this, in turn, helps to increase competitiveness in the region.

  10. Explicitly computing geodetic coordinates from Cartesian coordinates

    NASA Astrophysics Data System (ADS)

    Zeng, Huaien

    2013-04-01

    This paper presents a new form of quartic equation based on Lagrange's extremum law and a Groebner basis under the constraint that the geodetic height is the shortest distance between a given point and the reference ellipsoid. A very explicit and concise formula for the quartic equation is found by Ferrari's method, which avoids the need for a good starting guess required by iterative methods. A new explicit algorithm is then proposed to compute geodetic coordinates from Cartesian coordinates. The convergence region of the algorithm is investigated and the corresponding correct solution is given. Lastly, the algorithm is validated with numerical experiments.
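
    The paper's closed-form quartic solution is not reproduced here. For readers who want a baseline to compare such an algorithm against, the sketch below implements the standard fixed-point iteration for the Cartesian-to-geodetic conversion on a WGS84 ellipsoid; the ellipsoid parameters and test point are assumptions made purely for illustration.

```python
import math

def cartesian_to_geodetic(x, y, z, a=6378137.0, f=1 / 298.257223563):
    """Standard fixed-point iteration for geodetic latitude and height on a
    reference ellipsoid (WGS84 by default); this is not the closed-form
    quartic solution described in the paper."""
    e2 = f * (2.0 - f)                      # first eccentricity squared
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1.0 - e2))     # initial guess
    for _ in range(10):                     # converges in a few iterations
        n = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)
        h = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1.0 - e2 * n / (n + h)))
    return math.degrees(lat), math.degrees(lon), h

print(cartesian_to_geodetic(4201000.0, 172460.0, 4780100.0))
```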

  11. The NIH analytical methods and reference materials program for dietary supplements.

    PubMed

    Betz, Joseph M; Fisher, Kenneth D; Saldanha, Leila G; Coates, Paul M

    2007-09-01

    Quality of botanical products is a great uncertainty that consumers, clinicians, regulators, and researchers face. Definitions of quality abound, and include specifications for sanitation, adventitious agents (pesticides, metals, weeds), and content of natural chemicals. Because dietary supplements (DS) are often complex mixtures, they pose analytical challenges and method validation may be difficult. In response to product quality concerns and the need for validated and publicly available methods for DS analysis, the US Congress directed the Office of Dietary Supplements (ODS) at the National Institutes of Health (NIH) to accelerate an ongoing methods validation process, and the Dietary Supplements Methods and Reference Materials Program was created. The program was constructed from stakeholder input and incorporates several federal procurement and granting mechanisms in a coordinated and interlocking framework. The framework facilitates validation of analytical methods, analytical standards, and reference materials.

  12. Development and validation of electronic surveillance tool for acute kidney injury: A retrospective analysis.

    PubMed

    Ahmed, Adil; Vairavan, Srinivasan; Akhoundi, Abbasali; Wilson, Gregory; Chiofolo, Caitlyn; Chbat, Nicolas; Cartin-Ceba, Rodrigo; Li, Guangxi; Kashani, Kianoush

    2015-10-01

    Timely detection of acute kidney injury (AKI) facilitates prevention of its progress and potentially therapeutic interventions. The study objective is to develop and validate an electronic surveillance tool (AKI sniffer) to detect AKI in 2 independent retrospective cohorts of intensive care unit (ICU) patients. The primary aim is to compare the sensitivity, specificity, and positive and negative predictive values of AKI sniffer performance against a reference standard. This study is conducted in the ICUs of a tertiary care center. The derivation cohort study subjects were Olmsted County, MN, residents admitted to all Mayo Clinic ICUs from July 1, 2010, through December 31, 2010, and the validation cohort study subjects were all patients admitted to a Mayo Clinic, Rochester, campus medical/surgical ICU on January 12, 2010, through March 23, 2010. All included records were reviewed by 2 independent investigators who adjudicated AKI using the Acute Kidney Injury Network criteria; disagreements were resolved by a third reviewer. This constituted the reference standard. An electronic algorithm was developed; its precision and reliability were assessed in comparison with the reference standard in 2 separate cohorts, derivation and validation. Of 1466 screened patients, a total of 944 patients were included in the study: 482 for derivation and 462 for validation. Compared with the reference standard in the validation cohort, the sensitivity and specificity of the AKI sniffer were 88% and 96%, respectively. The Cohen κ (95% confidence interval) agreement between the electronic and the reference standard was 0.84 (0.78-0.89) and 0.85 (0.80-0.90) in the derivation and validation cohorts. Acute kidney injury can reliably and accurately be detected electronically in ICU patients. The presented method is applicable for both clinical (decision support) and research (enrollment for clinical trials) settings. Prospective validation is required. Copyright © 2015 Elsevier Inc. All rights reserved.
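
    The agreement statistics quoted above (sensitivity, specificity, predictive values and Cohen's kappa) all follow from a 2x2 table of tool results against the adjudicated reference standard. A minimal sketch with hypothetical counts, not the study's data, is given below.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Agreement of an electronic surveillance tool with a reference standard,
    summarised from a 2x2 table of counts."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    po = (tp + tn) / n                                             # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv, kappa=kappa)

# Hypothetical counts for a validation cohort (not the study's actual data).
print(diagnostic_metrics(tp=110, fp=15, fn=18, tn=307))
```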

  13. A data driven method for estimation of B(avail) and appK(D) using a single injection protocol with [¹¹C]raclopride in the mouse.

    PubMed

    Wimberley, Catriona J; Fischer, Kristina; Reilhac, Anthonin; Pichler, Bernd J; Gregoire, Marie Claude

    2014-10-01

    The partial saturation approach (PSA) is a simple, single-injection experimental protocol that estimates both B(avail) and appK(D) without the use of blood sampling. This makes it ideal for use in longitudinal studies of neurodegenerative diseases in the rodent. The aim of this study was to increase the range and applicability of the PSA by developing a data-driven strategy for determining reliable regional estimates of receptor density (B(avail)) and in vivo affinity (1/appK(D)), and to validate the strategy using a simulation model. The data-driven method uses a time window guided by the dynamic equilibrium state of the system, as opposed to using a static time window. To test the method, simulations of partial saturation experiments were generated and validated against experimental data. The experimental conditions simulated included a range of receptor occupancy levels and three different B(avail) and appK(D) values to mimic disease states. Also, the effect of using a reference region and typical PET noise on the stability and accuracy of the estimates was investigated. The investigations showed that the parameter estimates in a simulated healthy mouse, using the data-driven method, were within 10-30% of the simulated input for the range of occupancy levels simulated. Throughout all experimental conditions simulated, the accuracy and robustness of the estimates using the data-driven method were much improved compared with the typical method of using a static time window, especially at low receptor occupancy levels. Introducing a reference region caused a bias of approximately 10% over the range of occupancy levels. Based on extensive simulated experimental conditions, it was shown that the data-driven method provides accurate and precise estimates of B(avail) and appK(D) for a broader range of conditions compared to the original method. Copyright © 2014 Elsevier Inc. All rights reserved.
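
    At its core, the partial saturation approach reads B(avail) and appK(D) off a saturation-binding (Langmuir-type) relation between free and bound tracer within the chosen time window. The sketch below fits that relation to invented equilibrium-window concentrations; it does not reproduce the paper's data-driven window selection, and all numbers are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def bound(free, b_avail, app_kd):
    """Langmuir-type relation between free and specifically bound tracer."""
    return b_avail * free / (app_kd + free)

# Hypothetical equilibrium-window data (nM); not from the cited experiments.
free = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
specific_bound = np.array([4.2, 7.5, 11.8, 16.4, 20.1, 22.6])

(b_avail, app_kd), _ = curve_fit(bound, free, specific_bound, p0=[25.0, 2.0])
print(f"B_avail ~ {b_avail:.1f} nM, appK_D ~ {app_kd:.1f} nM")
```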

  14. Application of correspondence analysis in the assessment of mine tailings dam breakage risk in the Mediterranean region.

    PubMed

    Salgueiro, Ana Rita; Pereira, Henrique Garcia; Rico, Maria-Teresa; Benito, Gerado; Díez-Herreo, Andrés

    2008-02-01

    A new statistical approach for preliminary risk evaluation of breakage in tailings dam is presented and illustrated by a case study regarding the Mediterranean region. The objective of the proposed method is to establish an empirical scale of risk, from which guidelines for prioritizing the collection of further specific information can be derived. The method relies on a historical database containing, in essence, two sets of qualitative data: the first set concerns the variables that are observable before the disaster (e.g., type and size of the dam, its location, and state of activity), and the second refers to the consequences of the disaster (e.g., failure type, sludge characteristics, fatalities categorization, and downstream range of damage). Based on a modified form of correspondence analysis, where the second set of attributes are projected as "supplementary variables" onto the axes provided by the eigenvalue decomposition of the matrix referring to the first set, a "qualitative regression" is performed, relating the variables to be predicted (contained in the second set) with the "predictors" (the observable variables). On the grounds of the previously derived relationship, the risk of breakage in a new case can be evaluated, given observable variables. The method was applied in a case study regarding a set of 13 test sites where the ranking of risk obtained was validated by expert knowledge. Once validated, the procedure was included in the final output of the e-EcoRisk UE project (A Regional Enterprise Network Decision-Support System for Environmental Risk and Disaster Management of Large-Scale Industrial Spills), allowing for a dynamic historical database updating and providing a prompt rough risk evaluation for a new case. The aim of this section of the global project is to provide a quantified context where failure cases occurred in the past for supporting analogue reasoning in preventing similar situations.
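
    A minimal sketch of correspondence analysis with supplementary variables is shown below: the axes are built from the observable attributes only, and the consequence attributes are projected onto them afterwards via the usual transition formula. The counts are hypothetical and the code is a generic SVD-based implementation, not the modified procedure used in the e-EcoRisk project.

```python
import numpy as np

def correspondence_analysis(table, sup_cols=None):
    """Basic correspondence analysis via SVD of standardised residuals;
    supplementary columns (if given) are projected onto the axes afterwards."""
    N = np.asarray(table, dtype=float)
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)               # row / column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    keep = sv > 1e-10                                  # drop the trivial null axis
    U, sv, Vt = U[:, keep], sv[keep], Vt[keep]
    row_pc = (U * sv) / np.sqrt(r)[:, None]            # row principal coordinates
    col_pc = (Vt.T * sv) / np.sqrt(c)[:, None]         # column principal coordinates
    sup_pc = None
    if sup_cols is not None:
        prof = np.asarray(sup_cols, dtype=float)
        prof = prof / prof.sum(axis=0)                 # supplementary column profiles
        sup_pc = prof.T @ row_pc / sv                  # transition formula
    return row_pc, col_pc, sup_pc

# Hypothetical counts: rows = dam-failure cases, columns = observable attributes;
# supplementary columns = consequence attributes, kept out of the axis construction.
active = [[8, 2, 5], [3, 7, 4], [1, 9, 6], [6, 3, 2]]
consequences = [[5, 1], [2, 6], [1, 8], [4, 2]]
rows, cols, sups = correspondence_analysis(active, consequences)
print(np.round(sups, 3))
```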

  15. Identification of reference genes and validation for gene expression studies in diverse axolotl (Ambystoma mexicanum) tissues.

    PubMed

    Guelke, Eileen; Bucan, Vesna; Liebsch, Christina; Lazaridis, Andrea; Radtke, Christine; Vogt, Peter M; Reimers, Kerstin

    2015-04-10

    For precise quantitative RT-PCR normalization, a set of valid reference genes is obligatory. Moreover, the experimental conditions have to be taken into account, as they bias the regulation of reference genes. To date, no reference targets have been described for the axolotl (Ambystoma mexicanum). In a search of the public database SalSite for genetic information on the axolotl, we identified fourteen presumptive reference genes, eleven of which were further tested for their gene expression stability. This study characterizes the expression patterns of 11 putative endogenous control genes during axolotl limb regeneration and in an axolotl tissue panel. All 11 reference genes showed variable expression. Strikingly, ACTB was found to be the most stably expressed gene in all comparative tissue groups, so we consider it suitable for all kinds of axolotl tissue-type investigations. In addition, we suggest GAPDH and RPLP0 as suitable for certain axolotl tissue analyses. For axolotl limb regeneration, a validated pair of reference genes is ODC and RPLP0. With these findings, new insights into axolotl gene expression profiling might be gained. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. New Formulation for the Viscosity of Propane

    NASA Astrophysics Data System (ADS)

    Vogel, Eckhard; Herrmann, Sebastian

    2016-12-01

    A new viscosity formulation for propane, using the reference equation of state for its thermodynamic properties by Lemmon et al. [J. Chem. Eng. Data 54, 3141 (2009)] and valid in the fluid region from the triple-point temperature to 650 K and pressures up to 100 MPa, is presented. Initially, a zero-density contribution and a contribution for the critical enhancement, each based on experimental data, were generated independently. The higher-density contributions are correlated as a function of the reciprocal reduced temperature τ = Tc/T and of the reduced density δ = ρ/ρc (Tc—critical temperature, ρc—critical density). The final formulation includes 17 coefficients inferred by applying a state-of-the-art linear optimization algorithm. The evaluation and choice of the primary data sets are detailed due to their importance. The viscosity at low pressures p ≤ 0.2 MPa is represented with an expanded uncertainty of 0.5% (coverage factor k = 2) for temperatures 273 ≤ T/K ≤ 625. The expanded uncertainty in the vapor phase at subcritical temperatures T ≥ 273 K, as well as in the supercritical thermodynamic region T ≤ 423 K at pressures p ≤ 30 MPa, is assumed to be 1.5%. In the near-critical region (1.001 < 1/τ < 1.010 and 0.8 < δ < 1.2), the expanded uncertainty increases with decreasing temperature up to 3.0%. It is further increased to 4.0% in regions of less reliable primary data sets and to 6.0% in ranges in which no primary data are available but the equation of state is valid. Tables of viscosity computed for the new formulation are given in an Appendix for the single-phase region, for the vapor-liquid phase boundary, and for the near-critical region.

  17. A novel feature-tracking echocardiographic method for the quantitation of regional myocardial function: validation in an animal model of ischemia-reperfusion.

    PubMed

    Pirat, Bahar; Khoury, Dirar S; Hartley, Craig J; Tiller, Les; Rao, Liyun; Schulz, Daryl G; Nagueh, Sherif F; Zoghbi, William A

    2008-02-12

    The aim of this study was to validate a novel, angle-independent, feature-tracking method for the echocardiographic quantitation of regional function. A new echocardiographic method, Velocity Vector Imaging (VVI) (syngo Velocity Vector Imaging technology, Siemens Medical Solutions, Ultrasound Division, Mountain View, California), has been introduced; it is based on feature tracking (incorporating speckle and endocardial border tracking) and allows the quantitation of endocardial strain, strain rate (SR), and velocity. Seven dogs were studied at baseline and during various interventions causing alterations in regional function: dobutamine, 5-min coronary occlusion with reperfusion up to 1 h, followed by dobutamine and esmolol infusions. Echocardiographic images were acquired from short- and long-axis views of the left ventricle. Segment-length sonomicrometry crystals were used as the reference method. Changes in systolic strain in ischemic segments were tracked well with VVI during the different states of regional function. There was a good correlation between circumferential and longitudinal systolic strain by VVI and sonomicrometry (r = 0.88 and r = 0.83, respectively, p < 0.001). Strain measurements in the nonischemic basal segments also demonstrated a significant correlation between the 2 methods (r = 0.65, p < 0.001). Similarly, a significant relation was observed for circumferential and longitudinal SR between the 2 methods (r = 0.94, p < 0.001 and r = 0.90, p < 0.001, respectively). The relation of endocardial velocity to changes in strain by sonomicrometry was weaker owing to significant cardiac translation. Velocity Vector Imaging, a new feature-tracking method, can accurately assess regional myocardial function at the endocardial level and is a promising clinical tool for the simultaneous quantification of regional and global myocardial function.

  18. Validation of Reference Genes for Real-Time Quantitative PCR (qPCR) Analysis of Avibacterium paragallinarum.

    PubMed

    Wen, Shuxiang; Chen, Xiaoling; Xu, Fuzhou; Sun, Huiling

    2016-01-01

    Real-time quantitative reverse transcription PCR (qRT-PCR) offers a robust method for measurement of gene expression levels. Selection of reliable reference gene(s) for gene expression studies helps reduce variations derived from differing amounts of RNA and cDNA and from the efficiency of the reverse transcriptase or polymerase enzymes. Until now, reference genes identified for other members of the family Pasteurellaceae have not been validated for Avibacterium paragallinarum. The aim of this study was to validate nine reference genes of serovars A, B, and C strains of A. paragallinarum in different growth phases by qRT-PCR. Three of the most widely used statistical algorithms, geNorm, NormFinder and the ΔCT method, were used to evaluate the expression stability of reference genes. Data analyzed by overall rankings showed that in the exponential and stationary phases of serovar A, the most stable reference genes were gyrA and atpD, respectively; in the exponential and stationary phases of serovar B, the most stable reference genes were atpD and recN, respectively; and in the exponential and stationary phases of serovar C, the most stable reference genes were rpoB and recN, respectively. This study provides recommendations for stable endogenous control genes for use in further studies involving measurement of gene expression levels.

  19. Simulation of runoff and nutrient export from a typical small watershed in China using the Hydrological Simulation Program-Fortran.

    PubMed

    Li, Zhaofu; Liu, Hongyu; Luo, Chuan; Li, Yan; Li, Hengpeng; Pan, Jianjun; Jiang, Xiaosan; Zhou, Quansuo; Xiong, Zhengqin

    2015-05-01

    The Hydrological Simulation Program-Fortran (HSPF), which is a hydrological and water-quality computer model that was developed by the United States Environmental Protection Agency, was employed to simulate runoff and nutrient export from a typical small watershed in a hilly eastern monsoon region of China. First, a parameter sensitivity analysis was performed to assess how changes in the model parameters affect runoff and nutrient export. Next, the model was calibrated and validated using measured runoff and nutrient concentration data. The Nash-Sutcliffe efficiency (ENS) values of the yearly runoff were 0.87 and 0.69 for the calibration and validation periods, respectively. For storm runoff events, the ENS values were 0.93 for the calibration period and 0.47 for the validation period. Antecedent precipitation and soil moisture conditions can affect the simulation accuracy of storm event flow. The ENS values for the total nitrogen (TN) export were 0.58 for the calibration period and 0.51 for the validation period. In addition, the correlation coefficients between the observed and simulated TN concentrations were 0.84 for the calibration period and 0.74 for the validation period. For phosphorus export, the ENS values were 0.89 for the calibration period and 0.88 for the validation period. In addition, the correlation coefficients between the observed and simulated orthophosphate concentrations were 0.96 and 0.94 for the calibration and validation periods, respectively. The nutrient simulation results are generally satisfactory even though the parameter-lumped HSPF model cannot represent the effects of the spatial pattern of land cover on nutrient export. The model parameters obtained in this study could serve as reference values for applying the model to similar regions. In addition, HSPF can properly describe the characteristics of water quantity and quality processes in this area. After adjustment, calibration, and validation of the parameters, the HSPF model is suitable for hydrological and water-quality simulations in watershed planning and management and for designing best management practices.
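
    The Nash-Sutcliffe efficiency used throughout the evaluation compares squared model errors against the variance of the observations. A short sketch with made-up runoff values follows; the numbers are illustrative only.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is
    no better than the mean of the observations."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical yearly runoff values (mm), not the study's data.
obs = [412.0, 380.5, 455.2, 398.7, 430.1]
sim = [405.3, 390.2, 440.8, 410.5, 428.9]
print(f"ENS = {nash_sutcliffe(obs, sim):.2f}")
```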

  20. Calmodulin Polymerase Chain Reaction–Restriction Fragment Length Polymorphism for Leishmania Identification and Typing

    PubMed Central

    Miranda, Aracelis; Samudio, Franklyn; González, Kadir; Saldaña, Azael; Brandão, Adeilton; Calzada, Jose E.

    2016-01-01

    A precise identification of Leishmania species involved in human infections has epidemiological and clinical importance. Herein, we describe a preliminary validation of a restriction fragment length polymorphism assay, based on the calmodulin intergenic spacer region, as a tool for detecting and typing Leishmania species. After calmodulin amplification, the enzyme HaeIII yielded a clear distinction between reference strains of Leishmania mexicana, Leishmania amazonensis, Leishmania infantum, Leishmania lainsoni, and the rest of the Viannia reference species analyzed. The closely related Viannia species: Leishmania braziliensis, Leishmania panamensis, and Leishmania guyanensis, are separated in a subsequent digestion step with different restriction enzymes. We have developed a more accessible molecular protocol for Leishmania identification/typing based on the exploitation of part of the calmodulin gene. This methodology has the potential to become an additional tool for Leishmania species characterization and taxonomy. PMID:27352873

  1. Combined striatal binding and cerebral influx analysis of dynamic 11C-raclopride PET improves early differentiation between multiple-system atrophy and Parkinson disease.

    PubMed

    Van Laere, Koen; Clerinx, Kristien; D'Hondt, Eduard; de Groot, Tjibbe; Vandenberghe, Wim

    2010-04-01

    Striatal dopamine D(2) receptor (D2R) PET has been proposed to differentiate between Parkinson disease (PD) and multiple-system atrophy with predominant parkinsonism (MSA-P). However, considerable overlap in striatal D(2) binding may exist between PD and MSA-P. It has been shown that imaging of neuronal activity, as determined by metabolism or perfusion, can also help distinguish PD from MSA-P. We investigated whether the differential diagnostic value of (11)C-raclopride PET could be improved by dynamic scan analysis combining D2R binding and regional tracer influx. (11)C-raclopride PET was performed in 9 MSA-P patients (mean age +/- SD, 56.2 +/- 10.2 y; disease duration, 2.9 +/- 0.8 y; median Hoehn-Yahr score, 3), 10 PD patients (mean age +/- SD, 65.7 +/- 8.1 y; disease duration, 3.3 +/- 1.5 y; median Hoehn-Yahr score, 1.5), and 10 healthy controls (mean age +/- SD, 61.6 +/- 6.5 y). Diagnosis was obtained after prolonged follow-up (MSA-P, 5.5 +/- 2.0 y; PD, 6.0 +/- 2.3 y) using validated clinical criteria. Spatially normalized parametric images of binding potential (BP) and local influx ratio (R(1) = K(1)/K'(1)) of (11)C-raclopride were obtained using a voxelwise reference tissue model with occipital cortex as reference region. Stepwise forward discriminant analysis with cross-validation, with and without the inclusion of regional R(1) values, was performed using a predefined volume-of-interest template. Using conventional BP values, we correctly classified 65.5% (all values given with cross-validation) of 29 cases only. The combination of BP and R(1) information increased discrimination accuracy to 79.3%. When healthy controls were not included and patients only were considered, BP information alone discriminated PD and MSA-P in 84.2% of cases, but the combination with R(1) data increased accuracy to 100%. Discriminant analysis using combined striatal D2R BP and cerebral influx ratio information of a single dynamic (11)C-raclopride PET scan distinguishes MSA-P and PD patients with high accuracy and is superior to conventional methods of striatal D2R binding analysis.
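
    A hedged sketch of the analysis pattern described above, linear discriminant analysis with leave-one-out cross-validation on BP- and R(1)-derived features, is given below using synthetic data. The stepwise variable selection and the volume-of-interest extraction from the PET images are not reproduced, and the feature values are invented.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical features per subject: mean striatal BP and striatal R1 ratio.
# Labels: 0 = PD, 1 = MSA-P (synthetic data, not the study's measurements).
X = np.vstack([
    rng.normal([2.8, 0.95], [0.3, 0.05], size=(10, 2)),   # PD-like
    rng.normal([2.2, 0.80], [0.3, 0.05], size=(9, 2)),    # MSA-P-like
])
y = np.array([0] * 10 + [1] * 9)

# Stepwise selection is not reproduced; both features enter the model directly.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                      cv=LeaveOneOut(), scoring="accuracy")
print(f"leave-one-out accuracy: {acc.mean():.2f}")
```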

  2. Validation of reference genes for quantitative gene expression analysis in experimental epilepsy.

    PubMed

    Sadangi, Chinmaya; Rosenow, Felix; Norwood, Braxton A

    2017-12-01

    To grasp the molecular mechanisms and pathophysiology underlying epilepsy development (epileptogenesis) and epilepsy itself, it is important to understand the gene expression changes that occur during these phases. Quantitative real-time polymerase chain reaction (qPCR) is a technique that rapidly and accurately determines gene expression changes. It is crucial, however, that stable reference genes are selected for each experimental condition to ensure that accurate values are obtained for genes of interest. If reference genes are unstably expressed, this can lead to inaccurate data and erroneous conclusions. To date, epilepsy studies have used mostly single, nonvalidated reference genes. This is the first study to systematically evaluate reference genes in male Sprague-Dawley rat models of epilepsy. We assessed 15 potential reference genes in hippocampal tissue obtained from 2 different models during epileptogenesis, 1 model during chronic epilepsy, and a model of noninjurious seizures. Reference gene ranking varied between models and also differed between epileptogenesis and chronic epilepsy time points. There was also some variance between the four mathematical models used to rank reference genes. Notably, we found novel reference genes to be more stably expressed than those most often used in experimental epilepsy studies. The consequence of these findings is that reference genes suitable for one epilepsy model may not be appropriate for others and that reference genes can change over time. It is, therefore, critically important to validate potential reference genes before using them as normalizing factors in expression analysis in order to ensure accurate, valid results. © 2017 Wiley Periodicals, Inc.

  3. Comparative genomics approach to detecting split-coding regions in a low-coverage genome: lessons from the chimaera Callorhinchus milii (Holocephali, Chondrichthyes).

    PubMed

    Dessimoz, Christophe; Zoller, Stefan; Manousaki, Tereza; Qiu, Huan; Meyer, Axel; Kuraku, Shigehiro

    2011-09-01

    Recent development of deep sequencing technologies has facilitated de novo genome sequencing projects, now conducted even by individual laboratories. However, this will yield more and more genome sequences that are not well assembled, and will hinder thorough annotation when no closely related reference genome is available. One of the challenging issues is the identification of protein-coding sequences split into multiple unassembled genomic segments, which can confound orthology assignment and various laboratory experiments requiring the identification of individual genes. In this study, using the genome of a cartilaginous fish, Callorhinchus milii, as test case, we performed gene prediction using a model specifically trained for this genome. We implemented an algorithm, designated ESPRIT, to identify possible linkages between multiple protein-coding portions derived from a single genomic locus split into multiple unassembled genomic segments. We developed a validation framework based on an artificially fragmented human genome, improvements between early and recent mouse genome assemblies, comparison with experimentally validated sequences from GenBank, and phylogenetic analyses. Our strategy provided insights into practical solutions for efficient annotation of only partially sequenced (low-coverage) genomes. To our knowledge, our study is the first formulation of a method to link unassembled genomic segments based on proteomes of relatively distantly related species as references.

  4. Comparative genomics approach to detecting split-coding regions in a low-coverage genome: lessons from the chimaera Callorhinchus milii (Holocephali, Chondrichthyes)

    PubMed Central

    Zoller, Stefan; Manousaki, Tereza; Qiu, Huan; Meyer, Axel; Kuraku, Shigehiro

    2011-01-01

    Recent development of deep sequencing technologies has facilitated de novo genome sequencing projects, now conducted even by individual laboratories. However, this will yield more and more genome sequences that are not well assembled, and will hinder thorough annotation when no closely related reference genome is available. One of the challenging issues is the identification of protein-coding sequences split into multiple unassembled genomic segments, which can confound orthology assignment and various laboratory experiments requiring the identification of individual genes. In this study, using the genome of a cartilaginous fish, Callorhinchus milii, as test case, we performed gene prediction using a model specifically trained for this genome. We implemented an algorithm, designated ESPRIT, to identify possible linkages between multiple protein-coding portions derived from a single genomic locus split into multiple unassembled genomic segments. We developed a validation framework based on an artificially fragmented human genome, improvements between early and recent mouse genome assemblies, comparison with experimentally validated sequences from GenBank, and phylogenetic analyses. Our strategy provided insights into practical solutions for efficient annotation of only partially sequenced (low-coverage) genomes. To our knowledge, our study is the first formulation of a method to link unassembled genomic segments based on proteomes of relatively distantly related species as references. PMID:21712341

  5. Assessment of brain reference genes for RT-qPCR studies in neurodegenerative diseases

    PubMed Central

    Rydbirk, Rasmus; Folke, Jonas; Winge, Kristian; Aznar, Susana; Pakkenberg, Bente; Brudek, Tomasz

    2016-01-01

    Evaluation of gene expression levels by reverse transcription quantitative real-time PCR (RT-qPCR) has for many years been the favourite approach for discovering disease-associated alterations. Normalization of results to stably expressed reference genes (RGs) is pivotal to obtain reliable results. This is especially important in relation to neurodegenerative diseases where disease-related structural changes may affect the most commonly used RGs. We analysed 15 candidate RGs in 98 brain samples from two brain regions from Alzheimer’s disease (AD), Parkinson’s disease (PD), Multiple System Atrophy, and Progressive Supranuclear Palsy patients. Using RefFinder, a web-based tool for evaluating RG stability, we identified the most stable RGs to be UBE2D2, CYC1, and RPL13 which we recommend for future RT-qPCR studies on human brain tissue from these patients. None of the investigated genes were affected by experimental variables such as RIN, PMI, or age. Findings were further validated by expression analyses of a target gene GSK3B, known to be affected by AD and PD. We obtained high variations in GSK3B levels when contrasting the results using different sets of common RG underlining the importance of a priori validation of RGs for RT-qPCR studies. PMID:27853238

  6. Assessment of brain reference genes for RT-qPCR studies in neurodegenerative diseases.

    PubMed

    Rydbirk, Rasmus; Folke, Jonas; Winge, Kristian; Aznar, Susana; Pakkenberg, Bente; Brudek, Tomasz

    2016-11-17

    Evaluation of gene expression levels by reverse transcription quantitative real-time PCR (RT-qPCR) has for many years been the favourite approach for discovering disease-associated alterations. Normalization of results to stably expressed reference genes (RGs) is pivotal to obtain reliable results. This is especially important in relation to neurodegenerative diseases where disease-related structural changes may affect the most commonly used RGs. We analysed 15 candidate RGs in 98 brain samples from two brain regions from Alzheimer's disease (AD), Parkinson's disease (PD), Multiple System Atrophy, and Progressive Supranuclear Palsy patients. Using RefFinder, a web-based tool for evaluating RG stability, we identified the most stable RGs to be UBE2D2, CYC1, and RPL13 which we recommend for future RT-qPCR studies on human brain tissue from these patients. None of the investigated genes were affected by experimental variables such as RIN, PMI, or age. Findings were further validated by expression analyses of a target gene GSK3B, known to be affected by AD and PD. We obtained high variations in GSK3B levels when contrasting the results using different sets of common RG underlining the importance of a priori validation of RGs for RT-qPCR studies.

  7. Internal dosimetry with the Monte Carlo code GATE: validation using the ICRP/ICRU female reference computational model

    NASA Astrophysics Data System (ADS)

    Villoing, Daphnée; Marcatili, Sara; Garcia, Marie-Paule; Bardiès, Manuel

    2017-03-01

    The purpose of this work was to validate GATE-based clinical scale absorbed dose calculations in nuclear medicine dosimetry. GATE (version 6.2) and MCNPX (version 2.7.a) were used to derive dosimetric parameters (absorbed fractions, specific absorbed fractions and S-values) for the reference female computational model proposed by the International Commission on Radiological Protection in ICRP report 110. Monoenergetic photons and electrons (from 50 keV to 2 MeV) and four isotopes currently used in nuclear medicine (fluorine-18, lutetium-177, iodine-131 and yttrium-90) were investigated. Absorbed fractions, specific absorbed fractions and S-values were generated with GATE and MCNPX for 12 regions of interest in the ICRP 110 female computational model, thereby leading to 144 source/target pair configurations. Relative differences between GATE and MCNPX obtained in specific configurations (self-irradiation or cross-irradiation) are presented. Relative differences in absorbed fractions, specific absorbed fractions or S-values are below 10%, and in most cases less than 5%. Dosimetric results generated with GATE for the 12 volumes of interest are available as supplemental data. GATE can be safely used for radiopharmaceutical dosimetry at the clinical scale. This makes GATE a viable option for Monte Carlo modelling of both imaging and absorbed dose in nuclear medicine.

  8. Linkage maps of the Atlantic salmon (Salmo salar) genome derived from RAD sequencing

    PubMed Central

    2014-01-01

    Background Genetic linkage maps are useful tools for mapping quantitative trait loci (QTL) influencing variation in traits of interest in a population. Genotyping-by-sequencing approaches such as Restriction-site Associated DNA sequencing (RAD-Seq) now enable the rapid discovery and genotyping of genome-wide SNP markers suitable for the development of dense SNP linkage maps, including in non-model organisms such as Atlantic salmon (Salmo salar). This paper describes the development and characterisation of a high density SNP linkage map based on SbfI RAD-Seq SNP markers from two Atlantic salmon reference families. Results Approximately 6,000 SNPs were assigned to 29 linkage groups, utilising markers from known genomic locations as anchors. Linkage maps were then constructed for the four mapping parents separately. Overall map lengths were comparable between male and female parents, but the distribution of the SNPs showed sex-specific patterns with a greater degree of clustering of sire-segregating SNPs to single chromosome regions. The maps were integrated with the Atlantic salmon draft reference genome contigs, allowing the unique assignment of ~4,000 contigs to a linkage group. 112 genome contigs mapped to two or more linkage groups, highlighting regions of putative homeology within the salmon genome. A comparative genomics analysis with the stickleback reference genome identified putative genes closely linked to approximately half of the ordered SNPs and demonstrated blocks of orthology between the Atlantic salmon and stickleback genomes. A subset of 47 RAD-Seq SNPs was successfully validated using a high-throughput genotyping assay, with a correspondence of 97% between the two assays. Conclusions This Atlantic salmon RAD-Seq linkage map is a resource for salmonid genomics research as genotyping-by-sequencing becomes increasingly common. This is aided by the integration of the SbfI RAD-Seq SNPs with existing reference maps and the draft reference genome, as well as the identification of putative genes proximal to the SNPs. Differences in the distribution of recombination events between the sexes are evident, and regions of homeology have been identified which are reflective of the recent salmonid whole genome duplication. PMID:24571138

  9. Combined Mass Spectrometry Imaging and Top-down Microproteomics Reveals Evidence of a Hidden Proteome in Ovarian Cancer.

    PubMed

    Delcourt, Vivian; Franck, Julien; Leblanc, Eric; Narducci, Fabrice; Robin, Yves-Marie; Gimeno, Jean-Pascal; Quanico, Jusal; Wisztorski, Maxence; Kobeissy, Firas; Jacques, Jean-François; Roucou, Xavier; Salzet, Michel; Fournier, Isabelle

    2017-07-01

    Recently, it was demonstrated that proteins can be translated from alternative open reading frames (altORFs), increasing the size of the actual proteome. Top-down mass spectrometry-based proteomics allows the identification of intact proteins containing post-translational modifications (PTMs) as well as truncated forms translated from reference ORFs or altORFs. Top-down tissue microproteomics was applied on benign, tumor and necrotic-fibrotic regions of serous ovarian cancer biopsies, identifying proteins exhibiting region-specific cellular localization and PTMs. The regions of interest (ROIs) were determined by MALDI mass spectrometry imaging and spatial segmentation. Analysis with a customized protein sequence database containing reference and alternative proteins (altprots) identified 15 altprots, including alternative G protein nucleolar 1 (AltGNL1) found in the tumor, and translated from an altORF nested within the GNL1 canonical coding sequence. Co-expression of GNL1 and altGNL1 was validated by transfection in HEK293 and HeLa cells with an expression plasmid containing a GNL1-FLAG (V5) construct. Western blot and immunofluorescence experiments confirmed constitutive co-expression of altGNL1-V5 with GNL1-FLAG. Taken together, our approach provides means to evaluate protein changes in the case of serous ovarian cancer, allowing the detection of potential markers that have never been considered. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  10. A novel computer system for the evaluation of nasolabial morphology, symmetry and aesthetics after cleft lip and palate treatment. Part 1: General concept and validation.

    PubMed

    Pietruski, Piotr; Majak, Marcin; Debski, Tomasz; Antoszewski, Boguslaw

    2017-04-01

    The need for a widely accepted method suitable for a multicentre quantitative evaluation of facial aesthetics after surgical treatment of cleft lip and palate (CLP) has been emphasized for years. The aim of this study was to validate a novel computer system 'Analyse It Doc' (A.I.D.) as a tool for objective anthropometric analysis of the nasolabial region. An indirect anthropometric analysis of facial photographs was conducted with the A.I.D. system and Adobe Photoshop/ImageJ software. Intra-rater and inter-rater reliability and the time required for the analysis were estimated separately for each method and compared. Analysis with A.I.D. system was nearly 10-fold faster than that with the reference evaluation method. The A.I.D. system provided strong inter-rater and intra-rater correlations for linear, angular and area measurements of the nasolabial region, as well as a significantly higher accuracy and reproducibility of angular measurements in submental view. No statistically significant inter-method differences were found for other measurements. The hereby presented novel computer system is suitable for simple, time-efficient and reliable multicenter photogrammetric analyses of the nasolabial region in CLP patients and healthy subjects. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  11. Resting-state functional brain connectivity: lessons from functional near-infrared spectroscopy.

    PubMed

    Niu, Haijing; He, Yong

    2014-04-01

    Resting-state functional near-infrared spectroscopy (R-fNIRS) is an active area of interest and is currently attracting considerable attention as a new imaging tool for the study of resting-state brain function. Using variations in hemodynamic concentration signals, R-fNIRS measures the brain's low-frequency spontaneous neural activity, combining the advantages of portability, low-cost, high temporal sampling rate and less physical burden to participants. The temporal synchronization of spontaneous neuronal activity in anatomically separated regions is referred to as resting-state functional connectivity (RSFC). In the past several years, an increasing body of R-fNIRS RSFC studies has led to many important findings about functional integration among local or whole-brain regions by measuring inter-regional temporal synchronization. Here, we summarize recent advances made in the R-fNIRS RSFC methodologies, from the detection of RSFC (e.g., seed-based correlation analysis, independent component analysis, whole-brain correlation analysis, and graph-theoretical topological analysis), to the assessment of RSFC performance (e.g., reliability, repeatability, and validity), to the application of RSFC in studying normal development and brain disorders. The literature reviewed here suggests that RSFC analyses based on R-fNIRS data are valid and reliable for the study of brain function in healthy and diseased populations, thus providing a promising imaging tool for cognitive science and clinics.
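
    Seed-based correlation analysis, the simplest of the RSFC detection methods listed above, reduces to correlating one channel's time course with all others and Fisher z-transforming the result. The sketch below illustrates this on synthetic HbO time courses; the channel count, data and induced correlation are assumptions for illustration only.

```python
import numpy as np

def seed_based_rsfc(signals, seed_index):
    """Seed-based RSFC: Pearson correlation of one channel's time course with
    every other channel, Fisher z-transformed for group statistics."""
    r = np.corrcoef(signals)[seed_index]        # channels x channels -> seed row
    r = np.clip(r, -0.999999, 0.999999)         # keep arctanh finite at the seed
    return np.arctanh(r)                        # Fisher z

# Hypothetical HbO time courses: 8 channels x 300 time points of filtered data.
rng = np.random.default_rng(1)
signals = rng.standard_normal((8, 300))
signals[3] += 0.8 * signals[0]                  # induce one correlated pair
print(np.round(seed_based_rsfc(signals, seed_index=0), 2))
```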

  12. Statistical considerations for harmonization of the global multicenter study on reference values.

    PubMed

    Ichihara, Kiyoshi

    2014-05-15

    The global multicenter study on reference values coordinated by the Committee on Reference Intervals and Decision Limits (C-RIDL) of the IFCC was launched in December 2011, targeting 45 commonly tested analytes with the following objectives: 1) to derive reference intervals (RIs) country by country using a common protocol, and 2) to explore regionality/ethnicity of reference values by aligning test results among the countries. To achieve these objectives, it is crucial to harmonize 1) the protocol for recruitment and sampling, 2) statistical procedures for deriving the RI, and 3) test results through measurement of a panel of sera in common. For harmonized recruitment, very lenient inclusion/exclusion criteria were adopted in view of differences in interpretation of what constitutes healthiness by different cultures and investigators. This policy may require secondary exclusion of individuals according to the standard of each country at the time of deriving RIs. An iterative optimization procedure, called the latent abnormal values exclusion (LAVE) method, can be applied to automate the process of refining the choice of reference individuals. For global comparison of reference values, test results must be harmonized, based on the among-country, pair-wise linear relationships of test values for the panel. Traceability of reference values can be ensured based on values assigned indirectly to the panel through collaborative measurement of certified reference materials. The validity of the adopted strategies is discussed in this article, based on interim results obtained to date from five countries. Special considerations are made for dissociation of RIs by parametric and nonparametric methods and between-country difference in the effect of body mass index on reference values. Copyright © 2014 Elsevier B.V. All rights reserved.
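
    The dissociation between parametric and nonparametric reference intervals mentioned above comes down to two different estimators of the central 95% of reference values. A minimal sketch with simulated analyte results is shown below; the LAVE exclusion step is not implemented, and the distribution parameters are assumptions.

```python
import numpy as np

def reference_interval(values, parametric=False):
    """2.5th-97.5th percentile reference interval; the parametric variant
    assumes (approximate) normality of the reference values."""
    x = np.asarray(values, dtype=float)
    if parametric:
        return x.mean() - 1.96 * x.std(ddof=1), x.mean() + 1.96 * x.std(ddof=1)
    return tuple(np.percentile(x, [2.5, 97.5]))

# Hypothetical analyte results from qualified reference individuals.
rng = np.random.default_rng(2)
ref_values = rng.normal(loc=5.2, scale=0.6, size=240)
print("nonparametric:", np.round(reference_interval(ref_values), 2))
print("parametric   :", np.round(reference_interval(ref_values, True), 2))
```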

  13. Selection and validation of reference genes for gene expression analysis in apomictic and sexual Cenchrus ciliaris

    PubMed Central

    2013-01-01

    Background Apomixis is a naturally occurring asexual mode of seed reproduction resulting in offspring genetically identical to the maternal plant. Identifying differential gene expression patterns between apomictic and sexual plants is valuable to help deconstruct the trait. Quantitative RT-PCR (qRT-PCR) is a popular method for analyzing gene expression. Normalizing gene expression data using proper reference genes which show stable expression under investigated conditions is critical in qRT-PCR analysis. We used qRT-PCR to validate expression and stability of six potential reference genes (EF1alpha, EIF4A, UBCE, GAPDH, ACT2 and TUBA) in vegetative and reproductive tissues of B-2S and B-12-9 accessions of C. ciliaris. Findings Among tissue types evaluated, EF1alpha showed the highest level of expression while TUBA showed the lowest. When all tissue types were evaluated and compared between genotypes, EIF4A was the most stable reference gene. Gene expression stability for specific ovary stages of B-2S and B-12-9 was also determined. Except for TUBA, all other tested reference genes could be used for any stage-specific ovary tissue normalization, irrespective of the mode of reproduction. Conclusion Our gene expression stability assay using six reference genes, in sexual and apomictic accessions of C. ciliaris, suggests that EIF4A is the most stable gene across all tissue types analyzed. All other tested reference genes, with the exception of TUBA, could be used for gene expression comparison studies between sexual and apomictic ovaries over multiple developmental stages. This reference gene validation data in C. ciliaris will serve as an important base for future apomixis-related transcriptome data validation. PMID:24083672

  14. Evaluation and Validation of Reference Genes for qRT-PCR Normalization in Frankliniella occidentalis (Thysanoptera:Thripidae)

    PubMed Central

    Zheng, Yu-Tao; Li, Hong-Bo; Lu, Ming-Xing; Du, Yu-Zhou

    2014-01-01

    Quantitative real time PCR (qRT-PCR) has emerged as a reliable and reproducible technique for studying gene expression analysis. For accurate results, the normalization of data with reference genes is particularly essential. Once the transcriptome sequencing of Frankliniella occidentalis was completed, numerous unigenes were identified and annotated. Unfortunately, there are no studies on the stability of reference genes used in F. occidentalis. In this work, seven candidate reference genes, including actin, 18S rRNA, H3, tubulin, GAPDH, EF-1 and RPL32, were evaluated for their suitability as normalization genes under different experimental conditions using the statistical software programs BestKeeper, geNorm, Normfinder and the comparative ΔCt method. Because the rankings of the reference genes provided by each of the four programs were different, we chose a user-friendly web-based comprehensive tool RefFinder to get the final ranking. The result demonstrated that EF-1 and RPL32 displayed the most stable expression in different developmental stages; RPL32 and GAPDH showed the most stable expression at high temperatures, while 18S and EF-1 exhibited the most stable expression at low temperatures. In this study, we validated the suitable reference genes in F. occidentalis for gene expression profiling under different experimental conditions. The choice of internal standard is very important in the normalization of the target gene expression levels, thus validating and selecting the best genes will help improve the quality of gene expression data of F. occidentalis. What is more, these validated reference genes could serve as the basis for the selection of candidate reference genes in other insects. PMID:25356721

  15. Evaluation and validation of reference genes for qRT-PCR normalization in Frankliniella occidentalis (Thysanoptera: Thripidae).

    PubMed

    Zheng, Yu-Tao; Li, Hong-Bo; Lu, Ming-Xing; Du, Yu-Zhou

    2014-01-01

    Quantitative real time PCR (qRT-PCR) has emerged as a reliable and reproducible technique for studying gene expression analysis. For accurate results, the normalization of data with reference genes is particularly essential. Once the transcriptome sequencing of Frankliniella occidentalis was completed, numerous unigenes were identified and annotated. Unfortunately, there are no studies on the stability of reference genes used in F. occidentalis. In this work, seven candidate reference genes, including actin, 18S rRNA, H3, tubulin, GAPDH, EF-1 and RPL32, were evaluated for their suitability as normalization genes under different experimental conditions using the statistical software programs BestKeeper, geNorm, Normfinder and the comparative ΔCt method. Because the rankings of the reference genes provided by each of the four programs were different, we chose a user-friendly web-based comprehensive tool RefFinder to get the final ranking. The result demonstrated that EF-1 and RPL32 displayed the most stable expression in different developmental stages; RPL32 and GAPDH showed the most stable expression at high temperatures, while 18S and EF-1 exhibited the most stable expression at low temperatures. In this study, we validated the suitable reference genes in F. occidentalis for gene expression profiling under different experimental conditions. The choice of internal standard is very important in the normalization of the target gene expression levels, thus validating and selecting the best genes will help improve the quality of gene expression data of F. occidentalis. What is more, these validated reference genes could serve as the basis for the selection of candidate reference genes in other insects.

  16. Predicting response before initiation of neoadjuvant chemotherapy in breast cancer using new methods for the analysis of dynamic contrast enhanced MRI (DCE MRI) data

    NASA Astrophysics Data System (ADS)

    DeGrandchamp, Joseph B.; Whisenant, Jennifer G.; Arlinghaus, Lori R.; Abramson, V. G.; Yankeelov, Thomas E.; Cárdenas-Rodríguez, Julio

    2016-03-01

    The pharmacokinetic parameters derived from dynamic contrast enhanced (DCE) MRI have shown promise as biomarkers for tumor response to therapy. However, standard methods of analyzing DCE MRI data (Tofts model) require high temporal resolution, high signal-to-noise ratio (SNR), and the Arterial Input Function (AIF). Such models produce reliable biomarkers of response only when a therapy has a large effect on the parameters. We recently reported a method that solves the limitations, the Linear Reference Region Model (LRRM). Similar to other reference region models, the LRRM needs no AIF. Additionally, the LRRM is more accurate and precise than standard methods at low SNR and slow temporal resolution, suggesting LRRM-derived biomarkers could be better predictors. Here, the LRRM, Non-linear Reference Region Model (NRRM), Linear Tofts model (LTM), and Non-linear Tofts Model (NLTM) were used to estimate the RKtrans between muscle and tumor (or the Ktrans for Tofts) and the tumor kep,TOI for 39 breast cancer patients who received neoadjuvant chemotherapy (NAC). These parameters and the receptor statuses of each patient were used to construct cross-validated predictive models to classify patients as complete pathological responders (pCR) or non-complete pathological responders (non-pCR) to NAC. Model performance was evaluated using area under the ROC curve (AUC). The AUC for receptor status alone was 0.62, while the best performance using predictors from the LRRM, NRRM, LTM, and NLTM were AUCs of 0.79, 0.55, 0.60, and 0.59 respectively. This suggests that the LRRM can be used to predict response to NAC in breast cancer.
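
    A hedged sketch of the evaluation pattern described above, cross-validated classification of pCR versus non-pCR scored by ROC AUC, is given below with synthetic predictors. Logistic regression is used here purely for illustration and is not necessarily the classifier used in the study; all data are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Hypothetical predictors per patient: receptor status (0/1), RKtrans, kep_TOI.
# Outcome: 1 = pCR, 0 = non-pCR (synthetic data, not the study's measurements).
n = 39
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.normal(1.0, 0.3, n),
    rng.normal(0.5, 0.15, n),
])
y = (X[:, 1] + rng.normal(0, 0.3, n) > 1.0).astype(int)

proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
                          method="predict_proba")[:, 1]
print(f"cross-validated AUC = {roc_auc_score(y, proba):.2f}")
```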

  17. Validity of self-reported mechanical demands for occupational epidemiologic research of musculoskeletal disorders

    PubMed Central

    Barrero, Lope H; Katz, Jeffrey N; Dennerlein, Jack T

    2012-01-01

    Objectives To describe the relation of the measured validity of self-reported mechanical demands (self-reports) with the quality of validity assessments and the variability of the assessed exposure in the study population. Methods We searched for original articles, published between 1990 and 2008, reporting the validity of self-reports in three major databases: EBSCOhost, Web of Science, and PubMed. Identified assessments were classified by methodological characteristics (eg, type of self-report and reference method) and exposure dimension was measured. We also classified assessments by the degree of comparability between the self-report and the employed reference method, and the variability of the assessed exposure in the study population. Finally, we examined the association of the published validity (r) with this degree of comparability, as well as with the variability of the exposure variable in the study population. Results Of the 490 assessments identified, 75% used observation-based reference measures and 55% tested self-reports of posture duration and movement frequency. Frequently, validity studies did not report demographic information (eg, education, age, and gender distribution). Among assessments reporting correlations as a measure of validity, studies with a better match between the self-report and the reference method, and studies conducted in more heterogeneous populations tended to report higher correlations [odds ratio (OR) 2.03, 95% confidence interval (95% CI) 0.89–4.65 and OR 1.60, 95% CI 0.96–2.61, respectively]. Conclusions The reported data support the hypothesis that validity depends on study-specific factors often not examined. Experimentally manipulating the testing setting could lead to a better understanding of the capabilities and limitations of self-reported information. PMID:19562235
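
    The odds ratios quoted above summarise 2x2 tables of assessment characteristics against reported validity. A short sketch computing an odds ratio with a Wald 95% confidence interval from hypothetical counts follows.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a, b = higher / lower validity among well-matched assessments,
    c, d = higher / lower validity among poorly matched assessments."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, not the review's data.
print([round(v, 2) for v in odds_ratio_ci(30, 18, 22, 26)])
```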

  18. Development and validation of the Axiom(®) Apple480K SNP genotyping array.

    PubMed

    Bianco, Luca; Cestaro, Alessandro; Linsmith, Gareth; Muranty, Hélène; Denancé, Caroline; Théron, Anthony; Poncet, Charles; Micheletti, Diego; Kerschbamer, Emanuela; Di Pierro, Erica A; Larger, Simone; Pindo, Massimo; Van de Weg, Eric; Davassi, Alessandro; Laurens, François; Velasco, Riccardo; Durel, Charles-Eric; Troggio, Michela

    2016-04-01

    Cultivated apple (Malus × domestica Borkh.) is one of the most important fruit crops in temperate regions, and has great economic and cultural value. The apple genome is highly heterozygous and has undergone a recent duplication which, combined with a rapid linkage disequilibrium decay, makes it difficult to perform genome-wide association (GWA) studies. Single nucleotide polymorphism arrays offer highly multiplexed assays at a relatively low cost per data point and can be a valid tool for the identification of the markers associated with traits of interest. Here, we describe the development and validation of a 487K SNP Affymetrix Axiom(®) genotyping array for apple and discuss its potential applications. The array has been built from the high-depth resequencing of 63 different cultivars covering most of the genetic diversity in cultivated apple. The SNPs were chosen by applying a focal points approach to enrich genic regions, but also to reach a uniform coverage of non-genic regions. A total of 1324 apple accessions, including the 92 progenies of two mapping populations, have been genotyped with the Axiom(®) Apple480K to assess the effectiveness of the array. A large majority of SNPs (359 994 or 74%) fell in the stringent class of poly high resolution polymorphisms. We also devised a filtering procedure to identify a subset of 275K very robust markers that can be safely used for germplasm surveys in apple. The Axiom(®) Apple480K has now been commercially released both for public and proprietary use and will likely be a reference tool for GWA studies in apple. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.

  19. Characterizing diurnal and seasonal cycles in monsoon systems from TRMM and CEOP observations

    NASA Technical Reports Server (NTRS)

    Lau, William K. M.

    2006-01-01

    The CEOP Inter-Monsoon Study (CIMS) is one of the two main science drivers of CEOP that aims to (a) provide better understanding of fundamental physical processes in monsoon regions around the world, and (b) demonstrate the synergy and utility of CEOP data in providing a pathway for model physics evaluation and improvement. As the data collection phase for EOP-3 and EOP-4 is being completed, two full annual cycles (2003-2004) of research-quality data sets from satellites, reference sites, and model output location time series (MOLTS) have been processed and made available for data analyses and model validation studies. This article presents preliminary results of a CIMS study aimed at the characterization and intercomparison of all major monsoon systems. The CEOP reference site data proved their value in such exercises by being a powerful tool to cross-validate the TRMM data, and to intercompare with multi-model results in ongoing work. We use 6 years (1998-2003) of pentad CEOP/TRMM data with a 2deg x 2.5deg latitude-longitude grid, over the domains of interest, to define the monsoon climatological diurnal and annual cycles for the East Asian Monsoon (EAM), the South Asian Monsoon (SAM), the West Africa Monsoon (WAM), the North America/Mexican Monsoon (NAM), the South American Summer Monsoon (SASM) and the Australian Monsoon (AUM). As noted, the TRMM data used in the study were cross-validated using CEOP reference site data, where applicable. Results show that the observed diurnal cycle of rain peaked around late afternoon over monsoon land, and early morning over the oceans. The diurnal cycles in models tend to peak 2-3 hours earlier than observed. The seasonal cycles of the EAM and SAM show the strongest continentality, i.e., strong control by continental processes away from the ITCZ. The WAM and the AUM show the least continentality, i.e., strong control by the oceanic ITCZ.

  20. Characterizing Diurnal and Seasonal Cycles in Monsoon Systems from TRMM and CEOP Observations

    NASA Technical Reports Server (NTRS)

    Lau, William K. M.

    2007-01-01

    The CEOP Inter-Monsoon Study (CIMS) is one of the two main science drivers of CEOP that aims to (a) provide better understanding of fundamental physical processes in monsoon regions around the world, and (b) demonstrate the synergy and utility of CEOP data in providing a pathway for model physics evaluation and improvement. As the data collection phase for EOP-3 and EOP-4 is being completed, two full annual cycles (2003-2004) of research-quality data sets from satellites, reference sites, and model output location time series (MOLTS) have been processed and made available for data analyses and model validation studies. This article presents preliminary results of a CIMS study aimed at the characterization and intercomparison of all major monsoon systems. The CEOP reference site data proved their value in such exercises by being a powerful tool to cross-validate the TRMM data, and to intercompare with multi-model results in ongoing work. We use 6 years (1998-2003) of pentad CEOP/TRMM data on a 2° x 2.5° latitude-longitude grid over the domains of interest to define the monsoon climatological diurnal and annual cycles for the East Asian Monsoon (EAM), the South Asian Monsoon (SAM), the West Africa Monsoon (WAM), the North America/Mexican Monsoon (NAM), the South American Summer Monsoon (SASM) and the Australian Monsoon (AUM). As noted, the TRMM data used in the study were cross-validated using CEOP reference site data, where applicable. Results show that the observed diurnal cycle of rain peaked around late afternoon over monsoon land and early morning over the oceans. The diurnal cycles in models tend to peak 2-3 hours earlier than observed. The seasonal cycles of the EAM and SAM show the strongest continentality, i.e., strong control by continental processes away from the ITCZ. The WAM and the AUM show the least continentality, i.e., strong control by the oceanic ITCZ.
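
    The diurnal compositing described in this record amounts to binning rainfall by local solar time and averaging. The sketch below illustrates that step for a hypothetical 3-hourly rainfall array; the array shapes, the synthetic data and the 3-hourly sampling are assumptions for illustration only, not the study's actual processing chain.

    ```python
    import numpy as np

    def mean_diurnal_cycle(rain, lons, step_hours=3):
        """Composite a (time, lat, lon) rainfall array, assumed sampled every
        `step_hours` in UTC starting at 00 UTC, into a mean diurnal cycle
        expressed in local solar time (illustrative sketch only)."""
        ntime, nlat, nlon = rain.shape
        nbins = 24 // step_hours
        total = np.zeros((nbins, nlat, nlon))
        count = np.zeros((nbins, nlon))
        for t in range(ntime):
            utc = (t % nbins) * step_hours
            lst = (utc + lons / 15.0) % 24.0          # local solar time per longitude
            b = (np.rint(lst / step_hours).astype(int)) % nbins
            for j in range(nlon):
                total[b[j], :, j] += rain[t, :, j]
                count[b[j], j] += 1
        return total / count[:, None, :]

    # Hypothetical 3-hourly data on a 2 x 2.5 degree tropical grid (60 days)
    lats = np.arange(-30, 30.1, 2.0)
    lons = np.arange(0, 360, 2.5)
    rain = np.random.default_rng(0).gamma(1.0, 1.0, size=(8 * 60, lats.size, lons.size))
    cycle = mean_diurnal_cycle(rain, lons)            # shape (8, nlat, nlon)
    print(cycle.shape)
    ```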

  1. The Amsterdam wrist rules: the multicenter prospective derivation and external validation of a clinical decision rule for the use of radiography in acute wrist trauma.

    PubMed

    Walenkamp, Monique M J; Bentohami, Abdelali; Slaar, Annelie; Beerekamp, M Suzan H; Maas, Mario; Jager, L Cara; Sosef, Nico L; van Velde, Romuald; Ultee, Jan M; Steyerberg, Ewout W; Goslings, J Carel; Schep, Niels W L

    2015-12-18

    Although only 39 % of patients with wrist trauma have sustained a fracture, the majority of patients are routinely referred for radiography. The purpose of this study was to derive and externally validate a clinical decision rule that selects patients with acute wrist trauma in the Emergency Department (ED) for radiography. This multicenter prospective study consisted of three components: (1) derivation of a clinical prediction model for detecting wrist fractures in patients following wrist trauma; (2) external validation of this model; and (3) design of a clinical decision rule. The study was conducted in the EDs of five Dutch hospitals: one academic hospital (derivation cohort) and four regional hospitals (external validation cohort). We included all adult patients with acute wrist trauma. The main outcome was fracture of the wrist (distal radius, distal ulna or carpal bones) diagnosed on conventional X-rays. A total of 882 patients were analyzed; 487 in the derivation cohort and 395 in the validation cohort. We derived a clinical prediction model with eight variables: age, sex, swelling of the wrist, swelling of the anatomical snuffbox, visible deformation, distal radius tender to palpation, pain on radial deviation, and painful axial compression of the thumb. The Area Under the Curve at external validation of this model was 0.81 (95 % CI: 0.77-0.85). The sensitivity and specificity of the Amsterdam Wrist Rules (AWR) in the external validation cohort were 98 % (95 % CI: 95-99 %) and 21 % (95 % CI: 15-28 %). The negative predictive value was 90 % (95 % CI: 81-99 %). The Amsterdam Wrist Rules is a clinical prediction rule with a high sensitivity and negative predictive value for fractures of the wrist. Although external validation showed low specificity and 100 % sensitivity could not be achieved, the Amsterdam Wrist Rules can provide physicians in the Emergency Department with a useful screening tool to select patients with acute wrist trauma for radiography. The upcoming implementation study will further reveal the impact of the Amsterdam Wrist Rules on the anticipated reduction of X-rays requested, missed fractures, Emergency Department waiting times and health care costs. This study was registered in the Dutch Trial Registry, reference number NTR2544, on October 1st, 2010.
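
    The reported sensitivity, specificity and negative predictive value follow directly from a 2x2 table of rule outcome versus fracture status. The sketch below computes these metrics with simple normal-approximation confidence intervals; the counts used in the example are hypothetical and not the study's data.

    ```python
    import math

    def diagnostic_metrics(tp, fp, fn, tn, z=1.96):
        """Sensitivity, specificity and negative predictive value with
        approximate (normal) 95% confidence intervals from a 2x2 table."""
        def prop_ci(k, n):
            p = k / n
            half = z * math.sqrt(p * (1 - p) / n)
            return round(p, 3), round(max(0.0, p - half), 3), round(min(1.0, p + half), 3)
        return {
            "sensitivity": prop_ci(tp, tp + fn),
            "specificity": prop_ci(tn, tn + fp),
            "npv": prop_ci(tn, tn + fn),
        }

    # Hypothetical validation-cohort counts (not the study's actual table)
    print(diagnostic_metrics(tp=150, fp=190, fn=3, tn=52))
    ```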

  2. Divergence of actual and reference evapotranspiration observations for irrigated sugarcane with windy tropical conditions

    NASA Astrophysics Data System (ADS)

    Anderson, R. G.; Wang, D.; Tirado-Corbalá, R.; Zhang, H.; Ayars, J. E.

    2015-01-01

    Standardized reference evapotranspiration (ET) and ecosystem-specific vegetation coefficients are frequently used to estimate actual ET. However, equations for calculating reference ET have not been well validated in tropical environments. We measured ET (ETEC) using eddy covariance (EC) towers at two irrigated sugarcane fields on the leeward (dry) side of Maui, Hawaii, USA in contrasting climates. We calculated reference ET at the fields using the short (ET0) and tall (ETr) vegetation versions of the American Society for Civil Engineers (ASCE) equation. The ASCE equations were compared to the Priestley-Taylor ET (ETPT) and ETEC. Reference ET from the ASCE approaches exceeded ETEC during the mid-period (when vegetation coefficients suggest ETEC should exceed reference ET). At the windier tower site, cumulative ETr exceeded ETEC by 854 mm over the course of the mid-period (267 days). At the less windy site, mid-period ETr still exceeded ETEC, but the difference was smaller (443 mm). At both sites, ETPT approximated mid-period ETEC more closely than the ASCE equations ((ETPT-ETEC) < 170 mm). Analysis of applied water and precipitation, soil moisture, leaf stomatal resistance, and canopy cover suggest that the lower observed ETEC was not the result of water stress or reduced vegetation cover. Use of a custom-calibrated bulk canopy resistance improved the reference ET estimate and reduced seasonal ET discrepancy relative to ETPT and ETEC in the less windy field and had mixed performance in the windier field. These divergences suggest that modifications to reference ET equations may be warranted in some tropical regions.
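
    The Priestley-Taylor estimate used as a comparison above has a simple closed form. The sketch below implements the standard formulation with alpha = 1.26 and FAO-style constants under assumed daily units; it is a generic illustration, not the authors' processing code.

    ```python
    import math

    def priestley_taylor_et(rn, g, t_air, pressure_kpa=101.3, alpha=1.26):
        """Priestley-Taylor evapotranspiration (mm/day).
        rn, g : net radiation and soil heat flux (MJ m-2 day-1)
        t_air : mean air temperature (deg C)
        Generic sketch with standard constants."""
        # slope of the saturation vapour pressure curve (kPa / deg C)
        es = 0.6108 * math.exp(17.27 * t_air / (t_air + 237.3))
        delta = 4098.0 * es / (t_air + 237.3) ** 2
        gamma = 0.000665 * pressure_kpa      # psychrometric constant (kPa / deg C)
        lam = 2.45                           # latent heat of vaporization (MJ kg-1)
        return alpha * delta / (delta + gamma) * (rn - g) / lam

    # Hypothetical daily values for a leeward tropical site
    print(round(priestley_taylor_et(rn=18.0, g=1.0, t_air=24.0), 2), "mm/day")
    ```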

  3. Divergence of reference evapotranspiration observations with windy tropical conditions

    NASA Astrophysics Data System (ADS)

    Anderson, R. G.; Wang, D.; Tirado-Corbalá, R.; Zhang, H.; Ayars, J. E.

    2014-06-01

    Standardized reference evapotranspiration (ET) and ecosystem-specific vegetation coefficients are frequently used to estimate actual ET. However, equations for calculating reference ET have not been well validated in tropical environments. We measured ET (ETEC) using Eddy Covariance (EC) towers at two irrigated sugarcane fields on the leeward (dry) side of Maui, Hawaii, USA in contrasting climates. We calculated reference ET at the fields using the short (ET0) and tall (ETr) vegetation versions of the American Society for Civil Engineers (ASCE) equation. The ASCE equations were compared to the Priestley-Taylor ET (ETPT) and ETEC. Reference ET from the ASCE approaches exceeded ETEC during the mid-period (when vegetation coefficients suggest ETEC should exceed reference ET). At the windier tower site, cumulative ETr exceeded ETEC by 854 mm over the course of the mid-period (267 days). At the less windy site, mid-period ETr still exceeded ETEC, but the difference was smaller (443 mm). At both sites, ETPT approximated mid-period ETEC more closely than the ASCE equations ((ETPT-ETEC) < 170 mm). Analysis of applied water and precipitation, soil moisture, leaf stomatal resistance, and canopy cover suggest that the lower observed ETEC was not the result of water stress or reduced vegetation cover. Use of a custom calibrated bulk canopy resistance improved the reference ET estimate and reduced seasonal ET discrepancy relative to ETPT and ETEC for the less windy field and had mixed performance at the windier field. These divergences suggest that modifications to reference ET equations may be warranted in some tropical regions.

  4. CLSI-based transference of the CALIPER database of pediatric reference intervals from Abbott to Beckman, Ortho, Roche and Siemens Clinical Chemistry Assays: direct validation using reference samples from the CALIPER cohort.

    PubMed

    Estey, Mathew P; Cohen, Ashley H; Colantonio, David A; Chan, Man Khun; Marvasti, Tina Binesh; Randell, Edward; Delvin, Edgard; Cousineau, Jocelyne; Grey, Vijaylaxmi; Greenway, Donald; Meng, Qing H; Jung, Benjamin; Bhuiyan, Jalaluddin; Seccombe, David; Adeli, Khosrow

    2013-09-01

    The CALIPER program recently established a comprehensive database of age- and sex-stratified pediatric reference intervals for 40 biochemical markers. However, this database was only directly applicable for Abbott ARCHITECT assays. We therefore sought to expand the scope of this database to biochemical assays from other major manufacturers, allowing for a much wider application of the CALIPER database. Based on CLSI C28-A3 and EP9-A2 guidelines, CALIPER reference intervals were transferred (using specific statistical criteria) to assays performed on four other commonly used clinical chemistry platforms including Beckman Coulter DxC800, Ortho Vitros 5600, Roche Cobas 6000, and Siemens Vista 1500. The resulting reference intervals were subjected to a thorough validation using 100 reference specimens (healthy community children and adolescents) from the CALIPER bio-bank, and all testing centers participated in an external quality assessment (EQA) evaluation. In general, the transferred pediatric reference intervals were similar to those established in our previous study. However, assay-specific differences in reference limits were observed for many analytes, and in some instances were considerable. The results of the EQA evaluation generally mimicked the similarities and differences in reference limits among the five manufacturers' assays. In addition, the majority of transferred reference intervals were validated through the analysis of CALIPER reference samples. This study greatly extends the utility of the CALIPER reference interval database which is now directly applicable for assays performed on five major analytical platforms in clinical use, and should permit the worldwide application of CALIPER pediatric reference intervals. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
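
    The direct validation step with reference samples can be expressed as a simple acceptance check. The sketch below follows one common reading of the CLSI C28-A3 small-sample rule (accept the transferred interval if no more than roughly 10% of the reference samples fall outside it); the threshold, analyte values and interval are illustrative assumptions, not the study's criteria.

    ```python
    def validate_transference(values, lower, upper, max_outside_fraction=0.10):
        """Accept a transferred reference interval if the fraction of reference
        samples falling outside it does not exceed `max_outside_fraction`.
        Generic sketch of a CLSI-style small-sample check."""
        outside = sum(1 for v in values if v < lower or v > upper)
        return outside / len(values) <= max_outside_fraction, outside

    # Hypothetical results for 20 reference samples against a transferred interval
    ok, n_out = validate_transference(
        [3.4, 3.8, 4.1, 4.4, 3.9, 4.0, 4.2, 3.7, 3.6, 4.3,
         4.5, 3.5, 4.0, 4.1, 3.9, 3.8, 4.2, 4.6, 3.7, 4.0],
        lower=3.3, upper=4.7)
    print(ok, n_out)
    ```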

  5. Identification of appropriate reference genes for human mesenchymal stem cell analysis by quantitative real-time PCR.

    PubMed

    Li, Xiuying; Yang, Qiwei; Bai, Jinping; Xuan, Yali; Wang, Yimin

    2015-01-01

    Normalization to a reference gene is the method of choice for quantitative reverse transcription-PCR (RT-qPCR) analysis. The stability of reference genes is critical for accurate experimental results and conclusions. We have evaluated the expression stability of eight commonly used reference genes in four different human mesenchymal stem cell (MSC) types. Using the geNorm, NormFinder and BestKeeper algorithms, we show that beta-2-microglobulin and peptidyl-prolyl isomerase A were the optimal reference genes for normalizing RT-qPCR data obtained from MSC, whereas the TATA box binding protein was not suitable due to its extensive variability in expression. Our findings emphasize the significance of validating reference genes for qPCR analyses. We offer a short list of reference genes to use for normalization and recommend some commercially available software programs as a rapid approach to validate reference genes. We also demonstrate that two frequently used reference genes, β-actin and glyceraldehyde-3-phosphate dehydrogenase, are not suitable in many cases.
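
    The geNorm stability measure referenced here is the mean pairwise variation of a gene against all other candidates. The sketch below computes that M value from a small matrix of relative expression quantities; the gene names and expression values are hypothetical examples.

    ```python
    import numpy as np

    def genorm_m_values(expr, gene_names):
        """geNorm-style stability measure: for each gene, the mean standard
        deviation of log2 expression ratios against every other candidate gene
        (lower M = more stable). `expr` is a (samples x genes) array of
        relative quantities. Sketch of the published idea, not the geNorm tool."""
        log_expr = np.log2(expr)
        n_genes = log_expr.shape[1]
        m_values = {}
        for j in range(n_genes):
            sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                   for k in range(n_genes) if k != j]
            m_values[gene_names[j]] = round(float(np.mean(sds)), 3)
        return m_values

    # Hypothetical relative quantities for 3 candidate genes across 6 samples
    expr = np.array([[1.0, 2.1, 0.9], [1.1, 2.0, 1.5], [0.9, 1.9, 0.7],
                     [1.2, 2.3, 1.8], [1.0, 2.2, 0.6], [1.1, 2.1, 1.4]])
    print(genorm_m_values(expr, ["B2M", "PPIA", "TBP"]))
    ```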

  6. Collocation mismatch uncertainties in satellite aerosol retrieval validation

    NASA Astrophysics Data System (ADS)

    Virtanen, Timo H.; Kolmonen, Pekka; Sogacheva, Larisa; Rodríguez, Edith; Saponaro, Giulia; de Leeuw, Gerrit

    2018-02-01

    Satellite-based aerosol products are routinely validated against ground-based reference data, usually obtained from sun photometer networks such as AERONET (AEROsol RObotic NETwork). In a typical validation exercise a spatial sample of the instantaneous satellite data is compared against a temporal sample of the point-like ground-based data. The observations do not correspond to exactly the same column of the atmosphere at the same time, and the representativeness of the reference data depends on the spatiotemporal variability of the aerosol properties in the samples. The associated uncertainty is known as the collocation mismatch uncertainty (CMU). The validation results depend on the sampling parameters. While small samples involve less variability, they are more sensitive to the inevitable noise in the measurement data. In this paper we study systematically the effect of the sampling parameters in the validation of AATSR (Advanced Along-Track Scanning Radiometer) aerosol optical depth (AOD) product against AERONET data and the associated collocation mismatch uncertainty. To this end, we study the spatial AOD variability in the satellite data, compare it against the corresponding values obtained from densely located AERONET sites, and assess the possible reasons for observed differences. We find that the spatial AOD variability in the satellite data is approximately 2 times larger than in the ground-based data, and the spatial variability correlates only weakly with that of AERONET for short distances. We interpreted that only half of the variability in the satellite data is due to the natural variability in the AOD, and the rest is noise due to retrieval errors. However, for larger distances (˜ 0.5°) the correlation is improved as the noise is averaged out, and the day-to-day changes in regional AOD variability are well captured. Furthermore, we assess the usefulness of the spatial variability of the satellite AOD data as an estimate of CMU by comparing the retrieval errors to the total uncertainty estimates including the CMU in the validation. We find that accounting for CMU increases the fraction of consistent observations.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marques da Silva, A; Narciso, L

    Purpose: Commercial workstations usually have their own software to calculate dynamic renal functions. However, they usually offer low flexibility and rely on subjective delimitation of kidney and background areas. The aim of this paper is to present a public domain software tool, called RenalQuant, capable of semi-automatically drawing regions of interest on dynamic renal scintigraphies, extracting data and generating renal function quantification parameters. Methods: The software was developed in Java and written as an ImageJ-based plugin. The preprocessing and segmentation steps include the user's selection of one time frame with higher activity in the kidney regions compared with background and low activity in the liver. Next, the chosen time frame is smoothed using a Gaussian low-pass spatial filter (σ = 3) for noise reduction and better delimitation of the kidneys. The maximum entropy thresholding method is used for segmentation. A background area is automatically placed below each kidney, and the user confirms whether these regions are correctly segmented and positioned. Quantitative data are extracted, and each renogram and relative renal function (RRF) value is calculated and displayed. Results: The RenalQuant plugin was validated using retrospective 99mTc-DTPA exams from 20 patients and compared with results produced by commercial workstation software, referred to as the reference. The renogram intraclass correlation coefficients (ICC) were calculated, and false-negative and false-positive RRF values were analyzed. The results showed that ICC values between the RenalQuant plugin and the reference software for both kidneys' renograms were higher than 0.75, showing excellent reliability. Conclusion: Our results indicated the RenalQuant plugin can be reliably used to generate renograms, using DICOM dynamic renal scintigraphy exams as input. It is user-friendly and requires minimal user interaction. Further studies have to investigate how to increase RRF accuracy and explore how to address limitations in the segmentation step, mainly when the background region has higher activity compared to the kidneys. Financial support by CAPES.
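
    The relative renal function value mentioned above is conventionally derived from background-corrected counts in the two kidney ROIs. The sketch below shows that standard calculation; the count values are hypothetical and RenalQuant's exact implementation may differ.

    ```python
    def relative_renal_function(left_counts, right_counts, bg_left, bg_right):
        """Background-corrected relative renal function (RRF, %) from summed
        ROI counts over the uptake phase. Generic sketch of the standard formula."""
        left = max(left_counts - bg_left, 0.0)
        right = max(right_counts - bg_right, 0.0)
        total = left + right
        return 100.0 * left / total, 100.0 * right / total

    # Hypothetical ROI counts (left kidney, right kidney, and their backgrounds)
    print(relative_renal_function(52000, 48000, 4000, 3500))
    ```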

  8. Development of “OQALE” Based Reference Module for School Geometry Subject and Analysis of Mathematical Creative Thinking Skills

    NASA Astrophysics Data System (ADS)

    Wulandari, N. A. D.; Sukestiyarno, Y. L.

    2017-04-01

    This research aims to develop an OQALE-based reference module for the school geometry subject that meets the criteria of validity and practicality. The OQALE approach is learning by O = observation, Q = question, A = analyze, L = logic, E = express. The geometry topics presented in the module are the triangle, the Pythagorean theorem, and the rectangle. Mathematical creative thinking skills are shown through four aspects: fluency, flexibility, originality and elaboration. The research procedure for developing the reference module uses the investigation and development strategy described by [2], limited to the sixth stage, leading field testing. The focus of this research is to develop a reference module that is valid, practical and able to increase the mathematical creative thinking skills of students. The testing is limited to three teachers, nine students and two mathematics readers selected using a purposive sampling technique. Data on validity, practicality, and improvement of creative thinking skills were collected through questionnaires, observations, and interviews and analysed with a validity test, practicality test, gain test and qualitative description. The results obtained were: (1) the validity of the module = 4.52, where 4.20 ≤ Vm < 5.00 falls in the category of very valid; (2) the teachers' questionnaire responses = 4.53, where 4.20 ≤ Rg < 5.00 falls in the category of very good; (3) the students' survey responses = 3.13, where 2.80 ≤ Rpd < 3.40 falls in the category of good, with an average percentage of 78%; and (4) the gain test showed that the nine students' mathematical creative thinking skills increased, falling in the high and medium categories. The conclusion of this research is that the generated OQALE-based reference module for the school geometry subject is valid and practical.

  9. Comparison of CORA and EN4 in-situ datasets validation methods, toward a better quality merged dataset.

    NASA Astrophysics Data System (ADS)

    Szekely, Tanguy; Killick, Rachel; Gourrion, Jerome; Reverdin, Gilles

    2017-04-01

    CORA and EN4 are both global delayed-time-mode validated in-situ ocean temperature and salinity datasets distributed by the Met Office (http://www.metoffice.gov.uk/) and Copernicus (www.marine.copernicus.eu). A large part of the profiles distributed by CORA and EN4 in recent years are Argo profiles from the ARGO DAC, but profiles are also extracted from the World Ocean Database and TESAC profiles from GTSPP. In the case of CORA, data coming from the EUROGOOS Regional Operational Observing Systems (ROOS) operated by European institutes not managed by National Data Centres, as well as other datasets of profiles provided by scientific sources, can also be found (sea mammal profiles from MEOP, XBT datasets from cruises, ...). (EN4 also takes data from the ASBO dataset to supplement observations in the Arctic.) The first advantage of this new merged product is to enhance the space and time coverage at global and European scales for the period from 1950 until a year before the current year. This product is updated once a year, and T&S gridded fields are also generated for the period from 1990 to year n-1. The enhancement compared to the previous CORA product will be presented. Despite the fact that the profiles distributed by both datasets are mostly the same, the quality control procedures developed by the Met Office and Copernicus teams differ, sometimes leading to different quality control flags for the same profile. In 2016 a new study started that aims to compare both validation procedures in order to move towards a Copernicus Marine Service dataset with the best features of CORA and EN4 validation. A reference dataset composed of the full set of in-situ temperature and salinity measurements collected by Coriolis during 2015 is used. These measurements have been made with a wide range of instruments (XBTs, CTDs, Argo floats, instrumented sea mammals, ...), covering the global ocean. The reference dataset has been validated simultaneously by both teams. An exhaustive comparison of the validation test results is now performed to find the best features of both datasets. The study shows the differences between the EN4 and CORA validation results. It highlights the complementarity between the EN4 and CORA higher-order tests. The design of the CORA and EN4 validation charts is discussed to understand how a different approach to the dataset scope can lead to differences in data validation. The new validation chart of the Copernicus Marine Service dataset is presented.

  10. Obstetric care providers are able to assess psychosocial risks, identify and refer high-risk pregnant women: validation of a short assessment tool - the KINDEX Greek version.

    PubMed

    Spyridou, Andria; Schauer, Maggie; Ruf-Leuschner, Martina

    2015-02-21

    Prenatal assessment of psychosocial risk factors, and the corresponding prevention and intervention, is scarce and, in most cases, nonexistent in obstetrical care. In this study we aimed to evaluate whether the KINDEX, a short instrument developed in Germany, is a useful tool in the hands of non-trained medical staff for identifying and referring women at psychosocial risk to adequate mental health and social services. We also examined the criterion-related concurrent validity of the tool through a validation interview carried out by an expert clinical psychologist. Our final objective was to achieve the cultural adaptation of the KINDEX Greek Version and to offer a valid tool for psychosocial risk assessment to obstetric care providers. Two obstetricians and five midwives carried out 93 KINDEX interviews (duration 20 minutes) with pregnant women to assess psychosocial risk factors present during pregnancy. Afterwards they referred women whom they identified as having two or more psychosocial risk factors to the mental health attention unit of the hospital. During the validation procedure, an expert clinical psychologist carried out diagnostic interviews with a randomized subsample of 50 pregnant women based on established diagnostic instruments for stress and psychopathology, such as the PSS-14, ESI, PDS and HSCL-25. Significant correlations between the results obtained through the KINDEX assessment and the risk areas of stress, psychopathology and trauma load assessed in the validation interview demonstrate the criterion-related concurrent validity of the KINDEX. The referral accuracy of the medical staff is confirmed through comparisons between pregnant women who have and have not been referred to the mental health attention unit. Prenatal screenings for psychosocial risks like the KINDEX are feasible in public health settings in Greece. In addition, validity was confirmed by high correlations between the KINDEX results and the results of the validation interviews. The KINDEX Greek version can be considered a valid tool, which can be used by non-trained medical staff providing obstetrical care to identify high-risk women and refer them to adequate mental health and social services. These kinds of assessments are indispensable for the promotion of a healthy family environment and child development.

  11. Concurrent agreement between an anthropometric model to predict thigh volume and dual-energy X-Ray absorptiometry assessment in female volleyball players aged 14-18 years.

    PubMed

    Tavares, Óscar M; Valente-Dos-Santos, João; Duarte, João P; Póvoas, Susana C; Gobbo, Luís A; Fernandes, Rômulo A; Marinho, Daniel A; Casanova, José M; Sherar, Lauren B; Courteix, Daniel; Coelho-E-Silva, Manuel J

    2016-11-24

    A variety of performance outputs are strongly determined by lower limb volume and composition in children and adolescents. The current study aimed to examine the validity of thigh volume (TV) estimated by anthropometry in late adolescent female volleyball players. Dual-energy X-ray absorptiometry (DXA) measures were used as the reference method. Total and regional body composition was assessed with a Lunar DPX NT/Pro/MD+/Duo/Bravo scanner in a cross-sectional sample of 42 Portuguese female volleyball players aged 14-18 years (165.2 ± 0.9 cm; 61.1 ± 1.4 kg). TV was estimated with the reference method (TV-DXA) and with the anthropometric method (TV-ANTH). Agreement between procedures was assessed with Deming regression. The analysis also considered a calibration of the anthropometric approach. The equation that best predicted TV-DXA was: -0.899 + 0.876 × log10(body mass) + 0.113 × log10(TV-ANTH). This new model (NM) was validated using the predicted residual sum of squares (PRESS) method (R²PRESS = 0.838). Correlation between the reference method and the NM was 0.934 (95% CI: 0.880-0.964, Sy·x = 0.325 L). A new and accurate anthropometric method to estimate TV in adolescent female volleyball players was obtained from the equation of Jones and Pearson along with adjustments for body mass.

  12. Measurement of Longitudinal β-Amyloid Change with 18F-Florbetapir PET and Standardized Uptake Value Ratios

    PubMed Central

    Landau, Susan M.; Fero, Allison; Baker, Suzanne L.; Koeppe, Robert; Mintun, Mark; Chen, Kewei; Reiman, Eric M.; Jagust, William J.

    2017-01-01

    The accurate measurement of β-amyloid (Aβ) change using amyloid PET imaging is important for Alzheimer disease research and clinical trials but poses several unique challenges. In particular, reference region measurement instability may lead to spurious changes in cortical regions of interest. To optimize our ability to measure 18F-florbetapir longitudinal change, we evaluated several candidate regions of interest and their influence on cortical florbetapir change over a 2-y period in participants from the Alzheimer Disease Neuroimaging Initiative (ADNI). Methods: We examined the agreement in cortical florbetapir change detected using 6 candidate reference regions (cerebellar gray matter, whole cerebellum, brain stem/pons, eroded subcortical white matter [WM], and 2 additional combinations of these regions) in 520 ADNI subjects. We used concurrent cerebrospinal fluid Aβ1–42 measurements to identify subgroups of ADNI subjects expected to remain stable over follow-up (stable Aβ group; n = 14) and subjects expected to increase (increasing Aβ group; n = 91). We then evaluated reference regions according to whether cortical change was minimal in the stable Aβ group and cortical retention increased in the increasing Aβ group. Results: There was poor agreement across reference regions in the amount of cortical change observed across all 520 ADNI subjects. Within the stable Aβ group, however, cortical florbetapir change was 1%–2% across all reference regions, indicating high consistency. In the increasing Aβ group, cortical increases were significant with all reference regions. Reference regions containing WM (as opposed to cerebellum or pons) enabled detection of cortical change that was more physiologically plausible and more likely to increase over time. Conclusion: Reference region selection has an important influence on the detection of florbetapir change. Compared with cerebellum or pons alone, reference regions that included subcortical WM resulted in change measurements that are more accurate. In addition, because use of WM-containing reference regions involves dividing out cortical signal contained in the reference region (via partial-volume effects), use of these WM-containing regions may result in more conservative estimates of actual change. Future analyses using different tracers, tracer–kinetic models, pipelines, and comparisons with other biomarkers will further optimize our ability to accurately measure Aβ changes over time. PMID:25745095
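
    The quantity being compared across candidate reference regions is the standardized uptake value ratio (SUVR), cortical uptake divided by reference-region uptake, and its change over time. The sketch below shows that calculation for two illustrative reference regions; all uptake values are hypothetical and only demonstrate how the reference choice shifts both the SUVR level and the apparent longitudinal change.

    ```python
    def suvr(cortical_mean, reference_mean):
        """Standardized uptake value ratio: cortical uptake / reference uptake."""
        return cortical_mean / reference_mean

    def annualized_change(suvr_baseline, suvr_followup, years):
        """Percent change per year in cortical SUVR between two scans."""
        return 100.0 * (suvr_followup - suvr_baseline) / (suvr_baseline * years)

    # Hypothetical mean uptake values at baseline and 2-year follow-up
    baseline = {"cortex": 1.30, "whole_cerebellum": 1.00, "subcortical_wm": 1.60}
    followup = {"cortex": 1.38, "whole_cerebellum": 1.02, "subcortical_wm": 1.61}
    for ref in ("whole_cerebellum", "subcortical_wm"):
        s0 = suvr(baseline["cortex"], baseline[ref])
        s1 = suvr(followup["cortex"], followup[ref])
        print(ref, round(s0, 3), round(s1, 3),
              round(annualized_change(s0, s1, 2.0), 2), "%/yr")
    ```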

  13. Kerlinger's Criterial Referents Theory Revisited.

    ERIC Educational Resources Information Center

    Zak, Itai; Birenbaum, Menucha

    1980-01-01

    Kerlinger's criterial referents theory of attitudes was tested cross-culturally by administering an education attitude referents summated-rating scale to 713 individuals in Israel. The response pattern to criterial and noncriterial referents was examined. Results indicated empirical cross-cultural validity of theory, but questioned measuring…

  14. Evaluating the generalizability of GEP models for estimating reference evapotranspiration in distant humid and arid locations

    NASA Astrophysics Data System (ADS)

    Kiafar, Hamed; Babazadeh, Hosssien; Marti, Pau; Kisi, Ozgur; Landeras, Gorka; Karimi, Sepideh; Shiri, Jalal

    2017-10-01

    Evapotranspiration estimation is of crucial importance in arid and hyper-arid regions, which suffer from water shortage, increasing dryness and heat. A modeling study of cross-station assessment between hyper-arid and humid conditions is reported here. The derived equations estimate ET0 values based on temperature-, radiation-, and mass transfer-based configurations. Using data from two meteorological stations in a hyper-arid region of Iran and two meteorological stations in a humid region of Spain, different local and cross-station approaches are applied for developing and validating the derived equations. The comparison of the gene expression programming (GEP)-derived equations with the corresponding empirical and semi-empirical ET0 estimation equations reveals the superiority of the new formulas over the corresponding empirical equations. Therefore, the derived models can be successfully applied in these hyper-arid and humid regions as well as in similar climatic contexts, especially in data-scarce situations. The results also show that, when relying on proper input configurations, cross-station application might be a promising alternative to locally trained models for stations with data scarcity.
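
    As an example of the temperature-based empirical configurations that GEP models are typically benchmarked against, the sketch below implements the Hargreaves-Samani equation. This is only an illustrative baseline under assumed units; it is not one of the GEP equations derived in the study.

    ```python
    def hargreaves_samani_et0(t_mean, t_max, t_min, ra):
        """Hargreaves-Samani reference evapotranspiration (mm/day), a common
        temperature-based empirical baseline. `ra` is extraterrestrial radiation
        expressed as equivalent evaporation (mm/day)."""
        return 0.0023 * (t_mean + 17.8) * (t_max - t_min) ** 0.5 * ra

    # Hypothetical daily values for a hyper-arid station
    print(round(hargreaves_samani_et0(t_mean=31.0, t_max=39.0, t_min=23.0, ra=16.5), 2), "mm/day")
    ```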

  15. Finding regional models of the Alzheimer disease by fusing information from neuropsychological tests and structural MR images

    NASA Astrophysics Data System (ADS)

    Giraldo, Diana L.; García-Arteaga, Juan D.; Romero, Eduardo

    2016-03-01

    Initial diagnosis of Alzheimer's disease (AD) is based on the patient's clinical history and a battery of neuropsychological tests. This work presents an automatic strategy that uses structural Magnetic Resonance Imaging (MRI) to learn brain models for different stages of the disease using information from clinical assessments. A comparison of the discriminant power of the models in different anatomical areas is then made by using the brain regions of the models as a reference frame for the classification problem; a Receiver Operating Characteristic (ROC) curve is constructed from the projection onto the AD model. Validation was performed using a leave-one-out scheme with 86 subjects (20 AD and 60 NC) from the Open Access Series of Imaging Studies (OASIS) database. The region with the best classification performance was the left amygdala, where it is possible to achieve a sensitivity and specificity of 85% at the same time. The regions with the best performance, in terms of the AUC, are in strong agreement with those described as important for the diagnosis of AD in clinical practice.
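
    The leave-one-out ROC evaluation described here can be summarized in a few lines. The sketch below runs that scheme on synthetic per-subject features with a logistic-regression classifier; the feature values, group sizes and classifier are assumptions for illustration, since the study derives its scores from regional projections of structural MRI, not from random data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut
    from sklearn.metrics import roc_auc_score

    # Hypothetical per-subject features (e.g., projections onto a regional model)
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (60, 4)),    # NC subjects
                   rng.normal(0.8, 1.0, (20, 4))])   # AD subjects
    y = np.array([0] * 60 + [1] * 20)

    scores = np.zeros(len(y))
    for train, test in LeaveOneOut().split(X):
        clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        scores[test] = clf.predict_proba(X[test])[:, 1]

    print("leave-one-out AUC:", round(roc_auc_score(y, scores), 3))
    ```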

  16. A regional ionospheric TEC mapping technique over China and adjacent areas on the basis of data assimilation

    NASA Astrophysics Data System (ADS)

    Aa, Ercha; Huang, Wengeng; Yu, Shimei; Liu, Siqing; Shi, Liqin; Gong, Jiancun; Chen, Yanhong; Shen, Hua

    2015-06-01

    In this paper, a regional total electron content (TEC) mapping technique over China and adjacent areas (70°E-140°E and 15°N-55°N) is developed on the basis of a Kalman filter data assimilation scheme driven by Global Navigation Satellite Systems (GNSS) data from the Crustal Movement Observation Network of China and International GNSS Service. The regional TEC maps can be generated accordingly with the spatial and temporal resolution being 1°×1° and 5 min, respectively. The accuracy and quality of the TEC mapping technique have been validated through the comparison with GNSS observations, the International Reference Ionosphere model values, the global ionosphere maps from Center for Orbit Determination of Europe, and the Massachusetts Institute of Technology Automated Processing of GPS TEC data from Madrigal database. The verification results indicate that great systematic improvements can be obtained when data are assimilated into the background model, which demonstrates the effectiveness of this technique in providing accurate regional specification of the ionospheric TEC over China and adjacent areas.
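
    The core of the data assimilation scheme is a Kalman analysis step that blends a background TEC field with GNSS-derived observations. The sketch below shows that generic update on a toy state; the observation operator, grid, covariances and TEC values are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np

    def kalman_update(x_b, P_b, y, H, R):
        """One Kalman-filter analysis step: combine a background state x_b
        (e.g., gridded TEC from a background model) with observations y
        through the observation operator H. Generic sketch only."""
        S = H @ P_b @ H.T + R                      # innovation covariance
        K = P_b @ H.T @ np.linalg.inv(S)           # Kalman gain
        x_a = x_b + K @ (y - H @ x_b)              # analysis state
        P_a = (np.eye(len(x_b)) - K @ H) @ P_b     # analysis covariance
        return x_a, P_a

    # Tiny illustrative example: 3 grid cells, 2 TEC observations
    x_b = np.array([12.0, 15.0, 18.0])             # background TEC (TECU)
    P_b = np.diag([4.0, 4.0, 4.0])
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.5, 0.5]])
    y = np.array([14.0, 17.5])
    R = np.diag([1.0, 1.0])
    x_a, P_a = kalman_update(x_b, P_b, y, H, R)
    print(np.round(x_a, 2))
    ```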

  17. Evaluation of New Reference Genes in Papaya for Accurate Transcript Normalization under Different Experimental Conditions

    PubMed Central

    Chen, Weixin; Chen, Jianye; Lu, Wangjin; Chen, Lei; Fu, Danwen

    2012-01-01

    Real-time reverse transcription PCR (RT-qPCR) is a preferred method for rapid and accurate quantification of gene expression. Appropriate application of RT-qPCR requires accurate normalization through the use of reference genes. As no single reference gene is universally suitable for all experiments, validation of reference gene(s) under different experimental conditions is crucial for RT-qPCR analysis. To date, only a few studies on reference genes have been carried out in other plants and none in papaya. In the present work, we selected 21 candidate reference genes and evaluated their expression stability in 246 papaya fruit samples using three algorithms, geNorm, NormFinder and RefFinder. The samples consisted of 13 sets collected under different experimental conditions, including various tissues, different storage temperatures, different cultivars, developmental stages, postharvest ripening, modified atmosphere packaging, 1-methylcyclopropene (1-MCP) treatment, hot water treatment, biotic stress and hormone treatment. Our results demonstrated that expression stability varied greatly between reference genes and that suitable reference gene(s) or combinations of reference genes for normalization should be validated according to the experimental conditions. In general, the internal reference genes EIF (Eukaryotic initiation factor 4A), TBP1 (TATA binding protein 1) and TBP2 (TATA binding protein 2) performed well under most experimental conditions, whereas the most widely used reference genes, ACTIN (Actin 2), 18S rRNA (18S ribosomal RNA) and GAPDH (Glyceraldehyde-3-phosphate dehydrogenase), were not suitable under many experimental conditions. In addition, the two commonly used programs, geNorm and NormFinder, proved sufficient for the validation. This work provides the first systematic analysis for the selection of superior reference genes for accurate transcript normalization in papaya under different experimental conditions. PMID:22952972

  18. Post traumatic brain perfusion SPECT analysis using reconstructed ROI maps of radioactive microsphere derived cerebral blood flow and statistical parametric mapping

    PubMed Central

    McGoron, Anthony J; Capille, Michael; Georgiou, Michael F; Sanchez, Pablo; Solano, Juan; Gonzalez-Brito, Manuel; Kuluz, John W

    2008-01-01

    Background: Assessment of cerebral blood flow (CBF) by SPECT could be important in the management of patients with severe traumatic brain injury (TBI) because changes in regional CBF can affect outcome by promoting edema formation and intracranial pressure elevation (with cerebral hyperemia), or by causing secondary ischemic injury including post-traumatic stroke. The purpose of this study was to establish an improved method for evaluating regional CBF changes after TBI in piglets. Methods: The focal effects of moderate traumatic brain injury (TBI) on cerebral blood flow (CBF) by SPECT cerebral blood perfusion (CBP) imaging in an animal model were investigated by parallelized statistical techniques. Regional CBF was measured by radioactive microspheres and by SPECT 2 hours after injury in sham-operated piglets versus those receiving severe TBI by fluid-percussion injury to the left parietal lobe. Qualitative SPECT CBP accuracy was assessed against reference radioactive microsphere regional CBF measurements by map reconstruction, registration and smoothing. Cerebral hypoperfusion in the test group was identified at the voxel level using statistical parametric mapping (SPM). Results: A significant area of hypoperfusion (P < 0.01) was found as a response to the TBI. Statistical mapping of the reference microsphere CBF data confirms a focal decrease found with SPECT and SPM. Conclusion: The suitability of SPM for application to the experimental model and ability to provide insight into CBF changes in response to traumatic injury was validated by the SPECT SPM result of a decrease in CBP at the left parietal region injury area of the test group. Further study and correlation of this characteristic lesion with long-term outcomes and auxiliary diagnostic modalities is critical to developing more effective critical care treatment guidelines and automated medical imaging processing techniques. PMID:18312639

  19. Post traumatic brain perfusion SPECT analysis using reconstructed ROI maps of radioactive microsphere derived cerebral blood flow and statistical parametric mapping.

    PubMed

    McGoron, Anthony J; Capille, Michael; Georgiou, Michael F; Sanchez, Pablo; Solano, Juan; Gonzalez-Brito, Manuel; Kuluz, John W

    2008-02-29

    Assessment of cerebral blood flow (CBF) by SPECT could be important in the management of patients with severe traumatic brain injury (TBI) because changes in regional CBF can affect outcome by promoting edema formation and intracranial pressure elevation (with cerebral hyperemia), or by causing secondary ischemic injury including post-traumatic stroke. The purpose of this study was to establish an improved method for evaluating regional CBF changes after TBI in piglets. The focal effects of moderate traumatic brain injury (TBI) on cerebral blood flow (CBF) by SPECT cerebral blood perfusion (CBP) imaging in an animal model were investigated by parallelized statistical techniques. Regional CBF was measured by radioactive microspheres and by SPECT 2 hours after injury in sham-operated piglets versus those receiving severe TBI by fluid-percussion injury to the left parietal lobe. Qualitative SPECT CBP accuracy was assessed against reference radioactive microsphere regional CBF measurements by map reconstruction, registration and smoothing. Cerebral hypoperfusion in the test group was identified at the voxel level using statistical parametric mapping (SPM). A significant area of hypoperfusion (P < 0.01) was found as a response to the TBI. Statistical mapping of the reference microsphere CBF data confirms a focal decrease found with SPECT and SPM. The suitability of SPM for application to the experimental model and ability to provide insight into CBF changes in response to traumatic injury was validated by the SPECT SPM result of a decrease in CBP at the left parietal region injury area of the test group. Further study and correlation of this characteristic lesion with long-term outcomes and auxiliary diagnostic modalities is critical to developing more effective critical care treatment guidelines and automated medical imaging processing techniques.
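
    The voxel-level comparison at the heart of the SPM analysis is a mass-univariate two-sample test between groups. The sketch below runs a plain voxel-wise t-test on synthetic perfusion volumes with a simulated focal deficit; the data, group sizes and uncorrected p < 0.01 threshold are assumptions for illustration, and real SPM pipelines add masking, smoothing and multiple-comparison correction.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical perfusion volumes: (subjects, x, y, z), synthetic data only
    rng = np.random.default_rng(1)
    sham = rng.normal(100, 10, size=(6, 16, 16, 8))
    tbi = rng.normal(100, 10, size=(6, 16, 16, 8))
    tbi[:, 4:8, 4:8, 2:4] -= 25                          # simulated focal hypoperfusion

    t_map, p_map = stats.ttest_ind(sham, tbi, axis=0)     # voxel-wise two-sample t-test
    hypoperfused = (t_map > 0) & (p_map < 0.01)           # sham > TBI at p < 0.01
    print("voxels flagged as hypoperfused:", int(hypoperfused.sum()))
    ```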

  20. Experimental Validation of Lightning-Induced Electromagnetic (Indirect) Coupling to Short Monopole Antennas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crull, E W; Brown Jr., C G; Perkins, M P

    2008-07-30

    For short monopoles in this low-power case, it has been shown that a simple circuit model is capable of accurate predictions for the shape and magnitude of the antenna response to lightning-generated electric field coupling effects, provided that the elements of the circuit model have accurate values. Numerical EM simulation can be used to provide more accurate values for the circuit elements than the simple analytical formulas, since the analytical formulas are used outside of their region of validity. However, even with the approximate analytical formulas the simple circuit model produces reasonable results, which would improve if more accurate analytical models were used. This report discusses the coupling analysis approaches taken to understand the interaction between a time-varying EM field and a short monopole antenna, within the context of lightning safety for nuclear weapons at DOE facilities. It describes the validation of a simple circuit model using laboratory study in order to understand the indirect coupling of energy into a part, and the resulting voltage. Results show that in this low-power case, the circuit model predicts peak voltages within approximately 32% using circuit component values obtained from analytical formulas and about 13% using circuit component values obtained from numerical EM simulation. We note that the analytical formulas are used outside of their region of validity. First, the antenna is insulated and not a bare wire and there are perhaps fringing field effects near the termination of the outer conductor that the formula does not take into account. Also, the effective height formula is for a monopole directly over a ground plane, while in the time-domain measurement setup the monopole is elevated above the ground plane by about 1.5-inch (refer to Figure 5).

  1. GeneImp: Fast Imputation to Large Reference Panels Using Genotype Likelihoods from Ultralow Coverage Sequencing

    PubMed Central

    Spiliopoulou, Athina; Colombo, Marco; Orchard, Peter; Agakov, Felix; McKeigue, Paul

    2017-01-01

    We address the task of genotype imputation to a dense reference panel given genotype likelihoods computed from ultralow coverage sequencing as inputs. In this setting, the data have a high-level of missingness or uncertainty, and are thus more amenable to a probabilistic representation. Most existing imputation algorithms are not well suited for this situation, as they rely on prephasing for computational efficiency, and, without definite genotype calls, the prephasing task becomes computationally expensive. We describe GeneImp, a program for genotype imputation that does not require prephasing and is computationally tractable for whole-genome imputation. GeneImp does not explicitly model recombination, instead it capitalizes on the existence of large reference panels—comprising thousands of reference haplotypes—and assumes that the reference haplotypes can adequately represent the target haplotypes over short regions unaltered. We validate GeneImp based on data from ultralow coverage sequencing (0.5×), and compare its performance to the most recent version of BEAGLE that can perform this task. We show that GeneImp achieves imputation quality very close to that of BEAGLE, using one to two orders of magnitude less time, without an increase in memory complexity. Therefore, GeneImp is the first practical choice for whole-genome imputation to a dense reference panel when prephasing cannot be applied, for instance, in datasets produced via ultralow coverage sequencing. A related future application for GeneImp is whole-genome imputation based on the off-target reads from deep whole-exome sequencing. PMID:28348060
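
    To make the role of genotype likelihoods concrete, the sketch below combines per-site likelihoods from ultralow-coverage reads with a Hardy-Weinberg prior built from a reference-panel allele frequency to obtain genotype posteriors. This is a deliberately simplified, site-independent illustration; GeneImp itself works by copying short haplotype stretches from the reference panel rather than treating sites independently.

    ```python
    import numpy as np

    def genotype_posterior(likelihoods, alt_freq):
        """Combine genotype likelihoods P(reads | g), g in {0, 1, 2} ALT alleles,
        with a Hardy-Weinberg prior derived from a reference-panel allele
        frequency, returning normalized genotype posteriors."""
        p = alt_freq
        prior = np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])
        post = np.asarray(likelihoods) * prior
        return post / post.sum()

    # Hypothetical likelihoods at one site from ~0.5x coverage (a single ALT read)
    print(np.round(genotype_posterior([0.05, 0.475, 0.475], alt_freq=0.12), 3))
    ```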

  2. Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing

    NASA Astrophysics Data System (ADS)

    Williams, McKay D.

    Reference data ("ground truth") maps traditionally have been used to assess the accuracy of imaging spectrometer classification algorithms. However, these reference data can be prohibitively expensive to produce, often do not include sub-pixel abundance estimates necessary to assess spectral unmixing algorithms, and lack published validation reports. Our research proposes methodologies to efficiently generate, validate, and apply abundance map reference data (AMRD) to airborne remote sensing scenes. We generated scene-wide AMRD for three different remote sensing scenes using our remotely sensed reference data (RSRD) technique, which spatially aggregates unmixing results from fine scale imagery (e.g., 1-m Ground Sample Distance (GSD)) to co-located coarse scale imagery (e.g., 10-m GSD or larger). We validated the accuracy of this methodology by estimating AMRD in 51 randomly-selected 10 m x 10 m plots, using seven independent methods and observers, including field surveys by two observers, imagery analysis by two observers, and RSRD using three algorithms. Results indicated statistically-significant differences between all versions of AMRD, suggesting that all forms of reference data need to be validated. Given these significant differences between the independent versions of AMRD, we proposed that the mean of all (MOA) versions of reference data for each plot and class were most likely to represent true abundances. We then compared each version of AMRD to MOA. Best case accuracy was achieved by a version of imagery analysis, which had a mean coverage area error of 2.0%, with a standard deviation of 5.6%. One of the RSRD algorithms was nearly as accurate, achieving a mean error of 3.0%, with a standard deviation of 6.3%, showing the potential of RSRD-based AMRD generation. Application of validated AMRD to specific coarse scale imagery involved three main parts: 1) spatial alignment of coarse and fine scale imagery, 2) aggregation of fine scale abundances to produce coarse scale imagery-specific AMRD, and 3) demonstration of comparisons between coarse scale unmixing abundances and AMRD. Spatial alignment was performed using our scene-wide spectral comparison (SWSC) algorithm, which aligned imagery with accuracy approaching the distance of a single fine scale pixel. We compared simple rectangular aggregation to coarse sensor point spread function (PSF) aggregation, and found that the PSF approach returned lower error, but that rectangular aggregation more accurately estimated true abundances at ground level. We demonstrated various metrics for comparing unmixing results to AMRD, including mean absolute error (MAE) and linear regression (LR). We additionally introduced reference data mean adjusted MAE (MA-MAE), and reference data confidence interval adjusted MAE (CIA-MAE), which account for known error in the reference data itself. MA-MAE analysis indicated that fully constrained linear unmixing of coarse scale imagery across all three scenes returned an error of 10.83% per class and pixel, with regression analysis yielding a slope = 0.85, intercept = 0.04, and R2 = 0.81. Our reference data research has demonstrated a viable methodology to efficiently generate, validate, and apply AMRD to specific examples of airborne remote sensing imagery, thereby enabling direct quantitative assessment of spectral unmixing performance.
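
    The comparison metrics named above (MAE and its reference-adjusted variants) are straightforward to compute once pixel-level abundances and AMRD are aligned. The sketch below gives MAE and one plausible reading of a mean-adjusted MAE that discounts the known mean error of the reference data itself; the abundance values are hypothetical and the adjustment is an interpretation, not the author's exact definition.

    ```python
    import numpy as np

    def mae(estimated, reference):
        """Mean absolute error between unmixing abundances and reference abundances."""
        return float(np.mean(np.abs(np.asarray(estimated) - np.asarray(reference))))

    def mean_adjusted_mae(estimated, reference, reference_mean_error):
        """MAE discounted by the known mean error of the reference data itself
        (one plausible reading of the 'MA-MAE' idea)."""
        return max(mae(estimated, reference) - reference_mean_error, 0.0)

    # Hypothetical per-pixel abundances for one class
    est = [0.42, 0.55, 0.10, 0.73]
    ref = [0.50, 0.48, 0.05, 0.80]
    print(round(mae(est, ref), 3), round(mean_adjusted_mae(est, ref, 0.02), 3))
    ```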

  3. Uncertainty Analysis in the Creation of a Fine-Resolution Leaf Area Index (LAI) Reference Map for Validation of Moderate Resolution LAI Products

    EPA Science Inventory

    The validation process for a moderate resolution leaf area index (LAI) product (i.e., MODIS) involves the creation of a high spatial resolution LAI reference map (Lai-RM), which when scaled to the moderate LAI resolution (i.e., >1 km) allows for comparison and analysis with this ...

  4. Resource Conservation and Recovery Act, Part B Permit Application [for the Waste Isolation Pilot Plant (WIPP)]. Volume 5, Chapter D, Appendix D1 (conclusion), Revision 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Neville G.W.; Heuze, Francois E.; Miller, Hamish D.S.

    1993-03-01

    The reference design for the underground facilities at the Waste Isolation Pilot Plant was developed using the best criteria available at initiation of the detailed design effort. These design criteria are contained in the US Department of Energy document titled Design Criteria, Waste Isolation Pilot Plant (WIPP). Revised Mission Concept-IIA (RMC-IIA), Rev. 4, dated February 1984. The validation process described in the Design Validation Final Report has resulted in validation of the reference design of the underground openings based on these criteria. Future changes may necessitate modification of the Design Criteria document and/or the reference design. Validation of the reference design as presented in this report permits the consideration of future design or design criteria modifications necessitated by these changes or by experience gained at the WIPP. Any future modifications to the design criteria and/or the reference design will be governed by a DOE Standard Operation Procedure (SOP) covering underground design changes. This procedure will explain the process to be followed in describing, evaluating and approving the change.

  5. A method to improve visual similarity of breast masses for an interactive computer-aided diagnosis environment.

    PubMed

    Zheng, Bin; Lu, Amy; Hardesty, Lara A; Sumkin, Jules H; Hakim, Christiane M; Ganott, Marie A; Gur, David

    2006-01-01

    The purpose of this study was to develop and test a method for selecting "visually similar" regions of interest depicting breast masses from a reference library to be used in an interactive computer-aided diagnosis (CAD) environment. A reference library including 1000 malignant mass regions and 2000 benign and CAD-generated false-positive regions was established. When a suspicious mass region is identified, the scheme segments the region and searches for similar regions from the reference library using a multifeature-based k-nearest neighbor (KNN) algorithm. To improve selection of reference images, we added an interactive step. All actual masses in the reference library were subjectively rated on a scale from 1 to 9 as to their visual margin spiculation. When an observer identifies a suspected mass region during a case interpretation, he/she first rates the margins, and the computerized search is then limited only to regions rated as having similar levels of spiculation (within +/-1 scale difference). In an observer preference study including 85 test regions, two sets of the six "similar" reference regions selected by the KNN with and without the interactive step were displayed side by side with each test region. Four radiologists and five nonclinician observers selected the more appropriate ("similar") reference set in a two-alternative forced-choice preference experiment. All four radiologists and five nonclinician observers preferred the sets of regions selected by the interactive method with an average frequency of 76.8% and 74.6%, respectively. The overall preference for the interactive method was highly significant (p < 0.001). The study demonstrated that a simple interactive approach that includes subjectively perceived ratings of one feature alone, namely a rating of margin "spiculation", could substantially improve the selection of "visually similar" reference images.
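
    The interactive step amounts to filtering the reference library by spiculation rating before the nearest-neighbor search. The sketch below shows that retrieval pattern; the feature vectors, Euclidean distance and region identifiers are illustrative assumptions, not the study's actual feature set or distance measure.

    ```python
    import numpy as np

    def select_similar_regions(query_features, query_rating, library, k=6):
        """Return the k reference regions nearest to the query in feature space,
        considering only regions rated within +/-1 spiculation level of the query.
        `library` is a list of (features, rating, region_id) tuples."""
        candidates = [(f, r, rid) for f, r, rid in library if abs(r - query_rating) <= 1]
        dists = [np.linalg.norm(np.asarray(f) - np.asarray(query_features))
                 for f, _, _ in candidates]
        order = np.argsort(dists)[:k]
        return [candidates[i][2] for i in order]

    # Hypothetical 3-feature library entries: (features, spiculation rating, id)
    library = [([0.20, 0.50, 0.10], 4, "mass_017"), ([0.30, 0.40, 0.20], 7, "mass_102"),
               ([0.25, 0.55, 0.05], 5, "mass_233"), ([0.90, 0.10, 0.80], 4, "mass_310")]
    print(select_similar_regions([0.22, 0.52, 0.08], query_rating=5, library=library, k=2))
    ```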

  6. Reparative resynchronization in ischemic heart failure: an emerging strategy.

    PubMed

    Yamada, Satsuki; Terzic, Andre

    2014-08-01

    Cardiac dyssynchrony refers to disparity in cardiac wall motion, a serious consequence of myocardial infarction associated with poor outcome. Infarct-induced scar is refractory to device-based cardiac resynchronization therapy, which relies on viable tissue. Leveraging the prospect of structural and functional regeneration, reparative resynchronization has emerged as a potentially achievable strategy. In proof-of-concept studies, stem-cell therapy eliminates contractile deficit originating from infarcted regions and secures long-term synchronization with tissue repair. Limited clinical experience suggests benefit of cell interventions in acute and chronic ischemic heart disease as adjuvant to standard of care. A regenerative resynchronization option for dyssynchronous heart failure thus merits validation.

  7. Report of the panel on international programs

    NASA Technical Reports Server (NTRS)

    Anderson, Allen Joel; Fuchs, Karl W.; Ganeka, Yasuhiro; Gaur, Vinod; Green, Andrew A.; Siegfried, W.; Lambert, Anthony; Rais, Jacub; Reighber, Christopher; Seeger, Herman

    1991-01-01

    The panel recommends that NASA participate and take an active role in the continuous monitoring of existing regional networks, the realization of high resolution geopotential and topographic missions, the establishment of interconnection of the reference frames as defined by different space techniques, the development and implementation of automation for all ground-to-space observing systems, calibration and validation experiments for measuring techniques and data, the establishment of international space-based networks for real-time transmission of high density space data in standardized formats, tracking and support for non-NASA missions, and the extension of state-of-the art observing and analysis techniques to developing nations.

  8. Histogram-based normalization technique on human brain magnetic resonance images from different acquisitions.

    PubMed

    Sun, Xiaofei; Shi, Lin; Luo, Yishan; Yang, Wei; Li, Hongpeng; Liang, Peipeng; Li, Kuncheng; Mok, Vincent C T; Chu, Winnie C W; Wang, Defeng

    2015-07-28

    Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used for scanning different subjects or the same subject at a different time, which may result in large intensity variations. This intensity variation will greatly undermine the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we proposed a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image will also lie between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that the brain template with normalization preprocessing is of higher quality than the template with no normalization processing. We have proposed a histogram-based MRI intensity normalization method. The method can normalize scans which were acquired on different MRI units. We have validated that the method can greatly improve the image analysis performance. Furthermore, it is demonstrated that with the help of our normalization method, we can create a higher quality Chinese brain template.
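
    The two-step procedure described here (rescale the reference to [LIR, HIR], then stretch the input histogram onto it) can be illustrated with generic quantile-based histogram matching. The sketch below is a simplified stand-in under assumed LIR/HIR values and synthetic images; the paper's exact stretching algorithm may differ.

    ```python
    import numpy as np

    def intensity_scale(img, lir, hir):
        """Step 1 (IS): linearly rescale image intensities to the [LIR, HIR] range."""
        lo, hi = img.min(), img.max()
        return (img - lo) / (hi - lo) * (hir - lir) + lir

    def histogram_match(src, ref):
        """Step 2 (HN): map the source image's intensity distribution onto the
        reference image's distribution via quantile (CDF) matching."""
        src_vals, src_idx, src_counts = np.unique(src.ravel(), return_inverse=True,
                                                  return_counts=True)
        ref_vals, ref_counts = np.unique(ref.ravel(), return_counts=True)
        src_cdf = np.cumsum(src_counts) / src.size
        ref_cdf = np.cumsum(ref_counts) / ref.size
        matched = np.interp(src_cdf, ref_cdf, ref_vals)
        return matched[src_idx].reshape(src.shape)

    # Hypothetical low- and high-quality scans of the same subject
    rng = np.random.default_rng(2)
    high_quality = intensity_scale(rng.normal(400, 60, (64, 64)), lir=0, hir=255)
    low_quality = rng.normal(900, 150, (64, 64))
    normalized = histogram_match(low_quality, high_quality)
    print(normalized.min().round(1), normalized.max().round(1))
    ```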

  9. Creating an open access cal/val repository via the LACO-Wiki online validation platform

    NASA Astrophysics Data System (ADS)

    Perger, Christoph; See, Linda; Dresel, Christopher; Weichselbaum, Juergen; Fritz, Steffen

    2017-04-01

    There is a major gap in the amount of in-situ data available on land cover and land use, either as field-based ground truth information or from image interpretation, both of which are used for the calibration and validation (cal/val) of products derived from Earth Observation. Although map producers generally publish their confusion matrices and the accuracy measures associated with their land cover and land use products, the cal/val data (also referred to as reference data) are rarely shared in an open manner. Although there have been efforts in compiling existing reference datasets and making them openly available, e.g. through the GOFC/GOLD (Global Observation for Forest Cover and Land Dynamics) portal or the European Commission's Copernicus Reference Data Access (CORDA), this represents a tiny fraction of the reference data collected and stored locally around the world. Moreover, the validation of land cover and land use maps is usually undertaken with tools and procedures specific to a particular institute or organization due to the lack of standardized validation procedures; thus, there are currently no incentives to share the reference data more broadly with the land cover and land use community. In an effort to provide a set of standardized, online validation tools and to build an open repository of cal/val data, the LACO-Wiki online validation portal has been developed, which will be presented in this paper. The portal contains transparent, documented and reproducible validation procedures that can be applied to local as well as global products. LACO-Wiki was developed through a user consultation process that resulted in a 4-step wizard-based workflow, which supports the user from uploading the map product for validation, through to the sampling process and the validation of these samples, until the results are processed and a final report is created that includes a range of commonly reported accuracy measures. One of the design goals of LACO-Wiki has been to simplify the workflows as much as possible so that the tool can be used both professionally and in an educational or non-expert context. By using the tool for validation, the user agrees to share their validation samples and therefore contribute to an open access cal/val repository. Interest in the use of LACO-Wiki for validation of national land cover or related products has already been expressed, e.g. by national stakeholders under the umbrella of the European Environment Agency (EEA), and for global products by GOFC/GOLD and the Group on Earth Observation (GEO). Thus, LACO-Wiki has the potential to become the focal point around which an international land cover validation community could be built, and could significantly advance the state-of-the-art in land cover cal/val, particularly given recent developments in opening up of the Landsat archive and the open availability of Sentinel imagery. The platform will also offer open access to crowdsourced in-situ data, for example, from the recently developed LACO-Wiki mobile smartphone app, which can be used to collect additional validation information in the field, as well as to validation data collected via its partner platform, Geo-Wiki, where an already established community of citizen scientists collect land cover and land use data for different research applications.

  10. Transferring Error Characteristics of Satellite Rainfall Data from Ground Validation (gauged) into Non-ground Validation (ungauged)

    NASA Astrophysics Data System (ADS)

    Tang, L.; Hossain, F.

    2009-12-01

    Understanding the error characteristics of satellite rainfall data at different spatial/temporal scales is critical, especially when the scheduled Global Precipitation Mission (GPM) plans to provide High Resolution Precipitation Products (HRPPs) at global scales. Satellite rainfall data contain errors which need ground validation (GV) data for characterization, while satellite rainfall data will be most useful in the regions that are lacking in GV. Therefore, a critical step is to develop a spatial interpolation scheme for transferring the error characteristics of satellite rainfall data from GV regions to Non-GV regions. As a prelude to GPM, the TRMM Multi-satellite Precipitation Analysis (TMPA) products of 3B41RT and 3B42RT (Huffman et al., 2007) over the US spanning a record of 6 years are used as a representative example of satellite rainfall data. Next Generation Radar (NEXRAD) Stage IV rainfall data are used as the reference for GV data. Initial work by the authors (Tang et al., 2009, GRL) has shown promise in transferring error from GV to Non-GV regions, based on a six-year climatologic average of satellite rainfall data assuming only 50% of GV coverage. However, this transfer of error characteristics needs to be investigated for a range of GV data coverage. In addition, it is also important to investigate if proxy-GV data from an accurate space-borne sensor, such as the TRMM PR (or the GPM DPR), can be leveraged for the transfer of error at sparsely gauged regions. The specific question we ask in this study is, “what is the minimum coverage of GV data required for the error transfer scheme to be implemented at acceptable accuracy at hydrologically relevant scales?” Three geostatistical interpolation methods are compared: ordinary kriging, indicator kriging and disjunctive kriging. Various error metrics are assessed for transfer, such as Probability of Detection for rain and no rain, False Alarm Ratio, Frequency Bias, Critical Success Index, and RMSE. Understanding the proper space-time scales at which these metrics can be reasonably transferred is also explored in this study. Keywords: satellite rainfall, error transfer, spatial interpolation, kriging methods.
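
    Although the abstract gives no implementation, the core of the transfer step is a geostatistical interpolation of an error metric from GV cells to ungauged cells. The sketch below is a minimal ordinary-kriging example with an assumed exponential variogram and synthetic Probability-of-Detection values; the coordinates, variogram parameters, and metric field are all hypothetical.

```python
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, range_=100.0):
    """Assumed exponential variogram; in practice the parameters would be
    fitted to the empirical variogram of the GV-region error metric."""
    return nugget + sill * (1.0 - np.exp(-h / range_))

def ordinary_kriging(xy_obs, z_obs, xy_pred, variogram=exp_variogram):
    """Predict the error metric at ungauged locations from gauged ones."""
    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # kriging system: [[Gamma, 1], [1^T, 0]] [w; mu] = [gamma_0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d_obs)
    A[-1, -1] = 0.0
    preds = []
    for p in xy_pred:
        d0 = np.linalg.norm(xy_obs - p, axis=1)
        b = np.append(variogram(d0), 1.0)
        w = np.linalg.solve(A, b)[:n]
        preds.append(w @ z_obs)
    return np.array(preds)

# toy example: POD observed at a few "gauged" grid cells, predicted elsewhere
rng = np.random.default_rng(1)
xy_gv = rng.uniform(0, 500, size=(30, 2))          # km coordinates of GV cells
pod_gv = 0.6 + 0.2 * np.sin(xy_gv[:, 0] / 150.0)   # synthetic POD field
xy_nogv = rng.uniform(0, 500, size=(5, 2))         # ungauged target cells
print(ordinary_kriging(xy_gv, pod_gv, xy_nogv))
```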

  11. Evaluation of three different validation procedures regarding the accuracy of template-guided implant placement: an in vitro study.

    PubMed

    Vasak, Christoph; Strbac, Georg D; Huber, Christian D; Lettner, Stefan; Gahleitner, André; Zechner, Werner

    2015-02-01

    The study aims to evaluate the accuracy of the NobelGuide™ (Medicim/Nobel Biocare, Göteborg, Sweden) concept while minimizing the influence of clinical and surgical parameters. Moreover, the study compared and validated two validation procedures against a reference method. Overall, 60 implants were placed in 10 artificial edentulous mandibles according to the NobelGuide™ protocol. For merging the pre- and postoperative DICOM data sets, three different fusion methods (Triple Scan Technique, NobelGuide™ Validation software, and AMIRA® software [VSG - Visualization Sciences Group, Burlington, MA, USA] as reference) were applied. Discrepancies between the virtual and the actual implant positions were measured. The mean deviations measured with AMIRA® were 0.49 mm (implant shoulder), 0.69 mm (implant apex), and 1.98° (implant axis). The Triple Scan Technique as well as the NobelGuide™ Validation software revealed similar deviations compared with the reference method. A significant correlation between angular and apical deviations was seen (r = 0.53; p < .001). A greater implant diameter was associated with greater deviations (p = .03). The Triple Scan Technique as a system-independent validation procedure as well as the NobelGuide™ Validation software are in accordance with the AMIRA® software. The NobelGuide™ system showed similar or less spatial and angular deviations compared with others. © 2013 Wiley Periodicals, Inc.

  12. Validation of endogenous reference genes for qRT-PCR analysis of human visceral adipose samples

    PubMed Central

    2010-01-01

    Background: Given the epidemic proportions of obesity worldwide and the concurrent prevalence of metabolic syndrome, there is an urgent need for better understanding the underlying mechanisms of metabolic syndrome, in particular, the gene expression differences which may participate in obesity, insulin resistance and the associated series of chronic liver conditions. Real-time PCR (qRT-PCR) is the standard method for studying changes in relative gene expression in different tissues and experimental conditions. However, variations in amount of starting material, enzymatic efficiency and presence of inhibitors can lead to quantification errors. Hence the need for accurate data normalization is vital. Among several known strategies for data normalization, the use of reference genes as an internal control is the most common approach. Recent studies have shown that both obesity and presence of insulin resistance influence the expression of commonly used reference genes in omental fat. In this study we validated candidate reference genes suitable for qRT-PCR profiling experiments using visceral adipose samples from obese and lean individuals. Results: Cross-validation of expression stability of eight selected reference genes using three popular algorithms, GeNorm, NormFinder and BestKeeper found ACTB and RPII as most stable reference genes. Conclusions: We recommend ACTB and RPII as stable reference genes most suitable for gene expression studies of human visceral adipose tissue. The use of these genes as a reference pair may further enhance the robustness of qRT-PCR in this model system. PMID:20492695

  13. Validation of endogenous reference genes for qRT-PCR analysis of human visceral adipose samples.

    PubMed

    Mehta, Rohini; Birerdinc, Aybike; Hossain, Noreen; Afendy, Arian; Chandhoke, Vikas; Younossi, Zobair; Baranova, Ancha

    2010-05-21

    Given the epidemic proportions of obesity worldwide and the concurrent prevalence of metabolic syndrome, there is an urgent need for better understanding the underlying mechanisms of metabolic syndrome, in particular, the gene expression differences which may participate in obesity, insulin resistance and the associated series of chronic liver conditions. Real-time PCR (qRT-PCR) is the standard method for studying changes in relative gene expression in different tissues and experimental conditions. However, variations in amount of starting material, enzymatic efficiency and presence of inhibitors can lead to quantification errors. Hence the need for accurate data normalization is vital. Among several known strategies for data normalization, the use of reference genes as an internal control is the most common approach. Recent studies have shown that both obesity and presence of insulin resistance influence the expression of commonly used reference genes in omental fat. In this study we validated candidate reference genes suitable for qRT-PCR profiling experiments using visceral adipose samples from obese and lean individuals. Cross-validation of expression stability of eight selected reference genes using three popular algorithms, GeNorm, NormFinder and BestKeeper found ACTB and RPII as most stable reference genes. We recommend ACTB and RPII as stable reference genes most suitable for gene expression studies of human visceral adipose tissue. The use of these genes as a reference pair may further enhance the robustness of qRT-PCR in this model system.
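
    For readers unfamiliar with the stability ranking used here, the geNorm measure M for a candidate reference gene is the average standard deviation, across samples, of its pairwise log2 expression ratios with every other candidate. A minimal re-implementation is sketched below on made-up Cq values; the gene names and data are illustrative only, and 100% amplification efficiency is assumed.

```python
import numpy as np

def genorm_m(expr, gene_names):
    """expr: (n_samples, n_genes) relative expression quantities.
    Returns the geNorm stability measure M for each candidate reference gene
    (lower M = more stable). A minimal re-implementation for illustration."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    m = np.zeros(n_genes)
    for j in range(n_genes):
        v = []
        for k in range(n_genes):
            if k == j:
                continue
            # pairwise variation: SD across samples of the log2 expression ratio
            v.append(np.std(log_expr[:, j] - log_expr[:, k], ddof=1))
        m[j] = np.mean(v)
    return dict(zip(gene_names, m))

# toy Cq data for 3 candidate references across 6 visceral-adipose samples
cq = np.array([[20.1, 24.3, 18.0],
               [20.4, 24.9, 18.2],
               [20.0, 24.1, 17.9],
               [20.6, 25.2, 18.4],
               [20.2, 24.5, 18.1],
               [20.3, 24.6, 18.1]])
rel = 2.0 ** (cq.min(axis=0) - cq)   # assumes 100% efficiency for simplicity
print(genorm_m(rel, ["ACTB", "RPII", "GAPDH"]))
```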

  14. Validity for What? The Peril of Overclarifying

    ERIC Educational Resources Information Center

    Murphy, Kevin R.

    2012-01-01

    As Paul Newton so ably demonstrates, the concept of validity is both important and problematic. Over the last several decades, a consensus definition of validity has emerged; the current edition of "Standards for Educational and Psychological Testing" notes, "Validity refers to the degree to which evidence and theory support the interpretations of…

  15. On the Validity of Useless Tests

    ERIC Educational Resources Information Center

    Sireci, Stephen G.

    2016-01-01

    A misconception exists that validity may refer only to the "interpretation" of test scores and not to the "uses" of those scores. The development and evolution of validity theory illustrate test score interpretation was a primary focus in the earliest days of modern testing, and that validating interpretations derived from test…

  16. 40 CFR 1065.514 - Cycle-validation criteria for operation over specified duty cycles.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Cycle-validation criteria for... Over Specified Duty Cycles § 1065.514 Cycle-validation criteria for operation over specified duty...-validation criteria. You must compare the original reference duty cycle points generated as described in...

  17. 40 CFR 1065.514 - Cycle-validation criteria for operation over specified duty cycles.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Cycle-validation criteria for... Over Specified Duty Cycles § 1065.514 Cycle-validation criteria for operation over specified duty...-validation criteria. You must compare the original reference duty cycle points generated as described in...

  18. 40 CFR 1065.514 - Cycle-validation criteria for operation over specified duty cycles.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Cycle-validation criteria for... Over Specified Duty Cycles § 1065.514 Cycle-validation criteria for operation over specified duty...-validation criteria. You must compare the original reference duty cycle points generated as described in...

  19. 40 CFR 1065.514 - Cycle-validation criteria for operation over specified duty cycles.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Cycle-validation criteria for... Over Specified Duty Cycles § 1065.514 Cycle-validation criteria for operation over specified duty...-validation criteria. You must compare the original reference duty cycle points generated as described in...

  20. Validity Issues in Clinical Assessment.

    ERIC Educational Resources Information Center

    Foster, Sharon L.; Cone, John D.

    1995-01-01

    Validation issues that arise with measures of constructs and behavior are addressed with reference to general reasons for using assessment procedures in clinical psychology. A distinction is made between the representational phase of validity assessment and the elaborative validity phase in which the meaning and utility of scores are examined.…

  1. Selection and Validation of Appropriate Reference Genes for qRT-PCR Analysis in Isatis indigotica Fort.

    PubMed Central

    Li, Tao; Wang, Jing; Lu, Miao; Zhang, Tianyi; Qu, Xinyun; Wang, Zhezhi

    2017-01-01

    Due to its sensitivity and specificity, real-time quantitative PCR (qRT-PCR) is a popular technique for investigating gene expression levels in plants. Based on the Minimum Information for Publication of Real-Time Quantitative PCR Experiments (MIQE) guidelines, it is necessary to select and validate putative appropriate reference genes for qRT-PCR normalization. In the current study, three algorithms, geNorm, NormFinder, and BestKeeper, were applied to assess the expression stability of 10 candidate reference genes across five different tissues and three different abiotic stresses in Isatis indigotica Fort. Additionally, the IiYUC6 gene associated with IAA biosynthesis was applied to validate the candidate reference genes. The analysis results of the geNorm, NormFinder, and BestKeeper algorithms indicated certain differences for the different sample sets and different experiment conditions. Considering all of the algorithms, PP2A-4 and TUB4 were recommended as the most stable reference genes for total and different tissue samples, respectively. Moreover, RPL15 and PP2A-4 were considered to be the most suitable reference genes for abiotic stress treatments. The obtained experimental results might contribute to improved accuracy and credibility for the expression levels of target genes by qRT-PCR normalization in I. indigotica. PMID:28702046

  2. Mechanical function of the heart and its alteration during myocardial ischemia and infarction. Specific reference to coronary atherosclerosis.

    PubMed

    Swan, H J

    1979-12-01

    Altered regional mechanical myocardial performance is an early, sensitive marker of myocardial ischemia, and can be estimated in man with reasonable accuracy. Identification, localization and quantification of abnormalities in mechanical performance can be used to predict the presence of coronary artery disease. Testing techniques that have little or no effect on diagnostic efficiency must be replaced with more sensitive indicators of ischemia. If experimental data are validated by findings in human subjects, accurate identification of regional wall motion changes during test conditions should prove to be a powerful marker of ischemia. To be of value, a diagnostic test must strongly increase the frequency of identification of subjects with a high probability for the presence of coronary artery disease in an otherwise low-prevalence population, and of those with known disease who are at the highest risk for complications including myocardial infarction or death.

  3. Analysis of the sensitivity properties of a model of vector-borne bubonic plague.

    PubMed

    Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald

    2008-09-06

    Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
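
    The adjoint/variational machinery used in the paper is beyond a short example, but the quantity it delivers, derivatives of a model output with respect to parameters at a reference point, can be approximated with central differences on a toy ODE. The sketch below uses a generic host-vector style SI model, not the Keeling & Gilligan model, and a finite-difference baseline rather than the adjoint approach; it is purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, beta, gamma):
    """Toy SI-type model (NOT the Keeling & Gilligan plague model); used only
    to illustrate output-to-parameter sensitivities."""
    s, i = y
    return [-beta * s * i, beta * s * i - gamma * i]

def quantity_of_interest(params, t_end=50.0):
    """Example output: a force-of-infection-like quantity beta*S*I at t_end."""
    beta, gamma = params
    sol = solve_ivp(rhs, (0.0, t_end), [0.99, 0.01], args=(beta, gamma),
                    dense_output=True, rtol=1e-8, atol=1e-10)
    s, i = sol.sol(t_end)
    return beta * s * i

def central_difference_sensitivity(params, rel_step=1e-4):
    """Finite-difference sensitivities dQ/dp as a cheap check; the adjoint
    approach in the paper obtains such derivatives far more efficiently."""
    params = np.asarray(params, dtype=float)
    grads = np.zeros_like(params)
    for k in range(len(params)):
        h = rel_step * max(abs(params[k]), 1e-8)
        up, dn = params.copy(), params.copy()
        up[k] += h
        dn[k] -= h
        grads[k] = (quantity_of_interest(up) - quantity_of_interest(dn)) / (2 * h)
    return grads

ref_point = np.array([0.5, 0.1])   # hypothetical reference parameters
print(central_difference_sensitivity(ref_point))
```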

  4. Optimal Reference Gene Selection for Expression Studies in Human Reticulocytes.

    PubMed

    Aggarwal, Anu; Jamwal, Manu; Viswanathan, Ganesh K; Sharma, Prashant; Sachdeva, ManUpdesh S; Bansal, Deepak; Malhotra, Pankaj; Das, Reena

    2018-05-01

    Reference genes are indispensable for normalizing mRNA levels across samples in real-time quantitative PCR. Their expression levels vary under different experimental conditions and because of several inherent characteristics. Appropriate reference gene selection is thus critical for gene-expression studies. This study aimed at selecting optimal reference genes for gene-expression analysis of reticulocytes and at validating them in hereditary spherocytosis (HS) and β-thalassemia intermedia (βTI) patients. Seven reference genes (PGK1, MPP1, HPRT1, ACTB, GAPDH, RN18S1, and SDHA) were selected because of published reports. Real-time quantitative PCR was performed on reticulocytes in 20 healthy volunteers, 15 HS patients, and 10 βTI patients. Threshold cycle values were compared with fold-change method and RefFinder software. The stable reference genes recommended by RefFinder were validated with SLC4A1 and flow cytometric eosin-5'-maleimide binding assay values in HS patients and HBG2 and high performance liquid chromatography-derived percentage of hemoglobin F in βTI. Comprehensive ranking predicted MPP1 and GAPDH as optimal reference genes for reticulocytes that were not affected in HS and βTI. This was further confirmed on validation with eosin-5'-maleimide results and percentage of hemoglobin F in HS and βTI patients, respectively. Hence, MPP1 and GAPDH are good reference genes for reticulocyte expression studies compared with ACTB and RN18S1, the two most commonly used reference genes. Copyright © 2018 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  5. Development and validation of a cerebral oximeter capable of absolute accuracy.

    PubMed

    MacLeod, David B; Ikeda, Keita; Vacchiano, Charles; Lobbestael, Aaron; Wahr, Joyce A; Shaw, Andrew D

    2012-12-01

    Cerebral oximetry may be a valuable monitor, but few validation data are available, and most report the change from baseline rather than absolute accuracy, which may be affected by individuals whose oximetric values are outside the expected range. The authors sought to develop and validate a cerebral oximeter capable of absolute accuracy. This was an in vivo research study conducted in a university human physiology laboratory. Healthy human volunteers were enrolled in calibration and validation studies of 2 cerebral oximetric sensors, the Nonin 8000CA and 8004CA. The 8000CA validation study identified 5 individuals with atypical cerebral oxygenation values; their data were used to design the 8004CA sensor, which subsequently underwent calibration and validation. Volunteers were taken through a stepwise hypoxia protocol to a minimum peripheral oxygen saturation. Arteriovenous saturation (70% jugular bulb venous saturation and 30% arterial saturation) at 6 hypoxic plateaus was used as the reference value for the cerebral oximeter. Absolute accuracy was defined using a combination of the bias and precision of the paired saturations (A(RMS)). In the validation study for the 8000CA sensor (n = 9, 106 plateaus), relative accuracy was an A(RMS) of 2.7, with an absolute accuracy of 8.1, meeting the criteria for a relative (trend) monitor, but not an absolute monitor. In the validation study for the 8004CA sensor (n = 11, 119 plateaus), the A(RMS) of the 8004CA was 4.1, meeting the prespecified success criterion of <5.0. The Nonin cerebral oximeter using the 8004CA sensor can provide absolute data on regional cerebral saturation compared with arteriovenous saturation, even in subjects previously shown to have values outside the normal population distribution curves. Copyright © 2012 Elsevier Inc. All rights reserved.
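
    As a point of reference for the accuracy criterion, A(RMS) combines the bias and the spread of the paired device-minus-reference differences. A minimal sketch with hypothetical paired saturation values follows.

```python
import numpy as np

def accuracy_metrics(device, reference):
    """Bias, precision (SD of differences), and A_RMS for paired saturations.
    A_RMS is the RMS of the paired differences, which combines bias and
    precision (sqrt(bias**2 + sd**2), up to the n vs n-1 normalization)."""
    diff = np.asarray(device, float) - np.asarray(reference, float)
    bias = diff.mean()
    precision = diff.std(ddof=1)
    a_rms = np.sqrt(np.mean(diff ** 2))
    return bias, precision, a_rms

# hypothetical paired plateau values (% saturation), not study data
device = [68, 62, 57, 71, 75, 64, 59, 66]
reference = [70, 60, 55, 73, 74, 67, 60, 65]
print(accuracy_metrics(device, reference))
```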

  6. Cross-cultural adaptation and validation of the Dutch version of the core outcome measures index for low back pain.

    PubMed

    Van Lerbeirghe, J; Van Lerbeirghe, J; Van Schaeybroeck, P; Robijn, H; Rasschaert, R; Sys, J; Parlevliet, T; Hallaert, G; Van Wambeke, P; Depreitere, B

    2018-01-01

    The core outcome measures index (COMI) is a validated multidimensional instrument for assessing patient-reported outcome in patients with back problems. The aim of the present study is to translate the COMI into Dutch and validate it for use in native Dutch speakers with low back pain. The COMI was translated into Dutch following established guidelines and avoiding region-specific terminology. A total of 89 Dutch-speaking patients with low back pain were recruited from 8 centers, located in the Dutch-speaking part of Belgium. Patients completed a questionnaire booklet including the validated Dutch version of the Roland Morris disability questionnaire, EQ-5D, the WHOQoL-Bref, the Numeric Rating Scale (NRS) for pain, and the Dutch translation of the COMI. Two weeks later, patients completed the Dutch COMI translation again, with a transition scale assessing changes in their condition. The patterns of correlations between the individual COMI items and the validated reference questionnaires were comparable to those reported for other validated language versions of the COMI. The intraclass correlation for the COMI summary score was 0.90 (95% CI 0.84-0.94). It was 0.75 and 0.70 for the back and leg pain score, respectively. The minimum detectable change for the COMI summary score was 1.74. No significant differences were observed between repeated scores of individual COMI items or for the summary score. The reproducibility of the Dutch translation of the COMI is comparable to that of other validated spine outcome measures. The COMI items correlate well with the established item-specific scores. The Dutch translation of the COMI, validated by this work, is a reliable and valuable tool for spine centers treating Dutch-speaking patients and can be used in registries and outcome studies.

  7. Validation of Suitable Reference Genes for Expression Normalization in Echinococcus spp. Larval Stages

    PubMed Central

    Espínola, Sergio Martin; Ferreira, Henrique Bunselmeyer; Zaha, Arnaldo

    2014-01-01

    In recent years, a significant amount of sequence data (both genomic and transcriptomic) for Echinococcus spp. has been published, thereby facilitating the analysis of genes expressed during a specific stage or involved in parasite development. To perform a suitable gene expression quantification analysis, the use of validated reference genes is strongly recommended. Thus, the aim of this work was to identify suitable reference genes to allow reliable expression normalization for genes of interest in Echinococcus granulosus sensu stricto (s.s.) (G1) and Echinococcus ortleppi upon induction of the early pre-adult development. Untreated protoscoleces (PS) and pepsin-treated protoscoleces (PSP) from E. granulosus s.s. (G1) and E. ortleppi metacestode were used. The gene expression stability of eleven candidate reference genes (βTUB, NDUFV2, RPL13, TBP, CYP-1, RPII, EF-1α, βACT-1, GAPDH, ETIF4A-III and MAPK3) was assessed using geNorm, Normfinder, and RefFinder. Our qPCR data showed a good correlation with the recently published RNA-seq data. Regarding expression stability, EF-1α and TBP were the most stable genes for both species. Interestingly, βACT-1 (the most commonly used reference gene), and GAPDH and ETIF4A-III (previously identified as housekeeping genes) did not behave stably in our assay conditions. We propose the use of EF-1α as a reference gene for studies involving gene expression analysis in both PS and PSP experimental conditions for E. granulosus s.s. and E. ortleppi. To demonstrate its applicability, EF-1α was used as a normalizer gene in the relative quantification of transcripts from genes coding for antigen B subunits. The same EF-1α reference gene may be used in studies with other Echinococcus sensu lato species. This report validates suitable reference genes for species of class Cestoda, phylum Platyhelminthes, thus providing a foundation for further validation in other epidemiologically important cestode species, such as those from the Taenia genus. PMID:25014071

  8. Validation of suitable reference genes for expression normalization in Echinococcus spp. larval stages.

    PubMed

    Espínola, Sergio Martin; Ferreira, Henrique Bunselmeyer; Zaha, Arnaldo

    2014-01-01

    In recent years, a significant amount of sequence data (both genomic and transcriptomic) for Echinococcus spp. has been published, thereby facilitating the analysis of genes expressed during a specific stage or involved in parasite development. To perform a suitable gene expression quantification analysis, the use of validated reference genes is strongly recommended. Thus, the aim of this work was to identify suitable reference genes to allow reliable expression normalization for genes of interest in Echinococcus granulosus sensu stricto (s.s.) (G1) and Echinococcus ortleppi upon induction of the early pre-adult development. Untreated protoscoleces (PS) and pepsin-treated protoscoleces (PSP) from E. granulosus s.s. (G1) and E. ortleppi metacestode were used. The gene expression stability of eleven candidate reference genes (βTUB, NDUFV2, RPL13, TBP, CYP-1, RPII, EF-1α, βACT-1, GAPDH, ETIF4A-III and MAPK3) was assessed using geNorm, Normfinder, and RefFinder. Our qPCR data showed a good correlation with the recently published RNA-seq data. Regarding expression stability, EF-1α and TBP were the most stable genes for both species. Interestingly, βACT-1 (the most commonly used reference gene), and GAPDH and ETIF4A-III (previously identified as housekeeping genes) did not behave stably in our assay conditions. We propose the use of EF-1α as a reference gene for studies involving gene expression analysis in both PS and PSP experimental conditions for E. granulosus s.s. and E. ortleppi. To demonstrate its applicability, EF-1α was used as a normalizer gene in the relative quantification of transcripts from genes coding for antigen B subunits. The same EF-1α reference gene may be used in studies with other Echinococcus sensu lato species. This report validates suitable reference genes for species of class Cestoda, phylum Platyhelminthes, thus providing a foundation for further validation in other epidemiologically important cestode species, such as those from the Taenia genus.

  9. Simple and rapid quantification of serotonin transporter binding using [11C]DASB bolus plus constant infusion.

    PubMed

    Gryglewski, G; Rischka, L; Philippe, C; Hahn, A; James, G M; Klebermass, E; Hienert, M; Silberbauer, L; Vanicek, T; Kautzky, A; Berroterán-Infante, N; Nics, L; Traub-Weidinger, T; Mitterhauser, M; Wadsak, W; Hacker, M; Kasper, S; Lanzenberger, R

    2017-04-01

    In-vivo quantification of serotonin transporters (SERT) in human brain has been a mainstay of molecular imaging in the field of neuropsychiatric disorders and helped to explore the underpinnings of several medical conditions, therapeutic and environmental influences. The emergence of PET/MR hybrid systems and the heterogeneity of SERT binding call for the development of efficient methods making the investigation of larger or vulnerable populations with limited scanner time and simultaneous changes in molecular and functional measures possible. We propose [11C]DASB bolus plus constant infusion for these applications and validate it against standard analyses of dynamic PET data. [11C]DASB bolus/infusion optimization was performed on data acquired after [11C]DASB bolus in 8 healthy subjects. Subsequently, 16 subjects underwent one scan using [11C]DASB bolus plus constant infusion with Kbol 160-179 min and one scan after [11C]DASB bolus for inter-method reliability analysis. Arterial blood sampling and metabolite analysis were performed for all scans. Distribution volumes (VT) were obtained using Logan plots for bolus scans and ratios between tissue and plasma parent activity for bolus plus infusion scans for different time spans of the scan (VT-70 for 60-70 min after start of tracer infusion, VT-90 for 75-90 min, VT-120 for 100-120 min) in 9 subjects. Omitting blood data, binding potentials (BPND) obtained using multilinear reference tissue modeling (MRTM2) and cerebellar gray matter as reference region were compared in 11 subjects. A Kbol of 160 min was observed to be optimal for rapid equilibration in thalamus and striatum. VT-70 showed good intraclass correlation coefficients (ICCs) of 0.61-0.70 for thalamus, striatal regions and olfactory cortex with bias ≤5.1% compared to bolus scans. ICCs increased to 0.72-0.78 for VT-90 and 0.77-0.93 for VT-120 in these regions. BPND-90 had negligible bias ≤2.5%, low variability ≤7.9% and ICCs of 0.74-0.87; BPND-120 had ICCs of 0.73-0.90. Low-binding cortical regions and cerebellar gray matter showed a positive bias of ~8% and ICCs 0.57-0.68 at VT-90. Cortical BPND suffered from high variability and bias; the best results were obtained for olfactory cortex and anterior cingulate cortex with ICC=0.74-0.75 for BPND-90. High-density regions amygdala and midbrain had a negative bias of -5.5% and -22.5% at VT-90 with ICC 0.70 and 0.63, respectively. We have optimized the equilibrium method with [11C]DASB bolus plus constant infusion and demonstrated good inter-method reliability with accepted standard methods and for SERT quantification using both VT and BPND in a range of different brain regions. With as little as 10-15 min of scanning, valid estimates of SERT VT and BPND in thalamus, amygdala, striatal and high-binding cortical regions could be obtained. Blood sampling seems vital for valid quantification of SERT in low-binding cortical regions. These methods allow the investigation of up to three subjects with a single radiosynthesis. Copyright © 2017 Elsevier Inc. All rights reserved.
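
    Under bolus-plus-infusion equilibrium, VT reduces to the ratio of tissue activity to metabolite-corrected plasma parent activity averaged over a late window, and a reference-tissue binding potential can be formed from VT ratios. The sketch below illustrates these two definitions on made-up time-activity values; note that the study's blood-free BPND was obtained with MRTM2, whereas here BPND is simply the VT ratio minus one (DVR - 1).

```python
import numpy as np

def equilibrium_vt(tissue_tac, plasma_parent_tac, window):
    """VT under bolus-plus-infusion equilibrium: ratio of tissue activity to
    metabolite-corrected plasma parent activity, averaged over a late window.
    `window` is a boolean mask selecting e.g. the 75-90 min frames (VT-90)."""
    tissue = np.asarray(tissue_tac, float)[window]
    plasma = np.asarray(plasma_parent_tac, float)[window]
    return np.mean(tissue / plasma)

def bp_nd(vt_target, vt_reference):
    """Binding potential from target and reference-region VT (DVR - 1);
    a simpler route than the MRTM2 modeling used in the study."""
    return vt_target / vt_reference - 1.0

# hypothetical frame mid-times (min) and activities (kBq/mL), not study data
t = np.array([60, 65, 70, 75, 80, 85, 90])
thalamus = np.array([14.1, 14.0, 13.9, 13.8, 13.8, 13.7, 13.7])
cerebellum = np.array([6.3, 6.2, 6.2, 6.1, 6.1, 6.0, 6.0])
plasma = np.array([0.52, 0.51, 0.51, 0.50, 0.50, 0.49, 0.49])

win = t >= 75                              # the 75-90 min window (VT-90)
vt_thal = equilibrium_vt(thalamus, plasma, win)
vt_cer = equilibrium_vt(cerebellum, plasma, win)
print(vt_thal, vt_cer, bp_nd(vt_thal, vt_cer))
```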

  10. The feasibility of universal DLP-to-risk conversion coefficients for body CT protocols

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Samei, Ehsan; Segars, W. Paul; Paulson, Erik K.; Frush, Donald P.

    2011-03-01

    The effective dose associated with computed tomography (CT) examinations is often estimated from dose-length product (DLP) using scanner-independent conversion coefficients. Such conversion coefficients are available for a small number of examinations, each covering an entire region of the body (e.g., head, neck, chest, abdomen and/or pelvis). Similar conversion coefficients, however, do not exist for examinations that cover a single organ or a sub-region of the body, as in the case of a multi-phase liver examination. In this study, we extended the DLP-to-effective dose conversion coefficient (k factor) to a wide range of body CT protocols and derived the corresponding DLP-to-cancer risk conversion coefficient (q factor). An extended cardiac-torso (XCAT) computational model was used, which represented a reference adult male patient. A range of body CT protocols used in clinical practice was categorized based on anatomical regions examined into 10 protocol classes. A validated Monte Carlo program was used to estimate the organ dose associated with each protocol class. Assuming the reference model to be 20 years old, effective dose and risk index (an index of the total risk for cancer incidence) were then calculated and normalized by DLP to obtain the k and q factors. The k and q factors varied across protocol classes; the coefficients of variation were 28% and 9%, respectively. The small variation exhibited by the q factor suggested the feasibility of universal q factors for a wide range of body CT protocols.
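
    The conversion coefficients are applied multiplicatively to the scanner-reported DLP. A minimal sketch follows; the k and q values, protocol names, and their units are placeholders, not the coefficients derived in the study.

```python
# Hypothetical conversion coefficients for two protocol classes; these are NOT
# values from the study, only placeholders to show how k and q are applied.
K_FACTORS = {"chest": 0.014, "abdomen_pelvis": 0.015}     # mSv per mGy*cm (placeholder)
Q_FACTORS = {"chest": 0.0023, "abdomen_pelvis": 0.0026}   # risk-index units per mGy*cm (placeholder)

def effective_dose_mSv(dlp_mGycm, protocol):
    """Effective dose estimated from the scanner-reported dose-length product."""
    return K_FACTORS[protocol] * dlp_mGycm

def risk_index(dlp_mGycm, protocol):
    """Risk index (total cancer-incidence risk) estimated from DLP via the q factor."""
    return Q_FACTORS[protocol] * dlp_mGycm

dlp = 450.0  # mGy*cm from a hypothetical abdomen/pelvis examination
print(effective_dose_mSv(dlp, "abdomen_pelvis"), risk_index(dlp, "abdomen_pelvis"))
```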

  11. Fecal electrolyte testing for evaluation of unexplained diarrhea: Validation of body fluid test accuracy in the absence of a reference method.

    PubMed

    Voskoboev, Nikolay V; Cambern, Sarah J; Hanley, Matthew M; Giesen, Callen D; Schilling, Jason J; Jannetto, Paul J; Lieske, John C; Block, Darci R

    2015-11-01

    Validation of tests performed on body fluids other than blood or urine can be challenging due to the lack of a reference method to confirm accuracy. The aim of this study was to evaluate alternate assessments of accuracy that laboratories can rely on to validate body fluid tests in the absence of a reference method using the example of sodium (Na(+)), potassium (K(+)), and magnesium (Mg(2+)) testing in stool fluid. Validations of fecal Na(+), K(+), and Mg(2+) were performed on the Roche cobas 6000 c501 (Roche Diagnostics) using residual stool specimens submitted for clinical testing. Spiked recovery, mixing studies, and serial dilutions were performed and % recovery of each analyte was calculated to assess accuracy. Results were confirmed by comparison to a reference method (ICP-OES, PerkinElmer). Mean recoveries for fecal electrolytes were Na(+) upon spiking=92%, mixing=104%, and dilution=105%; K(+) upon spiking=94%, mixing=96%, and dilution=100%; and Mg(2+) upon spiking=93%, mixing=98%, and dilution=100%. When autoanalyzer results were compared to reference ICP-OES results, Na(+) had a slope=0.94, intercept=4.1, and R(2)=0.99; K(+) had a slope=0.99, intercept=0.7, and R(2)=0.99; and Mg(2+) had a slope=0.91, intercept=-4.6, and R(2)=0.91. Calculated osmotic gap using both methods were highly correlated with slope=0.95, intercept=4.5, and R(2)=0.97. Acid pretreatment increased magnesium recovery from a subset of clinical specimens. A combination of mixing, spiking, and dilution recovery experiments are an acceptable surrogate for assessing accuracy in body fluid validations in the absence of a reference method. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
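
    In the absence of a reference method, accuracy is inferred from how well known manipulations of a specimen are recovered. The sketch below shows the three recovery calculations on hypothetical stool-fluid sodium results; all numbers are invented.

```python
def spike_recovery(base_result, spiked_result, spike_added):
    """% recovery of analyte added to a patient stool-fluid pool."""
    return 100.0 * (spiked_result - base_result) / spike_added

def mixing_recovery(measured_mix, result_a, result_b, frac_a=0.5):
    """% recovery for a mix of two specimens versus the value expected
    from their individual results and mixing proportions."""
    expected = frac_a * result_a + (1.0 - frac_a) * result_b
    return 100.0 * measured_mix / expected

def dilution_recovery(measured_diluted, neat_result, dilution_factor):
    """% recovery after serial dilution (e.g. 1:2 -> dilution_factor = 2)."""
    return 100.0 * measured_diluted * dilution_factor / neat_result

# hypothetical sodium results in mmol/L, not data from the study
print(spike_recovery(base_result=40.0, spiked_result=86.0, spike_added=50.0))           # ~92%
print(mixing_recovery(measured_mix=62.0, result_a=40.0, result_b=80.0))                  # ~103%
print(dilution_recovery(measured_diluted=21.0, neat_result=40.0, dilution_factor=2))     # ~105%
```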

  12. Reference Gene Validation for RT-qPCR, a Note on Different Available Software Packages

    PubMed Central

    De Spiegelaere, Ward; Dern-Wieloch, Jutta; Weigel, Roswitha; Schumacher, Valérie; Schorle, Hubert; Nettersheim, Daniel; Bergmann, Martin; Brehm, Ralph; Kliesch, Sabine; Vandekerckhove, Linos; Fink, Cornelia

    2015-01-01

    Background: An appropriate normalization strategy is crucial for data analysis from real time reverse transcription polymerase chain reactions (RT-qPCR). It is widely supported to identify and validate stable reference genes, since no single biological gene is stably expressed between cell types or within cells under different conditions. Different algorithms exist to validate optimal reference genes for normalization. Applying human cells, we here compare the three main methods to the online available RefFinder tool that integrates these algorithms along with R-based software packages which include the NormFinder and GeNorm algorithms. Results: 14 candidate reference genes were assessed by RT-qPCR in two sample sets, i.e. a set of samples of human testicular tissue containing carcinoma in situ (CIS), and a set of samples from the human adult Sertoli cell line (FS1) either cultured alone or in co-culture with the seminoma like cell line (TCam-2) or with equine bone marrow derived mesenchymal stem cells (eBM-MSC). Expression stabilities of the reference genes were evaluated using geNorm, NormFinder, and BestKeeper. Similar results were obtained by the three approaches for the most and least stably expressed genes. The R-based packages NormqPCR, SLqPCR and the NormFinder for R script gave identical gene rankings. Interestingly, different outputs were obtained between the original software packages and the RefFinder tool, which is based on raw Cq values for input. When the raw data were reanalysed assuming 100% efficiency for all genes, then the outputs of the original software packages were similar to the RefFinder software, indicating that RefFinder outputs may be biased because PCR efficiencies are not taken into account. Conclusions: This report shows that assay efficiency is an important parameter for reference gene validation. New software tools that incorporate these algorithms should be carefully validated prior to use. PMID:25825906

  13. Reference gene validation for RT-qPCR, a note on different available software packages.

    PubMed

    De Spiegelaere, Ward; Dern-Wieloch, Jutta; Weigel, Roswitha; Schumacher, Valérie; Schorle, Hubert; Nettersheim, Daniel; Bergmann, Martin; Brehm, Ralph; Kliesch, Sabine; Vandekerckhove, Linos; Fink, Cornelia

    2015-01-01

    An appropriate normalization strategy is crucial for data analysis from real time reverse transcription polymerase chain reactions (RT-qPCR). It is widely supported to identify and validate stable reference genes, since no single biological gene is stably expressed between cell types or within cells under different conditions. Different algorithms exist to validate optimal reference genes for normalization. Applying human cells, we here compare the three main methods to the online available RefFinder tool that integrates these algorithms along with R-based software packages which include the NormFinder and GeNorm algorithms. 14 candidate reference genes were assessed by RT-qPCR in two sample sets, i.e. a set of samples of human testicular tissue containing carcinoma in situ (CIS), and a set of samples from the human adult Sertoli cell line (FS1) either cultured alone or in co-culture with the seminoma like cell line (TCam-2) or with equine bone marrow derived mesenchymal stem cells (eBM-MSC). Expression stabilities of the reference genes were evaluated using geNorm, NormFinder, and BestKeeper. Similar results were obtained by the three approaches for the most and least stably expressed genes. The R-based packages NormqPCR, SLqPCR and the NormFinder for R script gave identical gene rankings. Interestingly, different outputs were obtained between the original software packages and the RefFinder tool, which is based on raw Cq values for input. When the raw data were reanalysed assuming 100% efficiency for all genes, then the outputs of the original software packages were similar to the RefFinder software, indicating that RefFinder outputs may be biased because PCR efficiencies are not taken into account. This report shows that assay efficiency is an important parameter for reference gene validation. New software tools that incorporate these algorithms should be carefully validated prior to use.
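
    The efficiency issue flagged here is easy to demonstrate with a Pfaffl-style efficiency-corrected ratio: the same Cq values yield different fold-changes depending on whether 100% efficiency (E = 2.0) or a measured efficiency is assumed. A small sketch with invented Cq values follows.

```python
def relative_expression(e_target, cq_target_ctrl, cq_target_sample,
                        e_ref, cq_ref_ctrl, cq_ref_sample):
    """Efficiency-corrected ratio (Pfaffl-style): the target fold-change is
    normalized by the reference-gene fold-change, each using its own
    amplification efficiency E (E = 2.0 means 100% efficiency)."""
    target = e_target ** (cq_target_ctrl - cq_target_sample)
    reference = e_ref ** (cq_ref_ctrl - cq_ref_sample)
    return target / reference

# hypothetical Cq values; compare 100% efficiency with a measured 90% (E = 1.9)
args = dict(cq_target_ctrl=28.0, cq_target_sample=25.0,
            cq_ref_ctrl=20.0, cq_ref_sample=19.6)
print(relative_expression(e_target=2.0, e_ref=2.0, **args))   # assumes 100% efficiency
print(relative_expression(e_target=1.9, e_ref=2.0, **args))   # efficiency-corrected target
```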

  14. Predictive performance of rainfall thresholds for shallow landslides in Switzerland from gridded daily data

    NASA Astrophysics Data System (ADS)

    Leonarduzzi, Elena; Molnar, Peter; McArdell, Brian W.

    2017-08-01

    A high-resolution gridded daily precipitation data set was combined with a landslide inventory containing over 2000 events in the period 1972-2012 to analyze rainfall thresholds which lead to landsliding in Switzerland. We colocated triggering rainfall to landslides, developed distributions of triggering and nontriggering rainfall event properties, and determined rainfall thresholds and intensity-duration ID curves and validated their performance. The best predictive performance was obtained by the intensity-duration ID threshold curve, followed by peak daily intensity Imax and mean event intensity Imean. Event duration by itself had very low predictive power. A single country-wide threshold of Imax = 28 mm/d was extended into space by regionalization based on surface erodibility and local climate (mean daily precipitation). It was found that wetter local climate and lower erodibility led to significantly higher rainfall thresholds required to trigger landslides. However, we showed that the improvement in model performance due to regionalization was marginal and much lower than what can be achieved by having a high-quality landslide database. Reference cases in which the landslide locations and timing were randomized and the landslide sample size was reduced showed the sensitivity of the Imax rainfall threshold model. Jack-knife and cross-validation experiments demonstrated that the model was robust. The results reported here highlight the potential of using rainfall ID threshold curves and rainfall threshold values for predicting the occurrence of landslides on a country or regional scale with possible applications in landslide warning systems, even with daily data.
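
    Predictive performance of a fixed threshold such as Imax = 28 mm/d is typically summarized with contingency-table skill scores. A minimal sketch with invented daily maxima and landslide flags follows.

```python
import numpy as np

def threshold_skill(imax_mm_per_day, landslide_day, threshold=28.0):
    """Contingency-table skill of a fixed daily-intensity threshold
    (the 28 mm/d country-wide value quoted in the abstract)."""
    pred = np.asarray(imax_mm_per_day) > threshold
    obs = np.asarray(landslide_day, dtype=bool)
    hits = np.sum(pred & obs)
    misses = np.sum(~pred & obs)
    false_alarms = np.sum(pred & ~obs)
    pod = hits / (hits + misses)                   # probability of detection
    far = false_alarms / (hits + false_alarms)     # false alarm ratio
    csi = hits / (hits + misses + false_alarms)    # critical success index
    return pod, far, csi

# hypothetical daily rainfall maxima (mm/d) and landslide occurrence flags
rain = [5, 42, 12, 31, 0, 55, 18, 27, 60, 3]
slides = [0, 1, 0, 0, 0, 1, 1, 0, 1, 0]
print(threshold_skill(rain, slides))
```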

  15. Validated age-specific reference values for CSF total protein levels in children.

    PubMed

    Kahlmann, V; Roodbol, J; van Leeuwen, N; Ramakers, C R B; van Pelt, D; Neuteboom, R F; Catsman-Berrevoets, C E; de Wit, M C Y; Jacobs, B C

    2017-07-01

    To define age-specific reference values for cerebrospinal fluid (CSF) total protein levels for children and validate these values in children with Guillain-Barré syndrome (GBS), acute disseminated encephalomyelitis (ADEM) and multiple sclerosis (MS). Reference values for CSF total protein levels were determined in an extensive cohort of diagnostic samples from children (<18 years) evaluated at Erasmus Medical Center/Sophia Children's Hospital. These reference values were confirmed in children diagnosed with disorders unrelated to raised CSF total protein level and validated in children with GBS, ADEM and MS. The test results of 6145 diagnostic CSF samples from 3623 children were used to define reference values. The reference values based on the upper limit of the 95% CI (i.e. the upper limit of normal) were 0.25 g/L for 6 months-2 years, 0.25 g/L for 2-6 years, 0.28 g/L for 6-12 years, and 0.34 g/L for 12-18 years. These reference values were confirmed in a subgroup of 378 children diagnosed with disorders that are not typically associated with increased CSF total protein. In addition, the CSF total protein levels in these children in the first 6 months after birth were highly variable (median 0.47 g/L, IQR 0.26-0.65). According to these new reference values, CSF total protein level was elevated in 85% of children with GBS, 66% with ADEM and 23% with MS. More accurate age-specific reference values for CSF total protein levels in children were determined. These new reference values are more sensitive than currently used values for diagnosing GBS and ADEM in children. Copyright © 2017 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.

  16. Identification and Validation of Reference Genes for RT-qPCR Analysis in Non-Heading Chinese Cabbage Flowers

    PubMed Central

    Wang, Cheng; Cui, Hong-Mi; Huang, Tian-Hong; Liu, Tong-Kun; Hou, Xi-Lin; Li, Ying

    2016-01-01

    Non-heading Chinese cabbage (Brassica rapa ssp. chinensis Makino) is an important vegetable member of Brassica rapa crops. It exhibits a typical sporophytic self-incompatibility (SI) system and is an ideal model plant to explore the mechanism of SI. Gene expression studies are frequently used to unravel complex genetic mechanisms, and in such studies appropriate reference gene selection is vital. Validation of reference genes has been conducted neither in Brassica rapa flowers nor for the SI trait. In this study, 13 candidate reference genes were selected and examined systematically in 96 non-heading Chinese cabbage flower samples that represent four strategic groups in compatible and self-incompatible lines of non-heading Chinese cabbage. Two RT-qPCR analysis software packages, geNorm and NormFinder, were used to evaluate the expression stability of these genes systematically. Results revealed that best-ranked reference genes should be selected according to specific sample subsets. DNAJ, UKN1, and PP2A were identified as the most stable reference genes among all samples. Moreover, our research further revealed that the widely used reference genes, CYP and ACP, were the least suitable reference genes in most non-heading Chinese cabbage flower sample sets. To further validate the suitability of the reference genes identified in this study, the expression levels of the SRK and Exo70A1 genes, which play important roles in regulating the interaction between pollen and stigma, were studied. Our study presented the first systematic study of reference gene(s) selection for SI study and provided guidelines to obtain more accurate RT-qPCR results in non-heading Chinese cabbage. PMID:27375663

  17. Regional flow duration curves: Geostatistical techniques versus multivariate regression

    USGS Publications Warehouse

    Pugliese, Alessio; Farmer, William H.; Castellarin, Attilio; Archfield, Stacey A.; Vogel, Richard M.

    2016-01-01

    A period-of-record flow duration curve (FDC) represents the relationship between the magnitude and frequency of daily streamflows. Prediction of FDCs is of great importance for locations characterized by sparse or missing streamflow observations. We present a detailed comparison of two methods which are capable of predicting an FDC at ungauged basins: (1) an adaptation of the geostatistical method, Top-kriging, employing a linear weighted average of dimensionless empirical FDCs, standardised with a reference streamflow value; and (2) regional multiple linear regression of streamflow quantiles, perhaps the most common method for the prediction of FDCs at ungauged sites. In particular, Top-kriging relies on a metric for expressing the similarity between catchments computed as the negative deviation of the FDC from a reference streamflow value, which we termed total negative deviation (TND). Comparisons of these two methods are made in 182 largely unregulated river catchments in the southeastern U.S. using a three-fold cross-validation algorithm. Our results reveal that the two methods perform similarly throughout flow-regimes, with average Nash-Sutcliffe Efficiencies of 0.566 and 0.662 (0.883 and 0.829 on log-transformed quantiles) for the geostatistical and the linear regression models, respectively. The differences in the reproduction of FDCs occurred mostly for low flows with exceedance probability (i.e. duration) above 0.98.
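
    Two of the building blocks of this comparison, the empirical period-of-record FDC and the Nash-Sutcliffe Efficiency used to score predicted quantiles, are compact to compute. A sketch with synthetic daily flows follows; Weibull plotting positions are one common convention and may differ from the study's exact choice.

```python
import numpy as np

def empirical_fdc(daily_flows):
    """Period-of-record flow duration curve: sorted flows against their
    empirical exceedance probabilities (Weibull plotting positions)."""
    q = np.sort(np.asarray(daily_flows, float))[::-1]
    p_exceed = np.arange(1, q.size + 1) / (q.size + 1)
    return p_exceed, q

def nash_sutcliffe(predicted, observed):
    """Nash-Sutcliffe Efficiency between predicted and observed FDC quantiles."""
    predicted = np.asarray(predicted, float)
    observed = np.asarray(observed, float)
    return 1.0 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)

# toy example with synthetic daily flows at a "gauged" site
rng = np.random.default_rng(2)
obs_flows = rng.lognormal(mean=1.0, sigma=0.8, size=365)
p, q_obs = empirical_fdc(obs_flows)
q_pred = q_obs * rng.normal(1.0, 0.05, size=q_obs.size)   # stand-in for a regional prediction
print(nash_sutcliffe(q_pred, q_obs))
```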

  18. Identification of Suitable Reference Genes for Gene Expression Normalization in qRT-PCR Analysis in Watermelon

    PubMed Central

    Gao, Lingyun; Zhao, Shuang; Jiang, Wei; Huang, Yuan; Bie, Zhilong

    2014-01-01

    Watermelon is one of the major Cucurbitaceae crops and the recent availability of genome sequence greatly facilitates the fundamental researches on it. Quantitative real-time reverse transcriptase PCR (qRT–PCR) is the preferred method for gene expression analyses, and using validated reference genes for normalization is crucial to ensure the accuracy of this method. However, a systematic validation of reference genes has not been conducted on watermelon. In this study, transcripts of 15 candidate reference genes were quantified in watermelon using qRT–PCR, and the stability of these genes was compared using geNorm and NormFinder. geNorm identified ClTUA and ClACT, ClEF1α and ClACT, and ClCAC and ClTUA as the best pairs of reference genes in watermelon organs and tissues under normal growth conditions, abiotic stress, and biotic stress, respectively. NormFinder identified ClYLS8, ClUBCP, and ClCAC as the best single reference genes under the above experimental conditions, respectively. ClYLS8 and ClPP2A were identified as the best reference genes across all samples. Two to nine reference genes were required for more reliable normalization depending on the experimental conditions. The widely used watermelon reference gene 18SrRNA was less stable than the other reference genes under the experimental conditions. Catalase family genes were identified in watermelon genome, and used to validate the reliability of the identified reference genes. ClCAT1and ClCAT2 were induced and upregulated in the first 24 h, whereas ClCAT3 was downregulated in the leaves under low temperature stress. However, the expression levels of these genes were significantly overestimated and misinterpreted when 18SrRNA was used as a reference gene. These results provide a good starting point for reference gene selection in qRT–PCR analyses involving watermelon. PMID:24587403

  19. Identification of suitable reference genes for gene expression normalization in qRT-PCR analysis in watermelon.

    PubMed

    Kong, Qiusheng; Yuan, Jingxian; Gao, Lingyun; Zhao, Shuang; Jiang, Wei; Huang, Yuan; Bie, Zhilong

    2014-01-01

    Watermelon is one of the major Cucurbitaceae crops and the recent availability of genome sequence greatly facilitates the fundamental researches on it. Quantitative real-time reverse transcriptase PCR (qRT-PCR) is the preferred method for gene expression analyses, and using validated reference genes for normalization is crucial to ensure the accuracy of this method. However, a systematic validation of reference genes has not been conducted on watermelon. In this study, transcripts of 15 candidate reference genes were quantified in watermelon using qRT-PCR, and the stability of these genes was compared using geNorm and NormFinder. geNorm identified ClTUA and ClACT, ClEF1α and ClACT, and ClCAC and ClTUA as the best pairs of reference genes in watermelon organs and tissues under normal growth conditions, abiotic stress, and biotic stress, respectively. NormFinder identified ClYLS8, ClUBCP, and ClCAC as the best single reference genes under the above experimental conditions, respectively. ClYLS8 and ClPP2A were identified as the best reference genes across all samples. Two to nine reference genes were required for more reliable normalization depending on the experimental conditions. The widely used watermelon reference gene 18SrRNA was less stable than the other reference genes under the experimental conditions. Catalase family genes were identified in watermelon genome, and used to validate the reliability of the identified reference genes. ClCAT1and ClCAT2 were induced and upregulated in the first 24 h, whereas ClCAT3 was downregulated in the leaves under low temperature stress. However, the expression levels of these genes were significantly overestimated and misinterpreted when 18SrRNA was used as a reference gene. These results provide a good starting point for reference gene selection in qRT-PCR analyses involving watermelon.

  20. Validation of asthma recording in electronic health records: a systematic review

    PubMed Central

    Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J

    2017-01-01

    Objective: To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background: Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods: We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results: Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion: Attaining high PPVs (>80%) is possible using each of the discussed validation methods. Identifying asthma cases in electronic health records is possible with high sensitivity, specificity or PPV, by combining multiple data sources, or by focusing on specific test measures. Studies testing a range of case definitions show wide variation in the validity of each definition, suggesting this may be important for obtaining asthma definitions with optimal validity. PMID:29238227
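
    The validation statistics summarized in the review derive from a 2x2 table of the database case definition against the reference standard. A minimal sketch with hypothetical counts follows.

```python
def validation_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table comparing a
    database asthma case definition against the reference standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# hypothetical counts from a manual record review, not from any cited study
print(validation_metrics(tp=170, fp=30, fn=20, tn=780))
```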

  1. Remote Sensing of Lake Ice Phenology in Alaska

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Pavelsky, T.

    2017-12-01

    Lake ice phenology (e.g. ice break-up and freeze-up timing) in Alaska is potentially sensitive to climate change. However, there are few current lake ice records in this region, which hinders the comprehensive understanding of interactions between climate change and lake processes. To provide a lake ice database covering a comparatively long time period (2000-2017) and a large spatial extent (4000+ lakes) in Alaska, we have developed an algorithm to detect the timing of lake ice using Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data. This approach generally consists of three major steps. First, we use a cloud mask (MOD09GA) to filter out satellite images with heavy cloud contamination. Second, daily MODIS reflectance values (MOD09GQ) of the lake surface are used to separate ice pixels from water pixels. The ice status of lakes can then be identified based on the fraction of ice pixels. Third, to improve the accuracy of ice phenology detection, we execute post-processing quality control to reduce false ice events caused by outliers. We validate the proposed algorithm over six lakes by comparing with Landsat-based reference data. Validation results indicate a high correlation between the MODIS results and reference data, with normalized root mean square error (NRMSE) ranging from 1.7% to 4.6%. The time series of this lake ice product is then examined to analyze the spatial and temporal patterns of lake ice phenology.
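
    The reported NRMSE can be reproduced from paired MODIS and Landsat break-up dates. The sketch below normalizes the RMSE by the range of the reference dates, which is one common convention and may not match the paper's exact definition; all values are invented.

```python
import numpy as np

def nrmse_percent(modis_doy, reference_doy):
    """Normalized RMSE (%) between MODIS-derived and Landsat-reference
    ice break-up dates (day of year)."""
    m = np.asarray(modis_doy, float)
    r = np.asarray(reference_doy, float)
    rmse = np.sqrt(np.mean((m - r) ** 2))
    return 100.0 * rmse / (r.max() - r.min())

# hypothetical break-up day-of-year values for six lakes
modis = [135, 142, 128, 150, 138, 145]
landsat = [133, 144, 130, 148, 137, 147]
print(nrmse_percent(modis, landsat))
```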

  2. Size-dependent validation of MODIS MCD64A1 burned area over six vegetation types in boreal Eurasia: Large underestimation in croplands.

    PubMed

    Zhu, Chunmao; Kobayashi, Hideki; Kanaya, Yugo; Saito, Masahiko

    2017-07-05

    Pollutants emitted from wildfires in boreal Eurasia can be transported to the Arctic, and their subsequent deposition could accelerate global warming. The Moderate Resolution Imaging Spectroradiometer (MODIS) MCD64A1 burned area product is the basis of fire emission products. However, uncertainties due to the "moderate resolution" (500 m) characteristic of the MODIS sensor could be introduced. Here, we present a size-dependent validation of MCD64A1 with reference to higher resolution (better than 30 m) satellite products (Landsat 7 ETM+, RapidEye, WorldView-2, and GeoEye-1) for six ecotypes over 12 regions of boreal Eurasia. We considered the 2012 boreal Eurasia burning season when severe wildfires occurred and when Arctic sea ice extent was historically low. Among the six ecotypes, we found MCD64A1 burned areas comprised only 13% of the reference products in croplands because of inadequate detection of small fires (<100 ha). Our results indicate that over all ecotypes, the actual burned area in boreal Eurasia (15,256 km²) could have been ~16% greater than suggested by MCD64A1 (13,187 km²) when applying the correction factors proposed in this study.

  3. Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model.

    PubMed

    Lee, Sangyeol; Reinhardt, Joseph M; Cattin, Philippe C; Abràmoff, Michael D

    2010-08-01

    Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image registration technique to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating simulated retinal images by modeling the geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation process that can be used for any retinal image registration method by tracing through the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. The proposed method can be used to perform an accuracy evaluation over the whole image, so that distortion in the non-overlapping regions of the montage components can be easily assessed. We demonstrate the technique by generating test image sets with a variety of overlap conditions and compare the accuracy of several retinal image registration models. Copyright 2010 Elsevier B.V. All rights reserved.

  4. Design and validation of an open-source library of dynamic reference frames for research and education in optical tracking.

    PubMed

    Brown, Alisa; Uneri, Ali; Silva, Tharindu De; Manbachi, Amir; Siewerdsen, Jeffrey H

    2018-04-01

    Dynamic reference frames (DRFs) are a common component of modern surgical tracking systems; however, the limited number of commercially available DRFs poses a constraint in developing systems, especially for research and education. This work presents the design and validation of a large, open-source library of DRFs compatible with passive, single-face tracking systems, such as Polaris stereoscopic infrared trackers (NDI, Waterloo, Ontario). An algorithm was developed to create new DRF designs consistent with intra- and intertool design constraints and convert to computer-aided design (CAD) files suitable for three-dimensional printing. A library of 10 such groups, each with 6 to 10 DRFs, was produced and tracking performance was validated in comparison to a standard commercially available reference, including pivot calibration, fiducial registration error (FRE), and target registration error (TRE). Pivot tests showed calibration error [Formula: see text], indistinguishable from the reference. FRE was [Formula: see text], and TRE in a CT head phantom was [Formula: see text], both equivalent to the reference. The library of DRFs offers a useful resource for surgical navigation research and could be extended to other tracking systems and alternative design constraints.

  5. Handwriting assessment of Franco-Quebec primary school-age students

    PubMed

    Couture, Mélanie; Morin, Marie-France; Coallier, Mélissa; Lavigne, Audrey; Archambault, Patricia; Bolduc, Émilie; Chartier, Émilie; Liard, Karolane; Jasmin, Emmanuelle

    2016-12-01

    Reasons for referring school-age children to occupational therapy mainly relate to handwriting problems. However, there are no validated tools or reference values for assessing handwriting in francophone children in Canada. This study aimed to adapt and validate the writing tasks described in an English Canadian handwriting assessment protocol and to develop reference values for handwriting speed for francophone children. Three writing tasks from the Handwriting Assessment Protocol-2nd Edition (near-point and far-point copying and dictation) were adapted for Québec French children and administered to 141 Grade 1 (n = 73) and Grade 2 (n = 68) students. Reference values for handwriting speed were obtained for near-point and far-point copying tasks. This adapted protocol and these reference values for speed will improve occupational therapy handwriting assessments for the target population.

  6. Revealing the missing expressed genes beyond the human reference genome by RNA-Seq.

    PubMed

    Chen, Geng; Li, Ruiyuan; Shi, Leming; Qi, Junyi; Hu, Pengzhan; Luo, Jian; Liu, Mingyao; Shi, Tieliu

    2011-12-02

    The complete and accurate human reference genome is important for functional genomics research. Therefore, the incomplete reference genome and individual-specific sequences have significant effects on various studies. We used two RNA-Seq datasets from human brain tissues and 10 mixed cell lines to investigate the completeness of the human reference genome. First, we demonstrated that, of the previously identified ~5 Mb of Asian and ~5 Mb of African novel sequences absent from the NCBI build 36 human reference genome, ~211 kb and ~201 kb, respectively, could be transcribed. Our results suggest that many of those transcribed regions are not specific to Asians and Africans but are also present in Caucasians. Then, we found that 104 RefSeq genes that are unalignable to NCBI build 37 are expressed above 0.1 RPKM in brain and cell lines. Fifty-five of them are conserved across human, chimpanzee and macaque, suggesting that there are still a significant number of functional human genes absent from the human reference genome. Moreover, we identified hundreds of novel transcript contigs that cannot be aligned to NCBI build 37, RefSeq genes and EST sequences. Some of those novel transcript contigs are also conserved among human, chimpanzee and macaque. By positioning those contigs onto the human genome, we identified several large deletions in the reference genome. Several conserved novel transcript contigs were further validated by RT-PCR. Our findings demonstrate that a significant number of genes are still absent from the incomplete human reference genome, highlighting the importance of further refining the human reference genome and curating those missing genes. Our study also shows the importance of de novo transcriptome assembly. The transcriptome-based comparison between the reference genome and other related human genomes provides an alternative way to refine the human reference genome.

  7. In vitro vaccine potency testing: a proposal for reducing animal use for requalification testing.

    PubMed

    Brown, K; Stokes, W

    2012-01-01

    This paper proposes a program under which the use of animals for requalification of in vitro potency tests could be eliminated. Standard References (USDA/CVB nomenclature) would be developed, characterized, stored and monitored by selected reference laboratories worldwide. These laboratories would employ scientists skilled in protein and glycoprotein chemistry and equipped with state-of-the-art instruments for required analyses. After Standard References are established, the reference laboratories would provide them to the animal health industry as "gold standards". Companies would then establish and validate a correlation between the Standard Reference and the company Master Reference (USDA/CVB nomenclature) using an internal in vitro assay. After this correlation is established, the company could use the Standard References for qualifying, monitoring and requalifying company Master References without the use of animals. Such a program would eliminate the need for animals for requalification of Master References and the need for each company to develop and validate a battery of Master Reference Monitoring assays. It would also provide advantages in terms of reduced costs and reduced time for requalification testing. As such it would provide a strong incentive for companies to develop and use in vitro assays for potency testing.

  8. A Comprehensive Linkage Map of the Dog Genome

    PubMed Central

    Wong, Aaron K.; Ruhe, Alison L.; Dumont, Beth L.; Robertson, Kathryn R.; Guerrero, Giovanna; Shull, Sheila M.; Ziegle, Janet S.; Millon, Lee V.; Broman, Karl W.; Payseur, Bret A.; Neff, Mark W.

    2010-01-01

    We have leveraged the reference sequence of a boxer to construct the first complete linkage map for the domestic dog. The new map improves access to the dog's unique biology, from human disease counterparts to fascinating evolutionary adaptations. The map was constructed with ∼3000 microsatellite markers developed from the reference sequence. Familial resources afforded 450 mostly phase-known meioses for map assembly. The genotype data supported a framework map with ∼1500 loci. An additional ∼1500 markers served as map validators, contributing modestly to estimates of recombination rate but supporting the framework content. Data from ∼22,000 SNPs informing on a subset of meioses supported map integrity. The sex-averaged map extended 21 M and revealed marked region- and sex-specific differences in recombination rate. The map will enable empiric coverage estimates and multipoint linkage analysis. Knowledge of the variation in recombination rate will also inform on genomewide patterns of linkage disequilibrium (LD), and thus benefit association, selective sweep, and phylogenetic mapping approaches. The computational and wet-bench strategies can be applied to the reference genome of any nonmodel organism to assemble a de novo linkage map. PMID:19966068

  9. Terrestrial reference standard sites for postlaunch sensor calibration

    USGS Publications Warehouse

    Teillet, P.M.; Chander, G.

    2010-01-01

    In an era when the number of Earth observation satellites is rapidly growing and measurements from satellite sensors are used to address increasingly urgent global issues, often through synergistic and operational combinations of data from multiple sources, it is imperative that scientists and decision-makers are able to rely on the accuracy of Earth observation data products. The characterization and calibration of these sensors, particularly their relative biases, are vital to the success of the developing integrated Global Earth Observation System of Systems (GEOSS) for coordinated and sustained observations of the Earth. This can only reliably be achieved in the postlaunch environment through the careful use of observations by multiple sensor systems over common, well-characterized terrestrial targets (i.e., on or near the Earth's surface). Through greater access to and understanding of these vital reference standard sites and their use, the validity and utility of information gained from Earth remote sensing will continue to improve. This paper provides a brief overview of the use of reference standard sites for postlaunch sensor radiometric calibration from historical, current, and future perspectives. Emphasis is placed on optical sensors operating in the visible, near-infrared, and shortwave infrared spectral regions.

  10. Using the Wisconsin-Ohio Reference Evaluation Program (WOREP) to Improve Training and Reference Services

    ERIC Educational Resources Information Center

    Novotny, Eric; Rimland, Emily

    2007-01-01

    This article discusses a service quality study conducted in the Pennsylvania State University Libraries. The Wisconsin-Ohio Reference Evaluation Program survey was selected as a valid, standardized instrument. We present our results, highlighting the impact on reference training. A second survey a year later demonstrated that focusing on…

  11. Validation of reference genes for RT-qPCR analysis in Herbaspirillum seropedicae.

    PubMed

    Pessoa, Daniella Duarte Villarinho; Vidal, Marcia Soares; Baldani, José Ivo; Simoes-Araujo, Jean Luiz

    2016-08-01

    The RT-qPCR technique needs a validated set of reference genes for ensuring the consistency of the results from the gene expression. Expression stabilities for 9 genes from Herbaspirillum seropedicae, strain HRC54, grown with different carbon sources were calculated using geNorm and NormFinder, and the gene rpoA showed the best stability values. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. A novel adaptive scoring system for segmentation validation with multiple reference masks

    NASA Astrophysics Data System (ADS)

    Moltz, Jan H.; Rühaak, Jan; Hahn, Horst K.; Peitgen, Heinz-Otto

    2011-03-01

    The development of segmentation algorithms for different anatomical structures and imaging protocols is an important task in medical image processing. The validation of these methods, however, is often treated as a subordinate task. Since manual delineations, which are widely used as a surrogate for the ground truth, exhibit an inherent uncertainty, it is preferable to use multiple reference segmentations for an objective validation. This requires a consistent framework that should fulfill three criteria: 1) it should treat all reference masks equally a priori and not demand consensus between the experts; 2) it should evaluate the algorithmic performance in relation to the inter-reference variability, i.e., be more tolerant where the experts disagree about the true segmentation; 3) it should produce results that are comparable for different test data. We show why current state-of-the-art frameworks, such as the one used at several MICCAI segmentation challenges, do not fulfill these criteria and propose a new validation methodology. A score is computed in an adaptive way for each individual segmentation problem, using a combination of volume- and surface-based comparison metrics. These are transformed into the score by relating them to the variability between the reference masks, which can be measured by comparing the masks with each other or with an estimated ground truth. We present examples from a study on liver tumor segmentation in CT scans where our score shows a more adequate assessment of the segmentation results than the MICCAI framework.
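
    One way to read the proposed methodology is that an algorithm's error on a given case is expressed relative to the disagreement among the reference masks themselves. The sketch below is only an illustration of that idea, using the Jaccard distance as a single comparison metric; the published score combines volume- and surface-based metrics in a more elaborate way, and the scaling used here is invented:

    ```python
    import numpy as np

    def jaccard_distance(a, b):
        """1 - Jaccard overlap between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        union = np.logical_or(a, b).sum()
        return 1.0 - np.logical_and(a, b).sum() / union if union else 0.0

    def adaptive_score(algo_mask, reference_masks):
        """Score algorithm error relative to inter-reference variability.

        Illustrative only (not the published formula); requires at least two
        reference masks. An error equal to the experts' own disagreement maps
        to 0, a perfect match to 100, so the score is more tolerant where the
        references disagree.
        """
        algo_err = np.mean([jaccard_distance(algo_mask, r) for r in reference_masks])
        pairs = [(i, j) for i in range(len(reference_masks))
                 for j in range(i + 1, len(reference_masks))]
        ref_var = np.mean([jaccard_distance(reference_masks[i], reference_masks[j])
                           for i, j in pairs])
        return max(0.0, 100.0 * (1.0 - algo_err / max(ref_var, 1e-6)))
    ```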

  13. Validation and Test-Retest Reliability of New Thermographic Technique Called Thermovision Technique of Dry Needling for Gluteus Minimus Trigger Points in Sciatica Subjects and TrPs-Negative Healthy Volunteers

    PubMed Central

    Rychlik, Michał; Samborski, Włodzimierz

    2015-01-01

    The aim of this study was to assess the validity and test-retest reliability of the Thermovision Technique of Dry Needling (TTDN) for the gluteus minimus muscle. TTDN is a new thermography approach used to support trigger point (TrPs) diagnostic criteria by the presence of short-term vasomotor reactions occurring in the area where TrPs refer pain. Method. Thirty chronic sciatica patients (n=15 TrPs-positive and n=15 TrPs-negative) and 15 healthy volunteers were evaluated by TTDN three times during two consecutive days based on TrPs of the gluteus minimus muscle confirmed additionally by referred pain presence. TTDN employs average temperature (Tavr), maximum temperature (Tmax), low/high isothermal-area, and the autonomic referred pain phenomenon (AURP) that reflects vasodilatation/vasoconstriction. Validity and test-retest reliability were assessed concurrently. Results. Two components of TTDN validity and reliability, Tavr and AURP, had almost perfect agreement according to κ (e.g., thigh: 0.880 and 0.938; calf: 0.902 and 0.956, resp.). The sensitivity for Tavr, Tmax, AURP, and high isothermal-area was 100% for each measure, but only Tavr and AURP reached 100% specificity. Conclusion. TTDN is a valid and reliable method for Tavr and AURP measurement to support TrPs diagnostic criteria for the gluteus minimus muscle when a digitally evoked referred pain pattern is present. PMID:26137486

  14. A comparison of the reproductive physiology of largemouth bass, Micropterus salmoides, collected from the Escambia and Blackwater Rivers in Florida.

    PubMed

    Orlando, E F; Denslow, N D; Folmar, L C; Guillette, L J

    1999-03-01

    Largemouth bass (LMB), Micropterus salmoides, were taken from the Escambia River (contaminated site) and the Blackwater River (reference site) near Pensacola, Florida. The Escambia River collection occurred downstream of the effluent from two identified point sources of pollution. These point sources included a coal-fired electric power plant and a chemical company. Conversely, the Blackwater River's headwaters and most of its length flow within a state park. Although there is some development on the lower part of the Blackwater River, fish were collected in the more pristine upper regions. Fish were captured by electroshocking and were maintained in aerated coolers. Physical measurements were obtained, blood was taken, and liver and gonads were removed. LMB plasma was assayed for the concentration of 17β-estradiol (E2) and testosterone using validated radioimmunoassays. The presence of vitellogenin was determined by gel electrophoresis (SDS-PAGE) and Western blotting using a monoclonal antibody validated for largemouth bass vitellogenin. No differences in plasma concentrations of E2 or testosterone were observed in females from the two sites. Similarly, males exhibited no difference in plasma E2. However, plasma testosterone was lower in the males from the contaminated site, as compared to the reference site. Vitellogenic males occurred only at the contaminated site. Additionally, liver mass was proportionately higher in males from the contaminated site, as compared to males from the reference site. These data suggest that reproductive steroid levels may have been altered by increased hepatic enzyme activity, and the presence of vitellogenic males indicates that an exogenous source of estrogen was present in the Escambia River.

  15. A comparison of the reproductive physiology of largemouth bass, Micropterus salmoides, collected from the Escambia and Blackwater Rivers in Florida.

    PubMed Central

    Orlando, E F; Denslow, N D; Folmar, L C; Guillette, L J

    1999-01-01

    Largemouth bass (LMB), Micropterus salmoides, were taken from the Escambia River (contaminated site) and the Blackwater River (reference site) near Pensacola, Florida. The Escambia River collection occurred downstream of the effluent from two identified point sources of pollution. These point sources included a coal-fired electric power plant and a chemical company. Conversely, the Blackwater River's headwaters and most of its length flow within a state park. Although there is some development on the lower part of the Blackwater River, fish were collected in the more pristine upper regions. Fish were captured by electroshocking and were maintained in aerated coolers. Physical measurements were obtained, blood was taken, and liver and gonads were removed. LMB plasma was assayed for the concentration of 17β-estradiol (E2) and testosterone using validated radioimmunoassays. The presence of vitellogenin was determined by gel electrophoresis (SDS-PAGE) and Western blotting using a monoclonal antibody validated for largemouth bass vitellogenin. No differences in plasma concentrations of E2 or testosterone were observed in females from the two sites. Similarly, males exhibited no difference in plasma E2. However, plasma testosterone was lower in the males from the contaminated site, as compared to the reference site. Vitellogenic males occurred only at the contaminated site. Additionally, liver mass was proportionately higher in males from the contaminated site, as compared to males from the reference site. These data suggest that reproductive steroid levels may have been altered by increased hepatic enzyme activity, and the presence of vitellogenic males indicates that an exogenous source of estrogen was present in the Escambia River. PMID:10064549

  16. Validation of the "early detection Primary Care Checklist" in an Italian community help-seeking sample: The "checklist per la Valutazione dell'Esordio Psicotico".

    PubMed

    Pelizza, Lorenzo; Raballo, Andrea; Semrov, Enrico; Chiri, Luigi Rocco; Azzali, Silvia; Scazza, Ilaria; Garlassi, Sara; Paterlini, Federica; Fontana, Francesca; Favazzo, Rosanna; Pensieri, Luana; Fabiani, Michela; Cioncolini, Leonardo; Pupo, Simona

    2017-07-26

    To establish the concordant validity of the "Checklist per la Valutazione dell'Esordio Psicotico" (CVEP) in an Italian help-seeking population. The CVEP is the Italian adaptation of the "early detection Primary Care Checklist," a 20-item tool specifically designed to assist primary care practitioners in identifying young people in the early stages of psychosis. The checklist was completed by the referring practitioners of 168 young people referred to the "Reggio Emilia At Risk Mental States" Project, an early detection infrastructure developed under the aegis of the Regional Project on Early Detection of Psychosis in the Reggio Emilia Department of Mental Health. The concordant validity of the CVEP was established by comparing screen results with the outcome of the "Comprehensive Assessment of At Risk Mental States" (CAARMS), a gold standard assessment for identifying young people who may be at risk of developing psychosis. The simple checklist as originally conceived had excellent sensitivity (98%), but lower specificity (58%). Using only a CVEP total score of 20 or above as cut-off, the tool showed a slightly lower sensitivity (93%) with a substantial improvement in specificity (87%). Simple cross-tabulations of the individual CVEP item scores against the CAARMS outcome were carried out to identify the most discriminant items in terms of sensitivity and specificity. In comparison to other, much longer, screening tools, the CVEP performed well in identifying young people in the early stages of psychosis. Therefore, the CVEP is well suited to optimize appropriate referrals to specialist services, building on the skills and knowledge already available in primary care settings. © 2017 John Wiley & Sons Australia, Ltd.

  17. Investigating Atmospheric Rivers using GPS PW from Ocean Transits

    NASA Astrophysics Data System (ADS)

    Almanza, V.; Foster, J. H.; Businger, S.

    2014-12-01

    Atmospheric Rivers (ARs) can be described as long, narrow features within a warm conveyor belt in which anomalous precipitable water (PW) is transported from low to high latitudes. Close monitoring of ARs relies heavily on satellites, which are limited in both space and time, to capture fluctuations in PW, particularly over the ocean. Ship-based Global Positioning System (GPS) receivers have been successful in obtaining millimeter PW accuracy within 100 km of the nearest ground-based reference receiver at a 30-second sampling rate. We extended this capability with a field experiment using ship-based GPS PW on board a cargo ship traversing the Eastern Pacific Ocean. In one 14-day cruise cycle, February 3-16, 2014, the ship-based GPS captured PW spikes >50 mm during the early development of two ARs, which led to moderate to heavy rainfall events for Hawaii and flood conditions along the West Coast of the United States. Comparisons between PW solutions processed using different GPS reference sites at distances 100-2000 km provided an internal validation for the ship-based GPS PW with errors typically less than 5 mm. Land-based observations provided an external validation and are in good agreement with ship-based GPS PW at distances <100 km from the coast, a zone heavily trafficked by cargo containers and a challenge area for satellite retrievals. From these preliminary results, commercial ship-based GPS receivers offer an extremely cost-effective approach for acquiring continuous meteorological observations over the oceans, which can provide important calibration/validation data for satellite retrieval algorithms. Ship-based systems could be particularly useful for augmenting our meteorological observing networks to improve weather prediction and nowcasting, which in turn provide critical support for hazard response and mitigation efforts in coastal regions.

  18. Assessment of a recombinant androgen receptor binding assay: initial steps towards validation.

    PubMed

    Freyberger, Alexius; Weimer, Marc; Tran, Hoai-Son; Ahr, Hans-Jürgen

    2010-08-01

    Despite more than a decade of research on endocrine-active compounds with affinity for the androgen receptor (AR), no validated recombinant AR binding assay is yet available, although recombinant AR can be obtained from several sources. With funding from the European Union (EU)-sponsored 6th framework project, ReProTect, we developed a model protocol for such an assay based on a simple AR binding assay recently developed at our institution. Important features of the protocol were the use of a recombinant rat fusion protein of thioredoxin with the hinge region and ligand-binding domain (LBD) of the rat AR (which is identical to the human AR-LBD), and performance in a 96-well plate format. Besides two reference compounds [dihydrotestosterone (DHT), androstenedione], ten test compounds with different affinities for the AR [levonorgestrel, progesterone, prochloraz, 17alpha-methyltestosterone, flutamide, norethynodrel, o,p'-DDT, dibutylphthalate, vinclozolin, linuron] were used to explore the performance of the assay. At least three independent experiments per compound were performed. The AR binding properties of reference and test compounds were well detected; in terms of the relative ranking of binding affinities, there was good agreement with published data obtained from experiments using recombinant AR preparations. Irrespective of the chemical nature of the compound, individual IC50 values for a given compound varied by no more than a factor of 2.6. Our data demonstrate that the assay reliably ranked compounds with strong, weak, and no/marginal affinity for the AR with high accuracy. It avoids the use of animals, as a recombinant protein is used, and thus contributes to the 3R concept. On the whole, this assay is a promising candidate for further validation. Copyright 2009 Elsevier Inc. All rights reserved.
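
    IC50 values such as those reported above are typically read off a fitted competition curve. A minimal sketch of such a fit with a four-parameter logistic model is shown below; it is not the authors' exact analysis, and the displacement data are invented for illustration:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(log_conc, bottom, top, log_ic50, hill):
        """Four-parameter logistic competition-binding model on log10 concentration."""
        return bottom + (top - bottom) / (1.0 + 10 ** (hill * (log_conc - log_ic50)))

    # Hypothetical displacement data: competitor concentration (M) vs. % bound tracer
    conc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4])
    bound = np.array([98.0, 95.0, 80.0, 45.0, 12.0, 5.0])

    params, _ = curve_fit(four_pl, np.log10(conc), bound, p0=[5.0, 100.0, -6.0, 1.0])
    print(f"estimated IC50 ≈ {10 ** params[2]:.2e} M")
    ```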

  19. Impact study of the Argo array definition in the Mediterranean Sea based on satellite altimetry gridded data

    NASA Astrophysics Data System (ADS)

    Sanchez-Roman, Antonio; Ruiz, Simón; Pascual, Ananda; Guinehut, Stéphanie; Mourre, Baptiste

    2016-04-01

    The existing Argo network provides essential data in near real time to constrain monitoring and forecasting centers and strongly complements the observations of the ocean surface from space. The comparison of Sea Level Anomalies (SLA) provided by satellite altimeters with in-situ Dynamic Height Anomalies (DHA) derived from the temperature and salinity profiles of Argo floats contributes to better characterize the error budget associated with the altimeter observations. In this work, performed in the framework of the E-AIMS FP7 European Project, we focus on the Argo observing system in the Mediterranean Sea and its impact on SLA fields provided by satellite altimetry measurements in the basin. Namely, we focus on the sensitivity of specific SLA gridded merged products provided by AVISO in the Mediterranean to the reference depth (400 or 900 dbar) selected in the computation of the Argo Dynamic Height (DH), computed as an integration of the Argo T/S profiles through the water column. This reference depth will have an impact on the number of valid Argo profiles and therefore on their temporal sampling and the coverage by the network used to compare with altimeter data. To compare both datasets, altimeter grids and synthetic climatologies used to compute DHA were spatially and temporally interpolated at the position and time of each in-situ Argo profile by a mapping method based on an optimal interpolation scheme. The analysis was conducted in the entire Mediterranean Sea and different sub-regions of the basin. The second part of this work is devoted to investigating which configuration, in terms of spatial sampling of the Argo array in the Mediterranean, will properly reproduce the mesoscale dynamics in this basin, which is comprehensively captured by new standards of specific altimeter products for this region. To do that, several Observing System Simulation Experiments (OSSEs) were conducted, taking altimetry data computed from the AVISO specific reanalysis gridded merged product for the Mediterranean as the "true" field. The choice of the reference depth of Argo profiles impacts the number of valid profiles used to compute DHA and therefore the spatial coverage by the network. Results show that the impact of the reference level in the computation of Argo DH is statistically significant, since the standard deviations of the differences between DH computed from altimetry and from Argo data referred to reference depths of 400 dbar and 900 dbar are quite different (4.85 and 5.11 cm, respectively). Therefore, 400 dbar should be taken as the reference depth to compute DHA from Argo data in the Mediterranean. In contrast, similar scores are obtained when shallow floats are not included in the computation (4.85 cm against 4.87 cm). In any case, all the analyses show significant correlations (at the 95% level) higher than 0.70 between altimetry and Argo data, with a standard deviation of the differences between both datasets of around 4.90 cm. Furthermore, the sub-basin study shows improved statistics for the eastern sub-basin for DHA referred to 400 dbar, while minimum values are obtained for the western sub-basin when computing DHA referred to 900 dbar. On the other hand, results from the OSSEs suggest that, by maintaining an Argo float array with 100×100 km spacing, the variance of the large-scale signal and most of the mesoscale features of the SLA fields are recovered. Therefore, the network coverage should be enlarged in the Mediterranean in order to achieve at least this spatial resolution.

  20. Validation of an administrative claims-based diagnostic code for pneumonia in a US-based commercially insured COPD population

    PubMed Central

    Kern, David M; Davis, Jill; Williams, Setareh A; Tunceli, Ozgur; Wu, Bingcao; Hollis, Sally; Strange, Charlie; Trudo, Frank

    2015-01-01

    Objective To estimate the accuracy of claims-based pneumonia diagnoses in COPD patients using clinical information in medical records as the reference standard. Methods Selecting from a repository containing members’ data from 14 regional United States health plans, this validation study identified pneumonia diagnoses within a group of patients initiating treatment for COPD between March 1, 2009 and March 31, 2012. Patients with ≥1 claim for pneumonia (International Classification of Diseases Version 9-CM code 480.xx–486.xx) were identified during the 12 months following treatment initiation. A subset of 800 patients was randomly selected to abstract medical record data (paper based and electronic) for a target sample of 400 patients, to estimate validity within 5% margin of error. Positive predictive value (PPV) was calculated for the claims diagnosis of pneumonia relative to the reference standard, defined as a documented diagnosis in the medical record. Results A total of 388 records were reviewed; 311 included a documented pneumonia diagnosis, indicating 80.2% (95% confidence interval [CI]: 75.8% to 84.0%) of claims-identified pneumonia diagnoses were validated by the medical charts. Claims-based diagnoses in inpatient or emergency departments (n=185) had greater PPV versus outpatient settings (n=203), 87.6% (95% CI: 81.9%–92.0%) versus 73.4% (95% CI: 66.8%–79.3%), respectively. Claims-diagnoses verified with paper-based charts had similar PPV as the overall study sample, 80.2% (95% CI: 71.1%–87.5%), and higher PPV than those linked to electronic medical records, 73.3% (95% CI: 65.5%–80.2%). Combined paper-based and electronic records had a higher PPV, 87.6% (95% CI: 80.9%–92.6%). Conclusion Administrative claims data indicating a diagnosis of pneumonia in COPD patients are supported by medical records. The accuracy of a medical record diagnosis of pneumonia remains unknown. With increased use of claims data in medical research, COPD researchers can study pneumonia with confidence that claims data are a valid tool when studying the safety of COPD therapies that could potentially lead to increased pneumonia susceptibility or severity. PMID:26229461

  1. Validation of an administrative claims-based diagnostic code for pneumonia in a US-based commercially insured COPD population.

    PubMed

    Kern, David M; Davis, Jill; Williams, Setareh A; Tunceli, Ozgur; Wu, Bingcao; Hollis, Sally; Strange, Charlie; Trudo, Frank

    2015-01-01

    To estimate the accuracy of claims-based pneumonia diagnoses in COPD patients using clinical information in medical records as the reference standard. Selecting from a repository containing members' data from 14 regional United States health plans, this validation study identified pneumonia diagnoses within a group of patients initiating treatment for COPD between March 1, 2009 and March 31, 2012. Patients with ≥1 claim for pneumonia (International Classification of Diseases Version 9-CM code 480.xx-486.xx) were identified during the 12 months following treatment initiation. A subset of 800 patients was randomly selected to abstract medical record data (paper based and electronic) for a target sample of 400 patients, to estimate validity within 5% margin of error. Positive predictive value (PPV) was calculated for the claims diagnosis of pneumonia relative to the reference standard, defined as a documented diagnosis in the medical record. A total of 388 records were reviewed; 311 included a documented pneumonia diagnosis, indicating 80.2% (95% confidence interval [CI]: 75.8% to 84.0%) of claims-identified pneumonia diagnoses were validated by the medical charts. Claims-based diagnoses in inpatient or emergency departments (n=185) had greater PPV versus outpatient settings (n=203), 87.6% (95% CI: 81.9%-92.0%) versus 73.4% (95% CI: 66.8%-79.3%), respectively. Claims-diagnoses verified with paper-based charts had similar PPV as the overall study sample, 80.2% (95% CI: 71.1%-87.5%), and higher PPV than those linked to electronic medical records, 73.3% (95% CI: 65.5%-80.2%). Combined paper-based and electronic records had a higher PPV, 87.6% (95% CI: 80.9%-92.6%). Administrative claims data indicating a diagnosis of pneumonia in COPD patients are supported by medical records. The accuracy of a medical record diagnosis of pneumonia remains unknown. With increased use of claims data in medical research, COPD researchers can study pneumonia with confidence that claims data are a valid tool when studying the safety of COPD therapies that could potentially lead to increased pneumonia susceptibility or severity.

  2. Reference Correlation of the Thermal Conductivity of Carbon Dioxide from the Triple Point to 1100 K and up to 200 MPa

    PubMed Central

    Huber, M. L.; Sykioti, E. A.; Assael, M. J.; Perkins, R. A.

    2016-01-01

    This paper contains new, representative reference equations for the thermal conductivity of carbon dioxide. The equations are based in part upon a body of experimental data that has been critically assessed for internal consistency and for agreement with theory whenever possible. In the case of the dilute-gas thermal conductivity, we incorporated recent theoretical calculations to extend the temperature range of the experimental data. Moreover, in the critical region, the experimentally observed enhancement of the thermal conductivity is well represented by theoretically based equations containing just one adjustable parameter. The correlations are applicable for the temperature range from the triple point to 1100 K and pressures up to 200 MPa. The overall uncertainty (at the 95% confidence level) of the proposed correlation varies depending on the state point from a low of 1% at very low pressures below 0.1 MPa between 300 K and 700 K, to 5% at the higher pressures of the range of validity. PMID:27064300

  3. Generalized Cross Entropy Method for estimating joint distribution from incomplete information

    NASA Astrophysics Data System (ADS)

    Xu, Hai-Yan; Kuo, Shyh-Hao; Li, Guoqi; Legara, Erika Fille T.; Zhao, Daxuan; Monterola, Christopher P.

    2016-07-01

    Obtaining a full joint distribution from individual marginal distributions with incomplete information is a non-trivial task that continues to challenge researchers from various domains including economics, demography, and statistics. In this work, we develop a new methodology, referred to as the "Generalized Cross Entropy Method" (GCEM), that is aimed at addressing this issue. The objective function is proposed to be a weighted sum of divergences between joint distributions and various references. We show that the solution of the GCEM is unique and globally optimal. Furthermore, we illustrate the applicability and validity of the method by utilizing it to recover the joint distribution of a household profile of a given administrative region. In particular, we estimate the joint distribution of the household size, household dwelling type, and household home ownership in Singapore. Results show a high-accuracy estimation of the full joint distribution of the household profile under study. Finally, the impact of constraints and weight on the estimation of joint distribution is explored.
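
    The objective described above, a weighted sum of divergences between the estimated joint distribution and several references subject to the known marginals, can be sketched for a small discrete case. The weights, reference joint, and marginals below are made up for illustration, and the divergence (KL) and solver are assumptions rather than the paper's exact formulation:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def kl(p, q, eps=1e-12):
        """Kullback-Leibler divergence between two discrete distributions."""
        p, q = p + eps, q + eps
        return float(np.sum(p * np.log(p / q)))

    def estimate_joint(row_marg, col_marg, references, weights):
        """Minimize a weighted sum of KL divergences to reference joints,
        subject to fixed row/column marginals (illustrative GCEM-style setup)."""
        shape = (len(row_marg), len(col_marg))
        x0 = np.outer(row_marg, col_marg).ravel()  # independence as a starting point

        def objective(x):
            return sum(w * kl(x, r.ravel()) for w, r in zip(weights, references))

        cons = [{"type": "eq", "fun": lambda x, i=i: x.reshape(shape)[i].sum() - row_marg[i]}
                for i in range(shape[0])]
        # last column constraint is implied by the others, so it is dropped
        cons += [{"type": "eq", "fun": lambda x, j=j: x.reshape(shape)[:, j].sum() - col_marg[j]}
                 for j in range(shape[1] - 1)]
        res = minimize(objective, x0, bounds=[(0.0, 1.0)] * x0.size, constraints=cons)
        return res.x.reshape(shape)

    row_marg = np.array([0.6, 0.4])               # e.g., household size categories
    col_marg = np.array([0.3, 0.7])               # e.g., dwelling type categories
    ref = np.array([[0.25, 0.35], [0.05, 0.35]])  # a hypothetical reference joint
    print(estimate_joint(row_marg, col_marg, [ref], [1.0]))
    ```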

  4. Investigating the Effects of Variable Water Type for VIIRS Calibration

    NASA Astrophysics Data System (ADS)

    Bowers, J.; Ladner, S.; Martinolich, P.; Arnone, R.; Lawson, A.; Crout, R. L.; Vandermeulen, R. A.

    2016-02-01

    The Naval Research Laboratory - Stennis Space Center (NRL-SSC) currently provides calibration and validation support for the Visible Infrared Imaging Radiometer Suite (VIIRS) satellite ocean color products. NRL-SSC utilizes the NASA Ocean Biology Processing Group (OBPG) methodology for on-orbit vicarious calibration with in situ data collected in blue ocean water by the Marine Optical Buoy (MOBY). An acceptable calibration consists of 20-40 satellite to in situ matchups that establish the radiance correlation at specific points within the operating range of the VIIRS instrument. While the current method improves the VIIRS performance, the MOBY data alone does not represent the full range of radiance values seen in the coastal oceans. However, by utilizing data from the AERONET-OC coastal sites we expand our calibration matchups to cover a more realistic range of continuous values particularly in the green and red spectral regions of the sensor. Improved calibration will provide more accurate data to support daily operations and enable construction of valid climatology for future reference.

  5. Utility of NIST Whole-Genome Reference Materials for the Technical Validation of a Multigene Next-Generation Sequencing Test.

    PubMed

    Shum, Bennett O V; Henner, Ilya; Belluoccio, Daniele; Hinchcliffe, Marcus J

    2017-07-01

    The sensitivity and specificity of next-generation sequencing laboratory developed tests (LDTs) are typically determined by an analyte-specific approach. Analyte-specific validations use disease-specific controls to assess an LDT's ability to detect known pathogenic variants. Alternatively, a methods-based approach can be used for LDT technical validations. Methods-focused validations do not use disease-specific controls but use benchmark reference DNA that contains known variants (benign, variants of unknown significance, and pathogenic) to assess variant calling accuracy of a next-generation sequencing workflow. Recently, four whole-genome reference materials (RMs) from the National Institute of Standards and Technology (NIST) were released to standardize methods-based validations of next-generation sequencing panels across laboratories. We provide a practical method for using NIST RMs to validate multigene panels. We analyzed the utility of RMs in validating a novel newborn screening test that targets 70 genes, called NEO1. Despite the NIST RM variant truth set originating from multiple sequencing platforms, replicates, and library types, we discovered a 5.2% false-negative variant detection rate in the RM truth set genes that were assessed in our validation. We developed a strategy using complementary non-RM controls to demonstrate 99.6% sensitivity of the NEO1 test in detecting variants. Our findings have implications for laboratories or proficiency testing organizations using whole-genome NIST RMs for testing. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  6. Transfer of innovation on allergic rhinitis and asthma multimorbidity in the elderly (MACVIA-ARIA) - EIP on AHA Twinning Reference Site (GARD research demonstration project).

    PubMed

    Bousquet, J; Agache, I; Aliberti, M R; Angles, R; Annesi-Maesano, I; Anto, J M; Arnavielhe, S; Asayag, E; Bacci, E; Bedbrook, A; Bachert, C; Baroni, I; Barreto, B A; Bedolla-Barajas, M; Bergmann, K C; Bertorello, L; Bewick, M; Bieber, T; Birov, S; Bindslev-Jensen, C; Blua, A; Bochenska Marciniak, M; Bogus-Buczynska, I; Bosnic-Anticevich, S; Bosse, I; Bourret, R; Bucca, C; Buonaiuto, R; Burguete Cabanas, M T; Caillaud, D; Caimmi, D P; Caiazza, D; Camargos, P; Canfora, G; Cardona, V; Carriazo, A M; Cartier, C; Castellano, G; Chavannes, N H; Cecci, L; Ciaravolo, M M; Cingi, C; Ciceran, A; Colas, L; Colgan, E; Coll, J; Conforti, D; Correia de Sousa, J; Cortés-Grimaldo, R M; Corti, F; Costa, E; Courbis, A L; Cousein, E; Cruz, A A; Custovic, A; Cvetkovski, B; Dario, C; da Silva, J; Dauvilliers, Y; De Blay, F; Dedeu, T; De Feo, G; De Martino, B; Demoly, P; De Vries, G; Di Capua Ercolano, S; Di Carluccio, N; Doulapsi, M; Dray, G; Dubakiene, R; Eller, E; Emuzyte, R; Espinoza-Contreras, J G; Estrada-Cardona, A; Farrell, J; Farsi, A; Ferrero, J; Fokkens, W J; Fonseca, J; Fontaine, J F; Forti, S; Gálvez-Romero, J L; García-Cobas, C I; Garcia Cruz, M H; Gemicioğlu, B; Gerth van Wijk, R; Guidacci, M; Gómez-Vera, J; Guldemond, N A; Gutter, Z; Haahtela, T; Hajjam, J; Hellings, P W; Hernández-Velázquez, L; Illario, M; Ivancevich, J C; Jares, E; Joos, G; Just, J; Kalayci, O; Kalyoncu, A F; Karjalainen, J; Keil, T; Khaltaev, N; Klimek, L; Kritikos, V; Kull, I; Kuna, P; Kvedariene, V; Kolek, V; Krzych-Fałta, E; Kupczyk, M; Lacwik, P; La Grutta, S; Larenas-Linnemann, D; Laune, D; Lauri, D; Lavrut, J; Lessa, M; Levato, G; Lewis, L; Lieten, I; Lipiec, A; Louis, R; Luna-Pech, J A; Magnan, A; Malva, J; Maspero, J F; Matta-Campos, J J; Mayora, O; Medina-Ávalos, M A; Melén, E; Menditto, E; Millot-Keurinck, J; Moda, G; Morais-Almeida, M; Mösges, R; Mota-Pinto, A; Mullol, J; Muraro, A; Murray, R; Noguès, M; Nalin, M; Napoli, L; Neffen, H; O'Hehir, R E; Onorato, G L; Palkonen, S; Papadopoulos, N G; Passalacqua, G; Pépin, J L; Pereira, A M; Persico, M; Pfaar, O; Pozzi, A C; Prokopakis, E; Pugin, B; Raciborski, F; Rimmer, J; Rizzo, J A; Robalo-Cordeiro, C; Rodríguez-González, M; Rolla, G; Roller-Wirnsberger, R E; Romano, A; Romano, M; Romano, M R; Salimäki, J; Samolinski, B; Serpa, F S; Shamai, S; Sierra, M; Sova, M; Sorlini, M; Stellato, C; Stelmach, R; Strandberg, T; Stroetmann, V; Stukas, R; Szylling, A; Tan, R; Tibaldi, V; Todo-Bom, A; Toppila-Salmi, S; Tomazic, P; Trama, U; Triggiani, M; Valero, A; Valovirta, E; Valiulis, A; van Eerd, M; Vasankari, T; Vatrella, A; Ventura, M T; Verissimo, M T; Viart, F; Williams, S; Wagenmann, M; Wanscher, C; Westman, M; Wickman, M; Young, I; Yorgancioglu, A; Zernotti, E; Zuberbier, T; Zurkuhlen, A; De Oliviera, B; Senn, A

    2018-01-01

    The overarching goals of the European Innovation Partnership on Active and Healthy Ageing (EIP on AHA) are to enable European citizens to lead healthy, active and independent lives whilst ageing. The EIP on AHA includes 74 Reference Sites. The aim of this study was to transfer innovation from an app developed by the MACVIA-France EIP on AHA reference site (Allergy Diary) to other reference sites. The phenotypic characteristics of rhinitis and asthma multimorbidity in adults and the elderly will be compared using validated information and communication technology (ICT) tools (i.e. the Allergy Diary and CARAT: Control of Allergic Rhinitis and Asthma Test) in 22 Reference Sites or regions across Europe. This will improve the understanding, assessment of burden, diagnosis and management of rhinitis in the elderly by comparison with an adult population. Specific objectives will be: (i) to assess the percentage of adults and elderly who are able to use the Allergy Diary, (ii) to study the phenotypic characteristics and treatment over a 1-year period of rhinitis and asthma multimorbidity at baseline (cross-sectional study) and (iii) to follow-up using visual analogue scale (VAS). This part of the study may provide some insight into the differences between the elderly and adults in terms of response to treatment and practice. Finally (iv) work productivity will be examined in adults. © 2017 EAACI and John Wiley and Sons A/S. Published by John Wiley and Sons Ltd.

  7. Precipitation projections under GCMs perspective and Turkish Water Foundation (TWF) statistical downscaling model procedures

    NASA Astrophysics Data System (ADS)

    Dabanlı, İsmail; Şen, Zekai

    2018-04-01

    The statistical climate downscaling model by the Turkish Water Foundation (TWF) is further developed and applied to a set of monthly precipitation records. The model is structured in two phases: spatial (regional) and temporal downscaling of global circulation model (GCM) scenarios. The TWF model takes into consideration the regional dependence function (RDF) for the spatial structure and the Markov whitening process (MWP) for the temporal characteristics of the records to set projections. The impact of climate change on monthly precipitation is studied by downscaling Intergovernmental Panel on Climate Change-Special Report on Emission Scenarios (IPCC-SRES) A2 and B2 emission scenarios from the Max Planck Institute (EH40PYC) and the Hadley Centre (HadCM3). The main purposes are to explain the TWF statistical climate downscaling model procedures and to present the validation tests, which are rated as "very good" for all stations except one (Suhut) in the Akarcay basin, in the west-central part of Turkey. Even though the validation score is slightly lower at the Suhut station, the results are still "satisfactory." It is, therefore, possible to say that the TWF model has acceptable skill for accurate estimation according to the standard deviation ratio (SDR), Nash-Sutcliffe efficiency (NSE), and percent bias (PBIAS) criteria. Based on the validated model, precipitation predictions are generated from 2011 to 2100 using a 30-year reference observation period (1981-2010). The precipitation arithmetic average and standard deviation have less than 5% error for the EH40PYC and HadCM3 SRES (A2 and B2) scenarios.
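
    Two of the skill scores named above, NSE and PBIAS, have standard definitions that are easy to compute; a small sketch with made-up observed and simulated series (not the study's data) follows:

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency: 1 is a perfect match, 0 is no better than the observed mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def percent_bias(obs, sim):
        """PBIAS (%); in the common convention, positive values indicate underestimation."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    obs = [42.0, 55.0, 30.0, 61.0, 48.0]   # hypothetical monthly precipitation (mm)
    sim = [40.0, 58.0, 28.0, 63.0, 45.0]
    print(nash_sutcliffe(obs, sim), percent_bias(obs, sim))
    ```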

  8. Selection of novel reference genes for use in the human central nervous system: a BrainNet Europe Study.

    PubMed

    Durrenberger, Pascal F; Fernando, Francisca S; Magliozzi, Roberta; Kashefi, Samira N; Bonnert, Timothy P; Ferrer, Isidro; Seilhean, Danielle; Nait-Oumesmar, Brahim; Schmitt, Andrea; Gebicke-Haerter, Peter J; Falkai, Peter; Grünblatt, Edna; Palkovits, Miklos; Parchi, Piero; Capellari, Sabina; Arzberger, Thomas; Kretzschmar, Hans; Roncaroli, Federico; Dexter, David T; Reynolds, Richard

    2012-12-01

    The use of an appropriate reference gene to ensure accurate normalisation is crucial for the correct quantification of gene expression using qPCR assays and RNA arrays. The main criterion for a gene to qualify as a reference gene is a stable expression across various cell types and experimental settings. Several reference genes are commonly in use but more and more evidence reveals variations in their expression due to the presence of on-going neuropathological disease processes, raising doubts concerning their use. We conducted an analysis of genome-wide changes of gene expression in the human central nervous system (CNS) covering several neurological disorders and regions, including the spinal cord, and were able to identify a number of novel stable reference genes. We tested the stability of expression of eight novel (ATP5E, AARS, GAPVD1, CSNK2B, XPNPEP1, OSBP, NAT5 and DCTN2) and four more commonly used (BECN1, GAPDH, QARS and TUBB) reference genes in a smaller cohort using RT-qPCR. The most stable genes out of the 12 reference genes were tested as normalisers to validate increased levels of a target gene in CNS disease. We found that in human post-mortem tissue the novel reference genes, XPNPEP1 and AARS, were efficient in replicating microarray target gene expression levels and that XPNPEP1 was more efficient as a normaliser than BECN1, which has been shown to change in expression as a consequence of neuronal cell loss. We provide herein one more suitable novel reference gene, XPNPEP1, with no current neuroinflammatory or neurodegenerative associations, which can be used for quantitative gene expression studies with human post-mortem CNS tissue, and also suggest a list of other potential candidates. These data also emphasise the importance of organ/tissue-specific stably expressed genes as reference genes for RNA studies.

  9. Validation of reference genes for real-time quantitative PCR normalization in soybean developmental and germinating seeds.

    PubMed

    Li, Qing; Fan, Cheng-Ming; Zhang, Xiao-Mei; Fu, Yong-Fu

    2012-10-01

    Most traditional reference genes chosen for real-time quantitative PCR normalization are assumed to be ubiquitously and constitutively expressed in vegetative tissues. However, seeds show distinct transcriptomes compared with vegetative tissues. Therefore, there is a need for re-validation of reference genes in samples of seed development and germination, especially for soybean seeds. In this study, we aimed at identifying reference genes suitable for the quantification of gene expression levels in soybean seeds. In order to identify the best reference genes for soybean seeds, 18 putative reference genes were tested with various methods in different seed samples. We combined the outputs of both geNorm and NormFinder to assess the expression stability of these genes. The reference genes identified as optimal for seed development were TUA5 and UKN2, whereas for seed germination they were the novel reference genes Glyma05g37470 and Glyma08g28550. Furthermore, for total seed samples it was necessary to combine four genes, Glyma05g37470, Glyma08g28550, Glyma18g04130 and UKN2 [corrected], for normalization. Key message: We identified several reference genes that are stably expressed during soybean seed development and germination.
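
    The geNorm stability measure used in studies such as this one is based on pairwise variation: for each candidate gene, the average standard deviation of the log-ratios against every other candidate, with lower M indicating a more stable gene. A compact sketch of that M value, with invented relative expression quantities rather than any data from this study:

    ```python
    import numpy as np

    def genorm_m(expr):
        """geNorm-style stability M for each candidate reference gene.

        expr : array of shape (n_samples, n_genes) of relative expression quantities.
        Lower M indicates a more stably expressed candidate.
        """
        log_expr = np.log2(expr)
        n_genes = log_expr.shape[1]
        m_values = []
        for j in range(n_genes):
            sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                   for k in range(n_genes) if k != j]
            m_values.append(np.mean(sds))
        return np.array(m_values)

    # Hypothetical relative quantities for 3 candidate genes across 4 samples
    expr = np.array([[1.0, 2.0, 0.9],
                     [1.1, 2.2, 2.0],
                     [0.9, 1.8, 0.5],
                     [1.0, 2.1, 1.4]])
    print(genorm_m(expr))   # the gene with the lowest M is the most stable
    ```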

  10. Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data

    NASA Astrophysics Data System (ADS)

    Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej

    2016-04-01

    GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier-phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. TPS is a closed solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolutions - 0.2x0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared to earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals to calibrated carrier phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The post-fit residuals obtained with the UWM maps are lower by one order of magnitude than with the IGS maps. The accuracy of UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
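
    Thin plate splines trade off fidelity to the data points against the smoothness of the fitted surface, which is exactly the variational problem described above. A minimal sketch of fitting scattered TEC observations with a smoothed thin-plate radial basis function follows; it uses SciPy's generic RBF interpolator rather than the UWM-rt1 implementation, and the coordinates and TEC values are invented:

    ```python
    import numpy as np
    from scipy.interpolate import Rbf

    # Hypothetical ionospheric pierce-point observations: lon, lat (deg) and TEC (TECU)
    lon = np.array([14.0, 16.5, 19.0, 21.2, 23.8, 18.0])
    lat = np.array([50.1, 52.0, 49.5, 53.3, 51.0, 54.2])
    tec = np.array([12.4, 13.1, 11.8, 14.0, 12.9, 14.6])

    # smooth > 0 relaxes exact interpolation, analogous to the TPS smoothing trade-off
    tps = Rbf(lon, lat, tec, function="thin_plate", smooth=0.1)

    # Evaluate on a regular 0.2 x 0.2 degree grid, matching the map resolution quoted above
    grid_lon, grid_lat = np.meshgrid(np.arange(14.0, 24.1, 0.2),
                                     np.arange(49.0, 55.1, 0.2))
    tec_map = tps(grid_lon, grid_lat)
    print(tec_map.shape)
    ```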

  11. Few mitochondrial DNA sequences are inserted into the turkey (Meleagris gallopavo) nuclear genome: evolutionary analyses and informativity in the domestic lineage.

    PubMed

    Schiavo, G; Strillacci, M G; Ribani, A; Bovo, S; Roman-Ponce, S I; Cerolini, S; Bertolini, F; Bagnato, A; Fontanesi, L

    2018-06-01

    Mitochondrial DNA (mtDNA) insertions have been detected in the nuclear genome of many eukaryotes. These sequences are pseudogenes that originate from the transfer of mtDNA fragments into the nuclear genome, producing nuclear DNA sequences of mitochondrial origin (numt). In this study we determined the frequency and distribution of mtDNA-originated pseudogenes in the turkey (Meleagris gallopavo) nuclear genome. The turkey reference genome (Turkey_2.01) was aligned with the reference linearized mtDNA sequence using LAST. A total of 32 numt sequences (corresponding to 18 numt regions derived by unique insertional events) were identified in the turkey nuclear genome (size ranging from 66 to 1415 bp; identity against the modern turkey mtDNA corresponding region ranging from 62% to 100%). Numts were distributed in nine chromosomes and in one scaffold. They derived from parts of 10 mtDNA protein-coding genes, ribosomal genes, the control region and 10 tRNA genes. Seven numt regions reported in the turkey genome were identified in orthologous positions in the Gallus gallus genome and were therefore present in the ancestral genome that, in the Cretaceous, gave rise to the lineages of the modern crown Galliformes. Five recently integrated turkey numts were validated by PCR in 168 turkeys of six different domestic populations. None of the analysed numts were polymorphic (i.e. no absence of the inserted sequence was observed, as has been reported for recently integrated numts in other species), suggesting that the reticulate speciation model is not useful for explaining the origin of the domesticated turkey lineage. © 2018 Stichting International Foundation for Animal Genetics.

  12. Identification and validation of reference genes for quantification of target gene expression with quantitative real-time PCR for tall fescue under four abiotic stresses.

    PubMed

    Yang, Zhimin; Chen, Yu; Hu, Baoyun; Tan, Zhiqun; Huang, Bingru

    2015-01-01

    Tall fescue (Festuca arundinacea Schreb.) is widely utilized as a major forage and turfgrass species in the temperate regions of the world and is a valuable plant material for studying molecular mechanisms of grass stress tolerance due to its superior drought and heat tolerance among cool-season species. Selection of suitable reference genes for quantification of target gene expression is important for the discovery of molecular mechanisms underlying improved growth traits and stress tolerance. The stability of nine potential reference genes (ACT, TUB, EF1a, GAPDH, SAND, CACS, F-box, PEPKR1 and TIP41) was evaluated using four programs, GeNorm, NormFinder, BestKeeper, and RefFinder. The combinations of SAND and TUB or TIP41 and TUB were most stably expressed in salt-treated roots or leaves. The combinations of GAPDH with TIP41 or TUB were stable in roots and leaves under drought stress. TIP41 and PEPKR1 exhibited stable expression in cold-treated roots, and the combination of F-box, TIP41 and TUB was also stable in cold-treated leaves. CACS and TUB were the two most stable reference genes in heat-stressed roots. TIP41 combined with TUB and ACT was stably expressed in heat-stressed leaves. Finally, quantitative real-time polymerase chain reaction (qRT-PCR) assays of the target gene FaWRKY1 using the identified most stable reference genes confirmed the reliability of selected reference genes. The selection of suitable reference genes in tall fescue will allow for more accurate identification of stress-tolerance genes and molecular mechanisms conferring stress tolerance in this stress-tolerant species.

  13. An ethnopharmacological survey of medicinal plants traditionally used for cancer treatment in the Ashanti region, Ghana.

    PubMed

    Agyare, Christian; Spiegler, Verena; Asase, Alex; Scholz, Michael; Hempel, Georg; Hensel, Andreas

    2018-02-15

    Cancer represents a major health burden and a drain on healthcare resources worldwide. The majority of the people of Africa still patronize traditional medicine for their health needs, including various forms of cancer. The aim of this study was to identify medicinal plants used for cancer treatment by traditional healers in the Ashanti area of Ghana and to cross-reference the identified plant species with the published scientific literature. Validated questionnaires were administered to 85 traditional healers in 10 communities within the Ashanti region. For cross-validation, 7 healers located outside the Ashanti region were also investigated to evaluate regional differences. Interviews and structured conversations were used to administer the questionnaires. Selected herbal material dominantly used by the healers was collected and identified. The ethnopharmacological survey revealed 151 plant species used for cancer treatment. Identified species were classified into groups according to their frequency of use, resulting in the "top-22" plants. Interestingly, group I (very frequent use) contained 5 plant species (Khaya senegalensis, Triplochiton scleroxylon, Azadirachta indica, Entandrophragma angolense, Terminalia superba), three of which belong to the plant family Meliaceae, phytochemically characterized mainly by the presence of limonoids. Cross-referencing all identified plants against the current scientific literature revealed species that have not previously been documented for cancer therapy. Special interest was paid to the use of plants for cancer treatment in children. A variety of traditionally used anti-cancer plants from Ghana have been identified, and their widespread use within ethnotraditional medicine is evident. Further in vitro and clinical studies will be performed in the near future to rationalize the phytochemical and functional scientific background of the respective extracts for cancer treatment. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. A de novo transcriptome and valid reference genes for quantitative real-time PCR in Colaphellus bowringi.

    PubMed

    Tan, Qian-Qian; Zhu, Li; Li, Yi; Liu, Wen; Ma, Wei-Hua; Lei, Chao-Liang; Wang, Xiao-Ping

    2015-01-01

    The cabbage beetle Colaphellus bowringi Baly is a serious insect pest of crucifers and undergoes reproductive diapause in soil. An understanding of the molecular mechanisms of diapause regulation, insecticide resistance, and other physiological processes is helpful for developing new management strategies for this beetle. However, the lack of genomic information and valid reference genes limits knowledge on the molecular bases of these physiological processes in this species. Using Illumina sequencing, we obtained more than 57 million sequence reads derived from C. bowringi, which were assembled into 39,390 unique sequences. A Clusters of Orthologous Groups classification was obtained for 9,048 of these sequences, covering 25 categories, and 16,951 were assigned to 255 Kyoto Encyclopedia of Genes and Genomes pathways. Eleven candidate reference gene sequences from the transcriptome were then identified through reverse transcriptase polymerase chain reaction. Among these candidate genes, EF1α, ACT1, and RPL19 proved to be the most stable reference genes for different reverse transcriptase quantitative polymerase chain reaction experiments in C. bowringi. Conversely, aTUB and GAPDH were the least stable reference genes. The abundant putative C. bowringi transcript sequences reported enrich the genomic resources of this beetle. Importantly, the larger number of gene sequences and valid reference genes provide a valuable platform for future gene expression studies, especially with regard to exploring the molecular mechanisms of different physiological processes in this species.

  15. Performance Validation Approach for the GTX Air-Breathing Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Trefny, Charles J.; Roche, Joseph M.

    2002-01-01

    The primary objective of the GTX effort is to determine whether or not air-breathing propulsion can enable a launch vehicle to achieve orbit in a single stage. Structural weight, vehicle aerodynamics, and propulsion performance must be accurately known over the entire flight trajectory in order to make a credible assessment. Structural, aerodynamic, and propulsion parameters are strongly interdependent, which necessitates a system approach to design, evaluation, and optimization of a single-stage-to-orbit concept. The GTX reference vehicle serves this purpose, by allowing design, development, and validation of components and subsystems in a system context. The reference vehicle configuration (including propulsion) was carefully chosen so as to provide high potential for structural and volumetric efficiency, and to allow the high specific impulse of air-breathing propulsion cycles to be exploited. Minor evolution of the configuration has occurred as analytical and experimental results have become available. With this development process comes increasing validation of the weight and performance levels used in system performance determination. This paper presents an overview of the GTX reference vehicle and the approach to its performance validation. Subscale test rigs and numerical studies used to develop and validate component performance levels and unit structural weights are outlined. The sensitivity of the equivalent, effective specific impulse to key propulsion component efficiencies is presented. The role of flight demonstration in development and validation is discussed.

  16. LinkEHR-Ed: a multi-reference model archetype editor based on formal semantics.

    PubMed

    Maldonado, José A; Moner, David; Boscá, Diego; Fernández-Breis, Jesualdo T; Angulo, Carlos; Robles, Montserrat

    2009-08-01

    To develop a powerful archetype editing framework capable of handling multiple reference models and oriented towards the semantic description and standardization of legacy data. The main prerequisite for implementing tools providing enhanced support for archetypes is a clear specification of archetype semantics. We propose a formalization of the definition section of archetypes based on types over tree-structured data. It covers the specialization of archetypes, the relationship between reference models and archetypes, and the conformance of data instances to archetypes. We developed LinkEHR-Ed, a visual archetype editor based on this formalization, with advanced processing capabilities: it supports multiple reference models, editing and semantic validation of archetypes, specification of mappings to data sources, and automatic generation of data transformation scripts. LinkEHR-Ed is a useful tool for building, processing and validating archetypes based on any reference model.

  17. The Importance of Measurement Errors for Deriving Accurate Reference Leaf Area Index Maps for Validation of Moderate-Resolution Satellite LAI Products

    NASA Technical Reports Server (NTRS)

    Huang, Dong; Yang, Wenze; Tan, Bin; Rautiainen, Miina; Zhang, Ping; Hu, Jiannan; Shabanov, Nikolay V.; Linder, Sune; Knyazikhin, Yuri; Myneni, Ranga B.

    2006-01-01

    The validation of moderate-resolution satellite leaf area index (LAI) products such as those operationally generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor data requires reference LAI maps developed from field LAI measurements and fine-resolution satellite data. Errors in field measurements and satellite data determine the accuracy of the reference LAI maps. This paper describes a method by which reference maps of known accuracy can be generated with knowledge of errors in fine-resolution satellite data. The method is demonstrated with data from an international field campaign in a boreal coniferous forest in northern Sweden, and Enhanced Thematic Mapper Plus images. The reference LAI map thus generated is used to assess modifications to the MODIS LAI/fPAR algorithm recently implemented to derive the next generation of the MODIS LAI/fPAR product for this important biome type.

  18. Glossary of reference terms for alternative test methods and their validation.

    PubMed

    Ferrario, Daniele; Brustio, Roberta; Hartung, Thomas

    2014-01-01

    This glossary was developed to provide technical references to support work in the field of alternatives to animal testing. It was compiled from various existing reference documents coming from different sources and is meant to be a point of reference on alternatives to animal testing. Given the ever-increasing number of alternative test methods and approaches being developed over the last decades, a combination, revision, and harmonization of earlier published collections of terms used in the validation of such methods is required. The need to update previous glossary efforts came from the acknowledgement that new words have emerged with the development of new approaches, while others have become obsolete, and the meaning of some terms has partially changed over time. With this glossary we intend to provide guidance on issues related to the validation of new or updated testing methods consistent with current approaches. Moreover, because of new developments and technologies, a glossary needs to be a living, constantly updated document. An Internet-based version of this compilation may be found at http://altweb.jhsph.edu/, allowing the addition of new material.

  19. Development and validation of RAYDOSE: a Geant4-based application for molecular radiotherapy

    NASA Astrophysics Data System (ADS)

    Marcatili, S.; Pettinato, C.; Daniels, S.; Lewis, G.; Edwards, P.; Fanti, S.; Spezi, E.

    2013-04-01

    We developed and validated a Monte-Carlo-based application (RAYDOSE) to generate patient-specific 3D dose maps on the basis of pre-treatment imaging studies. A CT DICOM image is used to model patient geometry, while repeated PET scans are employed to assess radionuclide kinetics and distribution at the voxel level. In this work, we describe the structure of this application and present the tests performed to validate it against reference data and experiments. We used the spheres of a NEMA phantom to calculate S values and total doses. The comparison with reference data from OLINDA/EXM showed an agreement within 2% for a sphere size above 2.8 cm diameter. A custom heterogeneous phantom composed of several layers of Perspex and lung equivalent material was used to compare TLD measurements of gamma radiation from 131I to Monte Carlo simulations. An agreement within 5% was found. RAYDOSE has been validated against reference data and experimental measurements and can be a useful multi-modality platform for treatment planning and research in MRT.
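
    Dosimetry of this kind ultimately combines a time-integrated (cumulated) activity with an S value, D = A_cum * S in the MIRD formalism. The sketch below illustrates that combination for a single region, with hypothetical activity samples standing in for serial PET measurements and an assumed S value; it is not the RAYDOSE code.

```python
import numpy as np

# MIRD-style dose estimate for one region: integrate the activity-time curve to
# obtain the cumulated activity, then multiply by an S value (absorbed dose per
# unit cumulated activity). All numbers are hypothetical placeholders.

times_h = np.array([1.0, 4.0, 24.0, 48.0])           # PET time points (hours)
activity_mbq = np.array([100.0, 80.0, 30.0, 10.0])   # activity in the region (MBq)

# Trapezoidal integration over the measurements, plus an exponential tail that
# assumes decay continues at the last observed effective rate.
trapezoid = np.sum((activity_mbq[1:] + activity_mbq[:-1]) / 2 * np.diff(times_h))
lam = np.log(activity_mbq[-2] / activity_mbq[-1]) / (times_h[-1] - times_h[-2])
cumulated_mbq_h = trapezoid + activity_mbq[-1] / lam

s_value_mgy_per_mbq_h = 0.05                          # hypothetical S value
dose_mgy = cumulated_mbq_h * s_value_mgy_per_mbq_h
print(f"Cumulated activity: {cumulated_mbq_h:.0f} MBq*h, absorbed dose: {dose_mgy:.1f} mGy")
```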

  20. Initial Alignment for SINS Based on Pseudo-Earth Frame in Polar Regions.

    PubMed

    Gao, Yanbin; Liu, Meng; Li, Guangchun; Guang, Xingxing

    2017-06-16

    An accurate initial alignment is required for an inertial navigation system (INS). The performance of the initial alignment directly affects the subsequent navigation accuracy. However, the rapid convergence of meridians and the small horizontal component of the Earth's rotation make traditional alignment methods ineffective in polar regions. In this paper, from the perspective of global inertial navigation, a novel alignment algorithm based on a pseudo-Earth frame and a backward process is proposed to implement the initial alignment in polar regions. Considering that an accurate coarse alignment of azimuth is difficult to obtain in polar regions, dynamic error modeling with a large azimuth misalignment angle is designed. At the end of the alignment phase, the strapdown attitude matrix relative to the local geographic frame is obtained without the influence of position errors or cumbersome computation. As a result, it is more convenient to transition to the subsequent polar navigation system. The approach is also expected to unify the polar alignment algorithm as much as possible, thereby further unifying the form of the external reference information. Finally, semi-physical static simulation and in-motion tests with a large azimuth misalignment angle, assisted by an unscented Kalman filter (UKF), validate the effectiveness of the proposed method.

  1. A Practical Approach to Governance and Optimization of Structured Data Elements.

    PubMed

    Collins, Sarah A; Gesner, Emily; Morgan, Steven; Mar, Perry; Maviglia, Saverio; Colburn, Doreen; Tierney, Diana; Rocha, Roberto

    2015-01-01

    Definition and configuration of clinical content in an enterprise-wide electronic health record (EHR) implementation is highly complex. Sharing of data definitions across applications within an EHR implementation project may be constrained by practical limitations, including time, tools, and expertise. However, maintaining rigor in an approach to data governance is important for sustainability and consistency. With this understanding, we have defined a practical approach for governance of structured data elements to optimize data definitions given limited resources. This approach includes a 10-step process: 1) identification of clinical topics, 2) creation of draft reference models for clinical topics, 3) scoring of downstream data needs for clinical topics, 4) prioritization of clinical topics, 5) validation of reference models for clinical topics, 6) calculation of gap analyses of the EHR compared against the reference model, 7) communication of validated reference models across project members, 8) requested revisions to the EHR based on gap analysis, 9) evaluation of usage of reference models across the project, and 10) monitoring for new evidence requiring revisions to the reference model.

  2. Validity Is an Action Verb: Commentary on--"Clarifying the Consensus Definition of Validity"

    ERIC Educational Resources Information Center

    Lissitz, Robert W.; Calico, Tiago

    2012-01-01

    This paper presents the authors' critique on "Clarifying the Consensus Definition of Validity" by Paul E. Newton (this issue). There are serious differences of opinion regarding the topic of validity. Newton is aware of these differences, as made clear by his choice of references and particularly his effort to respond to the various Borsboom…

  3. Validating Test Score Meaning and Defending Test Score Use: Different Aims, Different Methods

    ERIC Educational Resources Information Center

    Cizek, Gregory J.

    2016-01-01

    Advances in validity theory and alacrity in validation practice have suffered because the term "validity" has been used to refer to two incompatible concerns: (1) the degree of support for specified interpretations of test scores (i.e. intended score meaning) and (2) the degree of support for specified applications (i.e. intended test…

  4. Methods to compute reliabilities for genomic predictions of feed intake

    USDA-ARS?s Scientific Manuscript database

    For new traits without historical reference data, cross-validation is often the preferred method to validate reliability (REL). Time truncation is less useful because few animals gain substantial REL after the truncation point. Accurate cross-validation requires separating genomic gain from pedigree...

  5. Simulation verification techniques study. Subsystem simulation validation techniques

    NASA Technical Reports Server (NTRS)

    Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.

    1974-01-01

    Techniques for validation of software modules which simulate spacecraft onboard systems are discussed. An overview of the simulation software hierarchy for a shuttle mission simulator is provided. A set of guidelines for the identification of subsystem/module performance parameters and critical performance parameters are presented. Various sources of reference data to serve as standards of performance for simulation validation are identified. Environment, crew station, vehicle configuration, and vehicle dynamics simulation software are briefly discussed from the point of view of their interfaces with subsystem simulation modules. A detailed presentation of results in the area of vehicle subsystems simulation modules is included. A list of references, conclusions and recommendations are also given.

  6. Segmentation and classification of brain images using firefly and hybrid kernel-based support vector machine

    NASA Astrophysics Data System (ADS)

    Selva Bhuvaneswari, K.; Geetha, P.

    2017-05-01

    Magnetic resonance imaging segmentation refers to the process of assigning labels to sets of pixels or multiple regions. It plays a major role in biomedical applications, as it is widely used by radiologists to segment input medical images into meaningful regions. In recent years, various brain tumour detection techniques have been presented in the literature. The segmentation process in our proposed work comprises three phases: a threshold generation with dynamic modified region growing phase, a texture feature generation phase, and a region merging phase. In the first phase, the input image undergoes dynamic modified region growing, in which two thresholds are changed dynamically and optimised by the firefly optimisation algorithm. After obtaining the region-growing segmented image, edges are detected with an edge detection algorithm. In the second phase, texture features are extracted from the input image using an entropy-based operation. In the region merging phase, the results of the texture feature generation phase are combined with the results of the dynamic modified region growing phase, and similar regions are merged using a distance comparison between regions. After identifying the abnormal tissues, classification is performed by a hybrid kernel-based SVM (support vector machine). The performance of the proposed method will be analysed by k-fold cross-validation. The proposed method will be implemented in MATLAB with various images.
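
    As a concrete illustration of the final evaluation step, k-fold cross-validation of a kernel SVM takes only a few lines. The sketch below uses scikit-learn in Python with a built-in toy dataset in place of the extracted MRI texture features, and a standard RBF kernel rather than the paper's hybrid kernel (the paper itself targets MATLAB).

```python
# Minimal sketch of k-fold cross-validation for an SVM classifier. Toy data and
# a standard RBF kernel stand in for the paper's texture features and hybrid
# kernel; this is not the authors' implementation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)              # placeholder feature matrix
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)                # 5-fold cross-validation
print(f"Accuracy per fold: {scores.round(3)}, mean = {scores.mean():.3f}")
```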

  7. Effect of cow reference group on validation reliability of genomic evaluation.

    PubMed

    Koivula, M; Strandén, I; Aamand, G P; Mäntysaari, E A

    2016-06-01

    We studied the effect of including genomic data for cows in the reference population of single-step evaluations. Deregressed individual cow genetic evaluations (DRP) from milk production evaluations of Nordic Red Dairy cattle were used to estimate the single-step breeding values. Validation reliability and bias of the evaluations were calculated with four data sets including different amounts of DRP record information from genotyped cows in the reference population. The gain in reliability ranged from 2 to 4 percentage units for the production traits, depending on the DRP data used and the amount of genomic data. Moreover, inclusion of genotyped bull dams and their genotyped daughters seemed to create some bias in the single-step evaluation. Still, genotyping cows and including them in the reference population is advantageous and should be encouraged.

  8. Applicability and variability of liver stiffness measurements according to probe position

    PubMed Central

    Ingiliz, Patrick; Chhay, Kim Pav; Munteanu, Mona; Lebray, Pascal; Ngo, Yen; Roulot, Dominique; Benhamou, Yves; Thabut, Dominique; Ratziu, Vlad; Poynard, Thierry

    2009-01-01

    AIM: To investigate liver stiffness measurement (LSM) applicability and variability with reference to three probe positions relative to the region of liver biopsy. METHODS: Applicability of LSM was defined as at least 10 valid measurements with a success rate greater than 60% and an interquartile range/median LSM < 30%. LSM variability was assessed by comparing inter-position concordance and concordance with FibroTest. RESULTS: Four hundred and forty-two consecutive patients were included. The applicability of the anterior position (81%) was significantly higher than that of the reference (69%) and lower (68%) positions (both P = 0.0001). There was a significant difference (0.5 kPa, 95% CI 0.13-0.89; P < 0.0001) between the mean LSM estimated at the reference position (9.3 kPa) and the anterior position (8.8 kPa). Discordance between positions was associated with thoracic fold (P = 0.008). The discordance rate between the reference position result and FibroTest was higher when the 7.1 kPa cutoff was used to define advanced fibrosis instead of 8.8 kPa (33.6% vs 23.5%, P = 0.03). CONCLUSION: The anterior position of the probe should be the first choice for LSM using Fibroscan, as it has higher applicability without higher variability compared with the usual liver biopsy position. PMID:19610141
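
    The applicability rule quoted above (at least 10 valid measurements, a success rate greater than 60%, and an interquartile range/median below 30%) can be expressed as a simple check over a series of stiffness readings. The sketch below is a minimal illustration of that rule; the readings and variable names are hypothetical.

```python
import numpy as np

def lsm_applicable(valid_kpa, n_attempts):
    """Applicability rule from the abstract: >= 10 valid measurements,
    success rate > 60%, and IQR/median < 30%."""
    valid_kpa = np.asarray(valid_kpa, dtype=float)
    if valid_kpa.size < 10:
        return False
    success_rate = valid_kpa.size / n_attempts
    q1, median, q3 = np.percentile(valid_kpa, [25, 50, 75])
    return success_rate > 0.60 and (q3 - q1) / median < 0.30

readings = [8.4, 9.1, 8.8, 9.5, 8.9, 9.3, 8.7, 9.0, 9.2, 8.6]  # hypothetical kPa values
print(lsm_applicable(readings, n_attempts=12))                  # True for this example
```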

  9. Robust model reference adaptive output feedback tracking for uncertain linear systems with actuator fault based on reinforced dead-zone modification.

    PubMed

    Bagherpoor, H M; Salmasi, Farzad R

    2015-07-01

    In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator faults. Two new lemmas are proposed for both the SISO and MIMO cases, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters is ceased inside the dead-zone region, which results in a residual tracking error while preserving system stability. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach can ensure that all signals of the closed-loop system are bounded under faulty conditions. Finally, the validity and performance of the new schemes are illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
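
    For contrast with the proposed reinforced scheme, the conventional dead-zone modification described above can be sketched for a first-order SISO plant: the adaptive gains are simply frozen whenever the tracking error lies inside the dead zone, so a small residual error may persist. The simulation below is a generic, textbook-style illustration under assumed plant parameters and gains; it is not the controllers developed in the paper.

```python
import numpy as np

# Conventional dead-zone modification for a first-order MRAC (illustrative only):
# plant x' = a*x + b*u, reference model xm' = am*xm + bm*r, control u = kx*x + kr*r,
# with adaptation frozen whenever |e| <= e0 (the dead zone).
a, b = 1.0, 2.0             # "unknown" plant parameters, assumed for the simulation
am, bm = -4.0, 4.0          # stable reference model
gamma, e0 = 10.0, 0.05      # adaptation gain and dead-zone width
dt, T = 1e-3, 10.0

x = xm = 0.0
kx = kr = 0.0
for step in range(int(T / dt)):
    t = step * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0          # square-wave reference
    d = 0.1 * np.sin(5.0 * t)                     # bounded disturbance
    u = kx * x + kr * r
    e = x - xm
    if abs(e) > e0:                               # adapt only outside the dead zone
        kx += dt * (-gamma * e * x * np.sign(b))
        kr += dt * (-gamma * e * r * np.sign(b))
    x += dt * (a * x + b * u + d)
    xm += dt * (am * xm + bm * r)

print(f"final gains kx = {kx:.2f}, kr = {kr:.2f}, final |e| = {abs(x - xm):.3f}")
```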

  10. EMC MODEL FORECAST VERIFICATION STATS

    Science.gov Websites

    Forecast verification statistics page listing 500 mb height bias and RMSE over CONUS (and sub-regions) and surface wind vector bias and RMSE by coastal region (Gulf of Mexico Coast, Southeast Coast, Northeast Coast), for forecast lead times of 48, 54, 60, 72, and 84 hours, valid at 00Z and 12Z (static and loop displays).

  11. Internal Validity: A Must in Research Designs

    ERIC Educational Resources Information Center

    Cahit, Kaya

    2015-01-01

    In experimental research, internal validity refers to what extent researchers can conclude that changes in dependent variable (i.e. outcome) are caused by manipulations in independent variable. The causal inference permits researchers to meaningfully interpret research results. This article discusses (a) internal validity threats in social and…

  12. Development and co-validation of porcine insulin certified reference material by high-performance liquid chromatography-isotope dilution mass spectrometry.

    PubMed

    Wu, Liqing; Takatsu, Akiko; Park, Sang-Ryoul; Yang, Bin; Yang, Huaxin; Kinumi, Tomoya; Wang, Jing; Bi, Jiaming; Wang, Yang

    2015-04-01

    This article concerns the development and co-validation of a porcine insulin (pINS) certified reference material (CRM) produced by the National Institute of Metrology, People's Republic of China. Each CRM unit contained about 15 mg of purified solid pINS. The moisture content, amount of ignition residue, molecular mass, and purity of the pINS were measured. Both high-performance liquid chromatography-isotope dilution mass spectrometry and a purity deduction method were used to determine the mass fraction of the pINS. Fifteen units were selected to study the between-bottle homogeneity, and no inhomogeneity was observed. A stability study concluded that the CRM was stable for at least 12 months at -20 °C. The certified value of the CRM was (0.892 ± 0.036) g/g. A co-validation of the CRM was performed among Chinese, Japanese, and Korean laboratories under the framework of the Asian Collaboration on Reference Materials. The co-validation results agreed well with the certified value of the CRM. Consequently, the pINS CRM may be used as a calibration material or as a validation standard for pharmaceutical purposes to improve the quality of pharmaceutical products.

  13. With Reference to Reference Genes: A Systematic Review of Endogenous Controls in Gene Expression Studies.

    PubMed

    Chapman, Joanne R; Waldenström, Jonas

    2015-01-01

    The choice of reference genes that are stably expressed amongst treatment groups is a crucial step in real-time quantitative PCR gene expression studies. Recent guidelines have specified that a minimum of two validated reference genes should be used for normalisation. However, a quantitative review of the literature showed that the average number of reference genes used across all studies was 1.2. Thus, the vast majority of studies continue to use a single gene, with β-actin (ACTB) and/or glyceraldehyde 3-phosphate dehydrogenase (GAPDH) being commonly selected in studies of vertebrate gene expression. Few studies (15%) tested a panel of potential reference genes for stability of expression before using them to normalise data. Amongst studies specifically testing reference gene stability, few found ACTB or GAPDH to be optimal, whereby these genes were significantly less likely to be chosen when larger panels of potential reference genes were screened. Fewer reference genes were tested for stability in non-model organisms, presumably owing to a dearth of available primers in less well characterised species. Furthermore, the experimental conditions under which real-time quantitative PCR analyses were conducted had a large influence on the choice of reference genes, whereby different studies of rat brain tissue showed different reference genes to be the most stable. These results highlight the importance of validating the choice of normalising reference genes before conducting gene expression studies.

  14. Susceptibility Evaluation and Mapping of CHINA'S Landslide Disaster Based on Multi-Temporal Ground and Remote Sensing Satellite Data

    NASA Astrophysics Data System (ADS)

    Liu, C.; Li, W.; Lu, P.; Sang, K.; Hong, Y.; Li, R.

    2012-07-01

    Under global climate change, landslides now occur in China more frequently than ever before. Landslide hazard and risk assessment remains an international focus for disaster prevention and mitigation, and is an important approach for compiling and quantitatively characterizing landslide damage. By integrating empirical models for landslide disasters with multi-temporal ground data and remote sensing data, this paper will perform a landslide susceptibility assessment throughout China. A landslide susceptibility (LS) map will then be produced, which can be used for disaster evaluation and will provide a basis for analyzing China's major landslide-affected regions. Firstly, building on previous research on landslide susceptibility assessment, this paper collects and analyzes historical landslide event data (location, quantity and distribution) from the past sixty years in China as a reference for later stages of the study. Secondly, this paper will make use of national-scale GIS data provided by the National Geomatics Centre and the China Meteorological Administration, including regional precipitation data, and satellite remote sensing data such as that from TRMM and MODIS. By referring to the historical landslide data of the past sixty years, it is possible to develop models for assessing LS, including empirical models for prediction, and to identify both static and dynamic key factors, such as topography and landforms (elevation, curvature and slope), geologic conditions (lithology of the strata), soil type, vegetation cover, and hydrological conditions (flow distribution). In addition, by analyzing historical data and combining empirical models, it is possible to synthesize a regional statistical model and perform an LS assessment. Finally, based on a 1 km × 1 km grid, the LS map is produced by ANN learning and multiplying the weighted factor layers. The validation is performed with reference to the frequency and distribution of historical data. This research reveals the spatiotemporal distribution of landslide disasters in China. The study develops a complete workflow of data collection, processing, modelling and synthesis, which fulfils the assessment of landslide susceptibility and provides a theoretical basis for the prediction and forecasting of landslide disasters throughout China.
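
    The final mapping step described here, combining normalized factor layers with weights on a regular grid, amounts to a weighted overlay. The sketch below illustrates that overlay on small hypothetical raster layers; in the study the weighting comes from the ANN/statistical model rather than from hand-picked values like these.

```python
import numpy as np

# Weighted-overlay sketch for a landslide susceptibility (LS) grid: each factor
# layer (slope, precipitation, lithology score, ...) is normalized to [0, 1] and
# combined with weights. All layers and weights below are hypothetical.
rng = np.random.default_rng(42)
shape = (5, 5)                                        # stand-in for a 1 km x 1 km grid
layers = {
    "slope":           rng.uniform(0, 45, shape),     # degrees
    "precipitation":   rng.uniform(500, 2000, shape), # mm/yr
    "lithology_score": rng.integers(1, 6, shape).astype(float),
}
weights = {"slope": 0.5, "precipitation": 0.3, "lithology_score": 0.2}

def normalize(a):
    return (a - a.min()) / (a.max() - a.min())

ls_index = sum(weights[name] * normalize(layer) for name, layer in layers.items())
print(np.round(ls_index, 2))                          # higher values = more susceptible
```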

  15. USING REGIONAL EXPOSURE CRITERIA AND UPSTREAM REFERENCE DATA TO CHARACTERIZE SPATIAL AND TEMPORAL EXPOSURES TO CHEMICAL CONTAMINANTS

    EPA Science Inventory

    Analyses of biomarkers in fish were used to evaluate exposures among locations and across time. Two types of references were used for comparison, an upstream reference sample remote from known point sources and regional exposure criteria derived from a baseline of fish from refer...

  17. Validation of Reference Genes for RT-qPCR Studies of Gene Expression in Preharvest and Postharvest Longan Fruits under Different Experimental Conditions

    PubMed Central

    Wu, Jianyang; Zhang, Hongna; Liu, Liqin; Li, Weicai; Wei, Yongzan; Shi, Shengyou

    2016-01-01

    Reverse transcription quantitative PCR (RT-qPCR) is an accurate and sensitive method for gene expression analysis, but the validity and reliability of its results depend on the selection of appropriate reference genes. To date, several reliable reference gene validations have been reported in fruit trees, but none have been done on preharvest and postharvest longan fruits. In this study, 12 candidate reference genes, namely CYP, RPL, GAPDH, TUA, TUB, Fe-SOD, Mn-SOD, Cu/Zn-SOD, 18SrRNA, Actin, Histone H3, and EF-1a, were selected. The expression stability of these genes in 150 longan samples was evaluated and analyzed using the geNorm and NormFinder algorithms. Preharvest samples consisted of seven experimental sets, including different developmental stages, organs, hormone stimuli (NAA, 2,4-D, and ethephon) and abiotic stresses (bagging and girdling with defoliation). Postharvest samples consisted of different temperature treatments (4 and 22°C) and varieties. Our findings indicate that appropriate reference gene(s) should be selected for each experimental condition. Our data further showed that the commonly used reference gene Actin does not exhibit stable expression across experimental conditions in longan. Expression levels of the DlACO gene, a key gene involved in regulating fruit abscission under the girdling with defoliation treatment, were evaluated to validate our findings. In conclusion, our data provide a useful framework for the choice of suitable reference genes across different experimental conditions for RT-qPCR analysis of preharvest and postharvest longan fruits. PMID:27375640

  18. Improving the Canadian Precipitation Analysis Estimates through an Observing System Simulation Experiment

    NASA Astrophysics Data System (ADS)

    Abbasnezhadi, K.; Rasmussen, P. F.; Stadnyk, T.

    2014-12-01

    This study was undertaken to gain a better understanding of the spatiotemporal distribution of rainfall over the Churchill River basin. The research incorporates gridded precipitation data from the Canadian Precipitation Analysis (CaPA) system. CaPA was developed by Environment Canada and provides near real-time precipitation estimates on a 10 km by 10 km grid over North America at a temporal resolution of 6 hours. The spatial fields are generated by combining forecasts from the Global Environmental Multiscale (GEM) model with precipitation observations from the network of synoptic weather stations. CaPA's skill is highly influenced by the number of weather stations in the region of interest as well as by the quality of the observations. To evaluate the performance of CaPA as a function of the density of the weather station network, a dual-stage design algorithm to simulate CaPA is proposed that incorporates generated weather fields. More specifically, we adopt a controlled design generally known as an Observing System Simulation Experiment (OSSE). The advantage of this experiment is that one can define reference precipitation fields assumed to represent the true state of rainfall over the region of interest. In the first stage of the OSSE, a coupled stochastic model of gridded precipitation and temperature fields is calibrated and validated. The performance of the generator is validated by comparing model statistics with observed statistics and by using the generated samples as input to the WATFLOOD™ hydrologic model. In the second stage of the experiment, in order to account for the systematic error of station observations and GEM fields, representative errors are added to the reference field using by-products of CaPA's variographic analysis. These by-products explain the variance of station observations and background errors.

  19. Reference-free determination of tissue absorption coefficient by modulation transfer function characterization in spatial frequency domain.

    PubMed

    Chen, Weiting; Zhao, Huijuan; Li, Tongxin; Yan, Panpan; Zhao, Kuanxin; Qi, Caixia; Gao, Feng

    2017-08-08

    Spatial frequency domain (SFD) measurement allows rapid and non-contact wide-field imaging of tissue optical properties and has thus become a potential tool for assessing physiological parameters and therapeutic responses during photodynamic therapy of skin diseases. Conventional SFD measurement requires a reference measurement within the same experimental scenario as the test one to calibrate the mismatch between the real measurements and the model predictions. Due to the individual physical and geometrical differences among different tissues, organs and patients, an ideal reference measurement might be unavailable in clinical trials. To address this problem, we present a reference-free SFD determination of the absorption coefficient that is based on modulation transfer function (MTF) characterization. Instead of the absolute amplitude that is used in the conventional SFD approaches, we herein employ the MTF to characterize the propagation of the modulated light in tissues. With such a dimensionless relative quantity, the measurements can be naturally matched to the model predictions without calibrating the illumination intensity. By constructing a three-dimensional database that portrays the MTF as a function of the optical properties (both the absorption coefficient μa and the reduced scattering coefficient μs′) and the spatial frequency, a look-up table approach or a least-squares curve-fitting method is readily applied to recover the absorption coefficient from a single frequency or multiple frequencies, respectively. Simulation studies have verified the feasibility of the proposed reference-free method and evaluated its accuracy in the absorption recovery. Experimental validations have been performed on homogeneous tissue-mimicking phantoms with μa ranging from 0.01 to 0.07 mm⁻¹ and μs′ = 1.0 or 2.0 mm⁻¹. The results have shown maximum errors of 4.86% and 7% for μs′ = 1.0 mm⁻¹ and μs′ = 2.0 mm⁻¹, respectively. We have also presented quantitative ex vivo imaging of human lung cancer in a subcutaneous xenograft mouse model for further validation, and observed high absorption contrast in the tumor region. The proposed method can be applied to the rapid and accurate determination of the absorption coefficient, and better yet, in a reference-free way. We believe this reference-free strategy will facilitate the clinical translation of SFD measurement to achieve enhanced intraoperative hemodynamic monitoring and personalized treatment planning in photodynamic therapy.
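
    The look-up-table branch of the recovery step can be pictured as inverting a monotonic MTF-versus-μa curve at the measured spatial frequency, with μs′ held fixed. The sketch below illustrates that inversion against a synthetic, monotonically decreasing stand-in for one slice of the precomputed MTF database; the toy forward curve is an assumption for illustration only, not the forward model used to build the real database.

```python
import numpy as np

# Look-up-table sketch: given a precomputed curve MTF(mu_a) at one spatial
# frequency (fixed reduced scattering), recover mu_a from a measured MTF by 1-D
# interpolation. The "database" below is a synthetic, monotonically decreasing
# placeholder, not the real forward model.
mu_a_grid = np.linspace(0.01, 0.07, 61)               # mm^-1, as in the phantoms
mtf_db = np.exp(-25.0 * mu_a_grid)                    # toy stand-in for MTF(mu_a)

def recover_mu_a(mtf_measured):
    # np.interp requires increasing x, so interpolate over the reversed curve.
    return float(np.interp(mtf_measured, mtf_db[::-1], mu_a_grid[::-1]))

true_mu_a = 0.035
mtf_meas = np.exp(-25.0 * true_mu_a) * 1.01           # measurement with 1% error
print(f"recovered mu_a = {recover_mu_a(mtf_meas):.4f} mm^-1 (true value {true_mu_a})")
```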

  20. Examination of a Method to Determine the Reference Region for Calculating the Specific Binding Ratio in Dopamine Transporter Imaging.

    PubMed

    Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu

    2017-01-01

    The specific binding ratio (SBR) was first reported by Tossici-Bolt et al. as a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration of the striatum to the non-specific binding concentration of the whole brain other than the striatum. The non-specific binding concentration is calculated from a region of interest (ROI) set 20 mm inside the outer contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but with a 50% threshold we sometimes could not define the ROI of the non-specific binding concentration (the reference region) or calculate the SBR appropriately. Therefore, we sought a new method for determining the reference region when calculating the SBR. We used data from 20 patients who had undergone DAT imaging in our hospital to calculate the non-specific binding concentration by two methods: fixing the threshold that defines the reference region at specific values (the fixing method), and visually optimizing the reference region for each examination (the visual optimization method). First, we assessed the reference region of each method visually; afterward, we quantitatively compared the SBR calculated with each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as a baseline (the standard method). The SBR values showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
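
    At its core the quantity being computed is SBR = (C_striatum - C_reference) / C_reference, with the reference (non-specific) concentration taken from a brain region defined by a count threshold. The sketch below is a simplified voxel-level illustration using a hypothetical image array and a 30% threshold; the published Tossici-Bolt method additionally uses a volume-of-interest formulation and the 20 mm contour erosion mentioned above, which are omitted here.

```python
import numpy as np

# Simplified SBR sketch: reference region = voxels above a fractional threshold of
# the image maximum, excluding the striatal VOI; SBR = (C_str - C_ref) / C_ref.
# The image and masks below are hypothetical placeholders.
def specific_binding_ratio(image, striatal_mask, threshold_fraction=0.30):
    brain_mask = image > threshold_fraction * image.max()
    reference_mask = brain_mask & ~striatal_mask
    c_str = image[striatal_mask].mean()
    c_ref = image[reference_mask].mean()
    return (c_str - c_ref) / c_ref

rng = np.random.default_rng(1)
image = rng.normal(10.0, 1.0, size=(32, 32, 32))       # background counts
striatal_mask = np.zeros(image.shape, dtype=bool)
striatal_mask[12:20, 12:20, 12:20] = True
image[striatal_mask] += 10.0                            # elevated striatal uptake
print(f"SBR = {specific_binding_ratio(image, striatal_mask):.2f}")
```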

  1. E&V (Evaluation and Validation) Reference Manual, Version 1.0.

    DTIC Science & Technology

    1988-07-01

    The Reference Manual provides general reference information extracted from indexes and cross references (Chapter 4), allows users to arrive at E&V techniques through many different paths, and provides a means to extract useful information along the way. Comments may be submitted electronically (preferred) to szymansk@ajpo.sei.cmu.edu or by regular mail to Mr. Raymond Szymanski, Wright Patterson AFB, OH 45433-6543.

  2. Spatial calibration and temporal validation of flow for regional scale hydrologic modeling

    USDA-ARS?s Scientific Manuscript database

    Physically based regional scale hydrologic modeling is gaining importance for planning and management of water resources. Calibration and validation of such regional scale model is necessary before applying it for scenario assessment. However, in most regional scale hydrologic modeling, flow validat...

  3. Determination of reference ranges for elements in human scalp hair.

    PubMed

    Druyan, M E; Bass, D; Puchyr, R; Urek, K; Quig, D; Harmon, E; Marquardt, W

    1998-06-01

    Expected values, reference ranges, or reference limits are necessary to enable clinicians to apply analytical chemical data in the delivery of health care. Determination of reference ranges is not straightforward in terms of either selecting a reference population or performing statistical analysis. In light of logistical, scientific, and economic obstacles, it is understandable that clinical laboratories often combine approaches in developing health-associated reference values. A laboratory may choose to: 1. Validate either reference ranges of other laboratories or published data from clinical research, or both, through comparison of patients' test data. 2. Base the laboratory's reference values on statistical analysis of results from specimens assayed by the clinical reference laboratory itself. 3. Adopt standards or recommendations of regulatory agencies and governmental bodies. 4. Initiate population studies to validate transferred reference ranges or to determine them anew. Effects of external contamination and anecdotal information from clinicians may be considered. The clinical utility of hair analysis is well accepted for some elements. For others, it remains in the realm of clinical investigation. This article elucidates an approach for establishing reference ranges for elements in human scalp hair. Observed levels of analytes from hair specimens from both our laboratory's total patient population and from a physician-defined healthy American population have been evaluated. Examination of levels of elements often associated with toxicity serves to exemplify the process of determining reference ranges in hair. In addition, the approach serves as a model for setting reference ranges for analytes in a variety of matrices.

  4. Molecular cytogenetic characterization of a familial pericentric inversion 3 associated with short stature.

    PubMed

    Dutta, Usha R; Hansmann, Ingo; Schlote, Dietmar

    2015-03-01

    Short stature refers to the height of an individual being below that expected. The causes are heterogeneous and influenced by several genetic and environmental factors. Chromosomal abnormalities are a major cause of disease, and cytogenetic mapping is a powerful tool for the identification of novel disease genes. Here we report a three-generation family with a heterozygous pericentric inversion, 46,XX,inv(3)(p24.1q26.1), associated with short stature. A positional cloning strategy was used to physically map the breakpoint regions by fluorescence in situ hybridization (FISH). Fine mapping was performed with Bacterial Artificial Chromosome (BAC) clones spanning the breakpoint regions. To further characterize the breakpoint regions, extensive molecular mapping was carried out with the breakpoint-spanning BACs, which narrowed the breakpoints down to 2.9 kb and 5.3 kb regions on the p and q arms, respectively. Although these breakpoints did not disrupt any validated genes, we identified a novel putative gene in the vicinity of the 3q26.1 breakpoint region by in silico analysis. To detect transcripts of this putative gene, we analyzed human total RNA by RT-PCR and identified transcripts containing three new exons, confirming the existence of a so far unknown gene close to the 3q breakpoint. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  5. School Age Populations Research Needs - NCS Dietary Assessment Literature Review

    Cancer.gov

    Drawing conclusions about the validity of available dietary assessment instruments in school age children is hampered by the differences in instruments, research design, reference methods, and populations in the validation literature.

  6. Guideline for translation and national validation of the Quality of Life in Hand Eczema Questionnaire (QOLHEQ).

    PubMed

    Oosterhaven, Jart A F; Schuttelaar, Marie L A; Apfelbacher, Christian; Diepgen, Thomas L; Ofenloch, Robert F

    2017-08-01

    There is a need for well-developed and validated questionnaires to measure patient reported outcomes. The Quality of Life in Hand Eczema Questionnaire (QOLHEQ) is such a validated instrument measuring disease-specific health-related quality of life in hand eczema patients. A re-validation of measurement properties is required before an instrument is used in a new population. With the objective of arriving at a guideline for translation and national validation of the QOLHEQ, we have developed the design of a reference study on how to adequately assess measurement properties of the QOLHEQ based on interdisciplinary discussions and current standards. We present a step-by-step guideline to assess translation (including cross-cultural adaptation), scale structure, validity, reproducibility, responsiveness, and interpretability. We describe which outcomes should be reported for each measurement property, and give advice on how to calculate these. It is also specified which sample size is needed, how to deal with missing data, and which cutoff values should be applied for the measurement properties assessed during the validation process. In conclusion, this guideline, presenting a reference validation study for the QOLHEQ, creates the possibility to harmonize the national validation of the various language versions of the QOLHEQ. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Functional Effects of Genetic Polymorphisms in the N-acetyltransferase 1 Coding and 3′ Untranslated Regions

    PubMed Central

    Zhu, Yuanqi; States, J. Christopher; Wang, Yang; Hein, David W.

    2011-01-01

    BACKGROUND The functional effects of N-acetyltransferase 1 (NAT1) polymorphisms and haplotypes are poorly understood, compromising the validity of associations reported with diseases including birth defects and numerous cancers. METHODS We investigated the effects of genetic polymorphisms within the NAT1 coding region and the 3′-untranslated region (3′-UTR) and their associated haplotypes on N- and O-acetyltransferase catalytic activities, and NAT1 mRNA and protein levels following recombinant expression in COS-1 cells. RESULTS 1088T>A (rs1057126; 3′-UTR) and 1095C>A (rs15561; 3′-UTR) each slightly reduced NAT1 catalytic activity and NAT1 mRNA and protein levels. A 9-base pair (TAATAATAA) deletion between nucleotides 1065-1090 (3′-UTR) reduced NAT1 catalytic activity and NAT1 mRNA and protein levels. In contrast, a 445G>A (rs4987076; V149I), 459G>A (rs4986990; T153T), 640T>G (rs4986783; S214A) coding region haplotype present in NAT1*11 increased NAT1 catalytic activity and NAT1 protein, but not NAT1 mRNA levels. A combination of the 9-base pair (TAATAATAA) deletion and the 445G>A, 459G>A, 640T>G coding region haplotypes, both present in NAT1*11, appeared to neutralize the opposing effects on NAT1 protein and catalytic activity, resulting in levels of NAT1 protein and catalytic activity that did not differ significantly from the NAT1*4 reference. CONCLUSIONS Since 1095C>A (3′-UTR) is the sole polymorphism present in NAT1*3, our data suggests that NAT1*3 is not functionally equivalent to the NAT1*4 reference. Furthermore, our findings provide biological support for reported associations of 1088T>A and 1095C>A polymorphisms with birth defects. PMID:21290563

  8. Relatively slow stochastic gene-state switching in the presence of positive feedback significantly broadens the region of bimodality through stabilizing the uninduced phenotypic state.

    PubMed

    Ge, Hao; Wu, Pingping; Qian, Hong; Xie, Xiaoliang Sunney

    2018-03-01

    Within an isogenic population, even in the same extracellular environment, individual cells can exhibit various phenotypic states. The exact role of stochastic gene-state switching regulating the transition among these phenotypic states in a single cell is not fully understood, especially in the presence of positive feedback. Recent high-precision single-cell measurements showed that, at least in bacteria, switching in gene states is slow relative to the typical rates of active transcription and translation. Hence using the lac operon as an archetype, in such a region of operon-state switching, we present a fluctuating-rate model for this classical gene regulation module, incorporating the more realistic operon-state switching mechanism that was recently elucidated. We found that the positive feedback mechanism induces bistability (referred to as deterministic bistability), and that the parameter range for its occurrence is significantly broadened by stochastic operon-state switching. We further show that in the absence of positive feedback, operon-state switching must be extremely slow to trigger bistability by itself. However, in the presence of positive feedback, which stabilizes the induced state, the relatively slow operon-state switching kinetics within the physiological region are sufficient to stabilize the uninduced state, together generating a broadened parameter region of bistability (referred to as stochastic bistability). We illustrate the opposite phenotype-transition rate dependence upon the operon-state switching rates in the two types of bistability, with the aid of a recently proposed rate formula for fluctuating-rate models. The rate formula also predicts a maximal transition rate in the intermediate region of operon-state switching, which is validated by numerical simulations in our model. Overall, our findings suggest a biological function of transcriptional "variations" among genetically identical cells, for the emergence of bistability and transition between phenotypic states.
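
    A minimal way to picture the fluctuating-rate description is a protein level that relaxes deterministically toward a state-dependent target while the operon state flips stochastically between off and on, with positive feedback letting the protein level raise the switching-on rate. The simulation below is a generic toy of that scheme under assumed rate constants; it is not the calibrated lac operon model of the paper.

```python
import numpy as np

# Toy fluctuating-rate simulation: protein x follows dx/dt = k_state - d*x between
# switches, while the operon state flips Off/On as a two-state Markov process.
# Positive feedback: the Off->On rate increases with x (Hill form). All rate
# constants are assumed for illustration only.
rng = np.random.default_rng(7)
k_off_state, k_on_state, d = 0.5, 10.0, 1.0   # synthesis rates (per state) and decay
f_max, K, h = 0.20, 4.0, 2.0                  # feedback on the Off->On switching rate
k_on_to_off = 0.05                            # slow switching back to Off
dt, T = 0.01, 400.0

x, state = 0.0, 0                             # start uninduced (Off)
trace = []
for _ in range(int(T / dt)):
    k_syn = k_on_state if state == 1 else k_off_state
    x += dt * (k_syn - d * x)
    if state == 0:
        switch_rate = f_max * x**h / (K**h + x**h)   # feedback-dependent activation
    else:
        switch_rate = k_on_to_off
    if rng.random() < switch_rate * dt:              # switch with probability ~ rate*dt
        state = 1 - state
    trace.append(x)

print(f"mean protein level = {np.mean(trace):.2f}, final state = {'On' if state else 'Off'}")
```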

  9. Reference Correlation for the Viscosity of Ethane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogel, Eckhard, E-mail: eckhard.vogel@uni-rostock.de; Span, Roland; Herrmann, Sebastian

    2015-12-15

    A new representation of the viscosity for the fluid phase of ethane includes a zero-density correlation and a contribution for the critical enhancement, initially both developed separately, but based on experimental data. The higher-density contributions are correlated as a function of the reduced density δ = ρ/ρc and of the reciprocal reduced temperature τ = Tc/T (ρc: critical density; Tc: critical temperature). The final formulation contains 14 coefficients obtained using a state-of-the-art linear optimization algorithm. The evaluation and choice of the selected primary data sets is reviewed, in particular with respect to the assessment used in earlier viscosity correlations. The new viscosity surface correlation makes use of the reference equation of state for the thermodynamic properties of ethane by Bücker and Wagner [J. Phys. Chem. Ref. Data 35, 205 (2006)] and is valid in the fluid region from the melting line to temperatures of 675 K and pressures of 100 MPa. The viscosity in the limit of zero density is described with an expanded uncertainty of 0.5% (coverage factor k = 2) for temperatures 290 < T/K < 625, increasing to 1.0% at temperatures down to 212 K. The uncertainty of the correlated values is 1.5% in the range 290 < T/K < 430 at pressures up to 30 MPa on the basis of recent measurements judged to be very reliable as well as 4.0% and 6.0% in further regions. The uncertainty in the near-critical region (1.001 < 1/τ < 1.010 and 0.8 < δ < 1.2) increases with decreasing temperature up to 3.0% considering the available reliable data. Tables of the viscosity calculated from the correlation are listed in an appendix for the single-phase region, for the vapor–liquid phase boundary, and for the near-critical region.

  10. Impact of open manganese mines on the health of children dwelling in the surrounding area

    PubMed Central

    Duka, Ykateryna D.; Ilchenko, Svetlana I.; Kharytonov, Mykola M.; Vasylyeva, Tetyana L.

    2011-01-01

    Introduction: Chronic manganese (Mn) exposure is a health hazard associated with the mining and processing of Mn ores. Children living in an area with increased environmental exposure to Mn may have symptoms of chronic toxicity that are different from those of adults who experience occupational exposure. The aim of the study was to compare health outcomes in a pediatric population living near open Mn mines with a group of children from a reference area and then to develop and implement preventive/rehabilitation measures to protect the children in the mining region. Methods: After environmental assessment, a group of 683 children living in a Mn-rich region of Ukraine were screened by clinical evaluation, detection of sIgA (37 children), micronucleus analysis (56 children), and hair Mn content (166 children). Results: Impaired growth and rickets-like skeletal deformities were observed in 33% of the children. This was a significantly higher percentage than in children in the reference region (15%). The children from the Mn-mining region also had increased salivary levels of immunoglobulin A (104.4±14.2 mcg/ml vs. 49.7±6.1 mcg/ml among the controls; p<0.05), increased serum alpha 1 proteinase inhibitor levels (4.93±0.21 g/l compared with 2.91±0.22 g/l for controls; p<0.001) and greater numbers of micronuclei in the mucous cells of the oral cavity (0.070±0.008 vs. 0.012±0.009; p<0.001). Conclusions: These findings indicate the deleterious health consequences of living in a Mn-mining area. Medical rehabilitation programs were conducted and produced positive results, but further validation of their effectiveness is required. The study provided background information to formulate evidence-based decisions about public health in a region of high Mn exposure. PMID:24149028

  11. Using Satellite Imagery to Quantify Water Quality Impacts and Recovery from Hurricane Harvey

    NASA Astrophysics Data System (ADS)

    Sobel, R. S.; Kiaghadi, A.; Rifai, H. S.

    2017-12-01

    Record rainfall during Hurricane Harvey in the Houston-Galveston region generated record flows containing suspended sediment that was likely contaminated. Conventional water quality monitoring requires resource-intensive field campaigns and produces sparse datasets. In this study, satellite data were used to quantify total suspended sediment (TSS) concentrations and mass within the region's estuary system and to estimate sediment deposition and transport. A conservative two-band, red-green empirical regression was developed from Sentinel-2 imagery to calculate TSS concentrations and masses. The regression was calibrated with an R2 = 0.73 (n=28) and validated with an R2 = 0.75 (n=12) using 2016 and 2017 imagery. TSS concentrations four days, 14 days, and 44 days post-storm were compared with a reference condition three days before storm arrival. Results indicated that TSS concentrations were an average of 100% higher four days post-storm and 150% higher after 14 days; however, the average concentration on day 44 was only seven percent higher than the reference condition, suggesting the estuary system is approaching recovery to pre-storm conditions. Sediment masses were determined from the regressed concentrations and water volumes estimated from a bottom elevation grid combined with water surface elevations observed coincidently with the satellite image. While water volumes were only 13% higher on both day four and day 14 post-storm, sediment masses were 195% and 227% higher than the reference condition, respectively. By day 44, estuary sediment mass had returned to just 2.9% above the reference load. From a mechanistic standpoint, the elevated TSS concentrations on day four indicated an advection-based regime due to stormwater runoff draining through the estuarine system. Sometime between days 14 and 44, however, transport conditions switched from advection-dominated to deposition-driven, as indicated by the near-normal TSS concentrations on day 44.
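
    The retrieval rests on an empirical regression between a red/green band ratio and TSS; the exact functional form and coefficients are not given in the abstract, so the sketch below simply fits a log-linear model of TSS against the Sentinel-2 red/green reflectance ratio using hypothetical calibration samples.

```python
import numpy as np

# Hypothetical calibration of a two-band (red/green) empirical TSS regression.
# The log-linear form and the sample values are assumptions for illustration;
# the study's actual regression is not reproduced here.
red   = np.array([0.030, 0.045, 0.060, 0.080, 0.110, 0.150])   # surface reflectance
green = np.array([0.050, 0.055, 0.060, 0.065, 0.075, 0.085])
tss   = np.array([8.0, 15.0, 30.0, 55.0, 110.0, 220.0])        # mg/L (field samples)

ratio = red / green
slope, intercept = np.polyfit(ratio, np.log(tss), 1)            # ln(TSS) = a*ratio + b

def predict_tss(red_band, green_band):
    return np.exp(slope * (red_band / green_band) + intercept)

log_pred = np.log(predict_tss(red, green))
r2 = 1 - np.sum((np.log(tss) - log_pred) ** 2) / np.sum((np.log(tss) - np.log(tss).mean()) ** 2)
print(f"calibration R^2 (log space) = {r2:.2f}")
print(f"predicted TSS at red=0.09, green=0.07: {predict_tss(0.09, 0.07):.0f} mg/L")
```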

  12. Novel word acquisition in aphasia: Facing the word-referent ambiguity of natural language learning contexts.

    PubMed

    Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni

    2016-06-01

    Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Novel word acquisition in aphasia: Facing the word-referent ambiguity of natural language learning contexts

    PubMed Central

    Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C.; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni

    2017-01-01

    Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region. PMID:27085892

  14. Noninvasive bi-graphical analysis for the quantification of slowly reversible radioligand binding

    NASA Astrophysics Data System (ADS)

    Seo, Seongho; Kim, Su Jin; Yoo, Hye Bin; Lee, Jee-Young; Kyeong Kim, Yu; Lee, Dong Soo; Zhou, Yun; Lee, Jae Sung

    2016-09-01

    In this paper, we presented a novel reference-region-based (noninvasive) bi-graphical analysis for the quantification of a reversible radiotracer binding that may be too slow to reach relative equilibrium (RE) state during positron emission tomography (PET) scans. The proposed method indirectly implements the noninvasive Logan plot, through arithmetic combination of the parameters of two other noninvasive methods and the apparent tissue-to-plasma efflux rate constant for the reference region (k2′). We investigated its validity and statistical properties, by performing a simulation study with various noise levels and k2′ values, and also evaluated its feasibility for [18F]FP-CIT PET in human brain. The results revealed that the proposed approach provides distribution volume ratio estimation comparable to the Logan plot at low noise levels while improving underestimation caused by non-RE state differently depending on k2′. Furthermore, the proposed method was able to avoid noise-induced bias of the Logan plot, and the variability of its results was less dependent on k2′ than the Logan plot. Therefore, this approach, without issues related to arterial blood sampling given a pre-estimate of k2′ (e.g. population-based), could be useful in parametric image generation for slow kinetic tracers staying in a non-RE state within a PET scan.
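
    For context, the noninvasive (reference-region) Logan plot that the proposed bi-graphical method emulates can be sketched in a few lines. The code below is a generic Logan reference-tissue slope fit on synthetic time-activity curves, not the paper's bi-graphical estimator; the k2′ value, the linearity time t*, and the kinetics are all assumptions for illustration.

```python
import numpy as np

def logan_ref_dvr(t, ct, cr, k2p, t_star):
    """Reference-region Logan plot: returns the slope, i.e. the DVR estimate.

    t      : mid-frame times (min)
    ct     : target-region time-activity curve
    cr     : reference-region time-activity curve
    k2p    : assumed reference-region efflux rate constant k2' (1/min)
    t_star : time after which the plot is treated as linear
    """
    int_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (ct[1:] + ct[:-1]))))
    int_cr = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (cr[1:] + cr[:-1]))))
    mask = (t >= t_star) & (ct > 0)
    y = int_ct[mask] / ct[mask]
    x = (int_cr[mask] + cr[mask] / k2p) / ct[mask]
    slope, _ = np.polyfit(x, y, 1)
    return slope

# Synthetic example (illustrative kinetics only, not [18F]FP-CIT data)
t = np.arange(1.0, 91.0, 1.0)
cr = np.exp(-t / 30.0) - np.exp(-t / 3.0)          # reference-region TAC
ct = 2.0 * (np.exp(-t / 60.0) - np.exp(-t / 5.0))  # target TAC with higher binding
print(f"DVR estimate: {logan_ref_dvr(t, ct, cr, k2p=0.15, t_star=40.0):.2f}")
```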

  15. Crowd-sourced data collection to support automatic classification of building footprint data

    NASA Astrophysics Data System (ADS)

    Hecht, Robert; Kalla, Matthias; Krüger, Tobias

    2018-05-01

    Human settlements are mainly formed by buildings with their different characteristics and usage. Despite the importance of buildings for the economy and society, complete regional or even national figures of the entire building stock and its spatial distribution are still hardly available. Available digital topographic data sets created by National Mapping Agencies or mapped voluntarily through a crowd via Volunteered Geographic Information (VGI) platforms (e.g. OpenStreetMap) contain building footprint information but often lack additional information on building type, usage, age or number of floors. For this reason, predictive modeling is becoming increasingly important in this context. The capabilities of machine learning allow for the prediction of building types and other building characteristics and thus, the efficient classification and description of the entire building stock of cities and regions. However, such data-driven approaches always require a sufficient amount of ground truth (reference) information for training and validation. The collection of reference data is usually cost-intensive and time-consuming. Experiences from other disciplines have shown that crowdsourcing offers the possibility to support the process of obtaining ground truth data. Therefore, this paper presents the results of an experimental study aiming at assessing the accuracy of non-expert annotations on street view images collected from an internet crowd. The findings provide the basis for a future integration of a crowdsourcing component into the process of land use mapping, particularly the automatic building classification.
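
    A minimal sketch of the predictive-modeling step referred to above: a random-forest classifier trained on footprint-derived features with (crowd-sourced) labels. The features, labels, and the toy decision rule are invented for illustration; a real workflow would derive features from actual footprints and use the collected annotations as ground truth.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic footprint features: area (m^2), elongation, number of neighbouring
# buildings; labels 0 = residential, 1 = non-residential (toy rule below).
n = 1000
area = rng.lognormal(mean=5.0, sigma=0.6, size=n)
elong = rng.uniform(1.0, 5.0, size=n)
neighbours = rng.integers(0, 10, size=n)
labels = ((area > 250) & (neighbours < 3)).astype(int)

X = np.column_stack([area, elong, neighbours])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```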

  16. Five Years of JOSIE: Assessment of the Performance of Ozone Sondes Under Quasi-Flight Conditions in the Environmental Simulation Chamber With Regard to Satellite Validation

    NASA Astrophysics Data System (ADS)

    Smit, H. G.; Straeter, W.; Helten, M.; Kley, D.

    2002-05-01

    Up to an altitude of about 20 km, ozone sondes constitute the most important data source with long-term coverage for the derivation of ozone trends with sufficient vertical resolution, particularly in the important altitude region around the tropopause. In this region, and also above in the lower/middle stratosphere up to 30-35 km altitude, ozone sondes are of crucial importance for validating and evaluating satellite measurements, particularly their long-term stability. Each ozone sounding is made with an individual disposable instrument that therefore has to be well characterized prior to flight, making quality assurance (QA) of ozone sonde performance a prerequisite. As part of the QA plan for ozone sondes that are in routine use in the Global Atmosphere Watch program of the World Meteorological Organization, the environmental simulation chamber at the Research Centre Juelich (Germany) has been established as the World Calibration Centre for Ozone Sondes. The facility enables control of pressure, temperature and ozone concentration and can simulate flight conditions of ozone soundings up to an altitude of 35 km, whereby an accurate UV photometer serves as a reference. Within the scope of this QA plan, several JOSIE (Juelich Ozone Sonde Intercomparison Experiment) campaigns assessing the performance of ozone sondes of different types and manufacturers have been conducted at the calibration facility since 1996. We will present an overview of the results obtained from the different JOSIE experiments. The results will be discussed with regard to the use of ozone sondes to validate satellite measurements. Special attention will be paid to the influence of operating procedures on the performance of sondes and the need for standardization to assure ozone sounding data of sufficient quality for use in satellite validations.

  17. Clinical Validation of Targeted Next Generation Sequencing for Colon and Lung Cancers

    PubMed Central

    D’Haene, Nicky; Le Mercier, Marie; De Nève, Nancy; Blanchard, Oriane; Delaunoy, Mélanie; El Housni, Hakim; Dessars, Barbara; Heimann, Pierre; Remmelink, Myriam; Demetter, Pieter; Tejpar, Sabine; Salmon, Isabelle

    2015-01-01

    Objective: Recently, Next Generation Sequencing (NGS) has begun to supplant other technologies for gene mutation testing that is now required for targeted therapies. However, transfer of NGS technology to clinical daily practice requires validation. Methods: We validated the Ion Torrent AmpliSeq Colon and Lung cancer panel interrogating 1850 hotspots in 22 genes using the Ion Torrent Personal Genome Machine. First, we used commercial reference standards that carry mutations at defined allelic frequency (AF). Then, 51 colorectal adenocarcinomas (CRC) and 39 non-small cell lung carcinomas (NSCLC) were retrospectively analyzed. Results: Sensitivity and accuracy for detecting variants at an AF >4% were 100% for commercial reference standards. Among the 90 cases, 89 (98.9%) were successfully sequenced. Among the 86 samples for which NGS and the reference test were both informative, 83 showed concordant results between NGS and the reference test, i.e. KRAS and BRAF for CRC and EGFR for NSCLC, with the 3 discordant cases each characterized by an AF <10%. Conclusions: Overall, the AmpliSeq colon/lung cancer panel was specific and sensitive for mutation analysis of gene panels and can be incorporated into clinical daily practice. PMID:26366557

  18. Validation of reference genes for gene expression studies in soybean aphid, Aphis glycines Matsumura

    USDA-ARS?s Scientific Manuscript database

    Quantitative real-time PCR (qRT-PCR) is a common tool for quantifying mRNA transcripts. To normalize results, a reference gene is mandatory. Aphis glycines is a significant soybean pest, yet gene expression and functional genomics studies are hindered by a lack of stable reference genes. We evalu...

  19. POTENTIAL RADIOACTIVE POLLUTANTS RESULTING FROM EXPANDED ENERGY PROGRAMS

    EPA Science Inventory

    An effective environmental monitoring program must have a quality assurance component to assure the production of valid data. Quality assurance has many components: calibration standards, standard reference materials, standard reference methods, interlaboratory comparison studies...

  20. Traveling reference spectroradiometer for routine quality assurance of spectral solar ultraviolet irradiance measurements.

    PubMed

    Gröbner, Julian; Schreder, Josef; Kazadzis, Stelios; Bais, Alkiviadis F; Blumthaler, Mario; Görts, Peter; Tax, Rick; Koskela, Tapani; Seckmeyer, Gunther; Webb, Ann R; Rembges, Diana

    2005-09-01

    A transportable reference spectroradiometer for measuring spectral solar ultraviolet irradiance has been developed and validated. The expanded uncertainty of solar irradiance measurements with this reference spectroradiometer, based on the described methodology, is 8.8% to 4.6%, depending on the wavelength and the solar zenith angle. The accuracy of the spectroradiometer was validated by repeated site visits to two European UV monitoring sites as well as by regular comparisons with the reference spectroradiometer of the European Reference Centre for UV radiation measurements in Ispra, Italy. The spectral solar irradiance measurements of the Quality Assurance of Spectral Ultraviolet Measurements in Europe through the Development of a Transportable Unit (QASUME) spectroradiometer and these three spectroradiometers have agreed to better than 6% during the ten intercomparison campaigns held from 2002 to 2004. If the differences in irradiance scales of as much as 2% are taken into account, the agreement is of the order of 4% over the wavelength range of 300-400 nm.

  1. Spatial averaging of fields from half-wave dipole antennas and corresponding SAR calculations in the NORMAN human voxel model between 65 MHz and 2 GHz.

    PubMed

    Findlay, R P; Dimbylow, P J

    2009-04-21

    If an antenna is located close to a person, the electric and magnetic fields produced by the antenna will vary in the region occupied by the human body. To obtain a mean value of the field for comparison with reference levels, the Institute of Electrical and Electronics Engineers (IEEE) and the International Commission on Non-Ionizing Radiation Protection (ICNIRP) recommend spatially averaging the squares of the field strength over the height of the body. This study attempts to assess the validity and accuracy of spatial averaging when used for half-wave dipoles at frequencies between 65 MHz and 2 GHz and distances of λ/2, λ/4 and λ/8 from the body. The differences between mean electric field values calculated using ten field measurements and the true averaged value were approximately 15% in the 600 MHz to 2 GHz range. The results presented suggest that the use of modern survey equipment, which takes hundreds rather than tens of measurements, is advisable to arrive at a sufficiently accurate mean field value. Whole-body averaged and peak localized SAR values, normalized to the calculated spatially averaged fields, were calculated for the NORMAN voxel phantom. It was found that the reference levels were conservative for all whole-body SAR values, but not for localized SAR, particularly in the 1-2 GHz region when the dipole was positioned very close to the body. However, if the maximum field is used for normalization of calculated SAR instead of the lower spatially averaged value, the reference levels provide a conservative estimate of the localized SAR basic restriction for all frequencies studied.
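
    The recommended averaging operation is the square root of the mean of the squared field samples taken over the height of the body. A short sketch comparing a 10-point and a dense spatial average on an assumed (not measured) field profile:

```python
import numpy as np

def spatially_averaged_field(e_samples):
    """RMS-style spatial average: square root of the mean of the squared field
    values, as recommended for comparison with reference levels."""
    e = np.asarray(e_samples, dtype=float)
    return np.sqrt(np.mean(e ** 2))

# Illustrative electric-field profile in front of a dipole (V/m vs. height),
# not taken from the paper: strongest near the feed point, weaker at head/feet.
z = np.linspace(0.0, 1.75, 500)                       # heights over the body (m)
e_true = 20.0 + 40.0 * np.exp(-((z - 1.0) / 0.3) ** 2)

coarse = spatially_averaged_field(np.interp(np.linspace(0.0, 1.75, 10), z, e_true))
fine = spatially_averaged_field(e_true)
print(f"10-point average: {coarse:.1f} V/m, 500-point average: {fine:.1f} V/m, "
      f"difference: {100 * abs(coarse - fine) / fine:.1f}%")
```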

  2. A biological assessment of streams in the eastern United States using a predictive model for macroinvertebrate assemblages

    USGS Publications Warehouse

    Carlisle, D.M.; Meador, M.R.

    2007-01-01

    A predictive model (RIVPACS-type) for benthic macroinvertebrates was constructed to assess the biological condition of 1,087 streams sampled throughout the eastern United States from 1993-2003 as part of the U.S. Geological Survey's National Water-Quality Assessment Program. A subset of 338 sites was designated as reference quality, 28 of which were withheld from model calibration and used to independently evaluate model precision and accuracy. The ratio of observed (O) to expected (E) taxa richness was used as a continuous measure of biological condition, and sites with O/E values <0.8 were classified as biologically degraded. Spatiotemporal variability of O/E values was evaluated with repeated annual and within-site samples at reference sites. Values of O/E were regressed on a measure of urbanization in three regions and compared among streams in different land-use settings. The model accurately predicted the expected taxa at validation sites with high precision (SD = 0.11). Within-site spatial variability in O/E values was much larger than annual and among-site variation at reference sites and was likely caused by environmental differences among sampled reaches. Values of O/E were significantly correlated with basin road density in the Boston, Massachusetts (p < 0.001), Birmingham, Alabama (p = 0.002), and Green Bay, Wisconsin (p = 0.034) metropolitan areas, but the strength of the relations varied among regions. Urban streams were more depleted of taxa than streams in other land-use settings, but larger networks of riparian forest appeared to mediate biological degradation. Taxa that occurred less frequently than predicted by the model were those known to be generally intolerant of a variety of anthropogenic stressors. © 2007 American Water Resources Association.
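
    A minimal illustration of the O/E metric in the RIVPACS style: E is the sum of model-predicted capture probabilities (restricted here to taxa with p >= 0.5, a common convention), O is the number of those taxa actually collected, and O/E < 0.8 flags a degraded site. The probabilities and the sample below are invented.

```python
# Illustrative capture probabilities from a hypothetical predictive model
predicted_p = {"Baetis": 0.95, "Hydropsyche": 0.80, "Chironomidae": 0.99,
               "Epeorus": 0.65, "Acroneuria": 0.55, "Elmidae": 0.45}
collected = {"Baetis", "Chironomidae", "Hydropsyche"}   # taxa found in the sample

taxa = {t: p for t, p in predicted_p.items() if p >= 0.5}  # restrict to p >= 0.5
E = sum(taxa.values())                                     # expected richness
O = sum(1 for t in taxa if t in collected)                 # observed richness
oe = O / E
print(f"O = {O}, E = {E:.2f}, O/E = {oe:.2f} -> "
      f"{'degraded' if oe < 0.8 else 'not degraded'} (threshold 0.8)")
```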

  3. Routine development of objectively derived search strategies.

    PubMed

    Hausner, Elke; Waffenschmidt, Siw; Kaiser, Thomas; Simon, Michael

    2012-02-29

    Over the past few years, information retrieval has become more and more professionalized, and information specialists are considered full members of a research team conducting systematic reviews. Research groups preparing systematic reviews and clinical practice guidelines have been the driving force in the development of search strategies, but open questions remain regarding the transparency of the development process and the available resources. An empirically guided approach to the development of a search strategy provides a way to increase transparency and efficiency. Our aim in this paper is to describe the empirically guided development process for search strategies as applied by the German Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, or "IQWiG"). This strategy consists of the following steps: generation of a test set, as well as the development, validation and standardized documentation of the search strategy. We illustrate our approach by means of an example, that is, a search for literature on brachytherapy in patients with prostate cancer. For this purpose, a test set was generated, including a total of 38 references from 3 systematic reviews. The development set for the generation of the strategy included 25 references. After application of textual analytic procedures, a strategy was developed that included all references in the development set. To test the search strategy on an independent set of references, the remaining 13 references in the test set (the validation set) were used. The validation set was also completely identified. Our conclusion is that an objectively derived approach similar to that used in search filter development is a feasible way to develop and validate reliable search strategies. Besides creating high-quality strategies, the widespread application of this approach will result in a substantial increase in the transparency of the development process of search strategies.

  4. Performance of a Blood Pressure Smartphone App in Pregnant Women: The iPARR Trial (iPhone App Compared With Standard RR Measurement).

    PubMed

    Raichle, Christina J; Eckstein, Jens; Lapaire, Olav; Leonardi, Licia; Brasier, Noé; Vischer, Annina S; Burkard, Thilo

    2018-06-01

    Hypertensive disorders are one of the leading causes of maternal death worldwide. Several smartphone apps claim to measure blood pressure (BP) using photoplethysmographic signals recorded by smartphone cameras. However, no single app has been validated for this use to date. We aimed to validate a new, promising smartphone algorithm. In this subgroup analysis of the iPARR trial (iPhone App Compared With Standard RR Measurement), we tested the Preventicus BP smartphone algorithm on 32 pregnant women. The trial was conducted based on the European Society of Hypertension International Protocol revision 2010 for validation of BP measuring devices in adults. Each individual received 7 sequential BP measurements starting with the reference device (Omron-HBP-1300) and followed by the smartphone measurement, resulting in 96 BP comparisons. Validation requirements of the European Society of Hypertension International Protocol revision 2010 were not fulfilled. Mean (±SD) systolic BP disagreement between the test and reference devices was 5.0 (±14.5) mm Hg. The number of absolute differences between test and reference device within 5, 10, and 15 mm Hg was 31, 53, and 64 of 96, respectively. A Bland-Altman plot showed an overestimation of smartphone-determined systolic BP in comparison with reference systolic BP in low range but an underestimation in medium-range BP. The Preventicus BP smartphone algorithm failed the accuracy criteria for estimating BP in pregnant women and was thus not commercialized. Pregnant women should be discouraged from using BP smartphone apps, unless there are algorithms specifically validated according to common protocols. URL: https://www.clinicaltrials.gov. Unique identifier: NCT02552030. © 2018 American Heart Association, Inc.
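
    The headline numbers in such device validations (mean ± SD of the device-minus-reference difference, counts of absolute differences within 5/10/15 mm Hg, and Bland-Altman limits of agreement) are straightforward to compute. A sketch on invented paired readings, not trial data:

```python
import numpy as np

def validation_summary(test, reference):
    """Agreement summaries used in ESH-IP style validations: mean +/- SD of the
    device-minus-reference difference and counts of absolute differences
    within 5, 10 and 15 mmHg, for paired systolic readings."""
    diff = np.asarray(test, float) - np.asarray(reference, float)
    counts = {thr: int(np.sum(np.abs(diff) <= thr)) for thr in (5, 10, 15)}
    return diff.mean(), diff.std(ddof=1), counts

# Illustrative paired readings (mmHg)
ref  = np.array([118, 124, 131, 109, 142, 127, 115, 138])
test = np.array([121, 119, 140, 111, 133, 130, 118, 129])
mean_d, sd_d, counts = validation_summary(test, ref)
print(f"mean difference {mean_d:+.1f} mmHg (SD {sd_d:.1f}); within 5/10/15 mmHg: "
      f"{counts[5]}/{counts[10]}/{counts[15]} of {len(ref)}")
print(f"Bland-Altman limits of agreement: {mean_d - 1.96 * sd_d:+.1f} "
      f"to {mean_d + 1.96 * sd_d:+.1f} mmHg")
```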

  5. Quadruple Inversion-Recovery b-SSFP MRA of the Abdomen: Initial Clinical Validation

    PubMed Central

    Atanasova, Iliyana P.; Lim, Ruth P.; Chandarana, Hersh; Storey, Pippa; Bruno, Mary T; Kim, Daniel; Lee, Vivian S.

    2014-01-01

    The purpose of this study is to assess the image quality and diagnostic accuracy of non-contrast quadruple inversion-recovery balanced-SSFP MRA (QIR MRA) for detection of aortoiliac disease in a clinical population. QIR MRA was performed in 26 patients referred for routine clinical gadolinium-enhanced MRA (Gd-MRA) for known or suspected aortoiliac disease. Non-contrast images were independently evaluated for image quality and degree of stenosis by two radiologists, using consensus Gd-MRA as the reference standard. Hemodynamically significant stenosis (≥ 50%) was found in 10% (22/226) of all evaluable segments on Gd-MRA. The sensitivity and specificity for stenosis evaluation by QIR MRA for the two readers were 86%/86% and 95%/93% respectively. Negative predictive value and positive predictive value were 98%/98% and 63%/53% respectively. For stenosis evaluation of the aortoiliac region, QIR MRA showed good agreement with the reference standard, with high negative predictive value and a tendency to overestimate mild disease, presumably due to the flow-dependence of the technique. QIR MRA could be a reasonable alternative to Gd-MRA for ruling out stenosis when contrast is contraindicated due to impaired kidney function or in patients who undergo abdominal MRA for screening purposes. Further work is necessary to improve performance and justify routine clinical use. PMID:24998363

  6. High-bandwidth and flexible tracking control for precision motion with application to a piezo nanopositioner.

    PubMed

    Feng, Zhao; Ling, Jie; Ming, Min; Xiao, Xiao-Hui

    2017-08-01

    For precision motion, high bandwidth and flexible tracking are two key issues for significant performance improvement. Iterative learning control (ILC) is an effective feedforward control method only for systems that operate strictly repetitively. Although projection ILC can track varying references, its performance is still limited by the fixed-bandwidth Q-filter, especially for the triangular waves commonly used as references in a piezo nanopositioner. In this paper, a wavelet transform-based linear time-varying (LTV) Q-filter design for projection ILC is proposed to compensate for high-frequency errors and simultaneously improve the ability to track varying references. The LTV Q-filter is designed based on the modulus maxima of the wavelet detail coefficients computed by the wavelet transform, which determine the high-frequency locations in each iteration, with the advantages of avoiding cross-terms and manual segmentation. The proposed approach was verified on a piezo nanopositioner. Experimental results indicate that the proposed approach can locate the high-frequency regions accurately and achieve the best performance under varying references compared with traditional frequency-domain ILC and projection ILC with a fixed-bandwidth Q-filter, which validates that by implementing the LTV filter in projection ILC, high-bandwidth and flexible tracking can be achieved simultaneously.
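
    As a rough illustration of the wavelet step, the sketch below uses a single-level discrete wavelet transform (PyWavelets) to flag large-modulus detail coefficients around the turning points of a triangular-like reference. The paper's multi-scale modulus-maxima scheme and the actual Q-filter design are not reproduced; the signal and threshold are invented.

```python
import numpy as np
import pywt  # PyWavelets

# Illustrative tracking-error signal: smooth background plus sharp bursts
# around assumed turning points of a triangular reference (samples 250, 750).
n = 1000
t = np.arange(n)
err = 0.02 * np.sin(2 * np.pi * t / n)
for turn in (250, 750):
    err += 0.3 * np.exp(-0.5 * ((t - turn) / 5.0) ** 2) * np.sin(0.8 * t)

# Single-level wavelet decomposition: large-modulus detail coefficients mark
# the high-frequency error regions a time-varying Q-filter would target.
approx, detail = pywt.dwt(err, "db4")
threshold = 4.0 * np.median(np.abs(detail))                 # simple robust threshold
hf_samples = np.where(np.abs(detail) > threshold)[0] * 2    # approx. original sample index
print("high-frequency error located near samples:", hf_samples[:5], "...")
```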

  7. Past and Future Drought Regimes in Turkey

    NASA Astrophysics Data System (ADS)

    Sen, Burak; Topcu, Sevilay; Turkes, Murat; Sen, Baha

    2010-05-01

    Climate variability in the 20th century was characterized by apparent precipitation variability at both temporal and spatial scales. In addition to the well-known characteristic seasonal and year-to-year variability, some marked and long-term changes in precipitation occurred in Turkey, particularly after the early 1970s. Drought, originating from a deficiency of precipitation over an extended time period (usually a season or more), has become a recurring phenomenon in Turkey in the past few decades. Spatially coherent with the significant drought events since the early 1970s, water stress and shortages for all water user sectors have also reached critical points in Turkey. Analyzing the historical occurrence of drought provides an understanding of the range of climate possibilities for a country, resulting in more informed management decision-making. In addition, future projections of spatial and temporal changes in drought characteristics such as frequency, intensity and duration are essential for developing appropriate mitigation and adaptation strategies. Hence, the objectives of this study are (1) to analyze the spatial and temporal dimensions of historical droughts in Turkey, and (2) to predict the potential intensity, frequency and duration of droughts in Turkey for the future (2070-2100). The Standardized Precipitation Index (SPI) and the Percent of Normal Index (PNI) were used to assess the drought characteristics. Rainfall datasets for the reference period, 1960-1990, were acquired from 52 stations (representative of all regions with different rainfall regimes in the country) of the Turkish State Meteorological Service (TSMS). The future rainfall series for the 2070-2100 period were simulated using a regional climate model (RegCM3) under the IPCC SRES A2 scenario. For verification of the RegCM3 simulations, the model was run for the reference period and the simulated rainfall data were used to compute the two drought indices (SPI and PNI) for 1960-1990. Then, to test how well RegCM3 captures observed conditions, these results for the reference period were compared with SPI and PNI values calculated from observed climatic data. The validated climate model was then used to generate climatic data for the future 30-year period, and from the projected climate data the SPI and PNI values were computed for future conditions, indicating the drought events within that period. Furthermore, to determine the likely changes between the reference and future periods, the projected future rainfall series was compared with the average rainfall amount derived from the reference period in the SPI and PNI calculations. Finally, maps were drawn to determine the spatial changes in drought. The RegCM3 model captured the climatic data and the drought indices well. The results showed that drought conditions are diverse across the country, and increasing trends in intensity, frequency and duration were detected. At the regional scale, the eastern part of the Marmara Region, the Black Sea Region, and the northern and eastern parts of the East Anatolia Region are characterized by wetter conditions. Particularly severe drought conditions are expected in the Western Mediterranean and Aegean Regions, although other regions of the country will also be confronted with more frequent, intense and long-lasting droughts. Both the SPI and the PNI yielded similar results for the reference as well as the future period.
Most of the rain-fed and irrigated areas, as well as the major share of the surface water resources, are located in the drought-vulnerable regions of the country. Other water user sectors, including urban areas, industry and tourism, will also be affected by the worsened conditions. Thus, the increasing frequency, severity and prolonged duration of drought events may have significant consequences for food production and socio-economic conditions in Turkey.
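
    Both indices are easy to compute from a reference-period rainfall series. The sketch below shows a simplified SPI (a gamma distribution fitted to reference-period totals, transformed to standard-normal quantiles) and the PNI; the station data, aggregation time scale, and zero-rainfall handling are simplified or invented, so this is not the study's procedure in full.

```python
import numpy as np
from scipy import stats

def pni(precip, reference_mean):
    """Percent of Normal Index: precipitation as a percentage of the reference mean."""
    return 100.0 * np.asarray(precip, float) / reference_mean

def spi(precip, reference_precip):
    """Simplified SPI: fit a gamma distribution to reference-period totals and
    convert each value's cumulative probability to a standard-normal quantile.
    (Zero-precipitation handling and multi-month aggregation are omitted.)"""
    a, loc, scale = stats.gamma.fit(reference_precip, floc=0)
    cdf = stats.gamma.cdf(precip, a, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)

# Illustrative annual totals (mm); not TSMS station data.
rng = np.random.default_rng(1)
ref_period = rng.gamma(shape=8.0, scale=75.0, size=31)   # 1960-1990 analogue
future = np.array([420.0, 515.0, 610.0, 700.0])          # projected totals

print("PNI:", np.round(pni(future, ref_period.mean()), 1))
print("SPI:", np.round(spi(future, ref_period), 2))
```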

  8. User Manual for Whisper-1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2017-01-26

    Whisper is a statistical analysis package developed in 2014 to support nuclear criticality safety (NCS) validation [1-3]. It uses the sensitivity profile data for an application as computed by MCNP6 [4-6] along with covariance files [7,8] for the nuclear data to determine a baseline upper-subcritical-limit (USL) for the application. Whisper version 1.0 was first developed and used at LANL in 2014 [3]. During 2015-2016, Whisper was updated to version 1.1 and is to be included with the upcoming release of MCNP6.2. This document describes the user input and options for running whisper-1.1, including two Perl utility scripts that simplify ordinary NCS work, whisper_mcnp.pl and whisper_usl.pl. For many detailed references on the theory, applications, nuclear data and covariances, SQA, verification-validation, adjoint-based methods for sensitivity-uncertainty analysis, and more, see the Whisper – NCS Validation section of the MCNP Reference Collection at mcnp.lanl.gov. There are currently over 50 Whisper reference documents available.

  9. Validation of reference genes for quantifying changes in gene expression in virus-infected tobacco.

    PubMed

    Baek, Eseul; Yoon, Ju-Yeon; Palukaitis, Peter

    2017-10-01

    To facilitate quantification of gene expression changes in virus-infected tobacco plants, eight housekeeping genes were evaluated for their stability of expression during infection by one of three systemically-infecting viruses (cucumber mosaic virus, potato virus X, potato virus Y) or a hypersensitive-response-inducing virus (tobacco mosaic virus; TMV) limited to the inoculated leaf. Five reference-gene validation programs were used to establish the order of the most stable genes for the systemically-infecting viruses as ribosomal protein L25 > β-Tubulin > Actin, and the least stable genes Ubiquitin-conjugating enzyme (UCE) < PP2A < GAPDH. For local infection by TMV, the most stable genes were EF1α > Cysteine protease > Actin, and the least stable genes were GAPDH < PP2A < UCE. Using two of the most stable and the two least stable validated reference genes, three defense responsive genes were examined to compare their relative changes in gene expression caused by each virus. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. The shoulder pain and disability index: the construct validity and responsiveness of a region-specific disability measure.

    PubMed

    Heald, S L; Riddle, D L; Lamb, R L

    1997-10-01

    The purposes of this study were (1) to assess the construct validity of the Shoulder Pain and Disability Index (SPADI) and (2) to determine whether the SPADI is more responsive than the Sickness Impact Profile (SIP), a generic health status measure. The sample consisted of 94 patients who were diagnosed with a shoulder problem and referred to six outpatient physical therapy clinics. Clinically meaningful change was determined by use of an ordinal rating scale designed to determine whether the patient's shoulder function was improved, the same, or worse following treatment. Spearman rho correlations were calculated for the initial visit SPADI and SIP scores. The standardized response mean (SRM) was used to measure responsiveness for the patients who were judged to be improved. One-tailed paired t tests (alpha = .01) were used to determine whether differences existed among SRM values. Correlations between the SPADI and SIP scores ranged from r = .01 to r = .57. The SRM value was higher for the SPADI total score (SRM = 1.38) than for the SIP total score (SRM = 0.79). Most correlations between SPADI and SIP scores provided support for the construct validity of the SPADI. The SPADI does not appear to strongly reflect occupational and recreational disability and is more responsive than the SIP.
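
    The responsiveness statistic used here, the standardized response mean (SRM), is simply the mean change divided by the standard deviation of the change among patients judged to be improved. A sketch with invented SPADI-style scores, not study data:

```python
import numpy as np

def standardized_response_mean(baseline, follow_up):
    """SRM = mean change / SD of change; larger absolute values indicate
    greater responsiveness of the measure."""
    change = np.asarray(follow_up, float) - np.asarray(baseline, float)
    return change.mean() / change.std(ddof=1)

# Illustrative disability scores (0-100, lower = better) for improved patients
baseline  = np.array([62, 55, 70, 48, 66, 59])
follow_up = np.array([30, 50, 40, 45, 35, 55])
print(f"SRM = {abs(standardized_response_mean(baseline, follow_up)):.2f}")
```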

  11. An accurate Kriging-based regional ionospheric model using combined GPS/BeiDou observations

    NASA Astrophysics Data System (ADS)

    Abdelazeem, Mohamed; Çelik, Rahmi N.; El-Rabbany, Ahmed

    2018-01-01

    In this study, we propose a regional ionospheric model (RIM) based on both GPS-only and combined GPS/BeiDou observations for single-frequency precise point positioning (SF-PPP) users in Europe. GPS/BeiDou observations from 16 reference stations are processed in the zero-difference mode. A least-squares algorithm is developed to determine the vertical total electron content (VTEC) bi-linear function parameters for each 15-minute time interval. The Kriging interpolation method is used to estimate the VTEC values on a 1° × 1° grid. The resulting RIMs are validated for PPP applications using GNSS observations from another set of stations. The SF-PPP accuracy and convergence time obtained with the proposed RIMs are computed and compared with those obtained with the International GNSS Service global ionospheric maps (IGS-GIM). The results show that the RIMs speed up the convergence time and enhance the overall positioning accuracy in comparison with the IGS-GIM model, particularly the combined GPS/BeiDou-based model.
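
    A minimal sketch of the per-epoch surface fit: a least-squares estimate of the bi-linear VTEC parameters from a handful of stations, evaluated on a 1° grid. The station values are invented, and the Kriging interpolation step is replaced by direct evaluation of the fitted plane for brevity, so this is not the paper's full processing chain.

```python
import numpy as np

# Station VTEC estimates for one 15-minute interval (illustrative values):
# latitude (deg), longitude (deg), VTEC (TECU).
stations = np.array([
    [48.0,  2.0, 11.2], [52.0,  4.5, 10.1], [50.5, 14.0, 12.3],
    [45.0,  9.0, 13.0], [41.0, 12.5, 15.4], [55.0, 12.0,  9.2],
])
lat, lon, vtec = stations.T
lat0, lon0 = lat.mean(), lon.mean()

# Least-squares fit of the bi-linear model VTEC = a0 + a1*dlat + a2*dlon
A = np.column_stack([np.ones_like(lat), lat - lat0, lon - lon0])
coeffs, *_ = np.linalg.lstsq(A, vtec, rcond=None)

# Evaluate the fitted surface on a 1 deg x 1 deg grid
glat, glon = np.meshgrid(np.arange(40, 56), np.arange(0, 16), indexing="ij")
grid_vtec = coeffs[0] + coeffs[1] * (glat - lat0) + coeffs[2] * (glon - lon0)
print("fitted coefficients (a0, a1, a2):", np.round(coeffs, 3))
print("grid VTEC range:", round(grid_vtec.min(), 1), "-", round(grid_vtec.max(), 1), "TECU")
```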

  12. Microbiological survey of raw and ready-to-eat leafy green vegetables marketed in Italy.

    PubMed

    Losio, M N; Pavoni, E; Bilei, S; Bertasi, B; Bove, D; Capuano, F; Farneti, S; Blasi, G; Comin, D; Cardamone, C; Decastelli, L; Delibato, E; De Santis, P; Di Pasquale, S; Gattuso, A; Goffredo, E; Fadda, A; Pisanu, M; De Medici, D

    2015-10-01

    The presence of foodborne pathogens (Salmonella spp., Listeria monocytogenes, Escherichia coli O157:H7, thermotolerant Campylobacter, Yersinia enterocolitica and norovirus) in fresh leafy (FL) and ready-to-eat (RTE) vegetable products, sampled at random on the Italian market, was investigated to evaluate the level of risk to consumers. Nine regional laboratories, representing 18 of the 20 regions of Italy and in which 97.7% of the country's population resides, were involved in this study. All laboratories used the same sampling procedures and analytical methods. The vegetable samples were screened using validated real-time PCR (RT-PCR) methods and standardized reference ISO culturing methods. The results show that 3.7% of 1372 fresh leafy vegetable products and 1.8% of 1160 "fresh-cut" or "ready-to-eat" (RTE) vegetable retailed in supermarkets or farm markets, were contaminated with one or more foodborne pathogens harmful to human health. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Transmission Loss Calculation using A and B Loss Coefficients in Dynamic Economic Dispatch Problem

    NASA Astrophysics Data System (ADS)

    Jethmalani, C. H. Ram; Dumpa, Poornima; Simon, Sishaj P.; Sundareswaran, K.

    2016-04-01

    This paper analyzes the performance of A-loss coefficients for evaluating transmission losses in a Dynamic Economic Dispatch (DED) problem. The performance analysis is carried out by comparing the losses computed using nominal A loss coefficients and nominal B loss coefficients with reference to the load flow solution obtained by the standard Newton-Raphson (NR) method. A density-based clustering method for connected regions with sufficiently high density (DBSCAN) is employed in identifying the best regions of the A and B loss coefficients. Based on the results obtained through cluster analysis, a novel approach to improving the accuracy of network loss calculation is proposed. Here, based on the change in per-unit load values between load intervals, the loss coefficients are updated for calculating the transmission losses. The proposed algorithm is tested and validated on the IEEE 6 bus, IEEE 14 bus, IEEE 30 bus and IEEE 118 bus systems. All simulations are carried out using SCILAB 5.4 (www.scilab.org), which is open-source software.
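
    For reference, the classical B-coefficient (Kron) loss formula that such nominal coefficients feed into is P_loss = PᵀBP + B0ᵀP + B00, with P the vector of generator outputs. A sketch with invented per-unit coefficients, not the IEEE test-system data used in the paper:

```python
import numpy as np

def transmission_loss(p_gen, B, B0, B00):
    """Kron's loss formula: P_loss = P^T B P + B0^T P + B00 (all in per unit)."""
    p = np.asarray(p_gen, float)
    return p @ B @ p + B0 @ p + B00

# Illustrative B-coefficients for a 3-generator system (invented values)
B = np.array([[0.000218, 0.000093, 0.000028],
              [0.000093, 0.000228, 0.000017],
              [0.000028, 0.000017, 0.000179]])
B0 = np.array([0.0003, -0.0031, 0.0015])
B00 = 0.030523

p = np.array([1.20, 0.80, 0.50])  # generator outputs (p.u.)
print(f"estimated transmission loss: {transmission_loss(p, B, B0, B00):.4f} p.u.")
```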

  14. Improved Multiplex Ligation-dependent Probe Amplification (i-MLPA) for rapid copy number variant (CNV) detection.

    PubMed

    Saxena, Sonal; Gowdhaman, Kavitha; Kkani, Poornima; Vennapusa, Bhavyasri; Rama Subramanian, Chellamuthu; Ganesh Kumar, S; Mohan, Kommu Naga

    2015-10-23

    In Multiplex Ligation-dependent Probe Amplification (MLPA), copy number variants (CNVs) for specific genes are identified after normalization of the amounts of PCR products from ligated reference probes hybridized to genomic regions that are ideally free from normal variation. However, we observed ambiguous calls for two reference probes in an investigation of the human 15q11.2 region by MLPA among 20 controls, due to the presence of single nucleotide polymorphisms (SNPs) in the probe-binding regions. Further in silico analysis revealed that 18 out of 19 reference probes hybridize to regions subject to variation, underlining the need to design new reference probes against variation-free regions. An improved MLPA (i-MLPA) method was developed by generating a new set of reference probes to reduce the chance of ambiguous calls, together with new reagents that reduce hybridization times from 16 h to 30 min, so that MLPA ratio data can be obtained within 6 h. Using i-MLPA, we screened 240 schizophrenia patients for CNVs in the 15q11.2 region. Three deletions and two duplications were identified among the 240 schizophrenia patients. No variation was observed for the new reference probes. Taken together, the i-MLPA procedure yields unambiguous CNV calls within 6 h without compromising accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Preserved pontine glucose metabolism in Alzheimer disease: A reference region for functional brain image (PET) analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minoshima, Satoshi; Frey, K.A.; Foster, N.L.

    1995-07-01

    Our goal was to examine regional preservation of energy metabolism in Alzheimer disease (AD) and to evaluate effects of PET data normalization to reference regions. Regional metabolic rates in the pons, thalamus, putamen, sensorimotor cortex, visual cortex, and cerebellum (reference regions) were determined stereotaxically and examined in 37 patients with probable AD and 22 normal controls based on quantitative 18FDG-PET measurements. Following normalization of metabolic rates of the parietotemporal association cortex and whole brain to each reference region, distinctions of the two groups were assessed. The pons showed the best preservation of glucose metabolism in AD. Other reference regions showed relatively preserved metabolism compared with the parietotemporal association cortex and whole brain, but had significant metabolic reduction. Data normalization to the pons not only enhanced the statistical significance of metabolic reduction in the parietotemporal association cortex, but also preserved the presence of global cerebral metabolic reduction indicated in analysis of the quantitative data. Energy metabolism in the pons in probable AD is well preserved. The pons is a reliable reference for data normalization and will enhance diagnostic accuracy and efficiency of quantitative and nonquantitative functional brain imaging. 39 refs., 2 figs., 3 tabs.
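
    Normalization to a reference region is a simple ratio operation. The sketch below divides regional metabolic rates by the pons value for an invented control and patient, illustrating how hypometabolism in target regions is expressed relative to a preserved reference; the numbers are not the study's measurements.

```python
# Illustrative regional glucose metabolic rates (mg/100 g/min); invented values.
control = {"parietotemporal": 8.9, "whole_brain": 7.8, "pons": 5.2}
patient = {"parietotemporal": 5.6, "whole_brain": 6.3, "pons": 5.1}

def normalize_to_reference(values, reference="pons"):
    """Divide each regional rate by the reference-region rate (pons by default),
    the kind of data normalization evaluated in the study."""
    return {r: v / values[reference] for r, v in values.items() if r != reference}

for label, data in (("control", control), ("patient", patient)):
    ratios = {r: round(v, 2) for r, v in normalize_to_reference(data).items()}
    print(label, ratios)
```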

  16. The Stroop test as a measure of performance validity in adults clinically referred for neuropsychological assessment.

    PubMed

    Erdodi, Laszlo A; Sagar, Sanya; Seke, Kristian; Zuccato, Brandon G; Schwartz, Eben S; Roth, Robert M

    2018-06-01

    This study was designed to develop performance validity indicators embedded within the Delis-Kaplan Executive Function System (D-KEFS) version of the Stroop task. Archival data from a mixed clinical sample of 132 patients (50% male; M Age = 43.4; M Education = 14.1) clinically referred for neuropsychological assessment were analyzed. Criterion measures included the Warrington Recognition Memory Test-Words and 2 composites based on several independent validity indicators. An age-corrected scaled score ≤6 on any of the 4 trials reliably differentiated psychometrically defined credible and noncredible response sets with high specificity (.87-.94) and variable sensitivity (.34-.71). An inverted Stroop effect was less sensitive (.14-.29), but comparably specific (.85-.90), to invalid performance. Aggregating the newly developed D-KEFS Stroop validity indicators further improved classification accuracy. Failing the validity cutoffs was unrelated to self-reported depression or anxiety. However, it was associated with elevated somatic symptom report. In addition to processing speed and executive function, the D-KEFS version of the Stroop task can function as a measure of performance validity. A multivariate approach to performance validity assessment is generally superior to univariate models. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. The Then and Now of Reference Conditions in Streams of the Central Plains

    NASA Astrophysics Data System (ADS)

    Huggins, D.; Angelo, R.; Baker, D. S.; Welker, G.

    2005-05-01

    Models of contemporary and pre-settlement reference conditions were constructed for streams that once drained the tallgrass prairies of Iowa, Nebraska, Kansas and Missouri (e.g. the Western Corn Belt Plains ecoregion), and for streams within the heart of the mixed-grass prairie (e.g. the Southwestern Tablelands ecoregion). Data on watershed, habitat, chemistry and biology compiled for existing reference streams (least or minimally impacted systems) were used to characterize contemporary reference conditions. Contemporary reference conditions within these two prairie regions are contrasted against hypothetical pre-settlement conditions using information from the best streams (upper 25%) of the current reference population, historical accounts, museum records, natural heritage programs, the Public Land Survey and current remote sensing data. Similar comparisons were made between historical and current reference conditions for the Southwestern Tablelands located in central Kansas and Oklahoma. Much of this region remains in mixed-grass prairie and has limited hydrological alterations (e.g. impoundments, dewatering) and low human and livestock densities. Within the Tablelands these factors have preserved reference conditions that resemble historic conditions. Qualitative and quantitative comparisons indicate that many regions within the Central Plains require caution when using "least disturbed" reference streams and conditions to identify regional biological integrity goals relative to the Clean Water Act.

  18. Identifying Stable Reference Genes for qRT-PCR Normalisation in Gene Expression Studies of Narrow-Leafed Lupin (Lupinus angustifolius L.).

    PubMed

    Taylor, Candy M; Jost, Ricarda; Erskine, William; Nelson, Matthew N

    2016-01-01

    Quantitative Reverse Transcription PCR (qRT-PCR) is currently one of the most popular, high-throughput and sensitive technologies available for quantifying gene expression. Its accurate application depends heavily upon normalisation of gene-of-interest data with reference genes that are uniformly expressed under experimental conditions. The aim of this study was to provide the first validation of reference genes for Lupinus angustifolius (narrow-leafed lupin, a significant grain legume crop) using a selection of seven genes previously trialed as reference genes for the model legume, Medicago truncatula. In a preliminary evaluation, the seven candidate reference genes were assessed on the basis of primer specificity for their respective targeted region, PCR amplification efficiency, and ability to discriminate between cDNA and gDNA. Following this assessment, expression of the three most promising candidates [Ubiquitin C (UBC), Helicase (HEL), and Polypyrimidine tract-binding protein (PTB)] was evaluated using the NormFinder and RefFinder statistical algorithms in two narrow-leafed lupin lines, both with and without vernalisation treatment, and across seven organ types (cotyledons, stem, leaves, shoot apical meristem, flowers, pods and roots) encompassing three developmental stages. UBC was consistently identified as the most stable candidate and has sufficiently uniform expression that it may be used as a sole reference gene under the experimental conditions tested here. However, as organ type and developmental stage were associated with greater variability in relative expression, it is recommended using UBC and HEL as a pair to achieve optimal normalisation. These results highlight the importance of rigorously assessing candidate reference genes for each species across a diverse range of organs and developmental stages. With emerging technologies, such as RNAseq, and the completion of valuable transcriptome data sets, it is possible that other potentially more suitable reference genes will be identified for this species in future.
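
    Dedicated tools such as NormFinder model intra- and inter-group variance when ranking candidates; as a rough stand-in, the sketch below ranks candidate reference genes by the standard deviation of their Cq values across samples. The gene names echo the candidates above, but the Cq values are invented, not the lupin data, and the proxy is not the NormFinder algorithm itself.

```python
import numpy as np

# Illustrative qRT-PCR Cq values for three candidate reference genes across
# six samples (different organs/treatments); invented numbers.
cq = {
    "UBC": [22.1, 22.3, 22.0, 22.4, 22.2, 22.1],
    "HEL": [24.5, 24.9, 24.2, 25.0, 24.6, 24.8],
    "PTB": [26.0, 27.4, 25.5, 27.9, 26.2, 28.0],
}

def stability_proxy(values):
    """Simple stability measure: standard deviation of Cq across samples.
    Lower values indicate more uniform expression."""
    return float(np.std(values, ddof=1))

ranking = sorted(cq, key=lambda g: stability_proxy(cq[g]))
for gene in ranking:
    print(f"{gene}: SD(Cq) = {stability_proxy(cq[gene]):.2f}")
print("most stable candidate:", ranking[0])
```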

  19. Identifying Stable Reference Genes for qRT-PCR Normalisation in Gene Expression Studies of Narrow-Leafed Lupin (Lupinus angustifolius L.)

    PubMed Central

    Erskine, William; Nelson, Matthew N.

    2016-01-01

    Quantitative Reverse Transcription PCR (qRT-PCR) is currently one of the most popular, high-throughput and sensitive technologies available for quantifying gene expression. Its accurate application depends heavily upon normalisation of gene-of-interest data with reference genes that are uniformly expressed under experimental conditions. The aim of this study was to provide the first validation of reference genes for Lupinus angustifolius (narrow-leafed lupin, a significant grain legume crop) using a selection of seven genes previously trialed as reference genes for the model legume, Medicago truncatula. In a preliminary evaluation, the seven candidate reference genes were assessed on the basis of primer specificity for their respective targeted region, PCR amplification efficiency, and ability to discriminate between cDNA and gDNA. Following this assessment, expression of the three most promising candidates [Ubiquitin C (UBC), Helicase (HEL), and Polypyrimidine tract-binding protein (PTB)] was evaluated using the NormFinder and RefFinder statistical algorithms in two narrow-leafed lupin lines, both with and without vernalisation treatment, and across seven organ types (cotyledons, stem, leaves, shoot apical meristem, flowers, pods and roots) encompassing three developmental stages. UBC was consistently identified as the most stable candidate and has sufficiently uniform expression that it may be used as a sole reference gene under the experimental conditions tested here. However, as organ type and developmental stage were associated with greater variability in relative expression, it is recommended using UBC and HEL as a pair to achieve optimal normalisation. These results highlight the importance of rigorously assessing candidate reference genes for each species across a diverse range of organs and developmental stages. With emerging technologies, such as RNAseq, and the completion of valuable transcriptome data sets, it is possible that other potentially more suitable reference genes will be identified for this species in future. PMID:26872362

  20. The establishment of tocopherol reference intervals for Hungarian adult population using a validated HPLC method.

    PubMed

    Veres, Gábor; Szpisjak, László; Bajtai, Attila; Siska, Andrea; Klivényi, Péter; Ilisz, István; Földesi, Imre; Vécsei, László; Zádori, Dénes

    2017-09-01

    Evidence suggests that a decreased α-tocopherol (the most biologically active substance in the vitamin E group) level can cause neurological symptoms, most likely ataxia. The aim of the current study was to provide the first reference intervals for serum tocopherols in the adult Hungarian population with an appropriate sample size, recruiting healthy control subjects and neurological patients suffering from conditions without symptoms of ataxia, myopathy or cognitive deficiency. A validated HPLC method applying a diode array detector and rac-tocol as internal standard was utilized for that purpose. Furthermore, serum cholesterol levels were determined as well for data normalization. The calculated 2.5-97.5% reference intervals for α-, β/γ- and δ-tocopherols were 24.62-54.67, 0.81-3.69 and 0.29-1.07 μM, respectively, whereas the tocopherol/cholesterol ratios were 5.11-11.27, 0.14-0.72 and 0.06-0.22 μmol/mmol, respectively. The establishment of these reference intervals may improve the diagnostic accuracy of tocopherol measurements in certain neurological conditions with decreased tocopherol levels. Moreover, the current study draws special attention to possible pitfalls in the complex process of determining reference intervals, including the selection of the study population, the application of an internal standard, method validation and the calculation of tocopherol/cholesterol ratios. Copyright © 2017 John Wiley & Sons, Ltd.
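
    The 2.5-97.5% interval itself is a nonparametric percentile calculation once a reference population has been assembled. A sketch on simulated concentrations, not the Hungarian cohort data:

```python
import numpy as np

def reference_interval(values, lower=2.5, upper=97.5):
    """Nonparametric 2.5-97.5 percentile reference interval."""
    v = np.asarray(values, float)
    return np.percentile(v, lower), np.percentile(v, upper)

# Simulated serum alpha-tocopherol (umol/L) and total cholesterol (mmol/L)
# for an assumed reference population of 120 adults.
rng = np.random.default_rng(42)
alpha_toc = rng.normal(loc=38.0, scale=7.5, size=120)
cholesterol = rng.normal(loc=5.0, scale=0.8, size=120)

lo, hi = reference_interval(alpha_toc)
lo_r, hi_r = reference_interval(alpha_toc / cholesterol)
print(f"alpha-tocopherol: {lo:.1f}-{hi:.1f} umol/L")
print(f"alpha-tocopherol/cholesterol: {lo_r:.2f}-{hi_r:.2f} umol/mmol")
```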

  1. Evaluation of new antibiotic cocktails against contaminating bacteria found in allograft tissues.

    PubMed

    Serafini, Agnese; Riello, Erika; Trojan, Diletta; Cogliati, Elisa; Palù, Giorgio; Manganelli, Riccardo; Paolin, Adolfo

    2016-12-01

    Contamination of retrieved tissues is a major problem for allograft safety. Consequently, tissue banks have implemented decontamination protocols to eliminate microorganisms from tissues. Despite the widespread adoption of these protocols, few comprehensive studies validating such methods have been published. In this manuscript we compare the bactericidal activity of different antibiotic cocktails at different temperatures against a panel of bacterial species frequently isolated in allograft tissues collected at the Treviso Tissue Bank Foundation, a reference organization of the Veneto Region in Italy that was instituted to select, recover, process, store and distribute human tissues. We were able to identify at least two different formulations capable of killing most of the bacteria during prolonged incubation at 4 °C.

  2. Effects of non-homogeneous flow on ADCP data processing in a hydroturbine forebay

    DOE PAGES

    Harding, S. F.; Richmond, M. C.; Romero-Gomez, P.; ...

    2016-01-02

    Accurate modeling of the velocity field in the forebay of a hydroelectric power station is important for both power generation and fish passage, and the field can be increasingly well represented by computational fluid dynamics (CFD) simulations. Acoustic Doppler Current Profilers (ADCPs) are investigated herein as a method of validating the numerical flow solutions, particularly in observed and calculated regions of non-homogeneous flow velocity. By using a numerical model of an ADCP operating in a velocity field calculated using CFD, the errors due to the spatial variation of the flow velocity are quantified. Furthermore, the numerical model of the ADCP is referred to herein as a Virtual ADCP (VADCP).

  3. The PMDB Protein Model Database

    PubMed Central

    Castrignanò, Tiziana; De Meo, Paolo D'Onorio; Cozzetto, Domenico; Talamo, Ivano Giuseppe; Tramontano, Anna

    2006-01-01

    The Protein Model Database (PMDB) is a public resource aimed at storing manually built 3D models of proteins. The database is designed to provide access to models published in the scientific literature, together with validating experimental data. It is a relational database and it currently contains >74 000 models for ∼240 proteins. The system is accessible at and allows predictors to submit models along with related supporting evidence and users to download them through a simple and intuitive interface. Users can navigate in the database and retrieve models referring to the same target protein or to different regions of the same protein. Each model is assigned a unique identifier that allows interested users to directly access the data. PMID:16381873

  4. Design of a cavity ring-down spectroscopy diagnostic for negative ion rf source SPIDER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasqualotto, R.; Alfier, A.; Lotto, L.

    2010-10-15

    The rf source test facility SPIDER will test and optimize the source of the 1 MV neutral beam injection systems for ITER. Cavity ring-down spectroscopy (CRDS) will measure the absolute line-of-sight integrated density of negative (H− and D−) ions produced in the extraction region of the source. CRDS takes advantage of the photodetachment process: negative ions are converted to neutral hydrogen atoms by electron stripping through absorption of a photon from a laser. The design of this diagnostic is presented with the corresponding simulation of the expected performance. A prototype operated without plasma has provided CRDS reference signals, design validation, and results concerning the signal-to-noise ratio.

  5. Blind source separation for ambulatory sleep recording

    PubMed Central

    Porée, Fabienne; Kachenoura, Amar; Gauvrit, Hervé; Morvan, Catherine; Carrault, Guy; Senhadji, Lotfi

    2006-01-01

    This paper deals with the conception of a new system for sleep staging in ambulatory conditions. Sleep recording is performed by means of five electrodes: two temporal, two frontal and a reference. This configuration avoids the chin area, to enhance the quality of the muscular signal, and the hair region, for patient convenience. The EEG, EMG and EOG signals are separated using the Independent Component Analysis approach. The system is compared to a standard sleep analysis system using polysomnographic recordings of 14 patients. An overall concordance of 67.2% was achieved between the two systems. Based on the validation results and the computational efficiency, we recommend the clinical use of the proposed system in a commercial sleep analysis platform. PMID:16617618
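
    A minimal sketch of the separation step using FastICA from scikit-learn on synthetic mixtures standing in for the five-electrode montage; the sources, mixing matrix, and channel count are invented, and no sleep staging is attempted.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)

# Synthetic sources standing in for EEG (alpha-like rhythm), EOG (slow eye
# movements) and EMG (broadband muscle activity); not real recordings.
eeg = np.sin(2 * np.pi * 10 * t)
eog = np.sign(np.sin(2 * np.pi * 0.3 * t)) * np.exp(-((t % 3.3) / 0.5))
emg = rng.normal(size=t.size)
S = np.column_stack([eeg, eog, emg])

# Mix the sources into five "electrode" channels plus sensor noise
A = rng.uniform(0.2, 1.0, size=(3, 5))
X = S @ A + 0.05 * rng.normal(size=(t.size, 5))

# Recover three independent components from the five-channel recording
ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(X)
print("recovered component matrix shape:", recovered.shape)  # (2000, 3)
```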

  6. Cross-cultural adaptation of the German version of the spinal stenosis measure.

    PubMed

    Wertli, Maria M; Steurer, Johann; Wildi, Lukas M; Held, Ulrike

    2014-06-01

    To validate the German version of the spinal stenosis measure (SSM), a disease-specific questionnaire assessing symptom severity, physical function, and satisfaction with treatment in patients with lumbar spinal stenosis. After translation, cross-cultural adaptation, and pilot testing, we assessed internal consistency, test-retest reliability, construct validity, and responsiveness of the SSM subscales. Data from a large Swiss multi-center prospective cohort study were used. Reference scales for the assessment of construct validity and responsiveness were the numeric rating scale, pain thermometer, and the Roland Morris Disability Questionnaire. One hundred and eight consecutive patients were included in this validation study, recruited from five different centers. Cronbach's alpha was above 0.8 for all three subscales of the SSM. The objectivity of the SSM was assessed using a partial credit approach. The model showed a good global fit to the data. Of the 108 patients 78 participated in the test-retest procedure. The ICC values were above 0.8 for all three subscales of the SSM. Correlations with reference scales were above 0.7 for the symptom and function subscales. For satisfaction subscale, it was 0.66 or above. Clinically meaningful changes of the reference scales over time were associated with significantly more improvement in all three SSM subscales (p < 0.001). Conclusion: The proposed version of the SSM showed very good measurement properties and can be considered validated for use in the German language.

  7. Measurement of regional compliance using 4DCT images for assessment of radiation treatment

    PubMed Central

    Zhong, Hualiang; Jin, Jian-yue; Ajlouni, Munther; Movsas, Benjamin; Chetty, Indrin J.

    2011-01-01

    Purpose: Radiation-induced damage, such as inflammation and fibrosis, can compromise ventilation capability of local functional units (alveoli) of the lung. Ventilation function as measured with ventilation images, however, is often complicated by the underlying mechanical variations. The purpose of this study is to present a 4DCT-based method to measure the regional ventilation capability, namely, regional compliance, for the evaluation of radiation-induced lung damage. Methods: Six 4DCT images were investigated in this study: One previously used in the generation of a POPI model and the other five acquired at Henry Ford Health System. A tetrahedral geometrical model was created and scaled to encompass each of the 4DCT image domains. Image registrations were performed on each of the 4DCT images using a multiresolution Demons algorithm. The images at the end of exhalation were selected as a reference. Images at other exhalation phases were registered to the reference phase. For the POPI-modeled patient, each of these registration instances was validated using 40 landmarks. The displacement vector fields (DVFs) were used first to calculate the volumetric variation of each tetrahedron, which represents the change in the air volume. The calculated results were interpolated to generate 3D ventilation images. With the computed DVF, a finite element method (FEM) framework was developed to compute the stress images of the lung tissue. The regional compliance was then defined as the ratio of the ventilation and stress values and was calculated for each phase. Based on iterative FEM simulations, the potential range of the mechanical parameters for the lung was determined by comparing the model-computed average stress to the clinical reference value of airway pressure. The effect of the parameter variations on the computed stress distributions was estimated using Pearson correlation coefficients. Results: For the POPI-modeled patient, five exhalation phases from the start to the end of exhalation were denoted by Pi, i=1,…,5, respectively. The average lung volume variation relative to the reference phase (P5) was reduced from 18% at P1 to 4.8% at P4. The average stress at phase Pi was 1.42, 1.34, 0.74, and 0.28 kPa, and the average regional compliance was 0.19, 0.20, 0.20, and 0.24 for i=1,…,4, respectively. For the other five patients, their average Rv value at the end-inhalation phase was 21.1%, 19.6%, 22.4%, 22.5%, and 18.8%, respectively, and the regional compliance averaged over all six patients is 0.2. For elasticity parameters chosen from the potential parameter range, the resultant stress distributions were found to be similar to each other with Pearson correlation coefficients greater than 0.81. Conclusions: A 4DCT-based mechanical model has been developed to calculate the ventilation and stress images of the lung. The resultant regional compliance represents the lung’s elasticity property and is potentially useful in correlating regions of lung damage with radiation dose following a course of radiation therapy. PMID:21520868
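
    The paper derives air-volume change per tetrahedron from the registration-derived DVF. A simpler voxel-based stand-in for that step, shown below, estimates local volume change from the Jacobian determinant of the displacement field; it is not the authors' FEM implementation.

    ```python
    import numpy as np

    def volume_change_from_dvf(dvf, spacing=(1.0, 1.0, 1.0)):
        """Voxel-wise relative volume change det(J) - 1 from a displacement field.

        dvf : array of shape (3, nz, ny, nx) with displacements (uz, uy, ux).
        This is a simple voxel-based alternative to the tetrahedron-based
        calculation described in the paper.
        """
        grads = [np.gradient(dvf[i], *spacing) for i in range(3)]  # du_i / d(z,y,x)
        jac = np.zeros(dvf.shape[1:] + (3, 3))
        for i in range(3):
            for j in range(3):
                jac[..., i, j] = grads[i][j]
        jac += np.eye(3)            # deformation gradient F = I + grad(u)
        return np.linalg.det(jac) - 1.0

    # Toy field: uniform 5% expansion along z only
    dvf = np.zeros((3, 10, 10, 10))
    dvf[0] = 0.05 * np.arange(10)[:, None, None]
    print(volume_change_from_dvf(dvf).mean())   # ~0.05
    ```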

  8. Measurement of regional compliance using 4DCT images for assessment of radiation treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong Hualiang; Jin Jianyue; Ajlouni, Munther

    2011-03-15

    Purpose: Radiation-induced damage, such as inflammation and fibrosis, can compromise ventilation capability of local functional units (alveoli) of the lung. Ventilation function as measured with ventilation images, however, is often complicated by the underlying mechanical variations. The purpose of this study is to present a 4DCT-based method to measure the regional ventilation capability, namely, regional compliance, for the evaluation of radiation-induced lung damage. Methods: Six 4DCT images were investigated in this study: One previously used in the generation of a POPI model and the other five acquired at Henry Ford Health System. A tetrahedral geometrical model was created and scaled to encompass each of the 4DCT image domains. Image registrations were performed on each of the 4DCT images using a multiresolution Demons algorithm. The images at the end of exhalation were selected as a reference. Images at other exhalation phases were registered to the reference phase. For the POPI-modeled patient, each of these registration instances was validated using 40 landmarks. The displacement vector fields (DVFs) were used first to calculate the volumetric variation of each tetrahedron, which represents the change in the air volume. The calculated results were interpolated to generate 3D ventilation images. With the computed DVF, a finite element method (FEM) framework was developed to compute the stress images of the lung tissue. The regional compliance was then defined as the ratio of the ventilation and stress values and was calculated for each phase. Based on iterative FEM simulations, the potential range of the mechanical parameters for the lung was determined by comparing the model-computed average stress to the clinical reference value of airway pressure. The effect of the parameter variations on the computed stress distributions was estimated using Pearson correlation coefficients. Results: For the POPI-modeled patient, five exhalation phases from the start to the end of exhalation were denoted by Pi, i=1,...,5, respectively. The average lung volume variation relative to the reference phase (P5) was reduced from 18% at P1 to 4.8% at P4. The average stress at phase Pi was 1.42, 1.34, 0.74, and 0.28 kPa, and the average regional compliance was 0.19, 0.20, 0.20, and 0.24 for i=1,...,4, respectively. For the other five patients, their average Rv value at the end-inhalation phase was 21.1%, 19.6%, 22.4%, 22.5%, and 18.8%, respectively, and the regional compliance averaged over all six patients is 0.2. For elasticity parameters chosen from the potential parameter range, the resultant stress distributions were found to be similar to each other with Pearson correlation coefficients greater than 0.81. Conclusions: A 4DCT-based mechanical model has been developed to calculate the ventilation and stress images of the lung. The resultant regional compliance represents the lung's elasticity property and is potentially useful in correlating regions of lung damage with radiation dose following a course of radiation therapy.

  9. [Geographical distribution of the serum creatinine reference values of healthy adults].

    PubMed

    Wei, De-Zhi; Ge, Miao; Wang, Cong-Xia; Lin, Qian-Yi; Li, Meng-Jiao; Li, Peng

    2016-11-20

    To explore the relationship between serum creatinine (Scr) reference values in healthy adults and geographic factors, and to provide evidence for establishing Scr reference values in different regions. We collected 29 697 Scr reference values from healthy adults measured by 347 medical facilities in 23 provinces, 4 municipalities and 5 autonomous regions. We chose 23 geographical factors and analyzed their correlation with Scr reference values to identify the factors correlated significantly with the reference values. Two predictive models were constructed using principal component analysis and ridge regression analysis, and the optimal model was chosen after comparing how well each model's predictions fit the measured values. The distribution map of Scr reference values was drawn using the Kriging interpolation method. Seven geographic factors, including latitude, annual sunshine duration, annual average temperature, annual average relative humidity, annual precipitation, annual temperature range and topsoil (silt) cation exchange capacity, were found to correlate significantly with Scr reference values. The overall distribution of Scr reference values followed a pattern of high values in the south and low values in the north, varying consistently with latitude. The geographic factors of a given region therefore allow prediction of the Scr reference values of healthy adults in that region. Analysis of these geographical factors can facilitate the determination of reference values specific to a region and improve the accuracy of clinical diagnoses.
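
    Of the two modeling approaches named in the abstract, ridge regression is straightforward to sketch. The example below uses scikit-learn on an invented table of regions with seven generic geographic predictors; the data and coefficients are hypothetical, not the study's.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)

    # Hypothetical training table: one row per region, columns are geographic
    # factors of the kind named in the study (latitude, sunshine, temperature, ...)
    X = rng.normal(size=(100, 7))
    scr = 70 + 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=2.0, size=100)

    model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
    model.fit(X, scr)

    # Predict the expected Scr reference value for a new region's geography
    new_region = rng.normal(size=(1, 7))
    print(model.predict(new_region))
    ```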

  10. Reverse transcription quantitative real-time polymerase chain reaction reference genes in the spared nerve injury model of neuropathic pain: validation and literature search.

    PubMed

    Piller, Nicolas; Decosterd, Isabelle; Suter, Marc R

    2013-07-10

    The reverse transcription quantitative real-time polymerase chain reaction (RT-qPCR) is a widely used, highly sensitive laboratory technique to rapidly and easily detect, identify and quantify gene expression. Reliable RT-qPCR data necessitates accurate normalization with validated control genes (reference genes) whose expression is constant in all studied conditions; this stability has to be demonstrated. We performed a literature search for studies using quantitative or semi-quantitative PCR in the rat spared nerve injury (SNI) model of neuropathic pain to verify whether any reference genes had previously been validated. We then analyzed the stability over time of 7 commonly used reference genes in the nervous system - specifically in the spinal cord dorsal horn and the dorsal root ganglion (DRG). These were: Actin beta (Actb), Glyceraldehyde-3-phosphate dehydrogenase (GAPDH), ribosomal proteins 18S (18S), L13a (RPL13a) and L29 (RPL29), hypoxanthine phosphoribosyltransferase 1 (HPRT1) and hydroxymethylbilane synthase (HMBS). We compared the candidate genes and established a stability ranking using the geNorm algorithm. Finally, we assessed the number of reference genes necessary for accurate normalization in this neuropathic pain model. We found GAPDH, HMBS, Actb, HPRT1 and 18S cited as reference genes in the literature on studies using the SNI model. Only HPRT1 and 18S had previously been demonstrated to be stable in RT-qPCR arrays. All the genes tested in this study, using the geNorm algorithm, presented gene stability values (M-value) acceptable enough for them to qualify as potential reference genes in both DRG and spinal cord. Using the coefficient of variation, 18S failed the 50% cut-off with a value of 61% in the DRG. The two most stable genes in the dorsal horn were RPL29 and RPL13a; in the DRG they were HPRT1 and Actb. Using a 0.15 cut-off for pairwise variations, we found that any pair of stable reference genes was sufficient for the normalization process. In the rat SNI model, we validated and ranked Actb, RPL29, RPL13a, HMBS, GAPDH, HPRT1 and 18S as good reference genes in the spinal cord. In the DRG, 18S did not fulfill stability criteria. The combination of any two stable reference genes was sufficient to provide an accurate normalization.
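
    The geNorm stability measure M used in this study is easy to reproduce: for each candidate gene it is the mean standard deviation of the pairwise log-ratios against every other candidate. A sketch on simulated expression data follows; the gene names are taken from the record, the expression values are invented.

    ```python
    import numpy as np

    def genorm_m_values(expr, gene_names):
        """geNorm-style stability measure M for candidate reference genes.

        expr : (n_samples x n_genes) matrix of relative expression quantities
               (e.g. derived from Cq values via 2**(-deltaCq)).
        M_j is the mean standard deviation of the log2 expression ratios of
        gene j against every other candidate; lower M means more stable.
        """
        log_expr = np.log2(expr)
        n_genes = expr.shape[1]
        m = np.zeros(n_genes)
        for j in range(n_genes):
            ratios = log_expr[:, [j]] - np.delete(log_expr, j, axis=1)
            m[j] = ratios.std(axis=0, ddof=1).mean()
        return dict(zip(gene_names, m))

    genes = ["Actb", "GAPDH", "18S", "RPL13a", "RPL29", "HPRT1", "HMBS"]
    expr = np.random.default_rng(2).lognormal(mean=0.0, sigma=0.2, size=(12, 7))
    print(genorm_m_values(expr, genes))
    ```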

  11. Comparative assessment of bioanalytical method validation guidelines for pharmaceutical industry.

    PubMed

    Kadian, Naveen; Raju, Kanumuri Siva Rama; Rashid, Mamunur; Malik, Mohd Yaseen; Taneja, Isha; Wahajuddin, Muhammad

    2016-07-15

    The concepts, importance, and application of bioanalytical method validation have been discussed for a long time, and validation of bioanalytical methods is widely accepted as pivotal before they are taken into routine use. The United States Food and Drug Administration (USFDA) guidelines issued in 2001 have been referenced by every bioanalytical method validation guideline released since, whether from the European Medicines Agency (EMA) in Europe, the National Health Surveillance Agency (ANVISA) in Brazil, the Ministry of Health, Labour and Welfare (MHLW) in Japan, or any other authority. After 12 years, the USFDA released its new draft guideline for comments in 2013, which covers the latest parameters and topics encountered in bioanalytical method validation and moves toward harmonization of bioanalytical method validation across the globe. Even though the regulatory agencies are in general agreement, significant variations exist in acceptance criteria and methodology. The present review highlights the variations, similarities and comparisons between bioanalytical method validation guidelines issued by major regulatory authorities worldwide. Additionally, other evaluation parameters such as matrix effect and incurred sample reanalysis, including other stability aspects, are discussed to provide an ease of access for designing a bioanalytical method and its validation complying with the majority of drug authority guidelines. Copyright © 2016. Published by Elsevier B.V.

  12. Evolution of an Implementation-Ready Interprofessional Pain Assessment Reference Model

    PubMed Central

    Collins, Sarah A; Bavuso, Karen; Swenson, Mary; Suchecki, Christine; Mar, Perry; Rocha, Roberto A.

    2017-01-01

    Standards to increase consistency of comprehensive pain assessments are important for safety, quality, and analytics activities, including meeting Joint Commission requirements and learning the best management strategies and interventions for the current prescription opioid epidemic. In this study, we describe the development and validation of a Pain Assessment Reference Model ready for implementation on EHR forms and flowsheets. Our process resulted in 5 successive revisions of the reference model, which more than doubled the number of data elements to 47. The organization of the model evolved during validation sessions with panels totaling 48 subject matter experts (SMEs) to include 9 sets of data elements, with one set recommended as a minimal data set. The reference model also evolved when implemented into EHR forms and flowsheets, indicating specifications such as cascading logic that are important to inform secondary use of data. PMID:29854125

  13. Validating the weight gain of preterm infants between the reference growth curve of the fetus and the term infant

    PubMed Central

    2013-01-01

    Background Current fetal-infant growth references have an obvious growth disjuncture around 40 weeks gestation, where the overlapping fetal and infant growth references are combined. Graphical smoothening of the disjuncture to connect the matching percentile curves has never been validated. This study was designed to compare weight gain patterns of contemporary preterm infants with a fetal-infant growth reference (derived from a meta-analysis) to validate the previous smoothening assumptions and inform the revision of the Fenton chart. Methods Growth and descriptive data of preterm infants (23 to 31 weeks) from birth through 10 weeks post term age were collected in three cities in Canada and the USA between 2001 and 2010 (n = 977). Preterm infants were grouped by gestational age into 23–25, 26–28, and 29–31 weeks. Comparisons were made between the weight data of the preterm cohort and the fetal-infant growth reference. Results Median weight gain curves of the three preterm gestational age groups were almost identical and remained between the 3rd and the 50th percentiles of the fetal-infant-growth-reference from birth through 10 weeks post term. The growth velocity of the preterm infants decreased in a pattern similar to the decreased velocity of the fetus and term infant estimates, from a high of 17–18 g/kg/day between 31–34 weeks to rates of 4–5 g/kg/day by 50 weeks in each gestational age group. The greatest discrepancy in weight gain velocity between the preterm infants and the fetal estimate was between 37 and 40 weeks; preterm infants grew more rapidly than the fetus. The infants in this study regained their birthweight earlier compared to those in the 1999 National Institute of Child Health and Human Development report. Conclusion The weight gain velocity of preterm infants through the period of growth data disjuncture between 37 and 50 weeks gestation is consistent with and thus validates the smoothening assumptions made between preterm and post-term growth references. PMID:23758808

  14. Validating the weight gain of preterm infants between the reference growth curve of the fetus and the term infant.

    PubMed

    Fenton, Tanis R; Nasser, Roseann; Eliasziw, Misha; Kim, Jae H; Bilan, Denise; Sauve, Reg

    2013-06-11

    Current fetal-infant growth references have an obvious growth disjuncture around 40 weeks gestation, where the overlapping fetal and infant growth references are combined. Graphical smoothening of the disjuncture to connect the matching percentile curves has never been validated. This study was designed to compare weight gain patterns of contemporary preterm infants with a fetal-infant growth reference (derived from a meta-analysis) to validate the previous smoothening assumptions and inform the revision of the Fenton chart. Growth and descriptive data of preterm infants (23 to 31 weeks) from birth through 10 weeks post term age were collected in three cities in Canada and the USA between 2001 and 2010 (n = 977). Preterm infants were grouped by gestational age into 23-25, 26-28, and 29-31 weeks. Comparisons were made between the weight data of the preterm cohort and the fetal-infant growth reference. Median weight gain curves of the three preterm gestational age groups were almost identical and remained between the 3rd and the 50th percentiles of the fetal-infant-growth-reference from birth through 10 weeks post term. The growth velocity of the preterm infants decreased in a pattern similar to the decreased velocity of the fetus and term infant estimates, from a high of 17-18 g/kg/day between 31-34 weeks to rates of 4-5 g/kg/day by 50 weeks in each gestational age group. The greatest discrepancy in weight gain velocity between the preterm infants and the fetal estimate was between 37 and 40 weeks; preterm infants grew more rapidly than the fetus. The infants in this study regained their birthweight earlier compared to those in the 1999 National Institute of Child Health and Human Development report. The weight gain velocity of preterm infants through the period of growth data disjuncture between 37 and 50 weeks gestation is consistent with and thus validates the smoothening assumptions made between preterm and post-term growth references.

  15. Standard Specimen Reference Set: Colon — EDRN Public Portal

    Cancer.gov

    The Early Detection Research Network, Great Lakes-New England Clinical, Epidemiological and Validation Center (GLNE CVC) announces the availability of serum, plasma and urine samples for the early detection of colon cancer and for validation studies.

  16. Multidate remote sensing approaches for digital zoning of terroirs at regional scales: case studies revisited and perspectives

    NASA Astrophysics Data System (ADS)

    Vaudour, Emmanuelle; Carey, Victoria A.; Gilliot, Jean-Marc

    2014-05-01

    Geospatial technologies are proving increasingly useful for characterizing terroirs, and not only at the within-field scale: among the innovative technologies revolutionizing approaches for digitally zoning viticultural areas, whether managed by individual or cooperative grape growers or even unions of grape growers, multispectral satellite remote sensing data have been used for 15 years already at either the regional or the whole-vineyard scale, moving from single-date studies to multi-temporal processing. Regional remotely-sensed approaches for terroir mapping mostly use multispectral satellite images in conjunction with a set of ancillary morphometric and/or geomorphological and/or legacy soil data and time series data on grape/wine quality and climate. Two prominent case studies of regional terroir mapping using SPOT satellite images with medium spatial resolution (20 m) were carried out in the Southern Rhone Valley (Côtes-du-Rhône controlled Appellation of origin) in Southern France and in the Stellenbosch-Paarl region (including 5 Wine of Origin wards: Simonsberg-Stellenbosch, Simonsberg-Paarl, Jonkershoek Valley, Banghoek and Papegaaiberg and portions of two further wards, namely, Franschoek and Devon Valley) in the South Western Cape of South Africa. In addition to emphasizing their usefulness for operational land management, our objective was to develop, compare and discuss both approaches in terms of formalization, spatial data handling and processing, sampling design, validation procedures and/or availability of uncertainty information. Both approaches essentially relied on supervised image classifiers based on the selection of reference training areas. For the Southern Rhone valley, viticultural terroirs were validated using an external sample of 91 vineyards planted with Grenache Noir and Syrah for which grape composition was available over a 17-year period: the validation procedure highlighted a strong vintage effect for each specific terroir. The output map was appropriate at the scale of cooperative wineries and the scale of the union of grape growers. For the Stellenbosch-Paarl region, 55 Sauvignon Blanc vineyards previously characterized in terms of grape/vine/wine quality in several earlier studies were used to introduce expert knowledge as a basis for bootstrapped regression tree calculations, which enabled uncertainty assessment of final map results. Further perspectives related to the spatial monitoring of vine phenology according to the output terroir units and the possible characterization of both within- and between-terroir spatio-temporal variability of vegetative growth were initiated for the Southern Rhone terroirs considering a SPOT4-Take Five satellite time series acquired from February to June 2013 in the framework of the SPOT4-Take Five program of the French Space Agency (CNES).

  17. A Comparative Study of Different EEG Reference Choices for Diagnosing Unipolar Depression.

    PubMed

    Mumtaz, Wajid; Malik, Aamir Saeed

    2018-06-02

    The choice of an electroencephalogram (EEG) reference has fundamental importance and could be critical during clinical decision-making because an impure EEG reference could falsify the clinical measurements and subsequent inferences. In this research, the suitability of three EEG references was compared while classifying depressed and healthy brains using a machine-learning (ML)-based validation method. For this study, the EEG data of 30 unipolar depressed subjects and 30 age-matched healthy controls were recorded. The EEG data were analyzed in three different EEG references, the linked-ears reference (LE), average reference (AR), and reference electrode standardization technique (REST). The EEG-based functional connectivity (FC) was computed. Also, the graph-based measures, such as the distances between nodes, minimum spanning tree, and maximum flow between the nodes for each channel pair, were calculated. An ML scheme provided a mechanism to compare the performances of the extracted features that involved a general framework such as the feature extraction (graph-based theoretic measures), feature selection, classification, and validation. For comparison purposes, the performance metrics such as the classification accuracies, sensitivities, specificities, and F scores were computed. When comparing the three references, the diagnostic accuracy was better with the REST, while the LE and AR showed less discrimination between the two groups. Based on the results, it can be concluded that the choice of appropriate reference is critical during the clinical scenario. The REST reference is recommended for future applications of EEG-based diagnosis of mental illnesses.
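
    One of the graph-based measures mentioned above, the minimum spanning tree of the functional connectivity matrix, can be computed with standard tools. The sketch below uses random data in place of real EEG and absolute Pearson correlation as a simple stand-in for the connectivity metric actually used in the study.

    ```python
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    rng = np.random.default_rng(3)

    # Hypothetical multichannel EEG segment: 19 channels x 2000 samples
    eeg = rng.standard_normal((19, 2000))

    # Functional connectivity as absolute Pearson correlation between channels
    fc = np.abs(np.corrcoef(eeg))

    # Turn similarity into a distance so the MST keeps the strongest links,
    # then use the total tree weight as one graph-based feature
    dist = 1.0 - fc
    np.fill_diagonal(dist, 0.0)
    mst = minimum_spanning_tree(dist)
    print(mst.sum())          # total MST weight (example feature)
    ```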

  18. Methods, systems and apparatus for controlling third harmonic voltage when operating a multi-phase machine in an overmodulation region

    DOEpatents

    Perisic, Milun; Kinoshita, Michael H; Ranson, Ray M; Gallegos-Lopez, Gabriel

    2014-06-03

    Methods, system and apparatus are provided for controlling third harmonic voltages when operating a multi-phase machine in an overmodulation region. The multi-phase machine can be, for example, a five-phase machine in a vector controlled motor drive system that includes a five-phase PWM controlled inverter module that drives the five-phase machine. Techniques for overmodulating a reference voltage vector are provided. For example, when the reference voltage vector is determined to be within the overmodulation region, an angle of the reference voltage vector can be modified to generate a reference voltage overmodulation control angle, and a magnitude of the reference voltage vector can be modified, based on the reference voltage overmodulation control angle, to generate a modified magnitude of the reference voltage vector. By modifying the reference voltage vector, voltage command signals that control a five-phase inverter module can be optimized to increase output voltages generated by the five-phase inverter module.
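
    As a greatly simplified, generic illustration of limiting a reference voltage vector once it enters the overmodulation region (not the patented method, which also modifies the vector angle), the sketch below clamps the commanded magnitude to the polygon boundary of realizable inverter output voltages while leaving the angle unchanged.

    ```python
    import numpy as np

    def clamp_to_polygon(mag, angle, n_sides, r_inscribed):
        """Generic overmodulation clamp: limit a reference voltage vector to the
        boundary of the regular polygon of realizable inverter output voltages.

        mag, angle  : magnitude and angle of the reference voltage vector
        n_sides     : number of polygon sides (e.g. 10 for a five-phase inverter)
        r_inscribed : radius of the inscribed circle (linear-modulation limit)
        Returns a (possibly modified) magnitude and the unchanged angle.
        """
        sector = 2 * np.pi / n_sides
        # angle measured from the centre of the nearest polygon side
        local = (angle + sector / 2) % sector - sector / 2
        max_mag = r_inscribed / np.cos(local)     # distance to the side along 'angle'
        if mag <= r_inscribed:
            return mag, angle                     # linear region: nothing to do
        return min(mag, max_mag), angle           # overmodulation: ride the boundary

    # Example: a reference vector 10 % above the linear limit of a 5-phase inverter
    print(clamp_to_polygon(mag=1.1, angle=0.3, n_sides=10, r_inscribed=1.0))
    ```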

  19. Self-perceived health among 'quilombolas' in northern Minas Gerais, Brazil.

    PubMed

    Oliveira, Stéphany Ketllin Mendes; Pereira, Mayane Moura; Guimarães, Luiz Sena; Caldeira, Antônio Prates

    2015-09-01

    Over a century has passed since slavery was abolished in Brazil, yet quilombola communities remain socially vulnerable, especially when it comes to health. The goal of this study was to understand self-perceived health (SPH) in quilombola communities in Northern Minas Gerais, and the factors associated with a negative perception of their own health. A household survey was conducted with a representative sample of quilombola communities in the study region. Validated tools were used to gather data about SPH, socioeconomic conditions, demographics, lifestyle and self-referred morbidity. Following a bivariate analysis, we conducted a hierarchical logistic regression analysis. The prevalence of negative SPH was 46.0%. The following variables were statistically associated with negative SPH: age and years of schooling as distal variables, and high blood pressure, diabetes, arthritis, depression and back problems as proximal variables. SPH is associated with demographic and socioeconomic dimensions, and in particular with self-referred morbidity. The concept of health among the quilombola communities included in this study seems to be intimately linked to the absence of disease, especially chronic disease.

  20. Spectral characterization of near-infrared acousto-optic tunable filter (AOTF) hyperspectral imaging systems using standard calibration materials.

    PubMed

    Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2011-04-01

    In this study, we propose and evaluate a method for spectral characterization of acousto-optic tunable filter (AOTF) hyperspectral imaging systems in the near-infrared (NIR) spectral region from 900 nm to 1700 nm. The proposed spectral characterization method is based on the SRM-2035 standard reference material, exhibiting distinct spectral features, which enables robust non-rigid matching of the acquired and reference spectra. The matching is performed by simultaneously optimizing the parameters of the AOTF tuning curve, spectral resolution, baseline, and multiplicative effects. In this way, the tuning curve (frequency-wavelength characteristics) and the corresponding spectral resolution of the AOTF hyperspectral imaging system can be characterized simultaneously. Also, the method enables simple spectral characterization of the entire imaging plane of hyperspectral imaging systems. The results indicate that the method is accurate and efficient and can easily be integrated with systems operating in diffuse reflection or transmission modes. Therefore, the proposed method is suitable for characterization, calibration, or validation of AOTF hyperspectral imaging systems. © 2011 Society for Applied Spectroscopy

  1. Reference correlation of the thermal conductivity of carbon dioxide from the triple point to 1100 K and up to 200 MPa

    DOE PAGES

    Huber, M. L.; Sykioti, E. A.; Assael, M. J.; ...

    2016-02-25

    This article contains new, representative reference equations for the thermal conductivity of carbon dioxide. The equations are based in part upon a body of experimental data that has been critically assessed for internal consistency and for agreement with theory whenever possible. In the case of the dilute-gas thermal conductivity, we incorporated recent theoretical calculations to extend the temperature range of the experimental data. Moreover, in the critical region, the experimentally observed enhancement of the thermal conductivity is well represented by theoretically based equations containing just one adjustable parameter. The correlation is applicable for the temperature range from the triple point to 1100 K and pressures up to 200 MPa. Lastly, the overall uncertainty (at the 95% confidence level) of the proposed correlation varies depending on the state point from a low of 1% at very low pressures below 0.1 MPa between 300 and 700 K, to 5% at the higher pressures of the range of validity.

  2. Soil Microbial Functional and Fungal Diversity as Influenced by Municipal Sewage Sludge Accumulation

    PubMed Central

    Frąc, Magdalena; Oszust, Karolina; Lipiec, Jerzy; Jezierska-Tys, Stefania; Nwaichi, Eucharia Oluchi

    2014-01-01

    Safe disposal of municipal sewage sludge is a challenging global environmental concern. The aim of this study was to assess the response of soil microbial functional diversity to the accumulation of municipal sewage sludge during landfill storage. Soil samples of a municipal sewage sludge (SS) and from a sewage sludge landfill that was 3 m from a SS landfill (SS3) were analyzed relative to an undisturbed reference soil. Biolog EcoPlates™ were inoculated with a soil suspension, and the Average Well Color Development (AWCD), Richness (R) and Shannon-Weaver index (H) were calculated to interpret the results. The fungi isolated from the sewage sludge were identified using comparative rDNA sequencing of the LSU D2 region. The MicroSEQ® ID software was used to assess the raw sequence files, perform sequence matching to the MicroSEQ® ID-validated reference database and create Neighbor-Joining trees. Moreover, the genera of fungi isolated from the soil were identified using microscopic methods. Municipal sewage sludge can serve as a habitat for plant pathogens and as a source of pathogen strains for biotechnological applications. PMID:25170681
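
    The three EcoPlate summary statistics named here (AWCD, richness R and the Shannon-Weaver index H) are simple to compute from one plate reading. A sketch with invented absorbance values follows; the 0.25 threshold for counting a well as positive is a common convention assumed for illustration.

    ```python
    import numpy as np

    def ecoplate_indices(abs_substrates, abs_control, threshold=0.25):
        """AWCD, richness R and Shannon-Weaver H from one EcoPlate reading.

        abs_substrates : absorbance values of the 31 carbon-substrate wells
        abs_control    : absorbance of the water (control) well
        """
        corrected = np.clip(np.asarray(abs_substrates) - abs_control, 0, None)
        awcd = corrected.mean()
        richness = int((corrected > threshold).sum())
        p = corrected[corrected > 0]
        p = p / p.sum()
        shannon = float(-(p * np.log(p)).sum())
        return awcd, richness, shannon

    reading = np.random.default_rng(4).uniform(0.0, 1.5, size=31)
    print(ecoplate_indices(reading, abs_control=0.1))
    ```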

  3. Soil microbial functional and fungal diversity as influenced by municipal sewage sludge accumulation.

    PubMed

    Frąc, Magdalena; Oszust, Karolina; Lipiec, Jerzy; Jezierska-Tys, Stefania; Nwaichi, Eucharia Oluchi

    2014-08-28

    Safe disposal of municipal sewage sludge is a challenging global environmental concern. The aim of this study was to assess the response of soil microbial functional diversity to the accumulation of municipal sewage sludge during landfill storage. Soil samples of a municipal sewage sludge (SS) and from a sewage sludge landfill that was 3 m from a SS landfill (SS3) were analyzed relative to an undisturbed reference soil. Biolog EcoPlates™ were inoculated with a soil suspension, and the Average Well Color Development (AWCD), Richness (R) and Shannon-Weaver index (H) were calculated to interpret the results. The fungi isolated from the sewage sludge were identified using comparative rDNA sequencing of the LSU D2 region. The MicroSEQ® ID software was used to assess the raw sequence files, perform sequence matching to the MicroSEQ® ID-validated reference database and create Neighbor-Joining trees. Moreover, the genera of fungi isolated from the soil were identified using microscopic methods. Municipal sewage sludge can serve as a habitat for plant pathogens and as a source of pathogen strains for biotechnological applications.

  4. Validating the usability of an interactive Earth Observation based web service for landslide investigation

    NASA Astrophysics Data System (ADS)

    Albrecht, Florian; Weinke, Elisabeth; Eisank, Clemens; Vecchiotti, Filippo; Hölbling, Daniel; Friedl, Barbara; Kociu, Arben

    2017-04-01

    Regional authorities and infrastructure maintainers in almost all mountainous regions of the Earth need detailed and up-to-date landslide inventories for hazard and risk management. Landslide inventories are usually compiled through ground surveys and manual image interpretation following landslide triggering events. We developed a web service that uses Earth Observation (EO) data to support the mapping and monitoring tasks for improving the collection of landslide information. The planned validation of the EO-based web service covers not only the analysis of the achievable landslide information quality but also the usability and user-friendliness of the user interface. The underlying validation criteria are based on the user requirements and the defined tasks and aims in the work description of the FFG project Land@Slide (EO-based landslide mapping: from methodological developments to automated web-based information delivery). The service will be validated in collaboration with stakeholders, decision makers and experts. Users are requested to test the web service functionality and give feedback with a web-based questionnaire by following the subsequently described workflow. The users will operate the web service via the responsive user interface and can extract landslide information from EO data. They compare it to reference data for quality assessment, for monitoring changes and for assessing landslide-affected infrastructure. An overview page lets the user explore a list of example projects with resulting landslide maps and mapping workflow descriptions. The example projects include mapped landslides in several test areas in Austria and Northern Italy. Landslides were extracted from high resolution (HR) and very high resolution (VHR) satellite imagery, such as Landsat, Sentinel-2, SPOT-5, WorldView-2/3 or Pléiades. The user can create his/her own project by selecting available satellite imagery or by uploading new data. Subsequently, a new landslide extraction workflow can be initiated through the functionality that the web service provides: (1) a segmentation of the image into spectrally homogeneous objects, (2) a classification of the objects into landslide and non-landslide areas and (3) an editing tool for the manual refinement of extracted landslide boundaries. In addition, the user interface of the web service provides tools that enable the user (4) to perform a monitoring that identifies changes between landslide maps of different points in time, (5) to perform a validation of the landslide maps by comparing them to reference data, and (6) to perform an assessment of affected infrastructure by comparing the landslide maps to respective infrastructure data. After exploring the web service functionality, the users are asked to fill in the online validation protocol in the form of a questionnaire in order to provide their feedback. Concerning usability, we evaluate how intuitively the web service functionality can be operated, how well the integrated help information guides the users, and what kind of background information, e.g. remote sensing concepts and theory, is necessary for a practitioner to fully exploit the value of EO data. The feedback will be used for improving the user interface and for the implementation of additional functionality.

  5. Discovery of common sequences absent in the human reference genome using pooled samples from next generation sequencing.

    PubMed

    Liu, Yu; Koyutürk, Mehmet; Maxwell, Sean; Xiang, Min; Veigl, Martina; Cooper, Richard S; Tayo, Bamidele O; Li, Li; LaFramboise, Thomas; Wang, Zhenghe; Zhu, Xiaofeng; Chance, Mark R

    2014-08-16

    Sequences up to several megabases in length have been found to be present in individual genomes but absent in the human reference genome. These sequences may be common in populations, and their absence in the reference genome may indicate rare variants in the genomes of individuals who served as donors for the human genome project. As the reference genome is used in probe design for microarray technology and mapping short reads in next generation sequencing (NGS), this missing sequence could be a source of bias in functional genomic studies and variant analysis. One End Anchor (OEA) and/or orphan reads from paired-end sequencing have been used to identify novel sequences that are absent in the reference genome. However, no study has investigated the distribution, evolution and functionality of those sequences in human populations. To systematically identify and study the missing common sequences (micSeqs), we extended the previous method by pooling OEA reads from a large number of individuals and applying strict filtering methods to remove false sequences. The pipeline was applied to data from phase 1 of the 1000 Genomes Project. We identified 309 micSeqs that are present in at least 1% of the human population, but absent in the reference genome. We confirmed 76% of these 309 micSeqs by comparison to other primate genomes, individual human genomes, and gene expression data. Furthermore, we randomly selected fifteen micSeqs and confirmed their presence using PCR validation in 38 additional individuals. Functional analysis using published RNA-seq and ChIP-seq data showed that eleven micSeqs are highly expressed in human brain and three micSeqs contain transcription factor (TF) binding regions, suggesting they are functional elements. In addition, the identified micSeqs are absent in non-primates and show dynamic acquisition during primate evolution culminating with most micSeqs being present in Africans, suggesting some micSeqs may be important sources of human diversity. 76% of micSeqs were confirmed by a comparative genomics approach. Fourteen micSeqs are expressed in human brain or contain TF binding regions. Some micSeqs are primate-specific, conserved and may play a role in the evolution of primates.

  6. The Multiple-Use of Accountability Assessments: Implications for the Process of Validation

    ERIC Educational Resources Information Center

    Koch, Martha J.

    2014-01-01

    Implications of the multiple-use of accountability assessments for the process of validation are examined. Multiple-use refers to the simultaneous use of results from a single administration of an assessment for its intended use and for one or more additional uses. A theoretical discussion of the issues for validation which emerge from…

  7. Quantitative comparison of DNA methylation assays for biomarker development and clinical applications.

    PubMed

    2016-07-01

    DNA methylation patterns are altered in numerous diseases and often correlate with clinically relevant information such as disease subtypes, prognosis and drug response. With suitable assays and after validation in large cohorts, such associations can be exploited for clinical diagnostics and personalized treatment decisions. Here we describe the results of a community-wide benchmarking study comparing the performance of all widely used methods for DNA methylation analysis that are compatible with routine clinical use. We shipped 32 reference samples to 18 laboratories in seven different countries. Researchers in those laboratories collectively contributed 21 locus-specific assays for an average of 27 predefined genomic regions, as well as six global assays. We evaluated assay sensitivity on low-input samples and assessed the assays' ability to discriminate between cell types. Good agreement was observed across all tested methods, with amplicon bisulfite sequencing and bisulfite pyrosequencing showing the best all-round performance. Our technology comparison can inform the selection, optimization and use of DNA methylation assays in large-scale validation studies, biomarker development and clinical diagnostics.

  8. Combining FT-IR spectroscopy and multivariate analysis for qualitative and quantitative analysis of the cell wall composition changes during apples development.

    PubMed

    Szymanska-Chargot, M; Chylinska, M; Kruk, B; Zdunek, A

    2015-01-22

    The aim of this work was to quantitatively and qualitatively determine the composition of the cell wall material from apples during development by means of Fourier transform infrared (FT-IR) spectroscopy. The FT-IR region of 1500-800 cm⁻¹, containing characteristic bands for galacturonic acid, hemicellulose and cellulose, was examined using principal component analysis (PCA), k-means clustering and partial least squares (PLS). The samples were differentiated by development stage and cultivar using PCA and k-means clustering. PLS calibration models for galacturonic acid, hemicellulose and cellulose content from FT-IR spectra were developed and validated with the reference data. PLS models were tested using the root-mean-square errors of cross-validation for the contents of galacturonic acid, hemicellulose and cellulose, which were 8.30 mg/g, 4.08% and 1.74%, respectively. It was proven that FT-IR spectroscopy combined with chemometric methods has potential for fast and reliable determination of the main constituents of fruit cell walls. Copyright © 2014 Elsevier Ltd. All rights reserved.
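
    A PLS calibration with cross-validated RMSE, of the kind described above, can be sketched with scikit-learn. The spectra and reference values below are simulated, not the apple cell-wall data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(5)

    # Hypothetical data: 40 cell-wall samples x 350 wavenumbers (1500-800 cm-1 region)
    X = rng.normal(size=(40, 350))
    galacturonic_acid = X[:, 50] * 5 + X[:, 200] * 2 + rng.normal(scale=0.5, size=40)

    pls = PLSRegression(n_components=5)
    predicted = cross_val_predict(pls, X, galacturonic_acid, cv=5)

    rmsecv = np.sqrt(np.mean((predicted.ravel() - galacturonic_acid) ** 2))
    print(f"RMSECV: {rmsecv:.2f}")
    ```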

  9. Comparison of lysimeter based and calculated ASCE reference evapotranspiration in a subhumid climate

    NASA Astrophysics Data System (ADS)

    Nolz, Reinhard; Cepuder, Peter; Eitzinger, Josef

    2016-04-01

    The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration (ETref) and subsequently plant water requirements. Applied and validated under different climatic conditions, it generally achieved good results compared to other methods. However, several studies documented deviations between measured and calculated reference evapotranspiration depending on environmental and weather conditions. Therefore, it seems generally advisable to evaluate the model under local environmental conditions. In this study, reference evapotranspiration was determined at a subhumid site in northeastern Austria from 2005 to 2010 using a large weighing lysimeter (ETlys). The measured data were compared with ETref calculations. Daily values differed slightly over the course of a year: ETref was generally overestimated at small values and rather underestimated when ET was large, which is also supported by other studies. In our case, advection of sensible heat proved to have an impact, but it could not explain the differences exclusively. Obviously, there were also other influences, such as seasonally varying surface resistance or albedo. Generally, the ASCE-EWRI equation for daily time steps performed best at average weather conditions. The outcomes should help to correctly interpret ETref data in the region and in similar environments and improve knowledge on the dynamics of influencing factors causing deviations.
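
    The standardized ASCE-EWRI form of the Penman-Monteith equation referred to in this record can be written compactly. The sketch below uses the daily constants for the short (grass) reference surface and standard FAO-56-style auxiliary relations for vapour pressure and the psychrometric constant; the input values are invented.

    ```python
    import math

    def asce_reference_et(T, u2, Rn, G, RH, P=101.3, Cn=900.0, Cd=0.34):
        """Daily short-crop reference ET (mm/day) after the ASCE-EWRI standardized
        Penman-Monteith equation (Cn=900, Cd=0.34 for the daily grass reference).

        T  : mean daily air temperature [deg C]
        u2 : wind speed at 2 m height [m/s]
        Rn : net radiation [MJ m-2 day-1]
        G  : soil heat flux [MJ m-2 day-1]
        RH : mean relative humidity [%]
        P  : atmospheric pressure [kPa]
        """
        es = 0.6108 * math.exp(17.27 * T / (T + 237.3))   # saturation vapour pressure [kPa]
        ea = es * RH / 100.0                               # actual vapour pressure [kPa]
        delta = 4098.0 * es / (T + 237.3) ** 2             # slope of the vapour pressure curve
        gamma = 0.000665 * P                               # psychrometric constant [kPa/degC]
        num = 0.408 * delta * (Rn - G) + gamma * Cn / (T + 273.0) * u2 * (es - ea)
        return num / (delta + gamma * (1.0 + Cd * u2))

    print(round(asce_reference_et(T=20.0, u2=2.0, Rn=15.0, G=0.5, RH=60.0), 2))
    ```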

  10. Development of real-time PCR method for the detection and the quantification of a new endogenous reference gene in sugar beet "Beta vulgaris L.": GMO application.

    PubMed

    Chaouachi, Maher; Alaya, Akram; Ali, Imen Ben Haj; Hafsa, Ahmed Ben; Nabi, Nesrine; Bérard, Aurélie; Romaniuk, Marcel; Skhiri, Fethia; Saïd, Khaled

    2013-01-01

    KEY MESSAGE: Here, we describe a newly developed quantitative real-time PCR method for the detection and quantification of a new specific endogenous reference gene used in GMO analysis. The key requirement of this study was the identification of a new reference gene suitable for the differentiation of the four genomic sections of the sugar beet (Beta vulgaris L.) (Beta, Corrollinae, Nanae and Procumbentes) and for the quantification of genetically modified sugar beet. A specific qualitative polymerase chain reaction (PCR) assay was designed to detect sugar beet by amplifying a region of the adenylate transporter (ant) gene only from species of genomic section I of the genus Beta (cultivated and wild relatives), showing negative PCR results for 7 species of the 3 other sections, 8 related species and 20 non-sugar beet plants. The sensitivity of the assay was 15 haploid genome copies (HGC). A quantitative real-time polymerase chain reaction (QRT-PCR) assay was also performed, with high linearity (R² > 0.994) over sugar beet standard concentrations ranging from 20,000 to 10 HGC of sugar beet DNA per PCR. The QRT-PCR assay described in this study was specific and more sensitive for sugar beet quantification than the validated test previously reported by the European Reference Laboratory. This assay is suitable for GMO quantification in routine analysis from a wide variety of matrices.
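
    Quantification in such a qPCR assay typically rests on a standard curve of Cq versus log copy number. The sketch below fits such a curve and back-calculates the copy number of an unknown sample; the dilution series and Cq values are invented, not the published assay data.

    ```python
    import numpy as np

    # Hypothetical dilution series: haploid genome copies (HGC) and measured Cq values
    copies = np.array([20000, 2000, 200, 20, 10], dtype=float)
    cq = np.array([21.0, 24.4, 27.8, 31.1, 32.1])

    slope, intercept = np.polyfit(np.log10(copies), cq, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0          # ~1.0 means 100 % PCR efficiency
    r2 = np.corrcoef(np.log10(copies), cq)[0, 1] ** 2

    def copies_from_cq(c):
        """Estimate template copy number of an unknown sample from its Cq."""
        return 10 ** ((c - intercept) / slope)

    print(f"efficiency={efficiency:.2f}, R2={r2:.3f}, unknown~{copies_from_cq(26.0):.0f} HGC")
    ```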

  11. A Modular Approach to Model Oscillating Control Surfaces Using Navier Stokes Equations

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Lee, Henry

    2014-01-01

    The use of active controls for rotorcraft is becoming more important for modern aerospace configurations. Efforts to reduce the vibrations of helicopter blades with the use of active controls are in progress. Modeling oscillating control surfaces using linear aerodynamic theory is well established. However, higher-fidelity methods are needed to account for nonlinear effects, such as those that occur in transonic flow. The aeroelastic responses of a wing with an oscillating control surface, computed using the transonic small perturbation (TSP) theory, have been shown to cause important transonic flow effects such as a reversal of control surface effectiveness that occurs as the shock wave crosses the hinge line. In order to account for flow complexities such as blade-vortex interactions of rotor blades, higher-fidelity methods based on the Navier-Stokes equations are used. Reference 6 presents a procedure that uses the Navier-Stokes equations with moving-sheared grids and demonstrates up to 8 degrees of control-surface amplitude, using a single grid. Later, this procedure was extended to accommodate larger amplitudes, based on sliding grid zones. The sheared grid method implemented in the Euler/Navier-Stokes-based aeroelastic code ENSAERO was successfully applied to active control design by industry. Recently, several papers have presented results for oscillating control surfaces using Reynolds-Averaged Navier-Stokes (RANS) equations. References 9 and 10 report 2-D cases by filling gaps with overset grids. Reference 9 compares integrated forces with the experiment at low oscillating frequencies, whereas Ref. 10 reports parametric studies but with no validation. Reference 11 reports results for a 3D case by modeling the gap region with a deformed grid and compares force results with the experiment only at the mid-span of the flap. In Ref. 11 the grid is deformed to match the control surface deflections at the section where the measurements are made. However, there is no indication in Ref. 11 that the gaps are explicitly modeled as in Ref. 6. Computations using overset grids are reported in Ref. 12 for a case by adding a moving control surface to an existing blade, but with no validation either with an experiment or another computation.

  12. SDCLIREF - A sub-daily gridded reference dataset

    NASA Astrophysics Data System (ADS)

    Wood, Raul R.; Willkofer, Florian; Schmid, Franz-Josef; Trentini, Fabian; Komischke, Holger; Ludwig, Ralf

    2017-04-01

    Climate change is expected to impact the intensity and frequency of hydrometeorological extreme events. In order to adequately capture and analyze extreme rainfall events, in particular when assessing flood and flash flood situations, data is required at high spatial and sub-daily resolution, which is often not available in sufficient density and over extended time periods. The ClimEx project (Climate Change and Hydrological Extreme Events) addresses the alteration of hydrological extreme events under climate change conditions. In order to differentiate between a clear climate change signal and the limits of natural variability, unique Single-Model Regional Climate Model Ensembles (CRCM5 driven by CanESM2, RCP8.5) were created for a European and North-American domain, each comprising 50 members of 150 years (1951-2100). In combination with the CORDEX-Database, this newly created ClimEx-Ensemble is a one-of-a-kind model dataset to analyze changes of sub-daily extreme events. For the purpose of bias-correcting the regional climate model ensembles as well as for the baseline calibration and validation of hydrological catchment models, a new sub-daily (3 h) high-resolution (500 m) gridded reference dataset (SDCLIREF) was created for a domain covering the Upper Danube and Main watersheds (approximately 100,000 km²). As the sub-daily observations lack a continuous time series for the reference period 1980-2010, a suitable method was needed to bridge the gaps in the discontinuous time series. The Method of Fragments (Sharma and Srikanthan (2006); Westra et al. (2012)) was applied to transform daily observations to sub-daily rainfall events to extend the time series and densify the station network. Prior to applying the Method of Fragments and creating the gridded dataset using rigorous interpolation routines, observations operated by several institutions in three countries (Germany, Austria, Switzerland) were collected and subjected to quality control. Among others, the quality control checked for steps, extensive dry seasons, temporal consistency and maximum hourly values. The resulting SDCLIREF dataset provides a robust precipitation reference for hydrometeorological applications at an unprecedentedly high spatio-temporal resolution. References: Sharma, A.; Srikanthan, S. (2006): Continuous Rainfall Simulation: A Nonparametric Alternative. In: 30th Hydrology and Water Resources Symposium, 4-7 December 2006, Launceston, Tasmania. Westra, S.; Mehrotra, R.; Sharma, A.; Srikanthan, R. (2012): Continuous rainfall simulation. 1. A regionalized subdaily disaggregation approach. In: Water Resour. Res. 48 (1). DOI: 10.1029/2011WR010489.
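
    The Method of Fragments cited above redistributes a daily rainfall total according to the sub-daily pattern of a similar donor day. The sketch below shows only the fragment rescaling step, with invented numbers; the selection of donor days by season and neighbouring station, which is central to the real method, is omitted.

    ```python
    import numpy as np

    def disaggregate_daily(daily_total, donor_subdaily):
        """Method-of-Fragments style disaggregation (simplified sketch).

        daily_total    : daily rainfall sum at the target station [mm]
        donor_subdaily : 3-hourly rainfall of a 'similar' day at a donor station
                         with sub-daily records (8 values for a 3 h resolution).
        The donor day's relative temporal pattern (its fragments) is rescaled to
        reproduce the target daily total.
        """
        donor_subdaily = np.asarray(donor_subdaily, dtype=float)
        if donor_subdaily.sum() == 0:
            return np.zeros_like(donor_subdaily)
        fragments = donor_subdaily / donor_subdaily.sum()
        return fragments * daily_total

    # Example: redistribute a 12.4 mm day using the pattern of a donor day
    print(disaggregate_daily(12.4, [0, 0, 1.5, 4.0, 2.5, 0.5, 0, 0]))
    ```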

  13. Characterization of reference genes for qPCR analysis in various tissues of the Fujian oyster Crassostrea angulata

    NASA Astrophysics Data System (ADS)

    Pu, Fei; Yang, Bingye; Ke, Caihuan

    2015-07-01

    Accurate quantification of transcripts using quantitative real-time polymerase chain reaction (qPCR) depends on the identification of reliable reference genes for normalization. This study aimed to identify and validate seven reference genes, including actin-2 (ACT-2), elongation factor 1 alpha (EF-1α), elongation factor 1 beta (EF-1β), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), ubiquitin (UBQ), β-tubulin (β-TUB), and 18S ribosomal RNA, from Crassostrea angulata, a valuable marine bivalve cultured worldwide. Transcript levels of the candidate reference genes were examined using qPCR analysis and showed differential expression patterns in the mantle, gill, adductor muscle, labial palp, visceral mass, hemolymph and gonad tissues. Quantitative data were analyzed using the geNorm software to assess the expression stability of the candidate reference genes, revealing that β-TUB and UBQ were the most stable genes. The commonly used GAPDH and 18S rRNA showed low stability, making them unsuitable candidates in this system. The expression pattern of the G protein β-subunit gene (Gβ) across tissue types was also examined and normalized to the expression of each or both of UBQ and β-TUB as internal controls. This revealed consistent trends with all three normalization approaches, thus validating the reliability of UBQ and β-TUB as optimal internal controls. The study provides the first validated reference genes for accurate data normalization in transcript profiling in Crassostrea angulata, which will be indispensable for further functional genomics studies in this economically valuable marine bivalve.

  14. Development of genic-SSR markers by deep transcriptome sequencing in pigeonpea [Cajanus cajan (L.) Millspaugh].

    PubMed

    Dutta, Sutapa; Kumawat, Giriraj; Singh, Bikram P; Gupta, Deepak K; Singh, Sangeeta; Dogra, Vivek; Gaikwad, Kishor; Sharma, Tilak R; Raje, Ranjeet S; Bandhopadhya, Tapas K; Datta, Subhojit; Singh, Mahendra N; Bashasab, Fakrudin; Kulwal, Pawan; Wanjari, K B; K Varshney, Rajeev; Cook, Douglas R; Singh, Nagendra K

    2011-01-20

    Pigeonpea [Cajanus cajan (L.) Millspaugh], one of the most important food legumes of semi-arid tropical and subtropical regions, has limited genomic resources, particularly expressed sequence based (genic) markers. We report a comprehensive set of validated genic simple sequence repeat (SSR) markers using deep transcriptome sequencing, and its application in genetic diversity analysis and mapping. In this study, 43,324 transcriptome shotgun assembly unigene contigs were assembled from 1.696 million 454 GS-FLX sequence reads of separate pooled cDNA libraries prepared from leaf, root, stem and immature seed of two pigeonpea varieties, Asha and UPAS 120. A total of 3,771 genic-SSR loci, excluding homopolymeric and compound repeats, were identified; of which 2,877 PCR primer pairs were designed for marker development. Dinucleotide was the most common repeat motif with a frequency of 60.41%, followed by tri- (34.52%), hexa- (2.62%), tetra- (1.67%) and pentanucleotide (0.76%) repeat motifs. Primers were synthesized and tested for 772 of these loci with repeat lengths of ≥ 18 bp. Of these, 550 markers were validated for consistent amplification in eight diverse pigeonpea varieties; 71 were found to be polymorphic on agarose gel electrophoresis. Genetic diversity analysis was done on 22 pigeonpea varieties and eight wild species using 20 highly polymorphic genic-SSR markers. The number of alleles at these loci ranged from 4-10 and the polymorphism information content values ranged from 0.46 to 0.72. Neighbor-joining dendrogram showed distinct separation of the different groups of pigeonpea cultivars and wild species. Deep transcriptome sequencing of the two parental lines helped in silico identification of polymorphic genic-SSR loci to facilitate the rapid development of an intra-species reference genetic map, a subset of which was validated for expected allelic segregation in the reference mapping population. We developed 550 validated genic-SSR markers in pigeonpea using deep transcriptome sequencing. From these, 20 highly polymorphic markers were used to evaluate the genetic relationship among species of the genus Cajanus. A comprehensive set of genic-SSR markers was developed as an important genomic resource for diversity analysis and genetic mapping in pigeonpea.
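
    The ≥18 bp repeat-length criterion used here for primer design can be mimicked with a simple regular-expression scan for di- to hexanucleotide repeats. The sketch below is a rough analogue only; it does not filter homopolymer runs or motifs that are multiples of shorter ones, and the example contig is invented.

    ```python
    import re

    def find_ssrs(seq, min_len=18):
        """Locate simple sequence repeats (di- to hexanucleotide motifs) whose
        total repeat tract is at least `min_len` bases.

        Note: redundant hits (e.g. AGAG reported inside an AG repeat) and
        homopolymer-derived motifs are not filtered in this sketch.
        """
        ssrs = []
        for motif_len in range(2, 7):
            min_repeats = -(-min_len // motif_len)          # ceiling division
            pattern = re.compile(r"(([ACGT]{%d})\2{%d,})" % (motif_len, min_repeats - 1))
            for m in pattern.finditer(seq.upper()):
                ssrs.append((m.start(), m.group(2), len(m.group(1)) // motif_len))
        return ssrs

    contig = "ATGC" + "AG" * 12 + "TTACG" + "ATT" * 7 + "GGC"
    print(find_ssrs(contig))   # [(start, motif, repeat_count), ...]
    ```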

  15. Development of genic-SSR markers by deep transcriptome sequencing in pigeonpea [Cajanus cajan (L.) Millspaugh

    PubMed Central

    2011-01-01

    Background Pigeonpea [Cajanus cajan (L.) Millspaugh], one of the most important food legumes of semi-arid tropical and subtropical regions, has limited genomic resources, particularly expressed sequence based (genic) markers. We report a comprehensive set of validated genic simple sequence repeat (SSR) markers using deep transcriptome sequencing, and its application in genetic diversity analysis and mapping. Results In this study, 43,324 transcriptome shotgun assembly unigene contigs were assembled from 1.696 million 454 GS-FLX sequence reads of separate pooled cDNA libraries prepared from leaf, root, stem and immature seed of two pigeonpea varieties, Asha and UPAS 120. A total of 3,771 genic-SSR loci, excluding homopolymeric and compound repeats, were identified; of which 2,877 PCR primer pairs were designed for marker development. Dinucleotide was the most common repeat motif with a frequency of 60.41%, followed by tri- (34.52%), hexa- (2.62%), tetra- (1.67%) and pentanucleotide (0.76%) repeat motifs. Primers were synthesized and tested for 772 of these loci with repeat lengths of ≥18 bp. Of these, 550 markers were validated for consistent amplification in eight diverse pigeonpea varieties; 71 were found to be polymorphic on agarose gel electrophoresis. Genetic diversity analysis was done on 22 pigeonpea varieties and eight wild species using 20 highly polymorphic genic-SSR markers. The number of alleles at these loci ranged from 4-10 and the polymorphism information content values ranged from 0.46 to 0.72. Neighbor-joining dendrogram showed distinct separation of the different groups of pigeonpea cultivars and wild species. Deep transcriptome sequencing of the two parental lines helped in silico identification of polymorphic genic-SSR loci to facilitate the rapid development of an intra-species reference genetic map, a subset of which was validated for expected allelic segregation in the reference mapping population. Conclusion We developed 550 validated genic-SSR markers in pigeonpea using deep transcriptome sequencing. From these, 20 highly polymorphic markers were used to evaluate the genetic relationship among species of the genus Cajanus. A comprehensive set of genic-SSR markers was developed as an important genomic resource for diversity analysis and genetic mapping in pigeonpea. PMID:21251263

  16. The ANACONDA algorithm for deformable image registration in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weistrand, Ola; Svensson, Stina, E-mail: stina.svensson@raysearchlabs.com

    2015-01-15

    Purpose: The purpose of this work was to describe a versatile algorithm for deformable image registration with applications in radiotherapy and to validate it on thoracic 4DCT data as well as CT/cone beam CT (CBCT) data. Methods: ANAtomically CONstrained Deformation Algorithm (ANACONDA) combines image information (i.e., intensities) with anatomical information as provided by contoured image sets. The registration problem is formulated as a nonlinear optimization problem and solved with an in-house developed solver, tailored to this problem. The objective function, which is minimized during optimization, is a linear combination of four nonlinear terms: 1. image similarity term; 2. grid regularization term, which aims at keeping the deformed image grid smooth and invertible; 3. a shape-based regularization term which works to keep the deformation anatomically reasonable when regions of interest are present in the reference image; and 4. a penalty term which is added to the optimization problem when controlling structures are used, aimed at deforming the selected structure in the reference image to the corresponding structure in the target image. Results: To validate ANACONDA, the authors have used 16 publicly available thoracic 4DCT data sets for which target registration errors from several algorithms have been reported in the literature. On average for the 16 data sets, the target registration error is 1.17 ± 0.87 mm, Dice similarity coefficient is 0.98 for the two lungs, and image similarity, measured by the correlation coefficient, is 0.95. The authors have also validated ANACONDA using two pelvic cases and one head and neck case with planning CT and daily acquired CBCT. Each image has been contoured by a physician (radiation oncologist) or experienced radiation therapist. The results are an improvement with respect to rigid registration. However, for the head and neck case, the sample set is too small to show statistical significance. Conclusions: ANACONDA performs well in comparison with other algorithms. By including CT/CBCT data in the validation, the various aspects of the algorithm such as its ability to handle different modalities, large deformations, and air pockets are shown.
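
    Read literally, the objective described above is a weighted sum of the four terms, minimized over the deformation field. A schematic form (the weights and the exact functional expressions are not given in this record and are shown only as placeholders) is:

        f(d) = \alpha_1 \, C_{\mathrm{sim}}(d) + \alpha_2 \, R_{\mathrm{grid}}(d) + \alpha_3 \, R_{\mathrm{shape}}(d) + \alpha_4 \, P_{\mathrm{ctrl}}(d), \qquad d^{*} = \arg\min_{d} f(d)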

  17. Development of a multi-sensor elevation time series pole-ward of 86°S in support of altimetry validation and ice sheet mass balance studies

    NASA Astrophysics Data System (ADS)

    Studinger, M.; Brunt, K. M.; Casey, K.; Medley, B.; Neumann, T.; Manizade, S.; Linkswiler, M. A.

    2015-12-01

    In order to produce a cross-calibrated long-term record of ice-surface elevation change for input into ice sheet models and mass balance studies, it is necessary to "link the measurements made by airborne laser altimeters, satellite measurements of ICESat, ICESat-2, and CryoSat-2" [IceBridge Level 1 Science Requirements, 2012] and determine the biases and the spatial variations between radar altimeters and laser altimeters using different wavelengths. The convergence zones of all ICESat tracks (86°S) and all ICESat-2 and CryoSat-2 tracks (88°S) are in regions of relatively low accumulation, making them ideal for satellite altimetry calibration. In preparation for ICESat-2 validation, the IceBridge and ICESat-2 science teams have designed IceBridge data acquisitions around 86°S and 88°S. Several aspects need to be considered when comparing and combining elevation measurements from different radar and laser altimeters, including: a) footprint size and spatial sampling pattern; b) accuracy and precision of each data set; c) varying signal penetration into the snow; and d) changes in geodetic reference frames over time, such as the International Terrestrial Reference Frame (ITRF). The presentation will focus on the analysis of several IceBridge flights around 86°S and 88°S with the LVIS and ATM airborne laser altimeters and will evaluate the accuracy and precision of these data sets. To properly interpret the observed elevation change (dh/dt) as mass change, however, the various processes that control surface elevation fluctuations must be quantified; therefore, future work will quantify the spatial variability in snow accumulation rates pole-ward of 86°S and in particular around 88°S. Our goal is to develop a cross-validated multi-sensor time series of surface elevation change pole-ward of 86°S that, in combination with measured accumulation rates, will support ICESat-2 calibration and validation and ice sheet mass balance studies.

  18. Regional variability in the accuracy of statistical reproductions of historical time series of daily streamflow at ungaged locations

    NASA Astrophysics Data System (ADS)

    Farmer, W. H.; Archfield, S. A.; Over, T. M.; Kiang, J. E.

    2015-12-01

    In the United States and across the globe, the majority of stream reaches and rivers are substantially impacted by water use or remain ungaged. The result is large gaps in the availability of natural streamflow records from which to infer hydrologic understanding and inform water resources management. From basin-specific to continent-wide scales, many efforts have been undertaken to develop methods to estimate ungaged streamflow. This work applies and contrasts several statistical models of daily streamflow to more than 1,700 reference-quality streamgages across the conterminous United States using a cross-validation methodology. The variability of streamflow simulation performance across the country exhibits a pattern familiar to other continental scale modeling efforts performed for the United States. For portions of the West Coast and the dense, relatively homogeneous and humid regions of the eastern United States, models produce reliable estimates of daily streamflow using many different prediction methods. Model performance for the middle portion of the United States, marked by more heterogeneous and arid conditions, and with larger contributing areas and sparser networks of streamgages, is consistently poor. A discussion of the difficulty of statistical interpolation and regionalization in these regions raises additional questions of data availability and quality, hydrologic process representation and dominance, and intrinsic variability.

  19. U.S. residential consumer product information: Validation of methods for post-stratification weighting of Amazon Mechanical Turk surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenblatt, Jeffery B.; Yang, Hung-Chia; Desroches, Louis-Benoit

    2013-04-01

    We present two post-stratification weighting methods to validate survey data collected using Amazon Mechanical Turk (AMT). Two surveys focused on appliance and consumer electronics devices were administered in the spring and summer of 2012 to each of approximately 3,000 U.S. households. Specifically, the surveys asked questions about residential refrigeration products, televisions (TVs) and set-top boxes (STBs). Filtered data were assigned weights using each of two weighting methods, termed “sequential” and “simultaneous,” by examining up to eight demographic variables (income, education, gender, race, Hispanic origin, number of occupants, ages of occupants, and geographic region) in comparison to reference U.S. demographic data from the 2009 Residential Energy Consumption Survey (RECS). Five key questions from the surveys (number of refrigerators, number of freezers, number of TVs, number of STBs and primary service provider) were evaluated with a set of statistical tests to determine whether either method improved the agreement of AMT with reference data, and if so, which method was better. The statistical tests used were: differences in proportions, distributions of proportions (using Pearson’s chi-squared test), and differences in average numbers of devices as functions of all demographic variables. The results indicated that both methods generally improved the agreement between AMT and reference data, sometimes greatly, but that the simultaneous method was usually superior to the sequential method. Some differences in sample populations were found between the AMT and reference data. Differences in the proportion of STBs reflected large changes in the STB market since the time our reference data were acquired in 2009. Differences in the proportions of some primary service providers suggested real sample bias, with the possible explanation that AMT users are more likely to subscribe to providers who also provide home internet service. Differences in other variables, while statistically significant in some cases, were nonetheless considered to be minor. Depending on the intended purpose of the data collected using AMT, these biases may or may not be important; to correct them, additional questions and/or further post-survey adjustments could be employed. In general, based on the analysis methods and the sample datasets used in this study, AMT surveys appeared to provide useful data on appliance and consumer electronics devices.
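
    The record does not spell out the weighting algorithm itself; one common way to weight a sample simultaneously against several demographic margins is raking (iterative proportional fitting). A minimal sketch, with hypothetical variables and target shares standing in for the eight demographic variables and RECS reference data:

        import numpy as np

        def rake_weights(sample, margins, n_iter=50, tol=1e-8):
            """Iterative proportional fitting (raking) over categorical margins.

            sample:  dict of variable name -> array of category codes per respondent
            margins: dict of variable name -> array of target population shares
            Returns one weight per respondent so that weighted category shares
            approach each target margin.
            """
            n = len(next(iter(sample.values())))
            w = np.ones(n)
            for _ in range(n_iter):
                max_shift = 0.0
                for var, target in margins.items():
                    codes = np.asarray(sample[var])
                    for cat, share in enumerate(target):
                        mask = codes == cat
                        current = w[mask].sum() / w.sum()
                        if current > 0:
                            factor = share / current
                            max_shift = max(max_shift, abs(factor - 1.0))
                            w[mask] *= factor
                if max_shift < tol:
                    break
            return w * n / w.sum()  # normalize to a mean weight of 1

        # Hypothetical toy survey with two binary demographic variables
        sample = {"income_hi": np.array([0, 0, 1, 1, 1, 0]),
                  "college":   np.array([1, 0, 1, 1, 0, 0])}
        margins = {"income_hi": np.array([0.6, 0.4]),  # target shares for codes 0, 1
                   "college":   np.array([0.7, 0.3])}
        print(rake_weights(sample, margins).round(3))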

  20. Adolescent Populations Research Needs - NCS Dietary Assessment Literature Review

    Cancer.gov

    As with school age children, it is difficult to make conclusions about the validity of available dietary assessment instruments for adolescents because of the differences in instruments, research designs, reference methods, and populations in the validation literature.

  1. A Bayesian framework for adaptive selection, calibration, and validation of coarse-grained models of atomistic systems

    NASA Astrophysics Data System (ADS)

    Farrell, Kathryn; Oden, J. Tinsley; Faghihi, Danial

    2015-08-01

    A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.
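
    In a generic Bayesian model-selection setting (the notation here is illustrative, not taken from the paper), the model evidence and posterior model plausibility referred to above take the form:

        \pi(D \mid M_j) = \int \pi(D \mid \theta_j, M_j)\, \pi(\theta_j \mid M_j)\, d\theta_j,
        \qquad
        \pi(M_j \mid D) = \frac{\pi(D \mid M_j)\, \pi(M_j)}{\sum_k \pi(D \mid M_k)\, \pi(M_k)}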

  2. Validating long-term satellite-derived disturbance products: the case of burned areas

    NASA Astrophysics Data System (ADS)

    Boschetti, L.; Roy, D. P.

    2015-12-01

    The potential research, policy and management applications of satellite products place a high priority on providing statements about their accuracy. A number of NASA, ESA and EU funded global and continental burned area products have been developed using coarse spatial resolution satellite data, and have the potential to become part of a long-term fire Climate Data Record. These products have usually been validated by comparison with reference burned area maps derived by visual interpretation of Landsat or similar spatial resolution data selected on an ad hoc basis. More optimally, a design-based validation method should be adopted that is characterized by the selection of reference data via a probability sampling that can subsequently be used to compute accuracy metrics, taking into account the sampling probability. Design-based techniques have been used for annual land cover and land cover change product validation, but have not been widely used for burned area products, or for the validation of global products that are highly variable in time and space (e.g. snow, floods or other non-permanent phenomena). This has been due to the challenge of designing an appropriate sampling strategy, and to the cost of collecting independent reference data. We propose a tri-dimensional sampling grid that allows for probability sampling of Landsat data in time and in space. To sample the globe in the spatial domain with non-overlapping sampling units, the Thiessen Scene Area (TSA) tessellation of the Landsat WRS path/rows is used. The TSA grid is then combined with the 16-day Landsat acquisition calendar to provide tri-dimensional elements (voxels). This allows the implementation of a sampling design where not only the location but also the time interval of the reference data is explicitly drawn by probability sampling. The proposed sampling design is a stratified random sampling, with two-level stratification of the voxels based on biomes and fire activity (Figure 1). The novel validation approach, used for the validation of the MODIS and forthcoming VIIRS global burned area products, is a general one, and could be used for the validation of other global products that are highly variable in space and time and is required to assess the accuracy of climate records. The approach is demonstrated using a 1 year dataset of MODIS fire products.
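
    The two-level stratification of voxels can be illustrated as a plain stratified random draw over space-time units. The sketch below is schematic only; the voxel universe, strata labels, and per-stratum sample sizes are hypothetical and not those of the actual design:

        import random

        def sample_voxels(voxels, stratum_of, n_per_stratum, seed=0):
            """Stratified random sample of space-time sampling units (voxels).

            voxels:         list of (tsa_id, period_index) tuples
            stratum_of:     function mapping a voxel to its stratum label,
                            e.g. (biome, fire_activity_class)
            n_per_stratum:  number of voxels drawn from each stratum
            """
            rng = random.Random(seed)
            by_stratum = {}
            for v in voxels:
                by_stratum.setdefault(stratum_of(v), []).append(v)
            drawn = []
            for stratum in sorted(by_stratum):
                members = by_stratum[stratum]
                drawn.extend(rng.sample(members, min(n_per_stratum, len(members))))
            return drawn

        # Hypothetical universe: 3 Thiessen Scene Areas x 4 16-day periods
        voxels = [(tsa, p) for tsa in ("TSA_A", "TSA_B", "TSA_C") for p in range(4)]
        strata = {"TSA_A": ("tropical_forest", "high"),
                  "TSA_B": ("savanna", "high"),
                  "TSA_C": ("boreal", "low")}
        print(sample_voxels(voxels, lambda v: strata[v[0]], n_per_stratum=2))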

  3. Validation of the MODIS Collection 6 MCD64 Global Burned Area Product

    NASA Astrophysics Data System (ADS)

    Boschetti, L.; Roy, D. P.; Giglio, L.; Stehman, S. V.; Humber, M. L.; Sathyachandran, S. K.; Zubkova, M.; Melchiorre, A.; Huang, H.; Huo, L. Z.

    2017-12-01

    The research, policy and management applications of satellite products place a high priority on rigorously assessing their accuracy. A number of NASA, ESA and EU funded global and continental burned area products have been developed using coarse spatial resolution satellite data, and have the potential to become part of a long-term fire Essential Climate Variable. These products have usually been validated by comparison with reference burned area maps derived by visual interpretation of Landsat or similar spatial resolution data selected on an ad hoc basis. More optimally, a design-based validation method should be adopted, characterized by the selection of reference data via probability sampling. Design-based techniques have been used for annual land cover and land cover change product validation, but have not been widely used for burned area products, or for other products that are highly variable in time and space (e.g. snow, floods, other non-permanent phenomena). This has been due to the challenge of designing an appropriate sampling strategy, and to the cost and limited availability of independent reference data. This paper describes the validation procedure adopted for the latest Collection 6 version of the MODIS Global Burned Area product (MCD64, Giglio et al, 2009). We used a tri-dimensional sampling grid that allows for probability sampling of Landsat data in time and in space (Boschetti et al, 2016). To sample the globe in the spatial domain with non-overlapping sampling units, the Thiessen Scene Area (TSA) tessellation of the Landsat WRS path/rows is used. The TSA grid is then combined with the 16-day Landsat acquisition calendar to provide tri-dimensional elements (voxels). This allows the implementation of a sampling design where not only the location but also the time interval of the reference data is explicitly drawn through stratified random sampling. The novel sampling approach was used for the selection of a reference dataset consisting of 700 Landsat 8 image pairs, interpreted according to the CEOS Burned Area Validation Protocol (Boschetti et al., 2009). Standard quantitative burned area product accuracy measures that are important for different types of fire users (Boschetti et al, 2016, Roy and Boschetti, 2009, Boschetti et al, 2004) are computed to characterize the accuracy of the MCD64 product.
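
    The standard burned area accuracy measures mentioned above are typically derived from a per-pixel confusion matrix between the reference maps and the product. A minimal, unweighted sketch (a design-based estimator would additionally weight pixels by their inclusion probabilities; the toy labels are hypothetical):

        def burned_area_accuracy(ref_burned, map_burned):
            """Accuracy measures from per-pixel reference vs. product labels (1 = burned)."""
            pairs = list(zip(ref_burned, map_burned))
            tp = sum(1 for r, m in pairs if r == 1 and m == 1)
            fp = sum(1 for r, m in pairs if r == 0 and m == 1)
            fn = sum(1 for r, m in pairs if r == 1 and m == 0)
            tn = sum(1 for r, m in pairs if r == 0 and m == 0)
            overall = (tp + tn) / len(pairs)
            commission = fp / (tp + fp) if (tp + fp) else 0.0  # mapped burned but unburned
            omission = fn / (tp + fn) if (tp + fn) else 0.0    # burned but missed by the map
            dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
            return overall, commission, omission, dice

        # Tiny hypothetical example
        print(burned_area_accuracy([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))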

  4. Statistical Analyses of Brain Surfaces Using Gaussian Random Fields on 2-D Manifolds

    PubMed Central

    Staib, Lawrence H.; Xu, Dongrong; Zhu, Hongtu; Peterson, Bradley S.

    2008-01-01

    Interest in the morphometric analysis of the brain and its subregions has recently intensified because growth or degeneration of the brain in health or illness affects not only the volume but also the shape of cortical and subcortical brain regions, and new image processing techniques permit detection of small and highly localized perturbations in shape or localized volume, with remarkable precision. An appropriate statistical representation of the shape of a brain region is essential, however, for detecting, localizing, and interpreting variability in its surface contour and for identifying differences in volume of the underlying tissue that produce that variability across individuals and groups of individuals. Our statistical representation of the shape of a brain region is defined by a reference region for that region and by a Gaussian random field (GRF) that is defined across the entire surface of the region. We first select a reference region from a set of segmented brain images of healthy individuals. The GRF is then estimated as the signed Euclidean distances between points on the surface of the reference region and the corresponding points on the corresponding region in images of brains that have been coregistered to the reference. Correspondences between points on these surfaces are defined through deformations of each region of a brain into the coordinate space of the reference region using the principles of fluid dynamics. The warped, coregistered region of each subject is then unwarped into its native space, simultaneously bringing into that space the map of corresponding points that was established when the surfaces of the subject and reference regions were tightly coregistered. The proposed statistical description of the shape of surface contours makes no assumptions, other than smoothness, about the shape of the region or its GRF. The description also allows for the detection and localization of statistically significant differences in the shapes of the surfaces across groups of subjects at both a fine and coarse scale. We demonstrate the effectiveness of these statistical methods by applying them to study differences in shape of the amygdala and hippocampus in a large sample of normal subjects and in subjects with attention deficit/hyperactivity disorder (ADHD). PMID:17243583

  5. The Contribution of CEOP Data to the Understanding and Modeling of Monsoon Systems

    NASA Technical Reports Server (NTRS)

    Lau, William K. M.

    2005-01-01

    CEOP has contributed and will continue to provide integrated data sets from diverse platforms for better understanding of the water and energy cycles, and for validating models. In this talk, I will show examples of how CEOP has contributed to the formulation of a strategy for the study of the monsoon as a system. The CEOP data concept has led to the development of the CEOP Inter-Monsoon Studies (CIMS), which focuses on the identification of model bias, and improvement of model physics such as the diurnal and annual cycles. A multi-model validation project focusing on diurnal variability of the East Asian monsoon, and using CEOP reference site data, as well as CEOP integrated satellite data is now ongoing. Preliminary studies show that climate models have difficulties in simulating the diurnal signals of total rainfall, rainfall intensity and frequency of occurrence, which have different peak hours, depending on locations. Furthermore, the model diurnal cycle of rainfall in monsoon regions tends to lead the observed cycle by about 2-3 hours. These model biases offer insight into the lack of, or poor representation of, key components of the convective and stratiform rainfall. The CEOP data also stimulated studies to compare and contrast monsoon variability in different parts of the world. It was found that seasonal wind reversal, orographic effects, monsoon depressions, meso-scale convective complexes, SST and land surface influences are common features in all monsoon regions. Strong intraseasonal variability is present in all monsoon regions. While there is a clear demarcation of onset, breaks and withdrawal in the Asian and Australian monsoon region associated with climatological intraseasonal variability, it is less clear in the American and African monsoon regions. The examination of satellite and reference site data in monsoon regions has led to preliminary model experiments to study the impact of aerosol on monsoon variability. I will show examples of how the study of the dynamics of aerosol-water cycle interactions in the monsoon region can be best achieved using the CEOP data and modeling strategy.

  6. Multi-institutional Quantitative Evaluation and Clinical Validation of Smart Probabilistic Image Contouring Engine (SPICE) Autosegmentation of Target Structures and Normal Tissues on Computer Tomography Images in the Head and Neck, Thorax, Liver, and Male Pelvis Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Mingyao; Bzdusek, Karl; Brink, Carsten

    2013-11-15

    Purpose: Clinical validation and quantitative evaluation of computed tomography (CT) image autosegmentation using Smart Probabilistic Image Contouring Engine (SPICE). Methods and Materials: CT images of 125 treated patients (32 head and neck [HN], 40 thorax, 23 liver, and 30 prostate) in 7 independent institutions were autosegmented using SPICE and computational times were recorded. The number of structures autocontoured was 25 for the HN, 7 for the thorax, 3 for the liver, and 6 for the male pelvis regions. Using the clinical contours as reference, autocontours of 22 selected structures were quantitatively evaluated using Dice Similarity Coefficient (DSC) and Mean Slice-wise Hausdorff Distance (MSHD). All 40 autocontours were evaluated by a radiation oncologist from the institution that treated the patients. Results: The mean computational times to autosegment all the structures using SPICE were 3.1 to 11.1 minutes per patient. For the HN region, the mean DSC was >0.70 for all evaluated structures, and the MSHD ranged from 3.2 to 10.0 mm. For the thorax region, the mean DSC was 0.95 for the lungs and 0.90 for the heart, and the MSHD ranged from 2.8 to 12.8 mm. For the liver region, the mean DSC was >0.92 for all structures, and the MSHD ranged from 5.2 to 15.9 mm. For the male pelvis region, the mean DSC was >0.76 for all structures, and the MSHD ranged from 4.8 to 10.5 mm. Of the 40 autocontoured structures reviewed by experts, 25 were scored useful as autocontoured or with minor edits for at least 90% of the patients and 33 were scored useful as autocontoured or with minor edits for at least 80% of the patients. Conclusions: Compared with manual contouring, autosegmentation using SPICE for the HN, thorax, liver, and male pelvis regions is efficient and shows significant promise for clinical utility.
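
    The Dice Similarity Coefficient used above is a simple overlap ratio between the autocontour and the clinical (reference) contour. A minimal sketch with hypothetical binary masks standing in for one CT structure:

        import numpy as np

        def dice_coefficient(auto_mask, manual_mask):
            """DSC = 2 |A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
            a = np.asarray(auto_mask, dtype=bool)
            b = np.asarray(manual_mask, dtype=bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        # Hypothetical 5x5 masks standing in for a single slice
        auto = np.zeros((5, 5), dtype=bool); auto[1:4, 1:4] = True
        manual = np.zeros((5, 5), dtype=bool); manual[2:5, 1:4] = True
        print(round(dice_coefficient(auto, manual), 3))  # 0.667 for this toy overlap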

  7. Emission Computed Tomography: A New Technique for the Quantitative Physiologic Study of Brain and Heart in Vivo

    DOE R&D Accomplishments Database

    Phelps, M. E.; Hoffman, E. J.; Huang, S. C.; Schelbert, H. R.; Kuhl, D. E.

    1978-01-01

    Emission computed tomography can provide a quantitative in vivo measurement of regional tissue radionuclide tracer concentrations. This facility, when combined with physiologic models and radioactively labeled physiologic tracers that behave in a predictable manner, allows measurement of a wide variety of physiologic variables. This integrated technique has been referred to as Physiologic Tomography (PT). PT requires labeled compounds which trace physiologic processes in a known and predictable manner, and physiologic models which are appropriately formulated and validated to derive physiologic variables from ECT data. In order to effectively achieve this goal, PT requires an ECT system that is capable of performing truly quantitative or analytical measurements of tissue tracer concentrations and which has been well characterized in terms of spatial resolution, sensitivity and signal-to-noise ratios in the tomographic image. This paper illustrates the capabilities of emission computed tomography and provides examples of physiologic tomography for the regional measurement of cerebral and myocardial metabolic rate for glucose, regional measurement of cerebral blood volume, gated cardiac blood pools and capillary perfusion in brain and heart. Studies on patients with stroke and myocardial ischemia are also presented.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucourt, Maximilian de, E-mail: mdb@charite.de; Streitparth, Florian, E-mail: florian.streitparth@charite.de; Collettini, Federico

    Purpose: To evaluate the feasibility of minimally invasive magnetic resonance imaging (MRI)-guided free-hand aspiration of symptomatic nerve root-compressing lumbosacral cysts in a 1.0-Tesla (T) open MRI system using a tailored interactive sequence. Materials and Methods: Eleven patients with MRI-evident symptomatic cysts in the lumbosacral region and possible nerve root-compressing character were referred to a 1.0-T open MRI system. For MRI interventional cyst aspiration, an interactive sequence was used, allowing for near real-time position validation of the needle in any desired three-dimensional plane. Results: Seven of 11 cysts in the lumbosacral region were successfully aspirated (average 10.1 mm [SD ± 1.9]). After successful cyst aspiration, each patient reported speedy relief of initial symptoms. Average cyst size was 9.6 mm (±2.6 mm). Four cysts (8.8 ± 3.8 mm) could not be aspirated. Conclusion: Open MRI systems with tailored interactive sequences have great potential for cyst aspiration in the lumbosacral region. The authors perceive major advantages of the MR-guided cyst aspiration in its minimally invasive character compared to direct and open surgical options, along with consequently less trauma, less stress, and fewer side-effects for the patient.

  9. Development of a benthic multimetric index for the Serra da Bocaina bioregion in Southeast Brazil.

    PubMed

    Baptista, D F; Henriques-Oliveira, A L; Oliveira, R B S; Mugnai, R; Nessimian, J L; Buss, D F

    2013-08-01

    Brazil faces a challenge to develop biomonitoring tools to be used in water quality assessment programs, but few multimetric indices have been developed so far. This study is part of an effort to test and implement programs using benthic macroinvertebrates as bioindicators in Rio de Janeiro State. Our aim was first to test the Multimetric Index for Serra dos Órgãos (SOMI) for a different area, Serra da Bocaina (SB), in the same ecoregion. We sampled 27 streams of different sizes and altitudes in the SB region. Despite the environmental similarities, results indicated biological differences between reference sites of the two regions. Considering these differences, we decided to develop an index specific for the SB region, the Serra da Bocaina Multimetric Index (MISB). We tested twenty-two metrics for sensitivity to impairment and redundancy, and six metrics were considered valid to integrate the MISB: Family Richness, Trichoptera Richness, % Coleoptera, % Diptera, IBE-IOC index, EPT/Chironomidae ratio. A test of the MISB in eleven sites indicated it was more related to land-use and water physico-chemical parameters than to altitude or stream width, being a useful tool for the monitoring and assessment of streams in the bioregion.

  10. Global Core Plasma Model

    NASA Technical Reports Server (NTRS)

    Gallagher, Dennis L.; Craven, Paul D.; Comfort, Richard H.

    1999-01-01

    Over 40 years of ground and spacecraft plasmaspheric measurements have resulted in many statistical descriptions of plasmaspheric properties. In some cases, these properties have been represented as analytical descriptions that are valid for specific regions or conditions. For the most part, what has not been done is to extend regional empirical descriptions or models to the plasmasphere as a whole. In contrast, many related investigations depend on the use of representative plasmaspheric conditions throughout the inner magnetosphere. Wave propagation, involving the transport of energy through the magnetosphere, is strongly affected by thermal plasma density and its composition. Ring current collisional and wave particle losses also strongly depend on these quantities. The plasmasphere also plays a secondary role in influencing radio signals from the Global Positioning System satellites. The Global Core Plasma Model (GCPM) is an attempt to assimilate previous empirical evidence and regional models for plasmaspheric density into a continuous, smooth model of thermal plasma density in the inner magnetosphere. In that spirit, the International Reference Ionosphere is currently used to complete the low altitude description of density and composition in the model. The models and measurements on which the GCPM is currently based and its relationship to IRI will be discussed.

  11. Final report on SIM Regional Key Comparison SIM.L-K1.2007: Calibration of gauge blocks by optical interferometry

    NASA Astrophysics Data System (ADS)

    Colín, C.; Viliesid, M.; Chaudhary, K. P.; Decker, J.; Dvorácek, F.; Franca, R.; Ilieff, S.; Rodríguez, J.; Stoup, J.

    2012-01-01

    This Key Comparison of gauge blocks (GB) calibration by optical interferometry was carried out to support the Calibration and Measurement Capability (CMC) of the National Measurement Institutes (NMI) from the SIM Region for this specific service and for those that rely on this kind of technique as required by the Mutual Recognition Arrangement (MRA). It provides evidence of the participants' technical competence and supports the uncertainties they state in their CMC. It is a Regional Key Comparison and should be linked to the upper level corresponding comparison CCL-K1. The comparison had nine participants, five from the SIM Region (NRC-CNRC, Canada; NIST, USA; CENAM, Mexico; INMETRO, Brazil; and INTI, Argentina) and four from other regions (CMI, Czech Rep.; CEM, Spain; NPLI, India; and NMISA, South Africa). It included the circulation of fourteen GB: seven steel GB and seven ceramic GB. The circulation of the artifacts started on 2007-11-01 and ended on 2010-04-25. Some additional time was required to publish the results as the same artifacts were used thereafter for comparison SIM.L-S6, Calibration of GB by mechanical comparison, and the results could not be disclosed until the participants of the second circulation loop had sent their results. The final report of this comparison was sent out for review in May 2012 and the final version was approved in August 2012. The behavior of the artifacts throughout the circulation was good and therefore the results obtained were judged technically valid. The reference value was taken as the arithmetic mean of the largest subset of consistent results. Most of the participants obtained results in good agreement with the reference values, with a few exceptions mentioned in the report. The corresponding NMIs are responsible for identifying the causes and taking corrective action. This makes the present comparison exercise valid to support the CMC claims of the participants in GB calibration by optical interferometry. The main text appears in Appendix B of the BIPM key comparison database, kcdb.bipm.org. The final report has been peer-reviewed and approved for publication by the CCL, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  12. [Design and validation of a questionnaire for psychosocial nursing diagnosis in Primary Care].

    PubMed

    Brito-Brito, Pedro Ruymán; Rodríguez-Álvarez, Cristobalina; Sierra-López, Antonio; Rodríguez-Gómez, José Ángel; Aguirre-Jaime, Armando

    2012-01-01

    To develop a valid, reliable and easy-to-use questionnaire for a psychosocial nursing diagnosis. The study was performed in two phases: first phase, questionnaire design and construction; second phase, validity and reliability tests. A bank of items was constructed using the NANDA classification as a theoretical framework. Each item was assigned a Likert scale or dichotomous response. The combination of responses to the items constituted the diagnostic rules to assign up to 28 labels. A group of experts carried out the validity test for content. Other validated scales were used as reference standards for the criterion validity tests. Forty-five nurses provided the questionnaire to the patients on three separate occasions over a period of three weeks, and the other validated scales only once to 188 randomly selected patients in Primary Care centres in Tenerife (Spain). Validity tests for construct confirmed the six dimensions of the questionnaire with 91% of total variance explained. Validity tests for criterion showed a specificity of 66%-100%, and showed high correlations with the reference scales when the questionnaire was assigning nursing diagnoses. Reliability tests showed agreement of 56%-91% (P<.001), and a 93% internal consistency. The Questionnaire for Psychosocial Nursing Diagnosis was called CdePS, and included 61 items. The CdePS is a valid, reliable and easy-to-use tool in Primary Care centres to improve the assigning of a psychosocial nursing diagnosis. Copyright © 2011 Elsevier España, S.L. All rights reserved.

  13. Behavior of variable V3 region from 16S rDNA of lactic acid bacteria in denaturing gradient gel electrophoresis.

    PubMed

    Ercolini, D; Moschetti, G; Blaiotta, G; Coppola, S

    2001-03-01

    Separation of amplified V3 region from 16S rDNA by denaturing gradient gel electrophoresis (DGGE) was tested as a tool for differentiation of lactic acid bacteria commonly isolated from food. Variable V3 regions of 21 reference strains and 34 wild strains referred to species belonging to the genera Pediococcus, Enterococcus, Lactococcus, Lactobacillus, Leuconostoc, Weissella, and Streptococcus were analyzed. DGGE profiles obtained were species-specific for most of the cultures tested. Moreover, it was possible to group the remaining LAB reference strains according to the migration of their 16S V3 region in the denaturing gel. The results are discussed with reference to their potential in the analysis of LAB communities in food, besides shedding light on taxonomic aspects.

  14. Poor symptom and performance validity in regularly referred Hospital outpatients: Link with standard clinical measures, and role of incentives.

    PubMed

    Dandachi-FitzGerald, Brechje; van Twillert, Björn; van de Sande, Peter; van Os, Yindee; Ponds, Rudolf W H M

    2016-05-30

    We investigated the frequency of symptom validity test (SVT) failure and its clinical correlates in a large, heterogeneous sample of hospital outpatients referred for psychological assessment for clinical purposes. We studied patients (N=469), who were regularly referred for assessment to the psychology departments of five hospitals. Background characteristics, including information about incentives, were obtained with a checklist completed by the clinician. As a measure of over-reporting, the Structured Inventory of Malingered Symptomatology (SIMS) was administered to all patients. The Amsterdam Short-Term Memory test (ASTM), a cognitive underperformance measure, was only administered to patients who were referred for a neuropsychological assessment. Symptom over-reporting occurred in a minority of patients, ranging from 12% to 19% in the main diagnostic patient groups. Patients with morbid obesity had a low rate of over-reporting (1%). The SIMS was positively associated with levels of self-reported psychological symptoms. Cognitive underperformance occurred in 29.3% of the neuropsychological assessments. The ASTM was negatively associated with memory test performance. We found no association between SVT failure and financial incentives. Our results support the recommendation to routinely evaluate symptom validity in clinical assessments of hospital patients. The dynamics behind invalid symptom reporting need to be further elucidated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Validity of the Acti4 software using ActiGraph GT3X+ accelerometer for recording of arm and upper body inclination in simulated work tasks.

    PubMed

    Korshøj, Mette; Skotte, Jørgen H; Christiansen, Caroline S; Mortensen, Pelle; Kristiansen, Jesper; Hanisch, Christiana; Ingebrigtsen, Jørgen; Holtermann, Andreas

    2014-01-01

    The validity of inclinometer measurements by the ActiGraph GT3X+ (AG) accelerometer, when analysed with the Acti4 customised software, was examined by comparison of inclinometer measurements with a reference system (TrakStar) in a protocol with standardised arm movements and simulated working tasks. The sensors were placed at the upper arm (distal to the deltoid insertion) and at the spine (level of T1-T2) on eight participants. Root mean square error (RMSE) values of inclination between the two systems were low for the slow- and medium-speed standardised arm movements and in simulated working tasks. Fast arm movements caused the inclination estimated by the AG to deviate from the reference measurements (RMSE values up to ∼10°). Furthermore, it was found that the AG positioned at the upper arm provided inclination data without bias compared to the reference system. These findings indicate that the AG provides valid estimates of arm and upper body inclination in working participants. Being inexpensive, small, water-resistant and without wires, ActiGraph GT3X+ seems to be a valid means for direct long-term field measurements of arm and trunk inclinations when analysed by the Acti4 customised software.

  16. Development and validation of a method for mercury determination in seawater for the process control of a candidate certified reference material.

    PubMed

    Sánchez, Raquel; Snell, James; Held, Andrea; Emons, Hendrik

    2015-08-01

    A simple, robust and reliable method for mercury determination in seawater matrices based on the combination of cold vapour generation and inductively coupled plasma mass spectrometry (CV-ICP-MS) and its complete in-house validation are described. The method validation covers parameters such as linearity, limit of detection (LOD), limit of quantification (LOQ), trueness, repeatability, intermediate precision and robustness. A calibration curve covering the whole working range was achieved with coefficients of determination typically higher than 0.9992. The repeatability of the method (RSDrep) was 0.5 %, and the intermediate precision was 2.3 % at the target mass fraction of 20 ng/kg. Moreover, the method was robust with respect to the salinity of the seawater. The limit of quantification was 2.7 ng/kg, which corresponds to 13.5 % of the target mass fraction in the future certified reference material (20 ng/kg). An uncertainty budget for the measurement of mercury in seawater has been established. The relative expanded (k = 2) combined uncertainty is 6 %. The performance of the validated method was demonstrated by generating results for process control and a homogeneity study for the production of a candidate certified reference material.
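
    The expanded uncertainty quoted above follows the usual GUM-style combination of the individual contributions; schematically, and assuming uncorrelated components u_i (which are not itemized in this record):

        u_c = \sqrt{\textstyle\sum_i u_i^2}, \qquad U = k \cdot u_c \quad (k = 2),

    giving a relative expanded uncertainty of about 6 % at the target mass fraction of 20 ng/kg.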

  17. Discovering transcription factor binding sites in highly repetitive regions of genomes with multi-read analysis of ChIP-Seq data.

    PubMed

    Chung, Dongjun; Kuan, Pei Fen; Li, Bo; Sanalkumar, Rajendran; Liang, Kun; Bresnick, Emery H; Dewey, Colin; Keleş, Sündüz

    2011-07-01

    Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) is rapidly replacing chromatin immunoprecipitation combined with genome-wide tiling array analysis (ChIP-chip) as the preferred approach for mapping transcription-factor binding sites and chromatin modifications. The state of the art for analyzing ChIP-seq data relies on using only reads that map uniquely to a relevant reference genome (uni-reads). This can lead to the omission of up to 30% of alignable reads. We describe a general approach for utilizing reads that map to multiple locations on the reference genome (multi-reads). Our approach is based on allocating multi-reads as fractional counts using a weighted alignment scheme. Using human STAT1 and mouse GATA1 ChIP-seq datasets, we illustrate that incorporation of multi-reads significantly increases sequencing depths, leads to detection of novel peaks that are not otherwise identifiable with uni-reads, and improves detection of peaks in mappable regions. We investigate various genome-wide characteristics of peaks detected only by utilization of multi-reads via computational experiments. Overall, peaks from multi-read analysis have similar characteristics to peaks that are identified by uni-reads except that the majority of them reside in segmental duplications. We further validate a number of GATA1 multi-read only peaks by independent quantitative real-time ChIP analysis and identify novel target genes of GATA1. These computational and experimental results establish that multi-reads can be of critical importance for studying transcription factor binding in highly repetitive regions of genomes with ChIP-seq experiments.
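
    The weighted-allocation idea can be illustrated with a much-simplified sketch in which each multi-read is split into fractional counts across its candidate positions in proportion to nearby uni-read coverage. The actual method in the paper is more elaborate; the function and data below are hypothetical:

        from collections import defaultdict

        def allocate_multi_reads(uni_counts, multi_alignments, window=300):
            """Fractionally allocate multi-reads across their candidate positions.

            uni_counts:       dict position -> number of uniquely mapping reads
            multi_alignments: list of candidate-position lists, one per multi-read
            Each multi-read adds a fractional count to every candidate position,
            proportional to uni-read coverage within `window` bp (uniform if no
            uni-reads fall near any candidate).
            """
            counts = defaultdict(float, uni_counts)
            for candidates in multi_alignments:
                support = [sum(c for p, c in uni_counts.items() if abs(p - pos) <= window)
                           for pos in candidates]
                total = sum(support)
                weights = ([1.0 / len(candidates)] * len(candidates) if total == 0
                           else [s / total for s in support])
                for pos, w in zip(candidates, weights):
                    counts[pos] += w
            return dict(counts)

        # Hypothetical toy data: uni-reads pile up near position 1000
        uni = {980: 5, 1010: 7, 5000: 1}
        multi = [[1000, 5020], [1000, 9000]]
        print(allocate_multi_reads(uni, multi))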

  18. Reference-free spectroscopic determination of fat and protein in milk in the visible and near infrared region below 1000nm using spatially resolved diffuse reflectance fiber probe.

    PubMed

    Bogomolov, Andrey; Belikova, Valeria; Galyanin, Vladislav; Melenteva, Anastasiia; Meyer, Hans

    2017-05-15

    A new technique for diffuse reflectance spectroscopic analysis of milk fat and total protein content in the visible (Vis) and adjacent near infrared (NIR) region (400-995 nm) has been developed and tested. Sample analysis was performed through a probe having eight 200-µm fiber channels forming a linear array. One of the end fibers was used for the illumination and the other seven for the spectroscopic detection of diffusely reflected light. One of the detection channels was used as a reference to normalize the spectra and to convert them into absorbance-equivalent units. The method has been tested experimentally using a designed sample set prepared from industrial raw milk standards with widely varying fat and protein content. To increase the modelling robustness, all milk samples were measured at three different homogenization degrees. Comprehensive data analysis has shown the advantage of combining both spectral and spatial resolution in the same measurement and revealed the most relevant channels and wavelength regions. The modelling accuracy was further improved using a joint variable selection and preprocessing optimization method based on the genetic algorithm. The root mean-square errors of different validation methods were below 0.10% for fat and below 0.08% for total protein content. Based on the present experimental data, it was computationally shown that the full-spectrum analysis in this method can be replaced by a sensor measurement at several specific wavelengths, for instance, using light-emitting diodes (LEDs) for illumination. Two optimal sensor configurations have been suggested: with nine LEDs for the analysis of fat and seven for protein content. Both simulated sensors exhibit nearly the same component determination accuracy as the corresponding full-spectrum analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
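
    The record does not give the exact normalization formula; a common way to turn detected channel intensities into absorbance-equivalent units against a designated reference channel is the simple log ratio below (all numbers hypothetical):

        import numpy as np

        def absorbance_equivalent(intensity, reference_intensity):
            """Absorbance-equivalent spectra from channel intensities.

            intensity:           array (n_channels, n_wavelengths) of detected light
            reference_intensity: array (n_wavelengths,) from the reference channel
            """
            return -np.log10(intensity / reference_intensity)

        # Hypothetical two detection channels over three wavelengths
        channels = np.array([[0.80, 0.60, 0.40],
                             [0.50, 0.30, 0.20]])
        reference = np.array([1.00, 0.90, 0.80])
        print(absorbance_equivalent(channels, reference).round(3))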

  19. Spectral analysis of near-wall turbulence in channel flow at Reτ=4200 with emphasis on the attached-eddy hypothesis

    NASA Astrophysics Data System (ADS)

    Agostini, Lionel; Leschziner, Michael

    2017-01-01

    Direct numerical simulation data for channel flow at a friction Reynolds number of 4200, generated by Lozano-Durán and Jiménez [J. Fluid Mech. 759, 432 (2014), 10.1017/jfm.2014.575], are used to examine the properties of near-wall turbulence within subranges of eddy-length scale. Attention is primarily focused on the intermediate layer (mesolayer) covering the logarithmic velocity region within the range of wall-scaled wall-normal distance of 80-1500. The examination is based on a number of statistical properties, including premultiplied and compensated spectra, the premultiplied derivative of the second-order structure function, and three scalar parameters that characterize the anisotropic or isotropic state of the various length-scale subranges. This analysis leads to the delineation of three regions within the map of wall-normal-wise premultiplied spectra, each characterized by distinct turbulence properties. A question of particular interest is whether the Townsend-Perry attached-eddy hypothesis (AEH) can be shown to be valid across the entire mesolayer, in contrast to the usual focus on the outer portion of the logarithmic-velocity layer at high Reynolds numbers, which is populated with very-large-scale motions. This question is addressed by reference to properties in the premultiplied scalewise derivative of the second-order structure function (PMDS2) and joint probability density functions of streamwise-velocity fluctuations and their streamwise and spanwise derivatives. This examination provides evidence, based primarily on the existence of a plateau region in the PMDS2, for the qualified validity of the AEH right down to the lower limit of the logarithmic velocity range.

  20. Modelization of highly nonlinear waves in coastal regions

    NASA Astrophysics Data System (ADS)

    Gouin, Maïté; Ducrozet, Guillaume; Ferrant, Pierre

    2015-04-01

    The proposed work deals with the development of a highly non-linear model for water wave propagation in coastal regions. The accurate modelization of surface gravity waves is of major interest in ocean engineering, especially in the field of marine renewable energy. These marine structures are intended to be settled in coastal regions where the effect of variable bathymetry may be significant on local wave conditions. This study presents a numerical model for wave propagation over complex bathymetry. It is based on the High-Order Spectral (HOS) method, initially limited to the propagation of non-linear wave fields over flat bottom. Such a model has been developed and validated at the LHEEA Lab. (Ecole Centrale Nantes) over the past few years and the current developments will enlarge its application range. This new numerical model will keep the interesting numerical properties of the original pseudo-spectral approach (convergence, efficiency with the use of FFTs, …) and enable the propagation of highly non-linear wave fields over long times and large distances. Different validations will be provided in addition to the presentation of the method. First, Bragg reflection will be studied with the proposed approach. If the Bragg condition is satisfied, the reflected wave generated by a sinusoidal bottom patch should be amplified as a result of resonant quadratic interactions between the incident wave and the bottom. Comparisons will be provided with experiments and reference solutions. Then, the method will be used to consider the transformation of a non-linear monochromatic wave as it propagates up and over a submerged bar. As the wave travels up the front slope of the bar, it steepens and higher harmonics are generated due to non-linear interactions. Comparisons with experimental data will be provided. The different test cases will assess the accuracy and efficiency of the method proposed.

  1. Implementation of a Regional Virtual Tumor Board: A Prospective Study Evaluating Feasibility and Provider Acceptance

    PubMed Central

    Marshall, Christy L.; Petersen, Nancy J.; Naik, Aanand D.; Velde, Nancy Vander; Artinyan, Avo; Albo, Daniel; Berger, David H.

    2014-01-01

    Abstract Background: Tumor board (TB) conferences facilitate multidisciplinary cancer care and are associated with overall improved outcomes. Because of shortages of the oncology workforce and limited access to TB conferences, multidisciplinary care is not available at every institution. This pilot study assessed the feasibility and acceptance of using telemedicine to implement a virtual TB (VTB) program within a regional healthcare network. Materials and Methods: The VTB program was implemented through videoconference technology and electronic medical records between the Houston (TX) Veterans Affairs Medical Center (VAMC) (referral center) and the New Orleans (LA) VAMC (referring center). Feasibility was assessed as the proportion of completed VTB encounters, rate of technological failures/mishaps, and presentation duration. Validated surveys for confidence and satisfaction were administered to 36 TB participants to assess acceptance (1–5 point Likert scale). Secondary outcomes included preliminary data on VTB utilization and its effectiveness in providing access to quality cancer care within the region. Results: Ninety TB case presentations occurred during the study period, of which 14 (15%) were VTB cases. Although one VTB encounter had a technical mishap during presentation, all scheduled encounters were completed (100% completion rate). Case presentations took longer for VTB than for regular TB cases (p=0.0004). However, VTB was highly accepted with mean scores for satisfaction and confidence of 4.6. Utilization rate of VTB was 75%, and its effectiveness was equivalent to that observed for non-VTB cases. Conclusions: Implementation of VTB is feasible and highly accepted by its participants. Future studies should focus on widespread implementation and validating the effectiveness of this model. PMID:24845366

  2. Identification and validation of reference genes for quantitative real-time PCR studies in long yellow daylily, Hemerocallis citrina Borani

    USDA-ARS?s Scientific Manuscript database

    Gene expression analysis requires the use of reference genes in the target species. The long yellow daylily is rich in beneficial secondary metabolites and is considered as a functional vegetable. It is widely cultivated and consumed in East Asia. However, reference genes for use in RT-qPCR in this ...

  3. The characterization and certification of a quantitative reference material for Legionella detection and quantification by qPCR.

    PubMed

    Baume, M; Garrelly, L; Facon, J P; Bouton, S; Fraisse, P O; Yardin, C; Reyrolle, M; Jarraud, S

    2013-06-01

    The characterization and certification of a Legionella DNA quantitative reference material as a primary measurement standard for Legionella qPCR. Twelve laboratories participated in a collaborative certification campaign. A candidate reference DNA material was analysed through PCR-based limiting dilution assays (LDAs). The validated data were used to statistically assign both a reference value and an associated uncertainty to the reference material. This LDA method allowed for the direct quantification of the amount of Legionella DNA per tube in genomic units (GU) and the determination of the associated uncertainties. This method could be used for the certification of all types of microbiological standards for qPCR. The use of this primary standard will improve the accuracy of Legionella qPCR measurements and the overall consistency of these measurements among different laboratories. The extensive use of this certified reference material (CRM) has been integrated in the French standard NF T90-471 (April 2010) and in the ISO Technical Specification 12 869 (Anon 2012 International Standardisation Organisation) for validating qPCR methods and ensuring the reliability of these methods. © 2013 The Society for Applied Microbiology.
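
    The record does not detail the LDA calculation itself; limiting dilution assays are commonly evaluated with a single-hit Poisson model, sketched below with hypothetical numbers (this is an illustration of the general technique, not the certified procedure):

        import math

        def gu_per_tube(n_negative, n_replicates, dilution_factor):
            """Single-hit Poisson estimate from a limiting dilution assay.

            If a fraction p0 = n_negative / n_replicates of replicate PCRs at a
            given dilution shows no amplification, the mean copy number per
            reaction is -ln(p0); scaling by the dilution factor gives the
            estimated genomic units (GU) in the undiluted tube.
            """
            p0 = n_negative / n_replicates
            if p0 <= 0 or p0 >= 1:
                raise ValueError("need some, but not all, negative replicates")
            return -math.log(p0) * dilution_factor

        # Hypothetical run: 12 of 32 replicates negative at a 1:1000 dilution
        print(round(gu_per_tube(12, 32, 1000)))  # about 981 GU per tube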

  4. Transcriptome-wide selection of a reliable set of reference genes for gene expression studies in potato cyst nematodes (Globodera spp.).

    PubMed

    Sabeh, Michael; Duceppe, Marc-Olivier; St-Arnaud, Marc; Mimee, Benjamin

    2018-01-01

    Relative gene expression analyses by qRT-PCR (quantitative reverse transcription PCR) require an internal control to normalize the expression data of genes of interest and eliminate the unwanted variation introduced by sample preparation. A perfect reference gene should have a constant expression level under all the experimental conditions. However, the same few housekeeping genes selected from the literature or successfully used in previous unrelated experiments are often routinely used in new conditions without proper validation of their stability across treatments. The advent of RNA-Seq and the availability of public datasets for numerous organisms are opening the way to finding better reference genes for expression studies. Globodera rostochiensis is a plant-parasitic nematode that is particularly yield-limiting for potato. The aim of our study was to identify a reliable set of reference genes to study G. rostochiensis gene expression. Gene expression levels from an RNA-Seq database were used to identify putative reference genes and were validated with qRT-PCR analysis. Three genes, GR, PMP-3, and aaRS, were found to be very stable within the experimental conditions of this study and are proposed as reference genes for future work.
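
    Once stable reference genes such as GR, PMP-3 and aaRS have been selected, relative expression is typically obtained by normalizing the target gene's Ct values against them. A minimal sketch of the widely used 2^-ΔΔCt calculation (the Ct values are hypothetical and assume roughly 100% amplification efficiency):

        def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
            """Relative expression of a target gene by the 2^-ΔΔCt method.

            ct_target, ct_ref:           Ct values in the test sample
            ct_target_ctrl, ct_ref_ctrl: Ct values in the calibrator (control) sample
            """
            delta_ct_sample = ct_target - ct_ref
            delta_ct_control = ct_target_ctrl - ct_ref_ctrl
            return 2.0 ** -(delta_ct_sample - delta_ct_control)

        # Hypothetical Ct values: target induced about 4-fold relative to the reference gene
        print(round(relative_expression(22.0, 18.0, 24.0, 18.0), 2))  # 4.0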

  5. [The requirements of standard and conditions of interchangeability of medical articles].

    PubMed

    Men'shikov, V V; Lukicheva, T I

    2013-11-01

    The article considers approaches to evaluating the interchangeability of medical articles used for laboratory analysis. In developing standardized analytical technologies for laboratory medicine and formulating standard requirements addressed to manufacturers of medical articles, clinically validated requirements are to be followed. These requirements include the sensitivity and specificity of techniques, the accuracy and precision of research results, and the stability of reagent quality under the particular conditions of their transportation and storage. The validity of requirements formulated in standards and addressed to manufacturers of medical articles can be demonstrated using a reference system, which includes master forms and standard samples, reference techniques and reference laboratories. This approach is supported by data from the evaluation of testing systems for measurement of thyrotropic hormone, thyroid hormones and glycated hemoglobin HbA1c. Versions of testing systems can be considered interchangeable only if their results correspond to, and are comparable with, the results of the reference technique. In the absence of a functioning reference system, the resources of the Joint Committee for Traceability in Laboratory Medicine make it possible for manufacturers of reagent kits to apply certified reference materials when developing kits for a large list of analytes.

  6. Incremental Validity of WISC-IV[superscript UK] Factor Index Scores with a Referred Irish Sample: Predicting Performance on the WIAT-II[superscript UK

    ERIC Educational Resources Information Center

    Canivez, Gary L.; Watkins, Marley W.; James, Trevor; Good, Rebecca; James, Kate

    2014-01-01

    Background: Subtest and factor scores have typically provided little incremental predictive validity beyond the omnibus IQ score. Aims: This study examined the incremental validity of Wechsler Intelligence Scale for Children-Fourth UK Edition (WISC-IV[superscript UK]; Wechsler, 2004a, "Wechsler Intelligence Scale for Children-Fourth UK…

  7. Challenges in Implementing National Systems of Competency Validation with Regard to Adult Learning Professionals: Perspectives from Romania and India

    ERIC Educational Resources Information Center

    Sava, Simona Lidia; Shah, S. Y.

    2015-01-01

    Validation of prior learning (VPL), also referred to as recognition, validation and accreditation of prior learning (RVA), is becoming an increasingly important political issue at both European and international levels. In 2012, the European Council, the UNESCO Institute for Lifelong Learning (UIL) and the Organisation for Economic Co-operation…

  8. Constructing a Validity Argument for the Objective Structured Assessment of Technical Skills (OSATS): A Systematic Review of Validity Evidence

    ERIC Educational Resources Information Center

    Hatala, Rose; Cook, David A.; Brydges, Ryan; Hawkins, Richard

    2015-01-01

    In order to construct and evaluate the validity argument for the Objective Structured Assessment of Technical Skills (OSATS), based on Kane's framework, we conducted a systematic review. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, Scopus, and selected reference lists through February 2013. Working in duplicate, we selected…

  9. Londrina Activities of Daily Living Protocol: Reproducibility, Validity, and Reference Values in Physically Independent Adults Age 50 Years and Older.

    PubMed

    Paes, Thaís; Belo, Letícia Fernandes; da Silva, Diego Rodrigues; Morita, Andrea Akemi; Donária, Leila; Furlanetto, Karina Couto; Sant'Anna, Thaís; Pitta, Fabio; Hernandes, Nidia Aparecida

    2017-03-01

    It is important to assess activities of daily living (ADL) in older adults due to impairment of independence and quality of life. However, there is no objective and standardized protocol available to assess this outcome. Thus, the aim of this study was to verify the reproducibility and validity of a new protocol for ADL assessment applied in physically independent adults age ≥50 y, the Londrina ADL protocol, and to establish an equation to predict reference values of the Londrina ADL protocol. Ninety-three physically independent adults age ≥50 y had their performance in ADL evaluated by registering the time spent to conclude the protocol. The protocol was performed twice. The 6-min walk test, which assesses functional exercise capacity, was used as a validation criterion. A multiple linear regression model was applied, including anthropometric and demographic variables that correlated with the protocol, to establish an equation to predict the protocol's reference values. In general, the protocol was reproducible (intraclass correlation coefficient 0.91). The average difference between the first and second protocol was 5.3%. The new protocol was valid to assess ADL performance in the studied subjects, presenting a moderate correlation with the 6-min walk test (r = -0.53). The time spent to perform the protocol correlated significantly with age (r = 0.45) but neither with weight (r = -0.17) nor with height (r = -0.17). A model of stepwise multiple regression including sex and age showed that age was the only determinant factor of the Londrina ADL protocol, explaining 21% (P < .001) of its variability. The derived reference equation was: Londrina ADL protocol predicted (s) = 135.618 + (3.102 × age [y]). The Londrina ADL protocol was reproducible and valid in physically independent adults age ≥50 y. A reference equation for the protocol was established including only age as an independent variable (r² = 0.21), allowing a better interpretation of the protocol's results in clinical practice. Copyright © 2017 by Daedalus Enterprises.
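
    As an illustration only (not code from the article), the reported reference equation can be applied directly; the Python sketch below uses the coefficients quoted above, while the function names and the example age and time are hypothetical.

    def londrina_adl_predicted_seconds(age_years: float) -> float:
        # Reference equation quoted in the abstract: 135.618 + (3.102 x age [y])
        return 135.618 + 3.102 * age_years

    def percent_of_predicted(measured_seconds: float, age_years: float) -> float:
        # Express a measured protocol time as a percentage of the age-predicted value
        return 100.0 * measured_seconds / londrina_adl_predicted_seconds(age_years)

    # Hypothetical example: a 65-year-old completing the protocol in 380 s
    print(round(londrina_adl_predicted_seconds(65), 1))   # 337.2 s predicted
    print(round(percent_of_predicted(380, 65), 1))        # 112.7 % of predicted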

  10. Determination of the purity of pharmaceutical reference materials by 1H NMR using the standardless PULCON methodology.

    PubMed

    Monakhova, Yulia B; Kohl-Himmelseher, Matthias; Kuballa, Thomas; Lachenmeier, Dirk W

    2014-11-01

    A fast and reliable nuclear magnetic resonance spectroscopic method for quantitative determination (qNMR) of targeted molecules in reference materials has been established using the ERETIC2 methodology (electronic reference to access in vivo concentrations) based on the PULCON principle (pulse length based concentration determination). The developed approach was validated for the analysis of pharmaceutical samples in the context of official medicines control, including ibandronic acid, amantadine, ambroxol and lercanidipine. The PULCON recoveries were above 94.3% and coefficients of variation (CVs) obtained by quantification of different targeted resonances ranged between 0.7% and 2.8%, demonstrating that the qNMR method is a precise tool for rapid quantification (approximately 15 min) of reference materials and medicinal products. Generally, the values were within specification (certified values) provided by the manufacturers. The results were in agreement with NMR quantification using an internal standard and validated reference HPLC analysis. The PULCON method was found to be a practical alternative with competitive precision and accuracy to the classical internal reference method and it proved to be applicable to different solvent conditions. The method can be recommended for routine use in medicines control laboratories, especially when the availability and costs of reference compounds are problematic. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Understanding administrative abdominal aortic aneurysm mortality data.

    PubMed

    Hussey, K; Siddiqui, T; Burton, P; Welch, G H; Stuart, W P

    2015-03-01

    Administrative data in the form of Hospital Episode Statistics (HES) and the Scottish Morbidity Record (SMR) have been used to describe surgical activity. These data have also been used to compare outcomes from different hospitals and regions, and to corroborate data submitted to national audits and registries. The aim of this observational study was to examine the completeness and accuracy of administrative data relating to abdominal aortic aneurysm (AAA) repair. Administrative data (SMR-01 returns) from a single health board relating to AAA repair were requested (September 2007 to August 2012). A complete list of validated procedures; termed the reference data set was compiled from all available sources (clinical and administrative). For each patient episode electronic health records were scrutinised to confirm urgency of admission, diagnosis, and operative repair. The 30-day mortality was recorded. The reference data set was used to systematically validate the SMR-01 returns. The reference data set contained 608 verified procedures. SMR-01 returns identified 2433 episodes of care (1724 patients) in which a discharge diagnosis included AAA. This included 574 operative repairs. There were 34 missing cases (5.6%) from SMR-01 returns; nine of these patients died within 30 days of the index procedure. Omission of these cases made a statistically significant improvement to perceived 30-day mortality (p < .05, chi-square test). If inconsistent SMR-01 data (in terms of ICD-10 and OPCS-4 codes) were excluded only 81.9% of operative repairs were correctly identified and only 30.9% of deaths were captured. The SMR-01 returns contain multiple errors. There also appears to be a systematic bias that reduces apparent 30-day mortality. Using these data alone to describe or compare activity or outcomes must be done with caution. Copyright © 2014 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  12. [Validation of BREV: comparison with reference battery in 173 children with learning disorders].

    PubMed

    Billard, C; Ducot, B; Pinton, F; Coste-Zeitoun, D; Picard, S; Warszawski, J

    2006-01-01

    The BREV battery (Battery for rapid evaluation of cognitive functions) is a tool which can be used for the rapid neuropsychological evaluation of children aged between 4 and 9 years. After standardization (700 unaffected children) and validation by comparison with a reference battery (202 children with epilepsy), the aim of this study was further validation in 173 children with learning disorders. The study protocol included administration of the BREV, precise neuropsychological examination and evaluation of oral and written language. Statistical analysis was used to compare the findings of the BREV with those of the reference method, and the recommendations indicated by the BREV with the final diagnoses, and to define the sensitivity and the specificity of the BREV battery. All the correlations between BREV tests and reference tests were significant. Recommendations after the BREV were in agreement with the conclusions of the reference evaluation in 168/172 children for language, 145/173 for the psychometric evaluation. For only 4 children, the results of the BREV were false negative. Diagnoses corresponded in 168/173 children for oral language, in 102/110 for written language, 166/173 for praxis disorders and 157/173 for intellectual deficit. The most predictive subtests of the BREV and sensitivity and specificity of verbal and non-verbal scores were calculated. The BREV is a reliable examination, in learning disorders, to determine the most complementary investigations both in terms of language disorders and for non-verbal or global learning disabilities.

  13. Middle Atmosphere Program. Handbook for MAP. Volume 16: Atmospheric Structure and Its Variation in the Region 20 to 120 Km. Draft of a New Reference Middle Atmosphere

    NASA Technical Reports Server (NTRS)

    Labitzke, K. (Editor); Barnett, J. J. (Editor); Edwards, B. (Editor)

    1985-01-01

    A draft of a new reference atmosphere for the region between 20 and 80 km which depends largely on recent satellite experiments covering the globe from 80 deg S to 80 deg N is given. A separate international tropical reference atmosphere is given, as well as reference ozone models for the middle atmosphere.

  14. Validation of two complementary oral-health related quality of life indicators (OIDP and OSS 0-10) in two qualitatively distinct samples of the Spanish population

    PubMed Central

    Montero, J; Bravo, M; Albaladejo, A

    2008-01-01

    Background: Oral health-related quality of life can be assessed positively, by measuring satisfaction with mouth, or negatively, by measuring oral impact on the performance of daily activities. The study objective was to validate two complementary indicators, i.e., the OIDP (Oral Impacts on Daily Performances) and Oral Satisfaction 0–10 Scale (OSS), in two qualitatively different socio-demographic samples of the Spanish adult population, and to analyse the factors affecting both perspectives of well-being. Methods: A cross-sectional study was performed, recruiting a Validation Sample from randomly selected Health Centres in Granada (Spain), representing the general population (n = 253), and a Working Sample (n = 561) randomly selected from active Regional Government staff, i.e., representing the more privileged end of the socio-demographic spectrum of this reference population. All participants were examined according to WHO methodology and completed an in-person interview on their oral impacts and oral satisfaction using the OIDP and OSS 0–10 respectively. The reliability and validity of the two indicators were assessed. An alternative method of describing the causes of oral impacts is presented. Results: The reliability coefficient (Cronbach's alpha) of the OIDP was above the recommended 0.7 threshold in both Validation and Occupational samples (0.79 and 0.71 respectively). Test-retest analysis confirmed the external reliability of the OSS (Intraclass Correlation Coefficient, 0.89; p < 0.001). Some subjective factors (perceived need for dental treatment, complaints about mouth and intermediate impacts) were strongly associated with both indicators, supporting their construct and criterion validity. The main cause of oral impact was dental pain. Several socio-demographic, behavioural and clinical variables were identified as modulating factors. Conclusion: OIDP and OSS are valid and reliable subjective measures of oral impacts and oral satisfaction, respectively, in an adult Spanish population. Exploring simultaneously these issues may provide useful insights into how satisfaction and impact on well-being are constructed. PMID:19019208

  15. Regionalized rainfall-runoff model to estimate low flow indices

    NASA Astrophysics Data System (ADS)

    Garcia, Florine; Folton, Nathalie; Oudin, Ludovic

    2016-04-01

    Estimating low flow indices is of paramount importance for water resources management and risk assessment. These indices are derived from river discharges measured at gauged stations. However, the lack of observations at ungauged sites makes it necessary to develop methods to estimate these low flow indices from discharges observed in neighboring catchments and from catchment characteristics. Different estimation methods exist. Regression or geostatistical methods performed on the low flow indices are the most common. Another, less common, approach consists in regionalizing rainfall-runoff model parameters, from catchment characteristics or by spatial proximity, to estimate low flow indices from simulated hydrographs. Irstea developed GR2M-LoiEau, a conceptual monthly rainfall-runoff model combined with a regionalized model of snow storage and melt. GR2M-LoiEau relies on only two parameters, which are regionalized and mapped throughout France. This model allows monthly reference low flow indices to be mapped. The input data come from SAFRAN, the distributed mesoscale atmospheric analysis system, which provides daily solid and liquid precipitation and temperature data across the French territory. To fully exploit these data and to estimate daily low flow indices, a new version of GR-LoiEau has been developed at a daily time step. The aim of this work is to develop and regionalize a GR-LoiEau model that can provide daily, monthly or annual estimates of low flow indices while keeping only a few parameters, which is a major advantage for regionalization. This work includes two parts. On the one hand, a daily conceptual rainfall-runoff model is developed with only three parameters in order to simulate daily and monthly low flow indices, mean annual runoff and seasonality. On the other hand, different regionalization methods, based on spatial proximity and similarity, are tested to estimate the model parameters and to simulate low flow indices at ungauged sites. The analysis is carried out on 691 French catchments that are representative of various hydro-meteorological behaviors. The results are validated with a cross-validation procedure and compared with those obtained with GR4J, a conceptual rainfall-runoff model that already provides daily estimates but involves four parameters that cannot easily be regionalized.

  16. Validation of SMAP Root Zone Soil Moisture Estimates with Improved Cosmic-Ray Neutron Probe Observations

    NASA Astrophysics Data System (ADS)

    Babaeian, E.; Tuller, M.; Sadeghi, M.; Franz, T.; Jones, S. B.

    2017-12-01

    Soil Moisture Active Passive (SMAP) soil moisture products are commonly validated based on point-scale reference measurements, despite the exorbitant spatial scale disparity. The difference between the measurement depth of point-scale sensors and the penetration depth of SMAP further complicates evaluation efforts. Cosmic-ray neutron probes (CRNP) with an approximately 500-m radius footprint provide an appealing alternative for SMAP validation. This study is focused on the validation of SMAP level-4 root zone soil moisture products with 9-km spatial resolution based on CRNP observations at twenty U.S. reference sites with climatic conditions ranging from semiarid to humid. The CRNP measurements are often biased by additional hydrogen sources such as surface water, atmospheric vapor, or mineral lattice water, which sometimes yield unrealistic moisture values in excess of the soil water storage capacity. These effects were removed during CRNP data analysis. Comparison of SMAP data with corrected CRNP observations revealed a very high correlation for most of the investigated sites, which opens new avenues for validation of current and future satellite soil moisture products.

  17. Risk-based prioritization method for the classification of groundwater pesticide pollution from agricultural regions.

    PubMed

    Yang, Yu; Lian, Xin-Ying; Jiang, Yong-Hai; Xi, Bei-Dou; He, Xiao-Song

    2017-11-01

    Agricultural regions are a significant source of groundwater pesticide pollution. To ensure that agricultural regions with a significantly high risk of groundwater pesticide contamination are properly managed, a risk-based ranking method related to groundwater pesticide contamination is needed. In the present paper, a risk-based prioritization method for the classification of groundwater pesticide pollution from agricultural regions was established. The method encompasses 3 phases, including indicator selection, characterization, and classification. In the risk ranking index system employed here, 17 indicators involving the physicochemical properties, environmental behavior characteristics, pesticide application methods, and inherent vulnerability of groundwater in the agricultural region were selected. The boundary of each indicator was determined using K-means cluster analysis based on a survey of a typical agricultural region and the physical and chemical properties of 300 typical pesticides. The total risk characterization was calculated by multiplying the risk value of each indicator, which could effectively avoid the subjectivity of index weight calculation and identify the main factors associated with the risk. The results indicated that the risk for groundwater pesticide contamination from agriculture in a region could be ranked into 4 classes from low to high risk. This method was applied to an agricultural region in Jiangsu Province, China, and it showed that this region had a relatively high risk for groundwater contamination from pesticides, and that the pesticide application method was the primary factor contributing to the relatively high risk. The risk ranking method was determined to be feasible, valid, and able to provide reference data related to the risk management of groundwater pesticide pollution from agricultural regions. Integr Environ Assess Manag 2017;13:1052-1059. © 2017 SETAC.
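
    A minimal sketch (assumptions, not the authors' implementation) of the multiplicative aggregation described above: per-indicator risk values are multiplied rather than weighted and summed, and the total is then assigned to one of four classes. The indicator names, scores, and class boundaries below are hypothetical.

    import math

    # Hypothetical per-indicator risk values (1 = low ... 4 = high) for one region
    indicator_risks = {
        "pesticide_water_solubility": 3,
        "soil_half_life": 4,
        "application_method": 4,
        "groundwater_vulnerability": 2,
    }

    # Total risk = product of the indicator risk values (avoids subjective weights)
    total_risk = math.prod(indicator_risks.values())

    # Hypothetical class boundaries, e.g. derived from clustering a survey of regions
    def risk_class(total, boundaries=(10, 50, 150), labels=("low", "moderate", "high", "very high")):
        for bound, label in zip(boundaries, labels):
            if total <= bound:
                return label
        return labels[-1]

    print(total_risk, risk_class(total_risk))   # 96 high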

  18. Breast Reference Set Application: Chris Li-FHCRC (2015) — EDRN Public Portal

    Cancer.gov

    We propose to evaluate nine candidate biomarkers for ER+ breast cancer in samples from the EDRN Breast Cancer Reference Set. These biomarkers have been preliminarily validated in preclinical samples. The intended clinical applications of these markers are to: 1. Inform timing of a subsequent mammogram in women with a negative screening mammogram; 2. Inform continuation of mammographic screening among women 75-79 years; 3. Prioritize women who should be screened with mammography in areas with limited resources. Testing the reference samples would further expedite addressing these intended clinical applications by providing further validation data to support requests for samples from other sources for further Phase 3 evaluation (e.g., WHI, PLCO, and samples collected at the time of mammographic screening from the University of Toronto and UCSF).

  19. Development of a reference material for routine performance monitoring of methods measuring polychlorinated dibenzo-p-dioxins, polychlorinated dibenzofurans and dioxin-like polychlorinated biphenyls.

    PubMed

    Selliah, S S; Cussion, S; MacPherson, K A; Reiner, E J; Toner, D

    2001-06-01

    Matrix-matched environmental certified reference materials (CRMs) are one of the most useful tools to validate analytical methods, assess analytical laboratory performance and to assist in the resolution of data conflicts between laboratories. This paper describes the development of a lake sediment as a CRM for polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs) and dioxin-like polychlorinated biphenyls (DLPCBs). The presence of DLPCBs in the environment is of increased concern and analytical methods are being developed internationally for monitoring DLPCBs in the environment. This paper also reports the results of an international interlaboratory study involving thirty-five laboratories from seventeen countries, conducted to characterize and validate levels of a sediment reference material for PCDDs, PCDFs and DLPCBs.

  20. Global Land Product Validation Protocols: An Initiative of the CEOS Working Group on Calibration and Validation to Evaluate Satellite-derived Essential Climate Variables

    NASA Astrophysics Data System (ADS)

    Guillevic, P. C.; Nickeson, J. E.; Roman, M. O.; camacho De Coca, F.; Wang, Z.; Schaepman-Strub, G.

    2016-12-01

    The Global Climate Observing System (GCOS) has specified the need to systematically produce and validate Essential Climate Variables (ECVs). The Committee on Earth Observation Satellites (CEOS) Working Group on Calibration and Validation (WGCV) and in particular its subgroup on Land Product Validation (LPV) is playing a key coordination role leveraging the international expertise required to address actions related to the validation of global land ECVs. The primary objective of the LPV subgroup is to set standards for validation methods and reporting in order to provide traceable and reliable uncertainty estimates for scientists and stakeholders. The Subgroup is comprised of 9 focus areas that encompass 10 land surface variables. The activities of each focus area are coordinated by two international co-leads and currently include leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FAPAR), vegetation phenology, surface albedo, fire disturbance, snow cover, land cover and land use change, soil moisture, land surface temperature (LST) and emissivity. Recent additions to the focus areas include vegetation indices and biomass. The development of best practice validation protocols is a core activity of CEOS LPV with the objective to standardize the evaluation of land surface products. LPV has identified four validation levels corresponding to increasing spatial and temporal representativeness of reference samples used to perform validation. Best practice validation protocols (1) provide the definition of variables, ancillary information and uncertainty metrics, (2) describe available data sources and methods to establish reference validation datasets with SI traceability, and (3) describe evaluation methods and reporting. An overview on validation best practice components will be presented based on the LAI and LST protocol efforts to date.

  1. Global cross-station assessment of neuro-fuzzy models for estimating daily reference evapotranspiration

    NASA Astrophysics Data System (ADS)

    Shiri, Jalal; Nazemi, Amir Hossein; Sadraddini, Ali Ashraf; Landeras, Gorka; Kisi, Ozgur; Fard, Ahmad Fakheri; Marti, Pau

    2013-02-01

    Accurate estimation of reference evapotranspiration is important for irrigation scheduling, water resources management and planning, and other agricultural water management issues. In the present paper, the capabilities of generalized neuro-fuzzy (GNF) models were evaluated for estimating reference evapotranspiration using two separate sets of weather data from humid and non-humid regions of Spain and Iran. In this way, the data from some weather stations in the Basque Country and Valencia region (Spain) were used for training the neuro-fuzzy models [in humid and non-humid regions, respectively] and subsequently, the data from these regions were pooled to evaluate the generalization capability of a general neuro-fuzzy model in humid and non-humid regions. The developed models were tested in stations of Iran, located in humid and non-humid regions. The obtained results showed the capabilities of the generalized neuro-fuzzy model in estimating reference evapotranspiration in different climatic zones. Global GNF models calibrated using both non-humid and humid data were found to successfully estimate ET0 in both non-humid and humid regions of Iran (the lowest MAE values are about 0.23 mm for non-humid Iranian regions and 0.12 mm for humid regions). Non-humid GNF models calibrated using non-humid data performed much better than the humid GNF models calibrated using humid data in the non-humid region, while the humid GNF model gave better estimates in the humid region.

  2. The development and appraisal of a tool designed to find patients harmed by falsely labelled, falsified (counterfeit) medicines.

    PubMed

    Anđelković, Marija; Björnsson, Einar; De Bono, Virgilio; Dikić, Nenad; Devue, Katleen; Ferlin, Daniel; Hanževački, Miroslav; Jónsdóttir, Freyja; Shakaryan, Mkrtich; Walser, Sabine

    2017-06-20

    Falsely labelled, falsified (counterfeit) medicines (FFCm's) are produced or distributed illegally and can harm patients. Although the occurrence of FFCm's is increasing in Europe, harm is rarely reported. The European Directorate for the Quality of Medicines & Health-Care (EDQM) has therefore coordinated the development and validation of a screening tool. The tool consists of a questionnaire referring to a watch-list of FFCm's identified in Europe, including symptoms of their use and individual risk factors, and a scoring form. To refine the questionnaire and reference method, a pilot-study was performed in 105 self-reported users of watch-list medicines. Subsequently, the tool was validated under "real-life conditions" in 371 patients in 5 ambulatory and in-patient care sites ("sub-studies"). The physicians participating in the study scored the patients and classified their risk of harm as "unlikely" or "probable" (cut-off level: presence of ≥2 of 5 risk factors). They assessed all medical records retrospectively (independent reference method) to validate the risk classification and documented their perception of the tool's value. In 3 ambulatory care sites (180 patients), the tool correctly classified 5 patients as harmed by FFCm's. The positive and negative likelihood ratios (LR+/LR-) and the discrimination power were calculated for two cut-off levels: a) 1 site (50 patients): presence of two risk factors (at 10% estimated health care system contamination with FFCm's): LR + 4.9/LR-0, post-test probability: 35%; b) 2 sites (130 patients): presence of three risk factors (at 5% estimated prevalence of use of non-prescribed medicines (FFCm's) by certain risk groups): LR + 9.7/LR-0, post-test probability: 33%. In 2 in-patient care sites (191 patients), no patient was confirmed as harmed by FFCm's. The physicians perceived the tool as valuable for finding harm, and as an information source regarding risk factors. This "decision aid" is a systematic tool which helps find in medical practice patients harmed by FFCm's. This study supports its value in ambulatory care in regions with health care system contamination and in certain risk groups. The establishment of systematic communication between authorities and the medical community concerning FFCm's, current patterns of use and case reports may sustain positive public health impacts.
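
    The post-test probabilities quoted above follow from the standard odds form of Bayes' theorem (post-test odds = pre-test odds × likelihood ratio); the sketch below is textbook arithmetic, not code from the study, and it reproduces the reported figures to within rounding.

    def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
        # Convert probability to odds, apply the likelihood ratio, convert back
        pre_odds = pre_test_prob / (1.0 - pre_test_prob)
        post_odds = pre_odds * likelihood_ratio
        return post_odds / (1.0 + post_odds)

    # Cut-off (a): 10% estimated contamination, LR+ = 4.9  ->  ~35%
    print(round(100 * post_test_probability(0.10, 4.9)))   # 35
    # Cut-off (b): 5% estimated prevalence, LR+ = 9.7  ->  ~34% (reported as 33%)
    print(round(100 * post_test_probability(0.05, 9.7)))   # 34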

  3. Neural correlates of free recall of "famous events" in a "hypermnestic" individual as compared to an age- and education-matched reference group.

    PubMed

    Fehr, Thorsten; Staniloiu, Angelica; Markowitsch, Hans J; Erhard, Peter; Herrmann, Manfred

    2018-06-19

    Memory performance of an individual (within the age range: 50-55 years old) showing superior memory abilities (protagonist PR) was compared to an age- and education-matched reference group in a historical facts ("famous events") retrieval task. Contrasting task versus baseline performance both PR and the reference group showed fMRI activation patterns in parietal and occipital brain regions. The reference group additionally demonstrated activation patterns in cingulate gyrus, whereas PR showed additional widespread activation patterns comprising frontal and cerebellar brain regions. The direct comparison between PR and the reference group revealed larger fMRI contrasts for PR in right frontal, superior temporal and cerebellar brain regions. It was concluded that PR generally recruits brain regions as normal memory performers do, but in a more elaborate way, and furthermore, that he applied a memory-strategy that potentially includes executively driven multi-modal transcoding of information and recruitment of implicit memory resources.

  4. Validation assessment of shoreline extraction on medium resolution satellite image

    NASA Astrophysics Data System (ADS)

    Manaf, Syaifulnizam Abd; Mustapha, Norwati; Sulaiman, Md Nasir; Husin, Nor Azura; Shafri, Helmi Zulhaidi Mohd

    2017-10-01

    Monitoring coastal zones helps provide information about their condition, such as erosion or accretion, and monitoring shorelines can help measure the severity of such conditions. Such measurement can be performed more accurately by using Earth observation satellite images than by traditional ground survey. To date, shorelines can be extracted from satellite images with a high degree of accuracy by using satellite image classification techniques based on machine learning to identify the land and water classes of the shorelines. In this study, the researchers validated the shorelines extracted by 11 classifiers against a reference shoreline provided by the local authority. Specifically, the validation assessment was performed to examine the difference between the extracted shorelines and the reference shoreline. The findings showed that SVM Linear was the most effective image classification technique, as evidenced by the lowest mean distance between the extracted shoreline and the reference shoreline. Furthermore, the findings showed that the accuracy of the extracted shoreline was not directly proportional to the accuracy of the image classification.
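
    A minimal sketch of the validation metric described (the mean distance between an extracted shoreline and the reference shoreline), assuming both are available as vector lines; the geometries and sampling density are hypothetical, and shapely is used purely for illustration.

    from shapely.geometry import LineString

    reference = LineString([(0, 0), (100, 0)])            # hypothetical reference shoreline
    extracted = LineString([(0, 3), (50, 5), (100, 2)])   # hypothetical extracted shoreline

    # Sample points along the extracted shoreline and measure their distance
    # to the reference line; the mean is the comparison metric described above.
    n_samples = 50
    distances = [
        reference.distance(extracted.interpolate(i / (n_samples - 1), normalized=True))
        for i in range(n_samples)
    ]
    mean_distance = sum(distances) / len(distances)
    print(f"mean offset: {mean_distance:.2f} map units")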

  5. Determination of Perfluorinated Alkyl Acid Concentrations in Biological Standard Reference Materials

    EPA Science Inventory

    Standard reference materials (SRMs) are homogeneous, well-characterized materials used to validate measurements and improve the quality of analytical data. The National Institute of Standards and Technology (NIST) has a wide range of SRMs that have mass fraction values assigned ...

  6. QIN DAWG Validation of Gradient Nonlinearity Bias Correction Workflow for Quantitative Diffusion-Weighted Imaging in Multicenter Trials.

    PubMed

    Malyarenko, Dariya I; Wilmes, Lisa J; Arlinghaus, Lori R; Jacobs, Michael A; Huang, Wei; Helmer, Karl G; Taouli, Bachir; Yankeelov, Thomas E; Newitt, David; Chenevert, Thomas L

    2016-12-01

    Previous research has shown that system-dependent gradient nonlinearity (GNL) introduces a significant spatial bias (nonuniformity) in apparent diffusion coefficient (ADC) maps. Here, the feasibility of centralized retrospective system-specific correction of GNL bias for quantitative diffusion-weighted imaging (DWI) in multisite clinical trials is demonstrated across diverse scanners independent of the scanned object. Using corrector maps generated from system characterization by ice-water phantom measurement completed in the previous project phase, GNL bias correction was performed for test ADC measurements from an independent DWI phantom (room temperature agar) at two offset locations in the bore. The precomputed three-dimensional GNL correctors were retrospectively applied to test DWI scans by the central analysis site. The correction was blinded to reference DWI of the agar phantom at magnet isocenter where the GNL bias is negligible. The performance was evaluated from changes in ADC region of interest histogram statistics before and after correction with respect to the unbiased reference ADC values provided by sites. Both absolute error and nonuniformity of the ADC map induced by GNL (median, 12%; range, -35% to +10%) were substantially reduced by correction (7-fold in median and 3-fold in range). The residual ADC nonuniformity errors were attributed to measurement noise and other non-GNL sources. Correction of systematic GNL bias resulted in a 2-fold decrease in technical variability across scanners (down to site temperature range). The described validation of GNL bias correction marks progress toward implementation of this technology in multicenter trials that utilize quantitative DWI.

  7. QIN DAWG Validation of Gradient Nonlinearity Bias Correction Workflow for Quantitative Diffusion-Weighted Imaging in Multicenter Trials

    PubMed Central

    Malyarenko, Dariya I.; Wilmes, Lisa J.; Arlinghaus, Lori R.; Jacobs, Michael A.; Huang, Wei; Helmer, Karl G.; Taouli, Bachir; Yankeelov, Thomas E.; Newitt, David; Chenevert, Thomas L.

    2017-01-01

    Previous research has shown that system-dependent gradient nonlinearity (GNL) introduces a significant spatial bias (nonuniformity) in apparent diffusion coefficient (ADC) maps. Here, the feasibility of centralized retrospective system-specific correction of GNL bias for quantitative diffusion-weighted imaging (DWI) in multisite clinical trials is demonstrated across diverse scanners independent of the scanned object. Using corrector maps generated from system characterization by ice-water phantom measurement completed in the previous project phase, GNL bias correction was performed for test ADC measurements from an independent DWI phantom (room temperature agar) at two offset locations in the bore. The precomputed three-dimensional GNL correctors were retrospectively applied to test DWI scans by the central analysis site. The correction was blinded to reference DWI of the agar phantom at magnet isocenter where the GNL bias is negligible. The performance was evaluated from changes in ADC region of interest histogram statistics before and after correction with respect to the unbiased reference ADC values provided by sites. Both absolute error and nonuniformity of the ADC map induced by GNL (median, 12%; range, −35% to +10%) were substantially reduced by correction (7-fold in median and 3-fold in range). The residual ADC nonuniformity errors were attributed to measurement noise and other non-GNL sources. Correction of systematic GNL bias resulted in a 2-fold decrease in technical variability across scanners (down to site temperature range). The described validation of GNL bias correction marks progress toward implementation of this technology in multicenter trials that utilize quantitative DWI. PMID:28105469
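
    A minimal sketch of the correction step, under the assumption that the precomputed three-dimensional corrector map stores, per voxel, the multiplicative bias that gradient nonlinearity imposes on the measured ADC; the array shapes, values, and this exact convention are illustrative and are not taken from the workflow above.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical measured ADC map (mm^2/s) and hypothetical GNL bias map c(x, y, z)
    adc_measured = rng.normal(1.1e-3, 0.05e-3, size=(64, 64, 32))
    corrector = 1.0 + 0.10 * rng.standard_normal((64, 64, 32))

    # Assumed convention: ADC_true ~= ADC_measured / c
    adc_corrected = adc_measured / corrector

    # Region-of-interest summary statistics before and after correction
    roi = (slice(20, 40), slice(20, 40), slice(10, 20))
    print(np.median(adc_measured[roi]), np.median(adc_corrected[roi]))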

  8. Characteristics and external validity of the German Health Risk Institute (HRI) Database.

    PubMed

    Andersohn, Frank; Walker, Jochen

    2016-01-01

    The aim of this study was to describe characteristics and external validity of the German Health Risk Institute (HRI) Database. The HRI Database is an anonymized healthcare database with longitudinal data from approximately six million Germans. In addition to demographic information (gender, age, region of residence), data on persistence of insurants over time, hospitalization rates, mortality rates and drug prescription rates were extracted from the HRI database for 2013. Corresponding national reference data were obtained from official sources. The proportion of men and women was similar in the HRI Database and Germany, but the database population was slightly younger (mean 40.4 vs 43.7 years). The proportion of insurants living in the eastern part of Germany was lower in the HRI Database (10.1% vs 19.7%). There was good accordance with German reference data with respect to hospitalization rates, overall mortality rate and prescription rates for the 20 most often reimbursed drug classes, with the overall burden of morbidity being slightly lower in the HRI database. Of insurants insured on 1 January 2009 (N = 6.2 million), a total of 70.6% survived and remained continuously insured with the same statutory health insurance until 31 December 2013. This proportion increased to 77.5% if only insurants ≥40 years were considered. There was good overall accordance between the HRI database and the German population in terms of measures of morbidity, mortality and drug usage. Persistence of insurants with the database over time was high, indicating suitability of the data source for longitudinal epidemiological analyses. Copyright © 2015 John Wiley & Sons, Ltd.

  9. Monitoring Water Resources in Pastoral Areas of East Africa Using Satellite Data and Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Alemu, H.; Senay, G. B.; Velpuri, N.; Asante, K. O.

    2008-12-01

    The nomadic pastoral communities in East Africa heavily depend on small water bodies and artificial lakes for domestic and livestock uses. The shortage of water in the region has made these water resources of great importance to them and sometimes even the reason for conflicts amongst rival communities in the region. Satellite-based data has significantly transformed the way we track and estimate hydrological processes such as precipitation and evapotranspiration. This approach has been particularly useful in remote places where conventional station-based weather networks are scarce. Tropical Rainfall Measuring Mission (TRMM) satellite data were extracted for the study region. National Oceanic and Atmospheric Administration's (NOAA) Global Data Assimilation System (GDAS) data were used to extract the climatic parameters needed to calculate reference evapotranspiration. The elevation data needed to delineate the watersheds were extracted from the Shuttle Radar Topography Mission (SRTM) with spatial resolution of 90m. The waterholes (most of which have average surface area less than a hectare) were identified using Advanced Space-borne Thermal Emission and Reflection Radiometer (ASTER) images with a spatial resolution of 15 m. As part of National Aeronautics and Space Administration's (NASA) funded enhancement to a livestock early warning decision support system, a simple hydrologic water balance model was developed to estimate daily waterhole depth variations. The model was run for over 10 years from 1998 till 2008 for 10 representative waterholes in the region. Although there were no independent datasets to validate the results, the temporal patterns captured both the seasonal and inter-annual variations, depicting known drought and flood years. Future research includes the installation of staff-gauges for model calibration and validation. The simple modeling approach demonstrated the effectiveness of integrating dynamic coarse resolution datasets such as TRMM with high resolution static datasets such as ASTER and SRTM DEM (Digital Elevation Model) to monitor water resources for drought early warning applications.
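
    A minimal sketch of a daily waterhole water balance of the kind described (not the authors' model): depth is updated each day from rainfall, contributing runoff, reference evapotranspiration, and seepage, all expressed as depths; the variable names, loss terms, and forcing values are assumptions.

    def update_depth(depth_m, rain_m, runoff_in_m, et_m, seepage_m):
        # One daily water-balance step: inflows minus outflows, floored at zero
        return max(0.0, depth_m + rain_m + runoff_in_m - et_m - seepage_m)

    depth = 1.5  # m, hypothetical starting waterhole depth
    daily_forcing = [  # (rain, runoff_in, reference ET, seepage), metres per day
        (0.000, 0.000, 0.006, 0.001),
        (0.020, 0.015, 0.005, 0.001),
        (0.000, 0.002, 0.007, 0.001),
    ]
    for rain, runoff, et, seep in daily_forcing:
        depth = update_depth(depth, rain, runoff, et, seep)
    print(f"depth after {len(daily_forcing)} days: {depth:.3f} m")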

  10. Validation of reference genes aiming accurate normalization of qRT-PCR data in Dendrocalamus latiflorus Munro.

    PubMed

    Liu, Mingying; Jiang, Jing; Han, Xiaojiao; Qiao, Guirong; Zhuo, Renying

    2014-01-01

    Dendrocalamus latiflorus Munro distributes widely in subtropical areas and plays vital roles as valuable natural resources. The transcriptome sequencing for D. latiflorus Munro has been performed and numerous genes especially those predicted to be unique to D. latiflorus Munro were revealed. qRT-PCR has become a feasible approach to uncover gene expression profiling, and the accuracy and reliability of the results obtained depends upon the proper selection of stable reference genes for accurate normalization. Therefore, a set of suitable internal controls should be validated for D. latiflorus Munro. In this report, twelve candidate reference genes were selected and the assessment of gene expression stability was performed in ten tissue samples and four leaf samples from seedlings and anther-regenerated plants of different ploidy. The PCR amplification efficiency was estimated, and the candidate genes were ranked according to their expression stability using three software packages: geNorm, NormFinder and Bestkeeper. GAPDH and EF1α were characterized to be the most stable genes among different tissues or in all the sample pools, while CYP showed low expression stability. RPL3 had the optimal performance among four leaf samples. The application of verified reference genes was illustrated by analyzing ferritin and laccase expression profiles among different experimental sets. The analysis revealed the biological variation in ferritin and laccase transcript expression among the tissues studied and the individual plants. geNorm, NormFinder, and BestKeeper analyses recommended different suitable reference gene(s) for normalization according to the experimental sets. GAPDH and EF1α had the highest expression stability across different tissues and RPL3 for the other sample set. This study emphasizes the importance of validating superior reference genes for qRT-PCR analysis to accurately normalize gene expression of D. latiflorus Munro.

  11. Validation of Reference Genes for Quantitative Expression Analysis by Real-Time RT-PCR in Four Lepidopteran Insects

    PubMed Central

    Teng, Xiaolu; Zhang, Zan; He, Guiling; Yang, Liwen; Li, Fei

    2012-01-01

    Quantitative real-time polymerase chain reaction (qPCR) is an efficient and widely used technique to monitor gene expression. Housekeeping genes (HKGs) are often empirically selected as the reference genes for data normalization. However, the suitability of HKGs used as the reference genes has been seldom validated. Here, six HKGs were chosen (actin A3, actin A1, GAPDH, G3PDH, E2F, rp49) in four lepidopteran insects Bombyx mori L. (Lepidoptera: Bombycidae), Plutella xylostella L. (Plutellidae), Chilo suppressalis Walker (Crambidae), and Spodoptera exigua Hübner (Noctuidae) to study their expression stability. The algorithms of geNorm, NormFinder, stability index, and ΔCt analysis were used to evaluate these HKGs. Across different developmental stages, actin A1 was the most stable in P. xylostella and C. suppressalis, but it was the least stable in B. mori and S. exigua. Rp49 and GAPDH were the most stable in B. mori and S. exigua, respectively. In different tissues, GAPDH, E2F, and Rp49 were the most stable in B. mori, S. exigua, and C. suppressalis, respectively. The relative abundances of Siwi genes estimated by 2-ΔΔCt method were tested with different HKGs as the reference gene, proving the importance of internal controls in qPCR data analysis. The results not only presented a list of suitable reference genes in four lepidopteran insects, but also proved that the expression stabilities of HKGs were different among evolutionarily close species. There was no single universal reference gene that could be used in all situations. It is indispensable to validate the expression of HKGs before using them as the internal control in qPCR. PMID:22938136

  12. Validation of reference genes for quantitative expression analysis by real-time rt-PCR in four lepidopteran insects.

    PubMed

    Teng, Xiaolu; Zhang, Zan; He, Guiling; Yang, Liwen; Li, Fei

    2012-01-01

    Quantitative real-time polymerase chain reaction (qPCR) is an efficient and widely used technique to monitor gene expression. Housekeeping genes (HKGs) are often empirically selected as the reference genes for data normalization. However, the suitability of HKGs used as the reference genes has been seldom validated. Here, six HKGs were chosen (actin A3, actin A1, GAPDH, G3PDH, E2F, rp49) in four lepidopteran insects Bombyx mori L. (Lepidoptera: Bombycidae), Plutella xylostella L. (Plutellidae), Chilo suppressalis Walker (Crambidae), and Spodoptera exigua Hübner (Noctuidae) to study their expression stability. The algorithms of geNorm, NormFinder, stability index, and ΔCt analysis were used to evaluate these HKGs. Across different developmental stages, actin A1 was the most stable in P. xylostella and C. suppressalis, but it was the least stable in B. mori and S. exigua. Rp49 and GAPDH were the most stable in B. mori and S. exigua, respectively. In different tissues, GAPDH, E2F, and Rp49 were the most stable in B. mori, S. exigua, and C. suppressalis, respectively. The relative abundances of Siwi genes estimated by the 2^(-ΔΔCt) method were tested with different HKGs as the reference gene, proving the importance of internal controls in qPCR data analysis. The results not only presented a list of suitable reference genes in four lepidopteran insects, but also proved that the expression stabilities of HKGs were different among evolutionarily close species. There was no single universal reference gene that could be used in all situations. It is indispensable to validate the expression of HKGs before using them as the internal control in qPCR.
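
    The 2^(-ΔΔCt) calculation named in both records above is the standard Livak method; the sketch below shows the arithmetic with hypothetical Ct values (the gene and tissue labels in the comments are illustrative, not data from the study).

    def relative_expression(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
        # Livak 2^(-ddCt): normalize the target gene to a reference (housekeeping)
        # gene, then to a calibrator/control sample
        delta_ct_sample = ct_target_sample - ct_ref_sample
        delta_ct_control = ct_target_control - ct_ref_control
        return 2.0 ** -(delta_ct_sample - delta_ct_control)

    # Hypothetical Ct values: a Siwi gene vs. an rp49 reference, sample vs. control
    print(round(relative_expression(24.1, 18.0, 26.3, 18.2), 2))   # ~4.0-fold relative expression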

  13. CRISPR/Cas9 Technology-Based Xenograft Tumors as Candidate Reference Materials for Multiple EML4-ALK Rearrangements Testing.

    PubMed

    Peng, Rongxue; Zhang, Rui; Lin, Guigao; Yang, Xin; Li, Ziyang; Zhang, Kuo; Zhang, Jiawei; Li, Jinming

    2017-09-01

    The echinoderm microtubule-associated protein-like 4 and anaplastic lymphoma kinase (ALK) receptor tyrosine kinase (EML4-ALK) rearrangement is an important biomarker that plays a pivotal role in therapeutic decision making for non-small-cell lung cancer (NSCLC) patients. Ensuring accuracy and reproducibility of EML4-ALK testing by fluorescence in situ hybridization, immunohistochemistry, RT-PCR, and next-generation sequencing requires reliable reference materials for monitoring assay sensitivity and specificity. Herein, we developed novel reference materials for various kinds of EML4-ALK testing. CRISPR/Cas9 was used to edit various NSCLC cell lines containing EML4-ALK rearrangement variants 1, 2, and 3a/b. After s.c. inoculation, the formalin-fixed, paraffin-embedded (FFPE) samples from xenografts were prepared and tested for suitability as candidate reference materials by fluorescence in situ hybridization, immunohistochemistry, RT-PCR, and next-generation sequencing. Sample validation and commutability assessments showed that all types of FFPE samples derived from xenograft tumors have typical histological structures, and EML4-ALK testing results were similar to the clinical ALK-positive NSCLC specimens. Among the four methods for EML4-ALK detection, the validation test showed 100% concordance. Furthermore, these novel FFPE reference materials showed good stability and homogeneity. Without limitations on variant types and production, our novel FFPE samples based on CRISPR/Cas9 editing and xenografts are suitable as candidate reference materials for the validation, verification, internal quality control, and proficiency testing of EML4-ALK detection. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  14. The establishment of a WHO Reference Reagent for anti-malaria (Plasmodium falciparum) human serum.

    PubMed

    Bryan, Donna; Silva, Nilupa; Rigsby, Peter; Dougall, Thomas; Corran, Patrick; Bowyer, Paul W; Ho, Mei Mei

    2017-08-05

    At a World Health Organization (WHO) sponsored meeting it was concluded that there is an urgent need for a reference preparation that contains antibodies against malaria antigens in order to support serology studies and vaccine development. It was proposed that this reference would take the form of a lyophilized serum or plasma pool from a malaria-endemic area. In response, an immunoassay standard, comprising defibrinated human plasma, has been prepared and evaluated in a collaborative study. A pool of human plasma from a malaria endemic region was collected from 140 single plasma donations selected for reactivity to Plasmodium falciparum apical membrane antigen-1 (AMA-1) and merozoite surface proteins (MSP-1₁₉, MSP-1₄₂, MSP-2 and MSP-3). This pool was defibrinated, filled and freeze dried into a single batch of ampoules to yield a stable source of naturally occurring antibodies to P. falciparum. The preparation was evaluated by an enzyme-linked immunosorbent assay (ELISA) in a collaborative study with sixteen participants from twelve different countries. This anti-malaria human serum preparation (NIBSC Code: 10/198) was adopted by the WHO Expert Committee on Biological Standardization (ECBS) in October 2014, as the first WHO reference reagent for anti-malaria (Plasmodium falciparum) human serum with an assigned arbitrary unitage of 100 units (U) per ampoule. Analysis of the reference reagent in a collaborative study has demonstrated the benefit of this preparation for the reduction in inter- and intra-laboratory variability in ELISA. Whilst locally sourced pools are regularly used for harmonization both within and between a few laboratories, the presence of a WHO-endorsed reference reagent should enable optimal harmonization of malaria serological assays either by direct use of the reference reagent or calibration of local standards against this WHO reference. The intended uses of this reference reagent, a multivalent preparation, are (1) to allow cross-comparisons of results of vaccine trials performed in different centres/with different products; (2) to facilitate standardization and harmonization of immunological assays used in epidemiology research; and (3) to allow optimization and validation of immunological assays used in malaria vaccine development.

  15. MolProbity: More and better reference data for improved all-atom structure validation.

    PubMed

    Williams, Christopher J; Headd, Jeffrey J; Moriarty, Nigel W; Prisant, Michael G; Videau, Lizbeth L; Deis, Lindsay N; Verma, Vishal; Keedy, Daniel A; Hintze, Bradley J; Chen, Vincent B; Jain, Swati; Lewis, Steven M; Arendall, W Bryan; Snoeyink, Jack; Adams, Paul D; Lovell, Simon C; Richardson, Jane S; Richardson, David C

    2018-01-01

    This paper describes the current update on macromolecular model validation services that are provided at the MolProbity website, emphasizing changes and additions since the previous review in 2010. There have been many infrastructure improvements, including rewrite of previous Java utilities to now use existing or newly written Python utilities in the open-source CCTBX portion of the Phenix software system. This improves long-term maintainability and enhances the thorough integration of MolProbity-style validation within Phenix. There is now a complete MolProbity mirror site at http://molprobity.manchester.ac.uk. GitHub serves our open-source code, reference datasets, and the resulting multi-dimensional distributions that define most validation criteria. Coordinate output after Asn/Gln/His "flip" correction is now more idealized, since the post-refinement step has apparently often been skipped in the past. Two distinct sets of heavy-atom-to-hydrogen distances and accompanying van der Waals radii have been researched and improved in accuracy, one for the electron-cloud-center positions suitable for X-ray crystallography and one for nuclear positions. New validations include messages at input about problem-causing format irregularities, updates of Ramachandran and rotamer criteria from the million quality-filtered residues in a new reference dataset, the CaBLAM Cα-CO virtual-angle analysis of backbone and secondary structure for cryoEM or low-resolution X-ray, and flagging of the very rare cis-nonProline and twisted peptides which have recently been greatly overused. Due to wide application of MolProbity validation and corrections by the research community, in Phenix, and at the worldwide Protein Data Bank, newly deposited structures have continued to improve greatly as measured by MolProbity's unique all-atom clashscore. © 2017 The Protein Society.

  16. [Drug-promoting advertisements in the Dutch Journal of Medicine and Pharmaceutical Weekly: not always evidence based].

    PubMed

    van Eeden, Annelies E; Roach, Rachel E J; Halbesma, Nynke; Dekker, Friedo W

    2012-01-01

    To determine and compare the foundation of claims in drug-promoting advertisements in a Dutch journal for physicians and a Dutch journal for pharmacists. A cross-sectional study. We included all the drug-promoting advertisements referring to a randomized controlled trial (RCT) we could find on Medline from 2 volumes of the Dutch Journal of Medicine (Nederlands Tijdschrift voor Geneeskunde; NTvG) and the (also Dutch) Pharmaceutical Weekly (Pharmaceutisch Weekblad; PW). The validity of the advertisements (n = 54) and the methodological quality of the referenced RCTs (n = 150) were independently scored by 250 medical students using 2 standardised questionnaires. The advertisements' sources were concealed from the students. Per journal, the percentage of drug-promoting advertisements having a valid claim and the percentage of high-quality RCT references were determined. Average scores on quality and validity were compared between the 2 journals. On a scale of 0-18 points, the mean quality scores of the RCTs differed 0.3 (95% CI: -0.1-0.7) between the NTvG (score: 14.8; SD: 2.2) and the PW (score: 14.5; SD: 2.6). The difference between the validity scores of drug-promoting advertisements in the NTvG (score: 5.8; SD: 3.3) and the PW (score: 5.6; SD: 3.6) was 0.3 (95% CI: -0.3-0.9) on a scale of 0-10 points. For both journals, an average of 15% of drug-promoting advertisements was valid (defined as a validity score of > 8 points); 35% of the RCTs referred to was of good methodological quality (defined as a quality score of > 16 points). The substantiation of many claims in drug-promoting advertisements in the NTvG and the PW was mediocre. There was no difference between the 2 journals.

  17. Developmental validation of a Nextera XT mitogenome Illumina MiSeq sequencing method for high-quality samples.

    PubMed

    Peck, Michelle A; Sturk-Andreaggi, Kimberly; Thomas, Jacqueline T; Oliver, Robert S; Barritt-Ross, Suzanne; Marshall, Charla

    2018-05-01

    Generating mitochondrial genome (mitogenome) data from reference samples in a rapid and efficient manner is critical to harnessing the greater power of discrimination of the entire mitochondrial DNA (mtDNA) marker. The method of long-range target enrichment, Nextera XT library preparation, and Illumina sequencing on the MiSeq is a well-established technique for generating mitogenome data from high-quality samples. To this end, a validation was conducted for this mitogenome method processing up to 24 samples simultaneously along with analysis in the CLC Genomics Workbench and utilizing the AQME (AFDIL-QIAGEN mtDNA Expert) tool to generate forensic profiles. This validation followed the Federal Bureau of Investigation's Quality Assurance Standards (QAS) for forensic DNA testing laboratories and the Scientific Working Group on DNA Analysis Methods (SWGDAM) validation guidelines. The evaluation of control DNA, non-probative samples, blank controls, mixtures, and nonhuman samples demonstrated the validity of this method. Specifically, the sensitivity was established at ≥25 pg of nuclear DNA input for accurate mitogenome profile generation. Unreproducible low-level variants were observed in samples with low amplicon yields. Further, variant quality was shown to be a useful metric for identifying sequencing error and crosstalk. Success of this method was demonstrated with a variety of reference sample substrates and extract types. These studies further demonstrate the advantages of using NGS techniques by highlighting the quantitative nature of heteroplasmy detection. The results presented herein from more than 175 samples processed in ten sequencing runs, show this mitogenome sequencing method and analysis strategy to be valid for the generation of reference data. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Identification and validation of reference genes for normalization of gene expression analysis using qRT-PCR in Helicoverpa armigera (Lepidoptera: Noctuidae).

    PubMed

    Zhang, Songdou; An, Shiheng; Li, Zhen; Wu, Fengming; Yang, Qingpo; Liu, Yichen; Cao, Jinjun; Zhang, Huaijiang; Zhang, Qingwen; Liu, Xiaoxia

    2015-01-25

    Recent studies have focused on determining functional genes and microRNAs in the pest Helicoverpa armigera (Lepidoptera: Noctuidae). Most of these studies used quantitative real-time PCR (qRT-PCR). Suitable reference genes are necessary to normalize gene expression data of qRT-PCR. However, a comprehensive study on the reference genes in H. armigera remains lacking. Twelve candidate reference genes of H. armigera were selected and evaluated for their expression stability under different biotic and abiotic conditions. The comprehensive stability ranking of candidate reference genes was recommended by RefFinder and the optimal number of reference genes was calculated by geNorm. Two target genes, thioredoxin (TRX) and Cu/Zn superoxide dismutase (SOD), were used to validate the selection of reference genes. Results showed that the most suitable candidate combinations of reference genes were as follows: 28S and RPS15 for developmental stages; RPS15 and RPL13 for larvae tissues; EF and RPL27 for adult tissues; GAPDH, RPL27, and β-TUB for nuclear polyhedrosis virus infection; RPS15 and RPL32 for insecticide treatment; RPS15 and RPL27 for temperature treatment; and RPL32, RPS15, and RPL27 for all samples. This study not only establishes an accurate method for normalizing qRT-PCR data in H. armigera but also serves as a reference for further study on gene transcription in H. armigera and other insects. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Regional Feedstock Partnership Summary Report: Enabling the Billion-Ton Vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, Vance N.; Karlen, Douglas L.; Lacey, Jeffrey A.

    2016-07-12

    The U.S. Department of Energy (DOE) and the Sun Grant Initiative established the Regional Feedstock Partnership (referred to as the Partnership) to address information gaps associated with enabling the vision of a sustainable, reliable, billion-ton U.S. bioenergy industry by the year 2030 (i.e., the Billion-Ton Vision). Over the past 7 years (2008–2014), the Partnership has been successful at advancing the biomass feedstock production industry in the United States, with notable accomplishments. The Billion-Ton Study identifies the technical potential to expand domestic biomass production to offset up to 30% of U.S. petroleum consumption, while continuing to meet demands for food, feed, fiber, and export. This study verifies for the biofuels and chemical industries that a real and substantial resource base could justify the significant investment needed to develop robust conversion technologies and commercial-scale facilities. DOE and the Sun Grant Initiative established the Partnership to demonstrate and validate the underlying assumptions underpinning the Billion-Ton Vision to supply a sustainable and reliable source of lignocellulosic feedstock to a large-scale bioenergy industry. This report discusses the accomplishments of the Partnership, with references to accompanying scientific publications. These accomplishments include advances in sustainable feedstock production, feedstock yield, yield stability and stand persistence, energy crop commercialization readiness, information transfer, assessment of the economic impacts of achieving the Billion-Ton Vision, and the impact of feedstock species and environment conditions on feedstock quality characteristics.

  20. Definition and Classification of Generic Drugs Across the World.

    PubMed

    Alfonso-Cristancho, Rafael; Andia, Tatiana; Barbosa, Tatiana; Watanabe, Jonathan H

    2015-08-01

    Our aim was to systematically identify and compare how generic medications, as defined by the US Food and Drug Administration (FDA), World Health Organization (WHO), and European Medicines Agency (EMA), are classified and defined by regulatory agencies around the world. We focused on emerging markets and selected the most populated countries in each of the WHO regions: Africa, the Americas, Eastern Mediterranean, Europe, Southeast Asia, and Western Pacific. A structured review of published literature was performed through December 2013. Direct information from regulatory agencies and Ministries of Health for each country was extracted. Additionally, key informant interviews were performed for validation. Of the 21 countries selected, approximately half provided an official country-level definition for generic pharmaceuticals. The others did not have any definition or referred to the WHO. Only two-thirds of the countries had specific requirements for generic pharmaceuticals, often associated with clinical interchangeability. Most countries with requirements mention bioequivalence, but few required bioavailability studies explicitly. Over 30% of the countries had other terms associated with generics in their definitions and processes. In countries with generic drug policies, there is reference to patent and/or data protection during the drug registration process. Several countries do not mention good manufacturing practices as part of the evaluation process. Countries in Africa and Eastern Mediterranean regions appear to have a less developed regulatory framework. In summary, there is significant variability in the definition and classification of generic drugs in emerging markets. Standardization of the definitions is necessary to make international comparisons viable.

  1. Reference-based source separation method for identification of brain regions involved in a reference state from intracerebral EEG

    PubMed Central

    Samadi, Samareh; Amini, Ladan; Cosandier-Rimélé, Delphine; Soltanian-Zadeh, Hamid; Jutten, Christian

    2013-01-01

    In this paper, we present a fast method to extract the sources related to interictal epileptiform state. The method is based on general eigenvalue decomposition using two correlation matrices during: 1) periods including interictal epileptiform discharges (IED) as a reference activation model and 2) periods excluding IEDs or abnormal physiological signals as background activity. After extracting the most similar sources to the reference or IED state, IED regions are estimated by using multiobjective optimization. The method is evaluated using both realistic simulated data and actual intracerebral electroencephalography recordings of patients suffering from focal epilepsy. These patients are seizure-free after the resective surgery. Quantitative comparisons of the proposed IED regions with the visually inspected ictal onset zones by the epileptologist and another method of identification of IED regions reveal good performance. PMID:23428609
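
    The core of the method is a generalized eigenvalue decomposition of two correlation matrices, one estimated over IED (reference) periods and one over background periods. A minimal sketch follows, assuming the intracerebral EEG is available as a channels × samples array and a boolean mask marks the reference periods; function and variable names are illustrative, not taken from the paper.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def reference_based_sources(x, ied_mask, n_sources=3):
        """x: (n_channels, n_samples) intracerebral EEG; ied_mask: boolean mask of
        samples belonging to the IED (reference) periods. Returns spatial filters
        and the extracted sources most similar to the reference state."""
        x = x - x.mean(axis=1, keepdims=True)
        c_ref = np.cov(x[:, ied_mask])    # correlation during IED periods
        c_bg = np.cov(x[:, ~ied_mask])    # correlation during background activity
        # generalized eigenvalue problem: C_ref w = lambda C_bg w
        # (c_bg is assumed positive definite)
        eigvals, eigvecs = eigh(c_ref, c_bg)
        order = np.argsort(eigvals)[::-1]          # largest power ratio first
        w = eigvecs[:, order[:n_sources]]          # spatial filters
        sources = w.T @ x                          # extracted source time courses
        return w, sources
    ```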

  2. The learning continuum based on student's level of competence and specific pedagogical learning material on physiological aspects from teachers's opinions

    NASA Astrophysics Data System (ADS)

    Hadi, Ria Fitriyani; Subali, Bambang

    2017-08-01

    The scope of the learning continuum at the conceptual knowledge level is formulated based on the student's level of competence and the specific pedagogical learning material. The purpose of this study is to develop a learning continuum of specific pedagogical material aspects of physiology targeted at students in primary and secondary education. This research was conducted in the Yogyakarta Special Region Province from October 2016 to January 2017 using a survey method. The data were collected using a questionnaire that had been validated for construct validity and through expert judgement. Respondents in this study consisted of 281 Science/Biology teachers at public junior and senior high schools in the Yogyakarta Special Region Province, spread across Yogyakarta City and four regencies, namely Sleman, Bantul, Kulonprogo, and Gunungkidul. The data were taken using a census and analyzed using a descriptive analysis technique. The results show that, in the teachers' opinions, the physiology material of the learning continuum for grades VII, VIII, and IX is taught in grades VII, VIII, IX, and X at level C2 (understanding), while that for grades X, XI, and XII is taught in grades X and XI at levels C2 (understanding), C3 (applying), and C4 (analyzing). The conclusion is that many teachers refer to the existing curriculum rather than their own original ideas when developing a learning continuum.

  3. Positive predictive value of cardiac examination, procedure and surgery codes in the Danish National Patient Registry: a population-based validation study

    PubMed Central

    Adelborg, Kasper; Sundbøll, Jens; Munch, Troels; Frøslev, Trine; Sørensen, Henrik Toft; Bøtker, Hans Erik; Schmidt, Morten

    2016-01-01

    Objective Danish medical registries are widely used for cardiovascular research, but little is known about the data quality of cardiac interventions. We computed positive predictive values (PPVs) of codes for cardiac examinations, procedures and surgeries registered in the Danish National Patient Registry during 2010–2012. Design Population-based validation study. Setting We randomly sampled patients from 1 university hospital and 2 regional hospitals in the Central Denmark Region. Participants 1239 patients undergoing different cardiac interventions. Main outcome measure PPVs with medical record review as reference standard. Results A total of 1233 medical records (99% of the total sample) were available for review. PPVs ranged from 83% to 100%. For examinations, the overall PPV was 98%, reflecting PPVs of 97% for echocardiography, 97% for right heart catheterisation and 100% for coronary angiogram. For procedures, the PPV was 98% overall, with PPVs of 98% for thrombolysis, 92% for cardioversion, 100% for radiofrequency ablation, 98% for percutaneous coronary intervention, and 100% for both cardiac pacemakers and implantable cardiac defibrillators. For cardiac surgery, the overall PPV was 99%, encompassing PPVs of 100% for mitral valve surgery, 99% for aortic valve surgery, 98% for coronary artery bypass graft surgery, and 100% for heart transplantation. The accuracy of coding was consistent within age, sex, and calendar year categories, and the agreement between independent reviewers was high (99%). Conclusions Cardiac examinations, procedures and surgeries have high PPVs in the Danish National Patient Registry. PMID:27940630
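
    For readers reproducing this kind of registry validation, the PPV is simply the fraction of coded cases confirmed by medical record review. A minimal sketch with a normal-approximation confidence interval; the example counts are illustrative, not the paper's subgroup figures.

    ```python
    import numpy as np

    def ppv_with_ci(n_confirmed, n_coded, z=1.96):
        """Positive predictive value of a registry code: confirmed / coded cases,
        with a Wald (normal-approximation) 95% confidence interval."""
        ppv = n_confirmed / n_coded
        se = np.sqrt(ppv * (1 - ppv) / n_coded)
        return ppv, (max(0.0, ppv - z * se), min(1.0, ppv + z * se))

    # hypothetical example: 1210 of 1233 reviewed records confirmed the code
    print(ppv_with_ci(1210, 1233))
    ```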

  4. Operational skill assessment of the IBI-MFC Ocean Forecasting System within the frame of the CMEMS.

    NASA Astrophysics Data System (ADS)

    Lorente Jimenez, Pablo; Garcia-Sotillo, Marcos; Amo-Balandron, Arancha; Aznar Lecocq, Roland; Perez Gomez, Begoña; Levier, Bruno; Alvarez-Fanjul, Enrique

    2016-04-01

    Since operational ocean forecasting systems (OOFSs) are increasingly used as tools to support high-stakes decision-making for coastal management, a rigorous skill assessment of model performance becomes essential. In this context, the IBI-MFC (Iberia-Biscay-Ireland Monitoring & Forecasting Centre) has been providing daily ocean model estimates and forecasts for the IBI regional seas since 2011, first in the frame of MyOcean projects and later as part of the Copernicus Marine Environment Monitoring Service (CMEMS). A comprehensive web validation tool named NARVAL (North Atlantic Regional VALidation) has been developed to routinely monitor IBI performance and to evaluate the model's veracity and prognostic capabilities. Three-dimensional comparisons are carried out on different time bases ('online mode', with daily verifications, and 'delayed mode', for longer time periods) using a broad variety of in-situ (buoys, tide-gauges, ARGO-floats, drifters and gliders) and remote-sensing (satellite and HF radars) observational sources as reference fields to validate against the NEMO model solution. Product quality indicators and meaningful skill metrics are automatically computed, not only averaged over the entire IBI domain but also over specific sub-regions of particular interest from a user perspective (i.e. coastal or shelf areas), in order to determine IBI spatial and temporal uncertainty levels. A complementary aspect of the NARVAL web tool is the intercomparison of different CMEMS forecast model solutions in overlapping areas. Noticeable efforts are in progress to quantitatively assess the quality and consistency of nested system outputs by setting up specific intercomparison exercises on different temporal and spatial scales, encompassing global configurations (CMEMS Global system), regional applications (NWS and MED ones) and local high-resolution coastal models (i.e. the PdE SAMPA system in the Gibraltar Strait). NARVAL constitutes a powerful approach to increase our knowledge of the IBI-MFC forecast system and helps inform CMEMS end users about the confidence level of the ocean forecasting products provided, by routinely delivering QUality Information Documents (QUIDs). It allows the detection of strengths and weaknesses in the modeling of several key physical processes and the understanding of potential sources of discrepancies in IBI predictions. Once the numerical model shortcomings are identified, potential improvements can be achieved thanks to reliable upgrades, allowing the IBI OOFS to evolve towards more refined and advanced versions.
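
    The skill metrics referred to here (bias, RMSE and correlation between collocated model and observation values) are standard quantities; the following is a minimal sketch of their computation, not the NARVAL implementation.

    ```python
    import numpy as np

    def skill_metrics(model, obs):
        """Basic product-quality indicators of the kind computed in routine model
        validation: bias, RMSE and Pearson correlation between collocated model
        estimates and reference observations (e.g. buoy data vs. the NEMO solution)."""
        model = np.asarray(model, float)
        obs = np.asarray(obs, float)
        bias = np.mean(model - obs)
        rmse = np.sqrt(np.mean((model - obs) ** 2))
        corr = np.corrcoef(model, obs)[0, 1]
        return {"bias": bias, "rmse": rmse, "corr": corr}
    ```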

  5. Selection and validation of suitable reference genes for miRNA expression normalization by quantitative RT-PCR in citrus somatic embryogenic and adult tissues.

    PubMed

    Kou, Shu-Jun; Wu, Xiao-Meng; Liu, Zheng; Liu, Yuan-Long; Xu, Qiang; Guo, Wen-Wu

    2012-12-01

    miRNAs have recently been reported to modulate somatic embryogenesis (SE), a key pathway of plant regeneration in vitro. For expression level detection and subsequent function dissection of miRNAs in certain biological processes, qRT-PCR is one of the most effective and sensitive techniques, for which suitable reference gene selection is a prerequisite. In this study, three miRNAs and eight non-coding RNAs (ncRNA) were selected as reference candidates, and their expression stability was inspected in developing citrus SE tissues cultured at 20, 25, and 30 °C. Stability of the eight non-miRNA ncRNAs was further validated in five adult tissues without temperature treatment. The best single reference gene for SE tissues was snoR14 or snoRD25, while for the adult tissues the best one was U4; although they were not as stable as the optimal multiple references snoR14 + U6 for SE tissues and snoR14 + U5 for adult tissues. For expression normalization of less abundant miRNAs in SE tissues, miR3954 was assessed as a viable reference. Single reference gene snoR14 outperformed multiple references for the overall SE and adult tissues. As one of the pioneer systematic studies on reference gene identification for plant miRNA normalization, this study benefits future exploration on miRNA function in citrus and provides valuable information for similar studies in other higher plants. Three miRNAs and eight non-coding RNAs were tested as reference candidates on developing citrus SE tissues. Best single references snoR14 or snoRD25 and optimal multiple references snoR14 + U6, snoR14 + U5 were identified.

  6. AMOVA ["Accumulative Manifold Validation Analysis"]: An Advanced Statistical Methodology Designed to Measure and Test the Validity, Reliability, and Overall Efficacy of Inquiry-Based Psychometric Instruments

    ERIC Educational Resources Information Center

    Osler, James Edward, II

    2015-01-01

    This monograph provides an epistemological rationale for the Accumulative Manifold Validation Analysis [also referred to by the acronym "AMOVA"] statistical methodology designed to test psychometric instruments. This form of inquiry is a type of mathematical optimization in the discipline of linear stochastic modelling. AMOVA is an in-depth…

  7. FY2012 summary of tasks completed on PROTEUS-thermal work.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.H.; Smith, M.A.

    2012-06-06

    PROTEUS is a suite of neutronics codes, both old and new, that can be used within the SHARP codes being developed under the NEAMS program. Discussion here is focused on updates and verification and validation activities of the SHARP neutronics code, DeCART, for application to thermal reactor analysis. As part of the development of SHARP tools, the different versions of the DeCART code created for PWR, BWR, and VHTR analysis were integrated. Verification and validation tests for the integrated version were started, and the generation of cross section libraries based on the subgroup method was revisited for the targeted reactor types. The DeCART code has been reorganized in preparation for an efficient integration of the different versions for PWR, BWR, and VHTR analysis. In DeCART, the old-fashioned common blocks and header files have been replaced by advanced memory structures. However, the changing of variable names was minimized in order to limit problems with the code integration. Since the remaining stability problems of DeCART were mostly caused by the CMFD methodology and modules, significant work was performed to determine whether they could be replaced by more stable methods and routines. The cross section library is a key element in obtaining accurate solutions. Thus, the procedure for generating cross section libraries was revisited to provide libraries tailored for the targeted reactor types. To improve accuracy in the cross section library, an attempt was made to replace the CENTRM code by the MCNP Monte Carlo code as a tool for obtaining reference resonance integrals. The use of the Monte Carlo code allows us to minimize problems or approximations that CENTRM introduces, since the accuracy of the subgroup data is limited by that of the reference solutions. The use of MCNP requires an additional set of libraries without resonance cross sections so that reference calculations can be performed for a unit cell in which only one isotope of interest includes resonance cross sections, among the isotopes in the composition. The OECD MHTGR-350 benchmark core was simulated using DeCART as the initial focus of the verification/validation efforts. Among the benchmark problems, Exercise 1 of Phase 1 is a steady-state benchmark case for the neutronics calculation for which block-wise cross sections were provided in 26 energy groups. This type of problem was designed for a homogenized geometry solver like DIF3D rather than the high-fidelity code DeCART. Instead of the homogenized block cross sections given in the benchmark, the VHTR-specific 238-group ENDF/B-VII.0 library of DeCART was directly used for preliminary calculations. Initial results showed that the multiplication factors of a fuel pin and a fuel block with or without a control rod hole were off by 6, -362, and -183 pcm Δk from comparable MCNP solutions, respectively. The 2-D and 3-D one-third core calculations were also conducted for the all-rods-out (ARO) and all-rods-in (ARI) configurations, producing reasonable results. Figure 1 illustrates the intermediate (1.5 eV - 17 keV) and thermal (below 1.5 eV) group flux distributions. As seen from VHTR cores with annular fuels, the intermediate group fluxes are relatively high in the fuel region, but the thermal group fluxes are higher in the inner and outer graphite reflector regions than in the fuel region.
    To support the current project, a new three-year I-NERI collaboration involving ANL and KAERI was started in November 2011, focused on performing in-depth verification and validation of high-fidelity multi-physics simulation codes for LWR and VHTR. The work scope includes generating improved cross section libraries for the targeted reactor types, developing benchmark models for verification and validation of the neutronics code with or without thermo-fluid feedback, and performing detailed comparisons of predicted reactor parameters against both Monte Carlo solutions and experimental measurements. The following list summarizes the work conducted so far for PROTEUS-Thermal Tasks: (1) Unification of different versions of DeCART was initiated, and at the same time code modernization was conducted to make code unification efficient; (2) Regeneration of cross section libraries was attempted for the targeted reactor types, and the procedure for generating cross section libraries was updated by replacing CENTRM with MCNP for reference resonance integrals; (3) The MHTGR-350 benchmark core was simulated using DeCART with the VHTR-specific 238-group ENDF/B-VII.0 library, and MCNP calculations were performed for comparison; and (4) Benchmark problems for PWR and BWR analysis were prepared for the DeCART verification/validation effort. In the coming months, the work listed above will be completed. Cross section libraries will be generated with optimized group structures for specific reactor types.

  8. Validating Large Scale Networks Using Temporary Local Scale Networks

    USDA-ARS?s Scientific Manuscript database

    The USDA NRCS Soil Climate Analysis Network and NOAA Climate Reference Networks are nationwide meteorological and land surface data networks with soil moisture measurements in the top layers of soil. There is considerable interest in scaling these point measurements to larger scales for validating ...

  9. Pedestrian detection in thermal images: An automated scale based region extraction with curvelet space validation

    NASA Astrophysics Data System (ADS)

    Lakshmi, A.; Faheema, A. G. J.; Deodhare, Dipti

    2016-05-01

    Pedestrian detection is a key problem in night vision processing with dozens of applications that will positively impact the performance of autonomous systems. Despite significant progress, our study shows that the performance of state-of-the-art thermal image pedestrian detectors still has much room for improvement. The purpose of this paper is to overcome the challenge faced by thermal image pedestrian detectors, which employ intensity-based Region Of Interest (ROI) extraction followed by feature-based validation. The most striking disadvantage of the first module, ROI extraction, is the failure to detect cloth-insulated body parts. To overcome this setback, this paper employs a region-growing algorithm tuned to the scale of the pedestrian. The statistics subtended by the pedestrian vary drastically with scale, and a deviation-from-normality approach facilitates scale detection. Further, the paper offers an adaptive mathematical threshold to resolve the problem of subtracting the background while also extracting cloth-insulated parts. The inherent false positives of the ROI extraction module are limited by the choice of good features in the pedestrian validation step. One such feature is the curvelet feature, which has been used extensively in optical images but has no reported results yet in thermal images. This has been used to arrive at a pedestrian detector with a reduced false positive rate. This work is the first venture to scrutinize the utility of curvelet features for characterizing pedestrians in thermal images. An attempt has also been made to improve the speed of the curvelet transform computation. The classification task is realized through the use of the well-known methodology of Support Vector Machines (SVMs). The proposed method is substantiated with qualified evaluation methodologies that permit us to carry out probing and informative comparisons across state-of-the-art features, including deep learning methods, with six standard and in-house databases. With reference to deep learning, our algorithm exhibits comparable performance. More importantly, it has significantly lower requirements in terms of compute power and memory, thus making it more relevant for deployment in resource-constrained platforms with significant size, weight and power constraints.

  10. MODIS Tree Cover Validation for the Circumpolar Taiga-Tundra Transition Zone

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Nelson, R.; Sun, G.; Margolis, H.; Kerber, A.; Ranson, K. J.

    2009-01-01

    A validation of the 2005 500m MODIS vegetation continuous fields (VCF) tree cover product in the circumpolar taiga-tundra ecotone was performed using high resolution Quickbird imagery. Assessing the VCF's performance near the northern limits of the boreal forest can help quantify the accuracy of the product within this vegetation transition area. The circumpolar region was divided into longitudinal zones and validation sites were selected in areas of varying tree cover where Quickbird imagery is available in Google Earth. Each site was linked to the corresponding VCF pixel and overlaid with a regular dot grid within the VCF pixel's boundary to estimate percent tree crown cover in the area. Percent tree crown cover was estimated using Quickbird imagery for 396 sites throughout the circumpolar region and related to the VCF's estimates of canopy cover for 2000-2005. Regression results of VCF inter-annual comparisons (2000-2005) and VCF-Quickbird image-interpreted estimates indicate that: (1) Pixel-level, inter-annual comparisons of VCF estimates of percent canopy cover were linearly related (mean R² = 0.77) and exhibited an average root mean square error (RMSE) of 10.1% and an average root mean square difference (RMSD) of 7.3%. (2) A comparison of image-interpreted percent tree crown cover estimates based on dot counts on Quickbird color images by two different interpreters was more variable (R² = 0.73, RMSE = 14.8%, RMSD = 18.7%) than VCF inter-annual comparisons. (3) Across the circumpolar boreal region, 2005 VCF-Quickbird comparisons were linearly related, with an R² = 0.57, a RMSE = 13.4% and a RMSD = 21.3%, with a tendency to over-estimate areas of low percent tree cover and anomalous VCF results in Scandinavia. The relationship between the VCF estimates and ground reference indicates to potential users that the VCF's tree cover values for individual pixels, particularly those below 20% tree cover, may not be precise enough to monitor 500m pixel-level tree cover in the taiga-tundra transition zone.
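
    The reported statistics can be reproduced from paired tree cover estimates. A minimal sketch computing R² and RMSE of a least-squares fit together with the direct RMSD between the paired estimates; the arrays are assumed to hold percent tree cover for matched sites and are illustrative only.

    ```python
    import numpy as np

    def comparison_stats(x, y):
        """x, y: paired percent tree cover estimates (e.g. VCF vs. image-interpreted).
        Returns R^2 and RMSE of the least-squares fit of y on x, plus the direct
        RMSD between the paired estimates."""
        x = np.asarray(x, float)
        y = np.asarray(y, float)
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
        rmse = np.sqrt(np.mean(resid ** 2))    # scatter about the regression line
        rmsd = np.sqrt(np.mean((x - y) ** 2))  # direct difference between estimates
        return r2, rmse, rmsd
    ```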

  11. Standardization, evaluation and early-phase method validation of an analytical scheme for batch-consistency N-glycosylation analysis of recombinant produced glycoproteins.

    PubMed

    Zietze, Stefan; Müller, Rainer H; Brecht, René

    2008-03-01

    In order to set up a batch-to-batch-consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements for analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). The standardization of the methods was performed through clearly defined standard operating procedures. During evaluation of the methods, the major interest was in determining the loss of oligosaccharides within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After finishing the validation procedure, defined limits for method variability could be calculated and differences observed in consistency analysis could be separated into significant and incidental ones.

  12. Validation of streamflow measurements made with acoustic doppler current profilers

    USGS Publications Warehouse

    Oberg, K.; Mueller, D.S.

    2007-01-01

    The U.S. Geological Survey and other international agencies have collaborated to conduct laboratory and field validations of acoustic Doppler current profiler (ADCP) measurements of streamflow. Laboratory validations made in a large towing basin show that the mean differences between tow cart velocity and ADCP bottom-track and water-track velocities were -0.51 and -1.10%, respectively. Field validations of commercially available ADCPs were conducted by comparing streamflow measurements made with ADCPs to reference streamflow measurements obtained from concurrent mechanical current-meter measurements, stable rating curves, salt-dilution measurements, or acoustic velocity meters. Data from 1,032 transects, comprising 100 discharge measurements, were analyzed from 22 sites in the United States, Canada, Sweden, and The Netherlands. Results of these analyses show that broadband ADCP streamflow measurements are unbiased when compared to the reference discharges regardless of the water mode used for making the measurement. Measurement duration is more important than the number of transects for reducing the uncertainty of the ADCP streamflow measurement. ?? 2007 ASCE.

  13. The Pelvic Organ Prolapse/Urinary Incontinence Sexual Questionnaire (PISQ-12): validation of the Dutch version.

    PubMed

    't Hoen, Lisette A; Utomo, Elaine; Steensma, Anneke B; Blok, Bertil F M; Korfage, Ida J

    2015-09-01

    To establish the reliability and validity of the Dutch version of the Pelvic Organ Prolapse/Urinary Incontinence Sexual Questionnaire (PISQ-12) in women with pelvic floor dysfunction. The PISQ-12 was translated into Dutch following a standardized translation process. A group of 124 women involved in a heterosexual relationship who had had symptoms of urinary incontinence, fecal incontinence and/or pelvic organ prolapse for at least 3 months were eligible for inclusion. A reference group was used for assessment of discriminative ability. Data were analyzed for internal consistency, reproducibility, construct validity, responsiveness, and interpretability. An alteration was made to item 12 and was corrected for during the analysis. The patient group comprised 70 of the 124 eligible women, and the reference group comprised 208 women from a panel representative of the Dutch female population. The Dutch PISQ-12 showed an adequate internal consistency with a Cronbach's alpha of 0.57 - 0.69, increasing with correction for item 12 to 0.69 - 0.75, for the reference and patient group, respectively. Scores in the patient group were lower (32.6 ± 6.9) than in the reference group (36.3 ± 4.8; p = 0.0001), indicating a lower sexual function in the patient group and good discriminative ability. Reproducibility was excellent with an intraclass correlation coefficient for agreement of 0.93 (0.88 - 0.96). A positive correlation was found with the Short Form-12 Health Survey (SF-12) measure representing good criterion validity. Due to the small number of patients who had received treatment at the 6-month follow-up, no significant responsiveness could be established. This study showed that the Dutch version of the PISQ-12 has good validity and reliability. The PISQ-12 will enable Dutch physicians to evaluate sexual dysfunction in women with pelvic floor disorders.
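
    Cronbach's alpha, the internal-consistency measure reported above, follows directly from the item score matrix. A minimal sketch, not the authors' analysis code; the input matrix is assumed to hold one row per respondent and one column per questionnaire item.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, n_items) matrix of item scores.
        alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
        items = np.asarray(items, float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var / total_var)
    ```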

  14. Validation of Reference Genes in mRNA Expression Analysis Applied to the Study of Asthma.

    PubMed

    Segundo-Val, Ignacio San; Sanz-Lozano, Catalina S

    2016-01-01

    The quantitative polymerase chain reaction is the most widely used technique for the study of gene expression. To correct for putative experimental errors of this technique, it is necessary to normalize the expression results of the gene of interest against those obtained for reference genes. Here, we describe an example of the process of selecting reference genes. In this particular case, we select reference genes for expression studies in the peripheral blood mononuclear cells of asthmatic patients.

  15. Short communication: Development of an equation for estimating methane emissions of dairy cows from milk Fourier transform mid-infrared spectra by using reference data obtained exclusively from respiration chambers.

    PubMed

    Vanlierde, A; Soyeurt, H; Gengler, N; Colinet, F G; Froidmont, E; Kreuzer, M; Grandl, F; Bell, M; Lund, P; Olijhoek, D W; Eugène, M; Martin, C; Kuhla, B; Dehareng, F

    2018-05-09

    Evaluation and mitigation of enteric methane (CH4) emissions from ruminant livestock, in particular from dairy cows, have acquired global importance for sustainable, climate-smart cattle production. Based on CH4 reference measurements obtained with the SF6 tracer technique to determine ruminal CH4 production, a current equation permits evaluation of individual daily CH4 emissions of dairy cows based on milk Fourier transform mid-infrared (FT-MIR) spectra. However, the respiration chamber (RC) technique is considered to be more accurate than SF6 to measure CH4 production from cattle. This study aimed to develop an equation that allows estimating CH4 emissions of lactating cows recorded in an RC from corresponding milk FT-MIR spectra and to challenge its robustness and relevance through validation processes and its application on a milk spectral database. This would permit confirming the conclusions drawn with the existing equation based on SF6 reference measurements regarding the potential to estimate daily CH4 emissions of dairy cows from milk FT-MIR spectra. A total of 584 RC reference CH4 measurements (mean ± standard deviation of 400 ± 72 g of CH4/d) and corresponding standardized milk mid-infrared spectra were obtained from 148 individual lactating cows between 7 and 321 d in milk in 5 European countries (Germany, Switzerland, Denmark, France, and Northern Ireland). The developed equation based on RC measurements showed calibration and cross-validation coefficients of determination of 0.65 and 0.57, respectively, which is lower than those obtained earlier by the equation based on 532 SF6 measurements (0.74 and 0.70, respectively). This means that the RC-based model is unable to explain the variability observed in the corresponding reference data as well as the SF6-based model. The standard errors of calibration and cross-validation were lower for the RC model (43 and 47 g/d vs. 66 and 70 g/d for the SF6 version, respectively), indicating that the model based on RC data was closer to actual values. The root mean squared error (RMSE) of calibration of 42 g/d represents only 10% of the overall daily CH4 production, which is 23 g/d lower than the RMSE for the SF6-based equation. During the external validation step an RMSE of 62 g/d was observed. When the RC equation was applied to a standardized spectral database of milk recordings collected in the Walloon region of Belgium between January 2012 and December 2017 (1,515,137 spectra from 132,658 lactating cows in 1,176 different herds), an average ± standard deviation of 446 ± 51 g of CH4/d was estimated, which is consistent with the range of the values measured using both RC and SF6 techniques. This study confirmed that milk FT-MIR spectra could be used as a potential proxy to estimate daily CH4 emissions from dairy cows provided that the variability to predict is covered by the model. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
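
    The abstract does not state the chemometric model behind the prediction equation; partial least squares regression is a common choice for FT-MIR calibrations and is used here purely as an illustrative sketch of how calibration and cross-validation RMSE of the kind reported could be obtained. All names and settings are assumptions.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    def calibrate_ch4_from_spectra(spectra, ch4_ref, n_components=10):
        """spectra: (n_samples, n_wavenumbers) standardized FT-MIR absorbances;
        ch4_ref: respiration-chamber CH4 reference values (g/d).
        Returns the fitted model plus calibration and cross-validation RMSE."""
        ch4_ref = np.asarray(ch4_ref, float).ravel()
        pls = PLSRegression(n_components=n_components)
        pls.fit(spectra, ch4_ref)
        pred_cal = pls.predict(spectra).ravel()
        pred_cv = cross_val_predict(pls, spectra, ch4_ref, cv=10).ravel()
        rmse_cal = np.sqrt(np.mean((pred_cal - ch4_ref) ** 2))
        rmse_cv = np.sqrt(np.mean((pred_cv - ch4_ref) ** 2))
        return pls, rmse_cal, rmse_cv
    ```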

  16. Estimation of the spatial validity of local aerosol measurements in Europe using MODIS data

    NASA Astrophysics Data System (ADS)

    Marcos, Carlos; Gómez-Amo, J. Luis; Pedrós, Roberto; Utrillas, M. Pilar; Martínez-Lozano, J. Antonio

    2013-04-01

    The actual impact of atmospheric aerosols on the Earth's radiative budget is still associated with large uncertainties [IPCC, 2007]. Global monitoring of the aerosol properties and distribution in the atmosphere is needed to improve our knowledge of climate change. The instrumentation used for this purpose can be divided into two main groups: ground-based and satellite-based. Ground-based instruments, like lidars or Sun-photometers, are usually designed to measure accurate local properties of atmospheric aerosols throughout the day. However, the spatial validity of these measurements is conditioned by the aerosol variability within the atmosphere. Satellite-based sensors offer spatially resolved information about aerosols at a global scale, but generally with a worse temporal resolution and in a less detailed way. In this work, the aerosol optical depth (AOD) at 550 nm from MODIS Aqua, product MYD04 [Remer, 2005], is used to estimate the area of validity of local measurements at different reference points, corresponding to the AERONET [Holben, 1998] stations during the 2011-2012 period in Europe. For each case, the local AOD (AODloc) at each reference point is calculated as the averaged MODIS data within a radius of 15 km. Then, the AODloc is compared to the AOD obtained when a larger averaging radius is used (AOD(r)), up to 500 km. Only those cases where more than 50% of the pixels in each averaging area contain valid data are used. Four factors that could affect the spatial variability of aerosols are studied: proximity to the sea, human activity, aerosol load and geographical location (latitude and longitude). For the 76 reference points studied, which are sited in different regions of Europe, we have determined that the root mean squared difference (RMSD) between AODloc and AOD(r), averaged for all cases, increases in a logarithmic way with the averaging radius (RMSD ∝ log(r)), while the linear correlation coefficient (R) decreases following a logarithmic trend (R ∝ -log(r)). Among all the factors studied, the aerosol load is the most influential one in the aerosol spatial variability: for averaging radii smaller than 40 km, the RMSD increases with AODloc. Another important factor is the latitude and longitude: the variation of the RMSD in the AOD with regard to the averaging radius can differ by up to 60%, depending on the location. On the contrary, the proximity to the sea and the amount of population surrounding each reference point do not have a noticeable influence compared to the above-mentioned factors. Holben, B. N., Eck, T. F., Slutsker, I., Buis, J. P., Setzer, A., Vermote, E., Reagan, J. A., Kaufman, Y., Nakajima, T., Lavenu, F., and Smirnov, A.: AERONET - A federated instrument network and data archive for aerosol characterization, Remote Sens. Environ., 66, 1-16, 1998. IPCC (2007). S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor, H.L. Miller (Eds.), Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, UK & New York, USA. Remer, L. A., and co-authors, 2005: The MODIS aerosol algorithm, products, and validation. J. Atmos. Sci., 62, 947-973. doi: http://dx.doi.org/10.1175/JAS3385.1
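
    The RMSD-versus-radius analysis can be reproduced from collocated AOD averages. A minimal sketch, assuming the local and radius-averaged AOD values have already been extracted from the MYD04 product; variable and function names are illustrative.

    ```python
    import numpy as np

    def rmsd_vs_radius(aod_loc, aod_by_radius):
        """aod_loc: (n_cases,) local AOD (15 km average around the reference point);
        aod_by_radius: dict mapping averaging radius in km -> (n_cases,) AOD averaged
        over that radius. Returns (radius, RMSD, correlation) tuples, to which a
        logarithmic trend (RMSD proportional to log r) can then be fitted."""
        aod_loc = np.asarray(aod_loc, float)
        results = []
        for r, aod in sorted(aod_by_radius.items()):
            aod = np.asarray(aod, float)
            rmsd = np.sqrt(np.mean((aod - aod_loc) ** 2))
            corr = np.corrcoef(aod_loc, aod)[0, 1]
            results.append((r, rmsd, corr))
        return results
    ```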

  17. Meta-modeling soil organic carbon sequestration potential and its application at regional scale.

    PubMed

    Luo, Zhongkui; Wang, Enli; Bryan, Brett A; King, Darran; Zhao, Gang; Pan, Xubin; Bende-Michl, Ulrike

    2013-03-01

    Upscaling the results from process-based soil-plant models to assess regional soil organic carbon (SOC) change and sequestration potential is a great challenge due to the lack of detailed spatial information, particularly soil properties. Meta-modeling can be used to simplify and summarize process-based models and significantly reduce the demand for input data and thus could be easily applied on regional scales. We used the pre-validated Agricultural Production Systems sIMulator (APSIM) to simulate the impact of climate, soil, and management on SOC at 613 reference sites across Australia's cereal-growing regions under a continuous wheat system. We then developed a simple meta-model to link the APSIM-modeled SOC change to primary drivers, i.e., the amount of recalcitrant SOC, plant available water capacity of soil, soil pH, and solar radiation, temperature, and rainfall in the growing season. Based on high-resolution soil texture data and 8165 climate data points across the study area, we used the meta-model to assess SOC sequestration potential and the uncertainty associated with the variability of soil characteristics. The meta-model explained 74% of the variation of final SOC content as simulated by APSIM. Applying the meta-model to Australia's cereal-growing regions reveals regional patterns in SOC, with higher SOC stock in cool, wet regions. Overall, the potential SOC stock ranged from 21.14 to 152.71 Mg/ha with a mean of 52.18 Mg/ha. Variation of soil properties induced uncertainty ranging from 12% to 117% with higher uncertainty in warm, wet regions. In general, soils in Australia's cereal-growing regions under continuous wheat production were simulated as a sink of atmospheric carbon dioxide with a mean sequestration potential of 8.17 Mg/ha.
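
    As an illustration of the meta-modeling idea (not the authors' actual model), a simple regression linking process-model output to a handful of site-level drivers can stand in for the full simulator; all data below are synthetic and the driver set merely mirrors the one named in the abstract.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    # Hypothetical driver matrix for 613 reference sites: recalcitrant SOC, plant
    # available water capacity, soil pH, growing-season radiation, temperature, rainfall.
    X = rng.normal(size=(613, 6))
    # Stand-in for APSIM-simulated SOC (Mg/ha) at those sites.
    y_apsim = X @ np.array([0.8, 0.5, -0.3, 0.2, -0.4, 0.6]) + 50 + rng.normal(0, 1, 613)

    meta_model = LinearRegression().fit(X, y_apsim)
    print("variance explained (R^2):", meta_model.score(X, y_apsim))
    # Once fitted, the meta-model can be applied to gridded soil and climate layers
    # to map potential SOC without re-running the process model at every grid cell.
    ```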

  18. Stable region for maxillary dental cast superimposition in adults, studied with the aid of stable miniscrews.

    PubMed

    Chen, G; Chen, S; Zhang, X Y; Jiang, R P; Liu, Y; Shi, F H; Xu, T M

    2011-05-01

    To identify a stable and reproducible reference region to superimpose serial maxillary dental models in adult extraction cases. Fifteen adult volunteers were enrolled. To reduce protrusion, bilateral maxillary first premolars were extracted in all volunteers. Each volunteer received six miniscrews, including two loaded miniscrews used to retract anterior teeth and four unloaded miniscrews. Impressions for maxillary models were taken at T1 (1 week after miniscrew placement) and T2 (17 months later). Dental models were created and then scanned using a laser scanner. Stability of the miniscrews was evaluated, and dental models were registered using stationary miniscrews. The palatal region, where deviation was within 0.5 mm in all subjects, was determined to be the stable region. Reproducibility of the new palatal region for 3D digital model superimposition was evaluated. Deviation of the medial 2/3 of the palatal region between the third rugae and the line in contact with the distal surface of the bilateral maxillary first molars was within 0.5 mm. Tooth movement of 15 subjects was measured to evaluate the validity of the new 3D superimposition method. Displacements were 8.18 ± 2.94 mm (central incisor) and 2.25 ± 0.73 mm (first molar) measured by miniscrew superimposition, while values of 7.81 ± 2.53 mm (central incisor) and 2.29 ± 1.03 mm (first molar) were measured using the 3D palatal vault regional superimposition method; no significant difference was observed. The medial 2/3 of the third rugae and the regional palatal vault dorsal to it is a stable region to register 3D digital models for evaluation of orthodontic tooth movement in adult patients. © 2011 John Wiley & Sons A/S.

  19. Simulation verification techniques study: Simulation performance validation techniques document. [for the space shuttle system

    NASA Technical Reports Server (NTRS)

    Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.

    1975-01-01

    Techniques and support software for the efficient performance of simulation validation are discussed. Overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check case formulation, methods for real-time acquisition and formatting of data from an all-up operational simulator, and methods and criteria for comparison and evaluation of simulation data are included. Vehicle subsystems modules, module integration, special test requirements, and reference data formats are also described.

  20. Quadruple inversion-recovery b-SSFP MRA of the abdomen: initial clinical validation.

    PubMed

    Atanasova, Iliyana P; Lim, Ruth P; Chandarana, Hersh; Storey, Pippa; Bruno, Mary T; Kim, Daniel; Lee, Vivian S

    2014-09-01

    The purpose of this study is to assess the image quality and diagnostic accuracy of non-contrast quadruple inversion-recovery balanced-SSFP MRA (QIR MRA) for detection of aortoiliac disease in a clinical population. QIR MRA was performed in 26 patients referred for routine clinical gadolinium-enhanced MRA (Gd-MRA) for known or suspected aortoiliac disease. Non-contrast images were independently evaluated for image quality and degree of stenosis by two radiologists, using consensus Gd-MRA as the reference standard. Hemodynamically significant stenosis (≥50%) was found in 10% (22/226) of all evaluable segments on Gd-MRA. The sensitivity and specificity for stenosis evaluation by QIR MRA for the two readers were 86%/86% and 95%/93% respectively. Negative predictive value and positive predictive value were 98%/98% and 63%/53% respectively. For stenosis evaluation of the aortoiliac region QIR MRA showed good agreement with the reference standard with high negative predictive value and a tendency to overestimate mild disease presumably due to the flow-dependence of the technique. QIR MRA could be a reasonable alternative to Gd-MRA for ruling out stenosis when contrast is contraindicated due to impaired kidney function or in patients who undergo abdominal MRA for screening purposes. Further work is necessary to improve performance and justify routine clinical use. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  1. Pharmacophore-Based Similarity Scoring for DOCK

    PubMed Central

    2015-01-01

    Pharmacophore modeling incorporates geometric and chemical features of known inhibitors and/or targeted binding sites to rationally identify and design new drug leads. In this study, we have encoded a three-dimensional pharmacophore matching similarity (FMS) scoring function into the structure-based design program DOCK. Validation and characterization of the method are presented through pose reproduction, crossdocking, and enrichment studies. When used alone, FMS scoring dramatically improves pose reproduction success to 93.5% (∼20% increase) and reduces sampling failures to 3.7% (∼6% drop) compared to the standard energy score (SGE) across 1043 protein–ligand complexes. The combined FMS+SGE function further improves success to 98.3%. Crossdocking experiments using FMS and FMS+SGE scoring, for six diverse protein families, similarly showed improvements in success, provided proper pharmacophore references are employed. For enrichment, incorporating pharmacophores during sampling and scoring, in most cases, also yield improved outcomes when docking and rank-ordering libraries of known actives and decoys to 15 systems. Retrospective analyses of virtual screenings to three clinical drug targets (EGFR, IGF-1R, and HIVgp41) using X-ray structures of known inhibitors as pharmacophore references are also reported, including a customized FMS scoring protocol to bias on selected regions in the reference. Overall, the results and fundamental insights gained from this study should benefit the docking community in general, particularly researchers using the new FMS method to guide computational drug discovery with DOCK. PMID:25229837

  2. On the consistency of tomographically imaged lower mantle slabs

    NASA Astrophysics Data System (ADS)

    Shephard, Grace E.; Matthews, Kara J.; Hosseini, Kasra; Domeier, Mathew

    2017-04-01

    Over the last few decades numerous seismic tomography models have been published, each constructed with choices of data input, parameterization and reference model. The broader geoscience community is increasingly utilizing these models, or a selection thereof, to interpret Earth's mantle structure and processes. It follows that seismically identified remnants of subducted slabs have been used to validate, test or refine relative plate motions, absolute plate reference frames, and mantle sinking rates. With an increasing number of models to include, or exclude, the question arises - how robust is a given positive seismic anomaly, inferred to be a slab, across a given suite of tomography models? Here we generate a series of "vote maps" for the lower mantle by comparing 14 seismic tomography models, including 7 s-wave and 7 p-wave models. Considerations include the retention or removal of the mean, the use of a consistent or variable reference model, the statistical value which defines the slab "contour", and the effect of depth interpolation. Preliminary results will be presented that address the depth, location and degree of agreement between seismic tomography models, both for the 14 combined, and between the p-waves and s-waves. The analysis also permits a broader discussion of slab volumes and subduction flux. And whilst the location and geometry of slabs matches some of the documented regions of long-lived subduction, other features do not, illustrating the importance of a robust approach to slab identification.
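
    A vote map of the kind described can be built by thresholding each model on a common grid and summing the binary masks. A minimal sketch; the contour choice and the prior interpolation onto a shared grid are assumptions, not the authors' exact settings.

    ```python
    import numpy as np

    def vote_map(models, threshold=0.0):
        """models: list of (nlat, nlon) arrays of velocity anomalies (%) at one depth,
        already interpolated onto a common grid. Each model casts a vote wherever its
        anomaly exceeds the chosen contour (e.g. 0, or a per-model statistical value);
        the returned map counts votes from 0 to len(models)."""
        models = [np.asarray(m, float) for m in models]
        votes = np.zeros(models[0].shape, dtype=int)
        for m in models:
            votes += (m > threshold)
        return votes
    ```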

  3. A satellite relative motion model including J_2 and J_3 via Vinti's intermediary

    NASA Astrophysics Data System (ADS)

    Biria, Ashley D.; Russell, Ryan P.

    2018-03-01

    Vinti's potential is revisited for analytical propagation of the main satellite problem, this time in the context of relative motion. A particular version of Vinti's spheroidal method is chosen that is valid for arbitrary elliptical orbits, encapsulating J_2, J_3, and generally a partial J_4 in an orbit propagation theory without recourse to perturbation methods. As a child of Vinti's solution, the proposed relative motion model inherits these properties. Furthermore, the problem is solved in oblate spheroidal elements, leading to large regions of validity for the linearization approximation. After offering several enhancements to Vinti's solution, including boosts in accuracy and removal of some singularities, the proposed model is derived and subsequently reformulated so that Vinti's solution is piecewise differentiable. While the model is valid for the critical inclination and nonsingular in the element space, singularities remain in the linear transformation from Earth-centered inertial coordinates to spheroidal elements when the eccentricity is zero or for nearly equatorial orbits. The new state transition matrix is evaluated against numerical solutions including the J_2 through J_5 terms for a wide range of chief orbits and separation distances. The solution is also compared with side-by-side simulations of the original Gim-Alfriend state transition matrix, which considers the J_2 perturbation. Code for computing the resulting state transition matrix and associated reference frame and coordinate transformations is provided online as supplementary material.

  4. Geometry and symmetry in non-equilibrium thermodynamic systems

    NASA Astrophysics Data System (ADS)

    Sonnino, Giorgio

    2017-06-01

    The ultimate aim of this series of works is to establish the closure equations, valid for thermodynamic systems out from the Onsager region, and to describe the geometry and symmetry in thermodynamic systems far from equilibrium. Geometry of a non-equilibrium thermodynamic system is constructed by taking into account the second law of thermodynamics and by imposing the validity of the Glansdorff-Prigogine Universal Criterion of Evolution. These two constraints allow introducing the metrics and the affine connection of the Space of the Thermodynamic Forces, respectively. The Lie group associated to the nonlinear Thermodynamic Coordinate Transformations (TCT) leaving invariant both the entropy production σ and the Glansdorff-Prigogine dissipative quantity P, is also described. The invariance under TCT leads to the formulation of the Thermodynamic Covariance Principle (TCP): The nonlinear closure equations, i.e. the flux-force relations, must be covariant under TCT. In other terms, the fundamental laws of thermodynamics should be manifestly covariant under transformations between the admissible thermodynamic forces (i.e. under TCT). The symmetry properties of a physical system are intimately related to the conservation laws characterizing the thermodynamic system. Noether's theorem gives a precise description of this relation. The macroscopic theory for closure relations, based on this geometrical description and subject to the TCP, is referred to as the Thermodynamic Field Theory (TFT). This theory ensures the validity of the fundamental theorems for systems far from equilibrium.

  5. Retrieval and Validation of Zenith and Slant Path Delays From the Irish GPS Network

    NASA Astrophysics Data System (ADS)

    Hanafin, Jennifer; Jennings, S. Gerard; O'Dowd, Colin; McGrath, Ray; Whelan, Eoin

    2010-05-01

    Retrieval of atmospheric integrated water vapour (IWV) from ground-based GPS receivers and provision of this data product for meteorological applications has been the focus of a number of Europe-wide networks and projects, most recently the EUMETNET GPS water vapour programme. The results presented here are from a project to provide such information about the state of the atmosphere around Ireland for climate monitoring and improved numerical weather prediction. Two geodetic reference GPS receivers have been deployed at Valentia Observatory in Co. Kerry and Mace Head Atmospheric Research Station in Co. Galway, Ireland. These two receivers supplement the existing Ordnance Survey Ireland active network of 17 permanent ground-based receivers. A system to retrieve column-integrated atmospheric water vapour from the data provided by this network has been developed, based on the GPS Analysis at MIT (GAMIT) software package. The data quality of the zenith retrievals has been assessed using co-located radiosondes at the Valentia site and observations from a microwave profiling radiometer at the Mace Head site. Validation of the slant path retrievals requires a numerical weather prediction model and HIRLAM (High-Resolution Limited Area Model) version 7.2, the current operational forecast model in use at Met Éireann for the region, has been used for this validation work. Results from the data processing and comparisons with the independent observations and model will be presented.

  6. Modeling and analysis of PET studies with norepinephrine transporter ligands: the search for a reference region.

    PubMed

    Logan, Jean; Ding, Yu-Shin; Lin, Kuo-Shyan; Pareto, Deborah; Fowler, Joanna; Biegon, Anat

    2005-07-01

    The development of positron emission tomography (PET) ligands for the norepinephrine transporter (NET) has been slow compared to the development of radiotracers for other systems, such as the dopamine (DAT) or the serotonin transporters (SERT). The main reason for this appears to be the high nonspecific (non-NET) binding exhibited by many of these tracers, which makes the identification of a reference region difficult. With other PET ligands the use of a reference region increases the reproducibility of the outcome measure in test/retest studies. The focus of this work is to identify a suitable reference region or means of normalizing data for the NET ligands investigated. We have analyzed the results of PET studies in the baboon brain with labeled reboxetine derivatives (S,S)-[(11)C]O-methyl reboxetine (SS-MRB), (S,S)-[(18)F]fluororeboxetine (SS-FRB) as well as O-[(11)C]nisoxetine and N-[(11)C]nisoxetine (NIS), and, for comparison, the less active (R,R) enantiomers (RR-MRB, RR-FRB) in terms of the distribution volume (DV) using measured arterial input functions. (1) For a given subject, a large variation in DV for successive baseline studies was observed in regions with both high and low NET density. (2) The occipital cortex and the basal ganglia were found to be the regions with the smallest change between baseline (SS-MRB) and pretreatment with cocaine, and were therefore used as a composite reference region for calculation of a distribution volume ratio (DVR). (3) The variability [as measured by the coefficient of variation (CV) = standard deviation/mean] in the distribution volume ratio (DVR) of thalamus (to reference region) was considerably reduced over that of the DV using this composite reference region. (4) Pretreatment with nisoxetine (1.0 mg/kg 10 min prior to tracer) in one study produced (in decreasing order) reductions in thalamus, cerebellum, cingulate and frontal cortex consistent with known NET densities. (5) [(11)C]Nisoxetine had a higher background non-NET binding (DV) than the other tracers reported here, with the basal ganglia (a non-NET region) higher than the thalamus. The reboxetine derivatives show considerable promise as tracers for human PET studies of the norepinephrine system. We have identified a strategy for normalizing DVs to a reference region with the understanding that the DVR for these tracers may not be related to the binding potential in the same way as, for example, for the dopamine tracers, since the non-NET binding may differ between the target and nontarget regions. From our baboon studies the average DVR for thalamus (n = 18) for SS-MRB is 1.8; however, the lower limit is most likely less than 1 due to this difference in non-NET binding.
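
    The outcome measures discussed here reduce to simple ratios: the DVR is the target-region DV divided by the composite reference DV, and test/retest variability is summarized by the coefficient of variation. A minimal sketch; forming the composite reference as the mean of the occipital cortex and basal ganglia DVs is an assumption made for illustration.

    ```python
    import numpy as np

    def dvr_and_cv(dv_target, dv_occipital, dv_basal_ganglia):
        """Distribution volume ratio of a target region (e.g. thalamus) against a
        composite occipital cortex + basal ganglia reference, plus the coefficient
        of variation (std/mean) of the DV and of the DVR across repeated baseline
        studies. Inputs are arrays of DV values over those studies."""
        dv_target = np.asarray(dv_target, float)
        dv_ref = (np.asarray(dv_occipital, float) + np.asarray(dv_basal_ganglia, float)) / 2.0
        dvr = dv_target / dv_ref
        cv = lambda x: np.std(x, ddof=1) / np.mean(x)
        return dvr, cv(dv_target), cv(dvr)
    ```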

  7. NChina16: A stable geodetic reference frame for geological hazard studies in North China

    NASA Astrophysics Data System (ADS)

    Wang, Guoquan; Bao, Yan; Gan, Weijun; Geng, Jianghui; Xiao, Gengru; Shen, Jack S.

    2018-04-01

    We have developed a stable North China Reference Frame 2016 (NChina16) using five years of continuous GPS observations (2011.8-2016.8) from 12 continuously operating reference stations (CORS) fixed to the North China Craton. Applications of NChina16 in landslide and subsidence studies are illustrated in this article. A method for realizing a regional geodetic reference frame is introduced. The primary result of this study is the seven parameters for transforming Cartesian ECEF (Earth-Centered, Earth-Fixed) coordinates X, Y, and Z from the International GNSS Service Reference Frame 2008 (IGS08) to NChina16. The seven parameters include the epoch that is used to align the regional reference frame to IGS08 and the time derivatives of three translations and three rotations. The GIPSY-OASIS (V6.4) software package was used to obtain the precise point positioning (PPP) daily solutions with respect to IGS08. The frame stability of NChina16 is approximately 0.5 mm/year in both horizontal and vertical directions. This study also developed a regional model for correcting seasonal motions superimposed into the vertical component of the GPS-derived displacement time series. Long-term GPS observations (1999-2016) from five CORS in North China were used to develop the seasonal model. According to this study, the PPP daily solutions with respect to NChina16 could achieve 2-3 mm horizontal accuracy and 4-5 mm vertical accuracy after being modified by the regional model. NChina16 will be critical to study geodynamic problems in North China, such as earthquakes, faulting, subsidence, and landslides. The regional reference frame will be periodically updated every few years to mitigate degradation of the frame with time and be synchronized with the update of IGS reference frame.
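
    Applying a regional frame of this kind amounts to a time-dependent, small-angle Helmert transformation of ECEF coordinates. A minimal sketch with placeholder parameters; the published NChina16 translation and rotation rates, the alignment epoch, and their sign convention must be taken from the paper itself.

    ```python
    import numpy as np

    def igs08_to_regional(xyz, t, t0, trans_rates, rot_rates):
        """Transform ECEF coordinates (m) from IGS08 to a regional frame defined,
        as for NChina16, by an alignment epoch t0 plus the time derivatives of three
        translations (m/yr) and three rotations (rad/yr). Small-angle Helmert form.
        xyz: (3,) or (n, 3) array; t, t0 in decimal years."""
        xyz = np.atleast_2d(np.asarray(xyz, float))
        dt = t - t0
        tx, ty, tz = np.asarray(trans_rates, float) * dt
        rx, ry, rz = np.asarray(rot_rates, float) * dt
        # small-rotation matrix applied to the coordinates
        r = np.array([[0.0, -rz,  ry],
                      [ rz, 0.0, -rx],
                      [-ry,  rx, 0.0]])
        out = xyz + np.array([tx, ty, tz]) + xyz @ r.T
        return out.squeeze()
    ```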

  8. 48 CFR 702.170-17 - Automated Directives System.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... “Automated Directives System” (“ADS”) sets forth the Agency's policies and essential procedures, as well as supplementary informational references. It contains six functional series, interim policy updates, valid USAID handbook chapters, a resource library, and a glossary. References to “ADS” throughout this chapter 7 are...

  9. Barriers to Implementing Treatment Integrity Procedures: Survey of Treatment Outcome Researchers

    ERIC Educational Resources Information Center

    Perepletchikova, Francheska; Hilt, Lori M.; Chereji, Elizabeth; Kazdin, Alan E.

    2009-01-01

    Treatment integrity refers to implementing interventions as intended. Treatment integrity is critically important for experimental validity and for drawing valid inferences regarding the relationship between treatment and outcome. Yet, it is rarely adequately addressed in psychotherapy research. The authors examined barriers to treatment integrity…

  10. Validation of the Soil Moisture Active Passive mission using USDA-ARS experimental watersheds

    USDA-ARS?s Scientific Manuscript database

    The calibration and validation program of the Soil Moisture Active Passive mission (SMAP) relies upon an international cooperative of in situ networks to provide ground truth references across a variety of landscapes. The USDA Agricultural Research Service operates several experimental watersheds wh...

  11. Semantics and pragmatics of social influence: how affirmations and denials affect beliefs in referent propositions.

    PubMed

    Gruenfeld, D H; Wyer, R S

    1992-01-01

    Ss read either affirmations or denials of target propositions that ostensibly came from either newspapers or reference volumes. Denials of the validity of a proposition that was already assumed to be false increased Ss' beliefs in this proposition. The effect generalized to beliefs in related propositions that could be used to support the target's validity. When denials came from a newspaper, their "boomerang effect" was nearly equal in magnitude to the direct effect of affirming the target proposition's validity. When Ss were asked explicitly to consider the implications of the assertions, however, the impact of denials was eliminated. Affirmations of a target proposition that was already assumed to be true also had a boomerang effect. Results have implications for the effects of both semantic and pragmatic processing of assertions on belief change.

  12. Validation of IRDFF in 252Cf standard and IRDF-2002 reference neutron fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simakov, Stanislav; Capote Noy, Roberto; Greenwood, Lawrence R.

    The results of validation of the latest release of the International Reactor Dosimetry and Fusion File, IRDFF-1.03, in the standard 252Cf(s.f.) and reference 235U(nth,f) neutron benchmark fields are presented. The spectrum-averaged cross sections were shown to confirm the recommended spectrum for the 252Cf spontaneous fission source; that was not the case for the current recommended spectra for 235U(nth,f). IRDFF was also validated in the spectra of the research reactor facilities ISNF, Sigma-Sigma and YAYOI, which are available in the IRDF-2002 collection. Before this analysis, the ISNF spectrum was resimulated to remove unphysical oscillations in the spectrum. IRDFF-1.03 was shown to reasonably reproduce the spectrum-averaged data measured in these fields, except for the case of YAYOI.

  13. Characterizing functional lung heterogeneity in COPD using reference equations for CT scan-measured lobar volumes.

    PubMed

    Come, Carolyn E; Diaz, Alejandro A; Curran-Everett, Douglas; Muralidhar, Nivedita; Hersh, Craig P; Zach, Jordan A; Schroeder, Joyce; Lynch, David A; Celli, Bartolome; Washko, George R

    2013-06-01

    CT scanning is increasingly used to characterize COPD. Although it is possible to obtain CT scan-measured lung lobe volumes, normal ranges remain unknown. Using COPDGene data, we developed reference equations for lobar volumes at maximal inflation (total lung capacity [TLC]) and relaxed exhalation (approximating functional residual capacity [FRC]). Linear regression was used to develop race-specific (non-Hispanic white [NHW], African American) reference equations for lobar volumes. Covariates included height and sex. Models were developed in a derivation cohort of 469 subjects with normal pulmonary function and validated in 546 similar subjects. These cohorts were combined to produce final prediction equations, which were applied to 2,191 subjects with old GOLD (Global Initiative for Chronic Obstructive Lung Disease) stage II to IV COPD. In the derivation cohort, women had smaller lobar volumes than men. Height positively correlated with lobar volumes. Adjusting for height, NHWs had larger total lung and lobar volumes at TLC than African Americans; at FRC, NHWs only had larger lower lobes. Age and weight had no effect on lobar volumes at TLC but had small effects at FRC. In subjects with COPD at TLC, upper lobes exceeded 100% of predicted values in GOLD II disease; lower lobes were only inflated to this degree in subjects with GOLD IV disease. At FRC, gas trapping was severe irrespective of disease severity and appeared uniform across the lobes. Reference equations for lobar volumes may be useful in assessing regional lung dysfunction and how it changes in response to pharmacologic therapies and surgical or endoscopic lung volume reduction.
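
    A reference equation of the kind described here can be illustrated as an ordinary least-squares fit of lobar volume on height and sex, with percent-predicted values computed for new subjects. The sketch below uses invented data and coefficients purely for illustration; it is not the COPDGene model.

```python
import numpy as np

# Hypothetical training data: height (cm), sex (1 = male, 0 = female),
# and an upper-lobe volume (L) at TLC. All values are invented.
height = np.array([160, 172, 181, 155, 168, 176], dtype=float)
sex = np.array([0, 1, 1, 0, 0, 1], dtype=float)
volume = np.array([1.05, 1.42, 1.60, 0.95, 1.15, 1.50])

# Reference equation of the form: volume = b0 + b1*height + b2*sex
X = np.column_stack([np.ones_like(height), height, sex])
coef, *_ = np.linalg.lstsq(X, volume, rcond=None)

def percent_predicted(observed, height_cm, is_male):
    """Express an observed lobar volume as a percentage of the predicted value."""
    predicted = coef[0] + coef[1] * height_cm + coef[2] * is_male
    return 100.0 * observed / predicted

print(percent_predicted(1.7, 175, 1))  # values >100% would suggest hyperinflation
```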

  14. Semi-automated mapping of burned areas in semi-arid ecosystems using MODIS time-series imagery

    NASA Astrophysics Data System (ADS)

    Hardtke, Leonardo A.; Blanco, Paula D.; Valle, Héctor F. del; Metternicht, Graciela I.; Sione, Walter F.

    2015-06-01

    Understanding spatial and temporal patterns of burned areas at regional scales provides a long-term perspective of fire processes and their effects on ecosystems and vegetation recovery patterns, and it is a key factor in designing prevention and post-fire restoration plans and strategies. Remote sensing has become the most widely used tool to detect fire affected areas over large tracts of land (e.g., ecosystem, regional and global levels). Standard satellite burned area and active fire products derived from the 500-m Moderate Resolution Imaging Spectroradiometer (MODIS) and the Satellite Pour l'Observation de la Terre (SPOT) are available to this end. However, prior research cautions on the use of these global-scale products for regional and sub-regional applications. Consequently, we propose a novel semi-automated algorithm for identification and mapping of burned areas at regional scale. The semi-arid Monte shrublands, a biome covering 240,000 km2 in the western part of Argentina and exposed to seasonal bushfires, was selected as the test area. The algorithm uses a set of normalized burned ratio index products derived from MODIS time series; using a two-phased cycle, it first detects potentially burned pixels while keeping a low commission error (false detection of burned areas), and subsequently labels them as seed patches. Region growing image segmentation algorithms are applied to the seed patches in the second phase, to define the perimeter of fire affected areas while decreasing omission errors (missing real burned areas). Independently-derived Landsat ETM+ burned-area reference data were used for validation purposes. Additionally, the performance of the adaptive algorithm was assessed against standard global fire products derived from the MODIS Aqua and Terra satellites, namely the total burned area product (MCD45A1) and the active fire product (MOD14), and against the L3JRC SPOT VEGETATION 1 km GLOBCARBON product. The correlation between the size of burned areas detected by the global fire products and independently-derived Landsat reference data ranged from R2 = 0.01-0.28, while our algorithm showed a much stronger correlation (R2 = 0.96). Our findings confirm prior research calling for caution when using the global fire products locally or regionally.
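
    The two-phase strategy described above can be sketched as a strict threshold on a burn-severity index to obtain seed pixels, followed by region growing into a looser candidate mask. The thresholds, the use of a differenced normalized burn ratio (dNBR), and the synthetic scene below are illustrative assumptions, not the parameters of the published algorithm.

```python
import numpy as np
from scipy import ndimage

def map_burned_areas(dnbr, seed_thresh=0.45, grow_thresh=0.20):
    """Two-phase burned-area mapping sketch: strict thresholding of a dNBR
    image yields seed pixels with low commission error; region growing into a
    looser candidate mask then recovers the burn perimeter to reduce omission.

    Thresholds are placeholders, not the values used in the cited study.
    """
    seeds = dnbr > seed_thresh
    candidates = dnbr > grow_thresh
    grown = seeds.copy()
    while True:
        # Grow one pixel outward, but only into candidate (loosely burned) pixels.
        expanded = ndimage.binary_dilation(grown) & candidates
        if np.array_equal(expanded, grown):
            return expanded
        grown = expanded

dnbr = np.random.rand(50, 50)          # synthetic stand-in for a MODIS-derived scene
burned = map_burned_areas(dnbr)
print(burned.sum(), "pixels flagged as burned")
```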

  15. International collaborative study of the endogenous reference gene, sucrose phosphate synthase (SPS), used for qualitative and quantitative analysis of genetically modified rice.

    PubMed

    Jiang, Lingxi; Yang, Litao; Zhang, Haibo; Guo, Jinchao; Mazzara, Marco; Van den Eede, Guy; Zhang, Dabing

    2009-05-13

    One rice (Oryza sativa) gene, sucrose phosphate synthase (SPS), has been proven to be a suitable endogenous reference gene for genetically modified (GM) rice detection in a previous study. Reported herein are the results of an international collaborative ring trial for validation of the SPS gene as an endogenous reference gene and its optimized qualitative and quantitative polymerase chain reaction (PCR) systems. A total of 12 genetically modified organism (GMO) detection laboratories from seven countries participated in the ring trial and returned their results. The validated results confirmed the species specificity of the method through testing 10 plant genomic DNAs, low heterogeneity, and a stable single-copy number of the rice SPS gene among 7 indica varieties and 5 japonica varieties. The SPS qualitative PCR assay was validated with a limit of detection (LOD) of 0.1%, which corresponded to about 230 copies of haploid rice genomic DNA, while the limit of quantification (LOQ) for the quantitative PCR system was about 23 copies of haploid rice genomic DNA, with acceptable PCR efficiency and linearity. Furthermore, the bias between the test and true values of eight blind samples ranged from 5.22 to 26.53%. Thus, we believe that the SPS gene is suitable for use as an endogenous reference gene for the identification and quantification of GM rice and its derivatives.

  16. Accurate reconstruction of 3D cardiac geometry from coarsely-sliced MRI.

    PubMed

    Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Berenfeld, Omer; Snyder, Brett; Boyers, Pamela; Gold, Jeffrey

    2014-02-01

    We present a comprehensive validation analysis to assess the geometric impact of using coarsely-sliced short-axis images to reconstruct patient-specific cardiac geometry. The methods utilize high-resolution diffusion tensor MRI (DTMRI) datasets as reference geometries from which synthesized coarsely-sliced datasets simulating in vivo MRI were produced. 3D models are reconstructed from the coarse data using variational implicit surfaces through a commonly used modeling tool, CardioViz3D. The resulting geometries were then compared to the reference DTMRI models from which they were derived to analyze how well the synthesized geometries approximate the reference anatomy. Averaged over seven hearts, 95% spatial overlap, less than 3% volume variability, and a normal-to-surface distance of 0.32 mm were observed between the synthesized myocardial geometries reconstructed from 8 mm sliced images and the reference data. The results provide strong supportive evidence to validate the hypothesis that coarsely-sliced MRI may be used to accurately reconstruct geometric ventricular models. Furthermore, the use of DTMRI for validation of in vivo MRI presents a novel benchmark procedure for studies which aim to substantiate their modeling and simulation methods using coarsely-sliced cardiac data. In addition, the paper outlines a suggested original procedure for deriving image-based ventricular models using the CardioViz3D software. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
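
    Overlap and volume agreement metrics of the kind reported here can be computed from binary segmentation masks, for example as a Dice coefficient and a percent volume difference. The sketch below uses synthetic masks; the study's exact metric definitions may differ.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Spatial overlap between two binary segmentations (1 = myocardium)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def volume_difference_percent(mask_a, mask_b, voxel_volume_mm3):
    """Absolute volume difference of mask_a relative to the reference mask_b."""
    va = mask_a.sum() * voxel_volume_mm3
    vb = mask_b.sum() * voxel_volume_mm3
    return 100.0 * abs(va - vb) / vb

# Synthetic example: a reference mask and a slightly eroded "reconstructed" mask.
ref = np.zeros((40, 40, 40), dtype=bool)
ref[10:30, 10:30, 10:30] = True
recon = np.zeros_like(ref)
recon[11:30, 10:30, 10:30] = True

print("Dice:", dice_coefficient(recon, ref))
print("Volume diff (%):", volume_difference_percent(recon, ref, voxel_volume_mm3=1.0))
```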

  17. Certified reference materials (GBW09170 and 09171) of creatinine in human serum.

    PubMed

    Dai, Xinhua; Fang, Xiang; Shao, Mingwu; Li, Ming; Huang, Zejian; Li, Hongmei; Jiang, You; Song, Dewei; He, Yajuan

    2011-02-15

    Creatinine is the most widely used clinical marker for assessing renal function. Concentrations of creatinine in human serum need to be carefully checked in order to ensure accurate diagnosis of renal function. Therefore, development of certified reference materials (CRMs) of creatinine in serum is of increasing importance. In this study, two new CRMs (Nos. GBW09170 and 09171) for creatinine in human serum have been developed. They were prepared from pooled serum from several dozen healthy people and from kidney disease patients, respectively. The certified values of 8.10 and 34.1 mg/kg for these two CRMs were assigned by a liquid chromatography-isotope dilution mass spectrometry (LC-IDMS) method, which was validated using the standard reference material SRM909b (obtained from the National Institute of Standards and Technology, NIST). The expanded uncertainties of the certified values for the low and high concentrations were estimated to be 1.2 and 1.1%, respectively. The certified values were further confirmed by an international intercomparison for the determination of creatinine in human serum (Consultative Committee for Amount of Substance, CCQM, key comparison CCQM-K80). These new CRMs of creatinine in pooled human serum are entirely native, with no additional creatinine spiked in for enrichment. These new CRMs are capable of validating routine clinical methods for ensuring accuracy, reliability and comparability of analytical results from different clinical laboratories. They can also be used for instrument validation, development of secondary reference materials, and evaluating the accuracy of high order clinical methods for the determination of creatinine in human serum. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Definitive Characterization of CA 19-9 in Resectable Pancreatic Cancer Using a Reference Set of Serum and Plasma Specimens.

    PubMed

    Haab, Brian B; Huang, Ying; Balasenthil, Seetharaman; Partyka, Katie; Tang, Huiyuan; Anderson, Michelle; Allen, Peter; Sasson, Aaron; Zeh, Herbert; Kaul, Karen; Kletter, Doron; Ge, Shaokui; Bern, Marshall; Kwon, Richard; Blasutig, Ivan; Srivastava, Sudhir; Frazier, Marsha L; Sen, Subrata; Hollingsworth, Michael A; Rinaudo, Jo Ann; Killary, Ann M; Brand, Randall E

    2015-01-01

    The validation of candidate biomarkers often is hampered by the lack of a reliable means of assessing and comparing performance. We present here a reference set of serum and plasma samples to facilitate the validation of biomarkers for resectable pancreatic cancer. The reference set includes a large cohort of stage I-II pancreatic cancer patients, recruited from 5 different institutions, and relevant control groups. We characterized the performance of the current best serological biomarker for pancreatic cancer, CA 19-9, using plasma samples from the reference set to provide a benchmark for future biomarker studies and to further our knowledge of CA 19-9 in early-stage pancreatic cancer and the control groups. CA 19-9 distinguished pancreatic cancers from the healthy and chronic pancreatitis groups with an average sensitivity and specificity of 70-74%, similar to previous studies using all stages of pancreatic cancer. Chronic pancreatitis patients did not show CA 19-9 elevations, but patients with benign biliary obstruction had elevations nearly as high as the cancer patients. We gained additional information about the biomarker by comparing two distinct assays. The two CA 19-9 assays agreed well in overall performance but diverged in measurements of individual samples, potentially due to subtle differences in antibody specificity as revealed by glycan array analysis. Thus, the reference set promises to be a valuable resource for biomarker validation and comparison, and the CA 19-9 data presented here will be useful for benchmarking and for exploring relationships to CA 19-9.

  19. Definitive Characterization of CA 19-9 in Resectable Pancreatic Cancer Using a Reference Set of Serum and Plasma Specimens

    PubMed Central

    Haab, Brian B.; Huang, Ying; Balasenthil, Seetharaman; Partyka, Katie; Tang, Huiyuan; Anderson, Michelle; Allen, Peter; Sasson, Aaron; Zeh, Herbert; Kaul, Karen; Kletter, Doron; Ge, Shaokui; Bern, Marshall; Kwon, Richard; Blasutig, Ivan; Srivastava, Sudhir; Frazier, Marsha L.; Sen, Subrata; Hollingsworth, Michael A.; Rinaudo, Jo Ann; Killary, Ann M.; Brand, Randall E.

    2015-01-01

    The validation of candidate biomarkers often is hampered by the lack of a reliable means of assessing and comparing performance. We present here a reference set of serum and plasma samples to facilitate the validation of biomarkers for resectable pancreatic cancer. The reference set includes a large cohort of stage I-II pancreatic cancer patients, recruited from 5 different institutions, and relevant control groups. We characterized the performance of the current best serological biomarker for pancreatic cancer, CA 19–9, using plasma samples from the reference set to provide a benchmark for future biomarker studies and to further our knowledge of CA 19–9 in early-stage pancreatic cancer and the control groups. CA 19–9 distinguished pancreatic cancers from the healthy and chronic pancreatitis groups with an average sensitivity and specificity of 70–74%, similar to previous studies using all stages of pancreatic cancer. Chronic pancreatitis patients did not show CA 19–9 elevations, but patients with benign biliary obstruction had elevations nearly as high as the cancer patients. We gained additional information about the biomarker by comparing two distinct assays. The two CA 19–9 assays agreed well in overall performance but diverged in measurements of individual samples, potentially due to subtle differences in antibody specificity as revealed by glycan array analysis. Thus, the reference set promises to be a valuable resource for biomarker validation and comparison, and the CA 19–9 data presented here will be useful for benchmarking and for exploring relationships to CA 19–9. PMID:26431551

  20. Quantitative determination and classification of energy drinks using near-infrared spectroscopy.

    PubMed

    Rácz, Anita; Héberger, Károly; Fodor, Marietta

    2016-09-01

    Almost a hundred commercially available energy drink samples from Hungary, Slovakia, and Greece were collected for the quantitative determination of their caffeine and sugar content with FT-NIR spectroscopy and high-performance liquid chromatography (HPLC). Calibration models were built with partial least-squares regression (PLSR). An HPLC-UV method was used to measure the reference values for caffeine content, while sugar contents were measured with the Schoorl method. Both the nominal sugar content (as indicated on the cans) and the measured sugar concentration were used as references. Although the Schoorl method has larger error and bias, appropriate models could be developed using both references. The validation of the models was based on sevenfold cross-validation and external validation. FT-NIR analysis is a good candidate to replace the HPLC-UV method, because it is much cheaper than any chromatographic method, while it is also more time-efficient. The combination of FT-NIR with multidimensional chemometric techniques like PLSR can be a good option for the detection of low caffeine concentrations in energy drinks. Moreover, three types of energy drinks that contain (i) taurine, (ii) arginine, and (iii) neither of these two components were classified correctly using principal component analysis and linear discriminant analysis. Such classifications are important for the detection of adulterated samples and for quality control as well. In this case, more than a hundred samples were used for the evaluation. The classification was validated with cross-validation and several randomization tests (X-scrambling). Graphical Abstract: The way of energy drinks from cans to appropriate chemometric models.
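
    A PLSR calibration with sevenfold cross-validation, as used in this study, can be sketched with standard chemometrics tooling. The example below uses synthetic spectra and an assumed five-component model; the actual spectral preprocessing and component selection of the study are not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict

# Synthetic stand-ins: rows are samples, columns are NIR absorbances;
# y mimics an HPLC-UV reference caffeine concentration.
rng = np.random.default_rng(0)
X = rng.normal(size=(96, 200))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=96)

pls = PLSRegression(n_components=5)                       # assumed component count
cv = KFold(n_splits=7, shuffle=True, random_state=0)      # sevenfold CV as in the study
y_cv = cross_val_predict(pls, X, y, cv=cv).ravel()

rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print("RMSECV:", rmsecv)
```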

  1. Sensitivity of regression calibration to non-perfect validation data with application to the Norwegian Women and Cancer Study.

    PubMed

    Buonaccorsi, John P; Dalen, Ingvild; Laake, Petter; Hjartåker, Anette; Engeset, Dagrun; Thoresen, Magne

    2015-04-15

    Measurement error occurs when we observe error-prone surrogates, rather than true values. It is common in observational studies and especially so in epidemiology, in nutritional epidemiology in particular. Correcting for measurement error has become common, and regression calibration is the most popular way to account for measurement error in continuous covariates. We consider its use in the context where there are validation data, which are used to calibrate the true values given the observed covariates. We allow for the case that the true value itself may not be observed in the validation data, but instead, a so-called reference measure is observed. The regression calibration method relies on certain assumptions. This paper examines possible biases in regression calibration estimators when some of these assumptions are violated. More specifically, we allow for the fact that (i) the reference measure may not necessarily be an 'alloyed gold standard' (i.e., unbiased) for the true value; (ii) there may be correlated random subject effects contributing to the surrogate and reference measures in the validation data; and (iii) the calibration model itself may not be the same in the validation study as in the main study; that is, it is not transportable. We expand on previous work to provide a general result, which characterizes potential bias in the regression calibration estimators as a result of any combination of the aforementioned violations. We then illustrate some of the general results with data from the Norwegian Women and Cancer Study. Copyright © 2015 John Wiley & Sons, Ltd.
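
    The basic regression calibration estimator can be sketched as follows: regress the reference measure on the surrogate in the validation data, then substitute the calibrated covariate into the main-study outcome model. The simulation below assumes the reference measure is unbiased and the calibration model is transportable; as the paper emphasizes, violating those assumptions biases the resulting estimate. All values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Validation data: surrogate W and reference measure R observed together ---
n_val = 200
x_true = rng.normal(size=n_val)
w_val = x_true + rng.normal(scale=0.8, size=n_val)      # error-prone surrogate
r_val = x_true + rng.normal(scale=0.3, size=n_val)      # reference measure

# Calibration model: approximate E[X | W] by regressing R on W.
A = np.column_stack([np.ones(n_val), w_val])
calib, *_ = np.linalg.lstsq(A, r_val, rcond=None)

# --- Main study: only W observed; substitute calibrated values in outcome model ---
n_main = 1000
x_main = rng.normal(size=n_main)
w_main = x_main + rng.normal(scale=0.8, size=n_main)
y_main = 1.0 + 0.5 * x_main + rng.normal(scale=1.0, size=n_main)

x_hat = calib[0] + calib[1] * w_main
B = np.column_stack([np.ones(n_main), x_hat])
beta_rc, *_ = np.linalg.lstsq(B, y_main, rcond=None)
print("Regression-calibration slope estimate:", beta_rc[1])   # near 0.5 if assumptions hold
```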

  2. Creation and Initial Validation of the International Dysphagia Diet Standardisation Initiative Functional Diet Scale

    PubMed Central

    Steele, Catriona M.; Namasivayam-MacDonald, Ashwini M.; Guida, Brittany T.; Cichero, Julie A.; Duivestein, Janice; Hanson, Ben; Lam, Peter; Riquelme, Luis F.

    2018-01-01

    Objective: To assess consensual validity, interrater reliability, and criterion validity of the International Dysphagia Diet Standardisation Initiative Functional Diet Scale, a new functional outcome scale intended to capture the severity of oropharyngeal dysphagia, as represented by the degree of diet texture restriction recommended for the patient. Design: Participants assigned International Dysphagia Diet Standardisation Initiative Functional Diet Scale scores to 16 clinical cases. Consensual validity was measured against reference scores determined by an author reference panel. Interrater reliability was measured overall and across quartile subsets of the dataset. Criterion validity was evaluated versus Functional Oral Intake Scale (FOIS) scores assigned by survey respondents to the same case scenarios. Feedback was requested regarding ease and likelihood of use. Setting: Web-based survey. Participants: Respondents (N=170) from 29 countries. Interventions: Not applicable. Main Outcome Measures: Consensual validity (percent agreement and Kendall τ), criterion validity (Spearman rank correlation), and interrater reliability (Kendall concordance and intraclass coefficients). Results: The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed strong consensual validity, criterion validity, and interrater reliability. Scenarios involving liquid-only diets, transition from nonoral feeding, or trial diet advances in therapy showed the poorest consensus, indicating a need for clear instructions on how to score these situations. The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed greater sensitivity than the FOIS to specific changes in diet. Most (>70%) respondents indicated enthusiasm for implementing the International Dysphagia Diet Standardisation Initiative Functional Diet Scale. Conclusions: This initial validation study suggests that the International Dysphagia Diet Standardisation Initiative Functional Diet Scale has strong consensual and criterion validity and can be used reliably by clinicians to capture diet texture restriction and progression in people with dysphagia. PMID:29428348

  3. Creation and Initial Validation of the International Dysphagia Diet Standardisation Initiative Functional Diet Scale.

    PubMed

    Steele, Catriona M; Namasivayam-MacDonald, Ashwini M; Guida, Brittany T; Cichero, Julie A; Duivestein, Janice; Hanson, Ben; Lam, Peter; Riquelme, Luis F

    2018-05-01

    To assess consensual validity, interrater reliability, and criterion validity of the International Dysphagia Diet Standardisation Initiative Functional Diet Scale, a new functional outcome scale intended to capture the severity of oropharyngeal dysphagia, as represented by the degree of diet texture restriction recommended for the patient. Participants assigned International Dysphagia Diet Standardisation Initiative Functional Diet Scale scores to 16 clinical cases. Consensual validity was measured against reference scores determined by an author reference panel. Interrater reliability was measured overall and across quartile subsets of the dataset. Criterion validity was evaluated versus Functional Oral Intake Scale (FOIS) scores assigned by survey respondents to the same case scenarios. Feedback was requested regarding ease and likelihood of use. Web-based survey. Respondents (N=170) from 29 countries. Not applicable. Consensual validity (percent agreement and Kendall τ), criterion validity (Spearman rank correlation), and interrater reliability (Kendall concordance and intraclass coefficients). The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed strong consensual validity, criterion validity, and interrater reliability. Scenarios involving liquid-only diets, transition from nonoral feeding, or trial diet advances in therapy showed the poorest consensus, indicating a need for clear instructions on how to score these situations. The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed greater sensitivity than the FOIS to specific changes in diet. Most (>70%) respondents indicated enthusiasm for implementing the International Dysphagia Diet Standardisation Initiative Functional Diet Scale. This initial validation study suggests that the International Dysphagia Diet Standardisation Initiative Functional Diet Scale has strong consensual and criterion validity and can be used reliably by clinicians to capture diet texture restriction and progression in people with dysphagia. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  4. Internal validation of the GlobalFiler™ Express PCR Amplification Kit for the direct amplification of reference DNA samples on a high-throughput automated workflow.

    PubMed

    Flores, Shahida; Sun, Jie; King, Jonathan; Budowle, Bruce

    2014-05-01

    The GlobalFiler™ Express PCR Amplification Kit uses 6-dye fluorescent chemistry to enable multiplexing of 21 autosomal STRs, 1 Y-STR, 1 Y-indel and the sex-determining marker amelogenin. The kit is specifically designed for processing reference DNA samples in a high throughput manner. Validation studies were conducted to assess the performance and define the limitations of this direct amplification kit for typing blood and buccal reference DNA samples on various punchable collection media. Studies included thermal cycling sensitivity, reproducibility, precision, sensitivity of detection, minimum detection threshold, system contamination, stochastic threshold and concordance. Results showed that optimal amplification and injection parameters for a 1.2mm punch from blood and buccal samples were 27 and 28 cycles, respectively, combined with a 12s injection on an ABI 3500xL Genetic Analyzer. Minimum detection thresholds were set at 100 and 120RFUs for 27 and 28 cycles, respectively, and it was suggested that data from positive amplification controls provided a better threshold representation. Stochastic thresholds were set at 250 and 400RFUs for 27 and 28 cycles, respectively, as stochastic effects increased with cycle number. The minimum amount of input DNA resulting in a full profile was 0.5ng, however, the optimum range determined was 2.5-10ng. Profile quality from the GlobalFiler™ Express Kit and the previously validated AmpFlSTR(®) Identifiler(®) Direct Kit was comparable. The validation data support that reliable DNA typing results from reference DNA samples can be obtained using the GlobalFiler™ Express PCR Amplification Kit. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Development and Validation of a Computational Model for Androgen Receptor Activity

    PubMed Central

    2016-01-01

    Testing thousands of chemicals to identify potential androgen receptor (AR) agonists or antagonists would cost millions of dollars and take decades to complete using current validated methods. High-throughput in vitro screening (HTS) and computational toxicology approaches can more rapidly and inexpensively identify potential androgen-active chemicals. We integrated 11 HTS ToxCast/Tox21 in vitro assays into a computational network model to distinguish true AR pathway activity from technology-specific assay interference. The in vitro HTS assays probed perturbations of the AR pathway at multiple points (receptor binding, coregulator recruitment, gene transcription, and protein production) and multiple cell types. Confirmatory in vitro antagonist assay data and cytotoxicity information were used as additional flags for potential nonspecific activity. Validating such alternative testing strategies requires high-quality reference data. We compiled 158 putative androgen-active and -inactive chemicals from a combination of international test method validation efforts and semiautomated systematic literature reviews. Detailed in vitro assay information and results were compiled into a single database using a standardized ontology. Reference chemical concentrations that activated or inhibited AR pathway activity were identified to establish a range of potencies with reproducible reference chemical results. Comparison with existing Tier 1 AR binding data from the U.S. EPA Endocrine Disruptor Screening Program revealed that the model identified binders at relevant test concentrations (<100 μM) and was more sensitive to antagonist activity. The AR pathway model based on the ToxCast/Tox21 assays had balanced accuracies of 95.2% for agonist (n = 29) and 97.5% for antagonist (n = 28) reference chemicals. Out of 1855 chemicals screened in the AR pathway model, 220 chemicals demonstrated AR agonist or antagonist activity and an additional 174 chemicals were predicted to have potential weak AR pathway activity. PMID:27933809
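
    The balanced accuracy reported for the reference chemicals is simply the average of sensitivity and specificity against the curated active/inactive labels. A minimal sketch, with invented labels rather than the ToxCast/Tox21 results:

```python
def balanced_accuracy(predicted, reference):
    """Balanced accuracy = (sensitivity + specificity) / 2, computed against
    reference chemicals labeled 1 (active) or 0 (inactive)."""
    tp = sum(p and r for p, r in zip(predicted, reference))
    tn = sum((not p) and (not r) for p, r in zip(predicted, reference))
    fp = sum(p and (not r) for p, r in zip(predicted, reference))
    fn = sum((not p) and r for p, r in zip(predicted, reference))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2.0

# Toy call with invented labels, not the AR pathway model's reference chemical results.
print(balanced_accuracy([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))
```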

  6. A STUDY ON A COOPERATIVE RELATIONSHIP TO THE IMPROVEMENT OF THE REGIONAL FIRE FIGHTING VALIDITY -Case Study in Bangkok, Thailand-

    NASA Astrophysics Data System (ADS)

    Sripramai, Keerati; Oikawa, Yasushi; Watanabe, Hiroshi; Katada, Toshitaka

    Generally, in order to improve regional fire fighting validity, the indispensable strategies are not only a reinforcement of the governmental fire fighting ability, but also a strengthening of the cooperative relationship between governmental and non-governmental fire fighting capabilities. However, for practical purposes, the effective strategy should differ depending on the actual situation in the subject area. So, in this study, we grasp the actual state and background of the problems that need to be solved for the improvement of the regional fire fighting validity in Bangkok as a case study, and examine the appropriate solution focusing on the relationship between official and voluntary fire fighting. Through some practicable activities such as interviews, investigations, and making the regional fire fighting validity map, it became clear that the problems of an uncooperative relationship and the lack of trust between stakeholders should be solved first and foremost.

  7. Interlaboratory Reproducibility of Droplet Digital Polymerase Chain Reaction Using a New DNA Reference Material Format.

    PubMed

    Pinheiro, Leonardo B; O'Brien, Helen; Druce, Julian; Do, Hongdo; Kay, Pippa; Daniels, Marissa; You, Jingjing; Burke, Daniel; Griffiths, Kate; Emslie, Kerry R

    2017-11-07

    Use of droplet digital PCR technology (ddPCR) is expanding rapidly in the diversity of applications and number of users around the world. Access to relatively simple and affordable commercial ddPCR technology has attracted wide interest in use of this technology as a molecular diagnostic tool. For ddPCR to effectively transition to a molecular diagnostic setting requires processes for method validation and verification and demonstration of reproducible instrument performance. In this study, we describe the development and characterization of a DNA reference material (NMI NA008 High GC reference material) comprising a challenging methylated GC-rich DNA template under a novel 96-well microplate format. A scalable process using high precision acoustic dispensing technology was validated to produce the DNA reference material with a certified reference value expressed in amount of DNA molecules per well. An interlaboratory study, conducted using blinded NA008 High GC reference material to assess reproducibility among seven independent laboratories demonstrated less than 4.5% reproducibility relative standard deviation. With the exclusion of one laboratory, laboratories had appropriate technical competency, fully functional instrumentation, and suitable reagents to perform accurate ddPCR based DNA quantification measurements at the time of the study. The study results confirmed that NA008 High GC reference material is fit for the purpose of being used for quality control of ddPCR systems, consumables, instrumentation, and workflow.

  8. Identification of candidate reference chemicals for in vitro steroidogenesis assays.

    PubMed

    Pinto, Caroline Lucia; Markey, Kristan; Dix, David; Browne, Patience

    2018-03-01

    The Endocrine Disruptor Screening Program (EDSP) is transitioning from traditional testing methods to integrating ToxCast/Tox21 in vitro high-throughput screening assays for identifying chemicals with endocrine bioactivity. The ToxCast high-throughput H295R steroidogenesis assay may potentially replace the low-throughput assays currently used in the EDSP Tier 1 battery to detect chemicals that alter the synthesis of androgens and estrogens. Herein, we describe an approach for identifying in vitro candidate reference chemicals that affect the production of androgens and estrogens in models of steroidogenesis. Candidate reference chemicals were identified from a review of H295R and gonad-derived in vitro assays used in methods validation and published in the scientific literature. A total of 29 chemicals affecting androgen and estrogen levels satisfied all criteria for positive reference chemicals, while an additional set of 21 and 15 chemicals partially fulfilled criteria for positive reference chemicals for androgens and estrogens, respectively. The identified chemicals included pesticides, pharmaceuticals, industrial and naturally-occurring chemicals with the capability to increase or decrease the levels of the sex hormones in vitro. Additionally, 14 and 15 compounds were identified as potential negative reference chemicals for effects on androgens and estrogens, respectively. These candidate reference chemicals will be informative for performance-based validation of in vitro steroidogenesis models. Copyright © 2017. Published by Elsevier Ltd.

  9. Identification of TMEM208 and PQLC2 as reference genes for normalizing mRNA expression in colorectal cancer treated with aspirin

    PubMed Central

    Zhu, Yuanyuan; Yang, Chao; Weng, Mingjiao; Zhang, Yan; Yang, Chunhui; Jin, Yinji; Yang, Weiwei; He, Yan; Wu, Yiqi; Zhang, Yuhua; Wang, Guangyu; RajkumarEzakiel Redpath, Riju James; Zhang, Lei; Jin, Xiaoming; Liu, Ying; Sun, Yuchun; Ning, Ning; Qiao, Yu; Zhang, Fengmin; Li, Zhiwei; Wang, Tianzhen; Zhang, Yanqiao; Li, Xiaobo

    2017-01-01

    Numerous lines of evidence indicate that aspirin use causes a significant reduction in colorectal cancer. However, the molecular mechanisms by which aspirin prevents colon cancer are largely unknown. Quantitative reverse transcription polymerase chain reaction (qRT-PCR) is one of the most frequently used methods to identify the target molecules regulated by a given compound. However, this method requires stable internal reference genes to analyze the expression changes of the targets. In this study, the transcriptional stabilities of several traditional reference genes were evaluated in colon cancer cells treated with aspirin; candidate internal reference genes were also screened using a microarray, further identified using the geNorm and NormFinder software, and then validated in additional cell lines and xenografts. We have shown that three traditional internal reference genes, β-actin, GAPDH and α-tubulin, are not suitable for studying gene transcription in colon cancer cells treated with aspirin, and we have identified and validated TMEM208 and PQLC2 as ideal internal reference genes for detecting the molecular targets of aspirin in colon cancer in vitro and in vivo. This study reveals stable internal reference genes for studying the target genes of aspirin in colon cancer, which will help to identify the molecular mechanisms by which aspirin prevents colon cancer. PMID:28184026

  10. Identification of TMEM208 and PQLC2 as reference genes for normalizing mRNA expression in colorectal cancer treated with aspirin.

    PubMed

    Zhu, Yuanyuan; Yang, Chao; Weng, Mingjiao; Zhang, Yan; Yang, Chunhui; Jin, Yinji; Yang, Weiwei; He, Yan; Wu, Yiqi; Zhang, Yuhua; Wang, Guangyu; RajkumarEzakiel Redpath, Riju James; Zhang, Lei; Jin, Xiaoming; Liu, Ying; Sun, Yuchun; Ning, Ning; Qiao, Yu; Zhang, Fengmin; Li, Zhiwei; Wang, Tianzhen; Zhang, Yanqiao; Li, Xiaobo

    2017-04-04

    Numerous lines of evidence indicate that aspirin use causes a significant reduction in colorectal cancer. However, the molecular mechanisms by which aspirin prevents colon cancer are largely unknown. Quantitative reverse transcription polymerase chain reaction (qRT-PCR) is one of the most frequently used methods to identify the target molecules regulated by a given compound. However, this method requires stable internal reference genes to analyze the expression changes of the targets. In this study, the transcriptional stabilities of several traditional reference genes were evaluated in colon cancer cells treated with aspirin; candidate internal reference genes were also screened using a microarray, further identified using the geNorm and NormFinder software, and then validated in additional cell lines and xenografts. We have shown that three traditional internal reference genes, β-actin, GAPDH and α-tubulin, are not suitable for studying gene transcription in colon cancer cells treated with aspirin, and we have identified and validated TMEM208 and PQLC2 as ideal internal reference genes for detecting the molecular targets of aspirin in colon cancer in vitro and in vivo. This study reveals stable internal reference genes for studying the target genes of aspirin in colon cancer, which will help to identify the molecular mechanisms by which aspirin prevents colon cancer.

  11. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.

    PubMed

    Kepes, Sven; McDaniel, Michael A

    2015-01-01

    Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.

  12. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance

    PubMed Central

    2015-01-01

    Introduction: Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. Methods: To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Results: Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. Conclusion: The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation. PMID:26517553

  13. Health surveillance under adverse ergonomics conditions--validity of a screening method adapted for the occupational health service.

    PubMed

    Jonker, Dirk; Gustafsson, Ewa; Rolander, Bo; Arvidsson, Inger; Nordander, Catarina

    2015-01-01

    A new health surveillance protocol for work-related upper-extremity musculoskeletal disorders has been validated by comparing the results with a reference protocol. The studied protocol, Health Surveillance in Adverse Ergonomics Conditions (HECO), is a new version of the reference protocol modified for application in the Occupational Health Service (OHS). The HECO protocol contains both a screening part and a diagnosing part. Sixty-three employees were examined. The screening in HECO did not miss any diagnosis found when using the reference protocol, but in comparison to the reference protocol considerable time savings could be achieved. Fair to good agreement between the protocols was obtained for one or more diagnoses in neck/shoulders (86%, κ = 0.62) and elbow/hands (84%, κ = 0.49). Therefore, the results obtained using the HECO protocol can be compared with a reference material collected with the reference protocol, and thus provide information on the magnitude of disorders in an examined work group. Practitioner Summary: The HECO protocol is a relatively simple physical examination protocol for identification of musculoskeletal disorders in the neck and upper extremities. The protocol is a reliable and cost-effective tool for the OHS to use for occupational health surveillance in order to detect workplaces at high risk for developing musculoskeletal disorders.

  14. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    PubMed

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large number of reference substances to identify the chromatographic peaks accurately. But the reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that it is difficult to reproduce the RR on different columns due to the error between measured retention time (tR) and predicted tR in some cases. Therefore, it is useful to develop an alternative and simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from the standard retention times and the linear relationship. The method was validated in two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust across different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories with lower cost of reference substances.
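
    The two-point prediction step of LCTRS can be sketched as a straight-line mapping between columns defined by the two reference substances. The retention times below are hypothetical and only illustrate the arithmetic; the published method adds validation by multiple-point regression and sequential matching.

```python
def lctrs_predict(t_ref_std, t_ref_new, t_analyte_std):
    """Linear calibration using two reference substances (LCTRS), two-point sketch:
    map standard-column retention times to a new column via the straight line
    defined by the two reference substances measured on both columns.

    t_ref_std: (t1, t2) retention times of the references on the standard column
    t_ref_new: (t1, t2) retention times of the same references on the new column
    t_analyte_std: retention time of the analyte on the standard column
    """
    (s1, s2), (n1, n2) = t_ref_std, t_ref_new
    slope = (n2 - n1) / (s2 - s1)
    intercept = n1 - slope * s1
    return slope * t_analyte_std + intercept

# Hypothetical retention times (min), not values from the cited study.
print(lctrs_predict((6.2, 18.5), (6.8, 19.9), 12.4))
```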

  15. Reference genes for quantitative real-time PCR analysis in symbiont Entomomyces delphacidicola of Nilaparvata lugens (Stål)

    PubMed Central

    Wan, Pin-Jun; Tang, Yao-Hua; Yuan, San-Yue; He, Jia-Chun; Wang, Wei-Xia; Lai, Feng-Xiang; Fu, Qiang

    2017-01-01

    Nilaparvata lugens (Stål) (Hemiptera: Delphacidae) is a major rice pest that harbors an endosymbiont ascomycete fungus, Entomomyces delphacidicola str. NLU (also known as yeast-like symbiont, YLS). Driven by the demand for novel population management tactics (e.g., RNAi), the importance of YLS has been studied and revealed, which has greatly boosted interest in molecular-level studies related to YLS. The current study focuses on reference genes for RT-qPCR studies related to YLS. Eight previously unreported YLS genes were cloned, and their expressions were evaluated for N. lugens samples of different developmental stages and sexes, and under different nutritional conditions and temperatures. Expression stabilities were analyzed by BestKeeper, geNorm, NormFinder, the ΔCt method and RefFinder. Furthermore, the selected reference genes for RT-qPCR of YLS genes were validated using targeted YLS genes that respond to different nutritional conditions (amino acid deprivation) and RNAi. The results suggest that ylsRPS15p/ylsACT are the most suitable reference genes for temporal gene expression profiling, while ylsTUB/ylsACT and ylsRPS15e/ylsGADPH are the most suitable reference gene choices for evaluating nutrition and temperature effects. Validation studies demonstrated the advantage of using endogenous YLS reference genes for YLS studies. PMID:28198810

  16. Pharmaceutical Regulation in Central and Eastern European Countries: A Current Review

    PubMed Central

    Kawalec, Paweł; Tesar, Tomas; Vostalova, Lenka; Draganic, Pero; Manova, Manoela; Savova, Alexandra; Petrova, Guenka; Rugaja, Zinta; Männik, Agnes; Sowada, Christoph; Stawowczyk, Ewa; Harsanyi, Andras; Inotai, Andras; Turcu-Stiolica, Adina; Gulbinovič, Jolanta; Pilc, Andrzej

    2017-01-01

    Objectives: The aim of this study was to review reimbursement environment as well as pricing and reimbursement requirements for drugs in selected Central and Eastern Europe (CEE) countries. Methods: A questionnaire-based survey was performed in the period from November 2016 to March 2017 among experts involved in reimbursement matters from CEE countries: Bulgaria, Croatia, Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Slovakia, and Romania. A review of requirements for reimbursement and implications of Health Technology Assessment (HTA) was performed to compare the issues in above-mentioned countries. For each specified country, data for reimbursement costs, total pharmaceutical budget, and total public health care budget in the years 2014 and 2015 were also collected. Questionnaires were distributed via emails and feedback data were obtained in the same way. Additional questions, if any, were also submitted to respondents by email. Pricing and reimbursement data were valid for March 2017. Results: The survey revealed that the relation of drug reimbursement costs to total public healthcare spending ranged from 0.12 to 0.21 in the year 2014 and 2015 (median value). It also revealed that pricing criteria for drugs, employed in the CEE countries, were quite similar. External reference pricing as well as internal reference pricing were common in mentioned countries. Positive reimbursement lists were valid in all countries of the CEE region, negative ones were rarely used; reimbursement decisions were regularly revised and updated in the majority of countries. Copayment was common and available levels of reimbursement differed within and between the countries and ranged from 20 to 100%. Risk-sharing schemes were often in use, especially in the case of innovative, expensive drugs. Generic substitution was also possible in all analyzed CEE countries, while some made it mandatory. HTA was carried out in almost all of the considered CEE countries and HTA dossier was obligatory for submitting a pricing and reimbursement application. Conclusions: Pricing and reimbursement requirements are quite similar in the CEE region although some differences were identified. HTA evaluations are commonly used in considered countries. PMID:29326583

  17. AOAC Official Method℠ Matrix Extension Validation Study of Assurance GDS™ for the Detection of Salmonella in Selected Spices.

    PubMed

    Feldsine, Philip; Kaur, Mandeep; Shah, Khyati; Immerman, Amy; Jucker, Markus; Lienau, Andrew

    2015-01-01

    Assurance GDS™ for Salmonella Tq has been validated according to the AOAC INTERNATIONAL Methods Committee Guidelines for Validation of Microbiological Methods for Food and Environmental Surfaces for the detection of selected foods and environmental surfaces (Official Method of Analysis℠ 2009.03, Performance Tested Method℠ No. 050602). The method also completed AFNOR validation (following the ISO 16140 standard) compared to the reference method EN ISO 6579. For AFNOR, GDS was given a scope covering all human food, animal feed stuff, and environmental surfaces (Certificate No. TRA02/12-01/09). Results showed that Assurance GDS for Salmonella (GDS) has high sensitivity and is equivalent to the reference culture methods for the detection of motile and non-motile Salmonella. As part of the aforementioned validations, inclusivity and exclusivity studies, stability, and ruggedness studies were also conducted. Assurance GDS has 100% inclusivity and exclusivity among the 100 Salmonella serovars and 35 non-Salmonella organisms analyzed. To add to the scope of the Assurance GDS for Salmonella method, a matrix extension study was conducted, following the AOAC guidelines, to validate the application of the method for selected spices, specifically curry powder, cumin powder, and chili powder, for the detection of Salmonella.

  18. Applicability of Monte Carlo cross validation technique for model development and validation using generalised least squares regression

    NASA Astrophysics Data System (ADS)

    Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra

    2013-03-01

    In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, the model selection is usually based on some measurements of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed percentage leave out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has widely been adopted in Chemometrics and Econometrics) in RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.
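
    A minimal sketch of MCCV for a regression-based prediction equation: repeatedly leave out a random fraction of sites, refit the model, and average the prediction error. Ordinary least squares stands in for the study's GLS regression, and the flood-frequency-style data below are synthetic.

```python
import numpy as np

def mccv_rmse(X, y, n_repeats=500, leave_out_frac=0.2, seed=0):
    """Monte Carlo cross-validation sketch: repeatedly leave out a random
    fraction of catchments, refit an ordinary least-squares model, and
    average the squared prediction error over the repeats."""
    rng = np.random.default_rng(seed)
    n = len(y)
    errors = []
    for _ in range(n_repeats):
        test = rng.choice(n, size=max(1, int(leave_out_frac * n)), replace=False)
        train = np.setdiff1d(np.arange(n), test)
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        resid = y[test] - X[test] @ beta
        errors.append(np.mean(resid ** 2))
    return np.sqrt(np.mean(errors))

# Synthetic regional flood-frequency style data: log-flood quantile vs. log-area.
rng = np.random.default_rng(42)
log_area = rng.uniform(1, 4, size=60)
X = np.column_stack([np.ones(60), log_area])
y = 0.8 + 0.6 * log_area + rng.normal(scale=0.2, size=60)
print("MCCV RMSE:", mccv_rmse(X, y))
```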

  19. Validity of histopathological grading of articular cartilage from osteoarthritic knee joints

    PubMed Central

    Ostergaard, K.; Andersen, C.; Petersen, J.; Bendtzen, K.; Salter, D.

    1999-01-01

    OBJECTIVES—To determine the validity of the histological-histochemical grading system (HHGS) for osteoarthritic (OA) articular cartilage.
METHODS—Human articular cartilage was obtained from macroscopically normal (n = 13) and OA (n = 21) knee joints. Sections of central and peripheral regions of normal samples were produced. Sections of regions containing severe, moderate, and mild OA changes were produced from each OA sample. A total of 89 sections were graded by means of the HHGS (0-14) twice by three observers.
RESULTS—Average scores for regions designated severe (8.64) and moderate (5.83) OA were less than the expected (10-14 and 6-9, respectively) according to the HHGS, whereas average scores for the region designated mild (5.29) OA and central and peripheral regions (2.19) of normal cartilage were higher than expected (2-5 and 0-1, respectively). The HHGS was capable of differentiating between articular cartilage from macroscopically normal and OA joints and between the region designated severe OA and other regions. However, the HHGS did not adequately differentiate between regions designated mild and moderate OA. Values for sensitivity, specificity, and efficiency for all regions varied considerably.
CONCLUSION—The HHGS is valid for normal and severe OA cartilage, but does not permit distinction between mild and moderate OA changes in articular cartilage.

 Keywords: histopathology; osteoarthritis; reliability; validity PMID:10364898

  20. Evaluation of Reference Genes for Quantitative Real-Time PCR in Songbirds

    PubMed Central

    Zinzow-Kramer, Wendy M.; Horton, Brent M.; Maney, Donna L.

    2014-01-01

    Quantitative real-time PCR (qPCR) is becoming a popular tool for the quantification of gene expression in the brain and endocrine tissues of songbirds. Accurate analysis of qPCR data relies on the selection of appropriate reference genes for normalization, yet few papers on songbirds contain evidence of reference gene validation. Here, we evaluated the expression of ten potential reference genes (18S, ACTB, GAPDH, HMBS, HPRT, PPIA, RPL4, RPL32, TFRC, and UBC) in brain, pituitary, ovary, and testis in two species of songbird: zebra finch and white-throated sparrow. We used two algorithms, geNorm and NormFinder, to assess the stability of these reference genes in our samples. We found that the suitability of some of the most popular reference genes for target gene normalization in mammals, such as 18S, depended highly on tissue type. Thus, they are not the best choices for brain and gonad in these songbirds. In contrast, we identified alternative genes, such as HPRT, RPL4 and PPIA, that were highly stable in brain, pituitary, and gonad in these species. Our results suggest that the validation of reference genes in mammals does not necessarily extrapolate to other taxonomic groups. For researchers wishing to identify and evaluate suitable reference genes for qPCR in songbirds, our results should serve as a starting point and should help increase the power and utility of songbird models in behavioral neuroendocrinology. PMID:24780145
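
    A geNorm-style stability measure of the kind used here can be sketched as the average standard deviation of pairwise log-expression ratios across samples, with lower values indicating more stable candidates. The expression matrix below is invented, not the songbird data, and the published geNorm implementation adds further steps such as stepwise exclusion of the least stable gene.

```python
import numpy as np

def genorm_m(expression):
    """geNorm-style stability measure M: for each candidate reference gene, the
    average standard deviation of its log2 expression ratios against every other
    candidate across samples (lower M = more stable).

    expression: array of shape (n_samples, n_genes) of relative quantities.
    """
    log_expr = np.log2(expression)
    n_genes = log_expr.shape[1]
    m_values = np.empty(n_genes)
    for j in range(n_genes):
        ratios = log_expr[:, [j]] - np.delete(log_expr, j, axis=1)
        m_values[j] = ratios.std(axis=0, ddof=1).mean()
    return m_values

# Toy expression matrix (4 samples x 3 candidate genes), invented values.
expr = np.array([[1.0, 2.0, 0.9],
                 [1.1, 2.2, 2.0],
                 [0.9, 1.9, 0.8],
                 [1.2, 2.3, 3.1]])
print(genorm_m(expr))   # the third gene should score as least stable
```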

  1. Environmental Sustainability and Effects on Urban Micro Region using Agent-Based Modeling of Urbanisation in Select Major Indian Cities

    NASA Astrophysics Data System (ADS)

    Aithal, B. H.

    2015-12-01

    Abstract: Urbanisation has gained momentum with globalization in India. Policy decisions to set up commercial and industrial hubs have fuelled large-scale migration, which, together with the population upsurge, has contributed to fast-growing urban regions that need to be monitored in order to design sustainable cities. Unplanned urbanization has resulted in the growth of peri-urban regions, referred to as urban sprawl, which are often devoid of basic amenities and infrastructure, leading to large-scale environmental problems. Remote sensing data acquired through space-borne sensors at regular intervals help in understanding urban dynamics, aided by Geoinformatics, which has proved very effective in mapping and monitoring for sustainable urban planning. Cellular automata (CA) is a robust approach for the spatially explicit simulation of land-use and land-cover dynamics. CA uses rules, states, and conditions that are vital factors in modelling urbanisation. This communication introduces the simulation capabilities of CA combined with agent-based modelling, supported by fuzzy characteristics and weightages derived through the analytical hierarchy process (AHP). This has been done considering perceived agents such as industries, natural resources, etc. Each agent's role in the development of a particular region into an urban area has been examined through its weight and influence, based on its characteristic functions. Validation yielded a high kappa coefficient, indicating the quality and allocation performance of the model and its validity for predicting future projections. The prediction using the proposed model was performed for 2030. The environmental sustainability of each of these cities is then explored in terms of water features, greenhouse gas emissions, effects on human health, etc. Modeling suggests trends in the transformation of various land-use classes with the spurt in urban expansion, based on specific regions and policies, providing visual spatial information to both urban planners and city managers. The environmental sustainability assessment further indicates dwindling natural resources and increasing thermal discomfort for the resident population, thereby indicating the need for balanced and planned development.
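
    A single CA transition step of the kind described, with agent layers combined through AHP-style weights, can be sketched as below. The agent layers, weights, and thresholds are invented placeholders, not the calibrated values of the study.

```python
import numpy as np
from scipy import ndimage

def ca_step(urban, agent_layers, weights, neighbor_min=3, suit_thresh=0.6):
    """One cellular-automata transition sketch: a non-urban cell converts to
    urban when its AHP-weighted agent suitability exceeds a threshold and it
    has enough urban neighbours. Weights and thresholds are illustrative."""
    suitability = sum(w * layer for w, layer in zip(weights, agent_layers))
    kernel = np.ones((3, 3))
    kernel[1, 1] = 0
    urban_neighbors = ndimage.convolve(urban.astype(float), kernel, mode="constant")
    grow = (~urban) & (suitability >= suit_thresh) & (urban_neighbors >= neighbor_min)
    return urban | grow

rng = np.random.default_rng(7)
urban = rng.random((100, 100)) < 0.05                  # initial urban cells
industry = rng.random((100, 100))                      # hypothetical agent layers
roads = rng.random((100, 100))
water = 1.0 - rng.random((100, 100))
for _ in range(5):
    urban = ca_step(urban, [industry, roads, water], weights=[0.5, 0.3, 0.2])
print("Urban cells after 5 steps:", urban.sum())
```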

  2. Turbulence detection using radiosondes: plugging the gaps in the observation of turbulence

    NASA Astrophysics Data System (ADS)

    Marlton, Graeme; Harrison, Giles; Williams, Paul; Nicoll, Keri

    2014-05-01

    Turbulence costs the airline industry tens of millions of dollars each year, through damage to aircraft and injury to passengers. Clear-air turbulence (CAT) is particularly problematic, as it cannot be detected using remote sensing methods and we lack consistent observations to validate forecast models. Here we describe two specially adapted meteorological radiosondes that are used to measure turbulence. The first sensor consists of a Hall-effect magnetometer, which uses the Earth's magnetic field as a reference point, allowing the motion of the sonde to be measured. The second consists of an accelerometer that measures the accelerations the balloon encounters. A solar radiation sensor is mounted at the top of the package to determine whether the sonde is in cloud. Results from multiple flights over Reading, UK, in different conditions show both sensors detecting turbulent regions near jet boundaries and above cloud tops, with the accelerometer recording values in excess of 6g in these regions. Case studies will show how these observations can be used to test the performance of a selection of empirical turbulence diagnostics initialised from ERA-Interim data.
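
    One simple way to flag turbulent segments in an accelerometer record such as the one described above is to threshold a rolling standard deviation of the measured acceleration, expressed in g. The sketch below illustrates that idea on synthetic data only; the window length and threshold are assumptions, not the authors' actual processing chain.

```python
import numpy as np

def flag_turbulence(accel_g, window, threshold_g):
    """Return a boolean array marking samples whose rolling std of
    acceleration (in g) exceeds threshold_g."""
    accel_g = np.asarray(accel_g, dtype=float)
    flags = np.zeros(accel_g.size, dtype=bool)
    for i in range(accel_g.size - window + 1):
        if np.std(accel_g[i:i + window]) > threshold_g:
            flags[i:i + window] = True
    return flags

# Synthetic sonde record: quiet ascent with a burst of turbulence in the middle
rng = np.random.default_rng(0)
quiet = rng.normal(1.0, 0.05, 400)   # ~1 g background
burst = rng.normal(1.0, 2.0, 100)    # strong accelerations
record = np.concatenate([quiet, burst, quiet])
turbulent = flag_turbulence(record, window=20, threshold_g=0.5)
print(f"fraction of record flagged as turbulent: {turbulent.mean():.2f}")
```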

  3. 3-D Quantitative Dynamic Contrast Ultrasound for Prostate Cancer Localization.

    PubMed

    Schalk, Stefan G; Huang, Jing; Li, Jia; Demi, Libertario; Wijkstra, Hessel; Huang, Pintong; Mischi, Massimo

    2018-04-01

    To investigate quantitative 3-D dynamic contrast-enhanced ultrasound (DCE-US) and, in particular, 3-D contrast-ultrasound dispersion imaging (CUDI) for prostate cancer detection and localization, 43 patients referred for 10-12-core systematic biopsy underwent 3-D DCE-US. For each 3-D DCE-US recording, parametric maps of CUDI-based and perfusion-based parameters were computed. The parametric maps were divided into regions, each corresponding to a biopsy core. The obtained parameters were validated per biopsy location and after combining two or more adjacent regions. For CUDI by correlation (r) and for the wash-in time (WIT), a significant difference in parameter values between benign and malignant biopsy cores was found (p < 0.001). In a per-prostate analysis, sensitivity and specificity were 94% and 50% for r, and 53% and 81% for WIT. Based on these results, it can be concluded that quantitative 3-D DCE-US could aid in localizing prostate cancer. Therefore, we recommend follow-up studies to investigate its value for targeting biopsies. Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
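
    Per-region sensitivity and specificity of the kind reported above follow from thresholding a parametric value and comparing the result with the biopsy outcome. A minimal sketch with hypothetical parameter values and labels, not the study data:

```python
import numpy as np

def sensitivity_specificity(values, malignant, threshold, positive_above=True):
    """Classify regions by thresholding a parametric value and compare
    with the biopsy ground truth (True = malignant)."""
    values = np.asarray(values, dtype=float)
    malignant = np.asarray(malignant, dtype=bool)
    predicted = values > threshold if positive_above else values < threshold
    tp = np.sum(predicted & malignant)
    tn = np.sum(~predicted & ~malignant)
    return tp / malignant.sum(), tn / (~malignant).sum()

# Hypothetical correlation parameter r per biopsy region and the biopsy result
r_values = [0.82, 0.55, 0.91, 0.40, 0.78, 0.35, 0.88, 0.50]
biopsy   = [True, False, True, False, True, False, True, False]
sens, spec = sensitivity_specificity(r_values, biopsy, threshold=0.7)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```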

  4. Forensics and mitochondrial DNA: applications, debates, and foundations.

    PubMed

    Budowle, Bruce; Allard, Marc W; Wilson, Mark R; Chakraborty, Ranajit

    2003-01-01

    Debate on the validity and reliability of scientific methods often arises in the courtroom. When the government (i.e., the prosecution) is the proponent of evidence, the defense is obliged to challenge its admissibility. Regardless, those who seek to use DNA typing methodologies to analyze forensic biological evidence have a responsibility to understand the technology and its applications so a proper foundation(s) for its use can be laid. Mitochondrial DNA (mtDNA), an extranuclear genome, has certain features that make it desirable for forensics, namely, high copy number, lack of recombination, and matrilineal inheritance. mtDNA typing has become routine in forensic biology and is used to analyze old bones, teeth, hair shafts, and other biological samples where nuclear DNA content is low. To evaluate results obtained by sequencing the two hypervariable regions of the control region of the human mtDNA genome, one must consider the genetically related issues of nomenclature, reference population databases, heteroplasmy, paternal leakage, recombination, and, of course, interpretation of results. We describe the approaches, the impact some issues may have on interpretation of mtDNA analyses, and some issues raised in the courtroom.

  5. SPH with dynamical smoothing length adjustment based on the local flow kinematics

    NASA Astrophysics Data System (ADS)

    Olejnik, Michał; Szewc, Kamil; Pozorski, Jacek

    2017-11-01

    Due to the Lagrangian nature of Smoothed Particle Hydrodynamics (SPH), the adaptive resolution remains a challenging task. In this work, we first analyse the influence of the simulation parameters and the smoothing length on solution accuracy, in particular in high strain regions. Based on this analysis we develop a novel approach to dynamically adjust the kernel range for each SPH particle separately, accounting for the local flow kinematics. We use the Okubo-Weiss parameter that distinguishes the strain and vorticity dominated regions in the flow domain. The proposed development is relatively simple and implies only a moderate computational overhead. We validate the modified SPH algorithm for a selection of two-dimensional test cases: the Taylor-Green flow, the vortex spin-down, the lid-driven cavity and the dam-break flow against a sharp-edged obstacle. The simulation results show good agreement with the reference data and improvement of the long-term accuracy for unsteady flows. For the lid-driven cavity case, the proposed dynamical adjustment remedies the problem of tensile instability (particle clustering).
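
    The Okubo-Weiss parameter used above to separate strain- from vorticity-dominated regions is W = s_n^2 + s_s^2 - omega^2, with normal strain s_n = du/dx - dv/dy, shear strain s_s = dv/dx + du/dy and vorticity omega = dv/dx - du/dy. A minimal sketch evaluating W on a gridded 2-D velocity field with finite differences; the Taylor-Green-like field and grid size are illustrative only.

```python
import numpy as np

def okubo_weiss(u, v, dx, dy):
    """Okubo-Weiss parameter W = s_n**2 + s_s**2 - omega**2 on a regular grid
    indexed as field[i, j] with i along x and j along y.
    W > 0: strain-dominated region, W < 0: vorticity-dominated region."""
    dudx, dudy = np.gradient(u, dx, dy)
    dvdx, dvdy = np.gradient(v, dx, dy)
    s_n = dudx - dvdy      # normal strain
    s_s = dvdx + dudy      # shear strain
    omega = dvdx - dudy    # vorticity
    return s_n**2 + s_s**2 - omega**2

# Illustrative Taylor-Green-like velocity field on a unit square
n = 64
x = np.linspace(0.0, 1.0, n)
y = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing="ij")
u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
v = -np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)
W = okubo_weiss(u, v, x[1] - x[0], y[1] - y[0])
print(f"vorticity-dominated fraction of the domain: {(W < 0).mean():.2f}")
```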

  6. Ring trial 2016 for Bluetongue virus detection by real-time RT-PCR in France.

    PubMed

    Sailleau, Corinne; Viarouge, Cyril; Breard, Emmanuel; Vitour, Damien; Zientara, Stephan

    2017-05-01

    Since the unexpected emergence of BTV-8 in Northern Europe and the incursion of BTV-8 and BTV-1 in France in 2006-2007, molecular diagnosis has evolved considerably. Several real-time RT-PCR (rtRT-PCR) methods have been developed and published, and are currently being used in many countries across Europe for BTV detection and typing. In France, the national reference laboratory (NRL) for orbiviruses develops and validates 'ready-to-use' kits with private companies for viral RNA detection. The network of regional laboratories that was set up to deal with the heavy demand for analyses has used these kits. From 2007, ring tests were organized to monitor the performance of the French laboratories. This study presents the results of 63 regional laboratories in the ring trial organized in 2016. Blood samples were sent to the laboratories. Participants were asked to use the rtRT-PCR methods in place in their laboratory for detection of all BTV serotypes and specifically BTV-8. The French regional laboratories are able to detect and genotype BTV in affected animals. Despite the use of several methods (i.e. RNA extraction and different commercial rtRT-PCRs), the network is homogeneous. The ring trial demonstrated that the French regional veterinary laboratories have reliable and robust diagnostic tools for BTV genome detection.

  7. Evaluation of the World Health Organization global measles and rubella quality assurance program, 2001-2008.

    PubMed

    Stambos, Vicki; Leydon, Jennie; Riddell, Michaela; Clothier, Hazel; Catton, Mike; Featherstone, David; Kelly, Heath

    2011-07-01

    During 2001-2008, the Victorian Infectious Diseases Reference Laboratory (VIDRL) prepared and provided a measles and rubella proficiency test panel for distribution to the World Health Organization (WHO) measles and rubella network laboratories as part of their annual laboratory accreditation assessment. Panel test results were forwarded to VIDRL, and results from 8 consecutive years were analyzed. We assessed the type of assays used and results achieved on the basis of the positive and negative interpretation of submitted results, by year and WHO region, for measles and rubella. Over time, there has been a noticeable increase in laboratory and WHO regional participation. For all panels, the proportion of laboratories in all WHO regions using the WHO-validated Dade Behring assay for measles and rubella-specific IgM antibodies ranged from 35% to 100% and 59% to 100%, respectively. For all regions and years, the proportion of laboratories obtaining a pass score ranged from 87% to 100% for measles and 93% to 100% for rubella. During 2001-2008, a large proportion of laboratories worldwide achieved and maintained a pass score for both measles and rubella. Measles and rubella proficiency testing is regarded as a major achievement for the WHO measles and rubella laboratory program. © The Author 2011. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved.

  8. [Validation of a new reference depletion calculation for thermal reactors]

    NASA Astrophysics Data System (ADS)

    Canbakan, Axel

    Resonance self-shielding calculations are an essential component of a deterministic lattice code calculation. Although their aim is to correct the cross-section deviations, they introduce a non-negligible error in evaluated parameters such as the flux. Until now, French studies for light water reactors have been based on effective reaction rates obtained using an equivalence-in-dilution technique. With the increase in computing capacity, this method is showing its limits in precision and can be replaced by a subgroup method. Originally used for fast neutron reactor calculations, the subgroup method has many advantages, such as using an exact slowing-down equation. The aim of this thesis is to provide a validation of the subgroup method that is as precise as possible, first without burnup and then with an isotopic depletion study. In the end, users interested in implementing a subgroup method in their scheme for Pressurized Water Reactors can rely on this thesis to justify their modelling choices. Moreover, other parameters are validated to suggest a new reference scheme offering fast execution and precise results. These new techniques are implemented in the French lattice scheme SHEM-MOC, composed of a Method of Characteristics (MOC) flux calculation and a SHEM-like 281-energy-group mesh. First, the libraries processed by the CEA are compared. Then, this thesis suggests the most suitable energy discretization for a subgroup method. Finally, other techniques, such as the representation of the anisotropy of the scattering sources and the spatial representation of the source in the MOC calculation, are studied. A DRAGON5 scheme is also validated, as it offers interesting features: the DRAGON5 subgroup method is run with a 295-energy-group mesh (compared to 361 groups for APOLLO2). There are two reasons to use this code. The first is to offer a new reference lattice scheme for Pressurized Water Reactors to DRAGON5 users. The second is to study parameters that are not available in APOLLO2, such as self-shielding in a temperature gradient and a flux calculation based on MOC in the self-shielding part of the simulation. This thesis concludes that: (1) the subgroup method is more precise than a technique based on effective reaction rates only if a 361-energy-group mesh is used; (2) MOC with a linear source in each geometrical region gives better results than MOC with a constant source model, and a moderator discretization is compulsory; (3) a P3 scattering law is satisfactory, ensuring coherence with 2-D full-core calculations; (4) SHEM295 is viable with a Subgroup Projection Method for DRAGON5.

  9. 23 CFR Appendix A to Subpart A of... - Special Provisions

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... exclusive referral failed to refer minority employees.) In the event the union referral practice prevents... applicants may be referred to the contractor for employment consideration. In the event the contractor has a valid bargaining agreement providing for exclusive hiring hall referrals, he is expected to observe the...

  10. Validation studies of Karl Fisher reference method for moisture in cotton

    USDA-ARS?s Scientific Manuscript database

    With current international standard oven drying (SOD) techniques lacking precision and accuracy statements, a new standard reference method is needed. Volumetric Karl Fischer Titration (KFT) is a widely used measure of moisture content. The method is used in many ASTM methods, 14 NIST SRMs, and te...

  11. The western Mediterranean Sea: An area for a regional validation for TOPEX/Poseidon and a field for geophysical and oceanographic studies

    NASA Technical Reports Server (NTRS)

    Barlier, Francois; Balmino, G.; Boucher, Claude; Willis, P.; Biancale, R.; Menard, Yves; Vincent, P.; Bethoux, J. P.; Exertier, P.; Pierron, F.

    1991-01-01

    The research project has two kinds of objectives. The first is focused on the regional validation of the altimeter, orbit, and mean sea surface; it will be performed in close cooperation with the local validation performed at Lampedusa/Lampione (Italy). The second deals with the geophysical and oceanographic research of interest in this area.

  12. Initial validation of the Soil Moisture Active Passive mission using USDA-ARS watersheds

    USDA-ARS?s Scientific Manuscript database

    The Soil Moisture Active Passive (SMAP) Mission was launched in January 2015 to measure global surface soil moisture. The calibration and validation program of SMAP relies upon an international cooperative of in situ networks to provide ground truth references across a variety of landscapes. The U...

  13. Validation of the Child and Adolescent Social Perception Measure.

    ERIC Educational Resources Information Center

    Koning, Cyndie; Magill-Evans, Joyce

    2001-01-01

    Compared 32 adolescent boys who had social skills deficits consistent with Asperger's Disorder to 29 controls matched on age and intelligence quotient. Significant differences were found between groups on Child and Adolescent Social Perception Measure scores, and the validity of the instrument was supported. (Contains 37 references.) (JOW)

  14. Design, Development, and Validation of Learning Objects

    ERIC Educational Resources Information Center

    Nugent, Gwen; Soh, Leen-Kiat; Samal, Ashok

    2006-01-01

    A learning object is a small, stand-alone, mediated content resource that can be reused in multiple instructional contexts. In this article, we describe our approach to design, develop, and validate Shareable Content Object Reference Model (SCORM) compliant learning objects for undergraduate computer science education. We discuss the advantages of…

  15. An integrated pan-tropical biomass map using multiple reference datasets.

    PubMed

    Avitabile, Valerio; Herold, Martin; Heuvelink, Gerard B M; Lewis, Simon L; Phillips, Oliver L; Asner, Gregory P; Armston, John; Ashton, Peter S; Banin, Lindsay; Bayol, Nicolas; Berry, Nicholas J; Boeckx, Pascal; de Jong, Bernardus H J; DeVries, Ben; Girardin, Cecile A J; Kearsley, Elizabeth; Lindsell, Jeremy A; Lopez-Gonzalez, Gabriela; Lucas, Richard; Malhi, Yadvinder; Morel, Alexandra; Mitchard, Edward T A; Nagy, Laszlo; Qie, Lan; Quinones, Marcela J; Ryan, Casey M; Ferry, Slik J W; Sunderland, Terry; Laurin, Gaia Vaglio; Gatti, Roberto Cazzolla; Valentini, Riccardo; Verbeeck, Hans; Wijaya, Arief; Willcock, Simon

    2016-04-01

    We combined two existing datasets of vegetation aboveground biomass (AGB) (Proceedings of the National Academy of Sciences of the United States of America, 108, 2011, 9899; Nature Climate Change, 2, 2012, 182) into a pan-tropical AGB map at 1-km resolution using an independent reference dataset of field observations and locally calibrated high-resolution biomass maps, harmonized and upscaled to 14 477 1-km AGB estimates. Our data fusion approach uses bias removal and weighted linear averaging that incorporates and spatializes the biomass patterns indicated by the reference data. The method was applied independently in areas (strata) with homogeneous error patterns of the input (Saatchi and Baccini) maps, which were estimated from the reference data and additional covariates. Based on the fused map, we estimated AGB stock for the tropics (23.4°N-23.4°S) of 375 Pg dry mass, 9-18% lower than the Saatchi and Baccini estimates. The fused map also showed differing spatial patterns of AGB over large areas, with higher AGB density in the dense forest areas in the Congo basin, Eastern Amazon and South-East Asia, and lower values in Central America and in most dry vegetation areas of Africa than either of the input maps. The validation exercise, based on 2118 estimates from the reference dataset not used in the fusion process, showed that the fused map had a RMSE 15-21% lower than that of the input maps and, most importantly, nearly unbiased estimates (mean bias 5 Mg dry mass ha⁻¹ vs. 21 and 28 Mg ha⁻¹ for the input maps). The fusion method can be applied at any scale including the policy-relevant national level, where it can provide improved biomass estimates by integrating existing regional biomass maps as input maps and additional, country-specific reference datasets. © 2015 John Wiley & Sons Ltd.
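
    The fusion step described above (bias removal followed by weighted linear averaging of the input maps, with weights informed by the reference data) can be sketched in a few lines. Everything below, including the toy maps, the inverse-squared-RMSE weighting and the reference pixels, is an illustrative simplification, not the published algorithm.

```python
import numpy as np

def fuse_maps(map_a, map_b, ref_mask, ref_values):
    """Fuse two AGB maps: remove each map's mean bias estimated at the
    reference pixels, then average the maps with inverse-squared-RMSE weights."""
    map_a = np.asarray(map_a, dtype=float)
    map_b = np.asarray(map_b, dtype=float)
    bias_a = np.mean(map_a[ref_mask] - ref_values)
    bias_b = np.mean(map_b[ref_mask] - ref_values)
    corr_a, corr_b = map_a - bias_a, map_b - bias_b
    rmse_a = np.sqrt(np.mean((corr_a[ref_mask] - ref_values) ** 2))
    rmse_b = np.sqrt(np.mean((corr_b[ref_mask] - ref_values) ** 2))
    w_a, w_b = 1.0 / rmse_a**2, 1.0 / rmse_b**2
    return (w_a * corr_a + w_b * corr_b) / (w_a + w_b)

# Toy example: two biased "maps" of a 4-pixel region, with 2 reference pixels
truth = np.array([100.0, 150.0, 200.0, 250.0])       # Mg dry mass / ha
map_a = truth + 25.0 + np.array([5, -5, 5, -5])       # positively biased
map_b = truth - 30.0 + np.array([-10, 10, -10, 10])   # negatively biased
ref_mask = np.array([True, False, True, False])
fused = fuse_maps(map_a, map_b, ref_mask, truth[ref_mask])
rmse = np.sqrt(np.mean((fused - truth) ** 2))
print(f"fused map: {np.round(fused, 1)}, RMSE vs truth = {rmse:.1f}")
```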

  16. [The Amsterdam wrist rules: the multicenter prospective derivation and external validation of a clinical decision rule for the use of radiography in acute wrist trauma].

    PubMed

    Walenkamp, Monique M J; Bentohami, Abdelali; Slaar, Annelie; Beerekamp, M S H Suzan; Maas, Mario; Jager, L C Cara; Sosef, Nico L; van Velde, Romuald; Ultee, Jan M; Steyerberg, Ewout W; Goslings, J C Carel; Schep, Niels W L

    2016-01-01

    Although only 39% of patients with wrist trauma have sustained a fracture, the majority of patients are routinely referred for radiography. The purpose of this study was to derive and externally validate a clinical decision rule that selects patients with acute wrist trauma in the Emergency Department (ED) for radiography. This multicenter prospective study consisted of three components: (1) derivation of a clinical prediction model for detecting wrist fractures in patients following wrist trauma; (2) external validation of this model; and (3) design of a clinical decision rule. The study was conducted in the EDs of five Dutch hospitals: one academic hospital (derivation cohort) and four regional hospitals (external validation cohort). We included all adult patients with acute wrist trauma. The main outcome was fracture of the wrist (distal radius, distal ulna or carpal bones) diagnosed on conventional X-rays. A total of 882 patients were analyzed: 487 in the derivation cohort and 395 in the validation cohort. We derived a clinical prediction model with eight variables: age, sex, swelling of the wrist, swelling of the anatomical snuffbox, visible deformation, distal radius tender to palpation, pain on radial deviation, and painful axial compression of the thumb. The area under the curve at external validation of this model was 0.81 (95% CI: 0.77-0.85). The sensitivity and specificity of the Amsterdam Wrist Rules (AWR) in the external validation cohort were 98% (95% CI: 95-99%) and 21% (95% CI: 15-28%). The negative predictive value was 90% (95% CI: 81-99%). The Amsterdam Wrist Rules is a clinical prediction rule with a high sensitivity and negative predictive value for fractures of the wrist. Although external validation showed low specificity and 100% sensitivity could not be achieved, the Amsterdam Wrist Rules can provide physicians in the Emergency Department with a useful screening tool to select patients with acute wrist trauma for radiography. The upcoming implementation study will further reveal the impact of the Amsterdam Wrist Rules on the anticipated reduction of X-rays requested, missed fractures, Emergency Department waiting times and health care costs.
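
    A clinical prediction model of this kind is commonly fitted as a logistic regression over the candidate variables and then assessed on the external cohort by the area under the ROC curve and by sensitivity and specificity at a screening threshold. The sketch below shows only that generic workflow on synthetic data; the predictors, coefficients and threshold are invented and are not the Amsterdam Wrist Rules.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def synthetic_cohort(n):
    """Synthetic wrist-trauma cohort: a few predictors and a fracture outcome
    loosely linked to them (illustrative only)."""
    age = rng.uniform(18, 90, n)
    swelling = rng.integers(0, 2, n)
    tender = rng.integers(0, 2, n)
    logit = -4.0 + 0.03 * age + 1.2 * swelling + 1.5 * tender
    fracture = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))
    return np.column_stack([age, swelling, tender]), fracture.astype(int)

X_deriv, y_deriv = synthetic_cohort(487)   # derivation cohort size from the abstract
X_valid, y_valid = synthetic_cohort(395)   # external validation cohort size

model = LogisticRegression(max_iter=1000).fit(X_deriv, y_deriv)
prob = model.predict_proba(X_valid)[:, 1]
print(f"external-validation AUC = {roc_auc_score(y_valid, prob):.2f}")

# Choose a low decision threshold to favour sensitivity, as a screening rule would
predicted = prob > 0.1
sens = np.sum(predicted & (y_valid == 1)) / np.sum(y_valid == 1)
spec = np.sum(~predicted & (y_valid == 0)) / np.sum(y_valid == 0)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```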

  17. Validity and Reliability of a Glucometer Against Industry Reference Standards.

    PubMed

    Salacinski, Amanda J; Alford, Micah; Drevets, Kathryn; Hart, Sarah; Hunt, Brian E

    2014-01-01

    As an appealing alternative to reference glucose analyzers, portable glucometers are recommended for self-monitoring at home, in the field, and in research settings. The purpose was to characterize the accuracy, precision, and bias of glucometers in biomedical research. Fifteen young (20-36 years; mean = 24.5), moderately to highly active men (n = 10) and women (n = 5), defined as exercising 2 to 3 times a week for the past 6 months, were given an oral glucose tolerance test (OGTT) after an overnight fast. Participants ingested 50, 75, or 150 grams of glucose over a 5-minute period. The glucometer was compared to a reference instrument. The glucometer had 39% of values within 15% of measurements made using the reference instrument, which ranged from 45.05 to 169.37 mg/dl. There was both a proportional bias (-0.45 to -0.39) and a small fixed bias (5.06 and 0.90 mg/dl). Results of the present study suggest that the glucometer provided poor validity and reliability compared with the reference laboratory analyzer. Portable glucometers should be used for patient management, but not for diagnosis, treatment, or research purposes. © 2014 Diabetes Technology Society.
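
    The fixed and proportional bias reported above can be estimated by regressing the glucometer-minus-reference difference on the reference value: the intercept is the fixed bias and the slope the proportional bias. A minimal sketch on made-up paired readings, not the study data:

```python
import numpy as np

def bias_components(reference, device):
    """Fixed and proportional bias from a linear fit of (device - reference)
    against the reference value."""
    reference = np.asarray(reference, dtype=float)
    difference = np.asarray(device, dtype=float) - reference
    proportional, fixed = np.polyfit(reference, difference, deg=1)
    return fixed, proportional

# Hypothetical paired glucose readings (mg/dl): reference analyzer vs glucometer
reference  = np.array([60, 80, 100, 120, 140, 160])
glucometer = np.array([62, 79,  96, 112, 128, 143])
fixed, proportional = bias_components(reference, glucometer)
within_15pct = np.mean(np.abs(glucometer - reference) / reference <= 0.15)
print(f"fixed bias = {fixed:.1f} mg/dl, proportional bias = {proportional:.2f}")
print(f"fraction of readings within 15% of reference = {within_15pct:.2f}")
```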

  18. Accurate determination of reference materials and natural isolates by means of quantitative (1)H NMR spectroscopy.

    PubMed

    Frank, Oliver; Kreissl, Johanna Karoline; Daschner, Andreas; Hofmann, Thomas

    2014-03-26

    A fast and precise proton nuclear magnetic resonance (qHNMR) method for the quantitative determination of low molecular weight target molecules in reference materials and natural isolates has been validated using ERETIC 2 (Electronic REference To access In vivo Concentrations) based on the PULCON (PULse length based CONcentration determination) methodology and compared to the gravimetric results. Using an Avance III NMR spectrometer (400 MHz) equipped with a broad band observe (BBO) probe, the qHNMR method was validated by determining its linearity, range, precision, and accuracy as well as robustness and limit of quantitation. The linearity of the method was assessed by measuring samples of l-tyrosine, caffeine, or benzoic acid in a concentration range between 0.3 and 16.5 mmol/L (r(2) ≥ 0.99), whereas the interday and intraday precisions were found to be ≤2%. The recovery of a range of reference compounds was ≥98.5%, thus demonstrating the qHNMR method as a precise tool for the rapid quantitation (~15 min) of food-related target compounds in reference materials and natural isolates such as nucleotides, polyphenols, or cyclic peptides.
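
    At its core, PULCON-style quantitation relies on the proportionality between a signal integral and the product of concentration and the number of contributing protons, compared between the sample and an external reference. A minimal sketch of that relation, assuming identical acquisition parameters (number of scans, receiver gain, pulse lengths) for both spectra; the numbers are illustrative only.

```python
def pulcon_concentration(c_ref, integral_ref, n_h_ref,
                         integral_sample, n_h_sample):
    """Analyte concentration from the integral ratio against an external
    reference, assuming identical acquisition parameters for both spectra:

        c_sample = c_ref * (I_sample / I_ref) * (N_ref / N_sample)
    """
    return c_ref * (integral_sample / integral_ref) * (n_h_ref / n_h_sample)

# Illustrative numbers: 5 mmol/L reference signal from 3 equivalent protons,
# analyte signal from 2 equivalent protons
c = pulcon_concentration(c_ref=5.0, integral_ref=1.00, n_h_ref=3,
                         integral_sample=0.80, n_h_sample=2)
print(f"analyte concentration = {c:.2f} mmol/L")
```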

  19. Validity of a small low-cost triaxial accelerometer with integrated logger for uncomplicated measurements of postures and movements of head, upper back and upper arms.

    PubMed

    Dahlqvist, Camilla; Hansson, Gert-Åke; Forsman, Mikael

    2016-07-01

    Repetitive work and work in constrained postures are risk factors for developing musculoskeletal disorders. Low-cost, user-friendly technical methods to quantify these risks are needed. The aims were to validate inclination angles and velocities of one model of the new generation of accelerometers with integrated data loggers against a previously validated one, and to compare measurements obtained using a plain reference posture with those obtained using a standardized one. All mean (n = 12 subjects) angular RMS differences in 4 work tasks and 4 body parts were <2.5° and all mean median angular velocity differences <5.0°/s. The mean correlation between the inclination signal pairs was 0.996. This model of the new generation of triaxial accelerometers proved to be comparable to the previously validated accelerometer with a data logger. This makes it well suited, for both researchers and practitioners, to measure postures and movements during work. Further work is needed to validate the plain reference posture for upper arms. Copyright © 2016 Elsevier Ltd and The Ergonomics Society. All rights reserved.
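
    Inclination angles of the kind compared above are obtained from the gravity component of the triaxial accelerometer signal, as the angle between the measured acceleration vector and the sensor axis taken as vertical in the reference posture. The sketch below omits filtering and calibration and uses synthetic samples; it illustrates the geometry only and is not the validated devices' processing.

```python
import numpy as np

def inclination_deg(acc):
    """Inclination (degrees) of each triaxial sample relative to the sensor
    z-axis, taken here as pointing 'up' when the body segment is upright."""
    acc = np.asarray(acc, dtype=float)
    norm = np.linalg.norm(acc, axis=1)
    return np.degrees(np.arccos(acc[:, 2] / norm))

# Two hypothetical recordings of the same movement from two sensors
reference_sensor = np.array([[0.0, 0.0, 1.0],
                             [0.0, 0.5, 0.87],
                             [0.0, 0.87, 0.5]])
test_sensor = reference_sensor + np.random.default_rng(1).normal(0, 0.01, (3, 3))

angle_ref = inclination_deg(reference_sensor)
angle_test = inclination_deg(test_sensor)
rms_difference = np.sqrt(np.mean((angle_test - angle_ref) ** 2))
print(f"angular RMS difference = {rms_difference:.2f} degrees")
```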

  20. [Validity of AUDIT test for detection of disorders related with alcohol consumption in women].

    PubMed

    Pérula-de Torres, Luis Angel; Fernández-García, José Angel; Arias-Vega, Raquel; Muriel-Palomino, María; Márquez-Rebollo, Encarnación; Ruiz-Moral, Roger

    2005-11-26

    Early detection of patients with alcohol problems is important in clinical practice. The AUDIT (Alcohol Use Disorders Identification Test) questionnaire is a valid tool for this aim, especially in the male population. The objective of this study was to validate the usefulness of this questionnaire in female patients and to assess its cut-off point for the diagnosis of alcohol problems in women. A total of 414 women were recruited in 2 health centers and a specialized addiction treatment center. The AUDIT test and a semistructured interview (SCAN, as gold standard) were administered to all patients. Internal consistency and criterion validity were assessed. Cronbach's alpha was 0.93 (95% confidence interval [CI], 0.921-0.941). When the DSM-IV was taken as reference, the most useful cut-off point was 6 points, with 89.6% (95% CI, 76.11-96.02) sensitivity and 95.07% (95% CI, 92.18-96.97) specificity. When ICD-10 was taken as reference, the sensitivity was 89.58% (95% CI, 76.56-96.10) and the specificity was 95.33% (95% CI, 92.48-97.17). The AUDIT is a questionnaire with good psychometric properties and is valid for detecting dependence and at-risk alcohol consumption in women.
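
    The internal-consistency and cut-off figures above rest on two standard computations: Cronbach's alpha over the item scores, and sensitivity/specificity of the summed score against the gold standard at a chosen cut-off. A minimal sketch on invented item scores in the 0-4 range, not the study dataset:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

def sens_spec_at_cutoff(total_scores, gold_standard, cutoff):
    """Sensitivity and specificity of 'total score >= cutoff' vs the gold standard."""
    total_scores = np.asarray(total_scores)
    gold_standard = np.asarray(gold_standard, dtype=bool)
    positive = total_scores >= cutoff
    sens = np.sum(positive & gold_standard) / gold_standard.sum()
    spec = np.sum(~positive & ~gold_standard) / (~gold_standard).sum()
    return sens, spec

rng = np.random.default_rng(7)
cases = rng.integers(1, 5, size=(50, 10))      # hypothetical positive cases
controls = rng.integers(0, 2, size=(50, 10))   # hypothetical controls
items = np.vstack([cases, controls])
gold = np.array([True] * 50 + [False] * 50)

print(f"Cronbach alpha = {cronbach_alpha(items):.2f}")
sens, spec = sens_spec_at_cutoff(items.sum(axis=1), gold, cutoff=6)
print(f"cut-off >= 6: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```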
