Sample records for search topic: study setting sample

  1. Colon Reference Set Application: Mary Disis - University of Washington (2008) — EDRN Public Portal

    Cancer.gov

    The proposed study aims to validate the diagnostic value of a panel of serum antibodies for the early detection of colorectal cancer (CRC). We have developed a serum antibody-based assay that shows promise in discriminating sera from CRC patients from those of healthy donors. We have evaluated two separate sample sets of sera that were either available commercially or comprised leftover samples from previous studies by our group. Both sample sets showed concordance in discriminatory power. We have not been able to identify investigators with a larger, well-defined sample set of early-stage colon cancer sera, and we request assistance from the EDRN in obtaining such samples to help assess the potential diagnostic value of our autoantibody panel.

  2. Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.

    PubMed

    Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira

    2016-01-01

    Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated sampling properties of visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers to extract the summary statistics: a global sampling model without sampling noise, global sampling model with sampling noise, and limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4.
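
The statistical-efficiency comparison in the abstract above can be illustrated with a small Monte Carlo sketch (a simplified stand-in, not the authors' actual models; all parameters are hypothetical): a global-sampling observer averages every item in the set, optionally with post-averaging noise, while a limited-sampling observer averages only k randomly chosen items.

```python
import random
import statistics

def global_observer(items, noise_sd=0.0, rng=random):
    # Global sampling: average every item, optionally with late noise.
    return statistics.fmean(items) + rng.gauss(0.0, noise_sd)

def limited_observer(items, k=2, rng=random):
    # Limited sampling: average only k randomly chosen items.
    return statistics.fmean(rng.sample(items, k))

def estimate_variance(observer, trials=2000, set_size=8):
    # Variance of the observer's error around the true set average.
    rng = random.Random(0)
    errors = []
    for _ in range(trials):
        items = [rng.gauss(10.0, 2.0) for _ in range(set_size)]
        errors.append(observer(items, rng=rng) - statistics.fmean(items))
    return statistics.pvariance(errors)
```

With these toy settings the limited-sampling observer's error variance is far larger than that of a noisy global observer, which is the kind of contrast a statistical-efficiency analysis quantifies.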

  3. Detection and Genotyping of Human Papillomavirus in Self-Obtained Cervicovaginal Samples by Using the FTA Cartridge: New Possibilities for Cervical Cancer Screening

    PubMed Central

    Lenselink, Charlotte H.; de Bie, Roosmarie P.; van Hamont, Dennis; Bakkers, Judith M. J. E.; Quint, Wim G. V.; Massuger, Leon F. A. G.; Bekkers, Ruud L. M.; Melchers, Willem J. G.

    2009-01-01

    This study assesses human papillomavirus (HPV) detection and genotyping in self-sampled genital smears applied to an indicating FTA elute cartridge (FTA cartridge). The study group consisted of 96 women, divided into two sample sets. All samples were analyzed by the HPV SPF10-Line Blot 25. Set 1 consisted of 45 women attending the gynecologist; all obtained a self-sampled cervicovaginal smear, which was applied to an FTA cartridge. HPV results were compared to a cervical smear (liquid based) taken by a trained physician. Set 2 consisted of 51 women who obtained a self-sampled cervicovaginal smear at home, which was applied to an FTA cartridge and to a liquid-based medium. DNA was obtained from the FTA cartridges by simple elution as well as extraction. Of all self-obtained samples of set 1, 62.2% tested HPV positive. The overall agreement between self- and physician-obtained samples was 93.3%, in favor of the self-obtained samples. In sample set 2, 25.5% tested HPV positive. The overall agreement for high-risk HPV presence between the FTA cartridge and liquid-based medium and between DNA elution and extraction was 100%. This study shows that HPV detection and genotyping in self-obtained cervicovaginal samples applied to an FTA cartridge is highly reliable. It shows a high level of overall agreement with HPV detection and genotyping in physician-obtained cervical smears and liquid-based self-samples. DNA can be obtained by simple elution and is therefore easy, cheap, and fast. Furthermore, the FTA cartridge is a convenient medium for collection and safe transport at ambient temperatures. Therefore, this method may contribute to a new way of cervical cancer screening. PMID:19553570

  4. Detection and genotyping of human papillomavirus in self-obtained cervicovaginal samples by using the FTA cartridge: new possibilities for cervical cancer screening.

    PubMed

    Lenselink, Charlotte H; de Bie, Roosmarie P; van Hamont, Dennis; Bakkers, Judith M J E; Quint, Wim G V; Massuger, Leon F A G; Bekkers, Ruud L M; Melchers, Willem J G

    2009-08-01

    This study assesses human papillomavirus (HPV) detection and genotyping in self-sampled genital smears applied to an indicating FTA elute cartridge (FTA cartridge). The study group consisted of 96 women, divided into two sample sets. All samples were analyzed by the HPV SPF(10)-Line Blot 25. Set 1 consisted of 45 women attending the gynecologist; all obtained a self-sampled cervicovaginal smear, which was applied to an FTA cartridge. HPV results were compared to a cervical smear (liquid based) taken by a trained physician. Set 2 consisted of 51 women who obtained a self-sampled cervicovaginal smear at home, which was applied to an FTA cartridge and to a liquid-based medium. DNA was obtained from the FTA cartridges by simple elution as well as extraction. Of all self-obtained samples of set 1, 62.2% tested HPV positive. The overall agreement between self- and physician-obtained samples was 93.3%, in favor of the self-obtained samples. In sample set 2, 25.5% tested HPV positive. The overall agreement for high-risk HPV presence between the FTA cartridge and liquid-based medium and between DNA elution and extraction was 100%. This study shows that HPV detection and genotyping in self-obtained cervicovaginal samples applied to an FTA cartridge is highly reliable. It shows a high level of overall agreement with HPV detection and genotyping in physician-obtained cervical smears and liquid-based self-samples. DNA can be obtained by simple elution and is therefore easy, cheap, and fast. Furthermore, the FTA cartridge is a convenient medium for collection and safe transport at ambient temperatures. Therefore, this method may contribute to a new way of cervical cancer screening.

  5. Lunar and Meteorite Thin Sections for Undergraduate and Graduate Studies

    NASA Astrophysics Data System (ADS)

    Allen, J.; Allen, C.

    2012-12-01

    The Johnson Space Center (JSC) has the unique responsibility to curate NASA's extraterrestrial samples from past and future missions. Curation includes documentation, preservation, preparation, and distribution of samples for research, education, and public outreach. Studies of rock and soil samples from the Moon and meteorites continue to yield useful information about the early history of the Moon, the Earth, and the inner solar system. Petrographic Thin Section Packages containing polished thin sections of samples from either the Lunar or Meteorite collections have been prepared. Each set of twelve sections of Apollo lunar samples or twelve sections of meteorites is available for loan from JSC. The thin section sets are designed for use in domestic college and university courses in petrology. The loan period is strictly limited to two weeks. Contact Ms. Mary Luckey, Education Sample Curator (mary.k.luckey@nasa.gov). Each set of slides is accompanied by teaching materials and a sample disk of representative lunar or meteorite samples. Note that the samples in these sets are not exactly the same as the ones listed here; this list represents one set of samples. A key education resource available on the Curation website is the Antarctic Meteorite Teaching Collection: Educational Meteorite Thin Sections, originally compiled by Bevan French, Glenn McPherson, and Roy Clarke and revised by Kevin Righter in 2010. College and university staff and students are encouraged to access the Lunar Petrographic Thin Section Set publication and the Meteorite Petrographic Thin Section Package resource, which feature many thin section images and detailed descriptions of the samples and research results: http://curator.jsc.nasa.gov/Education/index.cfm. Request research samples: http://curator.jsc.nasa.gov/ or JSC-CURATION-EDUCATION-DISKS@mail.nasa.gov. Keywords: Lunar Thin Sections; Meteorite Thin Sections.

  6. RANKED SET SAMPLING FOR ECOLOGICAL RESEARCH: ACCOUNTING FOR THE TOTAL COSTS OF SAMPLING

    EPA Science Inventory

    Researchers aim to design environmental studies that optimize precision and allow for generalization of results, while keeping the costs of associated field and laboratory work at a reasonable level. Ranked set sampling is one method to potentially increase precision and reduce ...
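
The ranked set sampling idea summarized above can be sketched in a few lines of stdlib Python (hypothetical population; perfect judgment ranking is assumed, which is the best case): draw m small judgment sets of m units, rank each set cheaply, and keep the i-th smallest unit from the i-th set.

```python
import random
import statistics

def ranked_set_sample(population, m, rng):
    # One balanced ranked set sample of size m: draw m judgment sets of m
    # units each, rank every set (perfect ranking assumed), and keep the
    # i-th smallest unit from the i-th set.
    sample = []
    for i in range(m):
        judgment_set = sorted(rng.choice(population) for _ in range(m))
        sample.append(judgment_set[i])
    return sample

def mean_variance(sampler, trials, rng):
    # Monte Carlo variance of the sample mean under a given sampling scheme.
    means = [statistics.fmean(sampler(rng)) for _ in range(trials)]
    return statistics.pvariance(means)

rng = random.Random(42)
population = [rng.expovariate(1.0) for _ in range(10_000)]
m = 5
var_rss = mean_variance(lambda r: ranked_set_sample(population, m, r), 3000, rng)
var_srs = mean_variance(lambda r: [r.choice(population) for _ in range(m)], 3000, rng)
# With perfect ranking, the RSS mean is more precise than the SRS mean,
# so fewer (costly) quantifications are needed for the same precision.
```

The trade-off the abstract alludes to is that each RSS unit costs m cheap rankings plus one quantification, so the variance gain must be weighed against the total cost of field and laboratory work.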

  7. NHEXAS PHASE I MARYLAND STUDY--LIPIDS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    The Lipids in Blood data set presents concentrations of cholesterol and total triglycerides in blood serum. The data set presents measurements for up to 2 lipids in 358 blood samples over 79 households. Each sample was collected via a venous sample from the primary respondent w...

  8. Downslope coarsening in aeolian grainflows of the Navajo Sandstone

    NASA Astrophysics Data System (ADS)

    Loope, David B.; Elder, James F.; Sweeney, Mark R.

    2012-07-01

    Downslope coarsening in grainflows has been observed on present-day dunes and generated in labs, but few previous studies have examined vertical sorting in ancient aeolian grainflows. We studied the grainflow strata of the Jurassic Navajo Sandstone in the southern Utah portion of its outcrop belt from Zion National Park (west) to Coyote Buttes and The Dive (east). At each study site, thick sets of grainflow-dominated cross-strata that were deposited by large transverse dunes comprise the bulk of the Navajo Sandstone. We studied three stratigraphic columns, one per site, composed almost exclusively of aeolian cross-strata. For each column, samples were obtained from one grainflow stratum in each consecutive set of the column, for a total of 139 samples from thirty-two sets of cross-strata. To investigate grading perpendicular to bedding within individual grainflows, we collected fourteen samples from four superimposed grainflow strata at The Dive. Samples were analyzed with a Malvern Mastersizer 2000 laser diffraction particle analyser. The median grain size of grainflow samples ranges from fine sand (164 μm) to coarse sand (617 μm). Using Folk and Ward criteria, samples are well-sorted to moderately-well-sorted. All but one of the twenty-eight sets showed at least slight downslope coarsening, but in general, downslope coarsening was not as well-developed or as consistent as that reported in laboratory subaqueous grainflows. Because coarse sand should be quickly sequestered within preserved cross-strata when bedforms climb, grain-size studies may help to test hypotheses for the stacking of sets of cross-strata.

  9. New approaches to wipe sampling methods for antineoplastic and other hazardous drugs in healthcare settings.

    PubMed

    Connor, Thomas H; Smith, Jerome P

    2016-09-01

    At the present time, the method of choice to determine surface contamination of the workplace with antineoplastic and other hazardous drugs is surface wipe sampling and subsequent sample analysis with a variety of analytical techniques. The purpose of this article is to review current methodology for determining the level of surface contamination with hazardous drugs in healthcare settings and to discuss recent advances in this area. In addition, it provides some guidance for conducting surface wipe sampling and sample analysis for these drugs in healthcare settings. Published studies on the use of wipe sampling to measure hazardous drugs on surfaces in healthcare settings were reviewed. These studies include the use of well-documented chromatographic techniques for sample analysis in addition to newly evolving technology that provides rapid analysis of specific antineoplastic drugs. Methodology for the analysis of surface wipe samples for hazardous drugs is reviewed, including the purposes, technical factors, sampling strategy, materials required, and limitations. The use of lateral flow immunoassay (LFIA) and fluorescence covalent microbead immunosorbent assay (FCMIA) for surface wipe sample evaluation is also discussed. Current recommendations are that all healthcare settings where antineoplastic and other hazardous drugs are handled include surface wipe sampling as part of a comprehensive hazardous drug-safe handling program. Surface wipe sampling may be used as a method to characterize potential occupational dermal exposure risk and to evaluate the effectiveness of implemented controls and the overall safety program. New technology, although currently limited in scope, may make wipe sampling for hazardous drugs more routine and less costly, and provide a shorter response time than classical analytical techniques now in use.

  10. Effect of the absolute statistic on gene-sampling gene-set analysis methods.

    PubMed

    Nam, Dougu

    2017-06-01

    Gene-set enrichment analysis and its modified versions have commonly been used for identifying altered functions or pathways in disease from microarray data. In particular, the simple gene-sampling gene-set analysis methods have been heavily used for datasets with only a few sample replicates. The biggest problem with this approach is the highly inflated false-positive rate. In this paper, the effect of the absolute gene statistic on gene-sampling gene-set analysis methods is systematically investigated. Thus far, the absolute gene statistic has merely been regarded as a supplementary method for capturing the bidirectional changes in each gene set. Here, it is shown that incorporating the absolute gene statistic in gene-sampling gene-set analysis substantially reduces the false-positive rate and improves the overall discriminatory ability. Its effect was investigated by power, false-positive rate, and receiver operating characteristic curve for a number of simulated and real datasets. The performances of gene-set analysis methods in one-tailed (genome-wide association study) and two-tailed (gene expression data) tests were also compared and discussed.
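
A minimal sketch of the gene-sampling approach discussed above, with the absolute statistic as an option (the toy scoring below is an assumption; the paper's exact statistics may differ): the set score is the mean gene statistic, and significance comes from resampling random gene sets of the same size.

```python
import random
import statistics

def gene_set_pvalue(gene_stats, set_genes, use_absolute=False, n_perm=2000, seed=0):
    # Gene-sampling (gene permutation) p-value for one gene set. The set
    # score is the mean gene statistic; with use_absolute=True the absolute
    # statistic |t| is used, capturing bidirectional changes in the set.
    rng = random.Random(seed)
    stats = {g: (abs(t) if use_absolute else t) for g, t in gene_stats.items()}
    observed = statistics.fmean(stats[g] for g in set_genes)
    names = list(stats)
    k = len(set_genes)
    hits = sum(
        statistics.fmean(stats[g] for g in rng.sample(names, k)) >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)
```

A set whose genes move strongly in both directions scores near zero with the signed statistic but stands out clearly with the absolute one, which is the bidirectional effect the abstract describes.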

  11. A two-stage cluster sampling method using gridded population data, a GIS, and Google Earth(TM) imagery in a population-based mortality survey in Iraq.

    PubMed

    Galway, LP; Bell, Nathaniel; Sae, Al Shatari; Hagopian, Amy; Burnham, Gilbert; Flaxman, Abraham; Weiss, William M; Rajaratnam, Julie; Takaro, Tim K

    2012-04-27

    Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing a challenge of estimating mortality using retrospective population-based surveys. We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage and Google Earth TM imagery and sampling grids to select households in the second sampling stage. The sampling method is implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Sampling is a challenge in retrospective population-based mortality studies and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context specific challenges of the study setting. This sampling strategy, or variations on it, are adaptable and should be considered and tested in other conflict settings.
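
The first sampling stage described above (selecting clusters from gridded population data) is commonly implemented as systematic probability-proportional-to-size selection; a hedged sketch with hypothetical grid counts (the authors' GIS workflow is not reproduced here):

```python
import random
from itertools import accumulate

def pps_systematic(cluster_pops, n_clusters, seed=0):
    # First-stage cluster selection: systematic sampling with probability
    # proportional to size (PPS) over gridded population counts.
    rng = random.Random(seed)
    cum = list(accumulate(cluster_pops))
    total = cum[-1]
    step = total / n_clusters
    start = rng.uniform(0.0, step)
    chosen, idx = [], 0
    for i in range(n_clusters):
        point = start + i * step
        while idx < len(cum) - 1 and cum[idx] < point:
            idx += 1
        chosen.append(idx)
    return chosen
```

The second stage (overlaying a sampling grid on Google Earth imagery and choosing households at random grid points) is a field procedure rather than a computation, so only the first stage is sketched.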

  12. A two-stage cluster sampling method using gridded population data, a GIS, and Google EarthTM imagery in a population-based mortality survey in Iraq

    PubMed Central

    2012-01-01

    Background Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing a challenge of estimating mortality using retrospective population-based surveys. Results We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage and Google Earth TM imagery and sampling grids to select households in the second sampling stage. The sampling method is implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Conclusion Sampling is a challenge in retrospective population-based mortality studies and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context specific challenges of the study setting. This sampling strategy, or variations on it, are adaptable and should be considered and tested in other conflict settings. PMID:22540266

  13. Liver Rapid Reference Set Application: Hiro Yamada - Wako (2011) — EDRN Public Portal

    Cancer.gov

    The aim is to measure the clinical effectiveness of AFP-L3 and DCP for the early detection of HCC in patient samples collected prospectively during surveillance. However, such samples are not readily available in the USA. Because the reference set samples are well characterized and studied, gaining access to them will allow Wako to quickly measure the clinical effectiveness of AFP-L3 and DCP in detecting early HCC.

  14. [Study on infrared spectrum change of Ganoderma lucidum and its extracts].

    PubMed

    Chen, Zao-Xin; Xu, Yong-Qun; Chen, Xiao-Kang; Huang, Dong-Lan; Lu, Wen-Guan

    2013-05-01

    From the determination of the infrared spectra of four substances (original Ganoderma lucidum and its water extract, 95% ethanol extract, and petroleum ether extract), it was found that the infrared spectrum carries systematic chemical information and basically reflects the distribution of each component of the analyte. Ganoderma lucidum and its extracts can be distinguished according to the ratio of the absorption peak areas at 3416-3279, 1541, and 723 cm(-1) to that at 2935-2852 cm(-1). A method of calculating the information entropy of a sample set from Euclidean distances is proposed, the relationship between the information entropy and the amount of chemical information carried by the sample set is discussed, and the authors conclude that the sample set of original Ganoderma lucidum carries the most abundant chemical information. In hierarchical cluster analysis of the four sample sets, the infrared spectrum set of original Ganoderma lucidum gives the best clustering of Ganoderma atrum, cyan Ganoderma, Ganoderma multiplicatum, and Ganoderma lucidum. The results show that the infrared spectrum carries chemical information about the material structure and is closely related to the chemical composition of the system. The higher the information entropy, the richer the chemical information and the greater the benefit for pattern recognition. This study provides guidance for the construction of sample sets in pattern recognition.
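
The abstract does not spell out how the information entropy of a sample set is computed from Euclidean distances; one plausible reading (an assumption, not necessarily the authors' construction) is the Shannon entropy of the binned pairwise-distance distribution:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def distance_entropy(spectra, n_bins=10):
    # Shannon entropy (bits) of the binned pairwise Euclidean distance
    # distribution of a sample set; the binning scheme is an assumption.
    d = [euclidean(a, b) for i, a in enumerate(spectra) for b in spectra[i + 1:]]
    lo, hi = min(d), max(d)
    width = (hi - lo) / n_bins or 1.0   # degenerate case: all distances equal
    counts = [0] * n_bins
    for x in d:
        counts[min(int((x - lo) / width), n_bins - 1)] += 1
    n = len(d)
    return -sum(c / n * math.log2(c / n) for c in counts if c)
```

Under this reading, a set of near-duplicate spectra yields near-zero entropy, while a chemically diverse set spreads its pairwise distances over many bins and scores higher, matching the abstract's claim that higher entropy means richer chemical information.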

  15. You Cannot Step Into the Same River Twice: When Power Analyses Are Optimistic.

    PubMed

    McShane, Blakeley B; Böckenholt, Ulf

    2014-11-01

    Statistical power depends on the size of the effect of interest. However, effect sizes are rarely fixed in psychological research: Study design choices, such as the operationalization of the dependent variable or the treatment manipulation, the social context, the subject pool, or the time of day, typically cause systematic variation in the effect size. Ignoring this between-study variation, as standard power formulae do, results in assessments of power that are too optimistic. Consequently, when researchers attempting replication set sample sizes using these formulae, their studies will be underpowered and will thus fail at a greater than expected rate. We illustrate this with both hypothetical examples and data on several well-studied phenomena in psychology. We provide formulae that account for between-study variation and suggest that researchers set sample sizes with respect to our generally more conservative formulae. Our formulae generalize to settings in which there are multiple effects of interest. We also introduce an easy-to-use website that implements our approach to setting sample sizes. Finally, we conclude with recommendations for quantifying between-study variation. © The Author(s) 2014.
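
The correction described above can be sketched numerically (stdlib Python; the z-test setting, integration scheme, and parameters are illustrative, not the authors' formulae): fixed-effect power evaluates the power curve at a single effect size, while the heterogeneity-aware version averages it over effect ~ Normal(d, tau^2).

```python
from statistics import NormalDist

Z = NormalDist()

def power_fixed(d, n, alpha=0.05):
    # Standard two-sample z-test power (per-group n, two-sided critical
    # value, lower rejection tail ignored) for a fixed standardized effect d.
    z_crit = Z.inv_cdf(1 - alpha / 2)
    return 1 - Z.cdf(z_crit - d * (n / 2) ** 0.5)

def power_heterogeneous(d, tau, n, alpha=0.05, grid=4001):
    # Expected power when the replication's true effect varies between
    # studies: effect ~ Normal(d, tau^2), integrated on a +/- 6 tau grid.
    z_crit = Z.inv_cdf(1 - alpha / 2)
    lo, hi = d - 6 * tau, d + 6 * tau
    step = (hi - lo) / (grid - 1)
    total = 0.0
    for i in range(grid):
        e = lo + i * step
        weight = Z.pdf((e - d) / tau) / tau
        total += weight * (1 - Z.cdf(z_crit - e * (n / 2) ** 0.5)) * step
    return total
```

Whenever the fixed-effect power is in the usual target range (above roughly 50%), averaging over between-study variation pulls the expected power down, which is exactly why replications sized by the standard formula fail more often than planned.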

  16. Screening experiments of ecstasy street samples using near infrared spectroscopy.

    PubMed

    Sondermann, N; Kovar, K A

    1999-12-20

    Twelve different sets of confiscated ecstasy samples were analysed by both near infrared spectroscopy in reflectance mode (1100-2500 nm) and high-performance liquid chromatography (HPLC). The sets showed a large variance in composition. A calibration data set was generated based on the theory of factorial designs. It contained 221 N-methyl-3,4-methylenedioxyamphetamine (MDMA) samples, 167 N-ethyl-3,4-methylenedioxyamphetamine (MDE) samples, 111 amphetamine samples, and 106 samples without a controlled substance, which will be called placebo samples hereafter. From this data set, PLS-1 models were calculated and were successfully applied for validation of various external laboratory test sets. The transferability of these results to confiscated tablets is demonstrated here. It is shown that differentiation into placebo, amphetamine, and ecstasy samples is possible. Analysis of intact tablets is practicable; however, more reliable results are obtained from pulverised samples, owing to ill-defined production procedures. The use of mathematically pretreated spectra improves the prediction quality of all the PLS-1 models studied. It is possible to improve discrimination between MDE and MDMA with the help of a second model based on raw spectra. Alternative strategies are briefly discussed.
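
"Mathematically pretreated spectra" in the abstract above typically refers to transforms such as the standard normal variate (SNV) or spectral derivatives; a sketch of two common pretreatments (chosen as representative examples, not necessarily the ones used in the study):

```python
import statistics

def snv(spectrum):
    # Standard normal variate (SNV): center each spectrum and scale by its
    # own standard deviation, suppressing multiplicative scatter effects.
    mu = statistics.fmean(spectrum)
    sd = statistics.pstdev(spectrum) or 1.0   # guard against flat spectra
    return [(x - mu) / sd for x in spectrum]

def first_derivative(spectrum):
    # Simple first-difference derivative, another common pretreatment that
    # removes constant baseline offsets between spectra.
    return [b - a for a, b in zip(spectrum, spectrum[1:])]
```

Either transform would be applied to every calibration and test spectrum before fitting a PLS-1 model, which is the step the abstract credits with improving prediction quality.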

  17. Data on dissolved pesticides and volatile organic compounds in surface and ground waters in the San Joaquin-Tulare basins, California, water years 1992-1995

    USGS Publications Warehouse

    Kinsey, Willie B.; Johnson, Mark V.; Gronberg, JoAnn M.

    2005-01-01

    This report contains pesticide, volatile organic compound, major ion, nutrient, tritium, stable isotope, organic carbon, and trace-metal data collected from 149 ground-water wells, and pesticide data collected from 39 surface-water stream sites in the San Joaquin Valley of California. Included with the ground-water data are field measurements of pH, specific conductance, alkalinity, temperature, and dissolved oxygen. This report describes the data collection procedures, analytical methods, quality assurance, and quality controls used by the National Water-Quality Assessment Program to ensure data reliability. Data contained in this report were collected over a four-year period by the San Joaquin-Tulare Basins Study Unit of the United States Geological Survey's National Water-Quality Assessment Program. Surface-water-quality data collection began in April 1992, with sampling done three times a week at three sites as part of a pilot study conducted to provide background information for the surface-water-study design. Monthly samples were collected at 10 sites for major ions and nutrients from January 1993 to March 1995. Additional samples were collected at four of these sites, from January to December 1993, to study spatial and temporal variability in dissolved pesticide concentrations. Samples for several synoptic studies were collected from 1993 to 1995. Ground-water-quality data collection was restricted to the eastern alluvial fans subarea of the San Joaquin Valley. Data collection began in 1993 with the sampling of 21 wells in vineyard land-use settings. In 1994, 29 wells were sampled in almond land-use settings and 9 in vineyard land-use settings; an additional 11 wells were sampled along a flow path in the eastern Fresno County vineyard land-use area. Among the 79 wells sampled in 1995, 30 wells were in the corn, alfalfa, and vegetable land-use setting, and 1 well was in the vineyard land-use setting; an additional 20 were flow-path wells. Also sampled in 1995 were 28 wells used for a regional assessment of ground-water quality in the eastern San Joaquin Valley.

  18. Population-Based Preference Weights for the EQ-5D Health States Using the Visual Analogue Scale (VAS) in Iran.

    PubMed

    Goudarzi, Reza; Zeraati, Hojjat; Akbari Sari, Ali; Rashidian, Arash; Mohammad, Kazem

    2016-02-01

    Health-related quality of life (HRQoL) is used as a measure to value healthcare interventions and guide policy making. The EuroQol EQ-5D is a widely used generic preference-based instrument for measuring health-related quality of life. The objective of this study was to develop a value set of the EQ-5D health states for an Iranian population. This cross-sectional study drew a sample of 869 participants, selected using a stratified probability sampling method. The sample was taken from individuals living in the city of Tehran and was stratified by age and gender, from July to November 2013. Respondents valued 13 health states using the visual analogue scale (VAS) of the EQ-5D. Several fixed-effects regression models were tested to predict the full set of health states. We selected the final model based on the logical consistency of the estimates, the sign and magnitude of the regression coefficients, goodness of fit, and parsimony. We also compared predicted values with value sets from similar studies in the UK and other countries. Our results show that HRQoL does not vary among socioeconomic groups. Models at the individual level resulted in an additive model with all coefficients statistically significant, R(2) = 0.55, a value of 0.75 for the best health state (11112), and a value of -0.074 for the worst health state (33333). The value set obtained for the study sample differs markedly from those elicited in developed countries. This study provides the first estimate of an EQ-5D value set based on the VAS in Iran. Given the importance of locally adapted value sets, the use of this value set can be recommended for future studies in Iran and in the EMRO region.

  19. Mining pathway associations for disease-related pathway activity analysis based on gene expression and methylation data.

    PubMed

    Lee, Hyeonjeong; Shin, Miyoung

    2017-01-01

    The problem of discovering genetic markers as disease signatures is of great significance for the successful diagnosis, treatment, and prognosis of complex diseases. Although many earlier studies worked on identifying disease markers from a variety of biological resources, they mostly focused on markers of genes or gene-sets (i.e., pathways). However, these markers may not be enough to explain biological interactions between genetic variables that are related to diseases. Thus, in this study, our aim is to investigate distinctive associations among active pathways (i.e., pathway-sets) shown in case and control samples, which can be observed from gene expression and/or methylation data. The pathway-sets are obtained by identifying a set of associated pathways that are often active together over a significant number of class samples. For this purpose, gene expression or methylation profiles are first analyzed to identify significant (active) pathways via gene-set enrichment analysis. Then, regarding these active pathways, an association rule mining approach is applied to examine interesting pathway-sets in each class of samples (case or control). By doing so, the sets of associated pathways often working together in activity profiles are finally chosen as our distinctive signature of each class. The identified pathway-sets are aggregated into a pathway activity network (PAN), which facilitates the visualization of differential pathway associations between case and control samples. From our experiments with two publicly available datasets, we could find interesting PAN structures as the distinctive signatures of breast cancer and uterine leiomyoma, respectively. Our pathway-set markers were shown to be superior or very comparable to other genetic markers (such as genes or gene-sets) in disease classification. Furthermore, the PAN structure, which can be constructed from the identified markers of pathway-sets, could provide deeper insights into distinctive associations between pathway activities in case and control samples.

  20. Analysis of Duplicated Multiple-Samples Rank Data Using the Mack-Skillings Test.

    PubMed

    Carabante, Kennet Mariano; Alonso-Marenco, Jose Ramon; Chokumnoyporn, Napapan; Sriwattana, Sujinda; Prinyawiwatkul, Witoon

    2016-07-01

    Appropriate analysis for duplicated multiple-samples rank data is needed. This study compared analysis of duplicated rank preference data using the Friedman versus Mack-Skillings tests. Panelists (n = 125) ranked 2 orange juice sets twice: a different-samples set (100%, 70%, vs. 40% juice) and a similar-samples set (100%, 95%, vs. 90%). These 2 sample sets were designed to give contrasting differences in preference. For each sample set, rank sum data were obtained from (1) averaged rank data of each panelist from the 2 replications (n = 125), (2) rank data of all panelists from each of the 2 separate replications (n = 125 each), (3) joint rank data of all panelists from the 2 replications (n = 125), and (4) rank data of all panelists pooled from the 2 replications (n = 250); rank data (1), (2), and (4) were analyzed separately by the Friedman test, while those from (3) were analyzed by the Mack-Skillings test. The effect of sample size (n = 10 to 125) was evaluated. For the similar-samples set, higher variations in rank data from the 2 replications were observed; therefore, results of the main effects were more inconsistent among methods and sample sizes. Regardless of analysis method, the larger the sample size, the higher the χ² value and the lower the P-value (testing H0: all samples are not different). Analyzing rank data (2) separately by replication yielded inconsistent conclusions across sample sizes, hence this method is not recommended. The Mack-Skillings test was more sensitive than the Friedman test. Furthermore, it takes into account within-panelist variations and is more appropriate for analyzing duplicated rank data. © 2016 Institute of Food Technologists®
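
The Friedman statistic used as the baseline above can be computed directly from the per-panelist rank rows; a stdlib sketch (the Mack-Skillings test, which additionally accounts for replicated rankings per panelist, is omitted for brevity). The statistic is compared against a chi-square distribution with k - 1 degrees of freedom.

```python
def friedman_statistic(rank_rows):
    # Friedman chi-square from per-panelist rank rows: each of the n rows
    # ranks the same k samples 1..k. Large values reject H0 that all
    # samples are ranked equivalently.
    n = len(rank_rows)
    k = len(rank_rows[0])
    col_sums = [sum(row[j] for row in rank_rows) for j in range(k)]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in col_sums) - 3.0 * n * (k + 1)
```

With k = 3 samples and n panelists, perfectly consistent rankings give the maximum statistic n(k - 1), while rankings that cancel out across panelists give 0, which is why the similar-samples set in the study produced less stable conclusions.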

  21. Occurrence of nitrate and pesticides in ground water beneath three agricultural land-use settings in the eastern San Joaquin Valley, California, 1993-1995

    USGS Publications Warehouse

    Burow, Karen R.; Shelton, Jennifer L.; Dubrovsky, Neil M.

    1998-01-01

    The processes that affect nitrate and pesticide occurrence may be better understood by relating ground-water quality to natural and human factors in the context of distinct, regionally extensive land-use settings. This study assesses nitrate and pesticide occurrence in ground water beneath three agricultural land-use settings in the eastern San Joaquin Valley, California. Water samples were collected from 60 domestic wells in vineyard, almond, and a crop grouping of corn, alfalfa, and vegetable land-use settings. Each well was sampled once during 1993-1995. This study is one element of the U.S. Geological Survey's National Water-Quality Assessment Program, which is designed to assess the status of, and trends in, the quality of the nation's ground- and surface-water resources and to link the status and trends with an understanding of the natural and human factors that affect the quality of water. The concentrations and occurrence of nitrate and pesticides in ground-water samples from domestic wells in the eastern alluvial fan physiographic region were related to differences in chemical applications and to the physical and biogeochemical processes that characterize each of the three land-use settings. Ground water beneath the vineyard and almond land-use settings on the coarse-grained, upper and middle parts of the alluvial fans is more vulnerable to nonpoint-source agricultural contamination than is the ground water beneath the corn, alfalfa, and vegetable land-use setting on the lower part of the fans, near the basin physiographic region. Nitrate concentrations ranged from less than 0.05 to 55 milligrams per liter, as nitrogen. Nitrate concentrations were significantly higher in the almond land-use setting than in the vineyard land-use setting, whereas concentrations in the corn, alfalfa, and vegetable land-use setting were intermediate. Nitrate concentrations exceeded the maximum contaminant level in eight samples from the almond land-use setting (40 percent), in seven samples from the corn, alfalfa, and vegetable land-use setting (35 percent), and in three samples from the vineyard land-use setting (15 percent). The physical and chemical characteristics of the vineyard and the almond land-use settings are similar, characterized by coarse-grained sediments and high dissolved-oxygen concentrations, reflecting processes that promote rapid infiltration of water and solutes. The high nitrate concentrations in the almond land-use setting reflect the high amount of nitrogen applications in this setting, whereas the low nitrate concentrations in the vineyard land-use setting reflect relatively low nitrogen applications. In the corn, alfalfa, and vegetable land-use setting, the relatively fine-grained sediments and low dissolved-oxygen concentrations reflect processes that result in slow infiltration rates and longer ground-water residence times. The intermediate nitrate concentrations in the corn, alfalfa, and vegetable land-use setting are a result of these physical and chemical characteristics, combined with generally high (but variable) nitrogen applications. Twenty-three different pesticides were detected in 41 of 60 ground-water samples (68 percent). Eighty percent of the ground-water samples from the vineyard land-use setting had at least one pesticide detection, followed by 70 percent in the almond land-use setting, and 55 percent in the corn, alfalfa, and vegetable land-use setting. All concentrations were less than state or federal maximum contaminant levels (only 5 of the detected pesticides have established maximum contaminant levels), with the exception of 1,2-dibromo-3-chloropropane, which exceeded the maximum contaminant level of 0.2 micrograms per liter in 10 ground-water samples from vineyard land-use wells and in 5 ground-water samples from almond land-use wells.
Simazine was detected most often, occurring in 50 percent of the ground-water samples from the vineyard land-use wells and in 30 percent

  2. Training set optimization under population structure in genomic selection

    USDA-ARS?s Scientific Manuscript database

    The optimization of the training set (TRS) in genomic selection (GS) has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the Coefficient of D...

  3. SELDI-TOF-MS proteomic profiling of serum, urine, and amniotic fluid in neural tube defects.

    PubMed

    Liu, Zhenjiang; Yuan, Zhengwei; Zhao, Qun

    2014-01-01

    Neural tube defects (NTDs) are common birth defects for which specific biomarkers are needed. The purpose of this pilot study was to determine whether protein profiles in NTD-mothers differ from those of normal controls using SELDI-TOF-MS. A ProteinChip Biomarker System was used to evaluate 82 maternal serum samples, 78 urine samples and 76 amniotic fluid samples. The validity of the classification tree was then challenged with a blind test set including another 20 NTD-mothers and 18 controls for serum samples, another 19 NTD-mothers and 17 controls for urine samples, and another 20 NTD-mothers and 17 controls for amniotic fluid samples. Eight proteins detected in serum samples were up-regulated and four proteins were down-regulated in the NTD group. Four proteins detected in urine samples were up-regulated and one protein was down-regulated in the NTD group. Six proteins detected in amniotic fluid samples were up-regulated and one protein was down-regulated in the NTD group. The classification tree for serum samples separated NTDs from healthy individuals, achieving a sensitivity of 91% and a specificity of 97% in the training set, and a sensitivity of 90%, a specificity of 97% and a positive predictive value of 95% in the test set. The classification tree for urine samples separated NTDs from controls, achieving a sensitivity of 95% and a specificity of 94% in the training set, and a sensitivity of 89%, a specificity of 82% and a positive predictive value of 85% in the test set. The classification tree for amniotic fluid samples separated NTDs from controls, achieving a sensitivity of 93% and a specificity of 89% in the training set, and a sensitivity of 90%, a specificity of 88% and a positive predictive value of 90% in the test set. These results suggest that SELDI-TOF-MS is an additional method for the detection of NTD pregnancies.
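    The sensitivity, specificity, and positive predictive values reported above reduce to simple ratios over a confusion matrix. A minimal sketch with hypothetical counts (not the study's actual tallies):

```python
# Illustrative only: how sensitivity, specificity, and positive predictive
# value (PPV) of the kind reported above are computed from confusion counts.
def diagnostic_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity, PPV) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of true cases detected
    specificity = tn / (tn + fp)   # fraction of controls correctly cleared
    ppv = tp / (tp + fp)           # fraction of positive calls that are true
    return sensitivity, specificity, ppv

# Hypothetical counts for a blind test set of 20 cases and 18 controls
sens, spec, ppv = diagnostic_metrics(tp=18, fn=2, tn=17, fp=1)
print(round(sens, 2), round(spec, 2), round(ppv, 2))  # 0.9 0.94 0.95
```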

  4. NHEXAS PHASE I ARIZONA STUDY--METALS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Blood data set contains analytical results for measurements of up to 2 metals in 165 blood samples over 165 households. Each sample was collected as a venous sample from the primary respondent within each household during Stage III of the NHEXAS study. The samples...

  5. Caught you: threats to confidentiality due to the public release of large-scale genetic data sets

    PubMed Central

    2010-01-01

    Background: Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. Discussion: The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with less than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged to the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case as a real-life example is used to illustrate that approach. Summary: Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public. PMID:21190545

  6. Caught you: threats to confidentiality due to the public release of large-scale genetic data sets.

    PubMed

    Wjst, Matthias

    2010-12-29

    Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with less than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged to the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case as a real-life example is used to illustrate that approach. Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public.

  7. The IRHUM (Isotopic Reconstruction of Human Migration) database - bioavailable strontium isotope ratios for geochemical fingerprinting in France

    NASA Astrophysics Data System (ADS)

    Willmes, M.; McMorrow, L.; Kinsley, L.; Armstrong, R.; Aubert, M.; Eggins, S.; Falguères, C.; Maureille, B.; Moffat, I.; Grün, R.

    2014-03-01

    Strontium isotope ratios (87Sr / 86Sr) are a key geochemical tracer used in a wide range of fields including archaeology, ecology, food and forensic sciences. These applications are based on the principle that the Sr isotopic ratios of natural materials reflect the sources of strontium available during their formation. A major constraint for current studies is the lack of robust reference maps to evaluate the source of strontium isotope ratios measured in the samples. Here we provide a new data set of bioavailable Sr isotope ratios for the major geologic units of France, based on plant and soil samples (Pangaea data repository doi:10.1594/PANGAEA.819142). The IRHUM (Isotopic Reconstruction of Human Migration) database is a web platform to access, explore and map our data set. The database provides the spatial context and metadata for each sample, allowing the user to evaluate the suitability of the sample for their specific study. In addition, it allows users to upload and share their own data sets and data products, which will enhance collaboration across the different research fields. This article describes the sampling and analytical methods used to generate the data set and how to use and access the data set through the IRHUM database. Any interpretation of the isotope data set is outside the scope of this publication.

  8. How large a training set is needed to develop a classifier for microarray data?

    PubMed

    Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M

    2008-01-01

    A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.
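    The abstract does not give the authors' formula, but the dependence of training-set size on standardized fold change and number of genes can be illustrated with a conventional two-sample size rule using a Bonferroni-adjusted alpha. This is a rough sketch, not the paper's exact model:

```python
from math import ceil
from statistics import NormalDist

def samples_per_class(delta, n_genes, alpha=0.05, power=0.95):
    """Rough per-class training-set size for detecting one informative gene
    with standardized fold change `delta`, with alpha split across `n_genes`
    comparisons (Bonferroni). A simplified two-sample rule, not the
    authors' model-based approach."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / (2 * n_genes))  # multiplicity-adjusted critical value
    z_b = z(power)                      # quantile for the desired power
    return ceil(2 * (z_a + z_b) ** 2 / delta ** 2)

# Larger fold changes or fewer genes shrink the required training set
print(samples_per_class(delta=1.0, n_genes=10000))
```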

  9. Determinants in the development of advanced nursing practice: a case study of primary-care settings in Hong Kong.

    PubMed

    Twinn, Sheila; Thompson, David R; Lopez, Violeta; Lee, Diana T F; Shiu, Ann T Y

    2005-01-01

    Different factors have been shown to influence the development of models of advanced nursing practice (ANP) in primary-care settings. Although ANP is being developed in hospitals in Hong Kong, China, it remains undeveloped in primary care and little is known about the factors determining the development of such a model. The aims of the present study were to investigate the contribution of different models of nursing practice to the care provided in primary-care settings in Hong Kong, and to examine the determinants influencing the development of a model of ANP in such settings. A multiple case study design was selected using both qualitative and quantitative methods of data collection. Sampling methods reflected the population groups and stage of the case study. Sampling included a total population of 41 nurses from whom a secondary volunteer sample was drawn for face-to-face interviews. In each case study, a convenience sample of 70 patients were recruited, from whom 10 were selected purposively for a semi-structured telephone interview. An opportunistic sample of healthcare professionals was also selected. The within-case and cross-case analysis demonstrated four major determinants influencing the development of ANP: (1) current models of nursing practice; (2) the use of skills mix; (3) the perceived contribution of ANP to patient care; and (4) patients' expectations of care. The level of autonomy of individual nurses was considered particularly important. These determinants were used to develop a model of ANP for a primary-care setting. In conclusion, although the findings highlight the complexity determining the development and implementation of ANP in primary care, the proposed model suggests that definitions of advanced practice are appropriate to a range of practice models and cultural settings. However, the findings highlight the importance of assessing the effectiveness of such models in terms of cost and long-term patient outcomes.

  10. On the Analysis of Case-Control Studies in Cluster-correlated Data Settings.

    PubMed

    Haneuse, Sebastien; Rivera-Rodriguez, Claudia

    2018-01-01

    In resource-limited settings, long-term evaluation of national antiretroviral treatment (ART) programs often relies on aggregated data, the analysis of which may be subject to ecological bias. As researchers and policy makers consider evaluating individual-level outcomes such as treatment adherence or mortality, the well-known case-control design is appealing in that it provides efficiency gains over random sampling. In the context that motivates this article, valid estimation and inference requires acknowledging any clustering, although, to our knowledge, no statistical methods have been published for the analysis of case-control data for which the underlying population exhibits clustering. Furthermore, in the specific context of an ongoing collaboration in Malawi, rather than performing case-control sampling across all clinics, case-control sampling within clinics has been suggested as a more practical strategy. To our knowledge, although similar outcome-dependent sampling schemes have been described in the literature, a case-control design specific to correlated data settings is new. In this article, we describe this design, discuss balanced versus unbalanced sampling techniques, and provide a general approach to analyzing case-control studies in cluster-correlated settings based on inverse probability-weighted generalized estimating equations. Inference is based on a robust sandwich estimator with correlation parameters estimated to ensure appropriate accounting of the outcome-dependent sampling scheme. We conduct comprehensive simulations, based in part on real data on a sample of N = 78,155 program registrants in Malawi between 2005 and 2007, to evaluate small-sample operating characteristics and potential trade-offs associated with standard case-control sampling or when case-control sampling is performed within clusters.

  11. Determination of Slake Durability Index (Sdi) Values on Different Shape of Laminated Marl Samples

    NASA Astrophysics Data System (ADS)

    Ankara, Hüseyin; Çiçek, Fatma; Talha Deniz, İsmail; Uçak, Emre; Yerel Kandemir, Süheyla

    2016-10-01

    The slake durability index (SDI) test is widely used to determine the disintegration characteristics of weak and clay-bearing rocks in geo-engineering problems. However, because differently shaped sample pieces, such as irregular shapes, display mechanical breakage during the slaking process, the SDI test has some limitations that affect the index values. In addition, the shape and surface roughness of laminated marl samples have a severe influence on the SDI. In this study, a new sample preparation method called the Pasha Method was used to prepare spherical specimens from laminated marl collected from the Seyitomer colliery (SLI). Moreover, the SDI tests were performed on specimens of equal size and weight: three sets with different shapes were used. The three different sets were prepared as test samples with a sphere shape, an irregular shape parallel to the layers, and an irregular shape vertical to the layers. Index values were determined for the three different sets subjected to the SDI test for 4 cycles. The index values at the end of the fourth cycle were found to be 98.43, 98.39 and 97.20 %, respectively. As seen, the index values of the sphere sample set were higher than those of the irregular sample sets.
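    For context, the index itself is a simple ratio: the oven-dry weight retained after each wet-dry cycle as a percentage of the initial oven-dry weight. A minimal sketch with hypothetical weights (not the paper's data):

```python
def slake_durability_index(initial_weight, retained_weights):
    """Return the SDI (%) after each wetting-drying cycle, given the
    initial oven-dry weight and the retained weight after each cycle."""
    return [100.0 * w / initial_weight for w in retained_weights]

# Hypothetical 500 g sphere-shaped specimen tracked over 4 cycles
print(slake_durability_index(500.0, [499.0, 497.5, 495.0, 492.2]))
```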

  12. Development of a universal metabolome-standard method for long-term LC-MS metabolome profiling and its application for bladder cancer urine-metabolite-biomarker discovery.

    PubMed

    Peng, Jun; Chen, Yi-Ting; Chen, Chien-Lun; Li, Liang

    2014-07-01

    Large-scale metabolomics studies require a quantitative method to generate metabolome data over an extended period with high technical reproducibility. We report a universal metabolome-standard (UMS) method, in conjunction with chemical isotope labeling liquid chromatography-mass spectrometry (LC-MS), to provide long-term analytical reproducibility and facilitate metabolome comparison among different data sets. In this method, a UMS of a specific type of sample labeled by an isotope reagent is prepared a priori. The UMS is spiked into any individual samples labeled by another form of the isotope reagent in a metabolomics study. The resultant mixture is analyzed by LC-MS to provide relative quantification of the individual sample metabolome against the UMS. The UMS is independent of the study undertaken as well as the time of analysis, and is useful for profiling the same type of samples in multiple studies. In this work, the UMS method was developed and applied to a urine metabolomics study of bladder cancer. A UMS of human urine was prepared by (13)C2-dansyl labeling of a pooled sample from 20 healthy individuals. This method was first used to profile the discovery samples to generate a list of putative biomarkers potentially useful for bladder cancer detection, and then to analyze the verification samples about one year later. Within the discovery sample set, three-month technical reproducibility was examined using a quality control sample; a mean CV of 13.9% and a median CV of 9.4% were found for all the quantified metabolites. Statistical analysis of the urine metabolome data showed a clear separation between the bladder cancer group and the control group in the discovery samples, which was confirmed by the verification samples. Receiver operating characteristic (ROC) analysis showed that the area under the curve (AUC) was 0.956 in the discovery data set and 0.935 in the verification data set. 
These results demonstrated the utility of the UMS method for long-term metabolomics and discovering potential metabolite biomarkers for diagnosis of bladder cancer.

  13. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

    Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models several times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; in contrast, PLHS generates a series of smaller sub-sets (also called 'slices') such that: (1) each sub-set is Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive addition of sub-sets remains Latin hypercube; and thus (3) the entire sample set is Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
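    As a sketch of the one-stage Latin hypercube sampling that PLHS builds on (plain LHS, not the progressive slicing scheme itself), here is a minimal pure-Python version; the stratification property described above, exactly one sample per stratum in every one-dimensional projection, can be checked directly:

```python
import random

def latin_hypercube(n, d, rng=random.Random(0)):
    """One-stage Latin hypercube sample of n points in [0, 1)^d: each
    dimension is split into n equal strata and each stratum is hit exactly
    once. A sketch of plain LHS, not the progressive PLHS scheme."""
    perms = [rng.sample(range(n), n) for _ in range(d)]  # one permutation per dimension
    return [[(perms[j][i] + rng.random()) / n for j in range(d)]
            for i in range(n)]

pts = latin_hypercube(8, 2)
# Every 1-D projection hits each of the 8 strata exactly once:
for j in range(2):
    assert sorted(int(p[j] * 8) for p in pts) == list(range(8))
```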

  14. INCORPORATING PRIOR KNOWLEDGE IN ENVIRONMENTAL SAMPLING: RANKED SET SAMPLING AND OTHER DOUBLE SAMPLING PROCEDURES

    EPA Science Inventory

    Environmental sampling can be difficult and expensive to carry out. Those taking the samples would like to integrate their knowledge of the system of study or their judgment about the system into the sample selection process to decrease the number of necessary samples. However,...

  15. Examination of the MMPI-2 restructured form (MMPI-2-RF) validity scales in civil forensic settings: findings from simulation and known group samples.

    PubMed

    Wygant, Dustin B; Ben-Porath, Yossef S; Arbisi, Paul A; Berry, David T R; Freeman, David B; Heilbronner, Robert L

    2009-11-01

    The current study examined the effectiveness of the MMPI-2 Restructured Form (MMPI-2-RF; Ben-Porath and Tellegen, 2008) over-reporting indicators in civil forensic settings. The MMPI-2-RF includes three revised MMPI-2 over-reporting validity scales and a new scale to detect over-reported somatic complaints. Participants dissimulated medical and neuropsychological complaints in two simulation samples, and a known-groups sample used symptom validity tests as a response bias criterion. Results indicated large effect sizes for the MMPI-2-RF validity scales, including a Cohen's d of .90 for Fs in a head injury simulation sample, 2.31 for FBS-r, 2.01 for F-r, and 1.97 for Fs in a medical simulation sample, and 1.45 for FBS-r and 1.30 for F-r in identifying poor effort on SVTs. Classification results indicated good sensitivity and specificity for the scales across the samples. This study indicates that the MMPI-2-RF over-reporting validity scales are effective at detecting symptom over-reporting in civil forensic settings.

  16. Evaluation of Surface Sampling for Bacillus Spores Using ...

    EPA Pesticide Factsheets

    In this study, commercially available domestic cleaning robots were evaluated for spore surface-sampling efficiency on common indoor surfaces. The current study determined the sampling efficiency of each robot without modifying the sensors, algorithms, or logic set by the manufacturers.

  17. Mid-infrared spectroscopy combined with chemometrics to detect Sclerotinia stem rot on oilseed rape (Brassica napus L.) leaves.

    PubMed

    Zhang, Chu; Feng, Xuping; Wang, Jian; Liu, Fei; He, Yong; Zhou, Weijun

    2017-01-01

    Detection of plant diseases in a fast and simple way is crucial for timely disease control. Conventionally, plant diseases are accurately identified by DNA-, RNA- or serology-based methods, which are time consuming, complex and expensive. Mid-infrared spectroscopy is a promising technique that simplifies the detection procedure for the disease. Mid-infrared spectroscopy was used to identify the spectral differences between healthy and infected oilseed rape leaves. Two different sample sets from two experiments were used to explore and validate the feasibility of using mid-infrared spectroscopy to detect Sclerotinia stem rot (SSR) on oilseed rape leaves. The average mid-infrared spectra showed differences between healthy and infected leaves, and the differences varied among sample sets. Optimal wavenumbers for the 2 sample sets selected by the second-derivative spectra were similar, indicating the efficacy of selecting optimal wavenumbers. Chemometric methods were further used to quantitatively detect the oilseed rape leaves infected by SSR, including partial least squares-discriminant analysis, support vector machines and extreme learning machines. The discriminant models using the full spectra and the optimal wavenumbers of the 2 sample sets were effective, with classification accuracies over 80%. The discriminant results for the 2 sample sets varied due to variations in the samples. The use of two sample sets confirmed the feasibility of using mid-infrared spectroscopy and chemometric methods for detecting SSR on oilseed rape leaves. The similarities among the selected optimal wavenumbers in different sample sets make it feasible to simplify the models and build practical models. Mid-infrared spectroscopy is a reliable and promising technique for SSR control. This study helps in developing practical applications of mid-infrared spectroscopy combined with chemometrics to detect plant disease.

  18. Association of High Myopia with Crystallin Beta A4 (CRYBA4) Gene Polymorphisms in the Linkage-Identified MYP6 Locus

    PubMed Central

    Ho, Daniel W. H.; Yap, Maurice K. H.; Ng, Po Wah; Fung, Wai Yan; Yip, Shea Ping

    2012-01-01

    Background: Myopia is the most common ocular disorder worldwide and imposes a tremendous burden on society. It is a complex disease. The MYP6 locus at 22q12 is of particular interest because many studies have detected linkage signals at this interval. The MYP6 locus is likely to contain susceptibility gene(s) for myopia, but none has yet been identified. Methodology/Principal Findings: Two independent subject groups of southern Chinese in Hong Kong participated in the study: an initial study using a discovery sample set of 342 cases and 342 controls, and a follow-up study using a replication sample set of 316 cases and 313 controls. Cases with high myopia were defined by spherical equivalent ≤ -8 dioptres and emmetropic controls by spherical equivalent within ±1.00 dioptre for both eyes. Manual candidate gene selection from the MYP6 locus was supported by objective in silico prioritization. DNA samples of the discovery sample set were genotyped for 178 tagging single nucleotide polymorphisms (SNPs) from 26 genes. For replication, 25 SNPs (tagging or located at predicted transcription factor or microRNA binding sites) from 4 genes were subsequently examined using the replication sample set. Fisher P values were calculated for all SNPs, and overall association results were summarized by meta-analysis. Based on the initial and replication studies, rs2009066, located in the crystallin beta A4 (CRYBA4) gene, was identified as the most significantly associated with high myopia (initial study: P = 0.02; replication study: P = 1.88e-4; meta-analysis: P = 1.54e-5) among all the SNPs tested. The association result survived correction for multiple comparisons. Under the allelic genetic model for the combined sample set, the odds ratio of the minor allele G was 1.41 (95% confidence interval, 1.21-1.64). Conclusions/Significance: A novel susceptibility gene (CRYBA4) was discovered for high myopia. 
Our study also signified the potential importance of appropriate gene prioritization in candidate selection. PMID:22792142
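    The reported odds ratio and 95% confidence interval follow the standard Wald construction on a 2x2 allele-count table. A sketch with hypothetical counts (the study's per-allele tallies are not given in the abstract):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = minor alleles in cases,    b = major alleles in cases,
    c = minor alleles in controls, d = major alleles in controls.
    Standard epidemiological formula; the CI is built on log(OR)."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical allele counts for a combined case-control sample
print(odds_ratio_ci(300, 1016, 232, 1078))
```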

  19. BOREAS TGB-5 Dissolved Organic Carbon Data from NSA Beaver Ponds

    NASA Technical Reports Server (NTRS)

    Bourbonniere, Rick; Hall, Forrest G. (Editor); Conrad, Sara K. (Editor)

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study Trace Gas Biogeochemistry (BOREAS TGB-5) team collected several data sets related to carbon and trace gas fluxes and concentrations in the Northern Study Area (NSA). This data set contains concentrations of dissolved organic and inorganic carbon species from water samples collected at various NSA sites. In particular, this set covers the NSA Tower Beaver Pond Site and the NSA Gillam Road Beaver Pond Site, including data from all visits to open water sampling locations during the BOREAS field campaigns from April to September 1994. The data are provided in tabular ASCII files.

  20. Use of Fourier-transform infrared spectroscopy to quantify immunoglobulin G concentrations in alpaca serum.

    PubMed

    Burns, J; Hou, S; Riley, C B; Shaw, R A; Jewett, N; McClure, J T

    2014-01-01

    Rapid, economical, and quantitative assays for measurement of camelid serum immunoglobulin G (IgG) are limited. In camelids, failure of transfer of maternal immunoglobulins has a reported prevalence of up to 20.5%, so an accurate method for quantifying serum IgG concentrations is required. The objective was to develop an infrared spectroscopy-based assay for measurement of alpaca serum IgG and to compare its performance to the reference-standard radial immunodiffusion (RID) assay. The study included 175 privately owned, healthy alpacas. Eighty-two serum samples were collected as convenience samples during routine herd visits, whereas 93 samples were recruited from a separate study. Serum IgG concentrations were determined by RID assays, and mid-infrared spectra were collected for each sample. Fifty samples were set aside as the test set, and the remaining 125 training samples were employed to build a calibration model using partial least squares (PLS) regression with Monte Carlo cross-validation to determine the optimum number of PLS factors. The predictive performance of the calibration model was evaluated on the test set. Correlation coefficients for the IR-based assay were 0.93 and 0.87, respectively, for the entire data set and the test set. Sensitivity in the diagnosis of failure of transfer of passive immunity (FTPI) ([IgG] <1,000 mg/dL) was 71.4% and specificity was 100% for the IR-based method (test set), as gauged relative to the RID reference assay. This study indicated that infrared spectroscopy, in combination with chemometrics, is an effective method for measurement of IgG in alpaca serum. Copyright © 2014 by the American College of Veterinary Internal Medicine.

  1. A large volume particulate and water multi-sampler with in situ preservation for microbial and biogeochemical studies

    NASA Astrophysics Data System (ADS)

    Breier, J. A.; Sheik, C. S.; Gomez-Ibanez, D.; Sayre-McCord, R. T.; Sanger, R.; Rauch, C.; Coleman, M.; Bennett, S. A.; Cron, B. R.; Li, M.; German, C. R.; Toner, B. M.; Dick, G. J.

    2014-12-01

    A new tool was developed for large volume sampling to facilitate marine microbiology and biogeochemical studies. It was developed for remotely operated vehicle and hydrocast deployments, and allows for rapid collection of multiple sample types from the water column and dynamic, variable environments such as rising hydrothermal plumes. It was used successfully during a cruise to the hydrothermal vent systems of the Mid-Cayman Rise. The Suspended Particulate Rosette V2 large volume multi-sampling system allows for the collection of 14 sample sets per deployment. Each sample set can include filtered material, whole (unfiltered) water, and filtrate. Suspended particulate can be collected on filters up to 142 mm in diameter and pore sizes down to 0.2 μm. Filtration is typically at flowrates of 2 L min-1. For particulate material, filtered volume is constrained only by sampling time and filter capacity, with all sample volumes recorded by digital flowmeter. The suspended particulate filter holders can be filled with preservative and sealed immediately after sample collection. Up to 2 L of whole water, filtrate, or a combination of the two, can be collected as part of each sample set. The system is constructed of plastics with titanium fasteners and nickel alloy spring loaded seals. There are no ferrous alloys in the sampling system. Individual sample lines are prefilled with filtered, deionized water prior to deployment and remain sealed unless a sample is actively being collected. This system is intended to facilitate studies concerning the relationship between marine microbiology and ocean biogeochemistry.

  2. A posteriori noise estimation in variable data sets. With applications to spectra and light curves

    NASA Astrophysics Data System (ADS)

    Czesla, S.; Molle, T.; Schmitt, J. H. M. M.

    2018-01-01

    Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise is accurately known prior to the measurement, so both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence; it requires a sufficiently well-sampled data set to yield reliable results. The procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate its applicability, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines for applying the procedure in situations not explicitly considered here, to promote its adoption in data analysis.
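
    The DER_SNR special case mentioned above is easy to state concretely. Below is a minimal NumPy sketch of the DER_SNR noise estimate for a well-sampled 1-D signal; the scaling constants follow the published algorithm, but the function name is ours and the code is an illustration, not the authors' implementation.

```python
import numpy as np

def der_snr_noise(flux):
    """DER_SNR-style noise estimate: the median absolute value of the
    lag-2 second difference 2*f[i] - f[i-2] - f[i+2], scaled so the
    estimate is unbiased for independent Gaussian noise."""
    f = np.asarray(flux, dtype=float)
    # the weighted sum cancels smooth signal terms, leaving mostly noise
    d = np.abs(2.0 * f[2:-2] - f[:-4] - f[4:])
    return 1.482602 / np.sqrt(6.0) * np.median(d)
```

    On a smooth signal plus Gaussian noise of known standard deviation, this recovers that standard deviation to within a few percent for a few thousand points.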

  3. Standard Specimen Reference Set: Breast Cancer and Imaging — EDRN Public Portal

    Cancer.gov

    The primary objective of this study is to assemble a well-characterized set of blood specimens and images to test biomarkers that, in conjunction with mammography, can detect and discriminate breast cancer. These samples will be divided to provide “sets” of specimens that can be tested in a number of different laboratories. Since tests will be performed on the same sets of samples, the data will be directly comparable and decisions regarding which biomarker or set of biomarkers have value in breast cancer detection can be made. These sets will reside at a National Cancer Institute facility at Frederick, MD.

  4. NHEXAS PHASE I ARIZONA STUDY--QA ANALYTICAL RESULTS FOR PESTICIDE METABOLITES IN BLANK SAMPLES

    EPA Science Inventory

    The Pesticide Metabolites in Blank Samples data set contains the analytical results of measurements of up to 4 pesticide metabolites in 3 blank samples from 3 households. Measurements were made in blank samples of urine. Blank samples were used to assess the potential for sampl...

  5. Detection of fumonisin producing Fusarium verticillioides in paddy (Oryza sativa L.) using polymerase chain reaction (PCR)

    PubMed Central

    Maheshwar, P.K.; Moharram, S. Ahmed; Janardhana, G.R.

    2009-01-01

    The study reports the occurrence of fumonisin producing Fusarium verticillioides in 90 samples of stored paddy (Oryza sativa L.) collected from different geographical regions of Karnataka, India. Fumonisin producing F. verticillioides was identified based on micromorphological characteristics and PCR using two sets of primers. One set of primers was F. verticillioides species specific, which selectively amplified the intergenic space region of rDNA. The other set of primers was specific to fumonisin producing F. verticillioides. Eight paddy samples were positive for F. verticillioides. Eleven isolates obtained from these samples were capable of producing fumonisin. PMID:24031332

  6. Systematic investigation of the relationship between high myopia and polymorphisms of the MMP2, TIMP2, and TIMP3 genes by a DNA pooling approach.

    PubMed

    Leung, Kim Hung; Yiu, Wai Chi; Yap, Maurice K H; Ng, Po Wah; Fung, Wai Yan; Sham, Pak Chung; Yip, Shea Ping

    2011-06-01

    This study examined the relationship between high myopia and three myopia candidate genes--matrix metalloproteinase 2 (MMP2) and tissue inhibitor of metalloproteinase-2 and -3 (TIMP2 and TIMP3)--involved in scleral remodeling. Recruited for the study were unrelated adult Han Chinese who were high myopes (spherical equivalent, ≤ -6.0 D in both eyes; cases) and emmetropes (within ±1.0 D in both eyes; controls). Sample set 1 had 300 cases and 300 controls, and sample set 2 had 356 cases and 354 controls. Forty-nine tag single-nucleotide polymorphisms (SNPs) were selected from these candidate genes. The first stage was an initial screen of six case pools and six control pools constructed from sample set 1, each pool consisting of 50 distinct subjects of the same affection status. In the second stage, positive SNPs from the first stage were confirmed by genotyping individual samples forming the DNA pools. In the third stage, positive SNPs from stage 2 were replicated, with sample set 2 genotyped individually. Of the 49 SNPs screened by DNA pooling, three passed the lenient threshold of P < 0.10 (nested ANOVA) and were followed up by individual genotyping. Of the three SNPs genotyped, two TIMP3 SNPs were found to be significantly associated with high myopia by single-marker or haplotype analysis. However, the initial positive results could not be replicated by sample set 2. MMP2, TIMP2, and TIMP3 genes were not associated with high myopia in this Chinese sample and hence are unlikely to play a major role in the genetic susceptibility to high myopia.

  7. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    PubMed

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

    The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of terraces found in replicate trees during bootstrap resampling. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and their size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates with data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.
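
    Taxon coverage density, the key predictor above, is straightforward to compute from a taxa-by-genes presence/absence matrix. A minimal sketch (a hypothetical helper, not the authors' code):

```python
import numpy as np

def taxon_coverage_density(presence):
    """Proportion of taxon-by-gene cells with any data present,
    given a taxa x genes matrix of 1 (data present) / 0 (missing)."""
    return float(np.asarray(presence).astype(bool).mean())
```

    For example, a 3-taxon, 3-gene matrix with two empty cells has density 7/9 ≈ 0.78, well inside the < 0.90 range in which the study found terraces.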

  8. Methods to characterize environmental settings of stream and groundwater sampling sites for National Water-Quality Assessment

    USGS Publications Warehouse

    Nakagaki, Naomi; Hitt, Kerie J.; Price, Curtis V.; Falcone, James A.

    2012-01-01

    Characterization of natural and anthropogenic features that define the environmental settings of sampling sites for streams and groundwater, including drainage basins and groundwater study areas, is an essential component of water-quality and ecological investigations being conducted as part of the U.S. Geological Survey's National Water-Quality Assessment program. Quantitative characterization of environmental settings, combined with physical, chemical, and biological data collected at sampling sites, contributes to understanding the status of, and influences on, water-quality and ecological conditions. To support studies for the National Water-Quality Assessment program, a geographic information system (GIS) was used to develop a standard set of methods to consistently characterize the sites, drainage basins, and groundwater study areas across the nation. This report describes three methods used for characterization (simple overlay, area-weighted areal interpolation, and land-cover-weighted areal interpolation) and their appropriate applications to geographic analyses that have different objectives and data constraints. In addition, this document records the GIS thematic datasets that are used for the Program's national design and data analyses.
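
    Once a GIS overlay yields the intersection areas between source polygons and a target zone such as a drainage basin, area-weighted areal interpolation reduces to a weighted mean. A hedged sketch with hypothetical numbers (the real workflow runs inside a GIS):

```python
def area_weighted_estimate(source_values, intersection_areas):
    """Area-weighted areal interpolation: average the source-zone
    values, each weighted by the area that source zone contributes
    to the target zone (areas from a prior polygon overlay)."""
    total = sum(intersection_areas)
    return sum(v * a for v, a in zip(source_values, intersection_areas)) / total
```

    For instance, two source polygons carrying values 100 and 200 that overlap a basin with intersection areas 3 and 1 give a basin estimate of 125.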

  9. Exploring Sampling in the Detection of Multicategory EEG Signals

    PubMed Central

    Siuly, Siuly; Kabir, Enamul; Wang, Hua; Zhang, Yanchun

    2015-01-01

    The paper presents a framework based on sampling and machine learning techniques for the detection of multicategory EEG signals, in which random sampling (RS) and optimal allocation sampling (OS) are explored. In the proposed framework, before the RS and OS schemes are applied, the entire EEG signal of each class is partitioned into several groups based on a particular time period. The RS and OS schemes are used to obtain representative observations from each group of each category of EEG data. All of the samples selected by RS from the groups of each category are then combined into one set, named the RS set; an OS set is obtained in the same way for the OS scheme. Eleven statistical features are then extracted from the RS and OS sets separately. Finally, the study employs three well-known classifiers: k-nearest neighbor (k-NN), multinomial logistic regression with a ridge estimator (MLR), and support vector machine (SVM) to evaluate performance on the RS and OS feature sets. The experimental outcomes demonstrate that the RS scheme represents the EEG signals well and that k-NN with RS is the optimal choice for detection of multicategory EEG signals. PMID:25977705
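
    Optimal allocation sampling is conventionally realized as Neyman allocation, which draws more observations from more variable groups; the sketch below assumes that form and is not the authors' exact implementation.

```python
import numpy as np

def optimal_allocation(group_sizes, group_stds, n_total):
    """Neyman (optimal) allocation: per-group sample size is
    proportional to N_h * S_h, so highly variable groups are
    sampled more heavily than under simple random sampling."""
    w = np.asarray(group_sizes, float) * np.asarray(group_stds, float)
    alloc = np.floor(n_total * w / w.sum()).astype(int)
    # hand out any rounding remainder to the largest-weight groups
    for i in np.argsort(-w)[: n_total - alloc.sum()]:
        alloc[i] += 1
    return alloc
```

    Two equally sized groups with standard deviations 1 and 3 receive 10 and 30 of 40 samples, versus 20 each under proportional allocation.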

  10. [Do different survey settings influence the prevalence of symptoms? A methodological comparison using the Youth Self-Report].

    PubMed

    Prüss, Ulrike; von Widdern, Susanne; von Ferber, Christian

    2005-10-01

    The self-reported emotional and behavioural disorders of adolescents were assessed with the Youth Self-Report (YSR), administered either in households or in classrooms. The goal of the study was to determine whether these different settings affect the prevalence rates of symptoms reported in the YSR. Mean scores and standard deviations on the problem scales of two classroom samples and one household sample, the latter generally used as a reference, were compared. The data were also compared with two classroom samples from Sweden and Greece. Statistical analyses were performed by means of unpaired t-tests, and the magnitude of the effects was evaluated by means of Cohen's criteria. Classroom samples detected a significantly higher prevalence of symptoms than did household samples for almost all of the problem scales in the YSR. Our results support the finding that administering self-report questionnaires in a classroom setting itself affects the prevalence of symptoms assessed by the YSR. Survey results may be influenced, to a much greater degree than previously thought, by the settings in which the surveys are administered. Further research is needed to identify the specific influences that differ between surveys administered at home and at school.
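
    The "magnitude of the effects by means of Cohen's criteria" refers to Cohen's d. A minimal sketch using the pooled standard deviation for two independent samples (illustrative, not the authors' code):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent samples, using the pooled
    standard deviation as the denominator."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled
```

    By Cohen's conventional criteria, |d| around 0.2 is a small effect, 0.5 a medium one, and 0.8 a large one.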

  11. Evaluation of the pre-posterior distribution of optimized sampling times for the design of pharmacokinetic studies.

    PubMed

    Duffull, Stephen B; Graham, Gordon; Mengersen, Kerrie; Eccleston, John

    2012-01-01

    Information theoretic methods are often used to design studies that aim to learn about pharmacokinetic and linked pharmacokinetic-pharmacodynamic systems. These design techniques, such as D-optimality, provide the optimum experimental conditions. The performance of the optimum design will depend on the ability of the investigator to comply with the proposed study conditions. However, in clinical settings it is not possible to comply exactly with the optimum design and hence some degree of unplanned suboptimality occurs due to error in the execution of the study. In addition, due to the nonlinear relationship of the parameters of these models to the data, the designs are also locally dependent on an arbitrary choice of a nominal set of parameter values. A design that is robust to both study conditions and uncertainty in the nominal set of parameter values is likely to be of use clinically. We propose an adaptive design strategy to account for both execution error and uncertainty in the parameter values. In this study we investigate designs for a one-compartment first-order pharmacokinetic model. We do this in a Bayesian framework using Markov-chain Monte Carlo (MCMC) methods. We consider log-normal prior distributions on the parameters and investigate several prior distributions on the sampling times. An adaptive design was used to find the sampling window for the current sampling time conditional on the actual times of all previous samples.
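
    To make the D-optimality criterion concrete, the sketch below evaluates det(J^T J) for a one-compartment IV bolus model with first-order elimination; the model form, dose, and nominal parameter values are our assumptions for illustration, not taken from the study.

```python
import numpy as np

def conc(t, cl, v, dose=100.0):
    """One-compartment IV bolus, first-order elimination:
    C(t) = (dose / V) * exp(-(CL / V) * t)."""
    return dose / v * np.exp(-cl / v * t)

def d_criterion(times, cl=2.0, v=10.0, eps=1e-6):
    """D-optimality criterion det(J^T J), where J holds numerical
    sensitivities of the predicted concentrations to (CL, V)
    evaluated at the nominal parameter values."""
    times = np.asarray(times, dtype=float)
    base = conc(times, cl, v)
    j_cl = (conc(times, cl + eps, v) - base) / eps
    j_v = (conc(times, cl, v + eps) - base) / eps
    jac = np.column_stack([j_cl, j_v])
    return float(np.linalg.det(jac.T @ jac))
```

    A design that spreads samples across the elimination phase yields a larger determinant, and hence more parameter information, than one that clusters all samples early; the dependence of that ranking on the nominal (CL, V) values is exactly the local-design weakness the adaptive strategy addresses.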

  12. Influence of centrifuge brake on residual platelet count and routine coagulation tests in citrated plasma.

    PubMed

    Daves, Massimo; Giacomuzzi, Katia; Tagnin, Enrico; Jani, Erika; Adcock Funk, Dorothy M; Favaloro, Emmanuel J; Lippi, Giuseppe

    2014-04-01

    Sample centrifugation is an essential step in the coagulation laboratory, as clotting tests are typically performed on citrated platelet-poor plasma (PPP). Nevertheless, no clear indication has been provided as to whether centrifugation of specimens should be performed with the centrifuge brake set to on or off. Fifty consecutive sodium citrate anticoagulated samples were collected and divided into two aliquots. The former was centrifuged according to Clinical and Laboratory Standards Institute (CLSI) guidelines with the centrifuge brake set to on, whereas the latter was likewise centrifuged according to CLSI guidelines but with the brake set to off. In the PPP of all samples, a platelet (PLT) count was performed, followed by analysis of activated partial thromboplastin time (APTT), prothrombin time (PT) and fibrinogen (FBG). The PLT count after sample centrifugation was substantially reduced with the centrifuge brake set either to on or to off (5 ± 1 versus 3 ± 1 × 10⁹/l; P = 0.009). The frequency of samples exceeding a PLT count of 10 × 10⁹/l was nearly double in samples centrifuged with the brake on compared with those with the brake off (14 versus 8%; P < 0.01). Although no significant difference was found for APTT values, PT was slightly prolonged with the centrifuge brake set to on (mean bias 0.2 s; P < 0.001). FBG values were also significantly higher with the centrifuge brake set to on (mean bias 0.29 g/l; P < 0.001). The results of this study indicate that sample centrifugation for routine coagulation testing should preferably be performed with the centrifuge brake set to off to provide a better-quality specimen.
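
    The on-versus-off comparisons above are paired, since each specimen was split into two aliquots. The reported mean bias and its significance can be sketched as follows (an illustrative helper with made-up numbers, not the authors' analysis code):

```python
import math

def paired_bias_and_t(on, off):
    """Mean bias (on minus off) and the paired t statistic for
    split-sample comparisons such as brake-on vs. brake-off aliquots."""
    diffs = [a - b for a, b in zip(on, off)]
    n = len(diffs)
    bias = sum(diffs) / n
    var = sum((d - bias) ** 2 for d in diffs) / (n - 1)
    return bias, bias / math.sqrt(var / n)
```

    The t statistic is then compared against the Student t distribution with n - 1 degrees of freedom to obtain the P value.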

  13. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--QA ANALYTICAL RESULTS FOR METALS IN REPLICATE SAMPLES

    EPA Science Inventory

    The Metals in Replicate Samples data set contains the analytical results of measurements of up to 2 metals in 172 replicate (duplicate) samples from 86 households. Measurements were made in samples of blood. Duplicate samples for a small percentage of the total number of sample...

  14. NHEXAS PHASE I ARIZONA STUDY--METALS IN DERMAL WIPES ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Dermal Wipes data set contains analytical results for measurements of up to 11 metals in 179 dermal wipe samples over 179 households. Each sample was collected from the primary respondent within each household during Stage III of the NHEXAS study. The sampling per...

  15. NHEXAS PHASE I ARIZONA STUDY--METALS IN URINE ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Urine data set contains analytical results for measurements of up to 6 metals in 176 urine samples over 176 households. Each sample was collected from the primary respondent within each household during Stage III of the NHEXAS study. The sample consists of the fir...

  16. NHEXAS PHASE I ARIZONA STUDY--QA ANALYTICAL RESULTS FOR METALS IN SPIKE SAMPLES

    EPA Science Inventory

    The Metals in Spike Samples data set contains the analytical results of measurements of up to 11 metals in 38 control samples (spikes) from 18 households. Measurements were made in spiked samples of dust, food, beverages, blood, urine, and dermal wipe residue. Spiked samples we...

  17. NHEXAS PHASE I ARIZONA STUDY--QA ANALYTICAL RESULTS FOR METALS IN REPLICATE SAMPLES

    EPA Science Inventory

    The Metals in Replicate Samples data set contains the analytical results of measurements of up to 27 metals in 133 replicate (duplicate) samples from 62 households. Measurements were made in samples of soil, blood, tap water, and drinking water. Duplicate samples for a small pe...

  18. NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR VOCS IN REPLICATES

    EPA Science Inventory

    This data set includes analytical results for measurements of VOCs in 204 duplicate (replicate) samples. Measurements were made for up to 23 VOCs in samples of air, water, and blood. Duplicate samples (samples collected along with or next to the original samples) were collected t...

  19. NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR METALS IN REPLICATES

    EPA Science Inventory

    This data set includes analytical results for measurements of metals in 490 duplicate (replicate) samples and for particles in 130 duplicate samples. Measurements were made for up to 11 metals in samples of air, dust, water, blood, and urine. Duplicate samples (samples collected ...

  20. NHEXAS PHASE I ARIZONA STUDY--QA ANALYTICAL RESULTS FOR PESTICIDE METABOLITES IN SPIKE SAMPLES

    EPA Science Inventory

    The Pesticide Metabolites in Spike Samples data set contains the analytical results of measurements of up to 4 pesticide metabolites in 3 control samples (spikes) from 3 households. Measurements were made in spiked samples of urine. Spiked samples were used to assess recovery o...

  1. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  2. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  3. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  4. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  5. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  6. Comparison of various primer sets for detection of Toxoplasma gondii by polymerase chain reaction in fetal tissues from naturally aborted foxes.

    PubMed

    Smielewska-Loś, E

    2003-01-01

    Tissues from 4 aborted polar foxes (3 samples of brain and 4 samples of liver) were selected for Toxoplasma gondii PCR assay. Positive results of serological tests of the mothers and of an immunofluorescence test (IFT) of fetal organ smears were the criteria for sample selection. Five sets of primers designed from the B1 gene and ITS1 sequences of T. gondii were used for detection of the parasite in fetal fox tissues. All primer sets used successfully amplified T. gondii DNA by PCR from organs that were positive by IFT. Single-tube nested PCR also gave a positive result from a sample negative by IFT, but this product was not confirmed. The studies showed the usefulness of PCR for routine diagnosis of toxoplasmosis in carnivores.

  7. The CAMELS data set: catchment attributes and meteorology for large-sample studies

    NASA Astrophysics Data System (ADS)

    Addor, Nans; Newman, Andrew J.; Mizukami, Naoki; Clark, Martyn P.

    2017-10-01

    We present a new data set of attributes for 671 catchments in the contiguous United States (CONUS) minimally impacted by human activities. This complements the daily time series of meteorological forcing and streamflow provided by Newman et al. (2015b). To produce this extension, we synthesized diverse and complementary data sets to describe six main classes of attributes at the catchment scale: topography, climate, streamflow, land cover, soil, and geology. The spatial variations among basins over the CONUS are discussed and compared using a series of maps. The large number of catchments, combined with the diversity of the attributes we extracted, makes this new data set well suited for large-sample studies and comparative hydrology. In comparison to the similar Model Parameter Estimation Experiment (MOPEX) data set, this data set relies on more recent data, it covers a wider range of attributes, and its catchments are more evenly distributed across the CONUS. This study also involves assessments of the limitations of the source data sets used to compute catchment attributes, as well as detailed descriptions of how the attributes were computed. The hydrometeorological time series provided by Newman et al. (2015b, https://doi.org/10.5065/D6MW2F4D) together with the catchment attributes introduced in this paper (https://doi.org/10.5065/D6G73C3Q) constitute the freely available CAMELS data set, which stands for Catchment Attributes and MEteorology for Large-sample Studies.

  8. SPRUCE Whole Ecosystem Warming (WEW) Peat Water Content and Temperature Profiles for Experimental Plot Cores Beginning June 2016

    DOE Data Explorer

    Gutknecht, J. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Kluber, L. A. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Hanson, P. J. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Schadt, C. W. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.

    2016-06-01

    This data set provides the peat water content and peat temperature at time of sampling for peat cores collected before and during the SPRUCE Whole Ecosystem Warming (WEW) study. Cores for the current data set were collected during the following bulk peat sampling events: 13 June 2016 and 23 August 2016. Over time, this dataset will be updated with each new major bulk peat sampling event, and dates/methods will be updated accordingly.

  9. Teaching Manual Signs to Adults with Mental Retardation Using Matching-to-Sample Procedures and Stimulus Equivalence

    ERIC Educational Resources Information Center

    Elias, N. C.; Goyos, C.; Saunders, M.; Saunders, R.

    2008-01-01

    The objective of this study was to teach manual signs through an automated matching-to-sample procedure and to test for the emergence of new conditional relations and imitative behaviors. Seven adults with mild to severe mental retardation participated. Four were also hearing impaired. Relations between manual signs (set A) and pictures (set B)…

  10. Long Term Value of Apollo Samples: How Fundamental Understanding of a Body Takes Decades of Study

    NASA Astrophysics Data System (ADS)

    Borg, L. E.; Gaffney, A. M.; Kruijer, T. K.; Sio, C. K.

    2018-04-01

    Fundamental understanding of a body evolves as more sophisticated technology is applied to a progressively better understood sample set. Sample diversity is required to understand many geologic processes.

  11. Molecular diagnosis of cryptococcal meningitis in cerebrospinal fluid: comparison of primer sets for Cryptococcus neoformans and Cryptococcus gattii species complex.

    PubMed

    Martins, Marilena dos Anjos; Brighente, Kate Bastos Santos; Matos, Terezinha Aparecida de; Vidal, Jose Ernesto; Hipólito, Daise Damaris Carnietto de; Pereira-Chioccola, Vera Lucia

    2015-01-01

    This study evaluated the use of polymerase chain reaction for cryptococcal meningitis diagnosis in clinical samples. The sensitivity and specificity of the methodology were evaluated using eight Cryptococcus neoformans/C. gattii species complex reference strains and 165 cerebrospinal fluid samples from patients with neurological diseases, divided into two groups: 96 patients with cryptococcal meningitis and AIDS, and 69 patients with other neurological opportunistic diseases (CTL/AIDS). Two primer sets were tested (CN4-CN5, and the multiplex CNa70S-CNa70A/CNb49S-CNb49A, which amplifies one specific product for C. neoformans and another for C. gattii). The CN4-CN5 primer set was positive for all Cryptococcus standard strains and for 94.8% of the DNA samples from the cryptococcal meningitis and AIDS group. With the multiplex, no 448-bp product of C. gattii was observed in the clinical samples of either group, and the 695-bp product of C. neoformans was observed in only 64.6% of the cryptococcal meningitis and AIDS group; this primer set was also negative for two standard strains. The specificity, based on the negative samples from the CTL/AIDS group, was 98.5% for both primer sets. These data suggest that the CN4/CN5 primer set is highly sensitive for the identification of the C. neoformans/C. gattii species complex in cerebrospinal fluid samples from patients with clinical suspicion of cryptococcal meningitis. Copyright © 2014 Elsevier Editora Ltda. All rights reserved.
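
    The sensitivity and specificity figures above follow from simple counts. A sketch with counts back-calculated from the reported percentages (the exact counts are our inference and should be treated as illustrative):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# e.g., 91 of 96 cryptococcal meningitis samples positive and 68 of 69
# control samples negative are consistent with roughly 94.8% / 98.5%
```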

  12. Guided goal setting: effectiveness in a dietary and physical activity intervention with low-income adolescents.

    PubMed

    Shilts, Mical Kay; Horowitz, Marcel; Townsend, Marilyn S

    2009-01-01

    Determining the effectiveness of the guided goal setting strategy in changing adolescents' dietary and physical activity self-efficacy and behaviors. Adolescents were individually assigned to treatment (intervention with guided goal setting) or control conditions (intervention without guided goal setting), with data collected before and after the education intervention. Urban middle school in a low-income community in Central California. Ethnically diverse middle school students (n = 94, 55% male) who were participants in a USDA nutrition education program. Driven by Social Cognitive Theory, the intervention targeted dietary and physical activity behaviors of adolescents. Dietary self-efficacy and behavior; physical activity self-efficacy and behavior; goal effort and spontaneous goal setting. ANCOVA and path analysis were performed using the full sample and a sub-sample informed by Locke's recommendations (accounting for goal effort and spontaneous goal setting). No significant differences were found between groups using the full sample. Using the sub-sample, greater gains in dietary behavior (p < .05), physical activity behavior (p < .05), and physical activity self-efficacy (p < .05) were made by treatment participants compared to control participants. Change in physical activity behaviors was mediated by self-efficacy. Accounting for goal effort and spontaneous goal setting, this study provides some evidence that the use of guided goal setting with adolescents may be a viable strategy to promote dietary and physical activity behavior change.

  13. DNA Everywhere. A Guide for Simplified Environmental Genomic DNA Extraction Suitable for Use in Remote Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabrielle N. Pecora; Francine C. Reid; Lauren M. Tom

    2016-05-01

    Collecting field samples from remote or geographically distant areas can be financially and logistically challenging. With the participation of a local organization in the area where the samples originate, gDNA samples can be extracted in the field and shipped to a research institution for further processing and analysis. The ability to set up gDNA extraction capabilities in the field can drastically reduce cost and time when running long-term microbial studies with a large sample set. The work outlined here developed a compact and affordable method for setting up a “laboratory” and for extracting and shipping gDNA samples from anywhere in the world. This white paper explains the process of setting up the “laboratory”, choosing and training individuals with no prior scientific experience to perform gDNA extractions, and safe methods for shipping extracts to any research institution. All methods have been validated by the Andersen group at Lawrence Berkeley National Laboratory using the Berkeley Lab PhyloChip.

  14. Cytotoxic and Inflammatory Potential of Air Samples from Occupational Settings with Exposure to Organic Dust

    PubMed Central

    Viegas, Susana; Caetano, Liliana Aranha; Korkalainen, Merja; Faria, Tiago; Pacífico, Cátia; Carolino, Elisabete; Quintal Gomes, Anita; Viegas, Carla

    2017-01-01

    Organic dust and related microbial exposures are the main inducers of several respiratory symptoms. Occupational exposure to organic dust is very common and has been reported in diverse settings. In vitro tests using relevant cell cultures can be very useful for characterizing the toxicity of complex mixtures present in the air of occupational environments such as organic dust. In this study, the cell viability and the inflammatory response, as measured by the production of pro-inflammatory cytokines tumor necrosis factor-α (TNFα) and interleukin-1 β (IL-1β), were determined in human macrophages derived from THP-1 monocytic cells. These cells were exposed to air samples from five occupational settings known to possess high levels of contamination of organic dust: poultry and swine feed industries, waste sorting, poultry production and slaughterhouses. Additionally, fungi and particle contamination of those settings was studied to better characterize the organic dust composition. All air samples collected from the assessed workplaces caused both cytotoxic and pro-inflammatory effects. The highest responses were observed in the feed industry, particularly in swine feed production. This study emphasizes the importance of measuring the organic dust/mixture effects in occupational settings and suggests that differences in the organic dust content may result in differences in health effects for exposed workers. PMID:29051440

  15. NHEXAS PHASE I MARYLAND STUDY--PESTICIDES IN DERMAL WIPES ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides in Dermal Wipe Samples data set contains analytical results for measurements of up to 8 pesticides in 40 dermal wipe samples over 40 households. Each sample was collected from the primary respondent within each household. The sampling period occurred on the last ...

  16. NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR PESTICIDES IN REPLICATE SAMPLES

    EPA Science Inventory

    The Pesticides in Replicates data set contains the analytical results of measurements of up to 10 pesticides in 68 replicate (duplicate) samples from 41 households. Measurements were made in samples of indoor air, dust, soil, drinking water, food, and beverages. Duplicate sampl...

  17. NHEXAS PHASE I MARYLAND STUDY--PESTICIDES IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides in Blood Serum data set contains analytical results for measurements of up to 17 pesticides in 358 blood samples over 79 households. Each sample was collected via a venous sample from the primary respondent within each household by a phlebotomist. Samples were ge...

  18. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--QA ANALYTICAL RESULTS FOR METALS IN SPIKE SAMPLES

    EPA Science Inventory

    The Metals in Spike Samples data set contains the analytical results of measurements of up to 11 metals in 15 control samples (spikes) from 11 households. Measurements were made in spiked samples of dust, food, and dermal wipe residue. Spiked samples were used to assess recover...

  19. Relative congener scaling of Polychlorinated dibenzo-p-dioxins and dibenzofurans to estimate building fire contributions in air, surface wipes, and dust samples.

    PubMed

    Pleil, Joachim D; Lorber, Matthew N

    2007-11-01

    The United States Environmental Protection Agency collected ambient air samples in lower Manhattan for about 9 months following the September 11, 2001 World Trade Center (WTC) attacks. Measurements were made of a host of airborne contaminants including volatile organic compounds, polycyclic aromatic hydrocarbons, asbestos, lead, and other contaminants of concern. The present study focuses on the broad class of polychlorinated dibenzo-p-dioxins (CDDs) and dibenzofurans (CDFs) with specific emphasis on the 17 CDD/CDF congeners that exhibit mammalian toxicity. This work is a statistical study comparing the internal patterns of CDD/CDFs using data from an unambiguous fire event (WTC) and other data sets to help identify their sources. A subset of 29 samples all taken between September 16 and October 31, 2001 were treated as a basis set known to be heavily impacted by the WTC building fire source. A second basis set was created using data from Los Angeles and Oakland, CA as published by the California Air Resources Board (CARB) and treated as the archetypical background pattern for CDD/CDFs. The CARB data had a congener profile appearing similar to background air samples from different locations in America and around the world and in different matrices, such as background soils. Such disparate data would normally be interpreted with a qualitative pattern recognition based on congener bar graphs or other forms of factor or cluster analysis that group similar samples together graphically. The procedure developed here employs aspects of those statistical methods to develop a single continuous output variable per sample. Specifically, a form of variance structure-based cluster analysis is used to group congeners within samples to reduce collinearity in the basis sets, new variables are created based on these groups, and multivariate regression is applied to the reduced variable set to determine a predictive equation. 
This equation predicts a value for an output variable, OPT: the predicted value of OPT is near zero (0.00) for a background congener profile and near one (1.00) for the WTC air profile. Although this empirical method is calibrated with relatively small sets of airborne samples, it is shown to be generalizable to other WTC, fire source, and background air samples as well as other sample matrices including soils, window films and other dust wipes, and bulk dusts. However, given the limited data set examined, the method does not allow further discrimination between the WTC data and the other fire sources. This type of analysis is demonstrated to be useful for complex trace-level data sets with limited data and some below-detection entries.

  20. A Trade Study and Metric for Penetration and Sampling Devices for Possible Use on the NASA 2003 and 2005 Mars Sample Return Missions

    NASA Technical Reports Server (NTRS)

    McConnell, Joshua B.

    2000-01-01

    The scientific exploration of Mars will require the collection and return of subterranean samples to Earth for examination. This necessitates the use of some type of device or devices that possess the ability to effectively penetrate the Martian surface, collect suitable samples and return them to the surface in a manner consistent with imposed scientific constraints. The first opportunity for such a device will occur on the 2003 and 2005 Mars Sample Return missions being performed by NASA. This paper reviews the work completed on the compilation of a database containing viable penetrating and sampling devices, the performance of a system level trade study comparing selected devices to a set of prescribed parameters, and the employment of a metric for the evaluation and ranking of the traded penetration and sampling devices with respect to possible usage on the 03 and 05 sample return missions. The trade study performed is based on a select set of scientific, engineering, programmatic and socio-political criteria. The use of a metric for the various penetration and sampling devices will act to expedite current and future device selection.

  1. A Comparison of the Social Competence of Children with Moderate Intellectual Disability in Inclusive versus Segregated School Settings

    ERIC Educational Resources Information Center

    Hardiman, Sharon; Guerin, Suzanne; Fitzsimons, Elaine

    2009-01-01

    This is the first study to compare the social competence of children with moderate intellectual disability in inclusive versus segregated school settings in the Republic of Ireland. A convenience sample was recruited through two large ID services. The sample comprised 45 children across two groups: Group 1 (n = 20; inclusive school) and Group 2 (n…

  2. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
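The weighting trade-off described above can be illustrated with a short sketch. All study effects, variances, and sample sizes below are invented for illustration, and the between-study variance is estimated with the standard DerSimonian-Laird method-of-moments formula:

```python
# Sketch comparing inverse-variance weighting with sample-size weighting in a
# random-effects meta-analysis. All numbers are invented for illustration.

def pooled_effect(effects, weights):
    """Weighted average of a set of effect sizes."""
    return sum(e * w for e, w in zip(effects, weights)) / sum(weights)

def dersimonian_laird_tau2(effects, variances):
    """Method-of-moments estimate of the between-study variance tau^2."""
    w = [1.0 / v for v in variances]
    fixed = pooled_effect(effects, w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (len(effects) - 1)) / c)

effects = [0.30, 0.10, 0.50]    # hypothetical standardized mean differences
variances = [0.02, 0.05, 0.01]  # their estimated sampling variances
sizes = [100, 40, 200]          # per-study sample sizes

tau2 = dersimonian_laird_tau2(effects, variances)
by_inverse_variance = pooled_effect(effects, [1.0 / (v + tau2) for v in variances])
by_sample_size = pooled_effect(effects, sizes)
```

Because the variances themselves are estimates, the inverse-variance weights inherit their sampling error, which is the issue the study examines; sample-size weights avoid that dependence at the cost of ignoring within-study precision.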

  3. Multiple category-lot quality assurance sampling: a new classification system with application to schistosomiasis control.

    PubMed

    Olives, Casey; Valadez, Joseph J; Brooker, Simon J; Pagano, Marcello

    2012-01-01

    Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date the statistical underpinnings for Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n=15 and n=25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n=15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by as many as 0.5 and 3.5 observations per school, respectively, without increasing classification error. This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools.
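The three-category decision rule and its operating characteristic can be sketched as follows. The sample size n and the decision cutoffs d1 and d2 below are illustrative placeholders, not the designs derived in the paper:

```python
import math

# Sketch of a multiple-category LQAS classifier for S. mansoni prevalence.
# The sample size n and cutoffs d1, d2 are illustrative assumptions, not the
# designs evaluated in the study.

def mc_lqas_classify(positives, d1=1, d2=7):
    """Map the count of infected children in a sample to a prevalence category."""
    if positives <= d1:
        return "<=10%"
    if positives <= d2:
        return ">10% and <50%"
    return ">=50%"

def prob_classified_low(p, n=15, d1=1):
    """Operating characteristic: P(classified <=10%) at true prevalence p."""
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(d1 + 1))
```

Curtailed sampling stops early once the eventual category is certain, for example once more than d2 positives have already been observed, which is how the paper reduces the average sample number without changing the classification.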

  4. CTEPP STANDARD OPERATING PROCEDURE FOR SETTING UP A HOUSEHOLD SAMPLING SCHEDULE (SOP-2.10)

    EPA Science Inventory

    This SOP describes the method for scheduling study subjects for field sampling activities in North Carolina (NC) and Ohio (OH). There are three field sampling teams with two staff members on each team. Two field sampling teams collect the field data simultaneously. A third fiel...

  5. NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR VOCS IN BLANKS

    EPA Science Inventory

    This data set includes analytical results for measurements of VOCs in 88 blank samples. Measurements were made for up to 23 VOCs in blank samples of air, water, and blood. Blank samples were used to assess the potential for sample contamination during collection, storage, shipmen...

  6. NHEXAS PHASE I ARIZONA STUDY--QA ANALYTICAL RESULTS FOR PESTICIDES IN BLANK SAMPLES

    EPA Science Inventory

    The Pesticides in Blank Samples data set contains the analytical results of measurements of up to 4 pesticides in 43 blank samples from 29 households. Measurements were made in blank samples of dust, indoor and outdoor air, food and beverages, blood, urine, and dermal wipe resid...

  7. NHEXAS PHASE I MARYLAND STUDY--METALS IN DERMAL WIPES ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Dermal Wipe Samples data set contains analytical results for measurements of up to 4 metals in 343 dermal wipe samples over 80 households. Each sample was collected from the primary respondent within each household. The sampling period occurred on the first day of...

  8. NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR METALS IN REPLICATE SAMPLES

    EPA Science Inventory

    The Metals in Replicates data set contains the analytical results of measurements of up to 11 metals in 88 replicate (duplicate) samples from 52 households. Measurements were made in samples of indoor and outdoor air, drinking water, food, and beverages. Duplicate samples for a...

  9. NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR PESTICIDES IN SPIKE SAMPLES

    EPA Science Inventory

    The Pesticides in Spikes data set contains the analytical results of measurements of up to 17 pesticides in 12 control samples (spikes) from 11 households. Measurements were made in samples of blood serum. Controls were used to assess recovery of target analytes from a sample m...

  10. NHEXAS PHASE I MARYLAND STUDY--METALS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Blood data set contains analytical results for measurements of up to 2 metals in 374 blood samples over 80 households. Each sample was collected via a venous sample from the primary respondent within each household by a phlebotomist. Samples were generally drawn o...

  11. NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR PESTICIDE METABOLITES IN BLANKS

    EPA Science Inventory

    The Pesticide Metabolites in Blanks data set contains the analytical results of measurements of up to 4 pesticide metabolites in 14 blank samples from 13 households. Measurements were made in blank samples of urine. Blank samples were used to assess the potential for sample con...

  12. A STRINGENT COMPARISON OF SAMPLING AND ANALYSIS METHODS FOR VOCS IN AMBIENT AIR

    EPA Science Inventory

    A carefully designed study was conducted during the summer of 1998 to simultaneously collect samples of ambient air by canisters and compare the analysis results to direct sorbent preconcentration results taken at the time of sample collection. A total of 32 1-h sample sets we...

  13. Goal setting as a strategy for dietary and physical activity behavior change: a review of the literature.

    PubMed

    Shilts, Mical Kay; Horowitz, Marcel; Townsend, Marilyn S

    2004-01-01

    Estimate effectiveness of goal setting for nutrition and physical activity behavior change, review the effect of goal-setting characteristics on behavior change, and investigate effectiveness of interventions containing goal setting. For this review, a literature search was conducted for the period January 1977 through December 2003 that included a Current Contents, Biosis Previews, Medline, PubMed, PsycINFO, and ERIC search of databases and a reference list search. Key words were goal, goal setting, nutrition, diet, dietary, physical activity, exercise, behavior change, interventions, and fitness. The search identified 144 studies, of which 28 met inclusion criteria for being published in a peer reviewed journal and using goal setting in an intervention to modify dietary or physical activity behaviors. Excluded from this review were those studies that (1) evaluated goal setting cross-sectionally without an intervention; (2) used goal setting for behavioral disorders, to improve academic achievement, or in sports performance; (3) were reviews. The articles were categorized by target audience and secondarily by research focus. Data extracted included outcome measure, research rating, purpose, sample, sample description, assignment, findings, and goal-setting support. Thirteen of the 23 adult studies used a goal-setting effectiveness study design and eight produced positive results supporting goal setting. No adolescent or child studies used this design. The results were inconclusive for the studies investigating goal-setting characteristics (n = 7). Four adult and four child intervention evaluation studies showed positive outcomes. No studies reported power calculations, and only 32% of the studies were rated as fully supporting goal setting. Goal setting has shown some promise in promoting dietary and physical activity behavior change among adults, but methodological issues still need to be resolved. 
The literature with adolescents and children is limited, and the authors are not aware of any published studies with this audience investigating the independent effect of goal setting on dietary or physical activity behavior. Although goal setting is widely used with children and adolescents in nutrition interventions, its effectiveness has yet to be reported.

  14. NHEXAS PHASE I REGION 5 STUDY--METALS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    This data set includes analytical results for measurements of metals in 165 blood samples. These samples were collected to examine the relationships between personal exposure measurements, environmental measurements, and body burden. Venous blood samples were collected by venipun...

  15. NHEXAS PHASE I REGION 5 STUDY--VOCS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    This data set includes analytical results for measurements of VOCs (volatile organic compounds) in 145 blood samples. These samples were collected to examine the relationships between personal exposure measurements, environmental measurements, and body burden. Venous blood sample...

  16. Measurement of Clozapine, Norclozapine, and Amisulpride in Plasma and in Oral Fluid Obtained Using 2 Different Sampling Systems.

    PubMed

    Fisher, Danielle S; Beyer, Chad; van Schalkwyk, Gerrit; Seedat, Soraya; Flanagan, Robert J

    2017-04-01

    There is a poor correlation between total concentrations of proton-accepting compounds (most basic drugs) in unstimulated oral fluid and in plasma. The aim of this study was to compare clozapine, norclozapine, and amisulpride concentrations in plasma and in oral fluid collected using commercially available collection devices [Thermo Fisher Scientific Oral-Eze and Greiner Bio-One (GBO)]. Oral-Eze and GBO samples and plasma were collected in that order from patients prescribed clozapine. Analyte concentrations were measured by liquid chromatography-tandem mass spectrometry. There were 112 participants [96 men, aged (median, range) 47 (21-65) years and 16 women, aged 44 (21-65) years]: 74 participants provided 2 sets of samples and 7 provided 3 sets (overall 2 GBO samples not collected). Twenty-three patients were co-prescribed amisulpride, of whom 17 provided 2 sets of samples and 1 provided 3 sets. The median (range) oral fluid content of the GBO samples was 52% (13%-86%). Nonadherence to clozapine was identified in all 3 samples in one instance. After correction for oral fluid content, analyte concentrations in the GBO and Oral-Eze samples were poorly correlated with plasma clozapine and norclozapine (R = 0.57-0.63) and plasma amisulpride (R = 0.65-0.72). Analyte concentrations in the 2 sets of oral fluid samples were likewise poorly correlated (R = 0.68-0.84). Mean (SD) plasma clozapine and norclozapine were 0.60 (0.46) and 0.25 (0.21) mg/L, respectively. Mean clozapine and norclozapine concentrations in the 2 sets of oral fluid samples were similar to those in plasma (0.9-1.8 times higher), that is, approximately 2- to 3-fold higher than those in unstimulated oral fluid. The mean (±SD) amisulpride concentrations (microgram per liter) in plasma (446 ± 297) and in the Oral-Eze samples (501 ± 461) were comparable and much higher than those in the GBO samples (233 ± 318).
Oral fluid collected using either the GBO system or the Oral-Eze system cannot be used for quantitative clozapine and/or amisulpride therapeutic drug monitoring.

  17. Molecular diagnosis of symptomatic toxoplasmosis: a 9-year retrospective and prospective study in a referral laboratory in São Paulo, Brazil.

    PubMed

    Camilo, Lilian Muniz; Pereira-Chioccola, Vera Lucia; Gava, Ricardo; Meira-Strejevitch, Cristina da Silva; Vidal, Jose Ernesto; Brandão de Mattos, Cinara Cássia; Frederico, Fábio Batista; De Mattos, Luiz Carlos; Spegiorin, Lígia Cosentino Junqueira Franco

    Symptomatic forms of toxoplasmosis are a serious public health problem and occur in around 10-20% of infected people. Aiming to improve the molecular diagnosis of symptomatic toxoplasmosis in Brazilian patients, this study evaluated the performance of real-time PCR with two primer sets (B1 and REP-529) in detecting Toxoplasma gondii DNA. The methodology was assayed in 807 clinical samples with known clinical diagnosis, ELISA, and conventional PCR results over a 9-year period. All samples were from patients with clinical suspicion of several features of toxoplasmosis. According to the minimum detection limit curve (in CT), REP-529 had greater sensitivity to detect T. gondii DNA than B1. Both primer sets were retrospectively evaluated using 515 DNA samples from different clinical specimens. The 122 patients without toxoplasmosis provided high specificity (REP-529, 99.2% and B1, 100%). From the 393 samples with positive ELISA, 146 had clinical diagnosis of toxoplasmosis and positive conventional PCR. REP-529 and B1 sensitivities were 95.9% and 83.6%, respectively. Comparison of REP-529 and B1 performances was further analyzed prospectively in 292 samples. Thus, from a total of 807 DNA samples analyzed, 217 (26.89%) had positive PCR with at least one primer set and symptomatic toxoplasmosis confirmed by clinical diagnosis. REP-529 was positive in 97.23%, whereas B1 amplified only 78.80%. After comparing several samples in a Brazilian referral laboratory, this study concluded that the REP-529 primer set had better performance than B1. These observations were based on cases with defined clinical diagnosis, ELISA, and conventional PCR. Copyright © 2017 Sociedade Brasileira de Infectologia. Published by Elsevier Editora Ltda. All rights reserved.
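The headline figures reduce to simple count ratios. The raw counts below are back-calculated from the percentages reported in the abstract (146 clinically confirmed positives, 122 negatives), so they are illustrative reconstructions rather than published raw data:

```python
# Sensitivity/specificity arithmetic behind the reported primer-set comparison.
# Counts are back-calculated from the abstract's percentages and are therefore
# approximations, not published raw data.

def sensitivity(true_pos, false_neg):
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    return true_neg / (true_neg + false_pos)

# 146 samples with clinically confirmed toxoplasmosis and positive conventional PCR
rep529_sens = sensitivity(140, 6)   # ~95.9%
b1_sens = sensitivity(122, 24)      # ~83.6%

# 122 patients without toxoplasmosis
rep529_spec = specificity(121, 1)   # ~99.2%
b1_spec = specificity(122, 0)       # 100%
```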

  18. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

    Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that can achieve good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, and made it possible to take advantage of large historical data for decoder recalibration in current data decoding. Using only an ultra small current sample set (five trials of each category), the decoder calibrated using the PDA method achieved much better and more robust performance in all sessions than the other three calibration methods in both monkeys. Significance. (1) This study brought transfer learning theory into iBMI decoder calibration for the first time. (2) Different from most transfer learning studies, the target data in this study were an ultra small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating a viable generalization. 
By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
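The core idea, re-using a decoder fitted on large historical data after aligning a handful of current trials to it, can be sketched in a much-simplified form. This is not the authors' PDA algorithm: it only centres each data set and projects onto the leading principal axis of the historical data, with synthetic two-class data standing in for neural features:

```python
import random

# Much-simplified sketch of PCA-based alignment for decoder re-use (not the
# paper's exact PDA method). Historical ("source") trials are plentiful; current
# ("target") trials are few and shifted, e.g. by recording drift. Centring each
# set and projecting onto the source's leading principal axis removes the shift.

def mean_center(rows):
    m = [sum(col) / len(rows) for col in zip(*rows)]
    return [[x - mi for x, mi in zip(r, m)] for r in rows]

def leading_pc(rows, iters=200):
    """Leading principal component via power iteration on the covariance matrix."""
    d = len(rows[0])
    cov = [[sum(r[i] * r[j] for r in rows) / len(rows) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project(rows, v):
    return [sum(x * vi for x, vi in zip(r, v)) for r in rows]

random.seed(0)
# Large historical set: two movement classes separated along the first feature.
hist = [[random.gauss(c, 0.2), random.gauss(0.0, 0.2)]
        for c in (-1.0, 1.0) for _ in range(20)]
axis = leading_pc(mean_center(hist))

# Ultra small current set (five trials per class), shifted by a constant drift.
drift = [5.0, 3.0]
cur_a = [[random.gauss(-1.0, 0.2) + drift[0], random.gauss(0.0, 0.2) + drift[1]]
         for _ in range(5)]
cur_b = [[random.gauss(1.0, 0.2) + drift[0], random.gauss(0.0, 0.2) + drift[1]]
         for _ in range(5)]
scores = project(mean_center(cur_a + cur_b), axis)
scores_a, scores_b = scores[:5], scores[5:]
```

After alignment the two classes remain linearly separable along the historical axis, so a threshold decoder fitted on the historical projections still applies; the real PDA method additionally matches the principal subspaces of both domains.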

  19. SABRE: a method for assessing the stability of gene modules in complex tissues and subject populations.

    PubMed

    Shannon, Casey P; Chen, Virginia; Takhar, Mandeep; Hollander, Zsuzsanna; Balshaw, Robert; McManus, Bruce M; Tebbutt, Scott J; Sin, Don D; Ng, Raymond T

    2016-11-14

    Gene network inference (GNI) algorithms can be used to identify sets of coordinately expressed genes, termed network modules from whole transcriptome gene expression data. The identification of such modules has become a popular approach to systems biology, with important applications in translational research. Although diverse computational and statistical approaches have been devised to identify such modules, their performance behavior is still not fully understood, particularly in complex human tissues. Given human heterogeneity, one important question is how the outputs of these computational methods are sensitive to the input sample set, or stability. A related question is how this sensitivity depends on the size of the sample set. We describe here the SABRE (Similarity Across Bootstrap RE-sampling) procedure for assessing the stability of gene network modules using a re-sampling strategy, introduce a novel criterion for identifying stable modules, and demonstrate the utility of this approach in a clinically-relevant cohort, using two different gene network module discovery algorithms. The stability of modules increased as sample size increased and stable modules were more likely to be replicated in larger sets of samples. Random modules derived from permutated gene expression data were consistently unstable, as assessed by SABRE, and provide a useful baseline value for our proposed stability criterion. Gene module sets identified by different algorithms varied with respect to their stability, as assessed by SABRE. Finally, stable modules were more readily annotated in various curated gene set databases. The SABRE procedure and proposed stability criterion may provide guidance when designing systems biology studies in complex human disease and tissues.
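The bootstrap re-sampling idea behind SABRE can be sketched with a deliberately simple stand-in "module finder" (genes correlated with a seed gene) in place of the paper's gene network inference algorithms; stability is scored as the mean Jaccard similarity of a module's membership across resamples of the samples:

```python
import random

# Sketch of SABRE-style bootstrap stability assessment. The module finder here
# (genes correlated with a seed gene) is a stand-in assumption, far simpler than
# the GNI algorithms used in the paper; the expression data are synthetic.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    dx = [xi - mx for xi in x]
    dy = [yi - my for yi in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = (sum(a * a for a in dx) * sum(b * b for b in dy)) ** 0.5
    return num / den if den else 0.0

def find_module(matrix, seed_gene=0, cutoff=0.7):
    """Stand-in module discovery: genes correlated with the seed gene."""
    cols = list(zip(*matrix))
    return {g for g, col in enumerate(cols)
            if pearson(cols[seed_gene], col) > cutoff}

def stability(matrix, n_boot=50):
    """Mean Jaccard similarity between the full-data module and resampled ones."""
    full = find_module(matrix)
    sims = []
    for _ in range(n_boot):
        boot = [random.choice(matrix) for _ in matrix]  # resample with replacement
        mod = find_module(boot)
        sims.append(len(full & mod) / len(full | mod))
    return sum(sims) / n_boot

random.seed(1)
# Synthetic expression matrix (samples x genes): genes 0-2 share a latent
# signal and form a module; genes 3-4 are independent noise.
signal = [random.gauss(0, 1) for _ in range(30)]
matrix = [[signal[i] + random.gauss(0, 0.3),
           signal[i] + random.gauss(0, 0.3),
           signal[i] + random.gauss(0, 0.3),
           random.gauss(0, 1),
           random.gauss(0, 1)]
          for i in range(30)]
score = stability(matrix)
```

A genuinely co-regulated module survives resampling with near-identical membership, while a module of noise genes changes from resample to resample, which is the contrast the SABRE criterion exploits.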

  20. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--QA ANALYTICAL RESULTS FOR METALS IN BLANK SAMPLES

    EPA Science Inventory

    The Metals in Blank Samples data set contains the analytical results of measurements of up to 27 metals in 52 blank samples. Measurements were made in blank samples of dust, indoor air, food, water, and dermal wipe residue. Blank samples were used to assess the potential for sa...

  1. Time Clustered Sampling Can Inflate the Inferred Substitution Rate in Foot-And-Mouth Disease Virus Analyses.

    PubMed

    Pedersen, Casper-Emil T; Frandsen, Peter; Wekesa, Sabenzia N; Heller, Rasmus; Sangula, Abraham K; Wadsworth, Jemma; Knowles, Nick J; Muwanika, Vincent B; Siegismund, Hans R

    2015-01-01

    With the emergence of analytical software for the inference of viral evolution, a number of studies have focused on estimating important parameters such as the substitution rate and the time to the most recent common ancestor (tMRCA) for rapidly evolving viruses. Coupled with an increasing abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale, through a study of the foot-and-mouth disease (FMD) virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully consider how samples are combined.

  2. Mercury in fish and macroinvertebrates from New York's streams and rivers: A compendium of data sources

    USGS Publications Warehouse

    Riva-Murray, Karen; Burns, Douglas A.

    2016-01-01

    The U.S. Geological Survey has compiled a list of existing data sets, from selected sources, containing mercury (Hg) concentration data in fish and macroinvertebrate samples that were collected from flowing waters of New York State from 1970 through 2014. Data sets selected for inclusion in this report were limited to those that contain fish and (or) macroinvertebrate data that were collected across broad areas, cover relatively long time periods, and (or) were collected as part of a broader-scale (e.g. national) study or program. In addition, all data sets listed were collected, processed, and analyzed with documented methods, and contain critical sample information (e.g. fish species, fish size, Hg species) that is needed to analyze and interpret the reported Hg concentration data. Fourteen data sets, all from state or federal agencies, are listed in this report, along with selected descriptive information regarding each data source and data set contents. Together, these 14 data sets contain Hg and related data for more than 7,000 biological samples collected from more than 700 unique stream and river locations between 1970 and 2014.

  3. Detection of human papillomavirus DNA in urine. A review of the literature.

    PubMed

    Vorsters, A; Micalessi, I; Bilcke, J; Ieven, M; Bogers, J; Van Damme, P

    2012-05-01

    The detection of human papillomavirus (HPV) DNA in urine, a specimen easily obtained by a non-invasive self-sampling method, has been the subject of a considerable number of studies. This review provides an overview of 41 published studies; assesses how different methods and settings may contribute to the sometimes contradictory outcomes; and discusses the potential relevance of using urine samples in vaccine trials, disease surveillance, epidemiological studies, and specific settings of cervical cancer screening. Urine sampling, storage conditions, sample preparation, DNA extraction, and DNA amplification may all have an important impact on HPV DNA detection and the form of viral DNA that is detected. Possible trends in HPV DNA prevalence in urine could be inferred from the presence of risk factors or the diagnosis of cervical lesions. HPV DNA detection in urine is feasible and may become a useful tool but necessitates further improvement and standardization.

  4. SELDI-TOF MS of quadruplicate urine and serum samples to evaluate changes related to storage conditions.

    PubMed

    Traum, Avram Z; Wells, Meghan P; Aivado, Manuel; Libermann, Towia A; Ramoni, Marco F; Schachter, Asher D

    2006-03-01

    Proteomic profiling with SELDI-TOF MS has facilitated the discovery of disease-specific protein profiles. However, multicenter studies are often hindered by the logistics required for prompt deep-freezing of samples in liquid nitrogen or dry ice within the clinic setting prior to shipping. We report high concordance between MS profiles within sets of quadruplicate split urine and serum samples deep-frozen at 0, 2, 6, and 24 h after sample collection. Gage R&R results confirm that deep-freezing times are not a statistically significant source of SELDI-TOF MS variability for either blood or urine.

  5. International Study to Evaluate PCR Methods for Detection of Trypanosoma cruzi DNA in Blood Samples from Chagas Disease Patients

    PubMed Central

    Schijman, Alejandro G.; Bisio, Margarita; Orellana, Liliana; Sued, Mariela; Duffy, Tomás; Mejia Jaramillo, Ana M.; Cura, Carolina; Auter, Frederic; Veron, Vincent; Qvarnstrom, Yvonne; Deborggraeve, Stijn; Hijar, Gisely; Zulantay, Inés; Lucero, Raúl Horacio; Velazquez, Elsa; Tellez, Tatiana; Sanchez Leon, Zunilda; Galvão, Lucia; Nolder, Debbie; Monje Rumi, María; Levi, José E.; Ramirez, Juan D.; Zorrilla, Pilar; Flores, María; Jercic, Maria I.; Crisante, Gladys; Añez, Néstor; De Castro, Ana M.; Gonzalez, Clara I.; Acosta Viana, Karla; Yachelini, Pedro; Torrico, Faustino; Robello, Carlos; Diosque, Patricio; Triana Chavez, Omar; Aznar, Christine; Russomando, Graciela; Büscher, Philippe; Assal, Azzedine; Guhl, Felipe; Sosa Estani, Sergio; DaSilva, Alexandre; Britto, Constança; Luquetti, Alejandro; Ladzins, Janis

    2011-01-01

    Background A century after its discovery, Chagas disease still represents a major neglected tropical threat. Accurate diagnostic tools as well as surrogate markers of parasitological response to treatment are research priorities in the field. The purpose of this study was to evaluate the performance of PCR methods in detection of Trypanosoma cruzi DNA by an external quality evaluation. Methodology/Findings An international collaborative study was launched by expert PCR laboratories from 16 countries. Currently used strategies were challenged against serial dilutions of purified DNA from stocks representing T. cruzi discrete typing units (DTU) I, IV and VI (set A), human blood spiked with parasite cells (set B) and Guanidine Hydrochloride-EDTA blood samples from 32 seropositive and 10 seronegative patients from Southern Cone countries (set C). Forty-eight PCR tests were reported for set A and 44 for sets B and C; 28 targeted minicircle DNA (kDNA), 13 satellite DNA (Sat-DNA) and the remainder low copy number sequences. In set A, commercial master mixes and Sat-DNA Real Time PCR showed better specificity, but kDNA-PCR was more sensitive to detect DTU I DNA. In set B, commercial DNA extraction kits presented better specificity than solvent extraction protocols. Sat-DNA PCR tests had higher specificity, with sensitivities of 0.05–0.5 parasites/mL whereas specific kDNA tests detected 5×10⁻³ par/mL. Sixteen specific and coherent methods had a Good Performance in both sets A and B (10 fg/µl of DNA from all stocks, 5 par/mL spiked blood). The median values of sensitivities, specificities and accuracies obtained in testing the Set C samples with the 16 tests determined to be good performing by analyzing Sets A and B samples varied considerably. 
Of these, four methods showed the best performance parameters across all three sample sets, detecting at least 10 fg/µl for each DNA stock and 0.5 par/mL, with a sensitivity of 83.3–94.4%, specificity of 85–95%, accuracy of 86.8–89.5% and kappa index of 0.7–0.8 compared to consensus PCR reports of the 16 good performing tests, and 63–69%, 100%, 71.4–76.2% and 0.4–0.5, respectively, compared to serodiagnosis. Method LbD2 used solvent extraction followed by SYBR Green-based Real Time PCR targeted to Sat-DNA; method LbD3 used solvent DNA extraction followed by conventional PCR targeted to Sat-DNA. The third method (LbF1) used glass fiber column based DNA extraction followed by TaqMan Real Time PCR targeted to Sat-DNA (cruzi 1/cruzi 2 and cruzi 3 TaqMan probe), and the fourth method (LbQ) used solvent DNA extraction followed by conventional hot-start PCR targeted to kDNA (primer pair 121/122). These four methods were further evaluated at the coordinating laboratory in a subset of human blood samples, confirming the performance obtained by the participating laboratories. Conclusion/Significance This study represents a first crucial step towards international validation of PCR procedures for detection of T. cruzi in human blood samples. PMID:21264349
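
The sensitivity, specificity, accuracy, and kappa figures reported above are standard confusion-matrix quantities. A minimal sketch, using illustrative counts rather than the study's data:

```python
# Sensitivity, specificity, accuracy and Cohen's kappa from a 2x2 table.
# The counts below are illustrative only, not taken from the study.

def diagnostic_metrics(tp, fn, fp, tn):
    n = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / n
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_obs = accuracy
    p_pos = ((tp + fn) / n) * ((tp + fp) / n)   # chance agreement on positives
    p_neg = ((fp + tn) / n) * ((fn + tn) / n)   # chance agreement on negatives
    p_exp = p_pos + p_neg
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sensitivity, specificity, accuracy, kappa

sens, spec, acc, kappa = diagnostic_metrics(tp=30, fn=2, fp=1, tn=9)
print(sens, spec, acc, kappa)
```

A kappa of 0.7–0.8, as reported for the four best methods against the consensus, indicates substantial agreement beyond chance.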

  6. [Rapid determination of fatty acids in soybean oils by transmission reflection-near infrared spectroscopy].

    PubMed

    Song, Tao; Zhang, Feng-ping; Liu, Yao-min; Wu, Zong-wen; Suo, You-rui

    2012-08-01

In the present research, a novel method was established for the determination of five fatty acids in soybean oil by transmission reflection-near infrared spectroscopy. The optimum conditions of the mathematical model for the five components (C16:0, C18:0, C18:1, C18:2 and C18:3) were studied, including sample set selection, chemical value analysis, and the detection methods and conditions. Chemical values were determined by gas chromatography. One hundred fifty-eight samples were selected: 138 for the modeling set, 10 for the testing set and 10 for the unknown sample set. All samples were placed in sample pools and scanned by transmission reflection-near infrared spectroscopy after sonic cleaning for 10 minutes. The 1100-2500 nm spectral region was analyzed with an acquisition interval of 2 nm. The modified partial least squares method was chosen for building the calibration model. Results demonstrated that the 1-VR values of the five fatty acids between the reference values of the modeling sample set and the near infrared predictive values were 0.8839, 0.5830, 0.9001, 0.9776 and 0.9596, respectively, and the corresponding SECV values were 0.42, 0.29, 0.83, 0.46 and 0.21. The standard errors of the five fatty acids between the reference values of the testing sample set and the near infrared predictive values were 0.891, 0.790, 0.900, 0.976 and 0.942, respectively. It was shown that the near infrared predictive values were linearly related to the chemical values and that the mathematical model established for the fatty acids of soybean oil was feasible. For validation, 10 unknown samples were analyzed by near infrared spectroscopy. The results demonstrated that the relative standard deviation between predicted and chemical values was less than 5.50%. That is, transmission reflection-near infrared spectroscopy achieved good accuracy in the analysis of fatty acids in soybean oil.

  7. NHEXAS PHASE I ARIZONA STUDY--QA ANALYTICAL RESULTS FOR METALS IN BLANK SAMPLES

    EPA Science Inventory

    The Metals in Blank Samples data set contains the analytical results of measurements of up to 27 metals in 82 blank samples from 26 households. Measurements were made in blank samples of dust, indoor and outdoor air, personal air, food, beverages, blood, urine, and dermal wipe r...

  8. NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR METALS IN BLANKS

    EPA Science Inventory

    This data set includes analytical results for measurements of metals in 205 blank samples and for particles in 64 blank samples. Measurements were made for up to 12 metals in blank samples of air, dust, soil, water, food and beverages, blood, hair, and urine. Blank samples were u...

  9. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--METALS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Blood data set contains analytical results for measurements of up to 2 metals in 86 blood samples over 86 households. Each sample was collected as a venous sample from the primary respondent within each household. The samples consisted of two 3-mL tubes. The prim...

  10. NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR PESTICIDE METABOLITES IN SPIKE SAMPLES

    EPA Science Inventory

    The Pesticides in Spikes data set contains the analytical results of measurements of up to 17 pesticides in 12 control samples (spikes) from 11 households. Measurements were made in samples of blood serum. Controls were used to assess recovery of target analytes from a sample m...

  11. Personal clothing as a potential vector of respiratory virus transmission in childcare settings.

    PubMed

    Gralton, Jan; McLaws, Mary-Louise; Rawlinson, William D

    2015-06-01

Previous investigations of fomite transmission have focused on the presence of pathogens on inanimate objects in clinical settings. There has been limited investigation of fomite transmission in non-clinical pediatric settings, where there is a high prevalence of respiratory virus infections. Over a 5-week period, this study investigated whether the personal clothing of teachers working in childcare centers was contaminated with viral RNA and could potentially mediate virus transmission. Matched morning and evening clothing and nasal samples were collected for 313 teacher work days (TWDs). Human rhinovirus (hRV) RNA was detected in samples using real-time PCR. Human rhinovirus RNA was detected in clothing samples on 16 TWDs and in nasal samples on 32 TWDs. There were no TWDs on which teachers provided both positive nasal and clothing samples, and only three TWDs on which hRV persisted on clothing for the entire day. The detection of hRV RNA was significantly predicted by the teacher's self-recognition of symptomatic illness 2 days prior to detection. These findings suggest that teachers' personal clothing in childcare settings is unlikely to facilitate the transmission of hRV. © 2015 Wiley Periodicals, Inc.

  12. Trace-element concentrations in streambed sediment across the conterminous United States

    USGS Publications Warehouse

    Rice, Karen C.

    1999-01-01

    Trace-element concentrations in 541 streambed-sediment samples collected from 20 study areas across the conterminous United States were examined as part of the National Water-Quality Assessment Program of the U.S. Geological Survey. Sediment samples were sieved and the <63-μm fraction was retained for determination of total concentrations of trace elements. Aluminum, iron, titanium, and organic carbon were weakly or not at all correlated with the nine trace elements examined:  arsenic, cadmium, chromium, copper, lead, mercury, nickel, selenium, and zinc. Four different methods of accounting for background/baseline concentrations were examined; however, normalization was not required because field sieving removed most of the background differences between samples. The sum of concentrations of trace elements characteristic of urban settings - copper, mercury, lead, and zinc - was well correlated with population density, nationwide. Median concentrations of seven trace elements (all nine examined except arsenic and selenium) were enriched in samples collected from urban settings relative to agricultural or forested settings. Forty-nine percent of the sites sampled in urban settings had concentrations of one or more trace elements that exceeded levels at which adverse biological effects could occur in aquatic biota.

  13. A method for feature selection of APT samples based on entropy

    NASA Astrophysics Data System (ADS)

    Du, Zhenyu; Li, Yihong; Hu, Jinsong

    2018-05-01

Through an in-depth study of known APT attack events, this paper proposes a feature selection method for APT samples and a logical expression generation algorithm, IOCG (Indicator of Compromise Generate). The algorithm automatically generates machine-readable IOCs (Indicators of Compromise), addressing the limitations of existing IOCs: fixed logical relationships, an unchanging number of logical items, large size, and the inability to generate an expression from a sample. It also reduces the processing time wasted on redundant and useless APT samples, improves the sharing rate of analysis results, and supports an active response to a complex and volatile APT attack landscape. The samples were divided into an experimental set and a training set, and the algorithm was used to generate the logical expressions of the training set with the IOC_Aware plug-in; the generated expressions were then compared against the detection results. The experimental results show that the algorithm is effective and can improve the detection effect.
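
Entropy-based feature scoring of the kind this method builds on can be sketched with a plain information-gain computation; this is a generic illustration, not the paper's IOCG algorithm:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Entropy reduction in `labels` from splitting on a discrete feature."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        gain -= len(subset) / n * entropy(subset)
    return gain

# Toy data: feature A separates APT vs. benign perfectly, feature B is noise.
labels = ["apt", "apt", "benign", "benign"]
feat_a = [1, 1, 0, 0]
feat_b = [1, 0, 1, 0]
print(information_gain(feat_a, labels))  # 1.0
print(information_gain(feat_b, labels))  # 0.0
```

Features with high information gain would be kept for building IOC expressions; zero-gain features can be discarded as redundant.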

  14. Molecular identification of Cryptosporidium spp. in seagulls, pigeons, dogs, and cats in Thailand

    PubMed Central

    Koompapong, Khuanchai; Mori, Hirotake; Thammasonthijarern, Nipa; Prasertbun, Rapeepun; Pintong, Ai-rada; Popruk, Supaluk; Rojekittikhun, Wichit; Chaisiri, Kittipong; Sukthana, Yaowalark; Mahittikorn, Aongart

    2014-01-01

    Zoonotic Cryptosporidium spp., particularly C. meleagridis, C. canis, and C. felis, are enteric protozoa responsible for major public health concerns around the world. To determine the spread of this parasite in Thailand, we conducted molecular identification of Cryptosporidium spp. from animal samples around the country, by collecting and investigating the feces of seagulls (Chroicocephalus brunnicephalus and Chroicocephalus ridibundus), domestic pigeons (Columba livia domestica), dogs, and cats. Seagull and pigeon samples were collected at the seaside and on the riverside to evaluate their potential for waterborne transmission. Ten pigeon samples were combined into one set, and a total of seven sets were collected. Seventy seagull samples were combined into one set, and a total of 13 sets were collected. In addition, 111 dog samples were collected from cattle farms, and 95 dog and 80 cat samples were collected from a temple. We identified C. meleagridis in pigeons, Cryptosporidium avian genotype III in seagulls, C. canis in dogs, and C. felis in cats. In the temple, the prevalence was 2.1% (2/95) for dogs and 2.5% (2/80) for cats. No Cryptosporidium was found in dog samples from cattle farms. These are the first findings of C. meleagridis in domestic pigeons, and Cryptosporidium avian genotype III in seagulls. Our study invites further molecular epidemiological investigations of Cryptosporidium in these animals and their environment to evaluate the public health risk in Thailand. PMID:25297887

  15. Spatial and temporal study of nitrate concentration in groundwater by means of coregionalization

    USGS Publications Warehouse

    D'Agostino, V.; Greene, E.A.; Passarella, G.; Vurro, M.

    1998-01-01

Spatial and temporal behavior of hydrochemical parameters in groundwater can be studied using tools provided by geostatistics. The cross-variogram can be used to measure the spatial increments between observations at two given times as a function of distance (spatial structure). Taking into account the existence of such a spatial structure, two different data sets (sampled at two different times), representing concentrations of the same hydrochemical parameter, can be analyzed by cokriging in order to reduce the uncertainty of the estimation. In particular, if one of the two data sets is a subset of the other (that is, an undersampled set), cokriging allows us to study the spatial distribution of the hydrochemical parameter at that time, while also considering the statistical characteristics of the full data set established at a different time. This paper presents an application of cokriging by using temporal subsets to study the spatial distribution of nitrate concentration in the aquifer of the Lucca Plain, central Italy. Three data sets of nitrate concentration in groundwater were collected during three different periods in 1991. The first set was from 47 wells, but the second and the third are undersampled and represent 28 and 27 wells, respectively. Comparing the result of cokriging with ordinary kriging showed an improvement of the uncertainty in terms of reducing the estimation variance. The application of cokriging to the undersampled data sets reduced the uncertainty in estimating nitrate concentration and at the same time decreased the cost of the field sampling and laboratory analysis.
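
The cross-variogram mentioned above can be estimated empirically from two co-located data sets; a minimal sketch on synthetic data (the full cokriging system, which uses these variograms, is omitted):

```python
import numpy as np

def cross_variogram(coords, z1, z2, lags, tol):
    """Empirical cross-variogram: gamma_12(h) = (1 / 2N(h)) *
    sum of (z1_i - z1_j)(z2_i - z2_j) over pairs at distance ~h."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gammas = []
    for h in lags:
        i, j = np.where((np.abs(d - h) < tol) & (d > 0))
        gammas.append(0.5 * np.mean((z1[i] - z1[j]) * (z2[i] - z2[j])))
    return np.array(gammas)

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(200, 2))        # well locations
trend = coords[:, 0]                              # shared spatial trend
z1 = trend + rng.normal(scale=0.2, size=200)      # e.g. nitrate, survey 1
z2 = trend + rng.normal(scale=0.2, size=200)      # e.g. nitrate, survey 2
gamma = cross_variogram(coords, z1, z2, lags=[1.0, 3.0, 5.0], tol=0.25)
print(gamma)  # grows with lag: the two surveys share spatial structure
```

A cross-variogram that rises with lag distance, as here, is the signature of shared spatial structure that makes cokriging worthwhile.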

  16. [Studies on the brand traceability of milk powder based on NIR spectroscopy technology].

    PubMed

    Guan, Xiao; Gu, Fang-Qing; Liu, Jing; Yang, Yong-Jian

    2013-10-01

Brand traceability of several different kinds of milk powder was studied in the present paper by combining near infrared spectroscopy in diffuse reflectance mode with soft independent modeling of class analogy (SIMCA). The near infrared spectra of 138 samples were collected, including 54 Guangming milk powder samples, 43 Netherlands milk powder samples, 33 Nestle samples and 8 Yili samples. After pretreatment of the full-spectrum data variables in the training set, principal component analysis was performed; the cumulative variance contribution rate of the first three principal components was about 99.07%. A milk powder principal component regression model based on SIMCA was established and used to classify the milk powder samples in the prediction sets. The results showed that the recognition rates for Guangming, Netherlands and Nestle milk powder were 78%, 75% and 100%, and the rejection rates were 100%, 87% and 88%, respectively. Therefore, near infrared spectroscopy combined with the SIMCA model can classify milk powder with high accuracy and is a promising method for identifying milk powder variety.
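
A SIMCA-style classifier, one PCA model per brand with assignment by model fit, can be sketched as below; this simplified version assigns by smallest PCA reconstruction residual rather than the F-test acceptance limits of full SIMCA, and the "spectra" are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA

class SimpleSIMCA:
    """Minimal SIMCA-style classifier: fit one PCA model per class and
    assign new samples to the class whose model reconstructs them best.
    A sketch only; full SIMCA adds per-class statistical acceptance limits."""
    def __init__(self, n_components=2):
        self.n_components = n_components
        self.models = {}

    def fit(self, X, y):
        for label in set(y):
            self.models[label] = PCA(self.n_components).fit(X[y == label])
        return self

    def predict(self, X):
        labels = list(self.models)
        residuals = np.stack([
            np.linalg.norm(X - m.inverse_transform(m.transform(X)), axis=1)
            for m in (self.models[l] for l in labels)
        ])
        return np.array(labels)[residuals.argmin(axis=0)]

rng = np.random.default_rng(0)
# Two synthetic "brands" living on different 2-D planes in 10-D spectral space
basis_a, basis_b = rng.normal(size=(2, 10)), rng.normal(size=(2, 10))
Xa = rng.normal(size=(40, 2)) @ basis_a + rng.normal(scale=0.05, size=(40, 10))
Xb = rng.normal(size=(40, 2)) @ basis_b + rng.normal(scale=0.05, size=(40, 10))
X = np.vstack([Xa, Xb])
y = np.array(["A"] * 40 + ["B"] * 40)

model = SimpleSIMCA().fit(X, y)
accuracy = (model.predict(X) == y).mean()
print(accuracy)
```

Residual-based assignment also yields the rejection behavior the abstract reports: a sample far from every class model can be flagged as belonging to none.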

  17. Navigating complex sample analysis using national survey data.

    PubMed

    Saylor, Jennifer; Friedmann, Erika; Lee, Hyeon Joo

    2012-01-01

The National Center for Health Statistics conducts the National Health and Nutrition Examination Survey and other national surveys with probability-based complex sample designs. The goal of national surveys is to provide valid data for the population of the United States. Analyses of data from population surveys present unique challenges in the research process but are valuable avenues to study the health of the United States population. The aim of this study was to demonstrate the importance of using complex data analysis techniques for data obtained with a complex multistage sampling design and to provide an example of analysis using the SPSS Complex Samples procedure. Challenges and solutions specific to secondary data analysis of national databases are illustrated using the National Health and Nutrition Examination Survey as the exemplar. Oversampling of small or sensitive groups provides necessary estimates of variability within small groups. Use of weights without complex samples procedures accurately estimates population means and frequencies from the sample after accounting for over- or undersampling of specific groups. Weighting alone, however, leads to inappropriate population estimates of variability, because they are computed as if the measures were from the entire population rather than from a sample in the data set. The SPSS Complex Samples procedure allows inclusion of all sampling design elements: stratification, clusters, and weights. Use of national data sets allows use of extensive, expensive, and well-documented survey data for exploratory questions but limits analysis to those variables included in the data set. The large sample permits examination of multiple predictors and interactive relationships. Merging data files, availability of data in several waves of surveys, and complex sampling are techniques used to provide a representative sample but present unique challenges. Use of these data is optimized with sophisticated data analysis techniques.
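
The weighting point can be illustrated numerically: with a deliberately oversampled subgroup, the unweighted sample mean is biased while the design-weighted mean recovers the population mean. As the abstract stresses, weighting alone does not fix variance estimation, which needs the full design; the sketch below shows only the point estimate, on invented data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical population: 900 people in group A (mean 10), 100 in group B (mean 20),
# so the true population mean is 0.9 * 10 + 0.1 * 20 = 11.
# The survey oversamples group B: 50 respondents drawn from each group.
a = rng.normal(10, 1, size=50)  # each A respondent represents 900 / 50 = 18 people
b = rng.normal(20, 1, size=50)  # each B respondent represents 100 / 50 = 2 people

values = np.concatenate([a, b])
weights = np.concatenate([np.full(50, 18.0), np.full(50, 2.0)])

unweighted = values.mean()                      # pulled toward the oversampled group
weighted = np.average(values, weights=weights)  # design-weighted estimate, near 11
print(round(unweighted, 2), round(weighted, 2))
```

Design-based standard errors would additionally require the stratum and cluster identifiers, which is exactly what procedures like SPSS Complex Samples consume.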

  18. Impact assessment of on-site sanitation system on groundwater quality in alluvial settings: A case study from Lucknow city in North India.

    PubMed

    Jangam, Chandrakant; Ramya Sanam, S; Chaturvedi, M K; Padmakar, C; Pujari, Paras R; Labhasetwar, Pawan K

    2015-10-01

The present case study was undertaken to investigate the impact of on-site sanitation on groundwater quality in alluvial settings in Lucknow City, India. Groundwater samples were collected in the areas of Lucknow City where on-site sanitation systems have been implemented and were analyzed for the major physicochemical parameters and fecal coliform. The results of the analysis reveal that none of the groundwater samples exceeded the Bureau of Indian Standards (BIS) limits for any of the parameters. Fecal coliform was not found in the majority of the samples, including those very close to the septic tank. The study area has a thick alluvium cover as its top layer, which acts as a natural barrier against groundwater contamination from the on-site sanitation system. A t test was performed to assess the seasonal effect on groundwater quality; it implies that season has a significant effect on groundwater quality in the study area.
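
A seasonal comparison of this kind reduces to a t test on the two sampling rounds. A sketch with SciPy on synthetic concentrations, assuming a paired design in which the same wells are sampled in both seasons (the study does not specify the pairing, and the values are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical nitrate-like concentrations (mg/L) at 20 wells,
# sampled pre- and post-monsoon; values are invented, not the study's data.
pre_monsoon = rng.normal(30, 5, size=20)
post_monsoon = pre_monsoon + rng.normal(4, 2, size=20)  # seasonal shift

# Paired t test: the same wells are measured in both seasons
t_stat, p_value = stats.ttest_rel(pre_monsoon, post_monsoon)
print(p_value < 0.05)  # significant seasonal effect in this toy data
```

For independent wells per season, `stats.ttest_ind` would be the appropriate variant.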

  19. Evaluation of setting time and flow properties of self-synthesize alginate impressions

    NASA Astrophysics Data System (ADS)

    Halim, Calista; Cahyanto, Arief; Sriwidodo, Harsatiningsih, Zulia

    2018-02-01

Alginate is an elastic hydrocolloid dental impression material used to obtain a negative reproduction of the oral mucosa, such as recording soft tissue and occlusal relationships. The aim of the present study was to synthesize alginate and to determine its setting time and flow properties. There were five groups of alginate, comprising fifty samples of self-synthesized alginate and a commercial alginate impression product. The fifty samples were divided between two tests, twenty-five each for the setting time and flow tests. Setting time was recorded in seconds (s); flow was recorded in mm2. The fastest setting time was in group three (148.8 s) and the slowest was in group four. The highest flow result was in group three (69.70 mm2) and the lowest was in group one (58.34 mm2). Results were analyzed statistically by one-way ANOVA (α = 0.05), which showed a statistically significant difference in setting time, but not in flow properties, between the self-synthesized alginates and the commercial impression product. In conclusion, the alginate impression material was successfully self-synthesized, and variations in composition influence setting time and flow properties. Group three most closely resembled the control group in setting time; group four most closely resembled it in flow.
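
The one-way ANOVA used to compare the groups can be sketched with SciPy on synthetic setting times (group means and spreads are illustrative, not the paper's measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Illustrative setting times (s) for five alginate groups, five samples each;
# the means below are invented stand-ins for the five formulations.
groups = [rng.normal(mu, 3, size=5) for mu in (150, 160, 149, 170, 175)]

# One-way ANOVA: do the group means differ more than within-group scatter explains?
f_stat, p_value = stats.f_oneway(*groups)
print(p_value < 0.05)  # group means differ significantly in this toy data
```

A significant F test, as the paper reports for setting time, would normally be followed by a post hoc pairwise comparison to locate which formulations differ.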

  20. Automated Classification and Analysis of Non-metallic Inclusion Data Sets

    NASA Astrophysics Data System (ADS)

    Abdulsalam, Mohammad; Zhang, Tongsheng; Tan, Jia; Webler, Bryan A.

    2018-05-01

The aim of this study is to utilize principal component analysis (PCA), clustering methods, and correlation analysis to condense and examine large, multivariate data sets produced from automated analysis of non-metallic inclusions. Non-metallic inclusions play a major role in defining the properties of steel, and their examination has been greatly aided by automated analysis in scanning electron microscopes equipped with energy dispersive X-ray spectroscopy. The methods were applied to analyze inclusions in two sets of samples: two laboratory-scale samples and four industrial samples from near-finished 4140 alloy steel components with varying machinability. The laboratory samples had well-defined inclusion chemistries, composed of MgO-Al2O3-CaO, spinel (MgO-Al2O3), and calcium aluminate inclusions. The industrial samples contained MnS inclusions as well as (Ca,Mn)S + calcium aluminate oxide inclusions. PCA could be used to reduce the inclusion chemistry variables to a 2D plot, which revealed inclusion chemistry groupings in the samples. Clustering methods were used to automatically classify inclusion chemistry measurements into groups, i.e., no user-defined rules were required.
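
The PCA-plus-clustering workflow can be sketched on synthetic inclusion chemistries; the compositions below are invented stand-ins for the oxide and sulfide families named in the abstract:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic inclusion chemistries (fractions of MgO, Al2O3, CaO, a sulfide-like S):
# three chemistry families, loosely mimicking the groups in the study.
centers = np.array([[0.3, 0.5, 0.2, 0.0],   # spinel-like MgO-Al2O3
                    [0.1, 0.5, 0.4, 0.0],   # calcium aluminate-like
                    [0.0, 0.0, 0.1, 0.9]])  # sulfide-like
X = np.vstack([c + rng.normal(scale=0.02, size=(50, 4)) for c in centers])

# Reduce the chemistry variables to a 2-D PCA plot, then cluster automatically
scores = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(len(set(labels)))  # chemistry groups recovered without user-defined rules
```

In practice the number of clusters would be chosen by inspecting the PCA plot or a criterion such as the silhouette score rather than fixed in advance.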

  1. Environmental monitoring and analysis of faecal contamination in an urban setting in the city of Bari (Apulia region, Italy): health and hygiene implications.

    PubMed

    Tarsitano, Elvira; Greco, Grazia; Decaro, Nicola; Nicassio, Francesco; Lucente, Maria Stella; Buonavoglia, Canio; Tempesta, Maria

    2010-11-01

    Few studies have been conducted in Italy to quantify the potential risk associated with dynamics and distribution of pathogens in urban settings. The aim of this study was to acquire data on the environmental faecal contamination in urban ecosystems, by assessing the presence of pathogens in public areas in the city of Bari (Apulia region, Italy). To determine the degree of environmental contamination, samples of dog faeces and bird guano were collected from different areas in the city of Bari (park green areas, playgrounds, public housing areas, parkways, and a school). A total of 152 canine faecal samples, in 54 pools, and two samples of pigeon guano from 66 monitored sites were examined. No samples were found in 12 areas spread over nine sites. Chlamydophila psittaci was detected in seven canine and two pigeon guano samples. Salmonella species were not found. On the other hand, four of 54 canine faecal samples were positive for reovirus. Thirteen canine faecal samples were positive for parasite eggs: 8/54 samples contained Toxocara canis and Toxascaris leonina eggs and 5/54 samples contained Ancylostoma caninum eggs. Our study showed that public areas are often contaminated by potentially zoonotic pathogens.

  2. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. 
We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
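
The contrast between conditional and marginal behavior of the sample average can be reproduced with a small simulation; the stopping rule below (stop at the interim look when the interim mean is positive, true mean zero) is an assumed toy rule, not one from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, trials = 50, 100, 40000  # interim look at n1, final size n2; true mean 0
x = rng.normal(0, 1, size=(trials, n2))
interim = x[:, :n1].mean(axis=1)
stopped = interim > 0                              # stop early on a "promising" look
sample_avg = np.where(stopped, interim, x.mean(axis=1))

print(round(sample_avg[stopped].mean(), 3))  # conditional on early stop: clearly > 0
print(round(sample_avg.mean(), 3))           # marginal: only a small residual bias
```

Conditioning on the early-stopped trials gives the impression of substantial bias, while the marginal average over all trials shows only the small finite-sample bias that shrinks with the sample size, in line with the argument above.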

  3. The structure of Turkish trait-descriptive adjectives.

    PubMed

    Somer, O; Goldberg, L R

    1999-03-01

    This description of the Turkish lexical project reports some initial findings on the structure of Turkish personality-related variables. In addition, it provides evidence on the effects of target evaluative homogeneity vs. heterogeneity (e.g., samples of well-liked target individuals vs. samples of both liked and disliked targets) on the resulting factor structures, and thus it provides a first test of the conclusions reached by D. Peabody and L. R. Goldberg (1989) using English trait terms. In 2 separate studies, and in 2 types of data sets, clear versions of the Big Five factor structure were found. And both studies replicated and extended the findings of Peabody and Goldberg; virtually orthogonal factors of relatively equal size were found in the homogeneous samples, and a more highly correlated set of factors with relatively large Agreeableness and Conscientiousness dimensions was found in the heterogeneous samples.

  4. The zoonotic potential of Giardia intestinalis assemblage E in rural settings.

    PubMed

    Abdel-Moein, Khaled A; Saeed, Hossam

    2016-08-01

Giardiasis is a globally re-emerging protozoan disease with veterinary and public health implications. The current study was carried out to investigate the zoonotic potential of the livestock-specific assemblage E in rural settings. For this purpose, a total of 40 microscopically positive Giardia stool samples from children with gastrointestinal complaints, with or without diarrhea, were enrolled in the study, as well as fecal samples from 46 diarrheic cattle (18 dairy cows and 28 calves). Animal samples were examined by the sedimentation method to identify Giardia spp., and then all Giardia-positive samples from humans and animals were processed for molecular detection of the livestock-specific assemblage E through amplification of the assemblage-specific triosephosphate isomerase (tpi) gene using nested polymerase chain reaction (PCR). The results revealed an unexpectedly high occurrence of assemblage E among human samples (62.5 %); the distribution among patients with and without diarrhea was 42.1 and 81 %, respectively. On the other hand, the prevalence of Giardia spp. among diarrheic dairy cattle was 8.7 %, with only calves yielding positive results (14.3 %), and all bovine Giardia spp. were genetically classified as Giardia intestinalis assemblage E. Moreover, DNA sequencing of one randomly selected positive human sample and one bovine sample revealed 100 and 99 % identity, respectively, with assemblage E tpi gene sequences available in GenBank after BLAST analysis. In conclusion, the current study highlights the wide dissemination of the livestock-specific assemblage E among humans in rural areas, and thus a zoonotic transmission cycle should not be discounted during the control of giardiasis in such settings.

  5. Training set optimization under population structure in genomic selection.

    PubMed

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, it is desirable for the sampling method to capture as much of the phenotypic variation as possible in the TRS. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all the traits except test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between the TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion for optimizing the TRS seems to depend on the interaction of trait architecture and population structure.
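
The stratified-sampling idea, drawing the TRS evenly across population-structure groups, can be sketched as follows; this is not the CDmean criterion, the clustering step is a simple stand-in for a structure analysis, and the genotypes are simulated:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic marker matrix (0/1/2 allele counts) with population structure:
# two subpopulations with contrasting allele frequencies at 50 markers.
geno = np.vstack([rng.binomial(2, 0.2, size=(100, 50)),
                  rng.binomial(2, 0.8, size=(100, 50))]).astype(float)

# Stratified TRS sampling: infer structure groups from the genotypes,
# then draw an equal number of candidates from each group.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(geno)
trs = np.concatenate([
    rng.choice(np.where(clusters == c)[0], size=20, replace=False)
    for c in np.unique(clusters)
])
print(len(trs), [int((clusters[trs] == c).sum()) for c in np.unique(clusters)])
```

Random sampling could, by chance, leave one subpopulation underrepresented in the TRS; stratification guarantees both groups contribute, which is the behavior the rice results above favor under strong structure.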

  6. Characterization and electron-energy-loss spectroscopy on NiV and NiMo superlattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahmood, S.H.

    1986-01-01

NiV superlattices with periods (Λ) ranging from 15 to 80 Å, and NiMo superlattices with Λ from 14 to 110 Å, were studied using X-ray Diffraction (XRD), Electron Diffraction (ED), Energy-Dispersive X-Ray (EDX) microanalysis, and Electron Energy Loss Spectroscopy (EELS). Both of these systems have sharp superlattice-to-amorphous (S-A) transitions at about Λ = 17 Å. Superlattices with Λ around the S-A boundary were found to have large local variations in the in-plane grain sizes. Except for a few isolated regions, the chemical composition of the samples was found to be uniform. In samples prepared at Argonne National Laboratory (ANL), most places studied with EELS showed changes in the EELS spectrum with decreasing Λ. An observed growth in a plasmon peak at approx. 10 eV in both NiV and NiMo as Λ decreased down to 19 Å is attributed to excitation of interface plasmons. Consistent with this attribution, the peak height shrank in the amorphous samples. The width of this peak is consistent with the theory. The shift of this peak down to 9 eV with decreasing Λ in NiMo is not understood.

  7. Effectiveness of Modular CBT for Child Anxiety in Elementary Schools

    ERIC Educational Resources Information Center

    Chiu, Angela W.; Langer, David A.; McLeod, Bryce D.; Har, Kim; Drahota, Amy; Galla, Brian M.; Jacobs, Jeffrey; Ifekwunigwe, Muriel; Wood, Jeffrey J.

    2013-01-01

    Most randomized controlled trials of cognitive-behavioral therapy (CBT) for children with anxiety disorders have evaluated treatment efficacy using recruited samples treated in research settings. Clinical trials in school settings are needed to determine whether CBT can be effective when delivered in real-world settings. This study evaluated a modular…

  8. Goal Setting and Self-Efficacy among Delinquent, At-Risk and Not At-Risk Adolescents

    ERIC Educational Resources Information Center

    Carroll, Annemaree; Gordon, Kellie; Haynes, Michele; Houghton, Stephen

    2013-01-01

    Setting clear, achievable goals that enhance self-efficacy and reputational status directs the energies of adolescents into socially conforming or non-conforming activities. The present study investigates the characteristics of and relationships between goal setting and self-efficacy among a matched sample of 88 delinquent (18% female), 97 at-risk…

  9. NHEXAS PHASE I ARIZONA STUDY--METALS IN AIR ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Air data set contains analytical results for measurements of up to 11 metals in 369 air samples over 175 households. Samples were taken by pumping standardized air volumes through filters at indoor and outdoor sites around each household being sampled. The primary...

  10. Sample handling for mass spectrometric proteomic investigations of human sera.

    PubMed

    West-Nielsen, Mikkel; Høgdall, Estrid V; Marchiori, Elena; Høgdall, Claus K; Schou, Christian; Heegaard, Niels H H

    2005-08-15

    Proteomic investigations of sera are potentially of value for diagnosis, prognosis, choice of therapy, and disease activity assessment by virtue of discovering new biomarkers and biomarker patterns. Much debate focuses on the biological relevance and the need for identification of such biomarkers while less effort has been invested in devising standard procedures for sample preparation and storage in relation to model building based on complex sets of mass spectrometric (MS) data. Thus, development of standardized methods for collection and storage of patient samples together with standards for transportation and handling of samples are needed. This requires knowledge about how sample processing affects MS-based proteome analyses and thereby how nonbiological biased classification errors are avoided. In this study, we characterize the effects of sample handling, including clotting conditions, storage temperature, storage time, and freeze/thaw cycles, on MS-based proteomics of human serum by using principal components analysis, support vector machine learning, and clustering methods based on genetic algorithms as class modeling and prediction methods. Using spiking to artificially create differentiable sample groups, this integrated approach yields data that--even when working with sample groups that differ more than may be expected in biological studies--clearly demonstrate the need for comparable sampling conditions for samples used for modeling and for the samples that are going into the test set group. Also, the study emphasizes the difference between class prediction and class comparison studies as well as the advantages and disadvantages of different modeling methods.

  11. High throughput image cytometry for detection of suspicious lesions in the oral cavity

    NASA Astrophysics Data System (ADS)

    MacAulay, Calum; Poh, Catherine F.; Guillaud, Martial; Michele Williams, Pamela; Laronde, Denise M.; Zhang, Lewei; Rosin, Miriam P.

    2012-08-01

    The successful management of oral cancer depends upon early detection, which relies heavily on the clinician's ability to discriminate sometimes subtle alterations of the infrequent premalignant lesions from the more common reactive and inflammatory conditions in the oral mucosa. Even among experienced oral specialists this can be challenging, particularly when using new wide field-of-view direct fluorescence visualization devices clinically introduced for the recognition of at-risk tissue. The objective of this study is to examine whether quantitative cytometric analysis of oral brushing samples could facilitate the assessment of the risk of visually ambiguous lesions. A total of 369 cytological samples were collected and analyzed: (1) 148 samples from pathology-proven sites of SCC, carcinoma in situ or severe dysplasia; (2) 77 samples from sites with inflammation, infection, or trauma, and (3) 144 samples from normal sites. These were randomly separated into training and test sets. The best algorithm correctly recognized 92.5% of the normal samples, 89.4% of the abnormal samples, and 86.2% of the confounders in the training set, as well as 100% of the normal samples and 94.4% of the abnormal samples in the test set. These data suggest that quantitative cytology could reduce by more than 85% the number of visually suspect lesions requiring further assessment by biopsy.

  12. Four hundred or more participants needed for stable contingency table estimates of clinical prediction rule performance.

    PubMed

    Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan

    2017-02-01

    To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
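    The paper's repeated-random-sampling design can be sketched in a few lines of NumPy. Everything here is synthetic and illustrative (the cohort size, 30% prevalence, and the rule's built-in 80%/70% sensitivity/specificity are made-up numbers, not the study's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic cohort: 5,000 people, 30% outcome prevalence, and a binary
    # prediction-rule indicator with roughly 80% sensitivity and 70%
    # specificity built in.
    N = 5_000
    outcome = rng.random(N) < 0.30
    rule_pos = np.where(outcome, rng.random(N) < 0.80, rng.random(N) < 0.30)

    def sensitivity(idx):
        """Sensitivity read off the 2x2 contingency table of a subsample."""
        o, r = outcome[idx], rule_pos[idx]
        return (r & o).sum() / o.sum()

    # Repeated random sampling at each sample size, 100 draws per size.
    sd = {}
    for n in (100, 400, 1000):
        estimates = [sensitivity(rng.choice(N, size=n, replace=False))
                     for _ in range(100)]
        sd[n] = float(np.std(estimates))
        print(f"n={n:5d}: SD of sensitivity across resamples = {sd[n]:.3f}")
    ```

    With these settings the spread of the sensitivity estimates shrinks markedly between n=100 and n=400-1,000, the same binomial behavior behind the stabilization the authors report around 400-600 participants.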

  13. Detection of Bovine and Porcine Adenoviruses for Tracing the Source of Fecal Contamination

    PubMed Central

    Maluquer de Motes, Carlos; Clemente-Casares, Pilar; Hundesa, Ayalkibet; Martín, Margarita; Girones, Rosina

    2004-01-01

    In this study, a molecular procedure for the detection of adenoviruses of animal origin was developed to evaluate the level of excretion of these viruses by swine and cattle and to design a test to facilitate the tracing of specific sources of environmental viral contamination. Two sets of oligonucleotides were designed, one to detect porcine adenoviruses and the other to detect bovine and ovine adenoviruses. The specificity of the assays was assessed in 31 fecal samples and 12 sewage samples that were collected monthly during a 1-year period. The data also provided information on the environmental prevalence of animal adenoviruses. Porcine adenoviruses were detected in 17 of 24 (70%) pools of swine samples studied, with most isolates being closely related to serotype 3. Bovine adenoviruses were present in 6 of 8 (75%) pools studied, with strains belonging to the genera Mastadenovirus and Atadenovirus and being similar to bovine adenoviruses of types 2, 4, and 7. These sets of primers produced negative results in nested PCR assays when human adenovirus controls and urban-sewage samples were tested. Likewise, the sets of primers previously designed for detection of human adenovirus also produced negative results with animal adenoviruses. These results indicate the importance of further studies to evaluate the usefulness of these tests to trace the source of fecal contamination in water and food and for environmental studies. PMID:15006765

  14. Detection of bovine and porcine adenoviruses for tracing the source of fecal contamination.

    PubMed

    Maluquer de Motes, Carlos; Clemente-Casares, Pilar; Hundesa, Ayalkibet; Martín, Margarita; Girones, Rosina

    2004-03-01

    In this study, a molecular procedure for the detection of adenoviruses of animal origin was developed to evaluate the level of excretion of these viruses by swine and cattle and to design a test to facilitate the tracing of specific sources of environmental viral contamination. Two sets of oligonucleotides were designed, one to detect porcine adenoviruses and the other to detect bovine and ovine adenoviruses. The specificity of the assays was assessed in 31 fecal samples and 12 sewage samples that were collected monthly during a 1-year period. The data also provided information on the environmental prevalence of animal adenoviruses. Porcine adenoviruses were detected in 17 of 24 (70%) pools of swine samples studied, with most isolates being closely related to serotype 3. Bovine adenoviruses were present in 6 of 8 (75%) pools studied, with strains belonging to the genera Mastadenovirus and Atadenovirus and being similar to bovine adenoviruses of types 2, 4, and 7. These sets of primers produced negative results in nested PCR assays when human adenovirus controls and urban-sewage samples were tested. Likewise, the sets of primers previously designed for detection of human adenovirus also produced negative results with animal adenoviruses. These results indicate the importance of further studies to evaluate the usefulness of these tests to trace the source of fecal contamination in water and food and for environmental studies.

  15. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty in a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach over LHS: (1) it is more effective and efficient; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which indicates better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced substantially with ɛ-NSGAII based sampling. This study provides a new sampling approach for improving multiple-metrics uncertainty analysis under the GLUE framework, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
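    The ɛ-NSGAII sampler itself needs a multi-objective optimization library, but the LHS baseline the authors compare against is easy to sketch. The function below is a generic Latin hypercube sampler; the two-parameter bounds are placeholders, not the XAJ model's:

    ```python
    import numpy as np

    def latin_hypercube(n_samples, bounds, rng):
        """Latin hypercube sample: each dimension is cut into n_samples
        equal-probability strata, one point lands in each stratum, and the
        strata are randomly paired across dimensions."""
        d = len(bounds)
        # One independent permutation of stratum indices per dimension.
        strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
        u = (strata + rng.random((n_samples, d))) / n_samples
        lo = np.array([b[0] for b in bounds], dtype=float)
        hi = np.array([b[1] for b in bounds], dtype=float)
        return lo + u * (hi - lo)

    # Hypothetical 2-parameter search space (bounds are placeholders).
    rng = np.random.default_rng(0)
    X = latin_hypercube(1000, [(0.0, 1.0), (10.0, 200.0)], rng)
    print(X.shape)  # (1000, 2)
    ```

    Each of the 1000 strata in each dimension receives exactly one point, which is the space-filling property GLUE studies rely on when LHS is used for sampling.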

  16. Validation of the Care-Related Quality of Life Instrument in different study settings: findings from The Older Persons and Informal Caregivers Survey Minimum DataSet (TOPICS-MDS).

    PubMed

    Lutomski, J E; van Exel, N J A; Kempen, G I J M; Moll van Charante, E P; den Elzen, W P J; Jansen, A P D; Krabbe, P F M; Steunenberg, B; Steyerberg, E W; Olde Rikkert, M G M; Melis, R J F

    2015-05-01

    Validity is a contextual aspect of a scale that may differ across sample populations and study protocols. The objective of our study was to validate the Care-Related Quality of Life Instrument (CarerQol) across two different study design features, sampling framework (general population vs. different care settings) and survey mode (interview vs. written questionnaire). Data were extracted from The Older Persons and Informal Caregivers Minimum DataSet (TOPICS-MDS, www.topics-mds.eu), a pooled public-access data set with information on >3,000 informal caregivers throughout the Netherlands. Meta-correlations and linear mixed models between the CarerQol's seven dimensions (CarerQol-7D) and caregiver's level of happiness (CarerQol-VAS) and self-rated burden (SRB) were performed. The CarerQol-7D dimensions were correlated to the CarerQol-VAS and SRB in the pooled data set and the subgroups. The strength of correlations between CarerQol-7D dimensions and SRB was weaker among caregivers who were interviewed versus those who completed a written questionnaire. The directionality of associations between the CarerQol-VAS, SRB and the CarerQol-7D dimensions in the multivariate model supported the construct validity of the CarerQol in the pooled population. Significant interaction terms were observed in several dimensions of the CarerQol-7D across sampling frame and survey mode, suggesting meaningful differences in reporting levels. Although good scientific practice emphasises the importance of re-evaluating instrument properties in individual research studies, our findings support the validity and applicability of the CarerQol instrument in a variety of settings. Due to minor differential reporting, pooled CarerQol data collected using mixed administration modes should be interpreted with caution; for TOPICS-MDS, meta-analytic techniques may be warranted.

  17. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY-PESTICIDES AND POLYCHLORINATED BIPHENYLS (PCBS) IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides and PCBs in Blood data set contains analytical results for measurements of up to 11 pesticides and up to 36 PCBs in 86 blood samples over 86 households. Each sample was collected as a venous sample from the primary respondent within each household. The samples co...

  18. Derivation and validation of different machine-learning models in mortality prediction of trauma in motorcycle riders: a cross-sectional retrospective study in southern Taiwan

    PubMed Central

    Kuo, Pao-Jen; Wu, Shao-Chun; Chien, Peng-Chen; Rau, Cheng-Shyuan; Chen, Yi-Chun; Hsieh, Hsiao-Yun; Hsieh, Ching-Hua

    2018-01-01

    Objectives: This study aimed to build and test machine learning (ML) models to predict the mortality of hospitalised motorcycle riders. Setting: The study was conducted in a level-1 trauma centre in southern Taiwan. Participants: Motorcycle riders who were hospitalised between January 2009 and December 2015 were classified into a training set (n=6306) and test set (n=946). Using the demographic information, injury characteristics and laboratory data of patients, logistic regression (LR), support vector machine (SVM) and decision tree (DT) analyses were performed to determine the mortality of individual motorcycle riders, under different conditions, using all samples or reduced samples, as well as all variables or selected features in the algorithm. Primary and secondary outcome measures: The predictive performance of the model was evaluated based on accuracy, sensitivity, specificity and geometric mean, and an analysis of the area under the receiver operating characteristic curves of the two different models was carried out. Results: In the training set, both LR and SVM had a significantly higher area under the receiver operating characteristic curve (AUC) than DT. No significant difference was observed in the AUC of LR and SVM, regardless of whether all samples or reduced samples and whether all variables or selected features were used. In the test set, the performance of the SVM model for all samples with selected features was better than that of all other models, with an accuracy of 98.73%, sensitivity of 86.96%, specificity of 99.02%, geometric mean of 92.79% and AUC of 0.9517, in mortality prediction. Conclusion: ML can provide a feasible level of accuracy in predicting the mortality of motorcycle riders. Integration of the ML model, particularly the SVM algorithm, in the trauma system may help identify high-risk patients and, therefore, guide appropriate interventions by the clinical staff. PMID:29306885

  19. NHEXAS PHASE I MARYLAND STUDY--PESTICIDE METABOLITES IN URINE ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticide Metabolites in Urine data set contains analytical results for measurements of up to 9 pesticides in 345 urine samples over 79 households. Each sample was collected from the primary respondent within each household during the study and represented the first morning ...

  20. NHEXAS PHASE I ARIZONA STUDY--PESTICIDE METABOLITES IN URINE ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticide Metabolites in Urine data set contains analytical results for measurements of up to 4 pesticide metabolites in 176 urine samples over 176 households. Each sample was collected from the primary respondent within each household during Stage III of the NHEXAS study. ...

  1. GEE-based SNP set association test for continuous and discrete traits in family-based association studies.

    PubMed

    Wang, Xuefeng; Lee, Seunggeun; Zhu, Xiaofeng; Redline, Susan; Lin, Xihong

    2013-12-01

    Family-based genetic association studies of related individuals provide opportunities to detect genetic variants that complement studies of unrelated individuals. Most statistical methods for family association studies for common variants are single marker based, which test one SNP at a time. In this paper, we consider testing the effect of an SNP set, e.g., SNPs in a gene, in family studies, for both continuous and discrete traits. Specifically, we propose a generalized estimating equations (GEEs) based kernel association test, a variance component based testing method, to test for the association between a phenotype and multiple variants in an SNP set jointly using family samples. The proposed approach allows for both continuous and discrete traits, where the correlation among family members is taken into account through the use of an empirical covariance estimator. We derive the theoretical distribution of the proposed statistic under the null and develop analytical methods to calculate the P-values. We also propose an efficient resampling method for correcting for small sample size bias in family studies. The proposed method allows for easily incorporating covariates and SNP-SNP interactions. Simulation studies show that the proposed method properly controls for type I error rates under both random and ascertained sampling schemes in family studies. We demonstrate through simulation studies that our approach has superior performance for association mapping compared to the single marker based minimum P-value GEE test for an SNP-set effect over a range of scenarios. We illustrate the application of the proposed method using data from the Cleveland Family GWAS Study. © 2013 WILEY PERIODICALS, INC.

  2. Preliminary Study on Appearance-Based Detection of Anatomical Point Landmarks in Body Trunk CT Images

    NASA Astrophysics Data System (ADS)

    Nemoto, Mitsutaka; Nomura, Yukihiro; Hanaoka, Shohei; Masutani, Yoshitaka; Yoshikawa, Takeharu; Hayashi, Naoto; Yoshioka, Naoki; Ohtomo, Kuni

    Anatomical point landmarks, as primitive units of anatomical knowledge, are useful for medical image understanding. In this study, we propose a detection method for anatomical point landmarks based on appearance models, which capture gray-level statistical variations at point landmarks and their surrounding areas. The models are built from the results of Principal Component Analysis (PCA) of sample data sets. In addition, we employed a generative learning method that transforms the ROIs of the sample data. We evaluated our method on 24 data sets of body trunk CT images and obtained an average sensitivity of 95.8 ± 7.3% across 28 landmarks.

  3. Ranked set sampling: cost and optimal set size.

    PubMed

    Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying

    2002-12-01

    McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
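    Under perfect ranking, the RSS estimator of the mean can be sketched as follows. The population parameters and set size are illustrative, and a real application would rank by judgment or an inexpensive covariate rather than by the true values used here:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def rss_sample(population, k, rng):
        """One ranked set sample of size k under perfect ranking: draw k
        independent sets of k units, rank each set, and measure only the
        i-th ranked unit from the i-th set."""
        measured = []
        for i in range(k):
            judgment_set = rng.choice(population, size=k, replace=False)
            measured.append(np.sort(judgment_set)[i])
        return np.array(measured)

    # Illustrative population: mean 50, SD 10 (made-up numbers).
    population = rng.normal(50.0, 10.0, size=10_000)
    k, reps = 5, 2000

    rss_means = [rss_sample(population, k, rng).mean() for _ in range(reps)]
    srs_means = [rng.choice(population, size=k, replace=False).mean()
                 for _ in range(reps)]

    srs_var = float(np.var(srs_means))
    rss_var = float(np.var(rss_means))
    print(f"Var(SRS mean) = {srs_var:.2f}, Var(RSS mean) = {rss_var:.2f}")
    ```

    The variance reduction relative to SRS of the same measured size is what the article's cost models trade off against the extra effort of sampling and ranking k² units per sample of k measurements.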

  4. Learning to Reason from Samples

    ERIC Educational Resources Information Center

    Ben-Zvi, Dani; Bakker, Arthur; Makar, Katie

    2015-01-01

    The goal of this article is to introduce the topic of "learning to reason from samples," which is the focus of this special issue of "Educational Studies in Mathematics" on "statistical reasoning." Samples are data sets, taken from some wider universe (e.g., a population or a process) using a particular procedure…

  5. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--METALS IN AIR ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Air data set contains analytical results for measurements of up to 11 metals in 344 air samples over 86 households. Samples were taken by pumping standardized air volumes through filters at indoor and outdoor sites around each household being sampled. The primary ...

  6. NHEXAS PHASE I ARIZONA STUDY--METALS IN DUST ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Dust data set contains analytical results for measurements of up to 11 metals in 562 dust samples over 388 households. Samples were taken by collecting dust samples from the indoor floor areas in the main room and in the bedroom of the primary resident. In additio...

  7. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--PESTICIDE METABOLITES IN URINE ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticide Metabolites in Urine data set contains the analytical results for measurements of up to 8 pesticide metabolites in 86 samples over 86 households. Each sample was collected from the primary respondent within each household. The sample consists of the first morning ...

  8. NHEXAS PHASE I MARYLAND STUDY--METALS IN SOIL ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Soil data set contains analytical results for measurements of up to 4 metals in 277 soil samples over 75 households. Composite samples were obtained from up to 24 locations around the outside of the specific residence and combined into a single sample. The primary...

  9. NHEXAS PHASE I MARYLAND STUDY--METALS IN DUST ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Dust data set contains analytical results for measurements of up to 4 metals in 282 dust samples over 80 households. Samples were obtained by collecting dust samples from the indoor floor areas in the main activity room using a modified vacuum cleaner device that c...

  10. NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR METALS IN BLANKS

    EPA Science Inventory

    The Metals in Blanks data set contains the analytical results of measurements of up to 11 metals in 115 blank samples from 58 households. Measurements were made in blank samples of indoor and outdoor air, drinking water, beverages, urine, and blood. Blank samples were used to a...

  11. NHEXAS PHASE I MARYLAND STUDY--PESTICIDES IN DUST ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides in Dust data set contains analytical results for measurements of up to 9 pesticides in 126 dust samples over 50 households. Samples were obtained by collecting dust samples from the indoor floor areas in the main activity room using a modified vacuum cleaner devic...

  12. NHEXAS PHASE I MARYLAND STUDY--PESTICIDES IN SOIL ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides in Soil data set contains analytical results for measurements of up to 9 pesticides in 60 soil samples over 41 households. Composite samples were obtained from up to 24 locations around the outside of the specific residence and combined into a single sample. Only...

  13. NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR PESTICIDES IN BLANKS

    EPA Science Inventory

    The Pesticides in Blanks data set contains the analytical results of measurements of up to 20 pesticides in 70 blank samples from 46 households. Measurements were made in blank samples of indoor air, dust, soil, drinking water, food, beverages, and blood serum. Blank samples we...

  14. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--QA ANALYTICAL RESULTS FOR PARTICULATE MATTER IN BLANK SAMPLES

    EPA Science Inventory

    The Particulate Matter in Blank Samples data set contains the analytical results for measurements of two particle sizes in 12 samples. Filters were pre-weighed, loaded into impactors, kept unexposed in the laboratory, unloaded and post-weighed. Positive weight gains for laborat...

  15. A new framework to enhance the interpretation of external validation studies of clinical prediction models.

    PubMed

    Debray, Thomas P A; Vergouwe, Yvonne; Koffijberg, Hendrik; Nieboer, Daan; Steyerberg, Ewout W; Moons, Karel G M

    2015-03-01

    It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies with independent data from "different but related" samples, as compared with the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models. We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their corresponding case-mix differences. We subsequently assess the models' performance in the validation sample and interpret the performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting. We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, two other samples rather assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. The proposed framework enhances the interpretation of findings at external validation of prediction models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  16. A place meaning scale for tropical marine settings.

    PubMed

    Wynveen, Christopher J; Kyle, Gerard T

    2015-01-01

    Over the past 20 years, most of the worldwide hectares set aside for environmental protection have been added to marine protected areas. Moreover, these areas are under tremendous pressure from negative anthropogenic impacts. Given this growth and pressure, there is a need to increase the understanding of the connection between people and marine environments in order to better manage the resource. One construct that researchers have used to understand human-environment connections is place meanings. Place meanings reflect the value and significance of a setting to individuals. Most investigations of place meanings have been confined to terrestrial settings. Moreover, most studies have had small sample sizes or have used place attachment scales as a proxy to gauge the meanings individuals ascribe to a setting. Hence, it has become necessary to develop a place meaning scale for use with large samples and for use by those who are concerned about the management of marine environments. Therefore, the purpose of this investigation was to develop a scale to measure the importance people associate with the meanings they ascribe to tropical marine settings and empirically test the scale using two independent samples; that is, Great Barrier Reef Marine Park and the Florida Keys National Marine Sanctuary stakeholders.

  17. Use of Lot Quality Assurance Sampling to Ascertain Levels of Drug Resistant Tuberculosis in Western Kenya

    PubMed Central

    Cohen, Ted; Zignol, Matteo; Nyakan, Edwin; Hedt-Gauthier, Bethany L.; Gardner, Adrian; Kamle, Lydia; Injera, Wilfred; Carter, E. Jane

    2016-01-01

    Objective: To classify the prevalence of multi-drug resistant tuberculosis (MDR-TB) in two different geographic settings in western Kenya using the Lot Quality Assurance Sampling (LQAS) methodology. Design: The prevalence of drug resistance was classified among treatment-naïve smear-positive TB patients in two settings, one rural and one urban. These regions were classified as having a high or low prevalence of MDR-TB according to a static, two-way LQAS sampling plan selected to classify high-resistance regions at greater than 5% resistance and low-resistance regions at less than 1% resistance. Results: This study classified both the urban and rural settings as having low levels of TB drug resistance. Of the 105 patients screened in each setting, two patients were diagnosed with MDR-TB in the urban setting and one patient was diagnosed with MDR-TB in the rural setting. An additional 27 patients were diagnosed with a variety of mono- and poly-resistant strains. Conclusion: Further drug resistance surveillance using LQAS may help identify the levels and geographical distribution of drug resistance in Kenya and may have applications in other countries in the African Region facing similar resource constraints. PMID:27167381

  18. Use of Lot Quality Assurance Sampling to Ascertain Levels of Drug Resistant Tuberculosis in Western Kenya.

    PubMed

    Jezmir, Julia; Cohen, Ted; Zignol, Matteo; Nyakan, Edwin; Hedt-Gauthier, Bethany L; Gardner, Adrian; Kamle, Lydia; Injera, Wilfred; Carter, E Jane

    2016-01-01

    To classify the prevalence of multi-drug resistant tuberculosis (MDR-TB) in two different geographic settings in western Kenya using the Lot Quality Assurance Sampling (LQAS) methodology. The prevalence of drug resistance was classified among treatment-naïve smear positive TB patients in two settings, one rural and one urban. These regions were classified as having high or low prevalence of MDR-TB according to a static, two-way LQAS sampling plan selected to classify high resistance regions at greater than 5% resistance and low resistance regions at less than 1% resistance. This study classified both the urban and rural settings as having low levels of TB drug resistance. Out of the 105 patients screened in each setting, two patients were diagnosed with MDR-TB in the urban setting and one patient was diagnosed with MDR-TB in the rural setting. An additional 27 patients were diagnosed with a variety of mono- and poly- resistant strains. Further drug resistance surveillance using LQAS may help identify the levels and geographical distribution of drug resistance in Kenya and may have applications in other countries in the African Region facing similar resource constraints.
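    A two-way LQAS plan of this kind reduces to a simple binomial decision rule: with n sampled patients and a decision value d, a region is classified as low-prevalence if and only if at most d resistant cases are observed. The sketch below computes the operating characteristics; the 105/5%/1% figures match the abstract, while the candidate d values are illustrative:

    ```python
    from math import comb

    def binom_cdf(d, n, p):
        """P(X <= d) for X ~ Binomial(n, p), i.e. the probability of
        observing at most d resistant cases among n sampled patients."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d + 1))

    # Plan parameters from the abstract: n = 105 smear-positive patients,
    # upper threshold 5% (high resistance), lower threshold 1% (low).
    n, p_high, p_low = 105, 0.05, 0.01

    # For each candidate decision value d, report the two error risks.
    for d in range(0, 5):
        alpha = 1 - binom_cdf(d, n, p_low)  # P(classify high | truly 1%)
        beta = binom_cdf(d, n, p_high)      # P(classify low  | truly 5%)
        print(f"d={d}: misclassify-low-region risk={alpha:.3f}, "
              f"misclassify-high-region risk={beta:.3f}")
    ```

    With 2 and 1 observed MDR-TB cases out of 105, both settings fell under the plan's low-resistance classification, as the study reports.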

  19. A Place Meaning Scale for Tropical Marine Settings

    NASA Astrophysics Data System (ADS)

    Wynveen, Christopher J.; Kyle, Gerard T.

    2015-01-01

    Over the past 20 years, most of the worldwide hectares set aside for environmental protection have been added to marine protected areas. Moreover, these areas are under tremendous pressure from negative anthropogenic impacts. Given this growth and pressure, there is a need to increase the understanding of the connection between people and marine environments in order to better manage the resource. One construct that researchers have used to understand human-environment connections is place meanings. Place meanings reflect the value and significance of a setting to individuals. Most investigations of place meanings have been confined to terrestrial settings. Moreover, most studies have had small sample sizes or have used place attachment scales as a proxy to gauge the meanings individuals ascribe to a setting. Hence, it has become necessary to develop a place meaning scale for use with large samples and for use by those who are concerned about the management of marine environments. Therefore, the purpose of this investigation was to develop a scale to measure the importance people associate with the meanings they ascribe to tropical marine settings and to test the scale empirically using two independent samples: stakeholders of the Great Barrier Reef Marine Park and of the Florida Keys National Marine Sanctuary.

  20. Predicting Reading and Mathematics from Neural Activity for Feedback Learning

    ERIC Educational Resources Information Center

    Peters, Sabine; Van der Meulen, Mara; Zanolie, Kiki; Crone, Eveline A.

    2017-01-01

    Although many studies use feedback learning paradigms to study the process of learning in laboratory settings, little is known about their relevance for real-world learning settings such as school. In a large developmental sample (N = 228, 8-25 years), we investigated whether performance and neural activity during a feedback learning task…

  1. Predicting ambient aerosol thermal-optical reflectance (TOR) measurements from infrared spectra: organic carbon

    NASA Astrophysics Data System (ADS)

    Dillner, A. M.; Takahama, S.

    2015-03-01

    Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, organic carbon is measured from a quartz fiber filter that has been exposed to a volume of ambient air and analyzed using thermal methods such as thermal-optical reflectance (TOR). Here, methods are presented that show the feasibility of using Fourier transform infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters to accurately predict TOR OC. This work marks an initial step in proposing a method that can reduce the operating costs of large air quality monitoring networks with an inexpensive, non-destructive analysis technique using routinely collected PTFE filter samples which, in addition to OC concentrations, can concurrently provide information regarding the composition of organic aerosol. This feasibility study suggests that the minimum detection limit and errors (or uncertainty) of FT-IR predictions are on par with TOR OC such that evaluation of long-term trends and epidemiological studies would not be significantly impacted. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites collected during 2011. Partial least-squares regression is used to calibrate sample FT-IR absorbance spectra to TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date. The calibration produces precise and accurate TOR OC predictions of the test set samples by FT-IR, as indicated by a high coefficient of determination (R2 = 0.96), low bias (0.02 μg m-3; the nominal IMPROVE sample volume is 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision to collocated TOR measurements.
    FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC ratio, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; these divisions also lead to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact-correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least-squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
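
The performance metrics reported above (bias, error, normalized error, R2) can be computed from predicted and reference concentrations in a few lines. The numbers below are made-up illustrations, not values from the IMPROVE data set.

```python
# Illustrative predicted vs. reference TOR OC concentrations (ug m-3);
# these values are invented for demonstration only.
ref = [0.5, 1.2, 2.0, 3.1, 4.4]
pred = [0.55, 1.15, 2.10, 3.00, 4.50]

n = len(ref)
bias = sum(p - r for p, r in zip(pred, ref)) / n            # mean signed error
rmse = (sum((p - r) ** 2 for p, r in zip(pred, ref)) / n) ** 0.5
mean_ref = sum(ref) / n
r2 = 1 - sum((p - r) ** 2 for p, r in zip(pred, ref)) / \
        sum((r - mean_ref) ** 2 for r in ref)               # coefficient of determination
norm_error = rmse / mean_ref                                # error relative to mean OC
```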

  2. NHEXAS PHASE I ARIZONA STUDY--PESTICIDES IN DERMAL WIPES ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides in Dermal Wipes data set contains analytical results for measurements of up to 3 pesticides in 177 dermal wipe samples over 177 households. Each sample was collected from the primary respondent within each household during Stage III of the NHEXAS study. The Derma...

  3. NHEXAS PHASE I MARYLAND STUDY--METALS IN URINE ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Urine data set contains analytical results for measurements of up to 3 metals in 376 urine samples over 80 households. Each sample was collected from the primary respondent within each household during the study and represented the first morning void of either Day ...

  4. Software engineering the mixed model for genome-wide association studies on large samples

    USDA-ARS?s Scientific Manuscript database

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample siz...

  5. CTEPP NC DATA ANALYTICAL RESULTS ORGANIZED BY CHEMICAL AND MEDIA

    EPA Science Inventory

    This data set contains the field sample data by chemical and matrix. The data are organized at the sample, chemical level.

    The Children’s Total Exposure to Persistent Pesticides and Other Persistent Pollutant (CTEPP) study was one of the largest aggregate exposure studies of y...

  6. Analysis of Genetic Algorithm for Rule-Set Production (GARP) modeling approach for predicting distributions of fleas implicated as vectors of plague, Yersinia pestis, in California.

    PubMed

    Adjemian, Jennifer C Z; Girvetz, Evan H; Beckett, Laurel; Foley, Janet E

    2006-01-01

    More than 20 species of fleas in California are implicated as potential vectors of Yersinia pestis. Extremely limited spatial data exist for plague vectors, a key component in understanding where the greatest risks for human, domestic animal, and wildlife health exist. This study increases the spatial data available for 13 potential plague vectors by using the ecological niche modeling system Genetic Algorithm for Rule-Set Production (GARP) to predict their respective distributions. Because the available sample sizes in our data set varied greatly from one species to another, we also performed an analysis of the robustness of GARP by using the data available for the flea Oropsylla montana (Baker) to quantify the effects that sample size and the chosen explanatory variables have on the final species distribution map. GARP effectively modeled the distributions of 13 vector species. Furthermore, our analyses show that all of these modeled ranges are robust: a sample size of six fleas or greater did not significantly impact the percentage of the in-state area where the flea was predicted to be found, nor the testing accuracy of the model. The results of this study will help guide the sampling efforts of future studies focusing on plague vectors.

  7. The program structure does not reliably recover the correct population structure when sampling is uneven: subsampling and new estimators alleviate the problem.

    PubMed

    Puechmaille, Sebastien J

    2016-05-01

    Inferences of population structure and more precisely the identification of genetically homogeneous groups of individuals are essential to the fields of ecology, evolutionary biology and conservation biology. Such population structure inferences are routinely investigated via the program structure implementing a Bayesian algorithm to identify groups of individuals at Hardy-Weinberg and linkage equilibrium. While the method performs relatively well under various population models with even sampling between subpopulations, the robustness of the method to uneven sample size between subpopulations and/or hierarchical levels of population structure has not yet been tested despite being commonly encountered in empirical data sets. In this study, I used simulated and empirical microsatellite data sets to investigate the impact of uneven sample size between subpopulations and/or hierarchical levels of population structure on the detected population structure. The results demonstrated that uneven sampling often leads to wrong inferences on hierarchical structure and downward-biased estimates of the true number of subpopulations. Distinct subpopulations with reduced sampling tended to be merged together, while at the same time, individuals from extensively sampled subpopulations were generally split, despite belonging to the same panmictic population. Four new supervised methods to detect the number of clusters were developed and tested as part of this study and were found to outperform the existing methods using both evenly and unevenly sampled data sets. Additionally, a subsampling strategy aiming to reduce sampling unevenness between subpopulations is presented and tested. These results altogether demonstrate that when sampling evenness is accounted for, the detection of the correct population structure is greatly improved. © 2016 John Wiley & Sons Ltd.
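
The kind of subsampling strategy described, equalizing sample sizes across putative subpopulations before clustering, can be sketched as follows. This is an illustration of the idea, not the author's actual procedure; the data layout (a dict of genotype lists keyed by subpopulation) is assumed.

```python
import random

def subsample_even(genotypes_by_pop, seed=0):
    """Downsample every putative subpopulation to the size of the smallest
    one, reducing sampling unevenness before running a clustering program
    such as structure. (Sketch of the strategy, not the author's code.)"""
    rng = random.Random(seed)
    n_min = min(len(g) for g in genotypes_by_pop.values())
    return {pop: rng.sample(g, n_min) for pop, g in genotypes_by_pop.items()}
```

For example, subpopulations of 10, 3, and 7 individuals would each be reduced to 3, so that no group dominates the likelihood during cluster inference.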

  8. Analysis of training sample selection strategies for regression-based quantitative landslide susceptibility mapping methods

    NASA Astrophysics Data System (ADS)

    Erener, Arzu; Sivas, A. Abdullah; Selcuk-Kestel, A. Sevtap; Düzgün, H. Sebnem

    2017-07-01

    All quantitative landslide susceptibility mapping (QLSM) methods require two basic data types, namely, a landslide inventory and factors that influence landslide occurrence (landslide influencing factors, LIF). The accuracy of QLSM methods differs depending on the type of landslide, the nature of the triggers and the LIF. Moreover, how to balance the number of 0s (non-occurrence) and 1s (occurrence) in the training set obtained from the landslide inventory, and how to select which of the 1s and 0s to include in QLSM models, play a critical role in the accuracy of the QLSM. Although the performance of various QLSM methods has been investigated extensively in the literature, the challenge of training set construction has not been adequately investigated for the QLSM methods. In order to tackle this challenge, in this study three different training set selection strategies, along with the original data set, are used to test the performance of three different regression methods, namely Logistic Regression (LR), Bayesian Logistic Regression (BLR) and Fuzzy Logistic Regression (FLR). The first sampling strategy is proportional random sampling (PRS), which takes into account a weighted selection of landslide occurrences in the sample set. The second method, non-selective nearby sampling (NNS), includes randomly selected sites and their surrounding neighboring points at certain preselected distances to include the impact of clustering. Selective nearby sampling (SNS) is the third method, which concentrates on the group of 1s and their surrounding neighborhood. A randomly selected group of landslide sites and their neighborhood are considered in the analyses, similar to the NNS parameters. It is found that the LR-PRS, FLR-PRS and BLR-whole-data set-ups, in that order, yield the best fits among the alternatives. The results indicate that in QLSM based on regression models, avoidance of spatial correlation in the data set is critical for the model's performance.
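
A proportional random sampling (PRS) draw of the kind described can be sketched as below: the 1/0 balance of the training set is made to match the full inventory. The cell record layout and field names here are hypothetical, not taken from the study.

```python
import random

def proportional_random_sample(cells, n, seed=0):
    """Sketch of proportional random sampling (PRS): draw a training set of
    size n whose balance of 1s (occurrence) and 0s (non-occurrence) matches
    the full landslide inventory. Field names are hypothetical."""
    rng = random.Random(seed)
    ones = [c for c in cells if c["landslide"] == 1]
    zeros = [c for c in cells if c["landslide"] == 0]
    n_ones = round(n * len(ones) / len(cells))
    return rng.sample(ones, n_ones) + rng.sample(zeros, n - n_ones)
```

NNS and SNS would differ only in how the candidate pool is built (adding spatial neighbors of the selected sites), not in this weighted draw itself.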

  9. Breath-based biomarkers for tuberculosis

    NASA Astrophysics Data System (ADS)

    Kolk, Arend H. J.; van Berkel, Joep J. B. N.; Claassens, Mareli M.; Walters, Elisabeth; Kuijper, Sjoukje; Dallinga, Jan W.; van Schooten, Fredrik-Jan

    2012-06-01

    We investigated the potential of breath analysis by gas chromatography - mass spectrometry (GC-MS) to discriminate between samples collected prospectively from patients with suspected tuberculosis (TB). Samples were obtained in a TB endemic setting in South Africa where 28% of the culture proven TB patients had a Ziehl-Neelsen (ZN) negative sputum smear. A training set of breath samples from 50 sputum culture proven TB patients and 50 culture negative non-TB patients was analyzed by GC-MS. A classification model with 7 compounds resulted in a training set with a sensitivity of 72%, specificity of 86% and accuracy of 79% compared with culture. The classification model was validated with an independent set of breath samples from 21 TB and 50 non-TB patients. A sensitivity of 62%, specificity of 84% and accuracy of 77% was found. We conclude that the 7 volatile organic compounds (VOCs) that discriminate breath samples from TB and non-TB patients in our study population are probably host-response related VOCs and are not derived from the VOCs secreted by M. tuberculosis. It is concluded that at present GC-MS breath analysis is able to differentiate between TB and non-TB breath samples even among patients with a negative ZN sputum smear but a positive culture for M. tuberculosis. Further research is required to improve the sensitivity and specificity before this method can be used in routine laboratories.

  10. Determination of polarimetric parameters of honey by near-infrared transflectance spectroscopy.

    PubMed

    García-Alvarez, M; Ceresuela, S; Huidobro, J F; Hermida, M; Rodríguez-Otero, J L

    2002-01-30

    NIR transflectance spectroscopy was used to determine polarimetric parameters (direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides) and sucrose in honey. In total, 156 honey samples were collected during 1992 (45 samples), 1995 (56 samples), and 1996 (55 samples). Samples were analyzed by NIR spectroscopy and polarimetric methods. Calibration (118 samples) and validation (38 samples) sets were made up; honeys from the three years were included in both sets. Calibrations were performed by modified partial least-squares regression and scatter correction by standard normal variate (SNV) and detrend methods. For direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides, good statistics (bias, SEV, and R(2)) were obtained for the validation set, and no statistically (p = 0.05) significant differences were found between instrumental and polarimetric methods for these parameters. Statistical data for sucrose were not as good as those of the other parameters. Therefore, NIR spectroscopy is not an effective method for quantitative analysis of sucrose in these honey samples. However, NIR spectroscopy may be an acceptable method for semiquantitative evaluation of sucrose for honeys, such as those in our study, containing up to 3% of sucrose. Further work is necessary to validate the uncertainty at higher levels.
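
The standard normal variate (SNV) scatter correction mentioned above is a simple per-spectrum transform; a minimal sketch:

```python
def snv(spectrum):
    """Standard normal variate (SNV) scatter correction: each spectrum is
    centered on its own mean and scaled by its own standard deviation,
    reducing multiplicative scatter effects before calibration."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    sd = (sum((x - mean) ** 2 for x in spectrum) / (n - 1)) ** 0.5
    return [(x - mean) / sd for x in spectrum]
```

After SNV, every spectrum has zero mean and unit sample standard deviation, so baseline offsets and path-length differences between samples no longer dominate the regression.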

  11. Comparison of mine waste assessment methods at the Rattler mine site, Virginia Canyon, Colorado

    USGS Publications Warehouse

    Hageman, Phil L.; Smith, Kathleen S.; Wildeman, Thomas R.; Ranville, James F.

    2005-01-01

    In a joint project, the mine waste piles at the Rattler Mine near Idaho Springs, Colorado, were sampled and analyzed by scientists from the U.S. Geological Survey (USGS) and the Colorado School of Mines (CSM). Separate sample collection, sample leaching, and leachate analyses were performed by both groups and the results were compared. For the study, both groups used the USGS sampling procedure and the USGS Field Leach Test (FLT). The leachates generated from these tests were analyzed for a suite of elements using ICP-AES (CSM) and ICP-MS (USGS). Leachate geochemical fingerprints produced by the two groups for composites collected from the same mine waste showed good agreement. In a second round of tests, CSM collected an additional set of Rattler mine waste composite samples using the USGS sampling procedure. This set of composite samples was leached using the Colorado Division of Minerals and Geology (CDMG) leach test, and a modified Toxicity Characteristic Leaching Procedure (TCLP) leach test. Leachate geochemical fingerprints produced using these tests varied by more than a factor of two from the geochemical fingerprints produced using the USGS FLT leach test. We have concluded that the variation in the results is due to the different parameters of the leaching tests and not due to the sampling or analytical methods.

  12. Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space

    PubMed Central

    Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred

    2016-01-01

    Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low-quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets along with the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method.
For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
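
One simple way to realize uniform coverage of a genetic space is greedy farthest-point selection: each new genotype maximizes its minimum distance to those already chosen. This is an illustrative stand-in for the sampling idea described, not the authors' exact algorithm; genotypes are assumed to be represented as coordinate tuples (e.g. marker-score principal components).

```python
import math

def uniform_training_set(genotypes, n_train, start=0):
    """Greedy farthest-point selection sketch: repeatedly add the genotype
    whose minimum distance to the already-selected set is largest, spreading
    the training set evenly over the space the genotypes span."""
    chosen = [start]
    while len(chosen) < n_train:
        best = max((i for i in range(len(genotypes)) if i not in chosen),
                   key=lambda i: min(math.dist(genotypes[i], genotypes[j])
                                     for j in chosen))
        chosen.append(best)
    return chosen
```

With clustered data, this rule skips near-duplicates of already-selected genotypes, which is exactly the failure mode of random sampling under population structure.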

  13. Molecular identification of Cryptosporidium spp. in seagulls, pigeons, dogs, and cats in Thailand.

    PubMed

    Koompapong, Khuanchai; Mori, Hirotake; Thammasonthijarern, Nipa; Prasertbun, Rapeepun; Pintong, Ai-rada; Popruk, Supaluk; Rojekittikhun, Wichit; Chaisiri, Kittipong; Sukthana, Yaowalark; Mahittikorn, Aongart

    2014-01-01

    Zoonotic Cryptosporidium spp., particularly C. meleagridis, C. canis, and C. felis, are enteric protozoa responsible for major public health concerns around the world. To determine the spread of this parasite in Thailand, we conducted molecular identification of Cryptosporidium spp. from animal samples around the country, by collecting and investigating the feces of seagulls (Chroicocephalus brunnicephalus and Chroicocephalus ridibundus), domestic pigeons (Columba livia domestica), dogs, and cats. Seagull and pigeon samples were collected at the seaside and on the riverside to evaluate their potential for waterborne transmission. Ten pigeon samples were combined into one set, and a total of seven sets were collected. Seventy seagull samples were combined into one set, and a total of 13 sets were collected. In addition, 111 dog samples were collected from cattle farms, and 95 dog and 80 cat samples were collected from a temple. We identified C. meleagridis in pigeons, Cryptosporidium avian genotype III in seagulls, C. canis in dogs, and C. felis in cats. In the temple, the prevalence was 2.1% (2/95) for dogs and 2.5% (2/80) for cats. No Cryptosporidium was found in dog samples from cattle farms. These are the first findings of C. meleagridis in domestic pigeons, and Cryptosporidium avian genotype III in seagulls. Our study invites further molecular epidemiological investigations of Cryptosporidium in these animals and their environment to evaluate the public health risk in Thailand. K. Koompapong et al., published by EDP Sciences, 2014

  14. The EIPeptiDi tool: enhancing peptide discovery in ICAT-based LC MS/MS experiments.

    PubMed

    Cannataro, Mario; Cuda, Giovanni; Gaspari, Marco; Greco, Sergio; Tradigo, Giuseppe; Veltri, Pierangelo

    2007-07-15

    Isotope-coded affinity tags (ICAT) is a method for quantitative proteomics based on differential isotopic labeling, sample digestion and mass spectrometry (MS). The method allows the identification and relative quantification of proteins present in two samples and consists of the following phases. First, cysteine residues are either labeled using the ICAT Light or ICAT Heavy reagent (having identical chemical properties but different masses). Then, after whole sample digestion, the labeled peptides are captured selectively using the biotin tag contained in both ICAT reagents. Finally, the simplified peptide mixture is analyzed by nanoscale liquid chromatography-tandem mass spectrometry (LC-MS/MS). Nevertheless, the ICAT LC-MS/MS method still suffers from insufficient sample-to-sample reproducibility in peptide identification. In particular, the number and the type of peptides identified in different experiments can vary considerably and, thus, the statistical (comparative) analysis of sample sets is very challenging. Low information overlap at the peptide and, consequently, at the protein level, is very detrimental in situations where the number of samples to be analyzed is high. We designed a method for improving the data processing and peptide identification in sample sets subjected to ICAT labeling and LC-MS/MS analysis, based on cross-validating MS/MS results. Such a method has been implemented in a tool, called EIPeptiDi, which boosts the ICAT data analysis software, improving peptide identification throughout the input data set. Heavy/Light (H/L) pairs quantified but not identified by the MS/MS routine are assigned to peptide sequences identified in other samples, by using similarity criteria based on chromatographic retention time and Heavy/Light mass attributes.
EIPeptiDi significantly improves the number of identified peptides per sample, proving that the proposed method has a considerable impact on the protein identification process and, consequently, on the amount of potentially critical information in clinical studies. The EIPeptiDi tool is available at http://bioingegneria.unicz.it/~veltri/projects/eipeptidi/ with a demo data set. EIPeptiDi significantly increases the number of peptides identified and quantified in analyzed samples, thus reducing the number of unassigned H/L pairs and allowing a better comparative analysis of sample data sets.
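
The cross-sample assignment idea can be sketched as a tolerance match on retention time and mass. The record layout and tolerance values below are hypothetical illustrations of the criteria described, not EIPeptiDi's actual implementation.

```python
def assign_pairs(unassigned, identified, rt_tol=0.5, mass_tol=0.02):
    """Sketch of cross-sample assignment: a heavy/light pair that was
    quantified but not identified inherits the sequence of a peptide
    identified in another sample when retention time and mass agree within
    the given tolerances (tolerances and dict keys are hypothetical)."""
    assigned = []
    for pair in unassigned:
        for pep in identified:
            if (abs(pair["rt"] - pep["rt"]) <= rt_tol
                    and abs(pair["mass"] - pep["mass"]) <= mass_tol):
                assigned.append({**pair, "sequence": pep["sequence"]})
                break
    return assigned
```

In practice such matching would also need to resolve ambiguous candidates (several identified peptides within tolerance), which this sketch sidesteps by taking the first match.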

  15. Multiple Category-Lot Quality Assurance Sampling: A New Classification System with Application to Schistosomiasis Control

    PubMed Central

    Olives, Casey; Valadez, Joseph J.; Brooker, Simon J.; Pagano, Marcello

    2012-01-01

    Background Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date the statistical underpinnings for Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. Methodology We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n = 15 and n = 25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Principal Findings Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n = 15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by as many as 0.5 and 3.5 observations per school, respectively, without increasing classification error. Conclusion/Significance This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools. PMID:22970333
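
The weighted kappa-statistic used to score agreement between MC-LQAS classifications and the true prevalence categories can be computed as below. This sketch uses linear weights over the three ordinal categories; the study's exact weighting scheme may differ.

```python
def weighted_kappa(obs_a, obs_b, n_cat):
    """Linearly weighted kappa for two vectors of ordinal class labels
    0..n_cat-1: chance-corrected agreement where near-misses between
    adjacent categories are penalized less than distant misclassifications."""
    n = len(obs_a)
    # agreement weights: 1 on the diagonal, decreasing with category distance
    w = [[1 - abs(i - j) / (n_cat - 1) for j in range(n_cat)]
         for i in range(n_cat)]
    joint = [[0.0] * n_cat for _ in range(n_cat)]
    for a, b in zip(obs_a, obs_b):
        joint[a][b] += 1 / n
    pa = [sum(row) for row in joint]                                   # marginal of obs_a
    pb = [sum(joint[i][j] for i in range(n_cat)) for j in range(n_cat)]  # marginal of obs_b
    po = sum(w[i][j] * joint[i][j] for i in range(n_cat) for j in range(n_cat))
    pe = sum(w[i][j] * pa[i] * pb[j] for i in range(n_cat) for j in range(n_cat))
    return (po - pe) / (1 - pe)
```

Perfect agreement gives kappa = 1, chance-level agreement gives kappa near 0, so the reported 0.87 indicates that the n = 15 designs rarely misclassified schools by more than one prevalence category.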

  16. FT-midIR determination of fatty acid profiles, including trans fatty acids, in bakery products after focused microwave-assisted Soxhlet extraction.

    PubMed

    Ruiz-Jiménez, J; Priego-Capote, F; Luque de Castro, M D

    2006-08-01

    A study of the feasibility of Fourier transform medium infrared spectroscopy (FT-midIR) for the analytical determination of fatty acid profiles, including trans fatty acids, is presented. The training and validation sets used to develop the FT-midIR general equations, comprising 75% (102 samples) and 25% (36 samples) of the samples remaining after removal of spectral outliers, were built from 140 commercial and home-made bakery products. The concentration of the analytes in the samples used for this study is within the typical range found in these kinds of products. Both sets were independent; thus, the validation set was used only for testing the equations. The criterion used for the selection of the validation set was samples with the highest number of neighbours and the greatest separation between them (H < 0.6). Partial least-squares regression and cross-validation were used for multivariate calibration. The FT-midIR method does not require post-extraction manipulation and gives information about the fatty acid profile in two minutes. The 14:0, 16:0, 18:0, 18:1 and 18:2 fatty acids can be determined with excellent precision, and the other fatty acids with good precision, according to the Shenk criteria (R2 ≥ 0.90, SEP = 1-1.5 SEL and R2 = 0.70-0.89, SEP = 2-3 SEL, respectively). The results obtained with the proposed method were compared with those provided by the conventional method based on GC-MS. At the 95% significance level, the differences between the values obtained for the different fatty acids were within the experimental error.

  17. Accelerated Optical Projection Tomography Applied to In Vivo Imaging of Zebrafish

    PubMed Central

    Correia, Teresa; Yin, Jun; Ramel, Marie-Christine; Andrews, Natalie; Katan, Matilda; Bugeon, Laurence; Dallman, Margaret J.; McGinty, James; Frankel, Paul; French, Paul M. W.; Arridge, Simon

    2015-01-01

    Optical projection tomography (OPT) provides a non-invasive 3-D imaging modality that can be applied to longitudinal studies of live disease models, including in zebrafish. Current limitations include the requirement of a minimum number of angular projections for reconstruction of reasonable OPT images using filtered back projection (FBP), which is typically several hundred, leading to acquisition times of several minutes. It is highly desirable to decrease the number of required angular projections to decrease both the total acquisition time and the light dose to the sample. This is particularly important to enable longitudinal studies, which involve measurements of the same fish at different time points. In this work, we demonstrate that the use of an iterative algorithm to reconstruct sparsely sampled OPT data sets can provide useful 3-D images with 50 or fewer projections, thereby significantly decreasing the minimum acquisition time and light dose while maintaining image quality. A transgenic zebrafish embryo with fluorescent labelling of the vasculature was imaged to acquire densely sampled (800 projections) and under-sampled data sets of transmitted and fluorescence projection images. The under-sampled OPT data sets were reconstructed using an iterative total variation-based image reconstruction algorithm and compared against FBP reconstructions of the densely sampled data sets. To illustrate the potential for quantitative analysis following rapid OPT data acquisition, a Hessian-based method was applied to automatically segment the reconstructed images to select the vasculature network. Results showed that 3-D images of the zebrafish embryo and its vasculature of sufficient visual quality for quantitative analysis can be reconstructed using the iterative algorithm from only 32 projections, achieving up to a 28-fold improvement in imaging speed and leading to total acquisition times of a few seconds. PMID:26308086

  18. Improvement in the stability of serum samples stored in an automated refrigerated module.

    PubMed

    Parra-Robert, Marina; Rico-Santana, Naira; Alcaraz-Quiles, José; Sandalinas, Silvia; Fernández, Esther; Falcón, Isabel; Pérez-Riedweg, Margarita; Bedini, Josep Lluís

    2016-12-01

    In clinical laboratories it is necessary to know how long analytes are stable in samples under specific storage conditions. Our laboratory has implemented the new Aptio Automation System (AAS) (Siemens Healthcare Diagnostics), in which analyzed samples are sealed and then stored in a refrigerated storage module (RSM). The aim of the study was to evaluate the stability of serum samples with the AAS and to compare the results with a previous study using a conventional refrigerated system. Serum samples from a total of 50 patients were collected, and 27 biochemical analytes were analyzed for each of them. The samples were divided into 5 sets of 10 samples. Each set was re-analyzed at one of the following times: 24, 48, 72, 96 and 120 h. Stability was evaluated according to the Total Limit of Change (TLC) criteria, which combine both analytical and biological variation. A total of 26 out of 27 analytes were stable at the end of the study according to the TLC criteria. Lactate dehydrogenase was not stable at 48 h, with a decrease in its concentration observed until the end of the study. In the previous study (conventional storage system), 9 biochemical analytes were not stable, with an increase in their levels due to the evaporation process. The RSM connected to the AAS improves the stability of serum samples. This system avoids the evaporation process by sealing the samples and allows better control of the samples during their storage. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
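
A stability check of the kind described reduces to comparing each time point's percentage deviation from baseline against an analyte-specific limit. The 10% limit in the example is illustrative only; the study's actual TLC values are analyte-specific and combine analytical and biological variation.

```python
def is_stable(baseline, value, tlc_percent):
    """Sketch of a Total Limit of Change (TLC) check: an analyte is deemed
    stable at a time point if its percentage deviation from the baseline
    measurement stays within the analyte-specific TLC limit."""
    deviation = 100.0 * abs(value - baseline) / baseline
    return deviation <= tlc_percent

# e.g. an LDH-like fall from 250 to 210 U/L is a 16% deviation, exceeding an
# illustrative 10% limit, whereas a drift from 100 to 105 stays within it.
```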

  19. The ETS Gender Study: How Females and Males Perform in Educational Settings.

    ERIC Educational Resources Information Center

    Cole, Nancy S.

    The Educational Testing Service (ETS) Gender Study is the result of 4 years of work by several researchers using data from more than 400 tests and other measures from more than 1,500 data sets involving millions of students. The study focuses on nationally representative samples that cut across grades (ages), academic subjects, and years in order…

  20. The Accuracy of Estimated Total Test Statistics. Final Report.

    ERIC Educational Resources Information Center

    Kleinke, David J.

    In a post-mortem study of item sampling, 1,050 examinees were divided into ten groups 50 times. Each time, their papers were scored on four different sets of item samples from a 150-item test of academic aptitude. These samples were selected using (a) unstratified random sampling and stratification on (b) content, (c) difficulty, and (d) both.…

  1. The observed clustering of damaging extratropical cyclones in Europe

    NASA Astrophysics Data System (ADS)

    Cusack, Stephen

    2016-04-01

    The clustering of severe European windstorms on annual timescales has substantial impacts on the (re-)insurance industry. Our knowledge of the risk is limited by large uncertainties in estimates of clustering from typical historical storm data sets covering the past few decades. Eight storm data sets are gathered for analysis in this study in order to reduce these uncertainties. Six of the data sets contain more than 100 years of severe storm information to reduce sampling errors, and observational errors are reduced by the diversity of information sources and analysis methods between storm data sets. All storm severity measures used in this study reflect damage, to suit (re-)insurance applications. The shortest storm data set of 42 years provides indications of stronger clustering with severity, particularly for regions off the main storm track in central Europe and France. However, clustering estimates have very large sampling and observational errors, exemplified by large changes in estimates in central Europe upon removal of one stormy season, 1989/1990. The extended storm records place 1989/1990 into a much longer historical context to produce more robust estimates of clustering. All the extended storm data sets show increased clustering between more severe storms from return periods (RPs) of 0.5 years to the longest measured RPs of about 20 years. Further, they contain signs of stronger clustering off the main storm track, and weaker clustering for smaller-sized areas, though these signals are more uncertain as they are drawn from smaller data samples. These new ultra-long storm data sets provide new information on clustering to improve our management of this risk.
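    The clustering question in this abstract is often summarised by a dispersion statistic on annual event counts. The sketch below is a toy illustration with synthetic draws, not the paper's storm data: a Poisson (unclustered) process gives a variance-to-mean ratio near 1, while clustered seasons push it above 1.

```python
import numpy as np

def dispersion(counts):
    """Variance-to-mean ratio of a set of annual event counts;
    values above 1 indicate clustering relative to a Poisson process."""
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(0)
poisson_counts = rng.poisson(lam=3.0, size=500)                 # no clustering
clustered_counts = rng.negative_binomial(n=2, p=0.4, size=500)  # overdispersed
```

    With real multi-decadal records the sample is far smaller, so removing a single extreme season (such as 1989/1990) can move this estimate appreciably, which is exactly the sampling-error problem the extended storm data sets address.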

  2. Scientific data interpolation with low dimensional manifold model

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley

    2018-01-01

    We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
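    A minimal stand-in for Laplacian-based interpolation of irregularly missing samples, far simpler than the paper's low dimensional manifold model: missing pixels of a gridded field are filled by solving the discrete Laplace equation, here by repeated neighbour averaging with known pixels held fixed. The test field and missing-data mask are synthetic assumptions.

```python
import numpy as np

def harmonic_fill(data, known_mask, iters=500):
    """Fill unknown entries by iterating neighbour averages (Jacobi sweeps),
    i.e. solving the discrete Laplace equation with known pixels as
    boundary conditions."""
    filled = np.where(known_mask, data, data[known_mask].mean())
    for _ in range(iters):
        avg = (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
               np.roll(filled, 1, 1) + np.roll(filled, -1, 1)) / 4.0
        filled = np.where(known_mask, data, avg)  # keep known pixels fixed
    return filled

# smooth periodic test field with roughly 40% of samples removed at random
x, y = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
field = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
rng = np.random.default_rng(1)
known = rng.random(field.shape) > 0.4
recon = harmonic_fill(field, known)
rmse = np.sqrt(((recon - field)[~known] ** 2).mean())
```

    The manifold model replaces this isotropic smoothness prior with low patch-manifold dimensionality, but the alternating "fit a regulariser, then re-fill the missing data" structure is the same.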

  3. Perceived climate in physical activity settings.

    PubMed

    Gill, Diane L; Morrow, Ronald G; Collins, Karen E; Lucey, Allison B; Schultz, Allison M

    2010-01-01

    This study focused on the perceived climate for LGBT youth and other minority groups in physical activity settings. A large sample of undergraduates and a selected sample including student teachers/interns and a campus Pride group completed a school climate survey and rated the climate in three physical activity settings (physical education, organized sport, exercise). Overall, school climate survey results paralleled the results with national samples, revealing high levels of homophobic remarks and low levels of intervention. Physical activity climate ratings were mid-range, but a multivariate analysis of variance (MANOVA) revealed clear differences, with all settings rated more inclusive for racial/ethnic minorities and most exclusive for gays/lesbians and people with disabilities. The results are in line with national surveys and research suggesting that sexual orientation and physical characteristics are often the basis for harassment and exclusion in sport and physical activity. The current results also indicate that future physical activity professionals recognize exclusion, suggesting they could benefit from programs that move beyond awareness to skills and strategies for creating more inclusive programs.

  4. Screening of pesticide residues in soil and water samples from agricultural settings

    PubMed Central

    Akogbéto, Martin C; Djouaka, Rousseau F; Kindé-Gazard, Dorothée A

    2006-01-01

    Background The role of agricultural practices in the selection of insecticide resistance in malaria vectors has so far been hypothesized without clear evidence. Many mosquito species, Anopheles gambiae in particular, lay their eggs in breeding sites located around agricultural settings. There is a probability that, as a result of farming activities, insecticide residues may be found in soil and water, where they exercise a selection pressure on the larval stage of various populations of mosquitoes. To confirm this hypothesis, a study was conducted in the Republic of Benin to assess the environmental hazards which can be generated from massive use of pesticides in agricultural settings. Methods Lacking an HPLC machine for direct quantification of insecticide residues in samples, this investigation was performed using indirect bioassays focussed on the study of factors inhibiting the normal growth of mosquito larvae in breeding sites. The speed of development was monitored as well as the yield of rearing An. gambiae larvae in breeding sites reconstituted with water and soil samples collected in agricultural areas known to be under pesticide pressure. Two strains of An. gambiae were used in this indirect bioassay: the pyrethroid-susceptible Kisumu strain and the resistant Ladji strain. The key approach in this methodology is based on comparison of the growth of larvae in test and in control breeding sites, the test samples having been collected from two vegetable farms. Results Results obtained clearly show the presence of inhibiting factors on test samples. A normal growth of larvae was observed in control samples. In breeding sites simulated by using a few grams of soil samples from the two vegetable farms under constant insecticide treatments (test samples), a poor hatching rate of Anopheles eggs coupled with a retarded growth of larvae and a low yield of adult mosquitoes from hatched eggs, was noticed. 
Conclusion Toxic factors inhibiting the hatching of anopheles eggs and the growth of larvae are probably pesticide residues from agricultural practices. Samples used during this indirect assay have been stored in the laboratory and will be analysed with HPLC techniques to confirm hypothesis of this study and to identify the various end products found in soil and water samples from agricultural settings under pesticide pressure. PMID:16563153

  5. Teaching Manual Signs to Adults With Mental Retardation Using Matching-to-Sample Procedures and Stimulus Equivalence

    PubMed Central

    Elias, Nassim Chamel; Goyos, Celso; Saunders, Muriel; Saunders, Richard

    2008-01-01

    The objective of this study was to teach manual signs through an automated matching-to-sample procedure and to test for the emergence of new conditional relations and imitative behaviors. Seven adults with mild to severe mental retardation participated. Four were also hearing impaired. Relations between manual signs (set A) and pictures (set B) were initially taught, followed by the training of corresponding printed words (set C) and pictures (set B). Further presentations of conditional discriminations tested for the emergence of AC, followed by tests for the emergence of imitative signing behavior (D) in the presence of either pictures (B) or printed words (C). Each stimulus set comprised 9 elements. The stimuli were still pictures, printed words, and dynamic presentations of manual signs. A pretest was conducted to determine which signs the participants could make pre-experimentally. Teaching was arranged in a multiple baseline design across 3 groups of 3 words each. The purpose of the present study was to determine whether participants would emit manual signs in expressive sign tests as a result of observation (video modeling) during matching-to-sample training in the absence of explicit training. Five of the 7 subjects passed tests of emergence and emitted at least 50% of the signs. Two were hearing impaired with signing experience, and 3 were not hearing impaired and had no signing experience. Thus, observation of video-recorded manual signs in a matching-to-sample training procedure was effective at establishing some signs in adults with mental retardation. PMID:22477400

  6. Ethnic Variations of Pathways Linking Socioeconomic Status, Parenting, and Preacademic Skills in a Nationally Representative Sample

    ERIC Educational Resources Information Center

    Iruka, Iheoma U.; Dotterer, Aryn M.; Pungello, Elizabeth P.

    2014-01-01

    Research Findings: Grounded in the investment model and informed by the integrative theory of the study of minority children, this study used the Early Childhood Longitudinal Study-Birth Cohort data set, a nationally representative sample of young children, to investigate whether the association between socioeconomic status (family income and…

  7. U.S. Food safety and Inspection Service testing for Salmonella in selected raw meat and poultry products in the United States, 1998 through 2003: an establishment-level analysis.

    PubMed

    Eblen, Denise R; Barlow, Kristina E; Naugle, Alecia Larew

    2006-11-01

    The U.S. Food Safety and Inspection Service (FSIS) pathogen reduction-hazard analysis critical control point systems final rule, published in 1996, established Salmonella performance standards for broiler chicken, cow and bull, market hog, and steer and heifer carcasses and for ground beef, chicken, and turkey meat. In 1998, the FSIS began testing to verify that establishments are meeting performance standards. Samples are collected in sets in which the number of samples is defined but varies according to product class. A sample set fails when the number of positive Salmonella samples exceeds the maximum number of positive samples allowed under the performance standard. Salmonella sample sets collected at 1,584 establishments from 1998 through 2003 were examined to identify factors associated with failure of one or more sets. Overall, 1,282 (80.9%) of establishments never had failed sets. In establishments that did experience set failure(s), generally the failed sets were collected early in the establishment testing history, with the exception of broiler establishments where failure(s) occurred both early and late in the course of testing. Small establishments were more likely to have experienced a set failure than were large or very small establishments, and broiler establishments were more likely to have failed than were ground beef, market hog, or steer-heifer establishments. Agency response to failed Salmonella sample sets in the form of in-depth verification reviews and related establishment-initiated corrective actions have likely contributed to declines in the number of establishments that failed sets. A focus on food safety measures in small establishments and broiler processing establishments should further reduce the number of sample sets that fail to meet the Salmonella performance standard.
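    The pass/fail rule for a sample set described above is a binomial tail probability: if each sample is positive with probability p, a set of n samples fails when the number of positives exceeds the allowed maximum c. The n, c, and p values used below are illustrative assumptions, not the FSIS performance-standard values for any specific product class.

```python
from math import comb

def p_set_fails(n, c, p):
    """P(X > c) for X ~ Binomial(n, p): probability that a set of n samples
    exceeds the allowed maximum of c positives."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(c + 1, n + 1))
```

    The failure probability rises steeply with prevalence, so establishments whose true prevalence sits just below the standard still fail occasionally across repeated sets, one reason repeated testing and corrective actions matter.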

  8. Validation of the Comprehensive ICF Core Set for Vocational Rehabilitation From the Perspective of Physical Therapists: International Delphi Survey.

    PubMed

    Kaech Moll, Veronika M; Escorpizo, Reuben; Portmann Bergamaschi, Ruth; Finger, Monika E

    2016-08-01

    The Comprehensive ICF Core Set for vocational rehabilitation (VR) is a list of essential categories on functioning based on the World Health Organization (WHO) International Classification of Functioning, Disability and Health (ICF), which describes a standard for interdisciplinary assessment, documentation, and communication in VR. The aim of this study was to examine the content validity of the Comprehensive ICF Core Set for VR from the perspective of physical therapists. A 3-round email survey was performed using the Delphi method. A convenience sample of international physical therapists working in VR with work experience of ≥2 years were asked to identify aspects they consider relevant when evaluating or treating clients in VR. Responses were linked to ICF categories and compared with the Comprehensive ICF Core Set for VR. Sixty-two physical therapists from all 6 WHO world regions responded with 3,917 statements that were subsequently linked to 338 ICF categories. Fifteen (17%) of the 90 categories in the Comprehensive ICF Core Set for VR were confirmed by the physical therapists in the sample. Twenty-two additional ICF categories were identified that were not included in the Comprehensive ICF Core Set for VR. Vocational rehabilitation in physical therapy is not well defined in every country, which might have contributed to the small sample size. Therefore, the results cannot be generalized to all physical therapists practicing in VR. The content validity of the ICF Core Set for VR is insufficient solely from a physical therapist perspective. The results of this study could be used to define a physical therapy-specific set of ICF categories to develop and guide physical therapist clinical practice in VR. © 2016 American Physical Therapy Association.

  9. Expression signature as a biomarker for prenatal diagnosis of trisomy 21.

    PubMed

    Volk, Marija; Maver, Aleš; Lovrečić, Luca; Juvan, Peter; Peterlin, Borut

    2013-01-01

    A universal biomarker panel with the potential to predict high-risk pregnancies or adverse pregnancy outcomes does not exist. Transcriptome analysis is a powerful tool for capturing differentially expressed genes (DEG), which can be used as a biomarker-based diagnostic and predictive tool for various conditions in the prenatal setting. In search of a biomarker set for predicting high-risk pregnancies, we performed global expression profiling to find DEG in Ts21. Subsequently, we performed targeted validation and diagnostic performance evaluation on a larger group of case and control samples. Initially, transcriptomic profiles of 10 cultivated amniocyte samples with Ts21 and 9 with a normal euploid constitution were determined using expression microarrays. Datasets from Ts21 transcriptomic studies in the GEO repository were incorporated. DEG were discovered using linear regression modelling and validated using RT-PCR quantification on an independent sample of 16 cases with Ts21 and 32 controls. The classification performance of Ts21 status based on expression profiling was assessed using a supervised machine learning algorithm and evaluated with a leave-one-out cross-validation approach. Global gene expression profiling revealed significant expression changes between normal and Ts21 samples, which, in combination with data from previously performed Ts21 transcriptomic studies, were used to generate a multi-gene biomarker for Ts21 comprising 9 gene expression profiles. In addition to the biomarker's high performance in discriminating samples from global expression profiling, we were also able to show its discriminatory performance on the larger second sample set, validated using an RT-PCR experiment (AUC=0.97), while its performance on data from previously published studies reached discriminatory AUC values of 1.00. Our results show that transcriptomic changes might potentially be used to discriminate trisomy of chromosome 21 in the prenatal setting. As expressional alterations reflect both causal and reactive cellular mechanisms, transcriptomic changes may thus have future potential in the diagnosis of a wide array of heterogeneous diseases that result from genetic disturbances.
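    The evaluation protocol above, leave-one-out cross-validation over 16 cases and 32 controls, can be sketched directly. The study's classifier is described only as a supervised machine learning algorithm, so a nearest-centroid rule over 9-gene expression profiles stands in for it here, and the data are synthetic.

```python
import numpy as np

def loocv_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier: each sample
    is held out in turn and assigned to the closest class centroid computed
    from the remaining samples."""
    n = len(y)
    correct = 0
    for i in range(n):
        train = np.arange(n) != i
        Xt, yt = X[train], y[train]
        centroids = {c: Xt[yt == c].mean(axis=0) for c in np.unique(yt)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += int(pred == y[i])
    return correct / n

# synthetic 9-gene profiles: 16 "Ts21" vs 32 "control" samples (assumed data)
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(2.0, 1.0, size=(16, 9)),
               rng.normal(0.0, 1.0, size=(32, 9))])
y = np.array([1] * 16 + [0] * 32)
acc = loocv_accuracy(X, y)
```

    Leave-one-out is the natural choice at this sample size, since it uses all but one sample for training while still giving an unbiased count of held-out classifications.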

  10. Determination of fat and total protein content in milk using conventional digital imaging.

    PubMed

    Kucheryavskiy, Sergey; Melenteva, Anastasiia; Bogomolov, Andrey

    2014-04-01

    The applicability of conventional digital imaging to quantitative determination of fat and total protein in cow's milk, based on the phenomenon of light scatter, has been proved. A new algorithm for extracting features from digital images of milk samples has been developed. The algorithm takes into account the spatial distribution of light diffusely transmitted through a sample. The proposed method has been tested on two sample sets prepared from industrial raw milk standards with variable fat and protein content. Partial Least-Squares (PLS) regression on the features calculated from images of monochromatically illuminated milk samples resulted in models with high prediction performance when the sets were analysed separately (best models with cross-validated R(2)=0.974 for protein and R(2)=0.973 for fat content). However, when the sets were analysed jointly, the obtained results were significantly worse (best models with cross-validated R(2)=0.890 for fat content and R(2)=0.720 for protein content). The results have been compared with a previously published Vis/SW-NIR spectroscopic study of similar samples. Copyright © 2013 Elsevier B.V. All rights reserved.
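    The modelling step is ordinary single-response PLS regression. The sketch below is a generic PLS1 (NIPALS) fit, not the study's feature-extraction pipeline or model settings, and the synthetic "image feature" data are an assumption.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (NIPALS). Returns (coef, x_mean, y_mean) such that
    yhat = (Xnew - x_mean) @ coef + y_mean."""
    x_mean = X.mean(axis=0)
    y_mean = y.mean()
    Xk = X - x_mean
    yk = y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk                 # weight vector from X-y covariance
        w = w / np.linalg.norm(w)
        t = Xk @ w                    # scores
        tt = t @ t
        p = (Xk.T @ t) / tt           # X loadings
        qk = (yk @ t) / tt            # y loading
        Xk = Xk - np.outer(t, p)      # deflate X
        yk = yk - qk * t              # deflate y
        W.append(w); P.append(p); q.append(qk)
    W = np.array(W).T; P = np.array(P).T; q = np.array(q)
    coef = W @ np.linalg.solve(P.T @ W, q)
    return coef, x_mean, y_mean

# synthetic check: 4 features linearly related to a "fat content" response
rng = np.random.default_rng(5)
X = rng.normal(size=(40, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 4.0
coef, xm, ym = pls1_fit(X, y, n_components=4)
yhat = (X - xm) @ coef + ym
r2 = 1 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

    With all components retained and an exactly linear response, PLS1 reduces to ordinary least squares, so the fit is essentially perfect; in practice the component count is chosen by cross-validation, as the reported cross-validated R(2) values imply.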

  11. ANALYSIS OF SAMPLING TECHNIQUES FOR IMBALANCED DATA: AN N=648 ADNI STUDY

    PubMed Central

    Dubey, Rashmi; Zhou, Jiayu; Wang, Yalin; Thompson, Paul M.; Ye, Jieping

    2013-01-01

    Many neuroimaging applications deal with imbalanced imaging data. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, the mild cognitive impairment (MCI) cases eligible for the study are nearly twice the number of Alzheimer's disease (AD) patients for the structural magnetic resonance imaging (MRI) modality and six times the number of control cases for the proteomics modality. Constructing an accurate classifier from imbalanced data is a challenging task. Traditional classifiers that aim to maximize the overall prediction accuracy tend to classify all data into the majority class. In this paper, we study an ensemble system of feature selection and data sampling for the class imbalance problem. We systematically analyze various sampling techniques by examining the efficacy of different rates and types of undersampling, oversampling, and a combination of over- and undersampling approaches. We thoroughly examine six widely used feature selection algorithms to identify significant biomarkers and thereby reduce the complexity of the data. The efficacy of the ensemble techniques is evaluated using two different classifiers, Random Forest and Support Vector Machines, based on classification accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity measures. Our extensive experimental results show that for various problem settings in ADNI, (1) a balanced training set obtained with K-Medoids-based undersampling gives the best overall performance among the different data sampling techniques and the no-sampling approach; and (2) sparse logistic regression with stability selection achieves competitive performance among various feature selection algorithms. Comprehensive experiments with various settings show that our proposed ensemble model of multiple undersampled datasets yields stable and promising results. PMID:24176869
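    The simplest of the sampling strategies compared in the paper is random undersampling of the majority class; the K-Medoids variant additionally picks representative majority-class cases instead of a uniform random draw. The class sizes below are an illustrative 2:1 imbalance, not the ADNI counts.

```python
import numpy as np

def undersample(X, y, rng):
    """Return a class-balanced subsample: every class is randomly cut down
    to the size of the minority class."""
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    keep = rng.permutation(keep)      # shuffle so classes are interleaved
    return X[keep], y[keep]

# imbalanced toy set: 120 majority-class vs 60 minority-class rows
rng = np.random.default_rng(11)
X = rng.normal(size=(180, 5))
y = np.array([0] * 120 + [1] * 60)
Xb, yb = undersample(X, y, rng)
```

    Because undersampling discards majority-class data, the paper's ensemble of multiple undersampled datasets recovers some of the lost information while keeping each training set balanced.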

  12. Uncovering the hidden risk architecture of the schizophrenias: confirmation in three independent genome-wide association studies.

    PubMed

    Arnedo, Javier; Svrakic, Dragan M; Del Val, Coral; Romero-Zaliz, Rocío; Hernández-Cuervo, Helena; Fanous, Ayman H; Pato, Michele T; Pato, Carlos N; de Erausquin, Gabriel A; Cloninger, C Robert; Zwir, Igor

    2015-02-01

    The authors sought to demonstrate that schizophrenia is a heterogeneous group of heritable disorders caused by different genotypic networks that give rise to distinct clinical syndromes. In a large genome-wide association study of cases with schizophrenia and controls, the authors first identified sets of interacting single-nucleotide polymorphisms (SNPs) that cluster within particular individuals (SNP sets) regardless of clinical status. Second, they examined the risk of schizophrenia for each SNP set and tested replicability in two independent samples. Third, they identified genotypic networks composed of SNP sets sharing SNPs or subjects. Fourth, they identified sets of distinct clinical features that cluster in particular cases (phenotypic sets or clinical syndromes) without regard for their genetic background. Fifth, they tested whether SNP sets were associated with distinct phenotypic sets in a replicable manner across the three studies. The authors identified 42 SNP sets associated with a 70% or greater risk of schizophrenia, and confirmed 34 (81%) or more with similar high risk of schizophrenia in two independent samples. Seventeen networks of SNP sets did not share any SNP or subject. These disjoint genotypic networks were associated with distinct gene products and clinical syndromes (i.e., the schizophrenias) varying in symptoms and severity. Associations between genotypic networks and clinical syndromes were complex, showing multifinality and equifinality. The interactive networks explained the risk of schizophrenia more than the average effects of all SNPs (24%). Schizophrenia is a group of heritable disorders caused by a moderate number of separate genotypic networks associated with several distinct clinical syndromes.

  13. Impact of a new sampling buffer on faecal haemoglobin stability in a colorectal cancer screening programme by the faecal immunochemical test.

    PubMed

    Grazzini, Grazia; Ventura, Leonardo; Rubeca, Tiziana; Rapi, Stefano; Cellai, Filippo; Di Dia, Pietro P; Mallardi, Beatrice; Mantellini, Paola; Zappa, Marco; Castiglione, Guido

    2017-07-01

    Haemoglobin (Hb) stability in faecal samples is an important issue in colorectal cancer screening by the faecal immunochemical test (FIT) for Hb. This study evaluated the performance of the FIT-Hb (OC-Sensor Eiken) used in the Florence screening programme by comparing two different formulations of the buffer, in both an analytical and a clinical setting. In the laboratory simulation, six faecal pools (three in each buffer type) were stored at different temperatures and analysed eight times in 10 replicates over 21 days. In the clinical setting, 7695 screenees returned two samples, using both the old and the new specimen collection device (SCD). In the laboratory simulation, 5 days after sample preparation, the Hb concentration with the buffer of the old SCD had decreased by 40% at room temperature (25°C, range 22-28°C) and by up to 60% at outside temperature (29°C, range 16-39°C), whereas with the new buffer it had decreased by only 10%. In the clinical setting, a higher mean Hb concentration with the new SCD compared with the old one was found (6.3 vs. 5.0 µg Hb/g faeces, respectively, P<0.001); no statistically significant difference was found in the probability of having a positive result in the two SCDs. Better Hb stability was observed with the new buffer under laboratory conditions, but no difference was found in the clinical performance. In our study, only marginal advantages arise from the new buffer. Improvements in sample stability represent a significant target in the screening setting.

  14. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--METALS IN DERMAL ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Dermal Wipes data set contains analytical results for measurements of up to 11 metals in 86 dermal wipe samples over 86 households. Each sample was collected from the primary respondent within each household. The sampling period occurred on the first day of the fi...

  15. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--METALS IN DUST ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Dust data set contains analytical results for measurements of up to 11 metals in 182 dust samples over 91 households. Samples were taken by collecting dust samples from the indoor floor areas in the main room and in the bedroom of the primary resident. In addition...

  16. NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR METALS IN SPIKES

    EPA Science Inventory

    This data set includes analytical results for measurements of metals in 49 field control samples (spikes). Measurements were made for up to 11 metals in samples of water, blood, and urine. Field controls were used to assess recovery of target analytes from a sample media during s...

  17. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--METALS IN URINE ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Urine data set contains analytical results for measurements of up to 7 metals in 86 urine samples over 86 households. Each sample was collected from the primary respondent within each household. The sample consists of the first morning void following the 24-hour d...

  18. NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR METALS IN SPIKE SAMPLES

    EPA Science Inventory

    The Metals in Spikes data set contains the analytical results of measurements of up to 4 metals in 71 control samples (spikes) from 47 households. Measurements were made in samples of indoor and outdoor air, blood, and urine. Controls were used to assess recovery of target anal...

  19. Blood Sampling and Preparation Procedures for Proteomic Biomarker Studies of Psychiatric Disorders.

    PubMed

    Guest, Paul C; Rahmoune, Hassan

    2017-01-01

    A major challenge in proteomic biomarker discovery and validation for psychiatric diseases is the inherent biological complexity underlying these conditions. There are also many technical issues which hinder this process such as the lack of standardization in sampling, processing and storage of bio-samples in preclinical and clinical settings. This chapter describes a reproducible procedure for sampling blood serum and plasma that is specifically designed for maximizing data quality output in two-dimensional gel electrophoresis, multiplex immunoassay and mass spectrometry profiling studies.

  20. Rapid and visual detection of Leptospira in urine by LigB-LAMP assay with pre-addition of dye.

    PubMed

    Ali, Syed Atif; Kaur, Gurpreet; Boby, Nongthombam; Sabarinath, T; Solanki, Khushal; Pal, Dheeraj; Chaudhuri, Pallab

    2017-12-01

    Leptospirosis, caused by pathogenic species of Leptospira, is considered to be the most widespread zoonotic disease. The present study reports a novel set of primers targeting the LigB gene for visual detection of pathogenic Leptospira in urine samples through loop-mediated isothermal amplification (LAMP). The results were recorded using hydroxynaphthol blue (HNB), SYBR Green I and calcein. The analytical sensitivity of LAMP was as few as 10 leptospiral organisms in spiked urine samples from cattle and dogs. The LigB gene-based LAMP, termed LigB-LAMP, was found to be 10 times more sensitive than conventional PCR. The diagnostic specificity of LAMP was 100% when compared with SYBR Green qPCR for detection of Leptospira in urine samples. Although qPCR was found to be more sensitive, the rapidity and simplicity of setting up the LAMP test, followed by visual detection of Leptospira infection in clinical samples, make LigB-LAMP an alternative and favourable diagnostic tool in resource-poor settings. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Assessment of left ventricular function and mass by MR imaging: a stereological study based on the systematic slice sampling procedure.

    PubMed

    Mazonakis, Michalis; Sahin, Bunyamin; Pagonidis, Konstantin; Damilakis, John

    2011-06-01

    The aim of this study was to combine the stereological technique with magnetic resonance (MR) imaging data for the volumetric and functional analysis of the left ventricle (LV). Cardiac MR examinations were performed in 13 consecutive subjects with known or suspected coronary artery disease. The end-diastolic volume (EDV), end-systolic volume, ejection fraction (EF), and mass were estimated by stereology using the entire slice set depicting LV and systematic sampling intensities of 1/2 and 1/3 that provided samples with every second and third slice, respectively. The repeatability of stereology was evaluated. Stereological assessments were compared with the reference values derived by manually tracing the endocardial and epicardial contours on MR images. Stereological EDV and EF estimations obtained by the 1/3 systematic sampling scheme were significantly different from those by manual delineation (P < .05). No difference was observed between the reference values and the LV parameters estimated by the entire slice set or a sampling intensity of 1/2 (P > .05). For these stereological approaches, a high correlation (r(2) = 0.80-0.93) and clinically acceptable limits of agreement were found with the reference method. Stereological estimations obtained by both sample sizes presented comparable coefficient of variation values of 2.9-5.8%. The mean time for stereological measurements on the entire slice set was 3.4 ± 0.6 minutes and it was reduced to 2.5 ± 0.5 minutes with the 1/2 systematic sampling scheme. Stereological analysis on systematic samples of MR slices generated by the 1/2 sampling intensity provided efficient and quick assessment of LV volumes, function, and mass. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
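    The stereological volume estimate behind the abstract follows the Cavalieri principle: the sum of sampled slice areas times slice thickness, scaled by the systematic sampling interval. The slice areas below are an illustrative smooth profile, not patient data.

```python
import numpy as np

def cavalieri_volume(areas, thickness, interval=1, start=0):
    """Cavalieri-style volume estimate using every `interval`-th slice
    starting at index `start`; the interval rescales the area sum so that
    subsampled estimates remain unbiased."""
    sampled = areas[start::interval]
    return sampled.sum() * thickness * interval

# parabolic area profile over 12 short-axis slices of 8 mm thickness
i = np.arange(12)
areas = 1500.0 * (1.0 - ((i - 5.5) / 6.5) ** 2)   # slice areas in mm^2
v_full = cavalieri_volume(areas, thickness=8.0)               # every slice
v_half = cavalieri_volume(areas, thickness=8.0, interval=2)   # every 2nd slice
rel_diff = abs(v_half - v_full) / v_full
```

    For a smoothly varying structure such as the LV, halving the slice count changes the estimate only marginally, which matches the paper's finding that the 1/2 sampling intensity agreed with the full slice set while the sparser 1/3 scheme did not.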

  2. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    PubMed

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, a number that is often impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and from large subsets of these data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and of 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were the minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, the 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The estimates closest to the 1439-result reference interval for the 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
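    Two of the estimators compared in the study can be run side by side on a synthetic right-skewed "creatinine" distribution: the nonparametric 2.5th/97.5th percentiles on a large sample, versus mean +/- 2 SD on raw values and on transformed values. A log transform stands in here for the Box-Cox transform, and the lognormal parameters are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)
values = rng.lognormal(mean=4.4, sigma=0.25, size=1439)   # skewed reference data

# nonparametric reference interval (the recommended large-sample method)
lo_np, hi_np = np.percentile(values, [2.5, 97.5])

# parametric mean +/- 2 SD on the raw, skewed values
m, s = values.mean(), values.std(ddof=1)
lo_raw, hi_raw = m - 2 * s, m + 2 * s

# parametric mean +/- 2 SD after a normalising transform, back-transformed
logs = np.log(values)
lo_tr = np.exp(logs.mean() - 2 * logs.std(ddof=1))
hi_tr = np.exp(logs.mean() + 2 * logs.std(ddof=1))
```

    On skewed data the transformed parametric limits track the nonparametric ones far more closely than raw mean +/- 2 SD, the behaviour the study reports for Box-Cox-transformed estimates.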

  3. Nonlinear interferometric vibrational imaging

    NASA Technical Reports Server (NTRS)

    Boppart, Stephen A. (Inventor); Marks, Daniel L. (Inventor)

    2009-01-01

    A method of examining a sample, which includes: exposing a reference to a first set of electromagnetic radiation, to form a second set of electromagnetic radiation scattered from the reference; exposing a sample to a third set of electromagnetic radiation to form a fourth set of electromagnetic radiation scattered from the sample; and interfering the second set of electromagnetic radiation and the fourth set of electromagnetic radiation. The first set and the third set of electromagnetic radiation are generated from a source; at least a portion of the second set of electromagnetic radiation is of a frequency different from that of the first set of electromagnetic radiation; and at least a portion of the fourth set of electromagnetic radiation is of a frequency different from that of the third set of electromagnetic radiation.

  4. Differences in clinical characteristics between patients assessed for NHS specialist psychotherapy and primary care counselling.

    PubMed

    Chiesa, Marco; Fonagy, Peter; Bateman, Anthony W

    2007-12-01

    Although several studies have described patient populations in primary care counselling settings and NHS (National Health Service) specialist psychotherapy settings, there is a paucity of studies specifically comparing differences in clinical characteristics between the two groups of patients. The aim of this study is to ascertain whether specialist psychotherapy referrals represent a more challenging client group than primary care counselling patients. We compare the socio-demographic features and severity of presentation in the symptomatic, interpersonal problems and global adjustment dimensions of a sample of patients (N=384) assessed by a primary care counselling service located in North London and a sample of patients (N=853) assessed in eight NHS psychotherapy centres located within urban settings in England. Both groups completed the Brief Symptom Inventory, the Inventory of Interpersonal Problems and the Clinical Outcomes in Routine Evaluation Outcome Measure. Patients referred for specialist psychotherapy services were more dysfunctional than those referred for primary care counselling. The linear function constructed to discriminate the groups showed that a combination of more psychotic symptoms, social inhibitions and higher risk of self-harm effectively identified those referred to psychotherapy services, while patients exhibiting greater levels of somatic and anxiety symptoms and non-assertiveness were more likely to be seen in primary care settings. However, similarities between the two samples were also marked, as shown by the overlap in the distribution of Clinical Outcomes in Routine Evaluation clinical scores in the two samples. The findings are discussed in terms of their implications for policy and service delivery of these two types of psychological therapy services.

  5. Characterization of plastic blends made from mixed plastics waste of different sources.

    PubMed

    Turku, Irina; Kärki, Timo; Rinne, Kimmo; Puurtinen, Ari

    2017-02-01

    This paper studies the recyclability of construction and household plastic waste collected from local landfills. Samples were processed from mixed plastic waste by injection moulding. In addition, blends of the pure plastics polypropylene and polyethylene were processed as a reference set. Reference samples with known plastic ratios were used as the calibration set for quantitative analysis of plastic fractions in the recycled blends. The samples were tested for tensile properties; scanning electron microscopy with energy-dispersive X-ray spectroscopy was used for elemental analysis of the blend surfaces, and Fourier transform infrared (FTIR) analysis was used to quantify plastics content.
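    The quantification step described here amounts to a calibration curve: reference blends of known composition anchor a linear model that is then inverted for unknown samples. A minimal sketch under assumed values; the band-intensity ratios and the 0.60 query are hypothetical, not measurements from the paper.

```python
import numpy as np

# hypothetical calibration set: known polypropylene fraction in PP/PE reference blends
pp_fraction = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
# assumed FTIR band-intensity ratios measured for those blends
band_ratio = np.array([0.02, 0.27, 0.49, 0.74, 0.98])

# least-squares line mapping measured ratio -> PP fraction
slope, intercept = np.polyfit(band_ratio, pp_fraction, deg=1)

# estimate the PP fraction of a recycled blend from its measured ratio
estimated_pp = slope * 0.60 + intercept
```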

  6. Does a Corresponding Set of Variables for Explaining Voluntary Organizational Turnover Transfer to Explaining Voluntary Occupational Turnover?

    ERIC Educational Resources Information Center

    Blau, Gary

    2007-01-01

    This study proposed and tested corresponding sets of variables for explaining voluntary organizational versus occupational turnover for a sample of medical technologists. This study is believed to be the first test of the Rhodes and Doering (1983) occupational change model using occupational turnover data. Results showed that corresponding job…

  7. Effects of Goal-Setting Skills on Students' Academic Performance in English Language in Enugu, Nigeria

    ERIC Educational Resources Information Center

    Abe, Iyabo Idowu; Ilogu, Guy Chibuzoh; Madueke, Ify Louisa

    2014-01-01

    The study investigated the effectiveness of goal-setting skills on Senior Secondary II students' academic performance in English language in Enugu Metropolis, Enugu State, Nigeria. A quasi-experimental pre-test, post-test control group design was adopted for the study. The initial sample was 147 participants (male and female) Senior Secondary…

  8. Factors Affecting Adult Student Dropout Rates in the Korean Cyber-University Degree Programs

    ERIC Educational Resources Information Center

    Choi, Hee Jun; Kim, Byoung Uk

    2018-01-01

    Few empirical studies of adult distance learners' decisions to drop out of degree programs have used large enough sample sizes to generalize the findings or data sets drawn from multiple online programs that address various subjects. Accordingly, in this study, we used a large administrative data set drawn from multiple online degree programs to…

  9. Secondary School Teachers' Pedagogical Content Knowledge of Some Common Student Errors and Misconceptions in Sets

    ERIC Educational Resources Information Center

    Kolitsoe Moru, Eunice; Qhobela, Makomosela

    2013-01-01

    The study investigated teachers' pedagogical content knowledge of common students' errors and misconceptions in sets. Five mathematics teachers from one Lesotho secondary school were the sample of the study. Questionnaires and interviews were used for data collection. The results show that teachers were able to identify the following students'…

  10. Shallow ground-water quality beneath a major urban center: Denver, Colorado, USA

    USGS Publications Warehouse

    Bruce, B.W.; McMahon, P.B.

    1996-01-01

    A survey of the chemical quality of ground water in the unconsolidated alluvial aquifer beneath a major urban center (Denver, Colorado, USA) was performed in 1993 with the objective of characterizing the quality of shallow ground water in the urban area and relating water quality to land use. Thirty randomly selected alluvial wells were each sampled once for a broad range of dissolved constituents. The urban land use at each well site was subclassified into one of three land-use settings: residential, commercial, and industrial. Shallow ground-water quality was highly variable in the urban area and the variability could be related to these land-use setting classifications. Sulfate (SO4) was the predominant anion in most samples from the residential and commercial land-use settings, whereas bicarbonate (HCO3) was the predominant anion in samples from the industrial land-use setting, indicating a possible shift in redox conditions associated with land use. Only three of 30 samples had nitrate concentrations that exceeded the US national drinking-water standard of 10 mg l-1 as nitrogen, indicating that nitrate contamination of shallow ground water may not be a serious problem in this urban area. However, the highest median nitrate concentration (4.2 mg l-1) was in samples from the residential setting, where fertilizer application is assumed to be most intense. Twenty-seven of 30 samples had detectable pesticides and nine of 82 analyzed pesticide compounds were detected at low concentrations, indicating that pesticides are widely distributed in shallow ground water in this urban area. Although the highest median total pesticide concentration (0.17 µg l-1) was in the commercial setting, the herbicides prometon and atrazine were found in each land-use setting. Similarly, 25 of 29 samples analyzed had detectable volatile organic compounds (VOCs), indicating these compounds are also widely distributed in this urban area. The total VOC concentrations in sampled wells ranged from nondetectable to 23,442 µg l-1. Widespread detections and occasionally high concentrations point to VOCs as the major anthropogenic ground-water impact in this urban environment. Generally, the highest VOC concentrations occurred in samples from the industrial setting. The most frequently detected VOC was the gasoline additive methyl tert-butyl ether (MTBE, in 23 of 29 wells). Results from this study indicate that the quality of shallow ground water in major urban areas can be related to land-use settings. Moreover, some VOCs and pesticides may be widely distributed at low concentrations in shallow ground water throughout major urban areas. As a result, the differentiation between point and non-point sources for these compounds in urban areas may be difficult.

  11. The biobank of the Norwegian mother and child cohort Study: A resource for the next 100 years

    PubMed Central

    Rønningen, Kjersti S.; Paltiel, Liv; Meltzer, Helle M.; Nordhagen, Rannveig; Lie, Kari K.; Hovengen, Ragnhild; Haugen, Margaretha; Nystad, Wenche; Magnus, Per; Hoppin, Jane A.

    2007-01-01

    Introduction Long-term storage of biological materials is a critical component of any epidemiological study. In designing specimen repositories, efforts need to balance future needs for samples with the logistical constraints of processing and storing samples in a timely fashion. Objectives In the Norwegian Mother and Child Cohort Study (MoBa), the Biobank was charged with long-term storage of more than 380,000 biological samples from pregnant women, their partners and their children for up to 100 years. Methods Biological specimens include whole blood, plasma, DNA and urine; samples are collected at 50 hospitals in Norway. All samples are sent via ordinary mail to the Biobank in Oslo, where the samples are registered, aliquoted and DNA extracted. DNA is stored at −20 °C while whole blood, urine and plasma are stored at −80 °C. Results As of July 2006, over 227,000 sample sets have been collected, processed and stored at the Biobank. Currently 250–300 sets are received daily. An important part of the Biobank is the quality control program. Conclusion With the unique combination of biological specimens and questionnaire data, the MoBa Study will constitute a resource for many future investigations of the separate and combined effects of genetic and environmental factors on pregnancy outcome and on human morbidity, mortality and health in general. PMID:17031521

  12. Nondestructive estimation of Pinus taeda L. wood properties for samples from a wide range of sites in Georgia

    Treesearch

    P.D. Jones; L.R. Schimleck; G.F. Peter; R.F. Daniels; A. Clark

    2005-01-01

    Preliminary studies based on small sample sets show that near infrared (NIR) spectroscopy has the potential for rapidly estimating many important wood properties. However, if NIR is to be used operationally, then calibrations using several hundred samples from a wide variety of growing conditions need to be developed and their performance tested on samples from new...

  13. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR COLLECTION OF PERSONAL AIR SAMPLES FOR ANALYSIS OF PESTICIDES OR METALS (UA-F-14.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the procedure for sampling personal air for metals and pesticides during a predetermined time period. The SOP includes the set up of the samplers for collection of either a metals sample or a pesticides sample, the calibration and initial c...

  14. An investigative comparison of purging and non-purging groundwater sampling methods in Karoo aquifer monitoring wells

    NASA Astrophysics Data System (ADS)

    Gomo, M.; Vermeulen, D.

    2015-03-01

    An investigation was conducted to statistically compare the influence of non-purging and purging groundwater sampling methods on analysed inorganic chemistry parameters and calculated saturation indices. Groundwater samples were collected from 15 monitoring wells drilled in Karoo aquifers before and after purging for the comparative study. For the non-purging method, samples were collected from groundwater flow zones located in the wells using electrical conductivity (EC) profiling. The two data sets of non-purged and purged groundwater samples were analysed for inorganic chemistry parameters at the Institute of Groundwater Studies (IGS) laboratory of the University of the Free State in South Africa. Saturation indices for mineral phases found in the database of the PHREEQC hydrogeochemical model were calculated for each data set. Four one-way ANOVA tests were conducted using Microsoft Excel 2007 to investigate whether there is any statistically significant difference between: (1) all inorganic chemistry parameters measured in the non-purged and purged groundwater samples per each specific well, (2) all mineral saturation indices calculated for the non-purged and purged groundwater samples per each specific well, (3) individual inorganic chemistry parameters measured in the non-purged and purged groundwater samples across all wells, and (4) individual mineral saturation indices calculated for non-purged and purged groundwater samples across all wells. For all the ANOVA tests conducted, the calculated p-values are greater than 0.05 (the significance level) and the test statistic (F) is less than the critical value (Fcrit) (F < Fcrit). The results imply that there was no statistically significant difference between the two data sets. With 95% confidence, it was therefore concluded that the variance between groups was due to random chance rather than to the influence of the sampling methods (the tested factor). It may therefore be possible that in some hydrogeologic conditions, non-purged groundwater samples are just as representative as purged ones. The findings of this study can provide an important platform for future evidence-oriented research investigations to establish the necessity of purging prior to groundwater sampling in different aquifer systems.
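    Each of the comparisons above reduces to a one-way F-test on two groups of measurements. A minimal sketch using scipy's `f_oneway`; the concentrations below are synthetic stand-ins for one well, not the study's data, and the 0.05 threshold mirrors the significance level quoted in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic stand-in for one well: paired measurements (e.g., mg/L) with and without purging
non_purged = rng.normal(loc=50.0, scale=5.0, size=12)
purged = non_purged + rng.normal(loc=0.0, scale=1.0, size=12)  # same source, small scatter

# one-way ANOVA across the two sampling methods
f_stat, p_value = stats.f_oneway(non_purged, purged)

# the study's decision rule: p > 0.05 means no statistically significant difference
no_significant_difference = bool(p_value > 0.05)
```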

  15. MicroRNAs for Detection of Pancreatic Neoplasia

    PubMed Central

    Vila-Navarro, Elena; Vila-Casadesús, Maria; Moreira, Leticia; Duran-Sanchon, Saray; Sinha, Rupal; Ginés, Àngels; Fernández-Esparrach, Glòria; Miquel, Rosa; Cuatrecasas, Miriam; Castells, Antoni; Lozano, Juan José; Gironella, Meritxell

    2017-01-01

    Objective: The aim of our study was to analyze the miRNome of pancreatic ductal adenocarcinoma (PDAC) and its preneoplastic lesion intraductal papillary mucinous neoplasm (IPMN), to find new microRNA (miRNA)-based biomarkers for early detection of pancreatic neoplasia. Background: Effective early detection methods for PDAC are needed. miRNAs are good biomarker candidates. Methods: Pancreatic tissues (n = 165) were obtained from patients with PDAC, IPMN, or from control individuals (C), from Hospital Clínic of Barcelona. Biomarker discovery was done using next-generation sequencing in a discovery set of 18 surgical samples (11 PDAC, 4 IPMN, 3 C). MiRNA validation was carried out by quantitative reverse transcriptase PCR in 2 different sets of samples: set 1—52 surgical samples (24 PDAC, 7 IPMN, 6 chronic pancreatitis, 15 C), and set 2—95 endoscopic ultrasound-guided fine-needle aspirations (60 PDAC, 9 IPMN, 26 C). Results: In all, 607 and 396 miRNAs were significantly deregulated in PDAC and IPMN versus C. Of them, 40 miRNAs commonly overexpressed in both PDAC and IPMN were selected for further validation. Among them, significant up-regulation of 31 and 30 miRNAs was confirmed by quantitative reverse transcriptase PCR in samples from set 1 and set 2, respectively. Conclusions: miRNome analysis shows that PDAC and IPMN have differential miRNA profiles with respect to C, with a large number of deregulated miRNAs shared by both neoplastic lesions. Indeed, we have identified and validated 30 miRNAs whose expression is significantly increased in PDAC and IPMN lesions. The feasibility of detecting these miRNAs in endoscopic ultrasound-guided fine-needle aspiration samples makes them good biomarker candidates for early detection of pancreatic cancer. PMID:27232245

  16. Implementation of Web- Based Distance Education in Nursing Education in Turkey: A Sample Lesson in Patient Education

    ERIC Educational Resources Information Center

    Senyuva, Emine; Tasocak, Gülsün

    2014-01-01

    The research was carried out in 2005-2006 as a descriptive and methodological study. It aimed to obtain students' feedback and to serve as a source for future relevant studies. The settings of the study were Istanbul University Florence Nightingale Nursing College and Istanbul University Bakirköy Health College. The sample of the study included…

  17. Limits of diagnostic accuracy of anti-hepatitis C virus antibodies detection by ELISA and immunoblot assay.

    PubMed

    Suslov, Anatoly P; Kuzin, Stanislav N; Golosova, Tatiana V; Shalunova, Nina V; Malyshev, Nikolai A; Sadikova, Natalia V; Vavilova, Lubov M; Somova, Anna V; Musina, Elena E; Ivanova, Maria V; Kipor, Tatiana T; Timonin, Igor M; Kuzina, Lubov E; Godkov, Mihail A; Bajenov, Alexei I; Nesterenko, Vladimir G

    2002-07-01

    When human sera samples are tested for anti-hepatitis C virus (HCV) antibodies using different ELISA kits as well as immunoblot assay kits, discrepant results often occur. As a result, the diagnosis of HCV infection in such sera remains unclear. The purpose of this investigation is to define the limits of HCV serodiagnostics. Overall, 7 different test kits of domestic and foreign manufacturers were used to test the sampled sera. A preliminary comparative study using seroconversion panels PHV905, PHV907 and PHV908 was performed, and a reference kit (Murex anti-HCV version 4) was chosen as the most sensitive kit on the basis of these results. Overall, 1640 sera samples were screened using different anti-HCV ELISA kits, and 667 of them gave discrepant results in at least two kits. These sera were then tested using three anti-HCV ELISA kits (first set of 377 samples) or four anti-HCV ELISA kits (second set of 290 samples) under reference laboratory conditions. In the first set, 17.2% of samples remained discrepant, and in the second set, 13.4%. "Discrepant" sera were further tested in RIBA 3.0 and INNO-LIA immunoblot confirmatory assays, but approximately 5-7% of them remained undetermined after all the tests. For samples with a signal-to-cutoff ratio higher than 3.0, a high rate of result consistency among the reference kit, routine ELISA kits and the INNO-LIA immunoblot assay was observed. On the other hand, the results of testing 27 "problematic" sera in RIBA 3.0 and INNO-LIA were consistent in only 55.5% of cases. Analysis of the antigen spectrum reactive with antibodies in "problematic" sera demonstrated a predominance of Core, NS3 and NS4 antigens for sera positive in RIBA 3.0, and of Core and NS3 antigens for sera positive in INNO-LIA. To overcome the problem of undetermined sera, methods based on other principles, as well as alternative criteria for diagnosing HCV infection, are discussed.

  18. U.S. Food Safety and Inspection Service testing for Salmonella in selected raw meat and poultry products in the United States, 1998 through 2003: analysis of set results.

    PubMed

    Naugle, Alecia Larew; Barlow, Kristina E; Eblen, Denise R; Teter, Vanessa; Umholtz, Robert

    2006-11-01

    The U.S. Food Safety and Inspection Service (FSIS) tests sets of samples of selected raw meat and poultry products for Salmonella to ensure that federally inspected establishments meet performance standards defined in the pathogen reduction-hazard analysis and critical control point system (PR-HACCP) final rule. In the present report, sample set results are described and associations between set failure and set and establishment characteristics are identified for 4,607 sample sets collected from 1998 through 2003. Sample sets were obtained from seven product classes: broiler chicken carcasses (n = 1,010), cow and bull carcasses (n = 240), market hog carcasses (n = 560), steer and heifer carcasses (n = 123), ground beef (n = 2,527), ground chicken (n = 31), and ground turkey (n = 116). Of these 4,607 sample sets, 92% (4,255) were collected as part of random testing efforts (A sets), and 93% (4,166) passed. However, the percentage of positive samples relative to the maximum number of positive results allowable in a set increased over time for broilers but decreased or stayed the same for the other product classes. Three factors associated with set failure were identified: establishment size, product class, and year. Set failures were more likely early in the testing program (relative to 2003). Small and very small establishments were more likely to fail than large ones. Set failure was less likely in ground beef than in other product classes. Despite an overall decline in set failures through 2003, these results highlight the need for continued vigilance to reduce Salmonella contamination in broiler chicken and continued implementation of programs designed to assist small and very small establishments with PR-HACCP compliance issues.

  19. Reduced contamination of pig carcasses using an alternative pluck set removal procedure during slaughter.

    PubMed

    Biasino, W; De Zutter, L; Woollard, J; Mattheus, W; Bertrand, S; Uyttendaele, M; Van Damme, I

    2018-05-26

    This study compared the current pig slaughter procedure, in which the pluck set is completely removed, with a procedure in which the pluck set is partially removed, leaving the highly contaminated oral cavity, tonsils and tongue untouched. The effect on carcass contamination was investigated by enumerating hygiene indicator bacteria (total aerobic count, Enterobacteriaceae and E. coli) and cefotaxime-resistant E. coli (CREC), as well as assessing Salmonella and Yersinia enterocolitica presence on the sternum, elbow and throat of pig carcasses. Using the alternative pluck set removal, significantly lower mean numbers of hygiene indicator bacteria on throat samples and of E. coli on elbow samples were found. Fewer pig carcasses were highly contaminated, and a lower presence and level of CREC was observed. No difference in Salmonella or Yersinia enterocolitica presence was seen. The data in this study can help to assess the effect of this alternative procedure on the safety of pork and, subsequently, public health.

  20. Studies of the physical, yield and failure behavior of aliphatic polyketones

    NASA Astrophysics Data System (ADS)

    Karttunen, Nicole Renee

    This thesis describes an investigation into the multiaxial yield and failure behavior of an aliphatic polyketone terpolymer. The behavior is studied as a function of: stress state, strain rate, temperature, and sample processing conditions. Results of this work include: elucidation of the behavior of a recently commercialized polymer, increased understanding of the effects listed above, insight into the effects of processing conditions on the morphology of the polyketone, and a description of yield strength of this material as a function of stress state, temperature, and strain rate. The first portion of work focuses on the behavior of a set of samples that are extruded under "common" processing conditions. Following this reference set of tests, the effect of testing this material at different temperatures is studied. A total of four different temperatures are examined. In addition, the effect of altering strain rate is examined. Testing is performed under pseudo-strain rate control at constant nominal octahedral shear strain rate for each failure envelope. A total of three different rates are studied. An extension of the first portion of work involves modeling the yield envelope. This is done by combining two approaches: continuum level and molecular level. The use of both methods allows the description of the yield envelope as a function of stress state, strain rate and temperature. The second portion of work involves the effects of processing conditions. For this work, additional samples are extruded with different shear and thermal histories than the "standard" material. One set of samples is processed with shear rates higher and lower than the standard. A second set is processed at higher and lower cooling rates than the standard. In order to understand the structural cause for changes in behavior with processing conditions, morphological characterization is performed on these samples. In particular, the effect on spherulitic structure is important. 
Residual stresses are also determined to be important to the behavior of the samples. Finally, an investigation into the crystalline structure of a family of aliphatic polyketones is performed. The effects of side group concentration and size are described.

  1. A Novel Tool Improves Existing Estimates of Recent Tuberculosis Transmission in Settings of Sparse Data Collection.

    PubMed

    Kasaie, Parastu; Mathema, Barun; Kelton, W David; Azman, Andrew S; Pennington, Jeff; Dowdy, David W

    2015-01-01

    In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission ("recent transmission proportion"), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional 'n-1' approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the 'n-1' technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the 'n-1' model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models' performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data.
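    The traditional 'n-1' estimator that the regression tools are benchmarked against is simple to state: among observed cases, every clustered case beyond the first in its genotype cluster is attributed to recent transmission. A minimal sketch; the cluster sizes and case counts are hypothetical.

```python
def n_minus_one(cluster_sizes, total_cases):
    """Traditional 'n-1' estimate of the recent transmission proportion.

    cluster_sizes: sizes of genotype clusters among observed cases
    total_cases: total number of observed cases (clustered + unique)
    """
    clustered_cases = sum(s for s in cluster_sizes if s >= 2)
    clusters = sum(1 for s in cluster_sizes if s >= 2)
    return (clustered_cases - clusters) / total_cases

# hypothetical genotyping results: clusters of 4, 3 and 2 cases plus 11 unique isolates
estimate = n_minus_one([4, 3, 2], total_cases=20)  # (9 - 3) / 20 = 0.3
```

    Because unsampled cases make clusters look smaller than they are, this estimate is biased downward when coverage is incomplete, which is the bias the regression tools are designed to correct.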

  3. Estimation of wood density and chemical composition by means of diffuse reflectance mid-infrared Fourier transform (DRIFT-MIR) spectroscopy.

    PubMed

    Nuopponen, Mari H; Birch, Gillian M; Sykes, Rob J; Lee, Steve J; Stewart, Derek

    2006-01-11

    Sitka spruce (Picea sitchensis) samples (491) from 50 different clones, as well as 24 different tropical hardwoods and 20 Scots pine (Pinus sylvestris) samples, were used to construct diffuse reflectance mid-infrared Fourier transform (DRIFT-MIR) based partial least squares (PLS) calibrations for lignin, cellulose, and wood resin contents and densities. Calibrations for density, lignin, and cellulose were established for all wood species combined into one data set as well as for the separate Sitka spruce data set. Relationships between wood resin and MIR data were constructed for the Sitka spruce data set as well as the combined Scots pine and Sitka spruce data sets. Calibrations containing only five wavenumbers instead of the spectral ranges 4000-2800 and 1800-700 cm(-1) were also established. In addition, chemical factors contributing to wood density were studied. Chemical composition and density assessed from DRIFT-MIR calibrations had R2 and Q2 values in the ranges of 0.6-0.9 and 0.6-0.8, respectively. The PLS models gave root mean square error of prediction (RMSEP) values of 1.6-1.9, 2.8-3.7, and 0.4 for lignin, cellulose, and wood resin contents, respectively. Density test sets had RMSEP values ranging from 50 to 56. A reduced number of wavenumbers can thus be used to predict the chemical composition and density of wood, which should allow these properties to be measured with a hand-held device. MIR spectral data indicated that low-density samples had somewhat higher lignin contents than high-density samples. Correspondingly, high-density samples contained slightly more polysaccharides than low-density samples. This observation was consistent with the wet chemical data.

  4. Results and analysis of saltstone cores taken from saltstone disposal unit cell 2A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reigel, M. M.; Hill, K. A.

    2016-03-01

    As part of an ongoing Performance Assessment (PA) Maintenance Plan, Savannah River Remediation (SRR) has developed a sampling and analyses strategy to facilitate the comparison of field-emplaced samples (i.e., saltstone placed and cured in a Saltstone Disposal Unit (SDU)) with samples prepared and cured in the laboratory. The primary objectives of the Sampling and Analyses Plan (SAP) are: (1) to demonstrate a correlation between the measured properties of laboratory-prepared, simulant samples (termed Sample Set 3) and the field-emplaced saltstone samples (termed Sample Set 9), and (2) to validate property values assumed for the Saltstone Disposal Facility (SDF) PA modeling. The analysis and property data for Sample Set 9 (i.e., six core samples extracted from SDU Cell 2A (SDU2A)) are documented in this report, and where applicable, the results are compared to the results for Sample Set 3. Relevant properties to demonstrate the aforementioned objectives include bulk density, porosity, saturated hydraulic conductivity (SHC), and radionuclide leaching behavior.

  5. Water-quality effects and characterization of indicators of onsite wastewater disposal systems in the east-central Black Hills area, South Dakota, 2006-08

    USGS Publications Warehouse

    Putnam, Larry D.; Hoogestraat, Galen K.; Sawyer, J. Foster

    2008-01-01

    Onsite wastewater disposal systems (OWDS) are used extensively in the Black Hills of South Dakota where many of the watersheds and aquifers are characterized by fractured or solution-enhanced bedrock with thin soil cover. A study was conducted during 2006-08 to characterize water-quality effects and indicators of OWDS. Water samples were collected and analyzed for potential indicators of OWDS, including chloride, bromide, boron, nitrite plus nitrate (NO2+NO3), ammonia, major ions, nutrients, selected trace elements, isotopes of nitrate, microbiological indicators, and organic wastewater compounds (OWCs). The microbiological indicators were fecal coliforms, Escherichia coli (E. coli), enterococci, Clostridium perfringens (C. perfringens), and coliphages. Sixty ground-water sampling sites were located either downgradient from areas of dense OWDS or in background areas and included 25 monitoring wells, 34 private wells, and 1 spring. Nine surface-water sampling sites were located on selected streams and tributaries either downstream or upstream from residential development within the Precambrian setting. Sampling results were grouped by their hydrogeologic setting: alluvial, Spearfish, Minnekahta, and Precambrian. Mean downgradient dissolved NO2+NO3 concentrations in ground water for the alluvial, Spearfish, Minnekahta, and Precambrian settings were 0.734, 7.90, 8.62, and 2.25 milligrams per liter (mg/L), respectively. Mean downgradient dissolved chloride concentrations in ground water for these settings were 324, 89.6, 498, and 33.2 mg/L, respectively. Mean downgradient dissolved boron concentrations in ground water for these settings were 736, 53, 64, and 43 micrograms per liter (ug/L), respectively. Mean dissolved surface-water concentrations for NO2+NO3, chloride, and boron for downstream sites were 0.222 mg/L, 32.1 mg/L, and 28 ug/L, respectively. 
Mean values of delta-15N and delta-18O (isotope ratios of 15N to 14N and 18O to 16O relative to standard ratios) for nitrate in ground-water samples were 10.4 and -2.0 per mil (‰), respectively, indicating a relatively small contribution from synthetic fertilizer and probably a substantial contribution from OWDS. The surface-water sample with the highest dissolved NO2+NO3 concentration of 1.6 mg/L had a delta-15N value of 12.36‰, which indicates warm-blooded animals (including humans) as the nitrate source. Fecal coliforms were detected in downgradient ground water most frequently in the Spearfish (19 percent) and Minnekahta (9.7 percent) settings. E. coli was detected most frequently in the Minnekahta (29 percent) and Spearfish (13 percent) settings. Enterococci were detected more frequently than other microbiological indicators in all four settings. Fecal coliforms and E. coli were detected in 73 percent and 95 percent of all surface-water samples, respectively. Enterococci, coliphages (somatic), and C. perfringens were detected in 50, 70, and 50 percent of surface-water samples, respectively. Of the 62 OWC analytes, 12 were detected only in environmental samples, 10 were detected in at least one environmental and one blank sample (not necessarily companion pairs), 2 were detected only in blank samples, and 38 were not detected in any blank, environmental, or replicate sample from either ground or surface water. Eleven different organic compounds were detected in ground-water samples at eight different sites. The most frequently occurring compound was DEET, which was found in 32 percent of the environmental samples, followed by tetrachloroethene, which was detected in 20 percent of the samples. For surface-water samples, 16 organic compounds were detected in 9 of the 10 total samples. The compound with the highest occurrence in surface-water samples was camphor, which was detected in 50 percent of samples. 
The alluvial setting was characterized by relatively low dissolved NO2+NO3 concentrations, detection of ammonia nitrogen, and relatively high concentr

  6. Ambient-temperature incubation for the field detection of Escherichia coli in drinking water.

    PubMed

    Brown, J; Stauber, C; Murphy, J L; Khan, A; Mu, T; Elliott, M; Sobsey, M D

    2011-04-01

    Escherichia coli is the pre-eminent microbiological indicator used to assess safety of drinking water globally. The cost and equipment requirements for processing samples by standard methods may limit the scale of water quality testing in technologically less developed countries and other resource-limited settings, however. We evaluate here the use of ambient-temperature incubation in detection of E. coli in drinking water samples as a potential cost-saving and convenience measure with applications in regions with high (>25°C) mean ambient temperatures. This study includes data from three separate water quality assessments: two in Cambodia and one in the Dominican Republic. Field samples of household drinking water were processed in duplicate by membrane filtration (Cambodia), Petrifilm™ (Cambodia) or Colilert® (Dominican Republic) on selective media at both standard incubation temperature (35–37°C) and ambient temperature, using up to three dilutions and three replicates at each dilution. Matched sample sets were well correlated, with 80% of samples (n = 1037) within risk-based microbial count strata (E. coli counts of <1, 1–10, 11–100, 101–1000, and >1000 CFU 100 ml⁻¹) and a pooled coefficient of variation of 17% (95% CI 15–20%) for paired sample sets across all methods. These results suggest that ambient-temperature incubation of E. coli in at least some settings may yield sufficiently robust data for water safety monitoring where laboratory or incubator access is limited.
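
The pooled coefficient of variation reported for the paired (standard vs. ambient) counts can be illustrated with a short sketch. The counts below are invented, and the pooling formula used (root mean square of the per-pair CVs) is an assumption about how the pooling was done, not a detail taken from the paper.

```python
# Minimal sketch of a pooled coefficient of variation over paired E. coli
# counts processed at standard (35-37 °C) and ambient temperature.
# Counts are made up; the RMS pooling formula is an assumption.
import numpy as np

standard = np.array([120.0, 45.0, 8.0, 300.0, 15.0])  # CFU/100 ml, 35-37 °C
ambient = np.array([110.0, 50.0, 9.0, 280.0, 13.0])   # CFU/100 ml, ambient

pairs = np.stack([standard, ambient], axis=1)          # one row per paired sample
cv_per_pair = pairs.std(axis=1, ddof=1) / pairs.mean(axis=1)
pooled_cv = float(np.sqrt(np.mean(cv_per_pair ** 2)))  # RMS of per-pair CVs
print(f"pooled CV: {pooled_cv:.1%}")
```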

  7. On the use of spectra from portable Raman and ATR-IR instruments in synthesis route attribution of a chemical warfare agent by multivariate modeling.

    PubMed

    Wiktelius, Daniel; Ahlinder, Linnea; Larsson, Andreas; Höjer Holmgren, Karin; Norlin, Rikard; Andersson, Per Ola

    2018-08-15

    Collecting data under field conditions for forensic investigations of chemical warfare agents calls for the use of portable instruments. In this study, a set of aged, crude preparations of sulfur mustard were characterized spectroscopically without any sample preparation using handheld Raman and portable IR instruments. The spectral data was used to construct Random Forest multivariate models for the attribution of test set samples to the synthetic method used for their production. Colored and fluorescent samples were included in the study, which made Raman spectroscopy challenging although fluorescence was diminished by using an excitation wavelength of 1064 nm. The predictive power of models constructed with IR or Raman data alone, as well as with combined data was investigated. Both techniques gave useful data for attribution. Model performance was enhanced when Raman and IR spectra were combined, allowing correct classification of 19/23 (83%) of test set spectra. The results demonstrate that data obtained with spectroscopy instruments amenable for field deployment can be useful in forensic studies of chemical warfare agents. Copyright © 2018 Elsevier B.V. All rights reserved.
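
A hedged sketch of the data-fusion idea described above: per-sample Raman and IR feature vectors are concatenated, and a random forest predicts the synthesis route. The spectra, routes and model settings are synthetic stand-ins, and scikit-learn's RandomForestClassifier replaces whatever Random Forest implementation the authors used.

```python
# Sketch of combining two spectroscopic modalities for synthesis-route
# attribution: concatenate Raman and IR features, then classify with a
# random forest. All spectra and routes here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_route, n_raman, n_ir = 30, 50, 40

# Three synthetic "synthesis routes", each with its own spectral signature.
X_parts, y_parts = [], []
for route in range(3):
    raman_sig = rng.normal(size=n_raman)
    ir_sig = rng.normal(size=n_ir)
    raman = raman_sig + 0.3 * rng.normal(size=(n_per_route, n_raman))
    ir = ir_sig + 0.3 * rng.normal(size=(n_per_route, n_ir))
    X_parts.append(np.hstack([raman, ir]))   # data-level fusion
    y_parts.append(np.full(n_per_route, route))
X, y = np.vstack(X_parts), np.concatenate(y_parts)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = rf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

With real field spectra the two modalities would first need consistent preprocessing (baseline correction, normalization) before concatenation.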

  8. Gene expression signature in urine for diagnosing and assessing aggressiveness of bladder urothelial carcinoma.

    PubMed

    Mengual, Lourdes; Burset, Moisès; Ribal, María José; Ars, Elisabet; Marín-Aguilera, Mercedes; Fernández, Manuel; Ingelmo-Torres, Mercedes; Villavicencio, Humberto; Alcaraz, Antonio

    2010-05-01

    To develop an accurate and noninvasive method for bladder cancer diagnosis and prediction of disease aggressiveness based on the gene expression patterns of urine samples. Gene expression patterns of 341 urine samples from bladder urothelial cell carcinoma (UCC) patients and 235 controls were analyzed via TaqMan Arrays. In a first phase of the study, three consecutive gene selection steps were done to identify a gene set expression signature to detect and stratify UCC in urine. Subsequently, those genes more informative for UCC diagnosis and prediction of tumor aggressiveness were combined to obtain a classification system of bladder cancer samples. In a second phase, the obtained gene set signature was evaluated in a routine clinical scenario analyzing only voided urine samples. We have identified a 12+2 gene expression signature for UCC diagnosis and prediction of tumor aggressiveness on urine samples. Overall, this gene set panel had 98% sensitivity (SN) and 99% specificity (SP) in discriminating between UCC and control samples and 79% SN and 92% SP in predicting tumor aggressiveness. The translation of the model to the clinically applicable format corroborates that the 12+2 gene set panel described maintains a high accuracy for UCC diagnosis (SN = 89% and SP = 95%) and tumor aggressiveness prediction (SN = 79% and SP = 91%) in voided urine samples. The 12+2 gene expression signature described in urine is able to identify patients suffering from UCC and predict tumor aggressiveness. We show that a panel of molecular markers may improve the schedule for diagnosis and follow-up in UCC patients. Copyright 2010 AACR.

  9. Neuro-genetic system for optimization of GMI samples sensitivity.

    PubMed

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.
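
A toy sketch of the neuro-genetic idea: fit a neural-network surrogate of phase sensitivity over the conditioning parameters, then let a genetic algorithm search the surrogate for the parameter set with the highest predicted sensitivity. The "sensitivity" function, parameter ranges, network size and GA settings below are all invented for illustration and are not the authors' configuration.

```python
# Neuro-genetic sketch: an MLP surrogate of GMI phase sensitivity is
# searched by a simple real-coded genetic algorithm. Everything here
# (target function, ranges, GA settings) is a made-up illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def toy_sensitivity(params):
    """Stand-in for measured phase sensitivity; params = (dc_level, freq)."""
    dc, f = params[..., 0], params[..., 1]
    return np.exp(-((dc - 0.6) ** 2 + (f - 0.3) ** 2) / 0.1)

# Train the surrogate on "measured" points in normalized parameter space.
X = rng.uniform(0.0, 1.0, size=(400, 2))
y = toy_sensitivity(X)
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)

# Simple GA maximizing the surrogate's predicted sensitivity.
pop = rng.uniform(0.0, 1.0, size=(60, 2))
for _ in range(40):
    fitness = mlp.predict(pop)
    parents = pop[np.argsort(fitness)[-30:]]           # truncation selection
    idx = rng.integers(0, 30, size=(60, 2))
    alpha = rng.uniform(size=(60, 1))
    pop = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
    pop += rng.normal(scale=0.02, size=pop.shape)      # mutation
    pop = np.clip(pop, 0.0, 1.0)

best = pop[np.argmax(mlp.predict(pop))]
print("best parameters (normalized):", best)
```

In the paper's setting the surrogate would be trained on real impedance-phase measurements rather than a closed-form function, which is exactly what makes the surrogate-plus-GA loop attractive: measurements are expensive, surrogate evaluations are not.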

  10. Meta-analysis of gene expression profiles associated with histological classification and survival in 829 ovarian cancer samples.

    PubMed

    Fekete, Tibor; Rásó, Erzsébet; Pete, Imre; Tegze, Bálint; Liko, István; Munkácsy, Gyöngyi; Sipos, Norbert; Rigó, János; Györffy, Balázs

    2012-07-01

    Transcriptomic analysis of global gene expression in ovarian carcinoma can identify dysregulated genes capable of serving as molecular markers for histology subtypes and survival. The aim of our study was to validate previous candidate signatures in an independent setting and to identify single genes capable of serving as biomarkers for ovarian cancer progression. As several datasets are available in the GEO today, we were able to perform a true meta-analysis. First, 829 samples (11 datasets) were downloaded, and the predictive power of 16 previously published gene sets was assessed. Of these, eight were capable of discriminating histology subtypes, and none was capable of predicting survival. To overcome the differences in previous studies, we used the 829 samples to identify new predictors. Then, we collected 64 ovarian cancer samples (median relapse-free survival 24.5 months) and performed TaqMan real-time polymerase chain reaction (RT-PCR) analysis for the best 40 genes associated with histology subtypes and survival. Over 90% of subtype-associated genes were confirmed. Overall survival was effectively predicted by hormone receptors (PGR and ESR2) and by TSPAN8. Relapse-free survival was predicted by MAPT and SNCG. In summary, we successfully validated several gene sets in a meta-analysis in large datasets of ovarian samples. Additionally, several individual genes identified were validated in a clinical cohort. Copyright © 2011 UICC.

  11. Characteristics and Pathways of Long-Stay Patients in High and Medium Secure Settings in England; A Secondary Publication From a Large Mixed-Methods Study.

    PubMed

    Völlm, Birgit A; Edworthy, Rachel; Huband, Nick; Talbot, Emily; Majid, Shazmin; Holley, Jessica; Furtado, Vivek; Weaver, Tim; McDonald, Ruth; Duggan, Conor

    2018-01-01

    Background: Many patients experience extended stays within forensic care, but the characteristics of long-stay patients are poorly understood. Aims: To describe the characteristics of long-stay patients in high and medium secure settings in England. Method: Detailed file reviews provided clinical, offending and risk data for a large representative sample of 401 forensic patients from 2 of the 3 high secure settings and from 23 of the 57 medium secure settings in England on 1 April 2013. The threshold for long-stay status was defined as 5 years in medium secure care or 10 years in high secure care, or 15 years in a combination of high and medium secure settings. Results: 22% of patients in high security and 18% in medium security met the definition for "long-stay," with 20% staying longer than 20 years. Of the long-stay sample, 58% were violent offenders (22% both sexual and violent), 27% had been convicted for violent or sexual offences whilst in an institutional setting, and 26% had committed a serious assault on staff in the last 5 years. The most prevalent diagnosis was schizophrenia (60%) followed by personality disorder (47%, predominantly antisocial and borderline types); 16% were categorised as having an intellectual disability. Overall, 7% of the long-stay sample had never been convicted of any offence, and 16.5% had no index offence prompting admission. Although some significant differences were found between the high and medium secure samples, there were more similarities than contrasts between these two levels of security. The treatment pathways of these long-stay patients involved multiple moves between settings. An unsuccessful referral to a setting of lower security was recorded over the last 5 years for 33% of the sample. Conclusions: Long-stay patients accounted for one fifth of the forensic inpatient population in England in this representative sample. A significant proportion of this group remain unsettled. 
High levels of personality pathology and the risk of assaults on staff and others within the care setting are likely to impact on treatment and management. Further research into the treatment pathways of longer stay patients is warranted to understand the complex trajectories of this group.


  13. Assessing the Alcohol-BMI Relationship in a US National Sample of College Students

    ERIC Educational Resources Information Center

    Barry, Adam E.; Piazza-Gardner, Anna K.; Holton, M. Kim

    2015-01-01

    Objective: This study sought to assess the body mass index (BMI)-alcohol relationship among a US national sample of college students. Design: Secondary data analysis using the Fall 2011 National College Health Assessment (NCHA). Setting: A total of 44 US higher education institutions. Methods: Participants included a national sample of college…

  14. Principal coordinate analysis assisted chromatographic analysis of bacterial cell wall collection: A robust classification approach.

    PubMed

    Kumar, Keshav; Cava, Felipe

    2018-04-10

    In the present work, principal coordinate analysis (PCoA) is introduced to develop a robust model for classifying chromatographic data sets of peptidoglycan samples. PCoA captures the heterogeneity present in the data sets by using a dissimilarity matrix as input. Thus, in principle, it can capture even subtle differences in bacterial peptidoglycan composition and can provide a more robust and rapid approach for classifying bacterial collections and identifying novel cell wall targets for further biological and clinical studies. The utility of the proposed approach is demonstrated by analysing two different kinds of bacterial collections. The first set comprised peptidoglycan samples belonging to different subclasses of Alphaproteobacteria, whereas the second set, which is more intricate for chemometric analysis, consisted of wild-type Vibrio cholerae and mutants having subtle differences in their peptidoglycan composition. The present work proposes a useful approach for classifying chromatographic data sets of peptidoglycan samples with subtle differences, and suggests that PCoA can be a method of choice in such data analysis workflows. Copyright © 2018 Elsevier Inc. All rights reserved.
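
Principal coordinate analysis (classical multidimensional scaling) as described above can be sketched in a few lines: square the dissimilarity matrix, double-centre it, and take the top eigenpairs as coordinates. The toy "chromatographic profiles" below are invented, and Euclidean distance stands in for whatever dissimilarity measure was actually used.

```python
# Minimal principal coordinate analysis (classical MDS): embed samples
# from a dissimilarity matrix via double-centring and eigendecomposition.
# The two synthetic groups are stand-ins for bacterial peptidoglycan profiles.
import numpy as np

rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 0.1, size=(5, 20))
group_b = rng.normal(1.0, 0.1, size=(5, 20))
profiles = np.vstack([group_a, group_b])

# Pairwise dissimilarity matrix (Euclidean distance here).
diff = profiles[:, None, :] - profiles[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))

# Double-centre the squared dissimilarities: B = -1/2 * J D^2 J.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J

# Principal coordinates from the top eigenpairs of B.
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
coords = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0.0))

# The first coordinate separates the two groups.
print(coords[:, 0])
```

Because PCoA only needs the dissimilarity matrix, any domain-appropriate distance (e.g. one computed from chromatographic peak tables) can be substituted without changing the embedding step.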

  15. Scientific data interpolation with low dimensional manifold model

    DOE PAGES

    Zhu, Wei; Wang, Bao; Barnard, Richard C.; ...

    2017-09-28

    Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.


  17. [Study on discrimination of varieties of fire resistive coating for steel structure based on near-infrared spectroscopy].

    PubMed

    Xue, Gang; Song, Wen-qi; Li, Shu-chao

    2015-01-01

    In order to achieve rapid identification of different brands of fire resistive coating for steel structure in circulation, a new method for fast discrimination of varieties of fire resistive coating for steel structure by near-infrared spectroscopy was proposed. A raster-scanning near-infrared spectroscopy instrument and near-infrared diffuse reflectance spectroscopy were used to collect the spectral curves of different brands of fire resistive coating for steel structure, and the spectral data were preprocessed with standard normal variate (SNV) transformation and the Norris second derivative. Principal component analysis (PCA) was applied to the near-infrared spectra for cluster analysis. The cumulative reliability of PC1 to PC5 was 99.791%. A 3-dimensional plot was drawn with the scores of PC1, PC2 and PC3 × 10, which appeared to provide the best clustering of the varieties of fire resistive coating for steel structure. A total of 150 fire resistive coating samples were divided randomly into a calibration set of 125 samples (25 of each variety) and a validation set of 25 samples (5 of each variety). Based on the principal component scores of unknown samples, Mahalanobis distance values between each variety and the unknown samples were calculated to discriminate the different varieties. The qualitative analysis model gave a 10% recognition ratio for external verification of unknown samples. The results demonstrated that this method can serve as a rapid, accurate way to identify the variety of fire resistive coating for steel structure and provide a technical reference for market regulation.
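
The classification scheme described above (PCA scores plus minimal Mahalanobis distance to each variety's score cluster) can be sketched as follows. The "spectra" are synthetic and scikit-learn's PCA is assumed; only the general PCA-plus-Mahalanobis logic is taken from the abstract.

```python
# Sketch of PCA + Mahalanobis-distance classification: project spectra
# onto principal components, then assign an unknown sample to the class
# with the smallest Mahalanobis distance in PC-score space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_per_class, n_wavelengths = 25, 100

# Two synthetic coating "brands" with different mean spectra.
class_means = [rng.normal(size=n_wavelengths) for _ in range(2)]
train = np.vstack([m + 0.1 * rng.normal(size=(n_per_class, n_wavelengths))
                   for m in class_means])
labels = np.repeat([0, 1], n_per_class)

pca = PCA(n_components=3).fit(train)
scores = pca.transform(train)

def mahalanobis_class(spectrum):
    """Assign a spectrum to the class with minimal Mahalanobis distance."""
    s = pca.transform(spectrum[None])[0]
    dists = []
    for c in (0, 1):
        cls = scores[labels == c]
        cov_inv = np.linalg.inv(np.cov(cls, rowvar=False))
        d = s - cls.mean(axis=0)
        dists.append(float(d @ cov_inv @ d))
    return int(np.argmin(dists))

unknown = class_means[1] + 0.1 * rng.normal(size=n_wavelengths)
print("predicted class:", mahalanobis_class(unknown))
```

In practice the SNV and second-derivative preprocessing mentioned in the abstract would be applied to the spectra before the PCA step.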

  18. [Tobacco quality analysis of producing areas of Yunnan tobacco using near-infrared (NIR) spectrum].

    PubMed

    Wang, Yi; Ma, Xiang; Wen, Ya-Dong; Yu, Chun-Xia; Wang, Luo-Ping; Zhao, Long-Lian; Li, Jun-Hui

    2013-01-01

    In the present study, tobacco quality analysis of different producing areas was carried out using spectrum projection and correlation methods. The data were near-infrared (NIR) spectra, collected in 2010, of the middle parts of the tobacco plant from Hongta Tobacco (Group) Co., Ltd. A total of 1276 superior tobacco leaf samples were collected from four producing areas: three (Yuxi, Chuxiong and Zhaotong, in Yunnan province) growing the tobacco variety K326, and one (Dali) growing the variety Hongda. When the samples were divided randomly into analysis and verification sets at a ratio of 2:1, the verification set corresponded with the analysis set under spectrum projection, since their correlation coefficients for the first- and second-dimensional projections were all above 0.99. The study also discussed a method to obtain quantitative similarity values between samples from different producing areas. These similarity values are instructive for tobacco planting planning, quality management, acquisition of raw tobacco materials and tobacco leaf blending.

  19. Attentional bias to threat in the general population is contingent on target competition, not on attentional control settings.

    PubMed

    Wirth, Benedikt Emanuel; Wentura, Dirk

    2018-04-01

    Dot-probe studies usually find an attentional bias towards threatening stimuli only in anxious participants. Here, we investigated under what conditions such a bias occurs in unselected samples. According to contingent-capture theory, an irrelevant cue only captures attention if it matches an attentional control setting. Therefore, we first tested the hypothesis that an attentional control setting tuned to threat must be activated in (non-anxious) individuals. In Experiment 1, we used a dot-probe task with a manipulation of attentional control settings ('threat' - set vs. control set). Surprisingly, we found an (anxiety-independent) attentional bias to angry faces that was not moderated by attentional control settings. Since we presented two stimuli (i.e., a target and a distractor) on the target screen in Experiment 1 (a necessity to realise the test of contingent capture), but most dot-probe studies only employ a single target, we conducted Experiment 2 to test the hypothesis that attentional bias in the general population is contingent on target competition. Participants performed a dot-probe task, involving presentation of a stand-alone target or a target competing with a distractor. We found an (anxiety-independent) attentional bias towards angry faces in the latter but not the former condition. This suggests that attentional bias towards angry faces in unselected samples is not contingent on attentional control settings but on target competition.

  20. An exploratory study of a text classification framework for Internet-based surveillance of emerging epidemics

    PubMed Central

    Torii, Manabu; Yin, Lanlan; Nguyen, Thang; Mazumdar, Chand T.; Liu, Hongfang; Hartley, David M.; Nelson, Noele P.

    2014-01-01

    Purpose: Early detection of infectious disease outbreaks is crucial to protecting the public health of a society. Online news articles provide timely information on disease outbreaks worldwide. In this study, we investigated automated detection of articles relevant to disease outbreaks using machine learning classifiers. In a real-life setting, it is expensive to prepare a training data set for classifiers, which usually consists of manually labeled relevant and irrelevant articles. To mitigate this challenge, we examined the use of randomly sampled unlabeled articles as well as labeled relevant articles. Methods: Naïve Bayes and Support Vector Machine (SVM) classifiers were trained on 149 relevant and 149 or more randomly sampled unlabeled articles. Diverse classifiers were trained by varying the number of sampled unlabeled articles and also the number of word features. The trained classifiers were applied to 15,000 articles published over 15 days. Top-ranked articles from each classifier were pooled, and the resulting set of 1337 articles was reviewed by an expert analyst to evaluate the classifiers. Results: Daily averages of areas under ROC curves (AUCs) over the 15-day evaluation period were 0.841 and 0.836, respectively, for the naïve Bayes and SVM classifiers. We cross-referenced a database of disease outbreak reports to confirm that the evaluation data set resulting from the pooling method indeed covered incidents recorded in the database during the evaluation period. Conclusions: The proposed text classification framework utilizing randomly sampled unlabeled articles can facilitate a cost-effective approach to training machine learning classifiers in a real-life Internet-based biosurveillance project. We plan to examine this framework further using larger data sets and using articles in non-English languages. PMID:21134784
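
The training setup described above, where randomly sampled unlabeled articles serve as the negative class alongside labeled relevant articles, can be sketched with a tiny bag-of-words naive Bayes model. The article snippets are invented, and scikit-learn's CountVectorizer and MultinomialNB stand in for the authors' feature extraction and classifier.

```python
# Sketch of training a relevance classifier from labeled relevant articles
# plus randomly sampled unlabeled articles treated as pseudo-negatives.
# All article snippets are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

relevant = [
    "outbreak of cholera reported in coastal region",
    "health officials confirm avian influenza cases",
    "dengue fever cases rising after flooding",
]
unlabeled = [  # random sample of the article stream, used as the negative class
    "stock markets close higher on tech rally",
    "city council approves new transit budget",
    "local team wins championship final",
]

texts = relevant + unlabeled
y = [1, 1, 1, 0, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = MultinomialNB().fit(X, y)

new = ["officials investigate suspected measles outbreak"]
prob_relevant = clf.predict_proba(vec.transform(new))[0, 1]
print(f"P(relevant) = {prob_relevant:.2f}")
```

The key assumption, as in the paper, is that truly relevant articles are rare enough in the stream that a random sample behaves approximately like a negative class.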

  1. Association analysis of the beta-3 adrenergic receptor Trp64Arg (rs4994) polymorphism with urate and gout.

    PubMed

    Fatima, Tahzeeb; Altaf, Sara; Phipps-Green, Amanda; Topless, Ruth; Flynn, Tanya J; Stamp, Lisa K; Dalbeth, Nicola; Merriman, Tony R

    2016-02-01

    The Arg64 allele of variant rs4994 (Trp64Arg) in the β3-adrenergic receptor gene has been associated with increased serum urate and risk of gout. Our objective was to investigate the relationship of rs4994 with serum urate and gout in New Zealand European, Māori and Pacific subjects. A total of 1730 clinically ascertained gout cases and 2145 controls were genotyped for rs4994 by Taqman(®). Māori and Pacific subjects were subdivided into Eastern Polynesian (EP) and Western Polynesian (WP) sample sets. Publicly available genotype data from the Atherosclerosis Risk in Communities Study and the Framingham Heart Study were utilized for serum urate association analysis. Multivariate logistic and linear regression adjusted for potential confounders was carried out using R version 2.15.2. No significant association of the minor Arg64 (G) allele of rs4994 with gout was found in the combined Polynesian cohorts (OR = 0.98, P = 0.88), although there was evidence, after adjustment for renal disease, for association in both the WP (OR = 0.53, P = 0.03) and the lower Polynesian ancestry EP sample sets (OR = 1.86, P = 0.05). There was no evidence for association with gout in the European sample set (OR = 1.11, P = 0.57). However, the Arg64 allele was positively associated with urate in the WP data set (β = 0.036, P = 0.004, P Corrected = 0.032). Association of the Arg64 variant with increased urate in the WP sample set was consistent with the previous literature, although the protective effect of this variant with gout in WP was inconsistent. This association provides an etiological link between metabolic syndrome components and urate homeostasis.

  2. Hierarchical cluster analysis of technical replicates to identify interferents in untargeted mass spectrometry metabolomics.

    PubMed

    Caesar, Lindsay K; Kvalheim, Olav M; Cech, Nadja B

    2018-08-27

    Mass spectral data sets often contain experimental artefacts, and data filtering prior to statistical analysis is crucial to extract reliable information. This is particularly true in untargeted metabolomics analyses, where the analyte(s) of interest are not known a priori. It is often assumed that chemical interferents (i.e. solvent contaminants such as plasticizers) are consistent across samples, and can be removed by background subtraction from blank injections. On the contrary, it is shown here that chemical contaminants may vary in abundance across each injection, potentially leading to their misidentification as relevant sample components. With this metabolomics study, we demonstrate the effectiveness of hierarchical cluster analysis (HCA) of replicate injections (technical replicates) as a methodology to identify chemical interferents and reduce their contaminating contribution to metabolomics models. Pools of metabolites with varying complexity were prepared from the botanical Angelica keiskei Koidzumi and spiked with known metabolites. Each set of pools was analyzed in triplicate and at multiple concentrations using ultraperformance liquid chromatography coupled to mass spectrometry (UPLC-MS). Before filtering, HCA failed to cluster replicates in the data sets. To identify contaminant peaks, we developed a filtering process that evaluated the relative peak area variance of each variable within triplicate injections. These interferent peaks were found across all samples, but did not show consistent peak area from injection to injection, even when evaluating the same chemical sample. This filtering process identified 128 ions that appear to originate from the UPLC-MS system. Data sets collected for a high number of pools with comparatively simple chemical composition were highly influenced by these chemical interferents, as were samples that were analyzed at a low concentration. 
When chemical interferent masses were removed, technical replicates clustered in all data sets. This work highlights the importance of technical replication in mass spectrometry-based studies, and presents a new application of HCA as a tool for evaluating the effectiveness of data filtering prior to statistical analysis. Copyright © 2018 Elsevier B.V. All rights reserved.
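    The within-replicate variance filter described above can be sketched briefly. Below is a minimal Python illustration with toy peak areas and a hypothetical CV threshold; the paper's exact filtering statistic may differ. A feature is flagged as a likely interferent when its peak area varies strongly across triplicate injections of the same chemical sample.

```python
import statistics

def within_replicate_cv(values):
    """Coefficient of variation of peak areas across technical replicates."""
    m = statistics.mean(values)
    return statistics.stdev(values) / m if m else 0.0

def flag_interferents(peak_table, replicate_groups, cv_threshold=0.5):
    """peak_table: {feature_id: {injection_id: peak_area}}.
    replicate_groups: lists of injection ids forming one sample's triplicate.
    Flags features whose average within-replicate CV exceeds the threshold."""
    flagged = []
    for feat, areas in peak_table.items():
        cvs = [within_replicate_cv([areas[i] for i in grp])
               for grp in replicate_groups]
        if statistics.mean(cvs) > cv_threshold:
            flagged.append(feat)
    return flagged

# Toy data: feature "m151" is stable within triplicates, "m391" is erratic.
table = {
    "m151": {"a1": 100, "a2": 104, "a3": 98, "b1": 250, "b2": 244, "b3": 252},
    "m391": {"a1": 10, "a2": 90, "a3": 200, "b1": 5, "b2": 150, "b3": 40},
}
groups = [["a1", "a2", "a3"], ["b1", "b2", "b3"]]
flagged = flag_interferents(table, groups)  # == ["m391"]
```

    After removing flagged features, one would re-run hierarchical clustering to check that technical replicates now group together.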

  3. Finnish Parents' Attitudes toward Entrepreneurship Education

    ERIC Educational Resources Information Center

    Räty, Hannu; Korhonen, Maija; Kasanen, Kati; Komulainen, Katri; Rautiainen, Riitta; Siivonen, Päivi

    2016-01-01

    This study set out to investigate parental attitudes toward entrepreneurship education as evaluative directing components of social representations. A nationwide sample of parents (N = 625) was asked to indicate their opinions on a set of statements about entrepreneurship education. The parents' attitudinal orientation suggested that they would…

  4. Professional School Counseling (PSC) Publication Pattern Review: A Meta-Study of Author and Article Characteristics from the First 15 Years

    ERIC Educational Resources Information Center

    Erford, Bradley T.; Giguere, Monica; Glenn, Kacie; Ciarlone, Hallie

    2015-01-01

    Patterns of articles published in "Professional School Counseling" (PSC) from the first 15 volumes were reviewed in this meta-study. Author characteristics (e.g., sex, employment setting, nation of domicile) and article characteristics (e.g., topic, type, design, sample, sample size, participant type, statistical procedures and…

  5. The Relationship between Emotional Intelligence and Problem Solving Skills in Prospective Teachers

    ERIC Educational Resources Information Center

    Deniz, Sabahattin

    2013-01-01

    This study aims to investigate the relationship between emotional intelligence and problem solving. The sample set of the research was taken from the Faculty of Education of Mugla University by the random sampling method. The participants were 386 students--prospective teachers--(224 females; 182 males) who took part in the study voluntarily.…

  6. Maximizing the reliability of genomic selection by optimizing the calibration set of reference individuals: comparison of methods in two diverse groups of maize inbreds (Zea mays L.).

    PubMed

    Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L

    2012-10-01

    Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNPs array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.

  7. Dried blood spot measurement of pregnancy-associated plasma protein A (PAPP-A) and free β-subunit of human chorionic gonadotropin (β-hCG) from a low-resource setting.

    PubMed

    Browne, J L; Schielen, P C J I; Belmouden, I; Pennings, J L A; Klipstein-Grobusch, K

    2015-06-01

    The objective of this article is to compare pregnancy-associated plasma protein A (PAPP-A) and free β-subunit of human chorionic gonadotropin (β-hCG) concentrations in dried blood spots (DBSs) with those in serum samples obtained from a public hospital in a low-resource setting, and to evaluate their stability. Serum and DBS samples were obtained by venipuncture and finger prick from 50 pregnant participants in a cohort study in a public hospital in Accra, Ghana. PAPP-A and β-hCG concentrations from serum and DBS were measured with an AutoDELFIA® (PerkinElmer, Turku, Finland) automatic immunoassay. Correlation and Passing-Bablok regression analyses were performed to compare marker levels. High correlation (>0.9) was observed for PAPP-A and β-hCG levels between the sampling techniques. The β-hCG concentration was stable between DBS and serum, whereas the PAPP-A concentration was consistently lower in DBS. Our findings suggest that β-hCG can be reliably collected from DBS in low-resource tropical settings. The exact conditions of the clinical workflow necessary for reliable PAPP-A measurement in these settings need to be further developed in the future. These findings could have implications for the feasibility of prenatal screening programs in low-income and middle-income countries, as DBS provides a minimally invasive alternative sampling method, with advantages in sampling technique, stability, logistics, and potential application in low-resource settings. © 2015 John Wiley & Sons, Ltd.
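    The slope of a method-comparison fit is what makes a "consistently lower in DBS" finding concrete. The sketch below uses a Theil-Sen-style median of pairwise slopes as a simplification of the full Passing-Bablok procedure (which additionally shifts the median and treats negative slopes specially); the paired values are invented.

```python
import statistics

def pairwise_slope_fit(x, y):
    """Robust method-comparison regression: median of all pairwise slopes,
    then a median intercept. A simplified stand-in for Passing-Bablok."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if x[j] != x[i]]
    b = statistics.median(slopes)
    a = statistics.median(yv - b * xv for xv, yv in zip(x, y))
    return a, b

# Hypothetical paired PAPP-A measurements: serum vs dried blood spot.
serum = [1.0, 2.0, 3.0, 4.0, 5.0]
dbs = [0.6, 1.3, 1.9, 2.5, 3.2]
a, b = pairwise_slope_fit(serum, dbs)  # b ≈ 0.64: DBS reads lower than serum
```

    A slope well below 1 with a near-zero intercept is the signature of a proportional, systematic underestimate, which is recoverable by calibration rather than a sign of unreliable sampling.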

  8. An industry consensus study on an HPLC fluorescence method for the determination of (±)-catechin and (±)-epicatechin in cocoa and chocolate products.

    PubMed

    Shumow, Laura; Bodor, Alison

    2011-07-05

    This manuscript describes the results of an HPLC study for the determination of the flavan-3-ol monomers, (±)-catechin and (±)-epicatechin, in cocoa and plain dark and milk chocolate products. The study was performed under the auspices of the National Confectioners Association (NCA) and involved the analysis of a series of samples by laboratories of five member companies using a common method. The method reported in this paper uses reversed phase HPLC with fluorescence detection to analyze (±)-epicatechin and (±)-catechin extracted with an acidic solvent from defatted cocoa and chocolate. In addition to a variety of cocoa and chocolate products, the sample set included a blind duplicate used to assess method reproducibility. All data were subjected to statistical analysis with outliers eliminated from the data set. The percent coefficient of variation (%CV) of the sample set ranged from approximately 7 to 15%. Further experimental details are described in the body of the manuscript and the results indicate the method is suitable for the determination of (±)-catechin and (±)-epicatechin in cocoa and chocolate products and represents the first collaborative study of this HPLC method for these compounds in these matrices.
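    The reproducibility statistic quoted above is the percent coefficient of variation across laboratories. A quick sketch with invented inter-laboratory values:

```python
import statistics

def percent_cv(values):
    """Percent coefficient of variation: 100 * sample SD / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical (-)-epicatechin results (mg/g) for one blind duplicate
# analyzed by five member-company laboratories:
labs = [2.10, 2.35, 1.95, 2.20, 2.48]
cv = percent_cv(labs)  # ≈ 9.4%, inside the 7-15% range reported
```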

  9. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built, and a multilayer perceptron model based on the neural network approach was implemented. The two models were then compared. The results revealed that the proposed approach was practicable for optimizing the soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge exactly, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, providing an effective means as well as a theoretical basis for determining sampling configurations and mapping the spatial distribution of soil organic matter with low cost and high efficiency.
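    A simulated annealing optimizer of the kind described can be sketched generically. This is not the authors' implementation: candidate sites lie along a toy "road", and the objective (an assumption for illustration) is the mean distance from study-area grid points to their nearest selected site.

```python
import math, random

def coverage_cost(selected, candidates, grid):
    """Mean distance from each grid point to its nearest selected site."""
    return sum(min(math.dist(g, candidates[i]) for i in selected)
               for g in grid) / len(grid)

def anneal(candidates, grid, n_sites, steps=500, t0=1.0, cooling=0.99, seed=0):
    """Simulated annealing over site subsets: swap one selected site for an
    unselected one; accept worse configurations with probability exp(-dE/T)."""
    rng = random.Random(seed)
    current = set(rng.sample(range(len(candidates)), n_sites))
    cost = coverage_cost(current, candidates, grid)
    t = t0
    for _ in range(steps):
        out = rng.choice(sorted(current))
        free = [i for i in range(len(candidates)) if i not in current]
        trial = (current - {out}) | {rng.choice(free)}
        c = coverage_cost(trial, candidates, grid)
        if c < cost or rng.random() < math.exp((cost - c) / t):
            current, cost = trial, c
        t *= cooling
    return sorted(current), cost

# Toy study area: candidate sites along a "road", a square sampling grid.
road = [(x, 0.1 * x) for x in range(20)]
grid = [(x, y) for x in range(0, 20, 2) for y in range(0, 20, 2)]
sites, cost = anneal(road, grid, n_sites=5)
```

    The slow cooling schedule makes the search greedy only in its late stages, which is what lets it escape locally optimal configurations early on.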

  10. Study unique artistic Lopburi Province for design brass tea set of Bantahkrayang Community

    NASA Astrophysics Data System (ADS)

    Pliansiri, V.; Seviset, S.

    2017-07-01

    The objectives of this study were: 1) to study the production process of the handcrafted brass tea set; and 2) to design and develop the handcrafted brass tea set. The design process began with mutual analytical processes and a conceptual framework for product design: Quality Function Deployment, the Theory of Inventive Problem Solving, Principles of Craft Design, and Principles of Reverse Engineering. Experts in the fields of both industrial product design and brass handicraft products evaluated the brass tea set design, and a prototype was created and assessed by a sample of consumers who had previously bought the brass tea set of the Bantahkrayang Community. The statistical methods used were percentage, mean (X̄) and standard deviation (S.D.). Consumer satisfaction with the handcrafted brass tea set was at a high level.

  11. Consideration of sample return and the exploration strategy for Mars

    NASA Technical Reports Server (NTRS)

    Bogard, D. C.; Duke, M. B.; Gibson, E. K.; Minear, J. W.; Nyquist, L. E.; Phinney, W. C.

    1979-01-01

    The scientific rationale and requirements for a Mars surface sample return were examined and the experience gained from the analysis and study of the returned lunar samples were incorporated into the science requirements and engineering design for the Mars sample return mission. The necessary data sets for characterizing Mars are presented. If further analyses of surface samples are to be made, the best available method is for the analysis to be conducted in terrestrial laboratories.

  12. Validation of reference genes aiming accurate normalization of qRT-PCR data in Dendrocalamus latiflorus Munro.

    PubMed

    Liu, Mingying; Jiang, Jing; Han, Xiaojiao; Qiao, Guirong; Zhuo, Renying

    2014-01-01

    Dendrocalamus latiflorus Munro distributes widely in subtropical areas and plays vital roles as valuable natural resources. The transcriptome sequencing for D. latiflorus Munro has been performed and numerous genes especially those predicted to be unique to D. latiflorus Munro were revealed. qRT-PCR has become a feasible approach to uncover gene expression profiling, and the accuracy and reliability of the results obtained depends upon the proper selection of stable reference genes for accurate normalization. Therefore, a set of suitable internal controls should be validated for D. latiflorus Munro. In this report, twelve candidate reference genes were selected and the assessment of gene expression stability was performed in ten tissue samples and four leaf samples from seedlings and anther-regenerated plants of different ploidy. The PCR amplification efficiency was estimated, and the candidate genes were ranked according to their expression stability using three software packages: geNorm, NormFinder and Bestkeeper. GAPDH and EF1α were characterized to be the most stable genes among different tissues or in all the sample pools, while CYP showed low expression stability. RPL3 had the optimal performance among four leaf samples. The application of verified reference genes was illustrated by analyzing ferritin and laccase expression profiles among different experimental sets. The analysis revealed the biological variation in ferritin and laccase transcript expression among the tissues studied and the individual plants. geNorm, NormFinder, and BestKeeper analyses recommended different suitable reference gene(s) for normalization according to the experimental sets. GAPDH and EF1α had the highest expression stability across different tissues and RPL3 for the other sample set. This study emphasizes the importance of validating superior reference genes for qRT-PCR analysis to accurately normalize gene expression of D. latiflorus Munro.
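    A geNorm-style stability measure can be sketched in a few lines. The code below illustrates the published geNorm idea (a gene's M value is the average standard deviation of its pairwise log2 expression ratios against every other candidate; lower M means more stable) with invented expression values; the actual software packages differ in detail.

```python
import math, statistics

def genorm_m(expr):
    """expr: {gene: [relative expression across samples]}.
    Returns each gene's geNorm-style stability value M (lower = more stable)."""
    genes = list(expr)
    m = {}
    for g in genes:
        sds = []
        for h in genes:
            if h == g:
                continue
            ratios = [math.log2(a / b) for a, b in zip(expr[g], expr[h])]
            sds.append(statistics.stdev(ratios))
        m[g] = statistics.mean(sds)
    return m

# Toy relative-expression data across four tissue samples:
expr = {
    "GAPDH": [1.00, 1.05, 0.98, 1.02],
    "EF1a":  [0.95, 1.00, 0.97, 1.01],
    "CYP":   [1.00, 2.40, 0.40, 1.80],
}
m = genorm_m(expr)
best = min(m, key=m.get)  # one of the two stable genes, GAPDH or EF1a
```

    In this toy data, as in the study's findings, the erratic candidate (CYP) receives the worst stability value and would be excluded as a reference gene.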

  13. The DINGO dataset: a comprehensive set of data for the SAMPL challenge

    NASA Astrophysics Data System (ADS)

    Newman, Janet; Dolezal, Olan; Fazio, Vincent; Caradoc-Davies, Tom; Peat, Thomas S.

    2012-05-01

    Part of the latest SAMPL challenge was to predict how a small fragment library of 500 commercially available compounds would bind to a protein target. In order to assess the modellers' work, a reasonably comprehensive set of data was collected using a number of techniques. These included surface plasmon resonance, isothermal titration calorimetry, protein crystallization and protein crystallography. Using these techniques we could determine the kinetics of fragment binding, the energy of binding, how this affects the ability of the target to crystallize, and when the fragment did bind, the pose or orientation of binding. Both the final data set and all of the raw images have been made available to the community for scrutiny and further work. This overview sets out to give the parameters of the experiments done and what might be done differently for future studies.

  14. Inference of combinatorial Boolean rules of synergistic gene sets from cancer microarray datasets.

    PubMed

    Park, Inho; Lee, Kwang H; Lee, Doheon

    2010-06-15

    Gene set analysis has become an important tool for the functional interpretation of high-throughput gene expression datasets. Moreover, pattern analyses based on inferred gene set activities of individual samples have shown the ability to identify more robust disease signatures than individual gene-based pattern analyses. Although a number of approaches have been proposed for gene set-based pattern analysis, the combinatorial influence of deregulated gene sets on disease phenotype classification has not been studied sufficiently. We propose a new approach for inferring combinatorial Boolean rules of gene sets for a better understanding of cancer transcriptome and cancer classification. To reduce the search space of the possible Boolean rules, we identify small groups of gene sets that synergistically contribute to the classification of samples into their corresponding phenotypic groups (such as normal and cancer). We then measure the significance of the candidate Boolean rules derived from each group of gene sets; the level of significance is based on the class entropy of the samples selected in accordance with the rules. By applying the present approach to publicly available prostate cancer datasets, we identified 72 significant Boolean rules. Finally, we discuss several identified Boolean rules, such as the rule of glutathione metabolism (down) and prostaglandin synthesis regulation (down), which are consistent with known prostate cancer biology. Scripts written in Python and R are available at http://biosoft.kaist.ac.kr/~ihpark/. The refined gene sets and the full list of the identified Boolean rules are provided in the Supplementary Material. Supplementary data are available at Bioinformatics online.
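    The significance measure described, the class entropy of the samples selected by a Boolean rule over gene-set activities, can be sketched directly. The toy activity calls and labels below are invented, and the discretization used in the paper is not reproduced:

```python
import math

def class_entropy(labels):
    """Shannon entropy (bits) of the class labels of a sample subset."""
    n = len(labels)
    ent = 0.0
    for c in set(labels):
        p = labels.count(c) / n
        ent -= p * math.log2(p)
    return ent

def rule_entropy(activity, rule, labels):
    """activity: {gene_set: [bool per sample]}. Selects the samples that
    satisfy the Boolean rule and returns them with their class entropy;
    entropy near 0 means the rule cleanly isolates one phenotype."""
    selected = [labels[j] for j in range(len(labels))
                if rule({gs: vals[j] for gs, vals in activity.items()})]
    return selected, class_entropy(selected)

# Toy data: two gene-set activity calls over 8 samples (4 normal, 4 tumour).
labels = ['N', 'N', 'N', 'N', 'T', 'T', 'T', 'T']
activity = {
    'GSH_down': [False, False, True, False, True, True, True, True],
    'PG_down':  [False, True, False, False, True, True, True, False],
}
selected, ent = rule_entropy(activity,
                             lambda a: a['GSH_down'] and a['PG_down'], labels)
```

    Here the conjunctive rule selects only tumour samples, so the entropy is zero, the pattern the paper's significance test rewards.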

  15. The Role of Presented Objects in Deriving Color Preference Criteria from Psychophysical Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royer, Michael P.; Wei, Minchen

    Of the many “components” of a color rendering measure, one is perhaps the most important: the set of color samples (spectral reflectance functions) that are employed as a standardized means of evaluating and rating a light source. At the same time, a standardized set of color samples can never apply perfectly to a real space or a real set of observed objects, meaning there will always be some level of mismatch between the predicted and observed color shifts. This mismatch is important for lighting specifiers to consider, but even more critical for experiments that seek to evaluate the relationship between color rendering measures and human perception. This article explores how the color distortions of three possible experimental object sets compare to the color distortions predicted using the color evaluation samples of IES TM-30-15 (TM-30). The experimental object sets include those from Royer and colleagues [2016], a set of produce (10 fruits and vegetables), and the X-rite Color Checker Classic. The differences are traced back to properties of the sample sets, such as the coverage of color space, average chroma level, and specific spectral features. The consequence of the differences, that the visual evaluation is based on color distortions that are substantially different from what is predicted, can lead to inaccurate criteria or models of a given perception, such as preference. To minimize the error in using criteria or models when specifying color rendering attributes for a given application, the criteria or models should be developed using a set of experimental objects that matches the typical objects of the application as closely as possible. Alternatively, if typical objects of an application cannot be reasonably determined, an object set that matches the distortions predicted by TM-30 as closely as possible is likely to provide the most meaningful results.

  16. The evolution of phylogeographic data sets.

    PubMed

    Garrick, Ryan C; Bonatelli, Isabel A S; Hyseni, Chaz; Morales, Ariadna; Pelletier, Tara A; Perez, Manolo F; Rice, Edwin; Satler, Jordan D; Symula, Rebecca E; Thomé, Maria Tereza C; Carstens, Bryan C

    2015-03-01

    Empirical phylogeographic studies have progressively sampled greater numbers of loci over time, in part motivated by theoretical papers showing that estimates of key demographic parameters improve as the number of loci increases. Recently, next-generation sequencing has been applied to questions about organismal history, with the promise of revolutionizing the field. However, no systematic assessment of how phylogeographic data sets have changed over time with respect to overall size and information content has been performed. Here, we quantify the changing nature of these genetic data sets over the past 20 years, focusing on papers published in Molecular Ecology. We found that the number of independent loci, the total number of alleles sampled and the total number of single nucleotide polymorphisms (SNPs) per data set has improved over time, with particularly dramatic increases within the past 5 years. Interestingly, uniparentally inherited organellar markers (e.g. animal mitochondrial and plant chloroplast DNA) continue to represent an important component of phylogeographic data. Single-species studies (cf. comparative studies) that focus on vertebrates (particularly fish and to some extent, birds) represent the gold standard of phylogeographic data collection. Based on the current trajectory seen in our survey data, forecast modelling indicates that the median number of SNPs per data set for studies published by the end of the year 2016 may approach ~20,000. This survey provides baseline information for understanding the evolution of phylogeographic data sets and underscores the fact that development of analytical methods for handling very large genetic data sets will be critical for facilitating growth of the field. © 2015 John Wiley & Sons Ltd.

  17. Inhibition of Sodium Benzoate on Stainless Steel in Tropical Seawater

    NASA Astrophysics Data System (ADS)

    Seoh, S. Y.; Senin, H. B.; Nik, W. N. Wan; Amin, M. M.

    2007-05-01

    The ability of sodium benzoate to inhibit corrosion of stainless steel was studied in seawater at room temperature. Three sets of samples were immersed in seawater containing sodium benzoate at concentrations of 0.3 M, 0.6 M and 1.0 M, respectively; one set of samples was immersed in seawater without any added sodium benzoate. The highest corrosion rate was observed for the stainless steel in seawater with no inhibitor added. As the concentration of sodium benzoate increased, the corrosion rate decreased. The results show that the addition of 1.0 M sodium benzoate to the seawater samples gave inhibition efficiencies of ≥ 90%.

  18. A sampling approach for predicting the eating quality of apples using visible-near infrared spectroscopy.

    PubMed

    Martínez Vega, Mabel V; Sharifzadeh, Sara; Wulfsohn, Dvoralai; Skov, Thomas; Clemmensen, Line Harder; Toldam-Andersen, Torben B

    2013-12-01

    Visible-near infrared spectroscopy remains a method of increasing interest as a fast alternative for the evaluation of fruit quality. The success of the method is assumed to be achieved by using large sets of samples to produce robust calibration models. In this study we used representative samples of an early and a late season apple cultivar to evaluate model robustness (in terms of prediction ability and error) for soluble solids content (SSC) and acidity prediction, in the wavelength range 400-1100 nm. A total of 196 middle-early season ('Aroma') and 219 late season ('Holsteiner Cox') apple (Malus domestica Borkh.) samples were used to construct spectral models for SSC and acidity. Partial least squares (PLS), ridge regression (RR) and elastic net (EN) models were used to build prediction models. Furthermore, we compared three sub-sample arrangements for forming training and test sets ('smooth fractionator', by date of measurement after harvest, and random). Using the 'smooth fractionator' sampling method, fewer spectral bands (26) and elastic net resulted in improved performance for SSC models of 'Aroma' apples, with a coefficient of variation CVSSC = 13%. The model showed consistently low errors and bias (PLS/EN: R(2) cal = 0.60/0.60; SEC = 0.88/0.88°Brix; Biascal = 0.00/0.00; R(2) val = 0.33/0.44; SEP = 1.14/1.03; Biasval = 0.04/0.03). However, the prediction of acidity and SSC (CV = 5%) for the late cultivar 'Holsteiner Cox' produced inferior results compared with 'Aroma'. It was possible to construct local SSC and acidity calibration models for early season apple cultivars with CVs of SSC and acidity around 10%. The overall model performance on these data sets also depends on the proper selection of training and test sets. The 'smooth fractionator' protocol provided an objective method for obtaining training and test sets that capture the existing variability of the fruit samples for construction of visible-NIR prediction models. 
The implication is that by using such 'efficient' sampling methods for obtaining an initial sample of fruit that represents the variability of the population and for sub-sampling to form training and test sets it should be possible to use relatively small sample sizes to develop spectral predictions of fruit quality. Using feature selection and elastic net appears to improve the SSC model performance in terms of R(2), RMSECV and RMSEP for 'Aroma' apples. © 2013 Society of Chemical Industry.
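    The systematic flavor of the 'smooth fractionator' idea can be illustrated with a simplified sketch (an assumption for illustration: the published protocol is more elaborate): sort samples along a covariate such as SSC, then deal every k-th sample to the test set so both sets span the observed variability.

```python
def smooth_split(samples, key, test_every=3):
    """Systematic train/test split along a sorted covariate: every
    `test_every`-th sample in covariate order goes to the test set,
    so both sets cover the full range of variability."""
    ordered = sorted(samples, key=key)
    train, test = [], []
    for i, s in enumerate(ordered):
        (test if i % test_every == test_every - 1 else train).append(s)
    return train, test

# Toy apples described by (id, SSC in degrees Brix):
apples = list(enumerate([10.2, 13.5, 11.1, 12.8, 9.7, 14.0,
                         11.9, 12.2, 10.8, 13.1, 9.9, 12.5]))
train, test = smooth_split(apples, key=lambda a: a[1])
```

    Unlike a random split, this guarantees the test set contains low-, mid- and high-SSC fruit, which is the property the abstract credits for more reliable validation.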

  19. The topology of large-scale structure. III - Analysis of observations

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.; Weinberg, David H.; Gammie, Charles; Polk, Kevin; Vogeley, Michael; Jeffrey, Scott; Bhavsar, Suketu P.; Melott, Adrian L.; Giovanelli, Riccardo; Hayes, Martha P.; Tully, R. Brent; Hamilton, Andrew J. S.

    1989-05-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.

  20. The topology of large-scale structure. III - Analysis of observations. [in universe

    NASA Technical Reports Server (NTRS)

    Gott, J. Richard, III; Weinberg, David H.; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.

    1989-01-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.

  1. Constructing a Reward-Related Quality of Life Statistic in Daily Life-a Proof of Concept Study Using Positive Affect.

    PubMed

    Verhagen, Simone J W; Simons, Claudia J P; van Zelst, Catherine; Delespaul, Philippe A E G

    2017-01-01

    Background: Mental healthcare needs person-tailored interventions. Experience Sampling Method (ESM) can provide daily life monitoring of personal experiences. This study aims to operationalize and test a measure of momentary reward-related Quality of Life (rQoL). Intuitively, quality of life improves by spending more time on rewarding experiences. ESM clinical interventions can use this information to coach patients to find a realistic, optimal balance of positive experiences (maximize reward) in daily life. rQoL combines the frequency of engaging in a relevant context (a 'behavior setting') with concurrent (positive) affect. High rQoL occurs when the most frequent behavior settings are combined with positive affect or infrequent behavior settings co-occur with low positive affect. Methods: Resampling procedures (Monte Carlo experiments) were applied to assess the reliability of rQoL using various behavior setting definitions under different sampling circumstances, for real or virtual subjects with low-, average- and high contextual variability. Furthermore, resampling was used to assess whether rQoL is a distinct concept from positive affect. Virtual ESM beep datasets were extracted from 1,058 valid ESM observations for virtual and real subjects. Results: Behavior settings defined by Who-What contextual information were most informative. Simulations of at least 100 ESM observations are needed for reliable assessment. Virtual ESM beep datasets of a real subject can be defined by Who-What-Where behavior setting combinations. Large sample sizes are necessary for reliable rQoL assessments, except for subjects with low contextual variability. rQoL is distinct from positive affect. Conclusion: rQoL is a feasible concept. Monte Carlo experiments should be used to assess the reliable implementation of an ESM statistic. Future research in ESM should assess the behavior of summary statistics under different sampling situations. 
This exploration is especially relevant in clinical implementation, where often only small datasets are available.
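    The abstract does not give the rQoL formula, so the following sketch is only one plausible operationalization (an assumption, not the authors' statistic): the correlation, across behavior settings, between how often a setting occurs and its mean positive affect. This is high exactly when frequent settings carry high positive affect and infrequent settings carry low positive affect, and, unlike mean positive affect itself, it depends on the time budget.

```python
import math, statistics

def rqol(observations):
    """observations: list of (behavior_setting, positive_affect) ESM beeps.
    Hypothetical rQoL: Pearson correlation, across behavior settings,
    between a setting's frequency and its mean positive affect."""
    groups = {}
    for setting, pa in observations:
        groups.setdefault(setting, []).append(pa)
    freqs = [len(v) for v in groups.values()]
    means = [statistics.mean(v) for v in groups.values()]
    mf, mp = statistics.mean(freqs), statistics.mean(means)
    num = sum((f - mf) * (p - mp) for f, p in zip(freqs, means))
    den = math.sqrt(sum((f - mf) ** 2 for f in freqs)
                    * sum((p - mp) ** 2 for p in means))
    return num / den

# Toy ESM record with Who-What settings: the most frequent setting is also
# the most pleasant, so this rQoL variant is high.
beeps = [("work-meeting", 3), ("work-meeting", 4), ("home-relaxing", 6),
         ("home-relaxing", 5), ("home-relaxing", 6), ("commute-alone", 2)]
```

    Resampling virtual beep datasets of different sizes from such records, as the paper does with Monte Carlo experiments, would then show how many observations this statistic needs before it stabilizes.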

  2. Validation of a Multimarker Model for Assessing Risk of Type 2 Diabetes from a Five-Year Prospective Study of 6784 Danish People (Inter99)

    PubMed Central

    Urdea, Mickey; Kolberg, Janice; Wilber, Judith; Gerwien, Robert; Moler, Edward; Rowe, Michael; Jorgensen, Paul; Hansen, Torben; Pedersen, Oluf; Jørgensen, Torben; Borch-Johnsen, Knut

    2009-01-01

    Background Improved identification of subjects at high risk for development of type 2 diabetes would allow preventive interventions to be targeted toward individuals most likely to benefit. In previous research, predictive biomarkers were identified and used to develop multivariate models to assess an individual's risk of developing diabetes. Here we describe the training and validation of the PreDx™ Diabetes Risk Score (DRS) model in a clinical laboratory setting using baseline serum samples from subjects in the Inter99 cohort, a population-based primary prevention study of cardiovascular disease. Methods Among 6784 subjects free of diabetes at baseline, 215 subjects progressed to diabetes (converters) during five years of follow-up. A nested case-control study was performed using serum samples from 202 converters and 597 randomly selected nonconverters. Samples were randomly assigned to equally sized training and validation sets. Seven biomarkers were measured using assays developed for use in a clinical reference laboratory. Results The PreDx DRS model performed better on the training set (area under the curve [AUC] = 0.837) than fasting plasma glucose alone (AUC = 0.779). When applied to the sequestered validation set, the PreDx DRS showed the same performance (AUC = 0.838), thus validating the model. This model had a better AUC than any other single measure from a fasting sample. Moreover, the model provided further risk stratification among high-risk subpopulations with impaired fasting glucose or metabolic syndrome. Conclusions The PreDx DRS provides the absolute risk of diabetes conversion in five years for subjects identified to be “at risk” using the clinical factors. PMID:20144324

  3. Validation of a multimarker model for assessing risk of type 2 diabetes from a five-year prospective study of 6784 Danish people (Inter99).

    PubMed

    Urdea, Mickey; Kolberg, Janice; Wilber, Judith; Gerwien, Robert; Moler, Edward; Rowe, Michael; Jorgensen, Paul; Hansen, Torben; Pedersen, Oluf; Jørgensen, Torben; Borch-Johnsen, Knut

    2009-07-01

    Improved identification of subjects at high risk for development of type 2 diabetes would allow preventive interventions to be targeted toward individuals most likely to benefit. In previous research, predictive biomarkers were identified and used to develop multivariate models to assess an individual's risk of developing diabetes. Here we describe the training and validation of the PreDx Diabetes Risk Score (DRS) model in a clinical laboratory setting using baseline serum samples from subjects in the Inter99 cohort, a population-based primary prevention study of cardiovascular disease. Among 6784 subjects free of diabetes at baseline, 215 subjects progressed to diabetes (converters) during five years of follow-up. A nested case-control study was performed using serum samples from 202 converters and 597 randomly selected nonconverters. Samples were randomly assigned to equally sized training and validation sets. Seven biomarkers were measured using assays developed for use in a clinical reference laboratory. The PreDx DRS model performed better on the training set (area under the curve [AUC] = 0.837) than fasting plasma glucose alone (AUC = 0.779). When applied to the sequestered validation set, the PreDx DRS showed the same performance (AUC = 0.838), thus validating the model. This model had a better AUC than any other single measure from a fasting sample. Moreover, the model provided further risk stratification among high-risk subpopulations with impaired fasting glucose or metabolic syndrome. The PreDx DRS provides the absolute risk of diabetes conversion in five years for subjects identified to be "at risk" using the clinical factors. Copyright 2009 Diabetes Technology Society.
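    The AUC figures quoted above can be computed without constructing an ROC curve, via the Mann-Whitney formulation: the probability that a randomly chosen converter receives a higher risk score than a randomly chosen nonconverter (ties counting one half). A self-contained sketch with invented scores:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the fraction of (positive, negative)
    pairs ranked correctly; ties contribute 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical risk scores for converters vs nonconverters:
converters = [0.82, 0.64, 0.91, 0.55, 0.73]
nonconverters = [0.31, 0.42, 0.58, 0.22, 0.49, 0.36]
result = auc(converters, nonconverters)  # == 29/30, about 0.967
```

    An AUC of 0.838, as reported for the validation set, means the model ranks a random converter above a random nonconverter about 84% of the time.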

  4. ASTM clustering for improving coal analysis by near-infrared spectroscopy.

    PubMed

    Andrés, J M; Bona, M T

    2006-11-15

    Multivariate analysis techniques have been applied to near-infrared (NIR) spectra of coals to investigate the relationship between nine coal properties (moisture (%), ash (%), volatile matter (%), fixed carbon (%), heating value (kcal/kg), carbon (%), hydrogen (%), nitrogen (%) and sulphur (%)) and the corresponding predictor variables. In this work, a whole set of coal samples was grouped into six more homogeneous clusters following the ASTM reference method for classification prior to the application of calibration methods to each coal set. The results obtained showed a considerable improvement in the determination error compared with the calibration for the whole sample set. For some groups, the established calibrations approached the quality required by the ASTM/ISO norms for laboratory analysis. To predict property values for a new coal sample, it is first necessary to assign that sample to its respective group. Thus, the discrimination and classification ability of coal samples by Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFTS) in the NIR range was also studied by applying Soft Independent Modelling of Class Analogy (SIMCA) and Linear Discriminant Analysis (LDA) techniques. Modelling of the groups by SIMCA led to overlapping models that could not discriminate for unique classification. On the other hand, the application of Linear Discriminant Analysis improved the classification of the samples, but not enough to be satisfactory for every group considered.

  5. Evaluating a Psychology Graduate Student Peer Mentoring Program

    ERIC Educational Resources Information Center

    Fleck, Christina; Mullins, Morell E.

    2012-01-01

    Research on mentoring outcomes and characteristics of various types of mentoring programs in different settings is limited. The present study sampled 39 graduate students at a small Midwestern university to evaluate peer mentoring in a graduate school setting. Mentoring function and outcome relationships as well as program characteristics were…

  6. Evaluation of Elecsys Syphilis Assay for Routine and Blood Screening and Detection of Early Infection

    PubMed Central

    Kremastinou, J.; Polymerou, V.; Lavranos, D.; Aranda Arrufat, A.; Harwood, J.; Martínez Lorenzo, M. J.; Ng, K. P.; Queiros, L.; Vereb, I.

    2016-01-01

    Treponema pallidum infections can have severe complications if not diagnosed and treated at an early stage. Screening and diagnosis of syphilis require assays with high specificity and sensitivity. The Elecsys Syphilis assay is an automated treponemal immunoassay for the detection of antibodies against T. pallidum. The performance of this assay was investigated previously in a multicenter study. The current study expands on that evaluation in a variety of diagnostic settings and patient populations, at seven independent laboratories. The samples included routine diagnostic samples, blood donation samples, samples from patients with confirmed HIV infections, samples from living organ or bone marrow donors, and banked samples, including samples previously confirmed as syphilis positive. This study also investigated the seroconversion sensitivity of the assay. With a total of 1,965 syphilis-negative routine diagnostic samples and 5,792 syphilis-negative samples collected from blood donations, the Elecsys Syphilis assay had specificity values of 99.85% and 99.86%, respectively. With 333 samples previously identified as syphilis positive, the sensitivity was 100% regardless of disease stage. The assay also showed 100% sensitivity and specificity with samples from 69 patients coinfected with HIV. The Elecsys Syphilis assay detected infection in the same bleed or earlier, compared with comparator assays, in a set of sequential samples from a patient with primary syphilis. In archived serial blood samples collected from 14 patients with direct diagnoses of primary syphilis, the Elecsys Syphilis assay detected T. pallidum antibodies for 3 patients for whom antibodies were not detected with the Architect Syphilis TP assay, indicating a trend for earlier detection of infection, which may have the potential to shorten the time between infection and reactive screening test results. PMID:27358468

  7. Analysis of sampling techniques for imbalanced data: An n = 648 ADNI study.

    PubMed

    Dubey, Rashmi; Zhou, Jiayu; Wang, Yalin; Thompson, Paul M; Ye, Jieping

    2014-02-15

    Many neuroimaging applications deal with imbalanced imaging data. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, the mild cognitive impairment (MCI) cases eligible for the study are nearly twice as numerous as the Alzheimer's disease (AD) patients for the structural magnetic resonance imaging (MRI) modality and six times as numerous as the control cases for the proteomics modality. Constructing an accurate classifier from imbalanced data is a challenging task. Traditional classifiers that aim to maximize the overall prediction accuracy tend to classify all data into the majority class. In this paper, we study an ensemble system of feature selection and data sampling for the class imbalance problem. We systematically analyze various sampling techniques by examining the efficacy of different rates and types of undersampling, oversampling, and combined over- and undersampling approaches. We thoroughly examine six widely used feature selection algorithms to identify significant biomarkers and thereby reduce the complexity of the data. The efficacy of the ensemble techniques is evaluated using two classifiers, Random Forest and Support Vector Machines, based on classification accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity measures. Our extensive experimental results show that for various problem settings in ADNI, (1) a balanced training set obtained with K-Medoids-based undersampling gives the best overall performance among the different data sampling techniques and the no-sampling approach; and (2) sparse logistic regression with stability selection achieves competitive performance among various feature selection algorithms. Comprehensive experiments with various settings show that our proposed ensemble model of multiple undersampled datasets yields stable and promising results. © 2013 Elsevier Inc. All rights reserved.
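
    The undersampling step analyzed above can be illustrated with its simplest variant, random undersampling of the majority class until the classes are balanced. This is a hedged stdlib sketch: the paper's preferred variant selects K-Medoids cluster centers rather than random rows, and `undersample` is an illustrative name:

    ```python
    import random

    def undersample(X, y, seed=0):
        """Balance a labeled data set by randomly keeping only as many
        rows per class as the smallest class has (random undersampling;
        a K-Medoids variant would keep cluster centers instead)."""
        rng = random.Random(seed)
        by_class = {}
        for xi, yi in zip(X, y):
            by_class.setdefault(yi, []).append(xi)
        n = min(len(rows) for rows in by_class.values())
        Xb, yb = [], []
        for label, rows in by_class.items():
            for xi in rng.sample(rows, n):  # drop surplus majority rows
                Xb.append(xi)
                yb.append(label)
        return Xb, yb
    ```

    Given 8 MCI-like rows and 2 AD-like rows, the balanced output would contain 2 of each, at the cost of discarding majority-class information, which is why the rate of undersampling matters in the study.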

  8. Forensic strategy to ensure the quality of sequencing data of mitochondrial DNA in highly degraded samples.

    PubMed

    Adachi, Noboru; Umetsu, Kazuo; Shojo, Hideki

    2014-01-01

    Mitochondrial DNA (mtDNA) is widely used for DNA analysis of highly degraded samples because of its polymorphic nature and high copy number per cell. However, as endogenous mtDNA in deteriorated samples is scarce and highly fragmented, it is not easy to obtain reliable data. In the current study, we report the risks of direct sequencing of mtDNA in highly degraded material and suggest a strategy to ensure the quality of sequencing data. It was observed that direct sequencing data of hypervariable segment (HVS) 1 obtained using primer sets that generate a 407-bp amplicon (long-primer sets) differed from results obtained using newly designed primer sets that produce amplicons of 120-139 bp (mini-primer sets). The data aligned with the results of the mini-primer-set analysis in an amplicon-length-dependent manner; the shorter the amplicon, the more evident the endogenous sequence became. Coding region analysis using multiplex amplified product-length polymorphisms revealed incongruence of single nucleotide polymorphisms between the coding region and HVS 1, caused by contamination with exogenous mtDNA. Although the sequencing data obtained using the long-primer sets turned out to be erroneous, they were unambiguous and reproducible. These findings suggest that PCR primers producing amplicons shorter than those currently recognized should be used for mtDNA analysis of highly degraded samples. Haplogroup motif analysis of the coding region and HVS should also be performed to improve the reliability of forensic mtDNA data. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  9. Synchronizing data from irregularly sampled sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uluyol, Onder

    A system and method include receiving a set of sampled measurements for each of multiple sensors, wherein the sampled measurements are at irregular intervals or different rates, re-sampling the sampled measurements of each of the multiple sensors at a higher rate than one of the sensor's set of sampled measurements, and synchronizing the sampled measurements of each of the multiple sensors.
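
    The receive/re-sample/synchronize pipeline described above can be sketched by linearly interpolating each irregular series onto one shared time grid. This is a minimal stdlib illustration, not the patented system; `interpolate`, `synchronize`, and the choice of restricting to the overlapping time window are illustrative assumptions:

    ```python
    from bisect import bisect_left

    def interpolate(times, values, t):
        """Linearly interpolate a sensor's value at time t,
        clamping at the ends of the recorded series."""
        if t <= times[0]:
            return values[0]
        if t >= times[-1]:
            return values[-1]
        i = bisect_left(times, t)
        t0, t1 = times[i - 1], times[i]
        v0, v1 = values[i - 1], values[i]
        return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

    def synchronize(sensors, rate):
        """Re-sample each sensor's irregular (time, value) series onto a
        common grid at the given rate, so all sensors share timestamps."""
        start = max(s[0][0] for s in sensors)   # overlap window only
        end = min(s[-1][0] for s in sensors)
        step = 1.0 / rate
        grid, t = [], start
        while t <= end + 1e-9:
            grid.append(round(t, 9))
            t += step
        out = []
        for series in sensors:
            times = [p[0] for p in series]
            values = [p[1] for p in series]
            out.append([interpolate(times, values, g) for g in grid])
        return grid, out
    ```

    Two sensors sampled at different instants then yield aligned rows, one per grid timestamp, which is the synchronized form the abstract describes.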

  10. The Context Dependency of the Self-Report Version of the Strength and Difficulties Questionnaire (SDQ): A Cross-Sectional Study between Two Administration Settings

    PubMed Central

    Hoofs, H.; Jansen, N. W. H.; Mohren, D. C. L.; Jansen, M. W. J.; Kant, I. J.

    2015-01-01

    Background The Strength and Difficulties Questionnaire (SDQ) is a screening instrument for psychosocial problems in children and adolescents, which is applied in “individual” and “collective” settings. Assessment in the individual setting is confidential and used for clinical applications, such as preventive child healthcare, while assessment in the collective setting is anonymous and used in (epidemiological) research. Due to administration differences between the settings, it remains unclear whether results and conclusions can actually be used interchangeably. This study therefore aims to investigate whether the SDQ is invariant across settings. Methods Two independent samples were retrieved (mean age = 14.07 years), one from an individual setting (N = 6,594) and one from a collective setting (N = 4,613). The SDQ was administered in the second year of secondary school in both settings. Samples come from the same socio-geographic population in the Netherlands. Results Confirmatory factor analysis showed that the SDQ was measurement invariant/equivalent across settings and gender. On average, children in the individual setting scored lower on total difficulties (mean difference = 2.05) and the psychosocial problems subscales compared to those in the collective setting. This was also reflected in the cut-off points for caseness, defined by the 90th percentiles, which were lower in the individual setting. Using cut-off points from the collective setting in the individual setting therefore resulted in a small number of cases, 2 to 3%, while ∼10% is expected. Conclusion The SDQ has the same connotation across the individual and collective settings. The observed structural differences in mean scores, however, undermine the validity of the cross-use of absolute SDQ scores between these settings. Applying cut-off scores from the collective setting in the individual setting could, therefore, result in invalid conclusions and potential misuse of the instrument.
To correctly apply cut-off scores, these should be retrieved from the setting in which they are applied. PMID:25886464

  11. First evidence of dengue infection in domestic dogs living in different ecological settings in Thailand.

    PubMed

    Thongyuan, Suporn; Kittayapong, Pattamaporn

    2017-01-01

    Dengue is a vector-borne disease transmitted by Aedes mosquitoes. It is considered an important public health problem in many countries worldwide. However, only a few studies have been conducted on primates and domestic animals that could potentially be a reservoir of dengue viruses. Since domestic dogs share both habitats and vectors with humans, this study aimed to investigate whether domestic dogs living in different ecological settings in dengue endemic areas in Thailand could be naturally infected with dengue viruses. Serum samples were collected from domestic dogs in three different ecological settings of Thailand: urban dengue endemic areas of Nakhon Sawan Province; rubber plantation areas of Rayong Province; and Koh Chang, an island tourist spot of Trat Province. These samples were screened for dengue viral genome by using semi-nested RT-PCR. Positive samples were then inoculated in mosquito and dog cell lines for virus isolation. Supernatant collected from cell culture was tested for the presence of dengue viral genome by semi-nested RT-PCR, and double-stranded DNA products were double-pass custom-sequenced. Partial nucleotide sequences were aligned with the sequences already recorded in GenBank, and a phylogenetic tree was constructed. In the urban setting, 632 domestic dog serum samples were screened for dengue virus genome by RT-PCR, and six samples (0.95%) tested positive for dengue virus. Four out of six dengue viruses from positive samples were successfully isolated. Dengue virus serotype 2 and serotype 3 were found to have circulated in domestic dog populations. One of 153 samples (0.65%) collected from the rubber plantation area showed a PCR-positive result, and dengue serotype 3 was successfully isolated. Partial gene phylogeny revealed that the isolated dengue viruses were closely related to those strains circulating in human populations. None of the 71 samples collected from the island tourist spot showed a positive result. 
We concluded that domestic dogs can be infected with dengue virus strains circulating in dengue endemic areas. The role of domestic dogs in dengue transmission needs to be further investigated, i.e., whether they are potential reservoirs or incidental hosts of dengue viruses.

  12. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and the seroconversion rate (SCR) as informative indicators of malaria burden in low-transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference for parasite rates, not for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the first to the more common situation where SRR is unknown. In this situation, data simulation was used together with linear regression to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). Correct coverage was obtained for the remaining transmission intensities with sample sizes ≥50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. 
With respect to the second sample size calculator, simulation revealed a risk of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes increase relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that target or oversample specific age groups.
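
    Since SP is a proportion, the SP side of the first calculator reduces to the standard normal-approximation formula n = z²p(1−p)/d² for a target confidence-interval half-width d. A minimal sketch; the subsequent SP-to-SCR mapping through the seroconversion model given a known SRR is model-specific and omitted, and the function name is illustrative:

    ```python
    from math import ceil

    def sample_size_for_proportion(p, half_width, z=1.96):
        """Minimum n so a Wald 95% confidence interval for a
        seroprevalence near p has the requested half-width:
        n = z^2 * p * (1 - p) / d^2, rounded up."""
        return ceil(z * z * p * (1.0 - p) / (half_width * half_width))
    ```

    For example, estimating an SP near 50% to within ±5 percentage points requires 385 individuals, while an SP near 10% to the same precision requires 139; either interval would then be transformed to an SCR interval under the assumed SRR.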

  13. X-ray fluorescence analysis of Mexican varieties of dried chili peppers II: Commercial and home-grown specimens

    NASA Astrophysics Data System (ADS)

    Romero-Dávila, E.; Miranda, J.; Pineda, J. C.

    2015-07-01

    Elemental analyses of samples of Mexican varieties of dried chili peppers were carried out using X-ray Fluorescence (XRF). Several specimens of Capsicum annuum L., Capsicum chinense, and Capsicum pubescens were analyzed, and the results compared to previous studies of elemental contents in other varieties of Capsicum annuum (ancho, morita, chilpotle, guajillo, pasilla, and árbol). The first set of samples was bought packaged in markets. In the present work, the study focuses on home-grown samples of the árbol and chilpotle varieties, commercial habanero (Capsicum chinense), as well as commercial and home-grown specimens of manzano (Capsicum pubescens). Samples were freeze-dried and pelletized. XRF analyses were carried out using a spectrometer based on an Rh X-ray tube with a Si-PIN detector. The detection system was calibrated through the analysis of the NIST certified reference materials 1547 (peach leaves) and 1574 (tomato leaves), while accuracy was checked with the reference material 1571 (orchard leaves). Elemental contents of all elements in the new set of samples were similar to those of the first group. Nevertheless, it was found that commercial samples contain high amounts of Br, while home-grown varieties do not.

  14. Phase Tomography Reconstructed by 3D TIE in Hard X-ray Microscope

    NASA Astrophysics Data System (ADS)

    Yin, Gung-Chian; Chen, Fu-Rong; Pyun, Ahram; Je, Jung Ho; Hwu, Yeukuang; Liang, Keng S.

    2007-01-01

    X-ray phase tomography and phase imaging are promising ways of investigating low-Z materials. A polymer blend (PE/PS) sample was used to test the 3D phase retrieval method in the parallel-beam-illuminated microscope. Because the polymer sample is thick, the phase retardation is strongly mixed, and features cannot be distinguished when the 2D transport-of-intensity equation (TIE) is applied. In this study, we provide a different approach for solving the phase in three dimensions for thick samples. Our method combines the 3D TIE with the Fourier slice theorem to solve for the phase of a thick sample. In our experiment, eight defocus-series image data sets were recorded, covering the angular range of 0 to 180 degrees. Only three sets of image cubes were used in the 3D TIE equation for solving the phase tomography. The phase contrast of the polymer blend in 3D is clearly enhanced, and the two different components of the polymer blend can be distinguished in the phase tomography.

  15. Raman Studies on Pre- and Post-Processed CVD Graphene Films Grown under Various Nitrogen Carrier Gas Flows

    NASA Astrophysics Data System (ADS)

    Beh, K. P.; Yam, F. K.; Abdalrheem, Raed; Ng, Y. Z.; Suhaimi, F. H. A.; Lim, H. S.; Mat Jafri, M. Z.

    2018-04-01

    In this work, graphene films were grown on copper substrates by the chemical vapour deposition method under various N2 carrier flow rates. The samples were characterized using Raman spectroscopy. Three sets of Raman measurements were performed: graphene/Cu (as-grown samples), pre-annealed graphene/glass, and post-annealed graphene/glass. It was found that the Raman spectra of graphene/Cu samples possessed a hump-shaped baseline and a poor signal-to-noise ratio (SNR) that attenuated the graphene-related bands. A significantly improved SNR and a flat baseline were observed for graphene films transferred onto glass substrates. Further analysis of the remaining sets of Raman spectra highlighted that minute traces of polymethyl methacrylate (PMMA) could yield misleading results. Hence, the set of Raman spectra of annealed graphene/glass samples was the most suitable for further elucidating the effects of N2 carrier flow on graphene growth. From there, a higher N2 flow implied dilution of the methanol/H2 mixture, limiting interactions between the reactants and the substrate. This led to smaller crystallite sizes and fewer graphene layers.

  16. Mode synthesizing atomic force microscopy and mode-synthesizing sensing

    DOEpatents

    Passian, Ali; Thundat, Thomas George; Tetard, Laurene

    2013-05-17

    A method of analyzing a sample that includes applying a first set of energies at a first set of frequencies to a sample and applying, simultaneously with the applying the first set of energies, a second set of energies at a second set of frequencies, wherein the first set of energies and the second set of energies form a multi-mode coupling. The method further includes detecting an effect of the multi-mode coupling.

  17. Mode-synthesizing atomic force microscopy and mode-synthesizing sensing

    DOEpatents

    Passian, Ali; Thundat, Thomas George; Tetard, Laurene

    2014-07-22

    A method of analyzing a sample that includes applying a first set of energies at a first set of frequencies to a sample and applying, simultaneously with the applying the first set of energies, a second set of energies at a second set of frequencies, wherein the first set of energies and the second set of energies form a multi-mode coupling. The method further includes detecting an effect of the multi-mode coupling.

  18. Results for five sets of forensic genetic markers studied in a Greek population sample.

    PubMed

    Tomas, C; Skitsa, I; Steinmeier, E; Poulsen, L; Ampati, A; Børsting, C; Morling, N

    2015-05-01

    A population sample of 223 Greek individuals was typed for five sets of forensic genetic markers with the kits NGM SElect™, SNPforID 49plex, DIPplex®, Argus X-12 and PowerPlex® Y23. No significant deviation from Hardy-Weinberg expectations was observed for any of the studied markers after Holm-Šidák correction. Statistically significant (P < 0.05) levels of linkage disequilibrium were observed between markers within two of the studied X-chromosome linkage groups. AMOVA analyses of the five sets of markers did not show population structure when the individuals were grouped according to their geographic origin. The Greek population grouped closely to the other European populations as measured by F_ST* distances. The match probability ranged from 1 in 2×10^7 males, using haplotype frequencies of four X-chromosome haplogroups, to 1 in 1.73×10^21 individuals for 16 autosomal STRs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
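
    The quoted match probabilities follow from the Hardy-Weinberg and linkage-equilibrium assumptions tested above: each locus contributes a genotype frequency of p² (homozygote) or 2pq (heterozygote), and independent loci multiply. A minimal sketch with made-up allele frequencies, not the study's data:

    ```python
    def genotype_frequency(p, q=None):
        """Expected genotype frequency under Hardy-Weinberg
        equilibrium: p^2 for a homozygote, 2pq for a heterozygote."""
        return p * p if q is None else 2.0 * p * q

    def combined_match_probability(locus_freqs):
        """Under linkage equilibrium, the overall random-match
        probability is the product of per-locus genotype frequencies."""
        out = 1.0
        for f in locus_freqs:
            out *= f
        return out
    ```

    Multiplying modest per-locus frequencies across 16 autosomal STRs is what drives combined match probabilities down to the order of 1 in 10^21.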

  19. Content validation of the international classification of functioning, disability and health core set for stroke from gender perspective using a qualitative approach.

    PubMed

    Glässel, A; Coenen, M; Kollerits, B; Cieza, A

    2014-06-01

    The extended ICF Core Set for stroke is an application of the International Classification of Functioning, Disability and Health (ICF) of the World Health Organisation (WHO) intended to represent the typical spectrum of functioning of persons with stroke. The objective of the study is to add evidence for the content validity of the extended ICF Core Set for stroke from the perspective of persons after stroke, taking gender into account. A qualitative study design was used, with individual interviews of women and men after stroke in in- and outpatient rehabilitation settings. The sampling followed the maximum variation strategy. Sample size was determined by saturation. Concepts from the qualitative data analysis were linked to ICF categories and compared to the extended ICF Core Set for stroke. Twelve women and 12 men participated in 24 individual interviews. In total, 143 out of 166 ICF categories included in the extended ICF Core Set for stroke were confirmed (women: N = 13; men: N = 17; both genders: N = 113). Thirty-eight additional categories not yet included in the extended ICF Core Set for stroke were raised by women and men. This study confirms that the experience of functioning and disability after stroke shows commonalities and differences between women and men. The validity of the extended ICF Core Set for stroke could be largely confirmed, since it includes not only those areas of functioning and disability relevant to both genders but also those exclusively relevant to either women or men. Further research is needed on ICF categories not yet included in the extended ICF Core Set for stroke.

  20. How important are autonomy and work setting to nurse practitioners' job satisfaction?

    PubMed

    Athey, Erin K; Leslie, Mayri Sagady; Briggs, Linda A; Park, Jeongyoung; Falk, Nancy L; Pericak, Arlene; El-Banna, Majeda M; Greene, Jessica

    2016-06-01

    Nurse practitioners (NPs) have reported aspects of their jobs that they are more and less satisfied with. However, few studies have examined the factors that predict overall job satisfaction. This study uses a large national sample to examine the extent to which autonomy and work setting predict job satisfaction. The 2012 National Sample Survey of Nurse Practitioners (n = 8311) was used to examine bivariate and multivariate relationships between work setting and three autonomy variables (independent billing practices, having one's NP skills fully utilized, and relationship with physician), and job satisfaction. NPs working in primary care reported the highest levels of autonomy across all three autonomy measures, while those working in hospital surgical settings reported the lowest levels. Autonomy, specifically feeling one's NP skills were fully utilized, was the factor most predictive of satisfaction. In multivariate analyses, those who strongly agreed their skills were being fully utilized had satisfaction scores almost one point higher than those who strongly disagreed. Work setting was only marginally related to job satisfaction. In order to attract and retain NPs in the future, healthcare organizations should ensure that NPs' skills are being fully utilized. ©2015 American Association of Nurse Practitioners.

  1. Goal Setting to Promote a Healthy Lifestyle.

    PubMed

    Paxton, Raheem J; Taylor, Wendell C; Hudnall, Gina Evans; Christie, Juliette

    2012-01-01

    The purpose of this parallel-group study was to determine whether a feasibility study based on newsletters and telephone counseling would improve goal-setting constructs, physical activity (PA), and fruit and vegetable (F & V) intake in a sample of older adults. Forty-three older adults (M age = 70 years, >70% Asian, 54% female) living in Honolulu, Hawaii were recruited and randomly assigned to either a PA or an F & V intake condition. All participants completed measures of PA, F & V intake, and goal-setting mechanisms (i.e., specificity, difficulty, effort, commitment, and persistence) at baseline and 8 weeks. Paired t-tests were used to evaluate changes across time. We found that F & V participants significantly increased F & V intake and mean scores of goal specificity, effort, commitment, and persistence (all p < .05). No statistically significant changes in PA or goal-setting mechanisms were observed for participants in the PA condition. Overall, our results show that a short-term intervention using newsletters and motivational calls based on goal-setting theory was effective in improving F & V intake; however, more research is needed to determine whether these strategies are effective for improving PA among a multiethnic sample of older adults.
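
    The paired t-tests used above reduce to the mean per-person change from baseline to 8 weeks divided by its standard error. A minimal stdlib sketch with made-up numbers, as a generic illustration rather than the study's code:

    ```python
    from math import sqrt
    from statistics import mean, stdev

    def paired_t(before, after):
        """Paired t statistic: mean of the per-person differences
        (after - before) over its standard error, with n - 1 degrees
        of freedom left to the caller's table lookup."""
        diffs = [b - a for a, b in zip(before, after)]
        n = len(diffs)
        return mean(diffs) / (stdev(diffs) / sqrt(n))
    ```

    The statistic is then compared against the t distribution with n − 1 degrees of freedom to obtain the reported p-values.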

  2. NHEXAS PHASE I ARIZONA STUDY--METALS IN WATER ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Water data set contains analytical results for measurements of up to 11 metals in 314 water samples over 211 households. Sample collection was undertaken at the tap and any additional drinking water source used extensively within each residence. The primary metals...

  3. The Mass Spectrometric Ortho Effect Studied for All 209 PCB Congeners

    EPA Science Inventory

    A method for the determination of polychlorinated biphenyls (PCBs) in caulk was developed; with application to a set of caulk and window glazing material samples. This method was evaluated by analyzing a combination of 47 samples of caulk, glazing materials, and including quality...

  4. NHEXAS PHASE I REGION 5 STUDY--METALS IN DUST ANALYTICAL RESULTS

    EPA Science Inventory

    This data set includes analytical results for measurements of metals in 1,906 dust samples. Dust samples were collected to assess potential residential sources of dermal and inhalation exposures and to examine relationships between analyte levels in dust and in personal and bioma...

  5. Skills Acquisition in Plantain Flour Processing Enterprises: A Validation of Training Modules for Senior Secondary Schools

    ERIC Educational Resources Information Center

    Udofia, Nsikak-Abasi; Nlebem, Bernard S.

    2013-01-01

    This study set out to validate training modules that can help provide requisite skills for senior secondary school students in plantain flour processing enterprises for self-employment and to enable them to pass their examinations. The study covered Rivers State. A purposive sampling technique was used to select a sample size of 205. Two sets of structured…

  6. Reading in Class & out of Class: An Experience Sampling Method Study

    ERIC Educational Resources Information Center

    Shumow, Lee; Schmidt, Jennifer A.; Kackar, Hayal

    2008-01-01

    This study described and compared the reading of sixth- and eighth-grade students both in and out of school using a unique data set collected with the Experience Sampling Method (ESM). On average, students read forty minutes a day out of class and seventeen minutes a day in class, indicating that reading is a common leisure practice for…

  7. The Social Interaction Anxiety Scale (SIAS) and the Social Phobia Scale (SPS): a comparison of two short-form versions.

    PubMed

    Fergus, Thomas A; Valentiner, David P; Kim, Hyun-Soo; McGrath, Patrick B

    2014-12-01

    The widespread use of Mattick and Clarke's (1998) Social Interaction Anxiety Scale (SIAS) and Social Phobia Scale (SPS) led 2 independent groups of researchers to develop short forms of these measures (Fergus, Valentiner, McGrath, Gier-Lonsway, & Kim, 2012; Peters, Sunderland, Andrews, Rapee, & Mattick, 2012). This 3-part study examined the psychometric properties of Fergus et al.'s and Peters et al.'s short forms of the SIAS and SPS using an American nonclinical adolescent sample in Study 1 (N = 98), American patient sample with an anxiety disorder in Study 2 (N = 117), and both a South Korean college student sample (N = 341) and an American college student sample (N = 550) in Study 3. Scores on both sets of short forms evidenced adequate internal consistency, interitem correlations, and measurement invariance. Scores on Fergus et al.'s short forms, particularly their SIAS short form, tended to capture more unique variance in scores of criterion measures than did scores on Peters et al.'s short forms. Implications for the use of these 2 sets of short forms are discussed. (c) 2014 APA, all rights reserved.

  8. Tackling the conformational sampling of larger flexible compounds and macrocycles in pharmacology and drug discovery.

    PubMed

    Chen, I-Jen; Foloppe, Nicolas

    2013-12-15

    Computational conformational sampling underpins much of molecular modeling and design in pharmaceutical work. The sampling of smaller drug-like compounds has been an active area of research. However, few studies have tested in detail the sampling of larger, more flexible compounds, which are also relevant to drug discovery, including therapeutic peptides, macrocycles, and inhibitors of protein-protein interactions. Here, we extensively investigate mainstream conformational sampling methods on three carefully curated compound sets, namely the 'Drug-like', larger 'Flexible', and 'Macrocycle' compounds. These test molecules are chemically diverse with reliable X-ray protein-bound bioactive structures. The compared sampling methods include Stochastic Search and the recent LowModeMD from MOE, all the low-mode based approaches from MacroModel, and MD/LLMOD recently developed for macrocycles. In addition to default settings, key parameters of the sampling protocols were explored. The performance of the computational protocols was assessed via (i) the reproduction of the X-ray bioactive structures, (ii) the size, coverage and diversity of the output conformational ensembles, (iii) the compactness/extendedness of the conformers, and (iv) the ability to locate the global energy minimum. The influence of the stochastic nature of the searches on the results was also examined. Much better results were obtained by adopting search parameters enhanced over the default settings, while maintaining computational tractability. In MOE, the recent LowModeMD emerged as the method of choice. Mixed torsional/low-mode from MacroModel performed as well as LowModeMD, and MD/LLMOD performed well for macrocycles. The low-mode based approaches yielded very encouraging results with the flexible and macrocycle sets. Thus, one can productively tackle the computational conformational search of larger flexible compounds for drug discovery, including macrocycles. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Community Heavy Metal Exposure, San Francisco, California

    NASA Astrophysics Data System (ADS)

    Chavez, A.; Devine, M.; Ho, T.; Zapata, I.; Bissell, M.; Neiss, J.

    2008-12-01

    Heavy metals are natural elements that generally occur in minute concentrations in the earth's crust. While some of these elements, in small quantities, are vital to life, most are harmful in larger doses. Various industrial and agricultural processes can result in dangerously high concentrations of heavy metals in our environment. Consequently, humans can be exposed to unsafe levels of these elements via the air we breathe, the water and food we consume, and the many products we use. During a two-week study we collected numerous samples of sediments, water, food, and household items from around the San Francisco Bay Area that represent industrial, agricultural, and urban/residential settings. We analyzed these samples for mercury (Hg), lead (Pb), and arsenic (As). Our goal was to examine the extent of our exposure to heavy metals in our daily lives. We discovered that many of the common foods and materials in our lives have become contaminated with unhealthy concentrations of these metals. Many of our food samples exceeded the EPA's Maximum Contaminant Levels (MCL) set for each metal. Meats (fish, chicken, and beef) had higher amounts of each metal than did non-meat items. Heavy metals were also prevalent in varying concentrations in the environment. While many of our samples exceeded the EPA's Sediment Screening Level (SSL) for As, only two other samples surpassed the SSL set for Pb, and none of our samples exceeded the SSL for Hg. Because of the serious health effects that can result from over-exposure to heavy metals, the information obtained in this study should be used to influence our future dietary and recreational habits.

  10. Temporal variability of indoor air concentrations under natural conditions in a house overlying a dilute chlorinated solvent groundwater plume.

    PubMed

    Holton, Chase; Luo, Hong; Dahlen, Paul; Gorder, Kyle; Dettenmaier, Erik; Johnson, Paul C

    2013-01-01

    Current vapor intrusion (VI) pathway assessment heavily weights concentrations from infrequent (monthly-seasonal) 24 h indoor air samples. This study collected a long-term, high-frequency data set that can be used to assess indoor air sampling strategies for answering key pathway assessment questions like: "Is VI occurring?" and "Will VI impacts exceed thresholds of concern?". Indoor air sampling was conducted for 2.5 years at 2-4 h intervals in a house overlying a dilute chlorinated solvent plume (10-50 μg/L TCE). Indoor air concentrations varied by 3 orders of magnitude (<0.01-10 ppbv TCE) with two recurring behaviors. The VI-active behavior, which was prevalent in fall, winter, and spring, involved time-varying impacts intermixed with sporadic periods of inactivity; the VI-dormant behavior, which was prevalent in the summer, involved long periods of inactivity with sporadic VI impacts. These data were used to study outcomes of three simple sparse-data sampling plans; the probabilities of false-negative and false-positive decisions were dependent on the ratio of the action level to the true mean of the data, the number of exceedances needed, and the sampling strategy. The analysis also suggested a significant potential for poor characterization of long-term mean concentrations with sparse sampling plans. The results point to a need for additional dense data sets and further investigation into the robustness of possible VI assessment paradigms. As this is the first data set of its kind, it is unknown if the results are representative of other VI sites.
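    The outcome of a sparse sampling plan like those described above can be explored with a small Monte Carlo sketch. Everything here is an assumption for illustration (lognormal daily concentrations, random sampling days, the action level, and a "flag if at least n exceedances" rule), not the study's actual data or analysis:

```python
import random

def simulate_plan(action_level, n_samples, n_exceed_needed,
                  mu=0.0, sigma=1.5, days=365, trials=2000, seed=1):
    """Monte Carlo sketch: daily concentrations are drawn lognormal;
    a sparse plan samples n_samples random days and flags the site
    when at least n_exceed_needed of them exceed action_level.
    Returns (fraction of trials flagged,
             fraction of trials whose true mean exceeds the action level)."""
    rng = random.Random(seed)
    flagged = 0
    true_mean_exceeds = 0
    for _ in range(trials):
        series = [rng.lognormvariate(mu, sigma) for _ in range(days)]
        sampled = rng.sample(series, n_samples)
        if sum(c > action_level for c in sampled) >= n_exceed_needed:
            flagged += 1
        if sum(series) / len(series) > action_level:
            true_mean_exceeds += 1
    return flagged / trials, true_mean_exceeds / trials
```

    Comparing the flag rate to the rate at which the true long-term mean actually exceeds the action level gives a feel for the false-positive/false-negative tradeoff as the number of samples changes.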

  11. Evaluation of endogenous control genes for gene expression studies across multiple tissues and in the specific sets of fat- and muscle-type samples of the pig.

    PubMed

    Gu, Y R; Li, M Z; Zhang, K; Chen, L; Jiang, A A; Wang, J Y; Li, X W

    2011-08-01

    To normalize a set of quantitative real-time PCR (q-PCR) data, it is essential to determine an optimal number/set of housekeeping genes, as the abundance of housekeeping genes can vary across tissues or cells during different developmental stages, or even under certain environmental conditions. In this study, of the 20 commonly used endogenous control genes, 13, 18 and 17 genes exhibited credible stability in 56 different tissues, 10 types of adipose tissue and five types of muscle tissue, respectively. Our analysis clearly showed that three optimal housekeeping genes are adequate for an accurate normalization, which correlated well with the theoretical optimal number (r ≥ 0.94). In terms of economical and experimental feasibility, we recommend the use of the three most stable housekeeping genes for calculating the normalization factor. Based on our results, the three most stable housekeeping genes in all analysed samples (TOP2B, HSPCB and YWHAZ) are recommended for accurate normalization of q-PCR data. We also suggest that two different sets of housekeeping genes are appropriate for 10 types of adipose tissue (the HSPCB, ALDOA and GAPDH genes) and five types of muscle tissue (the TOP2B, HSPCB and YWHAZ genes), respectively. Our report will serve as a valuable reference for other studies aimed at measuring tissue-specific mRNA abundance in porcine samples. © 2011 Blackwell Verlag GmbH.
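    A normalization factor from several housekeeping genes is conventionally the geometric mean of their relative quantities (the geNorm approach); this is an assumed convention for illustration, not a method quoted from the paper, and the Ct values and doubling efficiency below are hypothetical:

```python
import math

def normalization_factor(ct_values, ref_cts, efficiency=2.0):
    """Sketch: convert each housekeeping gene's Ct into a relative
    quantity against a reference Ct (assuming a fixed amplification
    efficiency), then take the geometric mean across the genes."""
    quantities = [efficiency ** (ref - ct)
                  for ct, ref in zip(ct_values, ref_cts)]
    return math.exp(math.fsum(math.log(q) for q in quantities)
                    / len(quantities))
```

    A sample whose three housekeeping genes all amplify one cycle earlier than the reference gets a factor of 2, i.e. it carried roughly twice as much input material.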

  12. Understanding the role of conscientiousness in healthy aging: where does the brain come in?

    PubMed

    Patrick, Christopher J

    2014-05-01

    In reviewing this impressive series of articles, I was struck by 2 points in particular: (a) the fact that the empirically oriented articles focused on analyses of data from very large samples, with the articles by Friedman, Kern, Hampson, and Duckworth (2014) and Kern, Hampson, Goldberg, and Friedman (2014) highlighting an approach to merging existing data sets through use of "metric bridges" to address key questions not addressable through 1 data set alone, and (b) the fact that the articles as a whole included limited mention of neuroscientific (i.e., brain research) concepts, methods, and findings. One likely reason for the lack of reference to brain-oriented work is the persisting gap between smaller sample size lab-experimental and larger sample size multivariate-correlational approaches to psychological research. As a strategy for addressing this gap and bringing a distinct neuroscientific component to the National Institute on Aging's conscientiousness and health initiative, I suggest that the metric bridging approach highlighted by Friedman and colleagues could be used to connect existing large-scale data sets containing both neurophysiological variables and measures of individual difference constructs to other data sets containing richer arrays of nonphysiological variables-including data from longitudinal or twin studies focusing on personality and health-related outcomes (e.g., Terman Life Cycle study and Hawaii longitudinal studies, as described in the article by Kern et al., 2014). (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  13. Examination of factor structure for the consumers' responses to the Value Consciousness Scale.

    PubMed

    Conrad, C A; Williams, J R

    2000-12-01

    The psychometric properties of the Value Consciousness Scale developed by Lichtenstein, Netemeyer, and Burton in 1990 were examined in a retail grocery study (N = 497). Original assessment of scale properties was undertaken using two convenience samples in a nonretail setting and additional scale performance has been documented by the scale authors. This study furthers previous research by (1) examining performance on the items in the retail grocery setting and (2) utilizing an appropriately rigorous sampling procedure. A confirmatory factor analysis indicated that the Value Consciousness Scale does not exhibit unidimensional properties, and one must be cautious if this scale is used in applications of market segmentation until further clarification can be provided.

  14. RNA transcriptional biosignature analysis for identifying febrile infants with serious bacterial infections in the emergency department: a feasibility study.

    PubMed

    Mahajan, Prashant; Kuppermann, Nathan; Suarez, Nicolas; Mejias, Asuncion; Casper, Charlie; Dean, J Michael; Ramilo, Octavio

    2015-01-01

    To develop the infrastructure and demonstrate the feasibility of conducting microarray-based RNA transcriptional profile analyses for the diagnosis of serious bacterial infections in febrile infants 60 days and younger in a multicenter pediatric emergency research network. We designed a prospective multicenter cohort study with the aim of enrolling more than 4000 febrile infants 60 days and younger. To ensure success of conducting complex genomic studies in emergency department (ED) settings, we established an infrastructure within the Pediatric Emergency Care Applied Research Network, including 21 sites, to evaluate RNA transcriptional profiles in young febrile infants. We developed a comprehensive manual of operations and trained site investigators to obtain and process blood samples for RNA extraction and genomic analyses. We created standard operating procedures for blood sample collection, processing, storage, shipping, and analyses. We planned to prospectively identify, enroll, and collect 1 mL blood samples for genomic analyses from eligible patients to identify logistical issues with study procedures. Finally, we planned to batch blood samples and determine RNA quantity and quality at the central microarray laboratory and organized data analysis with the Pediatric Emergency Care Applied Research Network data coordinating center. Below we report on the establishment of the infrastructure and the feasibility demonstrated in the first year, based on the enrollment of a limited number of patients. We successfully established the infrastructure at 21 EDs. Over the first 5 months we enrolled 79% (74 of 94) of eligible febrile infants. We were able to obtain and ship 1 mL of blood from 74% (55 of 74) of enrolled participants, with at least 1 sample per participating ED. The 55 samples were shipped and evaluated at the microarray laboratory, and 95% (52 of 55) of blood samples were of adequate quality and contained sufficient RNA for expression analysis. 
It is possible to create a robust infrastructure to conduct genomic studies in young febrile infants in the context of a multicenter pediatric ED research setting. The sufficient quantity and high quality of RNA obtained suggests that whole blood transcriptional profile analysis for the diagnostic evaluation of young febrile infants can be successfully performed in this setting.

  15. SKATE: a docking program that decouples systematic sampling from scoring.

    PubMed

    Feng, Jianwen A; Marshall, Garland R

    2010-11-15

    SKATE is a docking prototype that decouples systematic sampling from scoring. This novel approach removes any interdependence between sampling and scoring functions to achieve better sampling and, thus, improves docking accuracy. SKATE systematically samples a ligand's conformational, rotational and translational degrees of freedom, as constrained by a receptor pocket, to find sterically allowed poses. Efficient systematic sampling is achieved by pruning the combinatorial tree using aggregate assembly, discriminant analysis, adaptive sampling, radial sampling, and clustering. Because systematic sampling is decoupled from scoring, the poses generated by SKATE can be ranked by any published, or in-house, scoring function. To test the performance of SKATE, ligands from the Astex/CDCC set, the Surflex set, and the Vertex set, a total of 266 complexes, were redocked to their respective receptors. The results show that SKATE was able to sample poses within 2 Å RMSD of the native structure for 98, 95, and 98% of the cases in the Astex/CDCC, Surflex, and Vertex sets, respectively. Cross-docking accuracy of SKATE was also assessed by docking 10 ligands to thymidine kinase and 73 ligands to cyclin-dependent kinase. 2010 Wiley Periodicals, Inc.
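    The core idea of pruning a combinatorial tree during systematic sampling can be sketched as a depth-first enumeration that abandons any branch failing a feasibility check. This is a toy illustration only; SKATE's aggregate assembly, discriminant analysis, and adaptive/radial sampling are far more elaborate, and the angle choices and feasibility predicate below are invented:

```python
def systematic_sample(choices_per_torsion, feasible):
    """Enumerate all combinations of discrete torsion-angle choices,
    pruning every subtree whose partial assignment fails `feasible`
    (standing in for a steric-clash check against the receptor)."""
    poses = []

    def dfs(partial):
        if not feasible(partial):
            return                      # prune this whole subtree
        if len(partial) == len(choices_per_torsion):
            poses.append(tuple(partial))
            return
        for angle in choices_per_torsion[len(partial)]:
            dfs(partial + [angle])

    dfs([])
    return poses
```

    Because the predicate is applied to partial assignments, a clash detected after fixing only two torsions removes every descendant pose at once, which is what makes systematic search tractable.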

  16. Differential relationships between set-shifting abilities and dimensions of insight in schizophrenia.

    PubMed

    Diez-Martin, J; Moreno-Ortega, M; Bagney, A; Rodriguez-Jimenez, R; Padilla-Torres, D; Sanchez-Morla, E M; Santos, J L; Palomo, T; Jimenez-Arriero, M A

    2014-01-01

    To assess insight in a large sample of patients with schizophrenia and to study its relationship with set shifting as an executive function. The insight of a sample of 161 clinically stable, community-dwelling patients with schizophrenia was evaluated by means of the Scale to Assess Unawareness of Mental Disorder (SUMD). Set shifting was measured using the Trail-Making Test time required to complete part B minus the time required to complete part A (TMT B-A). Linear regression analyses were performed to investigate the relationships of TMT B-A with different dimensions of general insight. Regression analyses revealed a significant association between TMT B-A and two of the SUMD general components: 'awareness of mental disorder' and 'awareness of the efficacy of treatment'. The 'awareness of social consequences' component was not significantly associated with set shifting. Our results show a significant relation between set shifting and insight, but not in the same manner for the different components of the SUMD general score. Copyright © 2013 S. Karger AG, Basel.

  17. Effective traffic features selection algorithm for cyber-attacks samples

    NASA Astrophysics Data System (ADS)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

    By studying defense schemes against network attacks, this paper proposes an effective traffic-feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack traffic feature set and a background traffic feature set by clustering. Then, it calculates the variation in clustering performance after removing a certain feature. Finally, it evaluates the degree of distinctiveness of each feature according to this result; the effective features are those whose degree of distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set, thereby reducing the dimensionality of the features and hence the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has advantages over other selection algorithms.
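    The "remove a feature, measure the change in clustering performance" step can be sketched in a few lines. This is a minimal sketch, not the paper's algorithm: it uses a hand-rolled k-means with deterministic farthest-point initialization instead of k-means++, and a dimension-free quality score (within-cluster sum of squares over total sum of squares) so that scores remain comparable after a feature is dropped:

```python
def kmeans_ratio(points, k=2, iters=25):
    """Within-cluster SS divided by total SS after a plain k-means run
    (deterministic farthest-point init); lower means tighter clusters."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(d2(p, c) for c in centers)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: d2(p, centers[i]))].append(p)
        centers = [tuple(sum(v) / len(c) for v in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    inertia = sum(min(d2(p, c) for c in centers) for p in points)
    centroid = tuple(sum(v) / len(points) for v in zip(*points))
    total = sum(d2(p, centroid) for p in points)
    return inertia / total

def feature_distinctiveness(samples, k=2):
    """Score each feature by how much clustering quality degrades when
    that feature is removed: a large positive score means the feature
    carried much of the cluster structure."""
    base = kmeans_ratio(samples, k)
    scores = []
    for f in range(len(samples[0])):
        reduced = [tuple(v for i, v in enumerate(p) if i != f)
                   for p in samples]
        scores.append(kmeans_ratio(reduced, k) - base)
    return scores
```

    On two well-separated groups, dropping the separating feature raises the ratio sharply while dropping a noise feature barely moves it, which is the distinctiveness signal the paper thresholds.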

  18. Detection of malaria infection in blood transfusion: a comparative study among real-time PCR, rapid diagnostic test and microscopy: sensitivity of Malaria detection methods in blood transfusion.

    PubMed

    Hassanpour, Gholamreza; Mohebali, Mehdi; Raeisi, Ahmad; Abolghasemi, Hassan; Zeraati, Hojjat; Alipour, Mohsen; Azizi, Ebrahim; Keshavarz, Hossein

    2011-06-01

    The transmission of malaria by blood transfusion was one of the first transfusion-transmitted infections recorded in the world. Transfusion-transmitted malaria may lead to serious problems because infection with Plasmodium falciparum can be rapidly fatal. This study aimed to compare real-time polymerase chain reaction (real-time PCR) with rapid diagnostic test (RDT) and light microscopy for the detection of Plasmodium spp. in blood transfusion, both in endemic and non-endemic areas of malaria disease in Iran. Two sets of 50 blood samples were randomly collected. One set was taken from blood samples donated in the blood bank of Bandar Abbas, a city located in a malaria-endemic area, and the other set from Tehran, a non-endemic one. Light microscopic examination on both thin and thick smears, RDTs, and real-time PCR were performed on the blood samples and the results were compared. Thin and thick light microscopic examinations of all samples as well as RDT results were negative for Plasmodium spp. Two blood samples from the endemic area were positive only with real-time PCR. It seems that real-time PCR, as a highly sensitive method, can be helpful for the confirmation of malaria infection in different units of a blood transfusion organization, especially in malaria-endemic areas where the majority of donors may be potentially infected with malaria parasites.

  19. Norms governing urban African American adolescents’ sexual and substance-using behavior

    PubMed Central

    Dolcini, M. Margaret; Catania, Joseph A.; Harper, Gary W.; Watson, Susan E.; Ellen, Jonathan M.; Towner, Senna L.

    2013-01-01

    Using a probability-based neighborhood sample of urban African American youth and a sample of their close friends (N = 202), we conducted a one-year longitudinal study to examine key questions regarding sexual and drug-use norms. The results provide validation of social norms governing sexual behavior, condom use, and substance use among friendship groups. These norms had strong to moderate homogeneity, and both normative strength and homogeneity were relatively stable over a one-year period, independent of changes in group membership. The data further suggest that sex and substance-use norms may operate as a normative set. Similar to studies of adults, we identified three distinct “norm-based” social strata in our sample. Together, our findings suggest that the norms investigated are valid targets for health promotion efforts, and such efforts may benefit from tailoring programs to the normative sets that make up the different social strata in a given adolescent community. PMID:23072891

  20. Tissue-aware RNA-Seq processing and normalization for heterogeneous and sparse data.

    PubMed

    Paulson, Joseph N; Chen, Cho-Yi; Lopes-Ramos, Camila M; Kuijjer, Marieke L; Platig, John; Sonawane, Abhijeet R; Fagny, Maud; Glass, Kimberly; Quackenbush, John

    2017-10-03

    Although ultrahigh-throughput RNA-Sequencing has become the dominant technology for genome-wide transcriptional profiling, the vast majority of RNA-Seq studies typically profile only tens of samples, and most analytical pipelines are optimized for these smaller studies. However, projects are generating ever-larger data sets comprising RNA-Seq data from hundreds or thousands of samples, often collected at multiple centers and from diverse tissues. These complex data sets present significant analytical challenges due to batch and tissue effects, but provide the opportunity to revisit the assumptions and methods that we use to preprocess, normalize, and filter RNA-Seq data - critical first steps for any subsequent analysis. We find that analysis of large RNA-Seq data sets requires both careful quality control and the need to account for sparsity due to the heterogeneity intrinsic in multi-group studies. We developed the Yet Another RNA Normalization software pipeline (YARN), which includes quality control and preprocessing, gene filtering, and normalization steps designed to facilitate downstream analysis of large, heterogeneous RNA-Seq data sets, and we demonstrate its use with data from the Genotype-Tissue Expression (GTEx) project. An R package instantiating YARN is available at http://bioconductor.org/packages/yarn.
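    Gene filtering and normalization steps of the kind the abstract describes can be sketched generically. This is a minimal illustration of two standard operations (total-count filtering and classic quantile normalization, ignoring ties), not necessarily the exact methods YARN implements; the counts are invented:

```python
def filter_low_expression(counts, min_total=10):
    """Drop genes whose summed counts across all samples fall below
    min_total; `counts` maps gene -> list of per-sample counts."""
    return {g: row for g, row in counts.items() if sum(row) >= min_total}

def quantile_normalize(counts):
    """Classic quantile normalization: every sample is given the same
    empirical distribution by replacing each value with the mean of the
    values holding the same rank across samples (ties not handled)."""
    genes = list(counts)
    n_samples = len(counts[genes[0]])
    columns = [[counts[g][j] for g in genes] for j in range(n_samples)]
    sorted_cols = [sorted(col) for col in columns]
    rank_means = [sum(col[r] for col in sorted_cols) / n_samples
                  for r in range(len(genes))]
    out = {}
    for j, col in enumerate(columns):
        order = sorted(range(len(genes)), key=lambda i: col[i])
        for r, i in enumerate(order):
            out.setdefault(genes[i], [0.0] * n_samples)[j] = rank_means[r]
    return out
```

    After normalization, every sample's column contains exactly the same set of values, only permuted according to each sample's original ranking, which removes sample-to-sample distributional (e.g. sequencing-depth) differences.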

  1. Salivary hormone and immune responses to three resistance exercise schemes in elite female athletes.

    PubMed

    Nunes, João A; Crewther, Blair T; Ugrinowitsch, Carlos; Tricoli, Valmor; Viveiros, Luís; de Rose, Dante; Aoki, Marcelo S

    2011-08-01

    This study examined the salivary hormone and immune responses of elite female athletes to 3 different resistance exercise schemes. Fourteen female basketball players each performed an endurance scheme (ES-4 sets of 12 reps, 60% of 1 repetition maximum (1RM) load, 1-minute rest periods), a strength-hypertrophy scheme (SHS-1 set of 5RM, 1 set of 4RM, 1 set of 3RM, 1 set of 2RM, and 1 set of 1RM with 3-minute rest periods, followed by 3 sets of 10RM with 2-minute rest periods) and a power scheme (PS-3 sets of 10 reps, 50% 1RM load, 3-minute rest periods) using the same exercises (bench press, squat, and biceps curl). Saliva samples were collected at 07:30 hours, pre-exercise (Pre) at 09:30 hours, postexercise (Post), and at 17:30 hours. Matching samples were also taken on a nonexercising control day. The samples were analyzed for testosterone, cortisol (C), and immunoglobulin A concentrations. The total volume of load lifted differed among the 3 schemes (SHS > ES > PS, p < 0.05). Postexercise C concentrations increased after all schemes, compared to control values (p < 0.05). In the SHS, the postexercise C response was also greater than pre-exercise data (p < 0.05). The current findings confirm that high-volume resistance exercise schemes can stimulate greater C secretion because of higher metabolic demand. In terms of practical applications, acute changes in C may be used to evaluate the metabolic demands of different resistance exercise schemes, or as a tool for monitoring training strain.

  2. Association analysis of the SLC22A11 (organic anion transporter 4) and SLC22A12 (urate transporter 1) urate transporter locus with gout in New Zealand case-control sample sets reveals multiple ancestral-specific effects

    PubMed Central

    2013-01-01

    Introduction There is inconsistent association between urate transporters SLC22A11 (organic anion transporter 4 (OAT4)) and SLC22A12 (urate transporter 1 (URAT1)) and risk of gout. New Zealand (NZ) Māori and Pacific Island people have higher serum urate and more severe gout than European people. The aim of this study was to test genetic variation across the SLC22A11/SLC22A12 locus for association with risk of gout in NZ sample sets. Methods A total of 12 single nucleotide polymorphism (SNP) variants in four haplotype blocks were genotyped using TaqMan® and Sequenom MassArray in 1003 gout cases and 1156 controls. All cases had gout according to the 1977 American Rheumatism Association criteria. Association analysis of single markers and haplotypes was performed using PLINK and Stata. Results A haplotype block 1 SNP (rs17299124) (upstream of SLC22A11) was associated with gout in less admixed Polynesian sample sets, but not European Caucasian (odds ratio; OR = 3.38, P = 6.1 × 10-4; OR = 0.91, P = 0.40, respectively) sample sets. A protective block 1 haplotype caused the rs17299124 association (OR = 0.28, P = 6.0 × 10-4). Within haplotype block 2 (SLC22A11) we could not replicate previous reports of association of rs2078267 with gout in European Caucasian (OR = 0.98, P = 0.82) sample sets, however this SNP was associated with gout in Polynesian (OR = 1.51, P = 0.022) sample sets. Within haplotype block 3 (including SLC22A12) analysis of haplotypes revealed a haplotype with trans-ancestral protective effects (OR = 0.80, P = 0.004), and a second haplotype conferring protection in less admixed Polynesian sample sets (OR = 0.63, P = 0.028) but risk in European Caucasian samples (OR = 1.33, P = 0.039). Conclusions Our analysis provides evidence for multiple ancestral-specific effects across the SLC22A11/SLC22A12 locus that presumably influence the activity of OAT4 and URAT1 and risk of gout. 
Further fine mapping of the association signal is needed using trans-ancestral re-sequence data. PMID:24360580
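    The single-marker odds ratios quoted above come from 2×2 case-control tables; a minimal sketch with Woolf's log-based 95% confidence interval follows. The counts in the usage example are invented for illustration, and the study's actual analysis (in PLINK and Stata) accounted for ancestry, which this sketch does not:

```python
import math

def odds_ratio(case_exposed, case_unexposed, ctrl_exposed, ctrl_unexposed):
    """Odds ratio for a 2x2 table with a Woolf (log-scale) 95% CI.
    Returns (OR, (lower bound, upper bound))."""
    or_ = (case_exposed * ctrl_unexposed) / (case_unexposed * ctrl_exposed)
    # standard error of ln(OR): sqrt of summed reciprocal cell counts
    se = math.sqrt(sum(1.0 / n for n in
                       (case_exposed, case_unexposed,
                        ctrl_exposed, ctrl_unexposed)))
    lo = or_ * math.exp(-1.96 * se)
    hi = or_ * math.exp(1.96 * se)
    return or_, (lo, hi)
```

    An OR above 1 with a confidence interval excluding 1 indicates a risk effect, below 1 a protective effect, mirroring the block-1 and block-3 haplotype results in the abstract.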

  3. A Phylogenomic Approach Based on PCR Target Enrichment and High Throughput Sequencing: Resolving the Diversity within the South American Species of Bartsia L. (Orobanchaceae)

    PubMed Central

    Tank, David C.

    2016-01-01

    Advances in high-throughput sequencing (HTS) have allowed researchers to obtain large amounts of biological sequence information at speeds and costs unimaginable only a decade ago. Phylogenetics, and the study of evolution in general, is quickly migrating towards using HTS to generate larger and more complex molecular datasets. In this paper, we present a method that utilizes microfluidic PCR and HTS to generate large amounts of sequence data suitable for phylogenetic analyses. The approach uses the Fluidigm Access Array System (Fluidigm, San Francisco, CA, USA) and two sets of PCR primers to simultaneously amplify 48 target regions across 48 samples, incorporating sample-specific barcodes and HTS adapters (2,304 unique amplicons per Access Array). The final product is a pooled set of amplicons ready to be sequenced, and thus, there is no need to construct separate, costly genomic libraries for each sample. Further, we present a bioinformatics pipeline to process the raw HTS reads to either generate consensus sequences (with or without ambiguities) for every locus in every sample or—more importantly—recover the separate alleles from heterozygous target regions in each sample. This is important because it adds allelic information that is well suited for coalescent-based phylogenetic analyses that are becoming very common in conservation and evolutionary biology. To test our approach and bioinformatics pipeline, we sequenced 576 samples across 96 target regions belonging to the South American clade of the genus Bartsia L. in the plant family Orobanchaceae. After sequencing cleanup and alignment, the experiment resulted in ~25,300bp across 486 samples for a set of 48 primer pairs targeting the plastome, and ~13,500bp for 363 samples for a set of primers targeting regions in the nuclear genome. Finally, we constructed a combined concatenated matrix from all 96 primer combinations, resulting in a combined aligned length of ~40,500bp for 349 samples. PMID:26828929

  4. Using multivariate regression modeling for sampling and predicting chemical characteristics of mixed waste in old landfills.

    PubMed

    Brandstätter, Christian; Laner, David; Prantl, Roman; Fellner, Johann

    2014-12-01

    Municipal solid waste landfills pose a threat to the environment and human health, especially old landfills which lack facilities for collection and treatment of landfill gas and leachate. Consequently, missing information about emission flows prevents site-specific environmental risk assessments. To overcome this gap, combining waste sampling and analysis with statistical modeling is one option for estimating present and future emission potentials. Optimizing the tradeoff between investigation costs and reliable results requires knowledge about both the number of samples to be taken and the variables to be analyzed. This article aims to identify the optimal number of waste samples and variables in order to predict a larger set of variables. Therefore, we introduce a multivariate linear regression model and tested its applicability in two case studies. Landfill A was used to set up and calibrate the model based on 50 waste samples and twelve variables. The calibrated model was applied to Landfill B, including 36 waste samples and twelve variables with four predictor variables. The case study results are twofold: first, a reliable and accurate prediction of the twelve variables can be achieved with knowledge of four predictor variables (LOI, EC, pH and Cl). For Landfill B, only ten full measurements would be needed for a reliable prediction of most response variables. The four predictor variables exhibit comparably low analytical costs in comparison to the full set of measurements. This cost reduction could be used to increase the number of samples, yielding an improved understanding of the spatial waste heterogeneity in landfills. In conclusion, future application of the developed model could improve the reliability of predicted emission potentials. The model could become a standard screening tool for old landfills if its applicability and reliability were tested in additional case studies. Copyright © 2014 Elsevier Ltd. All rights reserved.
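    The idea of predicting many response variables from a few cheap predictors is ordinary multivariate least squares; a generic sketch (with invented toy data, not the paper's calibrated landfill model) solves the normal equations (X'X)B = X'Y with an intercept column:

```python
def fit_multivariate_lr(X, Y):
    """Ordinary least-squares fit of several responses on a few
    predictors at once: returns coefficient matrix B (rows: intercept
    then predictors; columns: responses) via Gauss-Jordan elimination."""
    Xa = [[1.0] + list(row) for row in X]            # prepend intercept
    p, m, n = len(Xa[0]), len(Y[0]), len(Xa)
    xtx = [[sum(r[i] * r[j] for r in Xa) for j in range(p)] for i in range(p)]
    xty = [[sum(Xa[s][i] * Y[s][k] for s in range(n)) for k in range(m)]
           for i in range(p)]
    A = [xtx[i] + xty[i] for i in range(p)]          # augmented system
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        f = A[col][col]
        A[col] = [v / f for v in A[col]]
        for r in range(p):
            if r != col:
                A[r] = [a - A[r][col] * b for a, b in zip(A[r], A[col])]
    return [row[p:] for row in A]

def predict(B, x):
    """Predict all responses for one sample's predictor values."""
    xa = [1.0] + list(x)
    return [sum(xa[i] * B[i][k] for i in range(len(xa)))
            for k in range(len(B[0]))]
```

    With four cheap predictors per sample, one fitted B matrix yields estimates for the full panel of response variables at once, which is exactly the cost-saving logic of the abstract.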

  5. Gene-environment interaction in the etiology of mathematical ability using SNP sets.

    PubMed

    Docherty, Sophia J; Kovas, Yulia; Plomin, Robert

    2011-01-01

    Mathematics ability and disability are as heritable as other cognitive abilities and disabilities; however, their genetic etiology has received relatively little attention. In our recent genome-wide association study of mathematical ability in 10-year-old children, 10 SNP associations were nominated from scans of pooled DNA and validated in an individually genotyped sample. In this paper, we use a 'SNP set' composite of these 10 SNPs to investigate gene-environment (GE) interaction, examining whether the association between the 10-SNP set and mathematical ability differs as a function of ten environmental measures in the home and school in a sample of 1888 children with complete data. We found two significant GE interactions for environmental measures in the home and the school, both in the direction of the diathesis-stress type of GE interaction: the 10-SNP set was more strongly associated with mathematical ability in chaotic homes and when parents were negative.

  6. Comparison of pH and refractometry index with calcium concentrations in preparturient mammary gland secretions of mares.

    PubMed

    Korosue, Kenji; Murase, Harutaka; Sato, Fumio; Ishimaru, Mutsuki; Kotoyori, Yasumitsu; Tsujimura, Koji; Nambo, Yasuo

    2013-01-15

    To test the usefulness of measuring pH and refractometry index, compared with measuring calcium carbonate concentration, of preparturient mammary gland secretions for predicting parturition in mares. Evaluation study. 27 pregnant Thoroughbred mares. Preparturient mammary gland secretion samples were obtained once or twice daily from 10 days prior to foaling until parturition. The samples were analyzed for calcium carbonate concentration with a water hardness kit (151 samples), pH with pH test paper (222 samples), and refractometry index with a Brix refractometer (214 samples). The sensitivity, specificity, and positive and negative predictive values (PPV and NPV) of each test were calculated to evaluate prediction of parturition. The PPV within 72 hours and the NPV within 24 hours for calcium carbonate concentration determination (standard value set to 400 μg/g) were 93.8% and 98.3%, respectively. The PPV within 72 hours and the NPV within 24 hours for the pH test (standard value set at 6.4) were 97.9% and 99.4%, respectively. The PPV within 72 hours and the NPV within 24 hours for the Brix test (standard value set to 20%) were 73.2% and 96.5%, respectively. Results suggested that the pH test with the standard value set at a pH of 6.4 would be useful in the management of preparturient mares by predicting when mares are not ready to foal. This was accomplished with effectiveness equal to that of measuring calcium carbonate concentration with a water hardness kit.
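    The four screening-test metrics reported above all come from a 2×2 table of predictions against outcomes; a minimal sketch (the counts in the usage example are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening-test metrics from true/false positive and
    negative counts: sensitivity, specificity, PPV, and NPV."""
    return {
        "sensitivity": tp / (tp + fn),   # of true foalings, how many flagged
        "specificity": tn / (tn + fp),   # of non-foalings, how many cleared
        "ppv": tp / (tp + fp),           # flagged samples that foaled
        "npv": tn / (tn + fn),           # cleared samples that did not foal
    }
```

    A high NPV, as the pH test shows here, is what makes a test useful for ruling out imminent foaling: a negative result means the mare is very unlikely to foal in the next 24 hours.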

  7. Methodological framework for projecting the potential loss of intraspecific genetic diversity due to global climate change

    PubMed Central

    2012-01-01

    Background While research on the impact of global climate change (GCC) on ecosystems and species is flourishing, a fundamental component of biodiversity – molecular variation – has not yet received its due attention in such studies. Here we present a methodological framework for projecting the loss of intraspecific genetic diversity due to GCC. Methods The framework consists of multiple steps that combine 1) hierarchical genetic clustering methods to define comparable units of inference, 2) species accumulation curves (SAC) to infer sampling completeness, and 3) species distribution modelling (SDM) to project the genetic diversity loss under GCC. We suggest procedures for existing data sets as well as specifically designed studies. We illustrate the approach with two worked examples from a land snail (Trochulus villosus) and a caddisfly (Smicridea (S.) mucronata). Results Sampling completeness was diagnosed on the third coarsest haplotype clade level for T. villosus and the second coarsest for S. mucronata. For both species, a substantial species range loss was projected under the chosen climate scenario. However, despite substantial differences in data set quality concerning spatial sampling and sampling depth, no loss of haplotype clades due to GCC was predicted for either species. Conclusions The suggested approach presents a feasible method to tap the rich resources of existing phylogeographic data sets and guide the design and analysis of studies explicitly designed to estimate the impact of GCC on a currently still neglected level of biodiversity. PMID:23176586

  8. The effect of CNC and manual laser machining on electrical resistance of HDPE/MWCNT composite

    NASA Astrophysics Data System (ADS)

    Mohammadi, Fatemeh; Farshbaf Zinati, Reza; Fattahi, A. M.

    2018-05-01

    In this study, the electrical conductivity of high-density polyethylene (HDPE)/multi-walled carbon nanotube (MWCNT) composite was investigated after laser machining. To this end, nano-composite samples produced by a plastic injection process were laser machined with various combinations of input parameters such as feed rate (35, 45, and 55 mm/min), feed angle with injection flow direction (0°, 45°, and 90°), and MWCNT content (0.5, 1, and 1.5 wt%). The angle between the laser feed and the injected flow direction was set via either of two methods: CNC programming and manual setting. The results showed that, in the manual setting, both the angle between the laser line and the melt flow direction and the feed rate had statistically significant physical effects on the electrical resistance of the samples. Maximum conductivity was observed when the angle between the laser line and the melt flow direction was set to 90° in the manual setting, and at a feed rate of 55 mm/min in both CNC programming and manual setting.

  9. Urban Background Study Webinar

    EPA Pesticide Factsheets

    This webinar presented the methodology developed for collecting a city-wide or urban area background data set, general results of southeastern cities data collected to date, and a case study that used this sampling method.

  10. NHEXAS PHASE I MARYLAND STUDY--PAHS IN AIR ANALYTICAL RESULTS

    EPA Science Inventory

    The PAHs in Air data set contains analytical results for measurements of up to 11 PAHs in 127 air samples over 51 households. Twenty-four-hour samples were taken over a one-week period using a continuous pump and solenoid apparatus pumping a standardized air volume through an UR...

  11. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--PESTICIDES IN DERMAL ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides in Dermal Wipes data set contains analytical results for measurements of up to 8 pesticides in 86 dermal wipe samples over 86 households. Each sample was collected from the primary respondent within each household. The Dermal/Pesticide hand wipe was collected 7 d...

  12. NHEXAS PHASE I ARIZONA STUDY--METALS IN SOIL ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Soil data set contains analytical results for measurements of up to 11 metals in 551 soil samples over 392 households. Samples were taken by collecting surface soil in the yard and next to the foundation from each residence. The primary metals of interest include ...

  13. NHEXAS PHASE I ARIZONA STUDY--PESTICIDES IN DUST ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides in Dust data set contains analytical results for measurements of up to 3 pesticides in 437 dust samples over 278 households. Samples were taken by collecting dust from the indoor floor areas in the main room and in the bedroom of the primary resident. The primary...

  14. Data Interpretation: Using Probability

    ERIC Educational Resources Information Center

    Drummond, Gordon B.; Vowler, Sarah L.

    2011-01-01

    Experimental data are analysed statistically to allow researchers to draw conclusions from a limited set of measurements. The hard fact is that researchers can never be certain that measurements from a sample will exactly reflect the properties of the entire group of possible candidates available to be studied (although using a sample is often the…

  15. NHEXAS PHASE I MARYLAND STUDY--PESTICIDES IN WATER ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides in Water data set contains analytical results for measurements of up to 10 pesticides in 388 water samples over 80 households. One-liter samples of tap water were collected after a two-minute flush from the tap identified by the resident as that most commonly used...

  16. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--METALS IN WATER ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Water data set contains analytical results for measurements of up to 11 metals in 98 water samples over 61 households. Sample collection was undertaken at the tap and any additional drinking water source used extensively within each residence. The primary metals o...

  17. NHEXAS PHASE I MARYLAND STUDY--METALS IN AIR ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Air data set contains analytical results for measurements of up to 4 metals in 458 air samples over 79 households. Twenty-four-hour samples were taken over a one-week period using a continuous pump and solenoid apparatus by pumping a standardized air volume through...

  18. NHEXAS PHASE I MARYLAND STUDY--PESTICIDES IN FOOD ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides in Duplicate Diet Food data set contains analytical results for measurements of up to 10 pesticides in 682 food samples over 80 households. Each sample was collected as a duplicate of the food consumed by the primary respondent during a four-day period commencing ...

  19. NHEXAS PHASE I MARYLAND STUDY--METALS IN FOOD ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Duplicate Diet Food data set contains analytical results for measurements of up to 11 metals in 773 food samples over 80 households. Each sample was collected as a duplicate of the food consumed by the primary respondent during a four-day period commencing with the...

  20. NHEXAS PHASE I MARYLAND STUDY--PESTICIDES IN AIR ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides in Air data set contains analytical results for measurements of up to 9 pesticides in 127 air samples over 51 households. Samples were taken by pumping standardized air volumes through URG impactors with a 10 um cutpoint and polyurethane foam (PUF) filters at indo...

  1. Maintaining Equivalent Cut Scores for Small Sample Test Forms

    ERIC Educational Resources Information Center

    Dwyer, Andrew C.

    2016-01-01

    This study examines the effectiveness of three approaches for maintaining equivalent performance standards across test forms with small samples: (1) common-item equating, (2) resetting the standard, and (3) rescaling the standard. Rescaling the standard (i.e., applying common-item equating methodology to standard setting ratings to account for…

  2. CTEPP-OH DATA ANALYTICAL RESULTS ORGANIZED BY CHEMICAL AND MEDIA

    EPA Science Inventory

    This data set contains the field sample data by chemical and matrix for CTEPP-OH. The data are organized at the sample and chemical level.

    The Children’s Total Exposure to Persistent Pesticides and Other Persistent Pollutants (CTEPP) study was one of the largest aggregate exposure ...

  3. Training Objectives, Transfer, Validation and Evaluation: A Sri Lankan Study

    ERIC Educational Resources Information Center

    Wickramasinghe, Vathsala M.

    2006-01-01

    Using a stratified random sample, this paper examines the training practices of setting objectives, transfer, validation and evaluation in Sri Lanka. The paper further sets out to compare those practices across local, foreign and joint-venture companies based on the assumption that there may be significant differences across companies of different…

  4. Salient Predictors of School Dropout among Secondary Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Doren, Bonnie; Murray, Christopher; Gau, Jeff M.

    2014-01-01

    The purpose of this study was to identify the unique contributions of a comprehensive set of predictors and the most salient predictors of school dropout among a nationally representative sample of students with learning disabilities (LD). A comprehensive set of theoretically and empirically relevant factors was selected for examination. Analyses…

  5. In-Service Preschool Teachers' Thoughts about Technology and Technology Use in Early Educational Settings

    ERIC Educational Resources Information Center

    Kara, Nuri; Cagiltay, Kursat

    2017-01-01

    The purpose of this study is to understand in-service preschool teachers' thoughts about technology and technology use in early educational settings. Semi-structured interviews were conducted with 18 in-service preschool teachers. These teachers were selected from public and private preschools. Convenient sampling was applied because teachers who…

  6. The Impact of Problem Sets on Student Learning

    ERIC Educational Resources Information Center

    Kim, Myeong Hwan; Cho, Moon-Heum; Leonard, Karen Moustafa

    2012-01-01

    The authors examined the role of problem sets on student learning in university microeconomics. A total of 126 students participated in the study in consecutive years. An independent samples t test showed that students who were not given answer keys outperformed students who were given answer keys. Multiple regression analysis showed that, along with…

  7. High pressure system for 3-D study of elastic anisotropy

    NASA Astrophysics Data System (ADS)

    Lokajicek, T.; Pros, Z.; Klima, K.

    2003-04-01

    A new high pressure system was designed for the study of elastic anisotropy of condensed matter under high confining pressure up to 700 MPa. Dynamic and static parameters could be measured simultaneously: a) dynamic parameters by ultrasonic sounding, and b) static parameters by measuring the deformation of a spherical sample. The measurement is carried out on spherical samples of diameter 50 +/- 0.01 mm. A higher confining pressure was reached due to the new construction of the sample positioning unit. The positioning unit is equipped with two Portecap step motors, which are located inside the vessel and make it possible to rotate the sphere and a couple of piezoceramic transducers. Sample deformation is measured in the same direction as the ultrasonic signal travel time. Only electric leads connect the inner part of the high pressure vessel with the surrounding environment. The experimental set up enables: simultaneous P-wave ultrasonic sounding, measurement of current sample deformation at the sounding points, measurement of the current value of confining pressure, and measurement of the current stress-medium temperature. An air-driven Haskel high pressure pump is used to produce confining pressures up to 700 MPa. Ultrasonic signals are recorded by an Agilent 54562 digital scope with a sampling frequency of 100 MHz. Control and measuring software was developed under the Agilent VEE software environment running under the MS Win 2000 operating system. The measuring set up was tested by measurement of monomineral spherical samples of quartz and corundum, both of trigonal symmetry. The measurements showed that the P-wave velocity range of quartz was 5.7-7.0 km/sec and that of corundum was 9.7-10.9 km/sec. High-pressure-resistant Mesing LVDT transducers together with an Intronix electronic unit were used to monitor sample deformation with an accuracy of 0.1 micron. All test measurements proved the good accuracy of the whole measuring set up.
This project was supported by Grant Agency of the Czech Republic No.: 205/01/1430.

  8. Remote temperature-set-point controller

    DOEpatents

    Burke, W.F.; Winiecki, A.L.

    1984-10-17

    An instrument is described for carrying out mechanical strain tests on metallic samples with the addition of means for varying the temperature with strain. The instrument includes opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.

  9. Remote temperature-set-point controller

    DOEpatents

    Burke, William F.; Winiecki, Alan L.

    1986-01-01

    An instrument for carrying out mechanical strain tests on metallic samples with the addition of an electrical system for varying the temperature with strain, the instrument including opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.

  10. Inhibition of Sodium Benzoate on Stainless Steel in Tropical Seawater

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seoh, S. Y.; Senin, H. B.; Nik, W. N. Wan

    2007-05-09

    The inhibition of corrosion of stainless steel by sodium benzoate was studied in seawater at room temperature. Three sets of samples were immersed in seawater containing sodium benzoate at concentrations of 0.3 M, 0.6 M, and 1.0 M, respectively. One set of samples was immersed in seawater without any added sodium benzoate. The highest corrosion rate was observed for the stainless steel immersed in seawater with no inhibitor added. As the concentration of sodium benzoate increased, the corrosion rate decreased. The results show that the addition of 1.0 M sodium benzoate to the seawater samples gave inhibition efficiencies of ≥ 90%.
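Inhibition efficiencies of the kind quoted above (≥ 90%) are conventionally computed from corrosion rates measured with and without the inhibitor present; a minimal sketch with hypothetical rates (not the study's measurements):

```python
def inhibition_efficiency(rate_blank, rate_inhibited):
    """Inhibition efficiency (%) from the corrosion rate without the
    inhibitor (rate_blank) and with it present (rate_inhibited)."""
    return 100.0 * (rate_blank - rate_inhibited) / rate_blank

# Hypothetical corrosion rates (mm/year) for illustration only.
eff = inhibition_efficiency(0.50, 0.04)
```

The same formula applies whether the "rate" is a weight-loss rate, a corrosion current density, or any other proportional measure of attack.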

  11. Eradication of Cytomegalovirus from Human Milk by Microwave Irradiation: A Pilot Study.

    PubMed

    Ben-Shoshan, Moshe; Mandel, Dror; Lubetzky, Ronit; Dollberg, Shaul; Mimouni, Francis B

    2016-05-01

    Cytomegalovirus (CMV)-infected human milk (HM) can lead to significant CMV morbidity and mortality in preterm very-low-birth-weight infants. The eradication of CMV in HM while preserving its properties poses a major clinical challenge. We aimed to compare two methods used to neutralize the virus in HM, one recognized as partially effective (freezing) and another not tested to date (microwave exposure). We sampled HM from 31 CMV-seropositive mothers whose infants were hospitalized at the Lis Maternity Hospital. Fifteen samples that were positive for CMV antigen were divided into five 5 mL aliquots: the first served as a control, the second was frozen at -20°C for 1 day, the third was frozen at -20°C for 3 days, and the fourth and fifth aliquots were exposed for 30 seconds to microwave radiation at a low-power setting (500 W) and a high-power setting (750 W), respectively. Only microwave radiation at a high-power setting led to complete neutralization of CMV in all samples. Low-power microwave irradiation had a 13% failure rate, while 3-day freezing and 1-day freezing had failure rates of 7% and 20%, respectively. It is possible to eradicate CMV successfully in HM by using microwave radiation at a high-power setting. Further studies are needed to evaluate the effect of microwave heating on breast milk properties.

  12. Selecting the most appropriate time points to profile in high-throughput studies

    PubMed Central

    Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv

    2017-01-01

    Biological systems are increasingly being studied by high-throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation of the expression values of the non-selected points. Further, even though the selection is only based on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high-throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972

  13. Using the Kannada version of the Connor Davidson Resilience Scale to assess resilience and its relationship with psychological distress among adolescent girls in Bangalore, India.

    PubMed

    Sidheek, K P Fasli; Satyanarayana, Veena A; Sowmya, H R; Chandra, Prabha S

    2017-12-01

    A widely used and accepted scale for assessing resilience is the Connor-Davidson Resilience Scale (CD-RISC). The aim of the present study was to establish the psychometric properties of the Kannada version of the scale and to assess the relationship between resilience and psychological distress in a sample of adolescent girls living in low-income settings. Data were obtained from a sample of 606 adolescent girls studying in a college meant for women from a socio-economically disadvantaged setting. The CD-RISC (25 item) was used to assess resilience and the Kessler Psychological Distress Scale (K10) was used to assess psychological distress. Exploratory factor analysis yielded four stable factors instead of the original five factors. Similar results have been obtained in other factor-analytic studies. A significant negative correlation was found between psychological distress and resilience. Our study shows that the CD-RISC is a valuable measure to assess resilience among adolescents in low-income settings. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Photothermal technique in cell microscopy studies

    NASA Astrophysics Data System (ADS)

    Lapotko, Dmitry; Chebot'ko, Igor; Kutchinsky, Georgy; Cherenkevitch, Sergey

    1995-01-01

    The photothermal (PT) method is applied to cell imaging and quantitative studies. Techniques for cell monitoring, imaging, and cell viability testing are developed. The method and experimental set up for optical and PT-image acquisition and analysis are described. A dual-pulsed laser set up combined with phase contrast illumination of a sample provides visualization of the temperature field or absorption structure of a sample with a spatial resolution of 0.5 micrometers. The experimental optics, hardware, and software are designed on the modular principle, so the whole set up can be adjusted for various experiments: PT-response monitoring or photothermal spectroscopy studies. The sensitivity of the PT method provides imaging of the structural elements of live (non-stained) white blood cells. The results of experiments with normal and subnormal blood cells (red blood cells, lymphocytes, neutrophils, and lymphoblasts) are reported. The obtained PT-images differ from their optical analogs and deliver additional information about cell structure. Quantitative analysis of the images was used for comparative diagnostics of cell populations. A viability test for red blood cell differentiation is described. In a study of neutrophils in normal subjects and in sarcoidosis, differences in the PT-images of cells were found.

  15. Short-Term Intra-Subject Variation in Exhaled Volatile Organic Compounds (VOCs) in COPD Patients and Healthy Controls and Its Effect on Disease Classification

    PubMed Central

    Phillips, Christopher; Mac Parthaláin, Neil; Syed, Yasir; Deganello, Davide; Claypole, Timothy; Lewis, Keir

    2014-01-01

    Exhaled volatile organic compounds (VOCs) are of interest for their potential to diagnose disease non-invasively. However, most breath VOC studies have analyzed single breath samples from an individual and assumed them to be wholly representative of the person. This provided the motivation for an investigation of the variability of breath profiles when three breath samples are taken over a short time period (two minute intervals between samples) for 118 stable patients with Chronic Obstructive Pulmonary Disease (COPD) and 63 healthy controls and analyzed by gas chromatography and mass spectrometry (GC/MS). The extent of the variation in VOC levels differed between COPD and healthy subjects and the patterns of variation differed for isoprene versus the bulk of other VOCs. In addition, machine learning approaches were applied to the breath data to establish whether these samples differed in their ability to discriminate COPD from healthy states and whether aggregation of multiple samples, into single data sets, could offer improved discrimination. The three breath samples gave similar classification accuracy to one another when evaluated separately (66.5% to 68.3% subjects classified correctly depending on the breath repetition used). Combining multiple breath samples into single data sets gave better discrimination (73.4% subjects classified correctly). Although accuracy is not sufficient for COPD diagnosis in a clinical setting, enhanced sampling and analysis may improve accuracy further. Variability in samples, and short-term effects of practice or exertion, need to be considered in any breath testing program to improve reliability and optimize discrimination. PMID:24957028

  16. Short-Term Intra-Subject Variation in Exhaled Volatile Organic Compounds (VOCs) in COPD Patients and Healthy Controls and Its Effect on Disease Classification.

    PubMed

    Phillips, Christopher; Mac Parthaláin, Neil; Syed, Yasir; Deganello, Davide; Claypole, Timothy; Lewis, Keir

    2014-05-09

    Exhaled volatile organic compounds (VOCs) are of interest for their potential to diagnose disease non-invasively. However, most breath VOC studies have analyzed single breath samples from an individual and assumed them to be wholly representative of the person. This provided the motivation for an investigation of the variability of breath profiles when three breath samples are taken over a short time period (two minute intervals between samples) for 118 stable patients with Chronic Obstructive Pulmonary Disease (COPD) and 63 healthy controls and analyzed by gas chromatography and mass spectrometry (GC/MS). The extent of the variation in VOC levels differed between COPD and healthy subjects and the patterns of variation differed for isoprene versus the bulk of other VOCs. In addition, machine learning approaches were applied to the breath data to establish whether these samples differed in their ability to discriminate COPD from healthy states and whether aggregation of multiple samples, into single data sets, could offer improved discrimination. The three breath samples gave similar classification accuracy to one another when evaluated separately (66.5% to 68.3% subjects classified correctly depending on the breath repetition used). Combining multiple breath samples into single data sets gave better discrimination (73.4% subjects classified correctly). Although accuracy is not sufficient for COPD diagnosis in a clinical setting, enhanced sampling and analysis may improve accuracy further. Variability in samples, and short-term effects of practice or exertion, need to be considered in any breath testing program to improve reliability and optimize discrimination.

  17. Phase II Trials for Heterogeneous Patient Populations with a Time-to-Event Endpoint.

    PubMed

    Jung, Sin-Ho

    2017-07-01

    In this paper, we consider a single-arm phase II trial with a time-to-event end-point. We assume that the study population has multiple subpopulations with different prognosis, but the study treatment is expected to be similarly efficacious across the subpopulations. We review a stratified one-sample log-rank test and present its sample size calculation method under some practical design settings. Our sample size method requires specification of the prevalence of subpopulations. We observe that the power of the resulting sample size is not very sensitive to misspecification of the prevalence.

  18. Covariant information-density cutoff in curved space-time.

    PubMed

    Kempf, Achim

    2004-06-04

    In information theory, the link between continuous information and discrete information is established through well-known sampling theorems. Sampling theory explains, for example, how frequency-filtered music signals are reconstructible perfectly from discrete samples. In this Letter, sampling theory is generalized to pseudo-Riemannian manifolds. This provides a new set of mathematical tools for the study of space-time at the Planck scale: theories formulated on a differentiable space-time manifold can be equivalent to lattice theories. There is a close connection to generalized uncertainty relations which have appeared in string theory and other studies of quantum gravity.
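The perfect reconstruction that sampling theorems guarantee can be illustrated with the classical Whittaker-Shannon interpolation formula; a minimal, flat-space sketch (the pseudo-Riemannian generalization in the Letter is far more involved):

```python
import math

def sinc_reconstruct(samples, dt, t):
    """Whittaker-Shannon interpolation: rebuild a band-limited signal at an
    arbitrary time t from uniform samples spaced dt seconds apart."""
    total = 0.0
    for n, x_n in enumerate(samples):
        u = (t - n * dt) / dt
        # normalized sinc kernel, sinc(0) = 1
        total += x_n * (1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u))
    return total

# A 1 Hz sine sampled at 50 Hz, far above its 2 Hz Nyquist rate.
dt = 0.02
samples = [math.sin(2 * math.pi * n * dt) for n in range(500)]
# Reconstruction at an off-grid time agrees closely with the true signal
# (exactly, in the limit of infinitely many samples).
approx = sinc_reconstruct(samples, dt, t=5.003)
```

With a finite number of samples the reconstruction is only approximate near the window edges; the theorem's exact equality holds for the infinite sum.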

  19. Counselor Educators' Perceptions of Working with Students Who Are Unwilling to Set Aside Their Religious Beliefs When Counseling Clients: A Qualitative Study

    ERIC Educational Resources Information Center

    Saussaye, Michael G.

    2012-01-01

    The purpose of this qualitative study was to explore counselor educators' perceptions of working with students unwilling to set aside their personal religious beliefs while counseling clients. Purposeful sampling was used in a snowball fashion to select participants with a minimum of one year experience as a counselor educator and who are…

  20. Cross-Study Homogeneity of Psoriasis Gene Expression in Skin across a Large Expression Range

    PubMed Central

    Kerkof, Keith; Timour, Martin; Russell, Christopher B.

    2013-01-01

    Background In psoriasis, only limited overlap between sets of genes identified as differentially expressed (psoriatic lesional [PP] vs. psoriatic non-lesional [PN]) was found using statistical and fold-change cut-offs. To provide a framework for utilizing prior psoriasis data sets we sought to understand the consistency of those sets. Methodology/Principal Findings Microarray expression profiling and qRT-PCR were used to characterize gene expression in PP and PN skin from psoriasis patients. cDNA (three new data sets) and cRNA hybridization (four existing data sets) data were compared using a common analysis pipeline. Agreement between data sets was assessed using varying qualitative and quantitative cut-offs to generate a differentially expressed gene (DEG) list in a source data set and then using other data sets to validate the list. Concordance increased from 67% across all probe sets to over 99% across more than 10,000 probe sets when statistical filters were employed. The fold-change behavior of individual genes tended to be consistent across the multiple data sets. We found that genes with <2-fold change values were quantitatively reproducible between pairs of data sets. In a subset of transcripts with a role in inflammation, changes detected by microarray were confirmed by qRT-PCR with high concordance. For transcripts with both PN and PP levels within the microarray dynamic range, microarray and qRT-PCR were quantitatively reproducible, including minimal fold-changes in IL13, TNFSF11, and TNFRSF11B and genes with >10-fold changes in either direction such as CHRM3, IL12B and IFNG. Conclusions/Significance Gene expression changes in psoriatic lesions were consistent across different studies, despite differences in patient selection, sample handling, and microarray platforms, although agreement was stronger within platforms than between them.
We could use cut-offs as low as log10(ratio) = 0.1 (fold-change = 1.26), generating larger gene lists that validate on independent data sets. The reproducibility of PP signatures across data sets suggests that different sample sets can be productively compared. PMID:23308107
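The correspondence between a log10(ratio) cut-off and a linear fold-change quoted above is a one-line conversion; a minimal sketch:

```python
def log10_ratio_to_fold_change(log_ratio):
    """Convert a log10 expression ratio to a linear fold-change magnitude.
    The absolute value makes up- and down-regulation symmetric."""
    return 10 ** abs(log_ratio)

# The cut-off quoted above: log10(ratio) = 0.1 corresponds to ~1.26-fold.
fc = log10_ratio_to_fold_change(0.1)
```

The symmetry is the reason log ratios are preferred for thresholding: a 2-fold increase (log10 ≈ 0.30) and a 2-fold decrease (log10 ≈ -0.30) sit at equal distances from zero.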

  1. Set Shifting Among Adolescents with Anorexia Nervosa

    PubMed Central

    Fitzpatrick, Kathleen Kara; Darcy, Alison; Colborn, Danielle; Gudorf, Caroline; Lock, James

    2012-01-01

    Objective Set shifting difficulties are documented for adults with anorexia nervosa (AN). However, AN typically onsets in adolescence, and it is unclear whether set-shifting difficulties are a result of chronic AN or present earlier in its course. This study examined whether adolescents with short-duration AN demonstrated set shifting difficulties compared to healthy controls (HC). Method Data on set shifting from the Delis-Kaplan Executive Functioning System (DKEFS) and Wisconsin Card Sort Task (WCST), as well as measures of eating psychopathology, were collected from 32 adolescent inpatients with AN and compared to those from 22 HCs. Results There were no differences in set-shifting in adolescents with AN compared to HCs on most measures. Conclusion The findings suggest that the set-shifting difficulties observed in adults may be a consequence of chronic AN. Future studies should explore set-shifting difficulties in a larger sample of adolescents with AN to determine whether there is a sub-set of adolescents with these difficulties and to determine any relationship of set-shifting to the development of a chronic form of AN. PMID:22692985

  2. [Tobacco quality analysis of industrial classification of different years using near-infrared (NIR) spectrum].

    PubMed

    Wang, Yi; Xiang, Ma; Wen, Ya-Dong; Yu, Chun-Xia; Wang, Luo-Ping; Zhao, Long-Lian; Li, Jun-Hui

    2012-11-01

    In this study, tobacco quality analysis of the main industrial classifications across different years was carried out using spectrum projection and correlation methods. The data were near-infrared (NIR) spectra from Hongta Tobacco (Group) Co., Ltd. 5730 tobacco leaf industrial classification samples from Yuxi in Yunnan Province, collected from 2007 to 2010, were measured by near-infrared spectroscopy; the samples came from different plant parts and colors and all belonged to the tobacco variety HONGDA. When the samples within a year were divided randomly into analysis and verification sets at a 2:1 ratio, the verification set corresponded with the analysis set under spectrum projection, with correlation coefficients above 0.98. The correlation coefficients between any two different years under spectrum projection were above 0.97; the highest was between 2008 and 2009 and the lowest between 2007 and 2010. The study also presented a method to obtain quantitative similarity values for different industrial classification samples. These similarity and consistency values are instructive for the blending and substitution of tobacco leaf.
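The correlation coefficients cited above are ordinary Pearson correlations between spectra (or their projections); a minimal sketch on toy vectors (the values below are illustrative, not NIR measurements):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equally long spectra."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two toy "spectra": the second is a scaled, offset copy of the first,
# so their Pearson correlation is exactly 1.
r = pearson([0.11, 0.25, 0.43, 0.38, 0.20],
            [0.32, 0.60, 0.96, 0.86, 0.50])
```

Because the coefficient is invariant to scaling and offset, it measures shape agreement between spectra rather than absolute intensity agreement.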

  3. Imbalanced learning for pattern recognition: an empirical study

    NASA Astrophysics Data System (ADS)

    He, Haibo; Chen, Sheng; Man, Hong; Desai, Sachi; Quoraishee, Shafik

    2010-10-01

    The imbalanced learning problem (learning from imbalanced data) presents a significant new challenge to the pattern recognition and machine learning community because in most instances real-world data is imbalanced. When considering military applications, the imbalanced learning problem becomes much more critical because such skewed distributions normally carry the most interesting and critical information. This critical information is necessary to support the decision-making process in battlefield scenarios, such as anomaly or intrusion detection. The fundamental issue with imbalanced learning is the ability of imbalanced data to compromise the performance of standard learning algorithms, which assume balanced class distributions or equal misclassification penalty costs. Therefore, when presented with complex imbalanced data sets, these algorithms may not be able to properly represent the distributive characteristics of the data. In this paper we present an empirical study of several popular imbalanced learning algorithms on an army-relevant data set. Specifically, we conduct various experiments with SMOTE (Synthetic Minority Over-Sampling Technique), ADASYN (Adaptive Synthetic Sampling), SMOTEBoost (Synthetic Minority Over-Sampling in Boosting), and AdaCost (Misclassification Cost-Sensitive Boosting method) schemes. Detailed experimental settings and simulation results are presented in this work, and a brief discussion of future research opportunities/challenges is also presented.
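    The over-sampling idea behind SMOTE can be sketched in a few lines: each synthetic point interpolates between a minority-class sample and one of its k nearest minority-class neighbours. The function and toy data below are illustrative, not the authors' implementation:

```python
import math
import random

def smote(minority, n_synthetic, k=3, seed=0):
    """Synthetic minority over-sampling: each new point interpolates
    between a minority sample and one of its k nearest minority-class
    neighbours (the core idea of SMOTE)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        # brute-force k nearest neighbours among the other minority points
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: math.dist(x, p))[:k]
        nn = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nn)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (1.1, 1.2)]
print(smote(minority, n_synthetic=8))
```

    Because the synthetic points lie on segments between existing minority samples, they stay inside the minority region rather than duplicating observed points, which is what distinguishes SMOTE from plain random over-sampling.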

  4. Intraosseous samples can be used for creatinine measurements--an experimental study in the anaesthetised pig.

    PubMed

    Strandberg, Gunnar; Lipcsey, Miklós; Eriksson, Mats; Larsson, Anders

    2014-01-01

    Intraosseous (IO) access is a valuable tool in prehospital locations and in emergency departments when other forms of vascular access are unavailable. Creatinine is often used for dose adjustment of drugs that may be administered through intraosseous cannulae. We aimed to study the possibility of analysing creatinine in intraosseous samples and study the accuracy and precision of such measurements. Eight pigs with endotoxin induced septic shock were sampled hourly for six hours and analysed for plasma creatinine. Samples were collected from arterial, venous, and IO cannulae. There was an increase in creatinine values during the later part of the experiment. The coefficients of variation between the three sampling sites were less than 10% at all sampling times. Based on our findings intraosseous samples can be used for creatinine determination in emergency settings.

  5. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Youngrok

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into it. Finite mixture models can represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; this impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, data that are not completely unlabeled but carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
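    The soft-decomposition that EM performs on unlabelled survival times can be illustrated with the simplest case, a two-component exponential mixture. This is a minimal sketch of the E and M steps on simulated data, not the authors' algorithms (which additionally exploit partial labels):

```python
import math
import random

def em_exp_mixture(times, iters=300):
    """EM for a two-component exponential mixture: soft-decompose
    unlabelled survival times via responsibilities (E-step), then
    re-estimate the mixing weight and rates (M-step)."""
    pi, lam1, lam2 = 0.5, 1.0, 0.1  # crude initial guesses
    for _ in range(iters):
        # E-step: posterior probability each time came from component 1
        resp = []
        for t in times:
            p1 = pi * lam1 * math.exp(-lam1 * t)
            p2 = (1 - pi) * lam2 * math.exp(-lam2 * t)
            resp.append(p1 / (p1 + p2))
        # M-step: weighted maximum-likelihood updates
        w = sum(resp)
        pi = w / len(times)
        lam1 = w / sum(r * t for r, t in zip(resp, times))
        lam2 = (len(times) - w) / sum((1 - r) * t for r, t in zip(resp, times))
    return pi, lam1, lam2

rng = random.Random(42)
# unlabelled mixture: 200 short survival times (rate 2.0), 200 long (rate 0.2)
data = [rng.expovariate(2.0) for _ in range(200)] + \
       [rng.expovariate(0.2) for _ in range(200)]
pi, lam1, lam2 = em_exp_mixture(data)
print(pi, lam1, lam2)
```

    A partial label would simply constrain the corresponding responsibility (e.g., fix it to 1 for a known class member), which is the extra information the proposed variants incorporate.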

  6. Comparison of Collection Methods for Fecal Samples in Microbiome Studies

    PubMed Central

    Vogtmann, Emily; Chen, Jun; Amir, Amnon; Shi, Jianxin; Abnet, Christian C.; Nelson, Heidi; Knight, Rob; Chia, Nicholas; Sinha, Rashmi

    2017-01-01

    Prospective cohort studies are needed to assess the relationship between the fecal microbiome and human health and disease. To evaluate fecal collection methods, we determined technical reproducibility, stability at ambient temperature, and accuracy of 5 fecal collection methods (no additive, 95% ethanol, RNAlater Stabilization Solution, fecal occult blood test cards, and fecal immunochemical test tubes). Fifty-two healthy volunteers provided fecal samples at the Mayo Clinic in Rochester, Minnesota, in 2014. One set from each sample collection method was frozen immediately, and a second set was incubated at room temperature for 96 hours and then frozen. Intraclass correlation coefficients (ICCs) were calculated for the relative abundance of 3 phyla, 2 alpha diversity metrics, and 4 beta diversity metrics. Technical reproducibility was high, with ICCs for duplicate fecal samples between 0.64 and 1.00. Stability for most methods was generally high, although the ICCs were below 0.60 for 95% ethanol in metrics that were more sensitive to relative abundance. When compared with fecal samples that were frozen immediately, the ICCs were below 0.60 for the metrics that were sensitive to relative abundance; however, the remaining 2 alpha diversity and 3 beta diversity metrics were all relatively accurate, with ICCs above 0.60. In conclusion, all fecal sample collection methods appear relatively reproducible, stable, and accurate. Future studies could use these collection methods for microbiome analyses. PMID:27986704

  7. Methane Leaks from Natural Gas Systems Follow Extreme Distributions.

    PubMed

    Brandt, Adam R; Heath, Garvin A; Cooley, Daniel

    2016-11-15

    Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ∼15 000 measurements from 18 prior studies, we show that all available natural gas leakage data sets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used log-normal model distributions, we show that log-normal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of data sets to increase sample size is not recommended due to apparent deviation between sampled populations. Understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
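    The "largest 5% of leaks carry over 50% of the volume" statistic is easy to reproduce on simulated data. The sketch below assumes an illustrative Pareto shape parameter of 1.2 for the heavy-tailed case (not a value fitted in the paper) and contrasts it with a log-normal of the kind the authors criticise:

```python
import random

def top_share(leaks, frac=0.05):
    """Fraction of total leaked volume contributed by the largest `frac` of leaks."""
    ranked = sorted(leaks, reverse=True)
    k = max(1, int(len(ranked) * frac))
    return sum(ranked[:k]) / sum(ranked)

rng = random.Random(1)
n = 15000
heavy = [rng.paretovariate(1.2) for _ in range(n)]      # heavy-tailed leak sizes
light = [rng.lognormvariate(0, 1.0) for _ in range(n)]  # log-normal comparison
print(top_share(heavy), top_share(light))
```

    For the heavy-tailed sample the top-5% share typically exceeds one half, while for the log-normal it stays near one quarter, which is why a log-normal fit understates how much total emissions hinge on a few super-emitters.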

  8. Recommendations for the use of mist nets for inventory and monitoring of bird populations

    Treesearch

    C. John Ralph; Erica H. Dunn; Will J. Peach; Colleen M. Handel

    2004-01-01

    We provide recommendations on the best practices for mist netting for the purposes of monitoring population parameters such as abundance and demography. Studies should be carefully thought out before nets are set up, to ensure that sampling design and estimated sample size will allow study objectives to be met. Station location, number of nets, type of nets, net...

  9. Growth after partial cutting of ponderosa pine on permanent sample plots in eastern Oregon.

    Treesearch

    Edwin L. Mowat

    1961-01-01

    Between the years 1913 and 1938, seven sets of permanent sample plots were established on the Whitman, Malheur, Rogue River, and Deschutes National Forests in eastern and central Oregon to study the results of various methods of selection cutting in old-growth ponderosa pine stands. This report briefly describes these studies and gives statistics on board-foot growth...

  10. The quality of care in occupational therapy: an assessment of selected Michigan hospitals.

    PubMed

    Kirchman, M M

    1979-07-01

    In this study, a methodology was developed and tested for assessing the quality of care in occupational therapy between educational and noneducational clinical settings, as measured by process and outcome. An instrument was constructed for an external audit of the hospital record. Standards drafted by the investigator were established as normative by a panel of experts for use in judging the programs. Hospital records of 84 patients with residual hemiparesis or hemiplegia in three noneducational settings and of 100 patients with similar diagnoses in two educational clinical settings from selected Michigan facilities were chosen by proportionate stratified random sampling. The process study showed that occupational therapy was of significantly higher quality in the educational settings. The outcome study did not show significant differences between types of settings. Implications for education and practice are discussed.

  11. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753

  12. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    PubMed

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W

    2016-01-01

    External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
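    The c-statistic at the centre of this comparison is simply the probability that a randomly chosen event receives a higher predicted score than a randomly chosen non-event. A minimal pairwise implementation (illustrative only, not the authors' code):

```python
def c_statistic(scores, labels):
    """Concordance (c-statistic / AUC): the probability that a randomly
    chosen event is ranked above a randomly chosen non-event; ties count 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    concordant = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))

print(c_statistic([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0: perfect ranking
```

    Because this quantity depends on the spread of predictor values in the sample, a drop at external validation can reflect a narrower case-mix rather than miscalibrated coefficients, which is exactly the confounding the abstract warns about.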

  13. Breast Reference Set Application: Chris Li-FHCRC (2014) — EDRN Public Portal

    Cancer.gov

    This application proposes to use Reference Set #1. We request access to serum samples collected at the time of breast biopsy from subjects with IC (n=30) or benign disease without atypia (n=30). Statistical power: With 30 BC cases and 30 normal controls, a 25% difference in mean metabolite levels can be detected between groups with 80% power and α=0.05, assuming coefficients of variation of 30%, consistent with our past studies. These sample sizes appear sufficient to enable detection of changes similar in magnitude to those previously reported in pre-clinical (BC recurrence) specimens (20).
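    The stated power claim can be checked with a normal-approximation sketch: a 25% mean difference with a 30% coefficient of variation gives a standardised effect size of about 0.83. The function below is an approximation I supply for illustration, not the applicants' calculation:

```python
import math
from statistics import NormalDist

def approx_power(delta_frac, cv, n_per_group, alpha=0.05):
    """Normal-approximation power for a two-sided two-sample comparison of
    means; delta_frac is the relative mean difference and cv the coefficient
    of variation, so the standardised effect size is d = delta_frac / cv."""
    d = delta_frac / cv
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(d * math.sqrt(n_per_group / 2) - z_crit)

print(round(approx_power(0.25, 0.30, 30), 2))  # ~0.9, consistent with >=80% power
```

    The exact t-test power is slightly lower than the normal approximation at n=30, but still comfortably above the 80% threshold quoted in the application.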

  14. Breast Reference Set Application: Karen Abbott- University of Arkansas (2013) — EDRN Public Portal

    Cancer.gov

    We are evaluating whether detection of a tumor-specific N-linked glycan, the β1,6-branched N-glycan present on the glycoprotein periostin in breast cancer, will be useful as a new biomarker for the detection of breast cancer in patient plasma and serum. We have completed an initial study using samples with a known invasive ductal breast carcinoma diagnosis, and the results look very promising. Therefore, we would like to proceed with our analysis of this potential biomarker for breast cancer diagnosis by analyzing the blinded samples in breast reference set 1.

  15. Environmental settings of streams sampled for mercury in New York and South Carolina, 2005-09

    USGS Publications Warehouse

    Scudder Eikenberry, Barbara C.; Riva-Murray, Karen; Smith, Martyn J.; Bradley, Paul M.; Button, Daniel T.; Clark, Jimmy M.; Burns, Douglas A.; Journey, Celeste A.

    2011-01-01

    This report summarizes the environmental settings of streams in New York and South Carolina, where the U.S. Geological Survey completed detailed investigations during 2005-09 into factors contributing to mercury bioaccumulation in top-predator fish and other stream organisms. Descriptions of location, land use/land cover, climate, precipitation, atmospheric deposition, hydrology, water temperature, and other characteristics are provided. Atmospheric deposition is the dominant mercury source in the studied basins where biota, sediment, soil, and water were sampled for mercury and for physical and chemical characteristics believed to be important in mercury methylation and transport.

  16. Definitive Characterization of CA 19-9 in Resectable Pancreatic Cancer Using a Reference Set of Serum and Plasma Specimens.

    PubMed

    Haab, Brian B; Huang, Ying; Balasenthil, Seetharaman; Partyka, Katie; Tang, Huiyuan; Anderson, Michelle; Allen, Peter; Sasson, Aaron; Zeh, Herbert; Kaul, Karen; Kletter, Doron; Ge, Shaokui; Bern, Marshall; Kwon, Richard; Blasutig, Ivan; Srivastava, Sudhir; Frazier, Marsha L; Sen, Subrata; Hollingsworth, Michael A; Rinaudo, Jo Ann; Killary, Ann M; Brand, Randall E

    2015-01-01

    The validation of candidate biomarkers often is hampered by the lack of a reliable means of assessing and comparing performance. We present here a reference set of serum and plasma samples to facilitate the validation of biomarkers for resectable pancreatic cancer. The reference set includes a large cohort of stage I-II pancreatic cancer patients, recruited from 5 different institutions, and relevant control groups. We characterized the performance of the current best serological biomarker for pancreatic cancer, CA 19-9, using plasma samples from the reference set to provide a benchmark for future biomarker studies and to further our knowledge of CA 19-9 in early-stage pancreatic cancer and the control groups. CA 19-9 distinguished pancreatic cancers from the healthy and chronic pancreatitis groups with an average sensitivity and specificity of 70-74%, similar to previous studies using all stages of pancreatic cancer. Chronic pancreatitis patients did not show CA 19-9 elevations, but patients with benign biliary obstruction had elevations nearly as high as the cancer patients. We gained additional information about the biomarker by comparing two distinct assays. The two CA 19-9 assays agreed well in overall performance but diverged in measurements of individual samples, potentially due to subtle differences in antibody specificity as revealed by glycan array analysis. Thus, the reference set promises to be a valuable resource for biomarker validation and comparison, and the CA 19-9 data presented here will be useful for benchmarking and for exploring relationships to CA 19-9.

  17. Definitive Characterization of CA 19-9 in Resectable Pancreatic Cancer Using a Reference Set of Serum and Plasma Specimens

    PubMed Central

    Haab, Brian B.; Huang, Ying; Balasenthil, Seetharaman; Partyka, Katie; Tang, Huiyuan; Anderson, Michelle; Allen, Peter; Sasson, Aaron; Zeh, Herbert; Kaul, Karen; Kletter, Doron; Ge, Shaokui; Bern, Marshall; Kwon, Richard; Blasutig, Ivan; Srivastava, Sudhir; Frazier, Marsha L.; Sen, Subrata; Hollingsworth, Michael A.; Rinaudo, Jo Ann; Killary, Ann M.; Brand, Randall E.

    2015-01-01

    The validation of candidate biomarkers often is hampered by the lack of a reliable means of assessing and comparing performance. We present here a reference set of serum and plasma samples to facilitate the validation of biomarkers for resectable pancreatic cancer. The reference set includes a large cohort of stage I-II pancreatic cancer patients, recruited from 5 different institutions, and relevant control groups. We characterized the performance of the current best serological biomarker for pancreatic cancer, CA 19–9, using plasma samples from the reference set to provide a benchmark for future biomarker studies and to further our knowledge of CA 19–9 in early-stage pancreatic cancer and the control groups. CA 19–9 distinguished pancreatic cancers from the healthy and chronic pancreatitis groups with an average sensitivity and specificity of 70–74%, similar to previous studies using all stages of pancreatic cancer. Chronic pancreatitis patients did not show CA 19–9 elevations, but patients with benign biliary obstruction had elevations nearly as high as the cancer patients. We gained additional information about the biomarker by comparing two distinct assays. The two CA 19–9 assays agreed well in overall performance but diverged in measurements of individual samples, potentially due to subtle differences in antibody specificity as revealed by glycan array analysis. Thus, the reference set promises to be a valuable resource for biomarker validation and comparison, and the CA 19–9 data presented here will be useful for benchmarking and for exploring relationships to CA 19–9. PMID:26431551

  18. PCAN: Probabilistic Correlation Analysis of Two Non-normal Data Sets

    PubMed Central

    Zoh, Roger S.; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S.; Lampe, Johanna W.; Carroll, Raymond J.

    2016-01-01

    Summary Most cancer research now involves one or more assays profiling various biological molecules, e.g., messenger RNA and micro RNA, in samples collected on the same individuals. The main interest with these genomic data sets lies in the identification of a subset of features that are active in explaining the dependence between platforms. To quantify the strength of the dependency between two variables, correlation is often preferred. However, expression data obtained from next-generation sequencing platforms are integer with very low counts for some important features. In this case, the sample Pearson correlation is not a valid estimate of the true correlation matrix, because the sample correlation estimate between two features/variables with low counts will often be close to zero, even when the natural parameters of the Poisson distribution are, in actuality, highly correlated. We propose a model-based approach to correlation estimation between two non-normal data sets, via a method we call Probabilistic Correlations ANalysis, or PCAN. PCAN takes into consideration the distributional assumption about both data sets and suggests that correlations estimated at the model natural parameter level are more appropriate than correlations estimated directly on the observed data. We demonstrate through a simulation study that PCAN outperforms other standard approaches in estimating the true correlation between the natural parameters. We then apply PCAN to the joint analysis of a microRNA (miRNA) and a messenger RNA (mRNA) expression data set from a squamous cell lung cancer study, finding a large number of negative correlation pairs when compared to the standard approaches. PMID:27037601

  19. PCAN: Probabilistic correlation analysis of two non-normal data sets.

    PubMed

    Zoh, Roger S; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S; Lampe, Johanna W; Carroll, Raymond J

    2016-12-01

    Most cancer research now involves one or more assays profiling various biological molecules, e.g., messenger RNA and micro RNA, in samples collected on the same individuals. The main interest with these genomic data sets lies in the identification of a subset of features that are active in explaining the dependence between platforms. To quantify the strength of the dependency between two variables, correlation is often preferred. However, expression data obtained from next-generation sequencing platforms are integer with very low counts for some important features. In this case, the sample Pearson correlation is not a valid estimate of the true correlation matrix, because the sample correlation estimate between two features/variables with low counts will often be close to zero, even when the natural parameters of the Poisson distribution are, in actuality, highly correlated. We propose a model-based approach to correlation estimation between two non-normal data sets, via a method we call Probabilistic Correlations ANalysis, or PCAN. PCAN takes into consideration the distributional assumption about both data sets and suggests that correlations estimated at the model natural parameter level are more appropriate than correlations estimated directly on the observed data. We demonstrate through a simulation study that PCAN outperforms other standard approaches in estimating the true correlation between the natural parameters. We then apply PCAN to the joint analysis of a microRNA (miRNA) and a messenger RNA (mRNA) expression data set from a squamous cell lung cancer study, finding a large number of negative correlation pairs when compared to the standard approaches. © 2016, The International Biometric Society.

  20. Adaptive web sampling.

    PubMed

    Thompson, Steven K

    2006-12-01

    A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
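    The two selection modes of the design, adaptive link-tracing out of the active set versus conventional random selection, can be sketched on a toy graph. This is a simplified illustration of the sequential mixture idea, not Thompson's estimator or inference procedure:

```python
import random

def adaptive_web_sample(graph, n, p_follow=0.8, seed=3):
    """Sequential adaptive selection: with probability p_follow trace a link
    out of the current sample (the 'active set'); otherwise draw uniformly
    from the unsampled units. A minimal sketch of the design's two modes."""
    rng = random.Random(seed)
    unsampled = set(graph)
    sample = []
    while len(sample) < n and unsampled:
        # links from already-sampled units into the unsampled part; repeats
        # give well-connected units a higher chance of selection
        frontier = [v for u in sample for v in graph[u] if v in unsampled]
        if frontier and rng.random() < p_follow:
            nxt = rng.choice(frontier)           # adaptive, link-tracing step
        else:
            nxt = rng.choice(sorted(unsampled))  # non-adaptive random step
        sample.append(nxt)
        unsampled.discard(nxt)
    return sample

links = {1: [2, 3], 2: [1], 3: [1, 4], 4: [3], 5: [6], 6: [5]}
print(adaptive_web_sample(links, 4))
```

    Tuning p_follow is the control the abstract mentions: it fixes the expected proportion of effort allocated to adaptive selections while the total sample size stays exactly n.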

  1. Supplementing electronic health records through sample collection and patient diaries: A study set within a primary care research database.

    PubMed

    Joseph, Rebecca M; Soames, Jamie; Wright, Mark; Sultana, Kirin; van Staa, Tjeerd P; Dixon, William G

    2018-02-01

    To describe a novel observational study that supplemented primary care electronic health record (EHR) data with sample collection and patient diaries. The study was set in primary care in England. A list of 3974 potentially eligible patients was compiled using data from the Clinical Practice Research Datalink. Interested general practices opted into the study then confirmed patient suitability and sent out postal invitations. Participants completed a drug-use diary and provided saliva samples to the research team to combine with EHR data. Of 252 practices contacted to participate, 66 (26%) mailed invitations to patients. Of the 3974 potentially eligible patients, 859 (22%) were at participating practices, and 526 (13%) were sent invitations. Of those invited, 117 (22%) consented to participate of whom 86 (74%) completed the study. We have confirmed the feasibility of supplementing EHR with data collected directly from patients. Although the present study successfully collected essential data from patients, it also underlined the requirement for improved engagement with both patients and general practitioners to support similar studies. © 2017 The Authors. Pharmacoepidemiology & Drug Safety published by John Wiley & Sons Ltd.

  2. Combinations of NIR, Raman spectroscopy and physicochemical measurements for improved monitoring of solvent extraction processes using hierarchical multivariate analysis models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nee, K.; Bryan, S.; Levitskaia, T.

    The reliability of chemical processes can be greatly improved by implementing inline monitoring systems. Combining multivariate analysis with non-destructive sensors can enhance the process without interfering with the operation. Here we present hierarchical models, using both principal component analysis and partial least squares analysis, developed for different chemical components representative of solvent extraction process streams. A training set of 380 samples and an external validation set of 95 samples were prepared, and near-infrared (NIR) and Raman spectral data, as well as conductivity under variable temperature conditions, were collected. The results from the models indicate that careful selection of the spectral range is important. By compressing the data through principal component analysis (PCA), we lower the rank of the data set to its most dominant features while retaining the key principal components to be used in the regression analysis. Within the studied data set, concentrations of five chemical components were modeled: total nitrate (NO3-), total acid (H+), neodymium (Nd3+), sodium (Na+), and ionic strength (I.S.). The best overall model prediction for each of the species studied used a combined data set comprising complementary techniques including NIR, Raman, and conductivity. Finally, our study shows that chemometric models are powerful but require a significant amount of carefully analyzed data to capture variations in the chemistry.

  3. Combinations of NIR, Raman spectroscopy and physicochemical measurements for improved monitoring of solvent extraction processes using hierarchical multivariate analysis models

    DOE PAGES

    Nee, K.; Bryan, S.; Levitskaia, T.; ...

    2017-12-28

    The reliability of chemical processes can be greatly improved by implementing inline monitoring systems. Combining multivariate analysis with non-destructive sensors can enhance the process without interfering with the operation. Here we present hierarchical models, using both principal component analysis and partial least squares analysis, developed for different chemical components representative of solvent extraction process streams. A training set of 380 samples and an external validation set of 95 samples were prepared, and near-infrared (NIR) and Raman spectral data, as well as conductivity under variable temperature conditions, were collected. The results from the models indicate that careful selection of the spectral range is important. By compressing the data through principal component analysis (PCA), we lower the rank of the data set to its most dominant features while retaining the key principal components to be used in the regression analysis. Within the studied data set, concentrations of five chemical components were modeled: total nitrate (NO3-), total acid (H+), neodymium (Nd3+), sodium (Na+), and ionic strength (I.S.). The best overall model prediction for each of the species studied used a combined data set comprising complementary techniques including NIR, Raman, and conductivity. Finally, our study shows that chemometric models are powerful but require a significant amount of carefully analyzed data to capture variations in the chemistry.
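    The PCA compression step described above, reducing many correlated spectral channels to their dominant direction, can be illustrated with power iteration on toy "spectra". The data and channel structure below are invented for illustration; real process spectra would have hundreds of channels:

```python
import math
import random

def first_pc(rows, iters=200):
    """Leading principal component by power iteration on the (implicit)
    covariance matrix: the compression step that keeps the dominant
    feature direction."""
    d = len(rows[0])
    means = [sum(r[j] for r in rows) / len(rows) for j in range(d)]
    centred = [[r[j] - means[j] for j in range(d)] for r in rows]
    v = [1.0] * d
    for _ in range(iters):
        # w = C v, applied as X^T (X v) without forming C explicitly
        scores = [sum(c[j] * v[j] for j in range(d)) for c in centred]
        w = [sum(s * c[j] for s, c in zip(scores, centred)) for j in range(d)]
        norm = math.sqrt(sum(t * t for t in w))
        v = [t / norm for t in w]
    return v

rng = random.Random(5)
# toy "spectra": channel 2 is twice channel 1; channel 3 is pure noise
rows = [[t, 2 * t + 0.01 * rng.gauss(0, 1), rng.gauss(0, 1)]
        for t in (rng.gauss(0, 1) for _ in range(300))]
print(first_pc(rows))
```

    The leading component points almost entirely along the correlated channels and nearly ignores the noise channel, which is the sense in which PCA lowers the rank of the data while keeping its dominant features for the downstream regression.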

  4. Estimating the Expected Value of Sample Information Using the Probabilistic Sensitivity Analysis Sample

    PubMed Central

    Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny

    2015-01-01

    Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269

  5. A software suite for the generation and comparison of peptide arrays from sets of data collected by liquid chromatography-mass spectrometry.

    PubMed

    Li, Xiao-jun; Yi, Eugene C; Kemp, Christopher J; Zhang, Hui; Aebersold, Ruedi

    2005-09-01

    There is an increasing interest in the quantitative proteomic measurement of the protein contents of substantially similar biological samples, e.g. for the analysis of cellular response to perturbations over time or for the discovery of protein biomarkers from clinical samples. Technical limitations of current proteomic platforms, such as limited reproducibility and low throughput, make this a challenging task. A new LC-MS-based platform is able to generate complex peptide patterns from the analysis of proteolyzed protein samples at high throughput and represents a promising approach for quantitative proteomics. A crucial component of the LC-MS approach is the accurate evaluation of the abundance of detected peptides over many samples and the identification of peptide features that can stratify samples with respect to their genetic, physiological, or environmental origins. We present here a new software suite, SpecArray, that generates a peptide-versus-sample array from a set of LC-MS data. A peptide array stores the relative abundance of thousands of peptide features in many samples and is in a format identical to that of a gene expression microarray. A peptide array can be subjected to an unsupervised clustering analysis to stratify samples or to a discriminant analysis to identify discriminatory peptide features. We applied SpecArray to analyze two sets of LC-MS data: one from four repeat LC-MS analyses of the same glycopeptide sample, and the other from LC-MS analysis of serum samples from five male and five female mice. We demonstrate through these two case studies that the SpecArray software suite can serve as an effective software platform in the LC-MS approach for quantitative proteomics.
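
    The unsupervised clustering step that a peptide array enables might be sketched as follows; the array dimensions and the simulated group signal are hypothetical, not SpecArray's actual data or API.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# Hypothetical peptide array: rows = peptide features, columns = samples
# (e.g., five "male" and five "female" mouse sera, as in the study).
array = rng.normal(size=(1000, 10))
array[:200, 5:] += 2.0     # features elevated in the last five samples
array[200:400, :5] += 2.0  # features elevated in the first five samples

# Unsupervised hierarchical clustering of samples on peptide profiles.
Z = linkage(array.T, method="average", metric="correlation")
labels = fcluster(Z, t=2, criterion="maxclust")  # two sample groups
```

    Because the array shares its format with a gene expression matrix, any microarray clustering or discriminant-analysis tool can be applied unchanged.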

  6. Multi-Omics Factor Analysis-a framework for unsupervised integration of multi-omics data sets.

    PubMed

    Argelaguet, Ricard; Velten, Britta; Arnol, Damien; Dietrich, Sascha; Zenz, Thorsten; Marioni, John C; Buettner, Florian; Huber, Wolfgang; Stegle, Oliver

    2018-06-20

    Multi-omics studies promise the improved characterization of biological processes across molecular layers. However, methods for the unsupervised integration of the resulting heterogeneous data sets are lacking. We present Multi-Omics Factor Analysis (MOFA), a computational method for discovering the principal sources of variation in multi-omics data sets. MOFA infers a set of (hidden) factors that capture biological and technical sources of variability. It disentangles axes of heterogeneity that are shared across multiple modalities and those specific to individual data modalities. The learnt factors enable a variety of downstream analyses, including identification of sample subgroups, data imputation and the detection of outlier samples. We applied MOFA to a cohort of 200 patient samples of chronic lymphocytic leukaemia, profiled for somatic mutations, RNA expression, DNA methylation and ex vivo drug responses. MOFA identified major dimensions of disease heterogeneity, including immunoglobulin heavy-chain variable region status, trisomy of chromosome 12 and previously underappreciated drivers, such as response to oxidative stress. In a second application, we used MOFA to analyse single-cell multi-omics data, identifying coordinated transcriptional and epigenetic changes along cell differentiation. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.
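
    A drastically simplified stand-in for MOFA's inference is plain factor analysis on concatenated modalities; real MOFA additionally disentangles shared from modality-specific factors and uses sparsity priors. The toy data below are hypothetical.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(6)
# Hypothetical multi-omics toy data: 200 samples, two modalities
# (e.g., expression and methylation) driven by one shared latent factor.
factor = rng.normal(size=(200, 1))
omics1 = factor @ rng.normal(size=(1, 50)) + rng.normal(scale=0.5, size=(200, 50))
omics2 = factor @ rng.normal(size=(1, 30)) + rng.normal(scale=0.5, size=(200, 30))

# Factor analysis on the concatenated matrix recovers the shared axis
# of variation; MOFA generalizes this across heterogeneous likelihoods.
fa = FactorAnalysis(n_components=1)
scores = fa.fit_transform(np.hstack([omics1, omics2]))
```

    The recovered per-sample scores can then feed the downstream analyses the abstract lists: subgroup identification, imputation, or outlier detection.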

  7. PTM Modeling of Dredged Suspended Sediment at Proposed Polaris Point and Ship Repair Facility CVN Berthing Sites - Apra Harbor, Guam

    DTIC Science & Technology

    2017-09-01

    [Report front matter; recoverable figure captions only:] Figure 4-2: ADCP locations used for model calibration. Figure 4-3: Sample water [caption truncated]. Examples of fine [Set d, Sample B30] and coarse [Set d, Sample B05] sediment samples. Figure 5-4: Turning Basin average sediment size distribution curve. Figure 5-5: Turning Basin average size [caption truncated].

  8. Developing an Apicomplexan DNA Barcoding System to Detect Blood Parasites of Small Coral Reef Fishes.

    PubMed

    Renoux, Lance P; Dolan, Maureen C; Cook, Courtney A; Smit, Nico J; Sikkel, Paul C

    2017-08-01

    Apicomplexan parasites are obligate parasites of many vertebrate species. To date, there is very limited understanding of these parasites in the most diverse group of vertebrates, the actinopterygian fishes. While DNA barcoding targeting the eukaryotic 18S small subunit rRNA gene sequence has been useful in identifying apicomplexans in tetrapods, identification of apicomplexans infecting fishes has relied solely on morphological identification by microscopy. In this study, a DNA barcoding method using 18S rRNA gene primers was developed for identifying apicomplexans parasitizing certain actinopterygian fishes. A lead primer set was selected that showed no cross-reactivity to the overwhelmingly abundant host DNA and successfully confirmed 37 of the 41 (90.2%) microscopically verified parasitized fish blood samples analyzed in this study. Furthermore, this DNA barcoding method identified 4 additional samples that had screened negative for parasitemia, suggesting that this molecular method may provide improved sensitivity over morphological characterization by microscopy. In addition, this PCR screening method for fish apicomplexans, using Whatman FTA-preserved DNA, was tested in an effort toward simpler field collection, transport, and sample storage, as well as streamlined sample processing, which is important for DNA barcoding of large sample sets.

  9. Bayesian Modal Estimation of the Four-Parameter Item Response Model in Real, Realistic, and Idealized Data Sets.

    PubMed

    Waller, Niels G; Feuerstahler, Leah

    2017-01-01

    In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler item response theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and that person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated [Formula: see text] code that shows how to estimate 4PM item and person parameters in [Formula: see text] (Chalmers, 2012).
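
    The 4PM item response function itself is simple to state: the two extra parameters bound the response probability away from 0 and 1. The parameter values below are illustrative, not estimates from the MMPI-A scales.

```python
import numpy as np

def irf_4pm(theta, a, b, c, d):
    """Four-parameter logistic IRF: discrimination a, difficulty b,
    lower asymptote c ("guessing"), upper asymptote d ("slipping")."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

# Response probabilities are bounded in (c, d) rather than (0, 1):
p = irf_4pm(np.linspace(-4, 4, 9), a=1.5, b=0.0, c=0.15, d=0.95)
```

    Setting d = 1 recovers the 3PL model, and c = 0, d = 1 the 2PL, which is why the 4PM nests the simpler IRT models compared in the study.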

  10. Protein and glycomic plasma markers for early detection of adenoma and colon cancer.

    PubMed

    Rho, Jung-Hyun; Ladd, Jon J; Li, Christopher I; Potter, John D; Zhang, Yuzheng; Shelley, David; Shibata, David; Coppola, Domenico; Yamada, Hiroyuki; Toyoda, Hidenori; Tada, Toshifumi; Kumada, Takashi; Brenner, Dean E; Hanash, Samir M; Lampe, Paul D

    2018-03-01

    To discover and confirm blood-based colon cancer early-detection markers. We created a high-density antibody microarray to detect differences in protein levels in plasma from individuals diagnosed with colon cancer <3 years after blood was drawn (ie, prediagnostic) and cancer-free, matched controls. Potential markers were tested on plasma samples from people diagnosed with adenoma or cancer, compared with controls. Components of an optimal 5-marker panel were tested via immunoblotting using a third sample set, Luminex assay in a large fourth sample set and immunohistochemistry (IHC) on tissue microarrays. In the prediagnostic samples, we found 78 significantly (t-test) increased proteins, 32 of which were confirmed in the diagnostic samples. From these 32, optimal 4-marker panels of BAG family molecular chaperone regulator 4 (BAG4), interleukin-6 receptor subunit beta (IL6ST), von Willebrand factor (VWF) and CD44 or epidermal growth factor receptor (EGFR) were established. Each panel member and the panels also showed increases in the diagnostic adenoma and cancer samples in independent third and fourth sample sets via immunoblot and Luminex, respectively. IHC results showed increased levels of BAG4, IL6ST and CD44 in adenoma and cancer tissues. Inclusion of EGFR and CD44 sialyl Lewis-A and Lewis-X content increased the panel performance. The protein/glycoprotein panel was statistically significantly higher in colon cancer samples, characterised by a range of area under the curves from 0.90 (95% CI 0.82 to 0.98) to 0.86 (95% CI 0.83 to 0.88), for the larger second and fourth sets, respectively. A panel including BAG4, IL6ST, VWF, EGFR and CD44 protein/glycomics performed well for detection of early stages of colon cancer and should be further examined in larger studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
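
    Panel performance of the kind reported here is conventionally summarized by the area under the ROC curve; the score distributions below are simulated stand-ins, not the study's plasma data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
# Hypothetical combined panel scores: cases shifted above controls.
controls = rng.normal(0.0, 1.0, size=200)
cases = rng.normal(1.8, 1.0, size=200)
scores = np.concatenate([controls, cases])
labels = np.concatenate([np.zeros(200), np.ones(200)])

# Area under the ROC curve for discriminating cases from controls.
auc = roc_auc_score(labels, scores)
```

    Bootstrap resampling of `scores` and `labels` would yield the kind of 95% confidence intervals quoted for the panel AUCs.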

  11. Seeking Signs of Life on Mars: The Importance of Sedimentary Suites as Part of Mars Sample Return

    NASA Astrophysics Data System (ADS)

    iMOST Team; Mangold, N.; McLennan, S. M.; Czaja, A. D.; Ori, G. G.; Tosca, N. J.; Altieri, F.; Amelin, Y.; Ammannito, E.; Anand, M.; Beaty, D. W.; Benning, L. G.; Bishop, J. L.; Borg, L. E.; Boucher, D.; Brucato, J. R.; Busemann, H.; Campbell, K. A.; Carrier, B. L.; Debaille, V.; Des Marais, D. J.; Dixon, M.; Ehlmann, B. L.; Farmer, J. D.; Fernandez-Remolar, D. C.; Fogarty, J.; Glavin, D. P.; Goreva, Y. S.; Grady, M. M.; Hallis, L. J.; Harrington, A. D.; Hausrath, E. M.; Herd, C. D. K.; Horgan, B.; Humayun, M.; Kleine, T.; Kleinhenz, J.; Mackelprang, R.; Mayhew, L. E.; McCubbin, F. M.; McCoy, J. T.; McSween, H. Y.; Moser, D. E.; Moynier, F.; Mustard, J. F.; Niles, P. B.; Raulin, F.; Rettberg, P.; Rucker, M. A.; Schmitz, N.; Sefton-Nash, E.; Sephton, M. A.; Shaheen, R.; Shuster, D. L.; Siljestrom, S.; Smith, C. L.; Spry, J. A.; Steele, A.; Swindle, T. D.; ten Kate, I. L.; Usui, T.; Van Kranendonk, M. J.; Wadhwa, M.; Weiss, B. P.; Werner, S. C.; Westall, F.; Wheeler, R. M.; Zipfel, J.; Zorzano, M. P.

    2018-04-01

    Sedimentary, and especially lacustrine, depositional environments are high-priority geological/astrobiological settings for Mars Sample Return. We review the detailed investigations, measurements, and sample types required to evaluate such settings.

  12. Statistical analysis of environmental monitoring data: does a worst case time for monitoring clean rooms exist?

    PubMed

    Cundell, A M; Bean, R; Massimore, L; Maier, C

    1998-01-01

    To determine the relationship between the sampling time of environmental monitoring (i.e., viable counts) in aseptic filling areas and the microbial count and frequency of alerts for air, surface, and personnel microbial monitoring, statistical analyses were conducted on 1) the frequency of alerts versus the time of day for routine environmental sampling conducted in calendar year 1994, and 2) environmental monitoring data collected at 30-minute intervals during routine aseptic filling operations over two separate days in four different clean rooms with multiple shifts and equipment set-ups at a parenteral manufacturing facility. Statistical analyses showed that, except for one floor location that had a significantly higher number of counts but no alert- or action-level samplings in the first two hours of operation, there was no relationship between the number of counts and the time of sampling. Further studies over a 30-day period at that floor location showed no relationship between time of sampling and microbial counts. The conclusion reached in the study was that there is no worst-case time for environmental monitoring at that facility and that sampling at any time during the aseptic filling operation will give a satisfactory measure of the microbial cleanliness in the clean room during set-up and aseptic filling.

  13. Automatic activation of alcohol cues by child maltreatment related words: a replication attempt in a different treatment setting.

    PubMed

    Potthast, Nadine; Neuner, Frank; Catani, Claudia

    2017-01-03

    A growing body of research attempts to clarify the underlying mechanisms of the association between emotional maltreatment and alcohol dependence (AD). In a preceding study, we found considerable support for a specific priming effect in subjects with AD and emotional abuse experiences receiving alcohol rehabilitation treatment. We concluded that maltreatment-related cues can automatically activate an associative memory network comprising cues eliciting craving as well as alcohol-related responses. Generalizability of the results to other treatment settings remains unclear because of considerable differences between German treatment settings as well as insufficiently clarified selection effects. As replication studies in other settings are necessary, the current study aimed to replicate the specific priming effect in a qualified detoxification sample. Twenty-two AD subjects (n = 10 with emotional abuse vs. n = 12 without emotional abuse) participated in a priming experiment. Comparison data from 34 healthy control subjects were derived from the prior study. Contrary to our hypothesis, we did not find a specific priming effect. We could not replicate the result of an automatic network activation by maltreatment-related words in a sample of subjects with AD and emotional abuse experiences receiving qualified detoxification treatment. This discrepancy might be attributed to differences between treatment settings as well as to methodological limitations. Future work is required to determine the generalizability of the specific priming effect before valid conclusions regarding automatic activation can be drawn.

  14. Alcohol risk management in college settings: the safer California universities randomized trial.

    PubMed

    Saltz, Robert F; Paschall, Mallie J; McGaffigan, Richard P; Nygaard, Peter M O

    2010-12-01

    Potentially effective environmental strategies have been recommended to reduce heavy alcohol use among college students. However, studies to date on environmental prevention strategies are few in number and have been limited by their nonexperimental designs, inadequate sample sizes, and lack of attention to settings where the majority of heavy drinking events occur. To determine whether environmental prevention strategies targeting off-campus settings would reduce the likelihood and incidence of student intoxication at those settings. The Safer California Universities study involved 14 large public universities, half of which were assigned randomly to the Safer intervention condition after baseline data collection in 2003. Environmental interventions took place in 2005 and 2006 after 1 year of planning with seven Safer intervention universities. Random cross-sectional samples of undergraduates completed online surveys in four consecutive fall semesters (2003-2006). Campuses and communities surrounding eight campuses of the University of California and six in the California State University system were utilized. The study used random samples of undergraduates (∼500-1000 per campus per year) attending the 14 public California universities. Safer environmental interventions included nuisance party enforcement operations, minor decoy operations, driving-under-the-influence checkpoints, social host ordinances, and use of campus and local media to increase the visibility of environmental strategies. Proportion of drinking occasions in which students drank to intoxication at six different settings during the fall semester (residence hall party, campus event, fraternity or sorority party, party at off-campus apartment or house, bar/restaurant, outdoor setting), any intoxication at each setting during the semester, and whether students drank to intoxication the last time they went to each setting. 
Significant reductions in the incidence and likelihood of intoxication at off-campus parties and bars/restaurants were observed for Safer intervention universities compared to controls. A lower likelihood of intoxication was observed also for Safer intervention universities the last time students drank at an off-campus party (OR=0.81, 95% CI=0.68, 0.97); a bar or restaurant (OR=0.76, 95% CI=0.62, 0.94); or any setting (OR=0.80, 95% CI=0.65, 0.97). No increase in intoxication (e.g., displacement) appeared in other settings. Further, stronger intervention effects were achieved at Safer universities with the highest level of implementation. Environmental prevention strategies targeting settings where the majority of heavy drinking events occur appear to be effective in reducing the incidence and likelihood of intoxication among college students. Copyright © 2010 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  15. A molecular identification system for grasses: a novel technology for forensic botany.

    PubMed

    Ward, J; Peakall, R; Gilmore, S R; Robertson, J

    2005-09-10

    Our present inability to rapidly, accurately, and cost-effectively identify trace botanical evidence remains the major impediment to the routine application of forensic botany. Grasses are among the plant species most likely to be encountered as forensic trace evidence and have the potential to provide links between crime scenes and individuals, or other vital crime scene information. We are designing a molecular DNA-based identification system for grasses consisting of several PCR assays that, like a traditional morphological taxonomic key, provide criteria that progressively identify an unknown grass sample to a given taxonomic rank. In a prior study of DNA sequences across 20 phylogenetically representative grass species, we identified a series of potentially informative indels in the grass mitochondrial genome. In this study we designed and tested five PCR assays spanning these indels and assessed their feasibility for aiding identification of unknown grass samples. We confirmed that for our control set of 20 samples, on which the design of the PCR assays was based, the five primer combinations produced the expected results. Using these PCR assays in a 'blind test', we were able to identify 25 unknown grass samples with some restrictions. Species belonging to genera represented in our control set were all correctly identified to genus, with one exception. Similarly, genera belonging to tribes in the control set were correctly identified to the tribal level. Finally, for those samples for which neither tribe- nor genus-specific PCR assays had been designed, we could confidently exclude these samples from belonging to certain tribes and genera. The results confirmed the utility of the PCR assays and the feasibility of developing a robust, full-scale, usable grass identification system for forensic purposes.

  16. Relationship between Self-Efficacy and Counseling Attitudes among First-Year College Students

    ERIC Educational Resources Information Center

    Tirpak, David M.; Schlosser, Lewis Z.

    2015-01-01

    The purpose of this study was to assess the relationship between a set of self-efficacy variables and a set of variables assessing attitudes toward counseling. Results revealed a significant relationship between self-efficacy and attitudes toward counseling among a sample of 253 first-year college students. Low perceptions of self-efficacy were…

  17. Improving Writing: Comparing the Responses of Eighth-Graders, Preservice Teachers and Experienced Teachers

    ERIC Educational Resources Information Center

    Grisham, Dana L.; Wolsey, Thomas D.

    2005-01-01

    This study investigates how middle school students and teachers in preservice and master of arts classes analyze writing samples. Three sets of participants analyzed and scored a common set of writings. Findings indicate that several intact classroom groups of eighth-graders, preservice teachers, and veteran teachers in a graduate reading program…

  18. Systems and Methods for Correcting Optical Reflectance Measurements

    NASA Technical Reports Server (NTRS)

    Yang, Ye (Inventor); Shear, Michael A. (Inventor); Soller, Babs R. (Inventor); Soyemi, Olusola O. (Inventor)

    2014-01-01

    We disclose measurement systems and methods for measuring analytes in target regions of samples that also include features overlying the target regions. The systems include: (a) a light source; (b) a detection system; (c) a set of at least first, second, and third light ports which transmit light from the light source to a sample and receive and direct light reflected from the sample to the detection system, generating a first set of data including information corresponding to both an internal target within the sample and features overlying the internal target, and a second set of data including information corresponding to features overlying the internal target; and (d) a processor configured to remove information characteristic of the overlying features from the first set of data using the first and second sets of data to produce corrected information representing the internal target.

  19. Systems and methods for correcting optical reflectance measurements

    NASA Technical Reports Server (NTRS)

    Yang, Ye (Inventor); Soller, Babs R. (Inventor); Soyemi, Olusola O. (Inventor); Shear, Michael A. (Inventor)

    2009-01-01

    We disclose measurement systems and methods for measuring analytes in target regions of samples that also include features overlying the target regions. The systems include: (a) a light source; (b) a detection system; (c) a set of at least first, second, and third light ports which transmit light from the light source to a sample and receive and direct light reflected from the sample to the detection system, generating a first set of data including information corresponding to both an internal target within the sample and features overlying the internal target, and a second set of data including information corresponding to features overlying the internal target; and (d) a processor configured to remove information characteristic of the overlying features from the first set of data using the first and second sets of data to produce corrected information representing the internal target.

  20. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--METALS IN SOIL ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Soil data set contains analytical results for measurements of up to 11 metals in 91 soil samples over 91 households. Samples were taken by collecting surface soil in the yard of each residence. The primary metals of interest include lead (CAS# 7439-92-1), arsenic ...

  1. NHEXAS PHASE I ARIZONA STUDY--METALS-XRF IN DUST ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals-XRF in Dust data set contains X-ray fluorescence (XRF) analytical results for measurements of up to 27 metals in 384 dust samples over 384 households. Samples were taken by collecting dust from the indoor floor areas in the main room and in the bedroom of the primary ...

  2. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--PESTICIDES IN DUST ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticides in Dust data set contains analytical results for measurements of up to 8 pesticides in 91 dust samples over 91 households. Samples were taken by collecting dust from the indoor floor areas in the main room and in the bedroom of the primary resident. The primary p...

  3. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--PAHS IN DUST ANALYTICAL RESULTS

    EPA Science Inventory

    The PAHs in Dust data set contains the analytical results for measurements of up to 21 polynuclear aromatic hydrocarbons (PAHs) in 91 dust samples over 91 households. Samples were taken by collecting dust from the indoor floor areas from the main room and in the bedroom of the p...

  4. Modifiable Risk Factors for Attempted Suicide in Australian Clinical and Community Samples

    ERIC Educational Resources Information Center

    Carter, Gregory L.; Page, Andrew; Clover, Kerrie; Taylor, Richard

    2007-01-01

    Modifiable risk factors for suicide attempt require identification in clinical and community samples. The aim of this study was to determine if similar social and psychiatric factors are associated with suicide attempts in community and clinical settings and whether the magnitude of effect is greater in clinical populations. Two case-control…

  5. NHEXAS PHASE I MARYLAND STUDY--METALS IN WATER ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Water data set contains analytical results for measurements of up to 11 metals in 400 water samples over 80 households. One-liter samples of tap water were collected after a two minute flush from the tap identified by the resident as that most commonly used for dri...

  6. Reading and Comprehension Levels in a Sample of Urban, Low-Income Persons

    ERIC Educational Resources Information Center

    Delgado, Cheryl; Weitzel, Marilyn

    2013-01-01

    Objective: Because health literacy is related to healthcare outcomes, this study looked at reading and comprehension levels in a sample of urban, low-income persons. Design: This was a descriptive exploration of reading comprehension levels, controlled for medical problems that could impact on vision and therefore ability to read. Setting: Ninety…

  7. Characterization of polymer decomposition products by laser desorption mass spectrometry

    NASA Technical Reports Server (NTRS)

    Pallix, Joan B.; Lincoln, Kenneth A.; Miglionico, Charles J.; Roybal, Robert E.; Stein, Charles; Shively, Jon H.

    1993-01-01

    Laser desorption mass spectrometry has been used to characterize the ash-like substances formed on the surfaces of polymer matrix composites (PMC's) during exposure on LDEF. In an effort to minimize fragmentation, material was removed from the sample surfaces by laser desorption and desorbed neutrals were ionized by electron impact. Ions were detected in a time-of-flight mass analyzer which allows the entire mass spectrum to be collected for each laser shot. The method is ideal for these studies because only a small amount of ash is available for analysis. Three sets of samples were studied including C/polysulfone, C/polyimide and C/phenolic. Each set contains leading and trailing edge LDEF samples and their respective controls. In each case, the mass spectrum of the ash shows a number of high mass peaks which can be assigned to fragments of the associated polymer. These high mass peaks are not observed in the spectra of the control samples. In general, the results indicate that the ash is formed from decomposition of the polymer matrix.

  8. Sensitivity of different Trypanosoma vivax specific primers for the diagnosis of livestock trypanosomosis using different DNA extraction methods.

    PubMed

    Gonzales, J L; Loza, A; Chacon, E

    2006-03-15

    Several T. vivax-specific primers have been developed for PCR diagnosis. Most of these primers were validated under different DNA extraction methods and study designs, leading to heterogeneity of results. The objective of the present study was to validate PCR as a diagnostic test for T. vivax trypanosomosis by determining the test sensitivity of different published specific primers with different sample preparations. Four different DNA extraction methods were used to test the sensitivity of PCR with four different primer sets. DNA was extracted directly from whole blood samples, blood dried on filter papers, or blood dried on FTA cards. The results showed that the sensitivity of PCR with each primer set was highly dependent on the sample preparation and DNA extraction method. The highest sensitivities for all the primers tested were obtained using DNA extracted from whole blood samples, while the lowest sensitivities were obtained when DNA was extracted from filter paper preparations. To conclude, the obtained results are discussed and a protocol for diagnosis and surveillance of T. vivax trypanosomosis is recommended.

  9. HLA imputation in an admixed population: An assessment of the 1000 Genomes data as a training set.

    PubMed

    Nunes, Kelly; Zheng, Xiuwen; Torres, Margareth; Moraes, Maria Elisa; Piovezan, Bruno Z; Pontes, Gerlandia N; Kimura, Lilian; Carnavalli, Juliana E P; Mingroni Netto, Regina C; Meyer, Diogo

    2016-03-01

    Methods to impute HLA alleles based on dense single nucleotide polymorphism (SNP) data provide a valuable resource to association studies and evolutionary investigation of the MHC region. The availability of appropriate training sets is critical to the accuracy of HLA imputation, and the inclusion of samples with various ancestries is an important prerequisite in studies of admixed populations. We assess the accuracy of HLA imputation using 1000 Genomes Project data as a training set, applying it to a highly admixed Brazilian population, the Quilombos from the state of São Paulo. To assess accuracy, we compared imputed and experimentally determined genotypes for 146 samples at 4 classical HLA loci. We found imputation accuracies of 82.9%, 81.8%, 94.8% and 86.6% for HLA-A, -B, -C and -DRB1, respectively (two-field resolution). Accuracies were improved when we included a subset of Quilombo individuals in the training set. We conclude that the 1000 Genomes data is a valuable resource for construction of training sets due to the diversity of ancestries and the potential for a large overlap of SNPs with the target population. We also show that tailoring training sets to features of the target population substantially enhances imputation accuracy. Copyright © 2016 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.

  10. Freezing of homogenized sputum samples for intermittent storage.

    PubMed

    Holz, O; Mücke, M; Zarza, P; Loppow, D; Jörres, R A; Magnussen, H

    2001-08-01

    Among the reasons that restrict the application of sputum induction in outpatient settings is the need to process samples within 2 h after induction. The aim of our study was to assess whether freezing is suitable for intermediate storage of sputum samples before processing. We compared differential cell counts between two sputum aliquots derived from the same sample. One aliquot was processed within 2 h after production and the other after it had been frozen with the addition of dimethyl sulfoxide (DMSO) and stored for up to 10 days at -20 °C. Thirty-five samples were frozen immediately prior to preparation of cytospins, and 10 samples were frozen at an even earlier stage, directly after homogenization. In both sets of experiments we observed a significant relationship between frozen and native samples regarding macrophages, neutrophils, and eosinophils, as indicated by respective intraclass correlation coefficients of 0.96, 0.96, and 0.93 in the first, and of 0.92, 0.96, and 0.77 in the second experiments. Our results indicate that freezing sputum samples at different stages of processing does not alter sputum morphology to an extent that affects the results of differential cell counts.
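
    Agreement statistics like those above can be computed with the one-way random-effects intraclass correlation formula; the paired differential counts below are simulated, not the study's data.

```python
import numpy as np

def icc_oneway(x, y):
    """One-way random-effects ICC(1,1) for paired measurements
    (e.g., the same sputum sample counted native and after freezing)."""
    pairs = np.column_stack([x, y])
    n, k = pairs.shape
    grand = pairs.mean()
    ms_between = k * np.sum((pairs.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_within = np.sum((pairs - pairs.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical neutrophil percentages from native and frozen aliquots
# of 35 samples, with small measurement noise added after freezing.
rng = np.random.default_rng(3)
native = rng.uniform(20, 80, size=35)
frozen = native + rng.normal(0, 2, size=35)
icc = icc_oneway(native, frozen)
```

    An ICC near 1 indicates that freezing leaves the cell differential essentially unchanged, which is the pattern the study reports for macrophages, neutrophils, and eosinophils.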

  11. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    NASA Technical Reports Server (NTRS)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
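    A non-parametric estimate of this kind of sampling uncertainty can be illustrated by subsampling a dense record at a fixed interval and measuring how far the subsampled average strays from the full-resolution average; a toy sketch (toy data, not the radar set):

```python
def sampling_error(series, step, offset=0):
    """Relative error of an average built from samples taken every
    `step` time steps, versus the full-resolution average."""
    sub = series[offset::step]
    full = sum(series) / len(series)
    est = sum(sub) / len(sub)
    return (est - full) / full

def rms_sampling_uncertainty(series, step):
    """RMS error over all possible sampling phases: a simple
    non-parametric estimate for that sampling interval."""
    errs = [sampling_error(series, step, o) ** 2 for o in range(step)]
    return (sum(errs) / step) ** 0.5

rain = [0, 0, 5, 1, 0, 3, 0, 0, 2, 7, 0, 1]  # toy hourly rain rates
print(rms_sampling_uncertainty(rain, 1))  # full sampling -> 0.0
```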

  12. Study design and data analysis considerations for the discovery of prognostic molecular biomarkers: a case study of progression free survival in advanced serous ovarian cancer.

    PubMed

    Qin, Li-Xuan; Levine, Douglas A

    2016-06-10

    Accurate discovery of molecular biomarkers that are prognostic of a clinical outcome is an important yet challenging task, partly due to the combination of the typically weak genomic signal for a clinical outcome and the frequently strong noise due to microarray handling effects. Effective strategies to resolve this challenge are in dire need. We set out to assess the use of careful study design and data normalization for the discovery of prognostic molecular biomarkers. Taking progression free survival in advanced serous ovarian cancer as an example, we conducted empirical analysis on two sets of microRNA arrays for the same set of tumor samples: arrays in one set were collected using careful study design (that is, uniform handling and randomized array-to-sample assignment) and arrays in the other set were not. We found that (1) handling effects can confound the clinical outcome under study as a result of chance even with randomization, (2) the level of confounding handling effects can be reduced by data normalization, and (3) good study design cannot be replaced by post-hoc normalization. In addition, we provided a practical approach to define positive and negative control markers for detecting handling effects and assessing the performance of a normalization method. Our work showcased the difficulty of finding prognostic biomarkers for a clinical outcome of weak genomic signals, illustrated the benefits of careful study design and data normalization, and provided a practical approach to identify handling effects and select a beneficial normalization method. Our work calls for careful study design and data analysis for the discovery of robust and translatable molecular biomarkers.

  13. Sewage Reflects the Microbiomes of Human Populations

    PubMed Central

    Newton, Ryan J.; McLellan, Sandra L.; Dila, Deborah K.; Vineis, Joseph H.; Morrison, Hilary G.; Eren, A. Murat

    2015-01-01

    ABSTRACT Molecular characterizations of the gut microbiome from individual human stool samples have identified community patterns that correlate with age, disease, diet, and other human characteristics, but resources for marker gene studies that consider microbiome trends among human populations scale with the number of individuals sampled from each population. As an alternative strategy for sampling populations, we examined whether sewage accurately reflects the microbial community of a mixture of stool samples. We used oligotyping of high-throughput 16S rRNA gene sequence data to compare the bacterial distribution in a stool data set to a sewage influent data set from 71 U.S. cities. On average, only 15% of sewage sample sequence reads were attributed to human fecal origin, but sewage recaptured most (97%) human fecal oligotypes. The most common oligotypes in stool matched the most common and abundant in sewage. After informatically separating sequences of human fecal origin, sewage samples exhibited ~3× greater diversity than stool samples. Comparisons among municipal sewage communities revealed the ubiquitous and abundant occurrence of 27 human fecal oligotypes, representing an apparent core set of organisms in U.S. populations. The fecal community variability among U.S. populations was significantly lower than among individuals. It clustered into three primary community structures distinguished by oligotypes from Bacteroidaceae, Prevotellaceae, or Lachnospiraceae/Ruminococcaceae. These distribution patterns reflected human population variation and predicted whether samples represented lean or obese populations with 81 to 89% accuracy. Our findings demonstrate that sewage represents the fecal microbial community of human populations and captures population-level traits of the human microbiome. PMID:25714718
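    The "~3× greater diversity" comparison can be illustrated with a Shannon index on toy oligotype count vectors; this is an illustration only, not the authors' metric or data:

```python
from math import log

def shannon(counts):
    """Shannon diversity index H = -sum(p * ln p) over taxon counts."""
    total = sum(counts)
    return -sum(c / total * log(c / total) for c in counts if c)

stool = [80, 15, 5]                      # few dominant oligotypes
sewage = [20, 18, 15, 12, 10, 9, 8, 8]   # many types, more even
print(shannon(sewage) > shannon(stool))  # True
```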

  14. Diffuse Reflectance Spectroscopy for Total Carbon Analysis of Hawaiian Soils

    NASA Astrophysics Data System (ADS)

    McDowell, M. L.; Bruland, G. L.; Deenik, J. L.; Grunwald, S.; Uchida, R.

    2010-12-01

    Accurate assessment of total carbon (Ct) content is important for fertility and nutrient management of soils, as well as for carbon sequestration studies. The non-destructive analysis of soils by diffuse reflectance spectroscopy (DRS) is a potential supplement or alternative to the traditional time-consuming and costly combustion method of Ct analysis, especially in spatial or temporal studies where sample numbers are large. We investigate the use of the visible to near-infrared (VNIR) and mid-infrared (MIR) spectra of soils coupled with chemometric analysis to determine their Ct content. Our specific focus is on Hawaiian soils of agricultural importance. Though this technique has been introduced to the soil community, it has yet to be fully tested and used in practical applications for all soil types, and this is especially true for Hawaii. In short, DRS characterizes and differentiates materials based on the variation of the light reflected by a material at certain wavelengths. This spectrum is dependent on the material’s composition, structure, and physical state. Multivariate chemometric analysis unravels the information in a set of spectra that can help predict a property such as Ct. This study benefits from the remarkably diverse soils of Hawaii. Our sample set includes 216 soil samples from 145 pedons from the main Hawaiian Islands archived at the National Soil Survey Center in Lincoln, NE, along with more than 50 newly-collected samples from Kauai, Oahu, Molokai, and Maui. In total, over 90 series from 10 of the 12 soil orders are represented. The Ct values of these samples range from < 1% - 55%. We anticipate that the diverse nature of our sample set will ensure a model with applicability to a wide variety of soils, both in Hawaii and globally. We have measured the VNIR and MIR spectra of these samples and obtained their Ct values by dry combustion. Our initial analyses are conducted using only samples obtained from the Lincoln archive. 
In this preliminary case, we use Partial Least Squares (PLS) regression with cross validation to develop a prediction model for soils of unknown carbon content given only their spectral signature. We find R2 values greater than 0.93 for the MIR spectra and 0.87 for the VNIR spectra, indicating a strong ability to correlate a soil’s spectrum with its Ct content. We build on these encouraging results by continuing chemometric analyses using the full data set, different data subsets, separate model calibration and validation groups, and combined VNIR and MIR spectra, and by exploring different data pretreatment options and variations to the PLS parameters.
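    The cross-validated R2 quoted above amounts to comparing held-out predictions with measured Ct values. A pure-Python sketch of leave-one-out cross-validated R2, using a single least-squares predictor as a stand-in for the full PLS model on spectra (an assumption made for brevity):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loocv_r2(xs, ys):
    """R2 of predictions where each point is predicted by a model
    fitted on all the other points (leave-one-out cross validation)."""
    preds = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        preds.append(a + b * xs[i])
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2]   # toy spectral index per sample
ys = [2.1, 4.0, 5.9, 8.2, 9.9, 12.1]  # measured total carbon (%)
print(round(loocv_r2(xs, ys), 2))
```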

  15. Constructing a Reward-Related Quality of Life Statistic in Daily Life—a Proof of Concept Study Using Positive Affect

    PubMed Central

    Verhagen, Simone J. W.; Simons, Claudia J. P.; van Zelst, Catherine; Delespaul, Philippe A. E. G.

    2017-01-01

    Background: Mental healthcare needs person-tailored interventions. Experience Sampling Method (ESM) can provide daily life monitoring of personal experiences. This study aims to operationalize and test a measure of momentary reward-related Quality of Life (rQoL). Intuitively, quality of life improves by spending more time on rewarding experiences. ESM clinical interventions can use this information to coach patients to find a realistic, optimal balance of positive experiences (maximize reward) in daily life. rQoL combines the frequency of engaging in a relevant context (a ‘behavior setting’) with concurrent (positive) affect. High rQoL occurs when the most frequent behavior settings are combined with positive affect or infrequent behavior settings co-occur with low positive affect. Methods: Resampling procedures (Monte Carlo experiments) were applied to assess the reliability of rQoL using various behavior setting definitions under different sampling circumstances, for real or virtual subjects with low-, average- and high contextual variability. Furthermore, resampling was used to assess whether rQoL is a distinct concept from positive affect. Virtual ESM beep datasets were extracted from 1,058 valid ESM observations for virtual and real subjects. Results: Behavior settings defined by Who-What contextual information were most informative. Simulations of at least 100 ESM observations are needed for reliable assessment. Virtual ESM beep datasets of a real subject can be defined by Who-What-Where behavior setting combinations. Large sample sizes are necessary for reliable rQoL assessments, except for subjects with low contextual variability. rQoL is distinct from positive affect. Conclusion: rQoL is a feasible concept. Monte Carlo experiments should be used to assess the reliable implementation of an ESM statistic. Future research in ESM should assess the behavior of summary statistics under different sampling situations. This exploration is especially relevant in clinical implementation, where often only small datasets are available. PMID:29163294
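    One plausible operationalization of the idea described above (an assumption for illustration, not the authors' formula) is the correlation between how often each behavior setting occurs and the mean positive affect reported in it, which is high exactly when frequent settings feel good and infrequent settings feel bad:

```python
from collections import defaultdict

def rqol(observations):
    """observations: list of (behavior_setting, positive_affect) beeps.
    Returns the correlation between each setting's relative frequency
    and its mean positive affect."""
    by_setting = defaultdict(list)
    for setting, pa in observations:
        by_setting[setting].append(pa)
    freqs = [len(v) / len(observations) for v in by_setting.values()]
    means = [sum(v) / len(v) for v in by_setting.values()]
    n = len(freqs)
    mf, mm = sum(freqs) / n, sum(means) / n
    cov = sum((f - mf) * (m - mm) for f, m in zip(freqs, means))
    sf = sum((f - mf) ** 2 for f in freqs) ** 0.5
    sm = sum((m - mm) ** 2 for m in means) ** 0.5
    return cov / (sf * sm)

# the most frequent setting is also the most pleasant -> near +1
beeps = [("work", 2)] * 2 + [("friends", 6)] * 6 + [("chores", 1)]
print(rqol(beeps))
```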

  16. Switching characteristics in Cu:SiO2 by chemical soak methods for resistive random access memory (ReRAM)

    NASA Astrophysics Data System (ADS)

    Chin, Fun-Tat; Lin, Yu-Hsien; Yang, Wen-Luh; Liao, Chin-Hsuan; Lin, Li-Min; Hsiao, Yu-Ping; Chao, Tien-Sheng

    2015-01-01

    A limited copper (Cu)-source Cu:SiO2 switching layer composed of various Cu concentrations was fabricated using a chemical soaking (CS) technique. The switching layer was then studied for developing applications in resistive random access memory (ReRAM) devices. The resistive switching behavior of all samples suggested that Cu conductive filaments form and rupture during the set/reset process. The experimental results indicated that the observed endurance failure was related to the Joule heating effect. Moreover, the number of endurance switching cycles increased as the Cu concentration decreased. In high-temperature tests, the samples demonstrated that the operating (set/reset) voltages decreased as the temperature increased, and an Arrhenius plot was used to calculate the activation energy of the set/reset process. In addition, the samples demonstrated stable data retention properties when baked at 85 °C, but the samples with low Cu concentrations exhibited short retention times in the low-resistance state (LRS) during 125 °C tests. Therefore, Cu concentration is a crucial factor in the trade-off between the endurance and retention properties; furthermore, the Cu concentration can be easily modulated using this CS technique.
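    Extracting an activation energy from an Arrhenius plot rests on the rate law r = A·exp(-Ea/(kB·T)); a two-point sketch with synthetic rates (illustrative values, not the paper's data):

```python
from math import exp, log

K_B = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(t1, r1, t2, r2):
    """Two-point Arrhenius estimate of Ea (eV) from rates r1, r2
    measured at temperatures t1, t2 (K): slope of ln(r) vs 1/T."""
    return K_B * log(r1 / r2) / (1 / t2 - 1 / t1)

# synthetic rates generated with Ea = 0.5 eV should be recovered
Ea = 0.5
r = lambda T: exp(-Ea / (K_B * T))
print(round(activation_energy(300, r(300), 350, r(350)), 3))  # 0.5
```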

  17. Evaluation of a Serum Lung Cancer Biomarker Panel.

    PubMed

    Mazzone, Peter J; Wang, Xiao-Feng; Han, Xiaozhen; Choi, Humberto; Seeley, Meredith; Scherer, Richard; Doseeva, Victoria

    2018-01-01

    A panel of 3 serum proteins and 1 autoantibody has been developed to assist with the detection of lung cancer. We aimed to validate the accuracy of the biomarker panel in an independent test set and explore the impact of adding a fourth serum protein to the panel, as well as the impact of combining molecular and clinical variables. The training set of serum samples was purchased from commercially available biorepositories. The testing set was from a biorepository at the Cleveland Clinic. All lung cancer and control subjects were >50 years old and had smoked a minimum of 20 pack-years. A panel of biomarkers including CEA (carcinoembryonic antigen), CYFRA21-1 (cytokeratin-19 fragment 21-1), CA125 (carbohydrate antigen 125), HGF (hepatocyte growth factor), and NY-ESO-1 (New York esophageal cancer-1 antibody) was measured using immunoassay techniques. The multiple of the median method, multivariate logistic regression, and random forest modeling were used to analyze the results. The training set consisted of 604 patient samples (268 with lung cancer and 336 controls) and the testing set of 400 patient samples (155 with lung cancer and 245 controls). With a threshold established from the training set, the sensitivity and specificity of both the 4- and 5-biomarker panels on the testing set were 49% and 96%, respectively. Models built on the testing set using only clinical variables had an area under the receiver operating characteristic curve of 0.68; using the biomarker panel alone, 0.81; and combining clinical and biomarker variables, 0.86. This study validates the accuracy of a panel of proteins and an autoantibody in a population relevant to lung cancer detection and suggests a benefit to combining clinical features with the biomarker results.
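    The "multiple of the median" step referenced above scales each marker by its median in a reference population so that markers on very different scales can be combined; a minimal sketch with hypothetical medians and values:

```python
def mom_scores(panel_values, control_medians):
    """Multiple-of-the-median: divide each marker by its median in a
    control population so markers on different scales are comparable."""
    return {m: v / control_medians[m] for m, v in panel_values.items()}

# hypothetical control medians and one patient's panel (illustrative units)
controls = {"CEA": 2.0, "CYFRA21-1": 1.5, "CA125": 10.0, "HGF": 0.5}
patient = {"CEA": 6.0, "CYFRA21-1": 3.0, "CA125": 20.0, "HGF": 0.5}

scores = mom_scores(patient, controls)
composite = sum(scores.values())  # one simple way to combine the panel
print(scores["CEA"], composite)   # 3.0 8.0
```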

  18. Evaluation of a Serum Lung Cancer Biomarker Panel

    PubMed Central

    Mazzone, Peter J; Wang, Xiao-Feng; Han, Xiaozhen; Choi, Humberto; Seeley, Meredith; Scherer, Richard; Doseeva, Victoria

    2018-01-01

    Background: A panel of 3 serum proteins and 1 autoantibody has been developed to assist with the detection of lung cancer. We aimed to validate the accuracy of the biomarker panel in an independent test set and explore the impact of adding a fourth serum protein to the panel, as well as the impact of combining molecular and clinical variables. Methods: The training set of serum samples was purchased from commercially available biorepositories. The testing set was from a biorepository at the Cleveland Clinic. All lung cancer and control subjects were >50 years old and had smoked a minimum of 20 pack-years. A panel of biomarkers including CEA (carcinoembryonic antigen), CYFRA21-1 (cytokeratin-19 fragment 21-1), CA125 (carbohydrate antigen 125), HGF (hepatocyte growth factor), and NY-ESO-1 (New York esophageal cancer-1 antibody) was measured using immunoassay techniques. The multiple of the median method, multivariate logistic regression, and random forest modeling was used to analyze the results. Results: The training set consisted of 604 patient samples (268 with lung cancer and 336 controls) and the testing set of 400 patient samples (155 with lung cancer and 245 controls). With a threshold established from the training set, the sensitivity and specificity of both the 4- and 5-biomarker panels on the testing set was 49% and 96%, respectively. Models built on the testing set using only clinical variables had an area under the receiver operating characteristic curve of 0.68, using the biomarker panel 0.81 and by combining clinical and biomarker variables 0.86. Conclusions: This study validates the accuracy of a panel of proteins and an autoantibody in a population relevant to lung cancer detection and suggests a benefit to combining clinical features with the biomarker results. PMID:29371783

  19. Sample Selection for Training Cascade Detectors.

    PubMed

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. In practice, the positive set has few samples, while the negative set must represent anything except the object of interest; as a result, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.
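    The core idea of feeding the next stage the most informative false positives can be sketched as follows (a toy illustration, not the paper's implementation; the 0.5 pass threshold and scores are assumptions):

```python
def select_hard_negatives(negatives, stage_score, budget):
    """Keep the negatives the current stage wrongly accepts with the
    highest confidence (hardest false positives) for the next stage."""
    fps = [n for n in negatives if stage_score(n) >= 0.5]  # passed stage
    fps.sort(key=stage_score, reverse=True)
    return fps[:budget]

# toy scores standing in for a stage classifier's output on negatives
scores = {"a": 0.9, "b": 0.2, "c": 0.7, "d": 0.55, "e": 0.4}
picked = select_hard_negatives(list(scores), scores.get, budget=2)
print(picked)  # ['a', 'c']
```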

  20. Development of Strain-Specific Primers for Identification of Bifidobacterium bifidum BGN4.

    PubMed

    Youn, So Youn; Ji, Geun Eog; Han, Yoo Ri; Park, Myeong Soo

    2017-05-28

    Bifidobacterium bifidum BGN4 (BGN4) has many proven beneficial effects, including antiallergy and anticancer properties. It has been commercialized and used in several probiotic products, and thus strain-specific identification of this strain is very valuable for further strain-dependent physiological study. For this purpose, we developed novel multiplex polymerase chain reaction (PCR) primer sets for strain-specific detection of BGN4 in commercial products and fecal samples of animal models. The primer set was tested on seven strains of B. bifidum and 75 strains of the other Bifidobacterium species. The BGN4-specific regions were derived using megaBLAST against genome sequences of various B. bifidum databases and four sets of primers were designed. As a result, only BGN4 produced four PCR products simultaneously whereas the other strains did not. The PCR detection limit using BGN4-specific primer sets was 2.8 × 10^1 CFU/ml of BGN4. Those primer sets also detected and identified BGN4 in the probiotic products containing BGN4 and in fecal samples from a BGN4-fed animal model with high specificity. Our results indicate that the PCR assay from this study is an efficient tool for the simple, rapid, and reliable identification of the probiotic strain BGN4.

  1. Screening for chronic kidney disease in a community-based diabetes cohort in rural Guatemala: a cross-sectional study

    PubMed Central

    Flood, David; Garcia, Pablo; Douglas, Kate; Hawkins, Jessica

    2018-01-01

    Objective Screening is a key strategy to address the rising burden of chronic kidney disease (CKD) in low-income and middle-income countries. However, there are few reports regarding the implementation of screening programmes in resource-limited settings. The objectives of this study are (1) to share programmatic experiences implementing CKD screening in a rural, resource-limited setting and (2) to assess the burden of renal disease in a community-based diabetes programme in rural Guatemala. Design Cross-sectional assessment of glomerular filtration rate (GFR) and urine albumin. Setting Central Highlands of Guatemala. Participants We enrolled 144 adults with type 2 diabetes in a community-based CKD screening activity carried out by the sponsoring institution. Outcome measures Prevalence of renal disease and risk of CKD progression using Kidney Disease: Improving Global Outcomes definitions and classifications. Results We found that 57% of the sample met GFR and/or albuminuria criteria suggestive of CKD. Over half of the sample had moderate or greater increased risk for CKD progression, including nearly 20% who were classified as high or very high risk. Hypertension was common in the sample (42%), and glycaemic control was suboptimal (mean haemoglobin A1c 9.4%±2.5% at programme enrolment and 8.6%±2.3% at time of CKD screening). Conclusions The high burden of renal disease in our patient sample suggests an imperative to better understand the burden and risk factors of CKD in Guatemala. The implementation details we share reveal the tension between evidence-based CKD screening versus screening that can feasibly be delivered in resource-limited global settings. PMID:29358450
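    The risk classification used above follows the KDIGO 2012 GFR and albuminuria categories and the associated progression-risk "heat map"; a simplified sketch of that lookup logic (GFR in mL/min/1.73 m², ACR in mg/g; an illustration, not a clinical tool):

```python
def gfr_category(gfr):
    """KDIGO GFR categories G1-G5."""
    for cat, lo in [("G1", 90), ("G2", 60), ("G3a", 45),
                    ("G3b", 30), ("G4", 15)]:
        if gfr >= lo:
            return cat
    return "G5"

def albuminuria_category(acr):
    """KDIGO albuminuria categories A1-A3 (urine ACR, mg/g)."""
    return "A1" if acr < 30 else ("A2" if acr <= 300 else "A3")

RISK = {  # KDIGO 2012 heat map of CKD progression risk
    ("G1", "A1"): "low", ("G1", "A2"): "moderate", ("G1", "A3"): "high",
    ("G2", "A1"): "low", ("G2", "A2"): "moderate", ("G2", "A3"): "high",
    ("G3a", "A1"): "moderate", ("G3a", "A2"): "high",
    ("G3a", "A3"): "very high",
    ("G3b", "A1"): "high", ("G3b", "A2"): "very high",
    ("G3b", "A3"): "very high",
    ("G4", "A1"): "very high", ("G4", "A2"): "very high",
    ("G4", "A3"): "very high",
    ("G5", "A1"): "very high", ("G5", "A2"): "very high",
    ("G5", "A3"): "very high",
}

def progression_risk(gfr, acr):
    return RISK[(gfr_category(gfr), albuminuria_category(acr))]

print(progression_risk(50, 120))  # G3a + A2 -> 'high'
```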

  2. Multigenic Delineation of Lower Jaw Deformity in Triploid Atlantic Salmon (Salmo salar L.)

    PubMed Central

    Amoroso, Gianluca; Ventura, Tomer; Elizur, Abigail; Carter, Chris G.

    2016-01-01

    Lower jaw deformity (LJD) is a skeletal anomaly affecting farmed triploid Atlantic salmon (Salmo salar L.) which leads to considerable economic losses for industry and has animal welfare implications. The present study employed transcriptome analysis in parallel with real-time qPCR techniques to characterise for the first time the LJD condition in triploid Atlantic salmon juveniles using two independent sample sets: experimentally-sourced salmon (60 g) and commercially produced salmon (100 g). A total of eleven genes, some detected/identified through the transcriptome analysis (fbn2, gal and gphb5) and others previously determined to be related to skeletal physiology (alp, bmp4, col1a1, col2a1, fgf23, igf1, mmp13, ocn), were tested in the two independent sample sets. Gphb5, a recently discovered hormone, was significantly (P < 0.05) down-regulated in LJD affected fish in both sample sets, suggesting a possible hormonal involvement. In-situ hybridization detected gphb5 expression in oral epithelium, teeth and skin of the lower jaw. Col2a1 showed the same consistent significant (P < 0.05) down-regulation in LJD suggesting a possible cartilaginous impairment as a distinctive feature of the condition. Significant (P < 0.05) differential expression of other genes found in either one or the other sample set highlighted the possible effect of stage of development or condition progression on transcription and showed that anomalous bone development, likely driven by cartilage impairment, is more evident at larger fish sizes. The present study improved our understanding of LJD suggesting that a cartilage impairment likely underlies the condition and col2a1 may be a marker. In addition, the involvement of gphb5 urges further investigation of a hormonal role in LJD and skeletal physiology in general. PMID:27977809

  3. Multigenic Delineation of Lower Jaw Deformity in Triploid Atlantic Salmon (Salmo salar L.).

    PubMed

    Amoroso, Gianluca; Ventura, Tomer; Cobcroft, Jennifer M; Adams, Mark B; Elizur, Abigail; Carter, Chris G

    2016-01-01

    Lower jaw deformity (LJD) is a skeletal anomaly affecting farmed triploid Atlantic salmon (Salmo salar L.) which leads to considerable economic losses for industry and has animal welfare implications. The present study employed transcriptome analysis in parallel with real-time qPCR techniques to characterise for the first time the LJD condition in triploid Atlantic salmon juveniles using two independent sample sets: experimentally-sourced salmon (60 g) and commercially produced salmon (100 g). A total of eleven genes, some detected/identified through the transcriptome analysis (fbn2, gal and gphb5) and others previously determined to be related to skeletal physiology (alp, bmp4, col1a1, col2a1, fgf23, igf1, mmp13, ocn), were tested in the two independent sample sets. Gphb5, a recently discovered hormone, was significantly (P < 0.05) down-regulated in LJD affected fish in both sample sets, suggesting a possible hormonal involvement. In-situ hybridization detected gphb5 expression in oral epithelium, teeth and skin of the lower jaw. Col2a1 showed the same consistent significant (P < 0.05) down-regulation in LJD suggesting a possible cartilaginous impairment as a distinctive feature of the condition. Significant (P < 0.05) differential expression of other genes found in either one or the other sample set highlighted the possible effect of stage of development or condition progression on transcription and showed that anomalous bone development, likely driven by cartilage impairment, is more evident at larger fish sizes. The present study improved our understanding of LJD suggesting that a cartilage impairment likely underlies the condition and col2a1 may be a marker. In addition, the involvement of gphb5 urges further investigation of a hormonal role in LJD and skeletal physiology in general.

  4. Comparison of diagnostic techniques for the detection of Cryptosporidium oocysts in animal samples

    PubMed Central

    Mirhashemi, Marzieh Ezzaty; Zintl, Annetta; Grant, Tim; Lucy, Frances E.; Mulcahy, Grace; De Waal, Theo

    2015-01-01

    While a large number of laboratory methods for the detection of Cryptosporidium oocysts in faecal samples are now available, their efficacy for identifying asymptomatic cases of cryptosporidiosis is poorly understood. This study was carried out to determine a reliable screening test for epidemiological studies in livestock. In addition, three molecular tests were compared to identify Cryptosporidium species responsible for the infection in cattle, sheep and horses. A variety of diagnostic tests including microscopic (Kinyoun's staining), immunological (Direct Fluorescence Antibody tests or DFAT), enzyme-linked immunosorbent assay (ELISA), and molecular methods (nested PCR) were compared to assess their ability to detect Cryptosporidium in cattle, horse and sheep faecal samples. The results indicate that the sensitivity and specificity of each test is highly dependent on the input samples; while Kinyoun's and DFAT proved to be reliable screening tools for cattle samples, DFAT and PCR analysis (targeted at the 18S rRNA gene fragment) were more sensitive for screening sheep and horse samples. Finally different PCR primer sets targeted at the same region resulted in the preferential amplification of certain Cryptosporidium species when multiple species were present in the sample. Therefore, for identification of Cryptosporidium spp. in the event of asymptomatic cryptosporidiosis, the combination of different 18S rRNA nested PCR primer sets is recommended for further epidemiological applications and also tracking the sources of infection. PMID:25662435

  5. Social and Physical Environmental Factors and Child Overweight in a Sample of American and Czech School-Aged Children: A Pilot Study

    ERIC Educational Resources Information Center

    Humenikova, Lenka; Gates, Gail E.

    2008-01-01

    Objective: To compare environmental factors that influence body mass index for age (BMI-for-age) between a sample of American and Czech school-aged children. Design: Pilot study. A parent questionnaire and school visits were used to collect data from parents and children. Setting: Public schools in 1 American and 2 Czech cities. Participants:…

  6. Field-based random sampling without a sampling frame: control selection for a case-control study in rural Africa.

    PubMed

    Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E

    2001-01-01

    Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.
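    The density-proportional step of such a design can be sketched as weighted random selection of survey areas (illustrative area weights; the actual method also frequency-matches on age and sex):

```python
import random

def density_proportional_sample(areas, n, rng):
    """Draw n survey locations from named areas with probability
    proportional to their population, so the geographical spread of
    controls follows population density."""
    names = list(areas)
    weights = [areas[a] for a in names]
    return rng.choices(names, weights=weights, k=n)

rng = random.Random(0)
areas = {"village_A": 700, "village_B": 200, "village_C": 100}
draws = density_proportional_sample(areas, 10_000, rng)
share_a = draws.count("village_A") / len(draws)
print(round(share_a, 2))  # close to 0.70, village_A's population share
```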

  7. Study on the lifetime of Mo/Si multilayer optics with pulsed EUV-source at the ETS

    NASA Astrophysics Data System (ADS)

    Schürmann, Mark; Yulin, Sergiy; Nesterenko, Viatcheslav; Feigl, Torsten; Kaiser, Norbert; Tkachenko, Boris; Schürmann, Max C.

    2011-06-01

    As EUV lithography is on its way into production stage, studies of optics contamination and cleaning under realistic conditions become more and more important. Due to this fact an Exposure Test Stand (ETS) has been constructed at XTREME technologies GmbH in collaboration with Fraunhofer IOF and with financial support of Intel Corporation. This test stand is equipped with a pulsed DPP source and allows for the simultaneous exposure of several samples. In the standard set-up four samples with an exposed area larger than 35 mm2 per sample can be exposed at a homogeneous intensity of 0.25 mW/mm2. A recent update of the ETS allows for simultaneous exposures of two samples with intensities up to 1.0 mW/mm2. The first application of this alternative set-up was a comparative study of carbon contamination rates induced by EUV radiation from the pulsed source with contamination rates induced by quasicontinuous synchrotron radiation. A modified gas-inlet system allows for the introduction of a second gas to the exposure chamber. This possibility was applied to investigate the efficiency of EUV-induced cleaning with different gas mixtures. In particular the enhancement of EUV-induced cleaning by addition of a second gas to the cleaning gas was studied.

  8. Investigation of rare and low-frequency variants using high-throughput sequencing with pooled DNA samples

    PubMed Central

    Wang, Jingwen; Skoog, Tiina; Einarsdottir, Elisabet; Kaartokallio, Tea; Laivuori, Hannele; Grauers, Anna; Gerdhem, Paul; Hytönen, Marjo; Lohi, Hannes; Kere, Juha; Jiao, Hong

    2016-01-01

    High-throughput sequencing using pooled DNA samples can facilitate genome-wide studies on rare and low-frequency variants in a large population. Some major questions concerning the pooling sequencing strategy are whether rare and low-frequency variants can be detected reliably, and whether estimated minor allele frequencies (MAFs) can represent the actual values obtained from individually genotyped samples. In this study, we evaluated MAF estimates using three variant detection tools with two sets of pooled whole exome sequencing (WES) and one set of pooled whole genome sequencing (WGS) data. Both GATK and Freebayes displayed high sensitivity, specificity and accuracy when detecting rare or low-frequency variants. For the WGS study, 56% of the low-frequency variants in the Illumina array data have identical MAFs and 26% have a one-allele difference between sequencing and individual genotyping data. The MAF estimates from WGS correlated well (r = 0.94) with those from Illumina arrays. The MAFs from the pooled WES data also showed high concordance (r = 0.88) with those from the individual genotyping data. In conclusion, the MAFs estimated from pooled DNA sequencing data reflect the MAFs in individually genotyped samples well. The pooling strategy can thus be a rapid and cost-effective approach for the initial screening in large-scale association studies. PMID:27633116

  9. A new approach to untargeted integration of high resolution liquid chromatography-mass spectrometry data.

    PubMed

    van der Kloet, Frans M; Hendriks, Margriet; Hankemeier, Thomas; Reijmers, Theo

    2013-11-01

    Because of its high sensitivity and specificity, hyphenated mass spectrometry has become the predominant method to detect and quantify metabolites present in bio-samples relevant to all sorts of life science studies. In contrast to targeted methods that are dedicated to specific features, global profiling acquisition methods allow new, unspecific metabolites to be analyzed. The challenge with these so-called untargeted methods is the proper and automated extraction and integration of features that could be of relevance. We propose a new algorithm that enables untargeted integration of samples that are measured with high resolution liquid chromatography-mass spectrometry (LC-MS). In contrast to other approaches, limited user interaction is needed, allowing less experienced users to integrate their data as well. The large number of single features found within a sample is combined into a smaller list of compound-related, grouped feature-sets representative of that sample. These feature-sets allow for easier interpretation and identification and, as importantly, easier matching across samples. We show that the automatically obtained integration results for a set of known target metabolites match those generated with vendor software, but that at least 10 times more feature-sets are extracted as well. We demonstrate our approach using high resolution LC-MS data acquired for 128 samples on a lipidomics platform. The data was also processed in a targeted manner (with a combination of automatic and manual integration) using vendor software for a set of 174 targets. As our untargeted extraction procedure is run per sample and per mass trace, its implementation is scalable. Because of the generic approach, we envision that this data extraction method will be used in targeted as well as untargeted analysis of many different kinds of TOF-MS data, even CE- and GC-MS or MRM data. The Matlab package is available for download on request and efforts are directed toward a user-friendly Windows executable. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. The pre-synaptic vesicle protein synaptotagmin is a novel biomarker for Alzheimer's disease.

    PubMed

    Öhrfelt, Annika; Brinkmalm, Ann; Dumurgier, Julien; Brinkmalm, Gunnar; Hansson, Oskar; Zetterberg, Henrik; Bouaziz-Amar, Elodie; Hugon, Jacques; Paquet, Claire; Blennow, Kaj

    2016-10-03

    Synaptic degeneration is a central pathogenic event in Alzheimer's disease that occurs early during the course of disease and correlates with cognitive symptoms. The pre-synaptic vesicle protein synaptotagmin-1 appears to be essential for the maintenance of an intact synaptic transmission and cognitive function. Synaptotagmin-1 in cerebrospinal fluid is a candidate Alzheimer biomarker for synaptic dysfunction that also may correlate with cognitive decline. In this study, a novel mass spectrometry-based assay for measurement of cerebrospinal fluid synaptotagmin-1 was developed, and was evaluated in two independent sample sets of patients and controls. Sample set I included cerebrospinal fluid samples from patients with dementia due to Alzheimer's disease (N = 17, age 52-86 years), patients with mild cognitive impairment due to Alzheimer's disease (N = 5, age 62-88 years), and controls (N = 17, age 41-82 years). Sample set II included cerebrospinal fluid samples from patients with dementia due to Alzheimer's disease (N = 24, age 52-84 years), patients with mild cognitive impairment due to Alzheimer's disease (N = 18, age 58-83 years), and controls (N = 36, age 43-80 years). The reproducibility of the novel method showed coefficients of variation of the measured synaptotagmin-1 peptide 215-223 (VPYSELGGK) and peptide 238-245 (HDIIGEFK) of 14 % or below. In both investigated sample sets, the CSF levels of synaptotagmin-1 were significantly increased in patients with dementia due to Alzheimer's disease (P ≤ 0.0001) and in patients with mild cognitive impairment due to Alzheimer's disease (P < 0.001). In addition, in sample set I the synaptotagmin-1 level was significantly higher in patients with mild cognitive impairment due to Alzheimer's disease compared with patients with dementia due to Alzheimer's disease (P ≤ 0.05). 
Cerebrospinal fluid synaptotagmin-1 is a promising biomarker for monitoring synaptic dysfunction and degeneration in Alzheimer's disease that may be useful for clinical diagnosis, for monitoring the effect of novel drug candidates on synaptic integrity, and for exploring pathophysiology directly in patients with Alzheimer's disease.

  11. Systematic versus random sampling in stereological studies.

    PubMed

    West, Mark J

    2012-12-01

    The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways to make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling, because each card is sampled without reference to the positions of the other cards. The other approach is systematic random sampling: pick a card at random from within a set number of cards at the top of the deck, and then pick further cards at equal intervals through the rest of the deck. Systematic sampling along one axis of many biological structures is more efficient than independent random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
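    The card-deck analogy can be made concrete in a few lines (function names are ours, for illustration only):

```python
import random

def independent_random_sample(n_items, n_samples, rng):
    """Independent random sampling: each item is drawn without
    reference to the positions of the others (any card from the deck)."""
    return sorted(rng.sample(range(n_items), n_samples))

def systematic_random_sample(n_items, n_samples, rng):
    """Systematic random sampling: a random start within the first
    interval, then items at equal intervals through the deck."""
    interval = n_items // n_samples
    start = rng.randrange(interval)
    return [start + i * interval for i in range(n_samples)]

rng = random.Random(0)
deck = systematic_random_sample(52, 4, rng)
# Successive systematic picks are exactly one interval (13 cards) apart.
assert all(b - a == 13 for a, b in zip(deck, deck[1:]))
```

    Both schemes give every item the same chance of selection; the systematic scheme simply spreads the sample evenly, which is why it is more efficient for non-randomly organized structures.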

  12. An Automated Algorithm to Screen Massive Training Samples for a Global Impervious Surface Classification

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.

    2012-01-01

    An algorithm is developed to automatically screen outliers from the massive training samples of the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP aims to produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is not only significant to urbanization studies but also needed for global carbon, hydrology, and energy balance research. A supervised classification method, regression tree, is applied in this project, and a set of accurate training samples is the key to supervised classification. Here we developed global training samples from fine resolution (approximately 1 m) satellite data (Quickbird and Worldview2) and then aggregated the fine resolution impervious cover map to 30 m resolution. To improve the classification accuracy, the training samples should be screened before being used to train the regression tree, but it is impossible to manually screen 30 m resolution training samples collected globally. In Europe alone, there are 174 training sites; the sites range in size from 4.5 km by 4.5 km to 8.1 km by 3.6 km, and the total number of training samples exceeds six million. We therefore developed this automated, statistics-based algorithm to screen the training samples at two levels: the site level and the scene level. At the site level, all training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel; the samples falling within each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. The screening then proceeds to the scene level, where a similar screening process with a looser threshold is applied to account for possible variance due to differences between sites.
We do not screen across scenes because scenes can vary due to phenology, solar-view geometry, atmospheric conditions, and other factors that do not reflect actual land cover differences. Finally, we will compare the classification results from screened and unscreened training samples to assess the improvement achieved by cleaning up the training samples.
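    The site-level screening step — bin samples by impervious fraction, then remove univariate outliers per bin — might look like the following sketch; the z-score rule and all names are illustrative stand-ins for the unspecified statistics:

```python
import statistics

def screen_group(values, z_cut=3.0):
    """Flag univariate outliers within one impervious-fraction group
    using a simple z-score rule (an illustrative stand-in for the
    project's actual univariate/multivariate tests)."""
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    return [v for v in values if sd == 0 or abs(v - mean) / sd <= z_cut]

def screen_site(samples):
    """Group (impervious_fraction, reflectance) samples into 10%
    impervious-fraction bins, then screen each bin separately,
    mirroring the site-level step."""
    groups = {}
    for frac, reflectance in samples:
        groups.setdefault(min(int(frac * 10), 9), []).append(reflectance)
    return {g: screen_group(v) for g, v in groups.items() if len(v) > 1}
```

    The scene-level pass would reuse the same machinery with a looser `z_cut`, as the abstract describes.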

  13. Coal liquefaction process streams characterization and evaluation: Analysis of Black Thunder coal and liquefaction products from HRI Bench Unit Run CC-15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugmire, R.J.; Solum, M.S.

    This study was designed to apply {sup 13}C-nuclear magnetic resonance (NMR) spectrometry to the analysis of direct coal liquefaction process-stream materials. {sup 13}C-NMR was shown to have a high potential for application to direct coal liquefaction-derived samples in Phase II of this program. In this Phase III project, {sup 13}C-NMR was applied to a set of samples derived from the HRI Inc. bench-scale liquefaction Run CC-15. The samples include the feed coal, net products, and intermediate streams from three operating periods of the run. High-resolution {sup 13}C-NMR data were obtained for the liquid samples, and solid-state CP/MAS {sup 13}C-NMR data were obtained for the coal and filter-cake samples. The {sup 13}C-NMR technique is used to derive a set of twelve carbon structural parameters for each sample (CONSOL Table A). Average molecular structural descriptors can then be derived from these parameters (CONSOL Table B).

  14. Evaluating information content of SNPs for sample-tagging in re-sequencing projects.

    PubMed

    Hu, Hao; Liu, Xiang; Jin, Wenfei; Hilger Ropers, H; Wienker, Thomas F

    2015-05-15

    Sample-tagging is designed for the identification of accidental sample mix-up, a major issue in re-sequencing studies. In this work, we develop a model to measure the information content of SNPs, so that we can optimize a panel of SNPs that approaches the maximal information for discrimination. The analysis shows that as few as 60 optimized SNPs can differentiate the individuals in a population as large as the present world, and only 30 optimized SNPs are in practice sufficient for labeling up to 100 thousand individuals. In simulated populations of 100 thousand individuals, the average Hamming distances generated by the optimized set of 30 SNPs are larger than 18, and the duality frequency is lower than 1 in 10 thousand. This strategy of sample discrimination proves robust for large sample sizes and across different datasets. The optimized sets of SNPs are designed for Whole Exome Sequencing, and a program is provided for SNP selection, allowing for customized SNP numbers and genes of interest. A sample-tagging plan based on this framework will improve re-sequencing projects in terms of reliability and cost-effectiveness.
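    The sample-tagging idea rests on pairwise Hamming distances between panels of genotype calls. A small simulation in the spirit of the paper (panel size and allele frequencies here are illustrative, not the optimized panel itself):

```python
import random

def hamming(a, b):
    """Number of positions at which two SNP genotype vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def simulate_tags(n_individuals, n_snps, rng):
    """Simulate biallelic genotype calls (0/1/2 copies of the minor
    allele) under Hardy-Weinberg with allele frequency 0.5, which
    maximizes the information content per SNP."""
    return [tuple(rng.choices([0, 1, 2], weights=[1, 2, 1])[0]
                  for _ in range(n_snps))
            for _ in range(n_individuals)]

rng = random.Random(42)
tags = simulate_tags(200, 30, rng)
min_dist = min(hamming(a, b)
               for i, a in enumerate(tags) for b in tags[i + 1:])
# With 30 informative SNPs, no two simulated individuals share a tag.
assert min_dist > 0
```

    A mix-up check then reduces to comparing a sample's observed genotypes against the expected tag and flagging any pair whose Hamming distance is implausibly large.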

  15. Counting missing values in a metabolite-intensity data set for measuring the analytical performance of a metabolomics platform.

    PubMed

    Huan, Tao; Li, Liang

    2015-01-20

    Metabolomics requires quantitative comparison of the individual metabolites present in an entire sample set. Unfortunately, missing intensity values in one or more samples are very common. Because missing values can have a profound influence on metabolomic results, the extent of missing values found in a metabolomic data set should be treated as an important parameter for measuring the analytical performance of a technique. In this work, we report a study on the scope of missing values and a robust method of filling the missing values in a chemical isotope labeling (CIL) LC-MS metabolomics platform. Unlike conventional LC-MS, CIL LC-MS quantifies the concentration differences of individual metabolites in two comparative samples based on the mass spectral peak intensity ratio of a peak pair from a mixture of differentially labeled samples. We show that this peak-pair feature can be exploited as a unique means of extracting metabolite intensity information from raw mass spectra. In our approach, a peak-pair picking algorithm, IsoMS, is initially used to process the LC-MS data set to generate a CSV file or table that contains metabolite ID and peak ratio information (i.e., a metabolite-intensity table). A zero-fill program, freely available from MyCompoundID.org, is developed to automatically find a missing value in the CSV file, go back to the raw LC-MS data to find the peak pair, and then calculate the intensity ratio and enter the ratio value into the table. Most of the missing values are found to be low abundance peak pairs. We demonstrate the performance of this method in analyzing an experimental and technical replicate data set of the human urine metabolome. Furthermore, we propose a standardized approach of counting missing values in a replicate data set as a way of gauging the extent of missing values in a metabolomics platform.
Finally, we illustrate that applying the zero-fill program, in conjunction with dansylation CIL LC-MS, can lead to a marked improvement in finding significant metabolites that differentiate bladder cancer patients and their controls in a metabolomics study of 109 subjects.
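    The zero-fill step described above — revisit the raw data for any table cell the peak-pair picking left empty — can be sketched as follows; the data structures and the noise-level fallback are our own illustrative simplifications, not the actual IsoMS/zero-fill implementation:

```python
def zero_fill(table, raw_spectra, noise_ratio=1.0):
    """Fill missing peak-ratio values in a metabolite-intensity table
    by re-examining (simulated) raw spectra.

    table: {metabolite_id: ratio or None}
    raw_spectra: {metabolite_id: (light_intensity, heavy_intensity)}
    """
    filled = {}
    for mid, ratio in table.items():
        if ratio is not None:
            filled[mid] = ratio
        elif mid in raw_spectra:
            light, heavy = raw_spectra[mid]
            # Recompute the peak-pair intensity ratio from the raw data.
            filled[mid] = light / heavy if heavy else noise_ratio
        else:
            # Peak pair truly absent: fall back to a nominal ratio.
            filled[mid] = noise_ratio
    return filled
```

    This mirrors the abstract's observation that most missing values correspond to low-abundance peak pairs that are still recoverable from the raw spectra.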

  16. Detection and evaluation of DNA methylation markers found at SCGN and KLF14 loci to estimate human age.

    PubMed

    Alghanim, Hussain; Antunes, Joana; Silva, Deborah Soares Bispo Santos; Alho, Clarice Sampaio; Balamurugan, Kuppareddi; McCord, Bruce

    2017-11-01

    Recent developments in the analysis of epigenetic DNA methylation patterns have demonstrated that certain genetic loci show a linear correlation with chronological age. The goal of this study is to identify a new set of epigenetic methylation markers for the forensic estimation of human age. A total of 27 CpG sites at three genetic loci, SCGN, DLX5 and KLF14, were examined to evaluate the correlation of their methylation status with age. These sites were evaluated using 72 blood samples and 91 saliva samples collected from volunteers with ages ranging from 5 to 73 years. DNA was bisulfite modified, followed by PCR amplification and pyrosequencing to determine the level of DNA methylation at each CpG site. In this study, certain CpG sites in the SCGN and KLF14 loci showed methylation levels that were correlated with chronological age; however, the tested CpG sites in DLX5 did not show a correlation with age. Using a 52-saliva-sample training set, two age-predictor models were developed by means of multivariate linear regression analysis. The two models performed similarly, with a single-locus model explaining 85% of the age variance at a mean absolute deviation of 5.8 years and a dual-locus model explaining 84% of the age variance with a mean absolute deviation of 6.2 years. In the validation set, the mean absolute deviation was measured to be 8.0 years and 7.1 years for the single- and dual-locus models, respectively. Another age-predictor model, developed using a 40-blood-sample training set, accounted for 71% of the age variance; this model gave a mean absolute deviation of 6.6 years for the training set and 10.3 years for the validation set. The results indicate that specific CpGs in SCGN and KLF14 can be used as potential epigenetic markers to estimate age using saliva and blood specimens.
These epigenetic markers could provide important information in cases where the determination of a suspect's age is critical in developing investigative leads. Copyright © 2017. Published by Elsevier B.V.
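    At its core, an age predictor of this kind is a linear regression of age on methylation level. A single-CpG least-squares sketch (the study's models combine several CpGs per locus; names and data here are illustrative):

```python
def fit_age_model(meth, ages):
    """Ordinary least-squares fit of age on one CpG's methylation
    fraction: age ~ a + b * methylation."""
    n = len(meth)
    mx = sum(meth) / n
    my = sum(ages) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(meth, ages))
         / sum((x - mx) ** 2 for x in meth))
    a = my - b * mx
    return a, b

def mean_absolute_deviation(model, meth, ages):
    """The accuracy metric reported in the study: mean absolute
    difference between predicted and chronological age."""
    a, b = model
    return sum(abs(a + b * x - y) for x, y in zip(meth, ages)) / len(ages)
```

    The reported figures (e.g., a mean absolute deviation of 5.8 years on the saliva training set) would come from evaluating `mean_absolute_deviation` on training and held-out validation samples.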

  17. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually treated as an unwanted fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct for the effect of temperature variations. However, temperature can also be considered a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compared the prediction performance of PLS models based on random sampling and on the proposed methods. The results of experimental studies showed that prediction performance was improved by the proposed methods. The MTCS and DTCS methods are therefore promising alternatives for improving prediction accuracy in near-infrared spectral measurement.

  18. Metric Sex Determination of the Human Coxal Bone on a Virtual Sample using Decision Trees.

    PubMed

    Savall, Frédéric; Faruch-Bilfeld, Marie; Dedouit, Fabrice; Sans, Nicolas; Rousseau, Hervé; Rougé, Daniel; Telmon, Norbert

    2015-11-01

    Decision trees provide an alternative to multivariate discriminant analysis, which is still the most commonly used method in anthropometric studies. Our study analyzed the metric characterization of a recent virtual sample of 113 coxal bones using decision trees for sex determination. From 17 osteometric type I landmarks, a dataset was built with five classic distances traditionally reported in the literature and six new distances selected using the two-step ratio method. A ten-fold cross-validation was performed, and a decision tree was established on two subsamples (training and test sets). The decision tree established on the training set included three nodes, and its application to the test set correctly classified 92% of individuals, a percentage similar to the data in the literature. The usefulness of decision trees has been demonstrated in numerous fields; they have already been used in sex determination, body mass prediction, and ancestry estimation. This study shows another use of decision trees, enabling simple and accurate sex determination. © 2015 American Academy of Forensic Sciences.
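    A single node of such a tree is simply a threshold on one distance. A minimal, purely illustrative stump finder (the study used a full decision-tree package; this shows only the idea of one split):

```python
def best_stump(distances, sexes):
    """Exhaustively search one osteometric distance for the threshold
    that best separates the two sexes: a one-node 'decision tree'.
    Returns (threshold, training accuracy). Real trees recurse on the
    resulting subsets; three nodes sufficed in the study."""
    best = (None, 0.0)
    for cut in sorted(set(distances)):
        correct = sum((d > cut) == (s == "M")
                      for d, s in zip(distances, sexes))
        # Either direction of the split may be the good one.
        acc = max(correct, len(sexes) - correct) / len(sexes)
        if acc > best[1]:
            best = (cut, acc)
    return best
```

    Cross-validated accuracy, as in the study's ten-fold design, would be measured by fitting the stump on training folds and scoring it on the held-out fold.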

  19. Urine cell-based DNA methylation classifier for monitoring bladder cancer.

    PubMed

    van der Heijden, Antoine G; Mengual, Lourdes; Ingelmo-Torres, Mercedes; Lozano, Juan J; van Rijt-van de Westerlo, Cindy C M; Baixauli, Montserrat; Geavlete, Bogdan; Moldoveanud, Cristian; Ene, Cosmin; Dinney, Colin P; Czerniak, Bogdan; Schalken, Jack A; Kiemeney, Lambertus A L M; Ribal, Maria J; Witjes, J Alfred; Alcaraz, Antonio

    2018-01-01

    Current standard methods used to detect and monitor bladder cancer (BC) are invasive or have low sensitivity. This study aimed to develop a urine methylation biomarker classifier for BC monitoring and to validate this classifier in patients in follow-up for bladder cancer (PFBC). Voided urine samples (N = 725) from BC patients, controls, and PFBC were prospectively collected in four centers; 626 urine samples were ultimately available for analysis. DNA was extracted from the urinary cells and bisulfite converted, and methylation status was analyzed using pyrosequencing. Cytology was available from a subset of patients (N = 399). In the discovery phase, seven genes selected from the literature (CDH13, CFTR, NID2, SALL3, TMEFF2, TWIST1, and VIM2) were studied in 111 BC and 57 control samples. This training set was used to develop a gene classifier by logistic regression, which was validated in 458 PFBC samples (173 with recurrence). A three-gene methylation classifier containing CFTR, SALL3, and TWIST1 was developed in the training set (AUC 0.874). The classifier achieved an AUC of 0.741 in the validation series. Cytology results were available for 308 samples from the validation set; cytology achieved an AUC of 0.696, whereas the classifier in this subset of patients reached an AUC of 0.768. Combining the methylation classifier with cytology results achieved an AUC of 0.86 in the validation set, with a sensitivity of 96%, a specificity of 40%, and positive and negative predictive values of 56% and 92%, respectively. The combination of the three-gene methylation classifier and cytology results has high sensitivity and high negative predictive value in a real clinical scenario (PFBC). The proposed classifier is a useful test for predicting BC recurrence and decreasing the number of cystoscopies in the follow-up of BC patients. If only patients with a positive combined classifier result underwent cystoscopy, 36% of all cystoscopies could be avoided.

  20. Testing for Salmonella in raw meat and poultry products collected at federally inspected establishments in the United States, 1998 through 2000.

    PubMed

    Rose, Bonnie E; Hill, Walter E; Umholtz, Robert; Ransom, Gerri M; James, William O

    2002-06-01

    The Food Safety and Inspection Service (FSIS) issued Pathogen Reduction; Hazard Analysis and Critical Control Point (HACCP) Systems; Final Rule (the PR/HACCP rule) on 25 July 1996. To verify that industry PR/HACCP systems are effective in controlling the contamination of raw meat and poultry products with human disease-causing bacteria, this rule sets product-specific Salmonella performance standards that must be met by slaughter establishments and establishments producing raw ground products. These performance standards are based on the prevalence of Salmonella as determined from the FSIS's nationwide microbial baseline studies and are expressed in terms of the maximum number of Salmonella-positive samples that are allowed in a given sample set. From 26 January 1998 through 31 December 2000, federal inspectors collected 98,204 samples and 1,502 completed sample sets for Salmonella analysis from large, small, and very small establishments that produced at least one of seven raw meat and poultry products: broilers, market hogs, cows and bulls, steers and heifers, ground beef, ground chicken, and ground turkey. Salmonella prevalence in most of the product categories was lower after the implementation of PR/HACCP than in pre-PR/HACCP baseline studies and surveys conducted by the FSIS. The results of 3 years of testing at establishments of all sizes combined show that >80% of the sample sets met the following Salmonella prevalence performance standards: 20.0% for broilers, 8.7% for market hogs, 2.7% for cows and bulls, 1.0% for steers and heifers, 7.5% for ground beef, 44.6% for ground chicken, and 49.9% for ground turkey. The decreased Salmonella prevalences may partly reflect industry improvements, such as improved process control, incorporation of antimicrobial interventions, and increased microbial-process control monitoring, in conjunction with PR/HACCP implementation.

  1. The clinical nurse specialist in an Irish hospital.

    PubMed

    Wickham, Sheelagh

    2011-01-01

    This study was set in an acute Irish health care setting and aimed to explore the activity of the clinical nurse specialist (CNS) in this setting. Quantitative methodology, using a valid and reliable questionnaire, provided descriptive statistics that gave accurate data on the total population of CNSs in the setting. The study took place in an acute-care 750-bed hospital, and the sample consisted of all 25 CNSs working in the institution, the total population. The findings show the CNS to be active in the roles of researcher, educator, communicator, change agent, leader, and clinical specialist, but the level of activity varies between roles and between individual CNSs. The findings merit further study of CNS role activity and of the variables that may influence it.

  2. Gram-negative and -positive bacteria differentiation in blood culture samples by headspace volatile compound analysis.

    PubMed

    Dolch, Michael E; Janitza, Silke; Boulesteix, Anne-Laure; Graßmann-Lichtenauer, Carola; Praun, Siegfried; Denzer, Wolfgang; Schelling, Gustav; Schubert, Sören

    2016-12-01

    Identification of microorganisms in positive blood cultures still relies on standard techniques such as Gram staining followed by culturing with definite microorganism identification. Alternatively, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry or the analysis of the headspace volatile compound (VC) composition produced by cultures can help to differentiate between microorganisms under experimental conditions. This study assessed the efficacy of VC-based differentiation of microorganisms into Gram-negatives and -positives in unselected positive blood culture samples from patients. Headspace gas samples of positive blood cultures were transferred to sterilized, sealed, and evacuated 20 ml glass vials and stored at -30 °C until batch analysis. Headspace gas VC content was analyzed via an autosampler connected to an ion-molecule reaction mass spectrometer (IMR-MS). Measurements covered a mass range from 16 to 135 u, including CO2, H2, N2, and O2. Prediction rules for microorganism identification based on VC composition were derived using a training data set and evaluated using a validation data set within a random split validation procedure. One hundred fifty-two aerobic samples growing 27 Gram-negatives, 106 Gram-positives, and 19 fungi, and 130 anaerobic samples growing 37 Gram-negatives, 91 Gram-positives, and two fungi were analysed. In anaerobic samples, ten discriminators were identified by the random forest method, allowing for differentiation of bacteria into Gram-negative and -positive (error rate: 16.7% in the validation data set). For aerobic samples, the error rate was no better than random. In anaerobic blood culture samples of patients, IMR-MS-based headspace VC composition analysis facilitates bacteria differentiation into Gram-negative and -positive.
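    The random split validation procedure — derive prediction rules on a training half, report the error rate on a held-out half — can be sketched as below. The paper uses random forests; here a toy nearest-centroid classifier stands in, and all names are ours:

```python
import random

def split_validate(samples, labels, classify_fn, train_frac=0.5, seed=0):
    """Random split validation: fit a rule on a random training subset
    and report the error rate on the held-out validation subset."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(len(idx) * train_frac)
    train, valid = idx[:cut], idx[cut:]
    rule = classify_fn([samples[i] for i in train],
                       [labels[i] for i in train])
    errors = sum(rule(samples[i]) != labels[i] for i in valid)
    return errors / len(valid)

def nearest_centroid(train_x, train_y):
    """Toy stand-in classifier: assign each sample to the class whose
    mean VC intensity vector is closest (squared Euclidean)."""
    cents = {}
    for lbl in set(train_y):
        rows = [x for x, y in zip(train_x, train_y) if y == lbl]
        cents[lbl] = [sum(c) / len(rows) for c in zip(*rows)]
    def rule(x):
        return min(cents, key=lambda l: sum((a - b) ** 2
                                            for a, b in zip(x, cents[l])))
    return rule

# Two well-separated synthetic "VC profiles".
xs = ([(i * 0.1, i * 0.1) for i in range(20)]
      + [(10 + i * 0.1, 10 + i * 0.1) for i in range(20)])
ys = ["neg"] * 20 + ["pos"] * 20
err = split_validate(xs, ys, nearest_centroid, seed=0)
assert 0.0 <= err <= 1.0
```

    Swapping `nearest_centroid` for a random forest recovers the paper's procedure without changing the validation scaffolding.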

  3. The influence of sampling interval on the accuracy of trail impact assessment

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.L.

    1999-01-01

    Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets, and estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The loss of accuracy in lineal extent estimates with increasing sampling interval varied across impact types, while the response of frequency of occurrence estimates was consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sampling intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question than by the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing effort in data collection.
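    The resampling-simulation method reduces to subsampling the census at increasing intervals and comparing estimates against the census value; a sketch with a 0/1 impact indicator (the study's indicators are richer):

```python
import random

def point_sample(census, interval):
    """Simulate a systematic point survey by keeping every
    interval-th record of a complete trail census."""
    return census[::interval]

def occurrence_frequency(sample):
    """Fraction of sampled points at which the impact is present."""
    return sum(sample) / len(sample)

# A 1000 m census with roughly 20% impact occurrence.
rng = random.Random(3)
census = [1 if rng.random() < 0.2 else 0 for _ in range(1000)]
truth = occurrence_frequency(census)
estimates = {m: occurrence_frequency(point_sample(census, m))
             for m in (1, 10, 100)}
assert estimates[1] == truth  # a 1 m interval reproduces the census
```

    Plotting the deviation of `estimates[m]` from `truth` against the interval `m`, over many impact types, is exactly the accuracy-loss comparison the study performs.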

  4. Multidrug resistance among new tuberculosis cases: detecting local variation through lot quality-assurance sampling.

    PubMed

    Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-03-01

    Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. In the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems (two-way static, three-way static, and three-way truncated sequential sampling) at two sets of thresholds: low MDR TB = 2% and high MDR TB = 10%, and low MDR TB = 5% and high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance (Vietnam) settings. In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but the required sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.
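    For the two-way static design, the operating characteristics of a decision rule follow directly from the binomial distribution. A sketch using the paper's 2%/10% thresholds (the sample size n and decision value d below are hypothetical, not from the paper):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_rule(n, d, p_low=0.02, p_high=0.10):
    """Two-way static LQAS: classify an area as 'high MDR' if more
    than d of n sampled new cases are multidrug resistant. Returns the
    rule's two error probabilities: alpha, calling a truly
    low-prevalence area high, and beta, calling a truly high one low."""
    alpha = 1 - binom_cdf(d, n, p_low)   # P(X > d | p = p_low)
    beta = binom_cdf(d, n, p_high)       # P(X <= d | p = p_high)
    return alpha, beta
```

    Choosing classification thresholds, as the abstract recommends, amounts to picking (n, d) so that both error probabilities are acceptably small.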

  5. A Comprehensive Analysis of Nuclear-Encoded Mitochondrial Genes in Schizophrenia.

    PubMed

    Gonçalves, Vanessa F; Cappi, Carolina; Hagen, Christian M; Sequeira, Adolfo; Vawter, Marquis P; Derkach, Andriy; Zai, Clement C; Hedley, Paula L; Bybjerg-Grauholm, Jonas; Pouget, Jennie G; Cuperfain, Ari B; Sullivan, Patrick F; Christiansen, Michael; Kennedy, James L; Sun, Lei

    2018-05-01

    The genetic risk factors of schizophrenia (SCZ), a severe psychiatric disorder, are not yet fully understood. Multiple lines of evidence suggest that mitochondrial dysfunction may play a role in SCZ, but comprehensive association studies are lacking. We hypothesized that variants in nuclear-encoded mitochondrial genes influence susceptibility to SCZ. We conducted gene-based and gene-set analyses using summary association results from the Psychiatric Genomics Consortium Schizophrenia Phase 2 (PGC-SCZ2) genome-wide association study comprising 35,476 cases and 46,839 control subjects. We applied the MAGMA method to three sets of nuclear-encoded mitochondrial genes: oxidative phosphorylation genes, other nuclear-encoded mitochondrial genes, and genes involved in nucleus-mitochondria crosstalk. Furthermore, we conducted a replication study using the iPSYCH SCZ sample of 2290 cases and 21,621 control subjects. In the PGC-SCZ2 sample, 1186 mitochondrial genes were analyzed, among which 159 had p values < .05 and 19 remained significant after multiple testing correction. A meta-analysis of 818 genes combining the PGC-SCZ2 and iPSYCH samples resulted in 104 nominally significant and nine significant genes, suggesting a polygenic model for the nuclear-encoded mitochondrial genes. Gene-set analysis, however, did not show significant results. In an in silico protein-protein interaction network analysis, 14 mitochondrial genes interacted directly with 158 SCZ risk genes identified in PGC-SCZ2 (permutation p = .02), and aldosterone signaling in epithelial cells and mitochondrial dysfunction pathways appeared to be overrepresented in this network of mitochondrial and SCZ risk genes. This study provides evidence that specific aspects of mitochondrial function may play a role in SCZ, but we did not observe its broad involvement even using a large sample. Copyright © 2018 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  6. Image reconstructions from super-sampled data sets with resolution modeling in PET imaging.

    PubMed

    Li, Yusheng; Matej, Samuel; Metzler, Scott D

    2014-12-01

    Spatial resolution in positron emission tomography (PET) is still a limiting factor in many imaging applications. To improve the spatial resolution of an existing scanner with fixed crystal sizes, mechanical movements such as scanner wobbling and object shifting have been considered for PET systems. Multiple acquisitions from different positions can provide complementary information and increased spatial sampling. The objective of this paper is to explore an efficient and useful reconstruction framework for reconstructing super-resolution images from super-sampled low-resolution data sets. The authors introduce a super-sampling data acquisition model based on the physical processes, with tomographic, downsampling, and shifting matrices as its building blocks. Based on this model, the authors extend the MLEM and Landweber algorithms to reconstruct images from super-sampled data sets, and also derive a backprojection-filtration-like (BPF-like) method for the super-sampling reconstruction. Furthermore, they explore variant methods for super-sampling reconstruction: separate super-sampling resolution-modeling reconstruction, and reconstruction without downsampling, to further improve image quality at the cost of more computation. The authors use simulated reconstruction of a resolution phantom to evaluate the three types of algorithms with different super-sampling schemes at different count levels. The contrast recovery coefficient (CRC) versus background variability, as an image-quality metric, is calculated at each iteration for all reconstructions. The authors observe that all three algorithms can significantly and consistently achieve increased CRCs at fixed background variability and reduce background artifacts with super-sampled data sets at the same count levels. For the same super-sampled data sets, the MLEM method achieves better image quality than the Landweber method, which in turn achieves better image quality than the BPF-like method.
The authors also demonstrate that the reconstructions from super-sampled data sets using a fine system matrix yield improved image quality compared to the reconstructions using a coarse system matrix. Super-sampling reconstructions with different count levels showed that greater spatial-resolution improvement can be obtained with higher counts at larger iteration numbers. The authors developed a super-sampling reconstruction framework that can reconstruct super-resolution images using the super-sampled data sets simultaneously, with known acquisition motion. The super-sampling PET acquisition using the proposed algorithms provides an effective and economical way to improve image quality for PET imaging, which has an important implication in preclinical and clinical region-of-interest PET imaging applications.
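
The multiplicative MLEM update that these reconstructions share can be sketched in a few lines. The following is a hypothetical 1-D toy, not the authors' PET system model: `D`, `S`, and the stacked matrix `A` merely play the roles of the downsampling, shifting, and tomographic building blocks the abstract describes.

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Multiplicative MLEM update: x <- x * [A^T (y / Ax)] / [A^T 1]."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity (normalization) image
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        proj[proj == 0] = 1e-12               # guard against division by zero
        x *= (A.T @ (y / proj)) / sens
    return x

# Toy 1-D acquisition: D averages pairs of fine bins (downsampling),
# S shifts the object by one fine bin; stacking D and D@S mimics two
# super-sampled acquisitions of the same object.
n = 8
truth = np.zeros(n)
truth[3], truth[4] = 4.0, 2.0
D = np.kron(np.eye(n // 2), [[0.5, 0.5]])     # 4 x 8 downsampling matrix
S = np.roll(np.eye(n), 1, axis=1)             # 8 x 8 one-bin shift matrix
A = np.vstack([D, D @ S])                     # combined acquisition model
y = A @ truth                                 # noiseless super-sampled data
x_hat = mlem(A, y)
```

On noiseless, consistent data the iterates drive the forward projection of the estimate toward the measured data while staying nonnegative, which is the behavior the CRC-versus-background-variability evaluation builds on.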

  7. Feature Selection for Ridge Regression with Provable Guarantees.

    PubMed

    Paul, Saurabh; Drineas, Petros

    2016-04-01

We introduce single-set spectral sparsification as a deterministic sampling-based feature selection technique for regularized least-squares classification, which is the classification analog to ridge regression. The method is unsupervised and gives worst-case guarantees on the generalization power of the classification function after feature selection with respect to the classification function obtained using all features. We also introduce leverage-score sampling as an unsupervised randomized feature selection method for ridge regression. We provide risk bounds for both single-set spectral sparsification and leverage-score sampling on ridge regression in the fixed design setting and show that the risk in the sampled space is comparable to the risk in the full-feature space. We perform experiments on synthetic and real-world data sets (a subset of the TechTC-300 data sets) to support our theory. Experimental results indicate that the proposed methods perform better than existing feature selection methods.
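
Leverage-score sampling of feature columns can be sketched in a few lines of NumPy. This is a generic illustration of the idea, not the paper's exact procedure or its guarantees:

```python
import numpy as np

def leverage_score_sample(X, r, seed=None):
    """Pick r feature columns of X with probabilities proportional to their
    leverage scores (squared column norms of the right singular vectors),
    rescaling the kept columns, a common randomized-sketching convention."""
    rng = np.random.default_rng(seed)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # rows of Vt index features
    scores = (Vt ** 2).sum(axis=0)                     # leverage score per feature
    probs = scores / scores.sum()
    idx = rng.choice(X.shape[1], size=r, replace=True, p=probs)
    X_sampled = X[:, idx] / np.sqrt(r * probs[idx])    # unbiased rescaling
    return idx, X_sampled

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))            # 50 samples, 200 features
idx, X_s = leverage_score_sample(X, r=80, seed=1)
```

Sampling proportional to leverage keeps the columns that matter most for the spectral structure of `X`, which is why risk in the sampled space can stay comparable to risk in the full-feature space.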

  8. History and evaluation of national-scale geochemical data sets for the United States

    USGS Publications Warehouse

    Smith, David B.; Smith, Steven M.; Horton, John D.

    2013-01-01

    Six national-scale, or near national-scale, geochemical data sets for soils or stream sediments exist for the United States. The earliest of these, here termed the ‘Shacklette’ data set, was generated by a U.S. Geological Survey (USGS) project conducted from 1961 to 1975. This project used soil collected from a depth of about 20 cm as the sampling medium at 1323 sites throughout the conterminous U.S. The National Uranium Resource Evaluation Hydrogeochemical and Stream Sediment Reconnaissance (NURE-HSSR) Program of the U.S. Department of Energy was conducted from 1975 to 1984 and collected either stream sediments, lake sediments, or soils at more than 378,000 sites in both the conterminous U.S. and Alaska. The sampled area represented about 65% of the nation. The Natural Resources Conservation Service (NRCS), from 1978 to 1982, collected samples from multiple soil horizons at sites within the major crop-growing regions of the conterminous U.S. This data set contains analyses of more than 3000 samples. The National Geochemical Survey, a USGS project conducted from 1997 to 2009, used a subset of the NURE-HSSR archival samples as its starting point and then collected primarily stream sediments, with occasional soils, in the parts of the U.S. not covered by the NURE-HSSR Program. This data set contains chemical analyses for more than 70,000 samples. The USGS, in collaboration with the Mexican Geological Survey and the Geological Survey of Canada, initiated soil sampling for the North American Soil Geochemical Landscapes Project in 2007. Sampling of three horizons or depths at more than 4800 sites in the U.S. was completed in 2010, and chemical analyses are currently ongoing. The NRCS initiated a project in the 1990s to analyze the various soil horizons from selected pedons throughout the U.S. This data set currently contains data from more than 1400 sites. 
This paper (1) discusses each data set in terms of its purpose, sample collection protocols, and analytical methods; and (2) evaluates each data set in terms of its appropriateness as a national-scale geochemical database and its usefulness for national-scale geochemical mapping.

  9. Targeted quantitative analysis of Streptococcus pyogenes virulence factors by multiple reaction monitoring.

    PubMed

    Lange, Vinzenz; Malmström, Johan A; Didion, John; King, Nichole L; Johansson, Björn P; Schäfer, Juliane; Rameseder, Jonathan; Wong, Chee-Hong; Deutsch, Eric W; Brusniak, Mi-Youn; Bühlmann, Peter; Björck, Lars; Domon, Bruno; Aebersold, Ruedi

    2008-08-01

    In many studies, particularly in the field of systems biology, it is essential that identical protein sets are precisely quantified in multiple samples such as those representing differentially perturbed cell states. The high degree of reproducibility required for such experiments has not been achieved by classical mass spectrometry-based proteomics methods. In this study we describe the implementation of a targeted quantitative approach by which predetermined protein sets are first identified and subsequently quantified at high sensitivity reliably in multiple samples. This approach consists of three steps. First, the proteome is extensively mapped out by multidimensional fractionation and tandem mass spectrometry, and the data generated are assembled in the PeptideAtlas database. Second, based on this proteome map, peptides uniquely identifying the proteins of interest, proteotypic peptides, are selected, and multiple reaction monitoring (MRM) transitions are established and validated by MS2 spectrum acquisition. This process of peptide selection, transition selection, and validation is supported by a suite of software tools, TIQAM (Targeted Identification for Quantitative Analysis by MRM), described in this study. Third, the selected target protein set is quantified in multiple samples by MRM. Applying this approach we were able to reliably quantify low abundance virulence factors from cultures of the human pathogen Streptococcus pyogenes exposed to increasing amounts of plasma. The resulting quantitative protein patterns enabled us to clearly define the subset of virulence proteins that is regulated upon plasma exposure.

  10. A machine learning approach for classification of anatomical coverage in CT

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoyong; Lo, Pechin; Ramakrishna, Bharath; Goldin, Johnathan; Brown, Matthew

    2016-03-01

Automatic classification of anatomical coverage of medical images is critical for big data mining and as a pre-processing step to automatically trigger specific computer aided diagnosis systems. The traditional way to identify scans through DICOM headers has various limitations due to manual entry of series descriptions and non-standardized naming conventions. In this study, we present a machine learning approach where multiple binary classifiers were used to classify different anatomical coverages of CT scans. A one-vs-rest strategy was applied. For a given training set, a template scan was selected from the positive samples and all other scans were registered to it. Each registered scan was then evenly split into k × k × k non-overlapping blocks and for each block the mean intensity was computed. This resulted in a 1 × k³ feature vector for each scan. The feature vectors were then used to train an SVM-based classifier. In this feasibility study, four classifiers were built to identify anatomic coverages of brain, chest, abdomen-pelvis, and chest-abdomen-pelvis CT scans. Each classifier was trained and tested using a set of 300 scans from different subjects, composed of 150 positive samples and 150 negative samples. Area under the ROC curve (AUC) of the testing set was measured to evaluate the performance in a two-fold cross-validation setting. Our results showed good classification performance with an average AUC of 0.96.
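
The block-mean feature extraction described above is simple to sketch; the toy volume below is a stand-in for a registered CT scan, not real imaging data.

```python
import numpy as np

def block_mean_features(vol, k):
    """Split a 3-D volume into k x k x k non-overlapping blocks and return
    the mean intensity of each block as a flat k^3 feature vector."""
    d = [(s // k) * k for s in vol.shape]     # crop each axis to a multiple of k
    v = vol[:d[0], :d[1], :d[2]]
    b0, b1, b2 = d[0] // k, d[1] // k, d[2] // k
    blocks = v.reshape(k, b0, k, b1, k, b2)   # axes 1, 3, 5 run within a block
    return blocks.mean(axis=(1, 3, 5)).ravel()

vol = np.arange(8 * 8 * 8, dtype=float).reshape(8, 8, 8)
feat = block_mean_features(vol, k=2)          # 2 x 2 x 2 = 8 block means
```

The resulting vector is what would be fed to the SVM; because the blocks tile the cropped volume exactly, the mean of the features equals the mean intensity of the volume.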

  11. An active learning representative subset selection method using net analyte signal.

    PubMed

    He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi

    2018-05-05

To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, it is generally not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference in the Euclidean norm of the net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vectors, and scalar values. Next, the NAS vectors of the candidate samples are computed by multiplying the projection matrix by the spectra of the samples. The scalar value of the NAS is obtained by computing its norm. The distance between the candidate set and the selected set is computed, and the samples with the largest distances are added to the selected set sequentially. Last, the concentration of the analyte is measured so that the sample can be used as a calibration sample. A validation test shows that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced. Copyright © 2018 Elsevier B.V. All rights reserved.
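
A simplified sketch of this selection logic follows. It assumes the NAS is reduced to one scalar norm per sample and that selection greedily maximizes the minimum distance to the already-selected set; the pure-spectra construction of the projector is illustrative, not the paper's exact concentration-based computation.

```python
import numpy as np

def nas_projection(pure_spectra, analyte=0):
    """Projector onto the space orthogonal to the interferent spectra;
    what survives the projection is the net analyte signal (NAS)."""
    others = np.delete(pure_spectra, analyte, axis=1)
    return np.eye(pure_spectra.shape[0]) - others @ np.linalg.pinv(others)

def select_samples(spectra, P, n_select):
    """Greedy max-min selection on scalar NAS values: repeatedly add the
    candidate whose NAS norm is farthest from the selected set."""
    nas = np.linalg.norm(P @ spectra.T, axis=0)       # scalar NAS per sample
    selected = [int(np.argmax(nas))]
    while len(selected) < n_select:
        dist = np.min(np.abs(nas[:, None] - nas[selected][None, :]), axis=1)
        dist[selected] = -1.0                         # never re-pick a sample
        selected.append(int(np.argmax(dist)))
    return selected

rng = np.random.default_rng(3)
pure = rng.random((60, 3))                  # 60 wavelengths, 3 species
conc = rng.random((40, 3))                  # 40 candidate mixtures
spectra = conc @ pure.T + 0.01 * rng.standard_normal((40, 60))
P = nas_projection(pure, analyte=0)
chosen = select_samples(spectra, P, n_select=10)
```

The payoff of this ordering is that reference concentrations need only be measured for the samples actually added to the calibration set.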

  12. An active learning representative subset selection method using net analyte signal

    NASA Astrophysics Data System (ADS)

    He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi

    2018-05-01

To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, it is generally not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference in the Euclidean norm of the net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vectors, and scalar values. Next, the NAS vectors of the candidate samples are computed by multiplying the projection matrix by the spectra of the samples. The scalar value of the NAS is obtained by computing its norm. The distance between the candidate set and the selected set is computed, and the samples with the largest distances are added to the selected set sequentially. Last, the concentration of the analyte is measured so that the sample can be used as a calibration sample. A validation test shows that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced.

  13. Exposure to potentially toxic hydrocarbons and halocarbons released from the dialyzer and tubing set during hemodialysis.

    PubMed

    Lee, Hyun Ji Julie; Meinardi, Simone; Pahl, Madeleine V; Vaziri, Nostratola D; Blake, Donald R

    2012-10-01

Although much is known about the effect of chronic kidney failure and dialysis on the composition of solutes in plasma, little is known about their impact on the composition of gaseous compounds in exhaled breath. This study was designed to explore the effect of uremia and the hemodialysis (HD) procedure on the composition of exhaled breath. Breath samples were collected from 10 dialysis patients immediately before, during, and after a dialysis session. To determine the potential introduction of gaseous compounds from dialysis components, gases emitted from dialyzers, tubing set, dialysate, and water supplies were collected. Prospective cohort study. 10 HD patients and 10 age-matched healthy individuals. Predictors include the dialyzers, tubing set, dialysate, and water supplies before, during, and after dialysis. Changes in the composition of exhaled breath. A 5-column/detector gas chromatography system was used to measure hydrocarbon, halocarbon, oxygenate, and alkyl nitrate compounds. Concentrations of 14 hydrocarbons and halocarbons in patients' breath rapidly increased after the onset of the HD treatment. All 14 compounds and 5 others not found in patients' breath were emitted from the dialyzers and tubing sets. Contrary to earlier reports, exhaled breath ethane concentrations in our dialysis patients were virtually unchanged during the HD treatment. The single-center design and small sample size may limit the generalizability of the findings. The study documented the release of several potentially toxic hydrocarbons and halocarbons to patients from the dialyzer and tubing sets during the HD procedure. Because long-term exposure to these compounds may contribute to morbidity and mortality in the dialysis population, this issue should be considered in the manufacturing of the next generation of dialyzers and dialysis tubing sets. Copyright © 2012 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  14. Comparative Characterization of Crofelemer Samples Using Data Mining and Machine Learning Approaches With Analytical Stability Data Sets.

    PubMed

    Nariya, Maulik K; Kim, Jae Hyun; Xiong, Jian; Kleindl, Peter A; Hewarathna, Asha; Fisher, Adam C; Joshi, Sangeeta B; Schöneich, Christian; Forrest, M Laird; Middaugh, C Russell; Volkin, David B; Deeds, Eric J

    2017-11-01

    There is growing interest in generating physicochemical and biological analytical data sets to compare complex mixture drugs, for example, products from different manufacturers. In this work, we compare various crofelemer samples prepared from a single lot by filtration with varying molecular weight cutoffs combined with incubation for different times at different temperatures. The 2 preceding articles describe experimental data sets generated from analytical characterization of fractionated and degraded crofelemer samples. In this work, we use data mining techniques such as principal component analysis and mutual information scores to help visualize the data and determine discriminatory regions within these large data sets. The mutual information score identifies chemical signatures that differentiate crofelemer samples. These signatures, in many cases, would likely be missed by traditional data analysis tools. We also found that supervised learning classifiers robustly discriminate samples with around 99% classification accuracy, indicating that mathematical models of these physicochemical data sets are capable of identifying even subtle differences in crofelemer samples. Data mining and machine learning techniques can thus identify fingerprint-type attributes of complex mixture drugs that may be used for comparative characterization of products. Copyright © 2017 American Pharmacists Association®. All rights reserved.
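
Of the data-mining tools named above, PCA is the easiest to illustrate. The sketch below uses synthetic stand-ins for analytical profiles, not crofelemer measurements; the "discriminatory channels" are a hypothetical construct to show how group structure surfaces in the scores.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Scores of mean-centered data on its top principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(7)
# Hypothetical analytical profiles: 20 samples x 100 signal channels,
# with two groups of 10 offset in five "discriminatory" channels.
X = rng.standard_normal((20, 100))
X[10:, :5] += 5.0
scores = pca_scores(X, 2)
sep = abs(scores[:10, 0].mean() - scores[10:, 0].mean())
```

Even though only 5 of 100 channels differ, the first principal component separates the two groups, which is the kind of visualization-driven discrimination the abstract describes.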

  15. RNA sequencing of transformed lymphoblastoid cells from siblings discordant for autism spectrum disorders reveals transcriptomic and functional alterations: Evidence for sex-specific effects.

    PubMed

    Tylee, Daniel S; Espinoza, Alfred J; Hess, Jonathan L; Tahir, Muhammad A; McCoy, Sarah Y; Rim, Joshua K; Dhimal, Totadri; Cohen, Ori S; Glatt, Stephen J

    2017-03-01

Genome-wide expression studies of samples derived from individuals with autism spectrum disorder (ASD) and their unaffected siblings have been widely used to shed light on transcriptomic differences associated with this condition. Females have historically been under-represented in ASD genomic studies. Emerging evidence from studies of structural genetic variants and peripheral biomarkers suggest that sex-differences may exist in the biological correlates of ASD. Relatively few studies have explicitly examined whether sex-differences exist in the transcriptomic signature of ASD. The present study quantified genome-wide expression values by performing RNA sequencing on transformed lymphoblastoid cell lines and identified transcripts differentially expressed between same-sex, proximal-aged sibling pairs. We found that performing separate analyses for each sex improved our ability to detect ASD-related transcriptomic differences; we observed a larger number of dysregulated genes within our smaller set of female samples (n = 12 sibling pairs), as compared with the set of male samples (n = 24 sibling pairs), with small, but statistically significant overlap between the sexes. Permutation-based gene-set analyses and weighted gene co-expression network analyses also supported the idea that the transcriptomic signature of ASD may differ between males and females. We discuss our findings in the context of the relevant literature, underscoring the need for future ASD studies to explicitly account for differences between the sexes. Autism Res 2017, 10: 439-455. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  16. Nonpareil 3: Fast Estimation of Metagenomic Coverage and Sequence Diversity.

    PubMed

    Rodriguez-R, Luis M; Gunturu, Santosh; Tiedje, James M; Cole, James R; Konstantinidis, Konstantinos T

    2018-01-01

Estimations of microbial community diversity based on metagenomic data sets are affected, often to an unknown degree, by biases derived from insufficient coverage and reference database-dependent estimations of diversity. For instance, the completeness of reference databases cannot be generally estimated since it depends on the extant diversity sampled to date, which, with the exception of a few habitats such as the human gut, remains severely undersampled. Further, estimation of the degree of coverage of a microbial community by a metagenomic data set is prohibitively time-consuming for large data sets, and coverage values may not be directly comparable between data sets obtained with different sequencing technologies. Here, we extend Nonpareil, a database-independent tool for the estimation of coverage in metagenomic data sets, to a high-performance computing implementation that scales up to hundreds of cores and includes, in addition, a k-mer-based estimation as sensitive as the original alignment-based version but about three hundred times as fast. Further, we propose a metric of sequence diversity (Nd) derived directly from Nonpareil curves that correlates well with alpha diversity assessed by traditional metrics. We use this metric in different experiments demonstrating the correlation with the Shannon index estimated on 16S rRNA gene profiles and show that Nd additionally reveals seasonal patterns in marine samples that are not captured by the Shannon index and more precise rankings of the magnitude of diversity of microbial communities in different habitats. Therefore, the new version of Nonpareil, called Nonpareil 3, advances the toolbox for metagenomic analyses of microbiomes.
IMPORTANCE Estimation of the coverage provided by a metagenomic data set, i.e., what fraction of the microbial community was sampled by DNA sequencing, represents an essential first step of every culture-independent genomic study that aims to robustly assess the sequence diversity present in a sample. However, estimation of coverage remains elusive because of several technical limitations, including high computational requirements and the limitations of statistical approaches to quantifying diversity. Here we describe Nonpareil 3, a new bioinformatics algorithm that circumvents several of these limitations and thus can facilitate culture-independent studies in clinical or environmental settings, independent of the sequencing platform employed. In addition, we present a new metric of sequence diversity based on rarefied coverage and demonstrate its use in communities from diverse ecosystems.
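
The Shannon index, against which Nd is benchmarked above, is straightforward to compute from taxon abundances; a minimal sketch with made-up counts:

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p * ln p) over taxon abundances."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()     # drop zero counts (0 * ln 0 = 0)
    return float(-(p * np.log(p)).sum())

even = shannon_index([25, 25, 25, 25])        # perfectly even community
skewed = shannon_index([97, 1, 1, 1])         # one dominant taxon
```

An even community of S taxa attains the maximum H' = ln(S), while a community dominated by one taxon scores much lower; Nd is proposed as a complement that also captures patterns this index misses.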

  17. Another Look at College Student's Ratings of Course Quality: Data from Penn State Student Surveys in Three Settings

    ERIC Educational Resources Information Center

    Willits, Fern; Brennan, Mark

    2017-01-01

    This study assessed the relationships of student attributes, course characteristics and course outcomes to college students' ratings of course quality in three types of settings. The analysis utilised data from online surveys of samples of college students conducted in 2011 and 2012 at the Pennsylvania State University. Included in the analysis…

  18. Goal Setting, Decision-Making Skills and Academic Performance of Undergraduate Distance Learners: Implications for Retention and Support Services

    ERIC Educational Resources Information Center

    Tanglang, Nebath; Ibrahim, Aminu Kazeem

    2015-01-01

The study adopted an ex-post facto research design. A randomization sampling technique was used to select 346 undergraduate distance learners, who were grouped into four categories: high and low goal-setter learners, and high and low decision-making-skills learners. The instruments for data collection were Undergraduate Academic Goal Setting Scale…

  19. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    NASA Astrophysics Data System (ADS)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand, depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design was applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters, topographic wetness index and potential incoming solar radiation, derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times, and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that either had not been considered during the first phase or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points.
The selection of sample point locations was done using ESRI software (ArcGIS) extended by Hawth's Tools and, later, its replacement, the Geospatial Modelling Environment (GME). 88% of all desired points could actually be reached in the field and were successfully sampled. Our results indicate that the sampled calibration and validation sets are representative of each other and could be successfully used as interpolation data for spatial prediction purposes. With respect to soil textural fractions, for instance, equal multivariate means and variance homogeneity were found for the two datasets, as evidenced by non-significant (P > 0.05) Hotelling T²-test (2.3 with df1 = 3, df2 = 193) and Bartlett's test statistics (6.4 with df = 6). The multivariate prediction of clay, silt and sand content using a neural network residual cokriging approach reached explained variance levels of 56%, 47% and 63%, respectively. Thus, the presented case study is a successful example of using readily available continuous information on soil-forming factors such as geology and relief as stratifying variables for designing sampling schemes in digital soil mapping projects.
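
The Hotelling T²-test used here to compare the calibration and validation sets can be sketched directly; note that the reported degrees of freedom (df1 = 3, df2 = 193) match two samples of 156 and 41 points in three textural fractions. The data below are synthetic placeholders, not the study's measurements.

```python
import numpy as np

def hotelling_t2(X, Y):
    """Two-sample Hotelling T^2 for equal multivariate means (pooled
    covariance), with the equivalent F statistic on
    (p, n1 + n2 - p - 1) degrees of freedom."""
    n1, n2, p = len(X), len(Y), X.shape[1]
    d = X.mean(axis=0) - Y.mean(axis=0)
    Sp = ((n1 - 1) * np.cov(X, rowvar=False) +
          (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(Sp, d)
    f = t2 * (n1 + n2 - p - 1) / (p * (n1 + n2 - 2))
    return float(t2), float(f)

rng = np.random.default_rng(42)
cal = rng.standard_normal((156, 3))   # stand-ins for clay/silt/sand values
val = rng.standard_normal((41, 3))
t2, f = hotelling_t2(cal, val)        # F has (3, 193) df, as in the study
```

A small F value (P > 0.05 against the F(3, 193) distribution) is what supports the claim that the two sets share the same multivariate mean.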

  20. Evaluation of Primary Immunization Coverage of Infants Under Universal Immunization Programme in an Urban Area of Bangalore City Using Cluster Sampling and Lot Quality Assurance Sampling Techniques

    PubMed Central

    K, Punith; K, Lalitha; G, Suman; BS, Pradeep; Kumar K, Jayanth

    2008-01-01

Research Question: Is the LQAS technique better than the cluster sampling technique in terms of resources required to evaluate immunization coverage in an urban area? Objective: To assess and compare lot quality assurance sampling against cluster sampling in the evaluation of primary immunization coverage. Study Design: Population-based cross-sectional study. Study Setting: Areas under Mathikere Urban Health Center. Study Subjects: Children aged 12 months to 23 months. Sample Size: 220 in cluster sampling, 76 in lot quality assurance sampling. Statistical Analysis: Percentages and proportions, chi-square test. Results: (1) Using cluster sampling, the percentages of completely immunized, partially immunized and unimmunized children were 84.09%, 14.09% and 1.82%, respectively. With lot quality assurance sampling, they were 92.11%, 6.58% and 1.31%, respectively. (2) Immunization coverage levels as evaluated by the cluster sampling technique were not statistically different from those obtained by the lot quality assurance sampling technique. Considering the time and resources required, lot quality assurance sampling was found to be the better technique for evaluating primary immunization coverage in an urban area. PMID:19876474
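
The chi-square comparison of the two coverage estimates can be reproduced approximately from the reported figures. The counts below are reconstructed from the published percentages and sample sizes, so treat them as approximate, and note that one expected cell is small, which strains the chi-square approximation.

```python
import numpy as np

def pearson_chi2(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    table = np.asarray(table, dtype=float)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

# Counts (complete, partial, unimmunized) reconstructed from the reported
# percentages: cluster sample n = 220, LQAS sample n = 76.
cluster = [185, 31, 4]
lqas = [70, 5, 1]
stat = pearson_chi2([cluster, lqas])   # compare to chi2 critical value, df = 2
```

The statistic falls below the 5.99 critical value for df = 2 at the 5% level, consistent with the abstract's conclusion that the two techniques gave statistically indistinguishable coverage estimates.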

  1. Validation and Application of a PCR Primer Set to Quantify Fungal Communities in the Soil Environment by Real-Time Quantitative PCR

    PubMed Central

    Chemidlin Prévost-Bouré, Nicolas; Christen, Richard; Dequiedt, Samuel; Mougel, Christophe; Lelièvre, Mélanie; Jolivet, Claudy; Shahbazkia, Hamid Reza; Guillou, Laure; Arrouays, Dominique; Ranjard, Lionel

    2011-01-01

Fungi constitute an important group in soil biological diversity and functioning. However, characterization and knowledge of fungal communities is hampered because few primer sets are available to quantify fungal abundance by real-time quantitative PCR (real-time Q-PCR). The aim in this study was to quantify fungal abundance in soils by incorporating, into a real-time Q-PCR using the SYBRGreen® method, a primer set already used to study the genetic structure of soil fungal communities. To satisfy the real-time Q-PCR requirements to enhance the accuracy and reproducibility of the detection technique, this study focused on the 18S rRNA gene conserved regions. These regions are little affected by length polymorphism and may provide sufficiently small targets, a crucial criterion for enhancing accuracy and reproducibility of the detection technique. An in silico analysis of 33 primer sets targeting the 18S rRNA gene was performed to select the primer set with the best potential for real-time Q-PCR: short amplicon length and good fungal specificity and coverage. The best consensus between specificity, coverage and amplicon length among the 33 sets tested was the primer set FR1/FF390. This in silico analysis of the specificity of FR1/FF390 also provided additional information to the previously published analysis of this primer set. The specificity of the primer set FR1/FF390 for Fungi was validated in vitro by cloning-sequencing the amplicons obtained from a real-time Q-PCR assay performed on five independent soil samples. This assay was also used to evaluate the sensitivity and reproducibility of the method. Finally, fungal abundance in samples from 24 soils with contrasting physico-chemical and environmental characteristics was examined and ranked to determine the importance of soil texture, organic carbon content, C:N ratio and land use in determining fungal abundance in soils. PMID:21931659

  2. Manual vs. computer-assisted sperm analysis: can CASA replace manual assessment of human semen in clinical practice?

    PubMed

    Talarczyk-Desole, Joanna; Berger, Anna; Taszarek-Hauke, Grażyna; Hauke, Jan; Pawelczyk, Leszek; Jedrzejczak, Piotr

    2017-01-01

The aim of the study was to check the quality of the computer-assisted sperm analysis (CASA) system in comparison to the reference manual method, as well as the standardization of computer-assisted semen assessment. The study was conducted between January and June 2015 at the Andrology Laboratory of the Division of Infertility and Reproductive Endocrinology, Poznań University of Medical Sciences, Poland. The study group consisted of 230 men who gave sperm samples for the first time in our center as part of an infertility investigation. The samples underwent manual and computer-assisted assessment of concentration, motility and morphology. A total of 184 samples were examined twice: manually, according to the 2010 WHO recommendations, and with CASA, using the program settings provided by the manufacturer. Additionally, 46 samples underwent two manual analyses and two computer-assisted analyses. A p-value < 0.05 was considered statistically significant. Statistically significant differences were found between all of the investigated sperm parameters, except for non-progressive motility, measured with CASA and manually. In the group of patients where all analyses with each method were performed twice on the same sample, we found no significant differences between the two assessments of the same sample, either manually or with CASA, although the standard deviation was higher in the CASA group. Our results suggest that computer-assisted sperm analysis requires further improvement for wider application in clinical practice.

  3. Using Set Covering with Item Sampling to Analyze the Infeasibility of Linear Programming Test Assembly Models

    ERIC Educational Resources Information Center

    Huitzing, Hiddo A.

    2004-01-01

    This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…
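
The set-covering primitive underlying SCIS can be illustrated with the classic greedy heuristic. This is a generic sketch, not the article's formulation; the toy instance below treats items as violated constraints and subsets as candidate relaxations that "cover" them.

```python
def greedy_set_cover(universe, subsets):
    """Classic greedy heuristic: repeatedly pick the subset covering the
    most still-uncovered items; returns indices of chosen subsets."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(subsets)),
                   key=lambda i: len(uncovered & subsets[i]))
        if not uncovered & subsets[best]:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Hypothetical toy instance with five items and four candidate subsets.
universe = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
cover = greedy_set_cover(universe, subsets)
```

Here the greedy rule first takes the largest subset and then the one closing the remaining gap, yielding a two-subset cover.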

  4. Connecting Teaching and Learning: History, Evolution, and Case Studies of Teacher Work Sample Methodology

    ERIC Educational Resources Information Center

    Rosselli, Hilda, Ed.; Girod, Mark, Ed.; Brodsky, Meredith, Ed.

    2011-01-01

    As accountability in education has become an increasingly prominent topic, teacher preparation programs are being asked to provide credible evidence that their teacher candidates can impact student learning. Teacher Work Samples, first developed 30 years ago, have emerged as an effective method of quantifying the complex set of tasks that comprise…

  5. Quantifying Ruminal Digestion of Organic Matter and Neutral Detergent Fiber Using Omasal Sampling in Cattle--A Meta-Analysis

    USDA-ARS?s Scientific Manuscript database

    A data set from 32 studies (122 diets) was used to evaluate the accuracy and precision of the omasal sampling technique by investigating the relationships between ruminal and total digestion of neutral detergent fiber (NDF), between intake and apparent and true ruminal digestion of organic matter (O...

  6. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--METALS/XRF IN DUST ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals-XRF in Dust data set contains X-ray fluorescence (XRF) analytical results for measurements of up to 27 metals in 91 dust samples over 91 households. Samples were taken by collecting dust from the indoor floor areas in the main room and in the bedroom of the primary re...

  7. Estimation of the spatial autocorrelation function: consequences of sampling dynamic populations in space and time

    Treesearch

    Patrick C. Tobin

    2004-01-01

    The estimation of spatial autocorrelation in spatially- and temporally-referenced data is fundamental to understanding an organism's population biology. I used four sets of census field data, and developed an idealized space-time dynamic system, to study the behavior of spatial autocorrelation estimates when a practical method of sampling is employed. Estimates...

  8. Holistic Evaluation of Writing Samples for Placement in Post-Secondary English Composition Courses.

    ERIC Educational Resources Information Center

    Guerrero, Barry J.; Robison, Ruth E.

    A study was conducted by the Student Development Center of the University of Hawaii at Hilo to develop a writing placement procedure in a community college setting which would be practical, reliable, and valid. The key to this procedure was an English composition placement device that could help readers rate, holistically, writing samples written…

  9. Hydrogeologic framework and sampling design for an assessment of agricultural pesticides in ground water in Pennsylvania

    USGS Publications Warehouse

    Lindsey, Bruce D.; Bickford, Tammy M.

    1999-01-01

    State agencies responsible for regulating pesticides are required by the U.S. Environmental Protection Agency to develop state management plans for specific pesticides. A key part of these management plans includes assessing the potential for contamination of ground water by pesticides throughout the state. As an example of how a statewide assessment could be implemented, a plan is presented for the Commonwealth of Pennsylvania to illustrate how a hydrogeologic framework can be used as a basis for sampling areas within a state with the highest likelihood of having elevated pesticide concentrations in ground water. The framework was created by subdividing the state into 20 areas on the basis of physiography and aquifer type. Each of these 20 hydrogeologic settings is relatively homogeneous with respect to aquifer susceptibility and pesticide use—factors that would be likely to affect pesticide concentrations in ground water. Existing data on atrazine occurrence in ground water was analyzed to determine (1) which areas of the state already have sufficient samples collected to make statistical comparisons among hydrogeologic settings, and (2) the effect of factors such as land use and aquifer characteristics on pesticide occurrence. The theoretical vulnerability and the results of the data analysis were used to rank each of the 20 hydrogeologic settings on the basis of vulnerability of ground water to contamination by pesticides. Example sampling plans are presented for nine of the hydrogeologic settings that lack sufficient data to assess vulnerability to contamination. 
Of the highest priority areas of the state, two out of four have been adequately sampled, one of the three areas of moderate to high priority has been adequately sampled, four of the nine areas of moderate to low priority have been adequately sampled, and none of the three low priority areas have been sampled. Sampling to date has shown that, even in the most vulnerable hydrogeologic settings, pesticide concentrations in ground water rarely exceed U.S. Environmental Protection Agency Drinking Water Standards or Health Advisory Levels. Analyses of samples from 1,159 private water supplies revealed only 3 sites for which samples with concentrations of pesticides exceeded drinking-water standards. In most cases, samples with elevated concentrations could be traced to point sources at pesticide loading or mixing areas. These analyses included data from some of the most vulnerable areas of the state, indicating that it is highly unlikely that pesticide concentrations in water from wells in other areas of the state would exceed the drinking-water standards unless a point source of contamination were present. Analysis of existing data showed that water from wells in areas of the state underlain by carbonate (limestone and dolomite) bedrock, which commonly have a high percentage of corn production, was much more likely to have pesticides detected. Application of pesticides to the land surface generally has not caused concentrations of the five state priority pesticides in ground water to exceed health standards; however, this study has not evaluated the potential human health effects of mixtures of pesticides or pesticide degradation products in drinking water. This study also has not determined whether concentrations in ground water are stable, increasing, or decreasing.

  10. Influence of glass-ionomer cement on the interface and setting reaction of mineral trioxide aggregate when used as a furcal repair material using laser Raman spectroscopic analysis.

    PubMed

    Nandini, Suresh; Ballal, Suma; Kandaswamy, Deivanayagam

    2007-02-01

    The prolonged setting time of mineral trioxide aggregate (MTA) is the main disadvantage of this material. This study analyzes the influence of glass-ionomer cement on the setting of MTA using laser Raman spectroscopy (LRS). MTA was placed in forty hollow glass molds. In Group I specimens, MTA was layered with glass-ionomer cement after 45 minutes. Similar procedures were done for Groups II and III at 4 hours and 3 days, respectively. No glass ionomer was added in Group IV, which served as the control. Each sample was scanned at various time intervals. At each time interval, the interface between MTA and glass-ionomer cement was also scanned (excluding Group IV). The spectral analysis showed that placement of glass-ionomer cement over MTA after 45 minutes did not affect its setting reaction, and that calcium salts may be formed at the interface of these two materials.

  11. Optical properties (bidirectional reflectance distribution function) of shot fabric.

    PubMed

    Lu, R; Koenderink, J J; Kappers, A M

    2000-11-01

    To study the optical properties of materials, one needs a complete set of the angular distribution functions of surface scattering from the materials. Here we present a convenient method for collecting a large set of bidirectional reflectance distribution function (BRDF) samples in the hemispherical scattering space. Material samples are wrapped around a right-circular cylinder and irradiated by a parallel light source, and the scattered radiance is collected by a digital camera. We tilted the cylinder around its center to collect the BRDF samples outside the plane of incidence. This method can be used with materials that have isotropic and anisotropic scattering properties. We demonstrate this method in a detailed investigation of shot fabrics. The warps and the fillings of shot fabrics are dyed different colors so that the fabric appears to change color at different viewing angles. These color-changing characteristics are found to be related to the physical and geometrical structure of shot fabric. Our study reveals that the color-changing property of shot fabrics is due mainly to an occlusion effect.

  12. Challenges in projecting clustering results across gene expression-profiling datasets.

    PubMed

    Lusa, Lara; McShane, Lisa M; Reid, James F; De Cecco, Loris; Ambrogi, Federico; Biganzoli, Elia; Gariboldi, Manuela; Pierotti, Marco A

    2007-11-21

    Gene expression microarray studies for several types of cancer have been reported to identify previously unknown subtypes of tumors. For breast cancer, a molecular classification consisting of five subtypes based on gene expression microarray data has been proposed. These subtypes have been reported to exist across several breast cancer microarray studies, and they have demonstrated some association with clinical outcome. A classification rule based on the method of centroids has been proposed for identifying the subtypes in new collections of breast cancer samples; the method is based on the similarity of the new profiles to the mean expression profile of the previously identified subtypes. Previously identified centroids of five breast cancer subtypes were used to assign 99 breast cancer samples, including a subset of 65 estrogen receptor-positive (ER+) samples, to five breast cancer subtypes based on microarray data for the samples. The effect of mean centering the genes (i.e., transforming the expression of each gene so that its mean expression is equal to 0) on subtype assignment by method of centroids was assessed. Further studies of the effect of mean centering and of class prevalence in the test set on the accuracy of method of centroids classifications of ER status were carried out using training and test sets for which ER status had been independently determined by ligand-binding assay and for which the proportion of ER+ and ER- samples were systematically varied. When all 99 samples were considered, mean centering before application of the method of centroids appeared to be helpful for correctly assigning samples to subtypes, as evidenced by the expression of genes that had previously been used as markers to identify the subtypes. However, when only the 65 ER+ samples were considered for classification, many samples appeared to be misclassified, as evidenced by an unexpected distribution of ER+ samples among the resultant subtypes. 
When genes were mean centered before classification of samples for ER status, the accuracy of the ER subgroup assignments was highly dependent on the proportion of ER+ samples in the test set; this effect of subtype prevalence was not seen when gene expression data were not mean centered. Simple corrections such as mean centering of genes aimed at microarray platform or batch effect correction can have undesirable consequences because patient population effects can easily be confused with these assay-related effects. Careful thought should be given to the comparability of the patient populations before attempting to force data comparability for purposes of assigning subtypes to independent subjects.
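
    The sensitivity of centroid-based subtype calls to test-set composition can be demonstrated in a few lines. This is a deliberately simplified sketch: two made-up "subtypes", Euclidean distance to centroids rather than the correlation-based similarity of the published classifier, and centroids assumed to come from a mean-centered training set. When a skewed test set is mean-centered per gene, samples of the over-represented subtype are pushed toward zero and split between the centroids, which is the prevalence artifact described above.

```python
import numpy as np

def assign_by_centroids(X, centroids, center_genes=False):
    """Assign each row of X (samples x genes) to its nearest centroid.

    If center_genes is True, each gene (column) is first mean-centered
    across the *test* samples -- the step whose consequences the abstract
    examines. Euclidean distance stands in for the published method's
    correlation-based similarity.
    """
    X = np.asarray(X, dtype=float)
    if center_genes:
        X = X - X.mean(axis=0)  # per-gene mean over the test set
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(d, axis=1)

rng = np.random.default_rng(0)
genes = 50
# Centroids as they would look after training-set centering: -1 vs. +1 per gene.
centroids = np.vstack([-np.ones(genes), np.ones(genes)])
# A maximally skewed test set: 30 samples, all truly of subtype 1.
skewed = rng.normal(2.0, 0.5, size=(30, genes))

calls_raw = assign_by_centroids(skewed, centroids)
calls_centered = assign_by_centroids(skewed, centroids, center_genes=True)
```

    Without centering, every sample is (correctly) called subtype 1; after test-set centering, the calls split between the two subtypes even though the samples are homogeneous.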

  13. Time to stabilization in single leg drop jump landings: an examination of calculation methods and assessment of differences in sample rate, filter settings and trial length on outcome values.

    PubMed

    Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H

    2015-01-01

    Time to stabilization (TTS) is the time it takes for an individual to return to a baseline or stable state following a jump or hop landing. A large variety of methods exists to calculate the TTS. These methods can be described based on four aspects: (1) the input signal used (vertical, anteroposterior, or mediolateral ground reaction force), (2) signal processing (smoothed by sequential averaging, a moving root-mean-square window, or fitting an unbounded third order polynomial), (3) the stable state (threshold), and (4) the definition of when the (processed) signal is considered stable. Furthermore, differences exist with regard to the sample rate, filter settings and trial length. Twenty-five healthy volunteers performed ten 'single leg drop jump landing' trials. For each trial, TTS was calculated according to 18 previously reported methods. Additionally, the effects of sample rate (1000, 500, 200 and 100 samples/s), filter settings (no filter, 40, 15 and 10 Hz), and trial length (20, 14, 10, 7, 5 and 3s) were assessed. The TTS values varied considerably across the calculation methods. The maximum effects of alterations in the processing settings, averaged over calculation methods, were 2.8% (SD 3.3%) for sample rate, 8.8% (SD 7.7%) for filter settings, and 100.5% (SD 100.9%) for trial length. Different TTS calculation methods are affected differently by sample rate, filter settings and trial length. The effects of differences in sample rate and filter settings are generally small, while trial length has a large effect on TTS values. Copyright © 2014 Elsevier B.V. All rights reserved.
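
    One family of the calculation methods above (moving root-mean-square smoothing with a threshold band around the final stable value) can be sketched as follows. The window length and threshold fraction are illustrative assumptions, not any specific published method's settings.

```python
import numpy as np

def time_to_stabilization(force, fs=1000.0, window_s=0.25, threshold_frac=0.05):
    """Estimate TTS from a ground reaction force trace.

    Smooth the signal with a moving root-mean-square window, take the
    "stable state" as the mean of the processed signal over the final
    window, and return the time after which the processed signal stays
    within a threshold band around that value.
    """
    force = np.asarray(force, dtype=float)
    n_win = max(1, int(window_s * fs))
    kernel = np.ones(n_win) / n_win
    # Edge-padded moving RMS, so the trace keeps its length without taper.
    padded = np.pad(force**2, (n_win // 2, n_win - 1 - n_win // 2), mode="edge")
    rms = np.sqrt(np.convolve(padded, kernel, mode="valid"))
    baseline = rms[-n_win:].mean()        # stable state from the trial end
    band = threshold_frac * baseline      # threshold band around baseline
    outside = np.abs(rms - baseline) > band
    if not outside.any():
        return 0.0                        # stable from the start
    return (np.flatnonzero(outside)[-1] + 1) / fs  # seconds until settled

# Synthetic landing: a decaying transient on top of body weight.
t = np.arange(0, 5, 1 / 1000.0)
f = 700.0 + 900.0 * np.exp(-3.0 * t) * np.cos(20.0 * t)
tts = time_to_stabilization(f)
```

    Because the baseline is taken from the end of the trial, shortening the trial changes the baseline and hence the TTS, consistent with the large trial-length effect reported above.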

  14. From multispectral imaging of autofluorescence to chemical and sensory images of lipid oxidation in cod caviar paste.

    PubMed

    Airado-Rodríguez, Diego; Høy, Martin; Skaret, Josefine; Wold, Jens Petter

    2014-05-01

    The potential of multispectral imaging of autofluorescence to map sensory flavour properties and fluorophore concentrations in cod caviar paste has been investigated. Cod caviar paste was used as a case product and it was stored over time, under different headspace gas composition and light exposure conditions, to obtain a relevant span in lipid oxidation and sensory properties. Samples were divided into two sets, calibration and test sets, with 16 and 7 samples, respectively. A third set of samples was prepared with induced gradients in lipid oxidation and sensory properties by light exposure of certain parts of the sample surface. Front-face fluorescence emission images were obtained for excitation wavelength 382 nm at 11 different channels ranging from 400 to 700 nm. The analysis of the obtained sets of images was divided into two parts: first, in an effort to compress and extract relevant information, multivariate curve resolution was applied on the calibration set and three spectral components and their relative concentrations in each sample were obtained. The obtained profiles were employed to estimate the concentrations of each component in the images of the heterogeneous samples, giving chemical images of the distribution of fluorescent oxidation products, protoporphyrin IX and photoprotoporphyrin. Second, regression models for sensory attributes related to lipid oxidation were constructed based on the spectra of homogeneous samples from the calibration set. These models were successfully validated with the test set. The models were then applied for pixel-wise estimation of sensory flavours in the heterogeneous images, giving rise to sensory images. As far as we know, this is the first time that sensory images of odour and flavour are obtained based on multispectral imaging. Copyright © 2014 Elsevier B.V. All rights reserved.
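
    The curve-resolution step above can be illustrated with a bare-bones multivariate curve resolution by alternating least squares (MCR-ALS) on synthetic data. This is a sketch under stated assumptions (two made-up Gaussian "emission spectra", random nonnegative concentrations, clipping to enforce nonnegativity), not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(5)

def mcr_als(D, k, n_iter=200):
    """Factor D (samples x wavelengths) as C @ S with nonnegative factors.

    Alternating least-squares updates with clipping to zero: a minimal
    MCR-ALS sketch. On low-rank nonnegative data this typically recovers
    the mixture structure to near the noise level.
    """
    S = D[:k].copy() + 1e-6          # initialize spectra from data rows
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S), 0.0, None)  # concentrations
        S = np.clip(np.linalg.pinv(C) @ D, 0.0, None)  # spectra
    return C, S

# Synthetic 11-channel profiles of two fluorophores (assumed shapes).
wavelengths = np.linspace(400, 700, 11)
s1 = np.exp(-0.5 * ((wavelengths - 500) / 40.0) ** 2)
s2 = np.exp(-0.5 * ((wavelengths - 620) / 40.0) ** 2)
true_C = rng.uniform(0, 1, size=(16, 2))       # 16 "calibration" samples
D = true_C @ np.vstack([s1, s2]) + rng.normal(0, 0.01, size=(16, 11))

C, S = mcr_als(D, 2)
recon_err = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
```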

  15. Estimating the efficacy of Alcoholics Anonymous without self-selection bias: an instrumental variables re-analysis of randomized clinical trials.

    PubMed

    Humphreys, Keith; Blodgett, Janet C; Wagner, Todd H

    2014-11-01

    Observational studies of Alcoholics Anonymous' (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study, therefore, employed an innovative statistical technique to derive a selection bias-free estimate of AA's impact. Six data sets from 5 National Institutes of Health-funded randomized trials (1 with 2 independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol-dependent individuals in one of the data sets (n = 774) were analyzed separately from the rest of the sample (n = 1,582 individuals pooled from 5 data sets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In 5 of the 6 data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = 0.38, p = 0.001) and 15-month (B = 0.42, p = 0.04) follow-up. However, in the remaining data set, in which preexisting AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. For most individuals seeking help for alcohol problems, increasing AA attendance leads to short- and long-term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high preexisting AA involvement, further increases in AA attendance may have little impact. Copyright © 2014 by the Research Society on Alcoholism.
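
    The design above (randomization as an instrument for attendance) can be sketched with two-stage least squares on simulated data. The coefficients, the unobserved "motivation" confounder, and the effect size of 0.5 are all invented for illustration; they are not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 5000
z = rng.integers(0, 2, n)                    # instrument: randomized to facilitation?
u = rng.normal(size=n)                       # unobserved confounder (e.g., motivation)
a = 2.0 * z + 1.5 * u + rng.normal(size=n)   # "treatment": AA attendance
y = 0.5 * a + 2.0 * u + rng.normal(size=n)   # outcome: true causal effect of a is 0.5

# Naive OLS of y on a is biased upward because u drives both a and y.
X = np.column_stack([np.ones(n), a])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: predict attendance from the instrument alone.
Z = np.column_stack([np.ones(n), z])
a_hat = Z @ np.linalg.lstsq(Z, a, rcond=None)[0]

# Stage 2: regress the outcome on predicted attendance.
X2 = np.column_stack([np.ones(n), a_hat])
beta_iv = np.linalg.lstsq(X2, y, rcond=None)[0]  # slope recovers ~0.5
```

    The instrument works because randomization shifts attendance but, by design, is independent of the confounder.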

  16. Understanding the relationship between proactive and reactive aggression, and cyberbullying across United States and Singapore adolescent samples.

    PubMed

    Ang, Rebecca P; Huan, Vivien S; Florell, Dan

    2014-01-01

    This study examined cyberbullying among adolescents across United States and Singapore samples. Specifically, the purpose of the investigation was to study the differential associations between proactive and reactive aggression, and cyberbullying across two cultures. A total of 425 adolescents from the United States (M age = 13 years) and a total of 332 adolescents from Singapore (M age = 14.2 years) participated in the study. Results of the moderator analyses suggested that nationality was not a moderator of the relationship between proactive aggression and cyberbullying, or of the relationship between reactive aggression and cyberbullying. As expected, findings showed proactive aggression to be positively associated with cyberbullying, after controlling for reactive aggression, across both samples. Likewise, as hypothesized, the association between reactive aggression and cyberbullying was not significant after controlling for proactive aggression in either sample. Implications of these findings were discussed: (a) Proactive aggression is a possible risk factor for both bullying and cyberbullying; (b) proactive and reactive aggression could be argued to be distinct as they have different correlates-only proactive aggression contributed to cyberbullying after controlling for reactive aggression; (c) this research extends previous work and contributes toward cross-cultural work using similar and comparable measures across different samples; and (d) prevention and intervention programs targeted at proactive aggressive adolescents could adopt a two-pronged approach by changing mindsets, and by understanding and adopting a set of rules for Internet etiquette.
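
    A moderator analysis of the kind reported above amounts to testing an interaction term in a regression. The sketch below simulates data in which the slope is the same in both groups, so the interaction coefficient is statistically indistinguishable from zero; the variable names and effect sizes are illustrative assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(9)

n = 400
nation = rng.integers(0, 2, n)           # group dummy (e.g., 0 = US, 1 = Singapore)
proactive = rng.normal(0, 1, n)          # predictor
# Same slope (0.6) in both groups: nationality does NOT moderate the effect.
cyber = 0.6 * proactive + 0.1 * nation + rng.normal(0, 1, n)

# Regression with an interaction term: [intercept, predictor, group, interaction].
X = np.column_stack([np.ones(n), proactive, nation, proactive * nation])
beta, *_ = np.linalg.lstsq(X, cyber, rcond=None)

# Classical OLS standard errors and the interaction t-statistic.
resid = cyber - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))
t_interaction = beta[3] / se[3]          # small |t| => no evidence of moderation
```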

  17. The Living Dead: Bacterial Community Structure of a Cadaver at the Onset and End of the Bloat Stage of Decomposition

    PubMed Central

    Hyde, Embriette R.; Haarmann, Daniel P.; Lynne, Aaron M.; Bucheli, Sibyl R.; Petrosino, Joseph F.

    2013-01-01

    Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather as additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition. PMID:24204941

  19. Toward a Principled Sampling Theory for Quasi-Orders

    PubMed Central

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies built on unbiased samples of randomly generated quasi-orders. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, even for item sets of up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement over existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
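
    For very small item sets, exactly uniform quasi-order samples can be drawn by brute-force rejection, which makes the object concrete; the paper's inductive algorithms exist precisely because this approach is hopeless at the 50-item scale they handle. A minimal sketch:

```python
import itertools
import numpy as np

def is_quasi_order(R):
    """Check reflexivity and transitivity of a boolean relation matrix."""
    n = len(R)
    if not all(R[i][i] for i in range(n)):
        return False
    for i, j, k in itertools.product(range(n), repeat=3):
        if R[i][j] and R[j][k] and not R[i][k]:
            return False
    return True

def random_quasi_order(n, rng):
    """Uniform random quasi-order on n items by rejection sampling.

    Draw reflexive relations uniformly and keep the first transitive one.
    Exactly uniform, but only workable for tiny n: the acceptance rate
    collapses as n grows.
    """
    while True:
        R = rng.integers(0, 2, size=(n, n)).astype(bool)
        np.fill_diagonal(R, True)    # force reflexivity
        if is_quasi_order(R):
            return R

rng = np.random.default_rng(7)
samples = [random_quasi_order(3, rng) for _ in range(50)]
```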

  1. Feasibility of reusing time-matched controls in an overlapping cohort.

    PubMed

    Delcoigne, Bénédicte; Hagenbuch, Niels; Schelin, Maria Ec; Salim, Agus; Lindström, Linda S; Bergh, Jonas; Czene, Kamila; Reilly, Marie

    2018-06-01

    The methods developed for secondary analysis of nested case-control data have been illustrated only in simplified settings in a common cohort and have not found their way into biostatistical practice. This paper demonstrates the feasibility of reusing prior nested case-control data in a realistic setting where a new outcome is available in an overlapping cohort where no new controls were gathered and where all data have been anonymised. Using basic information about the background cohort and sampling criteria, the new cases and prior data are "aligned" to identify the common underlying study base. With this study base, a Kaplan-Meier table of the prior outcome extracts the risk sets required to calculate the weights to assign to the controls to remove the sampling bias. A weighted Cox regression, implemented in standard statistical software, provides unbiased hazard ratios. Using the method to compare cases of contralateral breast cancer to available controls from a prior study of metastases, we identified a multifocal tumor as a risk factor that has not been reported previously. We examine the sensitivity of the method to an imperfect weighting scheme and discuss its merits and pitfalls to provide guidance for its use in medical research studies.
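
    The weight assigned to a reused control is typically the inverse of its probability of ever having been sampled into the prior nested case-control study. Under one standard scheme (assumed here for illustration), a subject at risk at case event times with risk-set sizes n_j, with m controls drawn per case, was ever sampled with probability p = 1 - prod_j (1 - m / (n_j - 1)); the Kaplan-Meier-style table of the prior outcome supplies the n_j. A minimal sketch:

```python
def inclusion_weight(risk_set_sizes, m):
    """Inverse inclusion-probability weight for a reused control.

    risk_set_sizes: sizes n_j of the risk sets at the case event times
    where this subject was at risk; m: controls sampled per case.
    """
    p_never = 1.0
    for n_j in risk_set_sizes:
        p_never *= 1.0 - m / (n_j - 1)   # not chosen at this event time
    return 1.0 / (1.0 - p_never)         # weight = 1 / P(ever sampled)

# A control at risk at one event time (99 eligible controls, 1 drawn)
# versus a control at risk at two event times.
w_one = inclusion_weight([100], 1)       # p = 1/99, weight = 99
w_two = inclusion_weight([100, 50], 1)   # weight drops as sampling chances accumulate
```

    These weights then enter a weighted Cox regression, as in the abstract.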

  2. Visual Word Recognition Across the Adult Lifespan

    PubMed Central

    Cohen-Shikora, Emily R.; Balota, David A.

    2016-01-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629

  3. The interpoint distance distribution as a descriptor of point patterns, with an application to spatial disease clustering.

    PubMed

    Bonetti, Marco; Pagano, Marcello

    2005-03-15

    The topic of this paper is the distribution of the distance between two points distributed independently in space. We illustrate the use of this interpoint distance distribution to describe the characteristics of a set of points within some fixed region. The properties of its sample version, and thus the inference about this function, are discussed both in the discrete and in the continuous setting. We illustrate its use in the detection of spatial clustering by application to a well-known leukaemia data set, and report on the results of a simulation experiment designed to study the power characteristics of the methods within that study region and in an artificial homogeneous setting. Copyright (c) 2004 John Wiley & Sons, Ltd.
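
    The sample version of the interpoint distance distribution is simply the empirical distribution of all pairwise distances; clustering shows up as excess mass at short distances. A minimal sketch on simulated point patterns (the uniform-versus-cluster comparison is an illustrative toy, not the leukaemia application):

```python
import numpy as np

def interpoint_distances(points):
    """All pairwise Euclidean distances between distinct points."""
    points = np.asarray(points, dtype=float)
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(len(points), k=1)   # each unordered pair once
    return d[iu]

def ecdf(d, grid):
    """Empirical CDF of the distances, evaluated on a grid of r values."""
    return np.array([(d <= r).mean() for r in grid])

rng = np.random.default_rng(1)
# Uniform points in the unit square vs. the same count packed into one corner.
uniform = rng.uniform(0, 1, size=(200, 2))
cluster = rng.uniform(0, 0.2, size=(200, 2))

grid = np.linspace(0, 0.3, 7)
F_uniform = ecdf(interpoint_distances(uniform), grid)
F_cluster = ecdf(interpoint_distances(cluster), grid)  # much more short-distance mass
```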

  4. Diagnosing intramammary infections: evaluation of definitions based on a single milk sample.

    PubMed

    Dohoo, I R; Smith, J; Andersen, S; Kelton, D F; Godden, S

    2011-01-01

    Criteria for diagnosing intramammary infections (IMI) have been debated for many years. Factors that may be considered in making a diagnosis include the organism of interest being found on culture, the number of colonies isolated, whether or not the organism was recovered in pure or mixed culture, and whether or not concurrent evidence of inflammation existed (often measured by somatic cell count). However, research using these criteria has been hampered by the lack of a "gold standard" test (i.e., a perfect test against which the criteria can be evaluated) and the need for very large data sets of culture results to have sufficient numbers of quarters with infections with a variety of organisms. This manuscript used 2 large data sets of culture results to evaluate several definitions (sets of criteria) for classifying a quarter as having, or not having, an IMI by comparing the results from a single culture to a gold standard diagnosis based on a set of 3 milk samples. The first consisted of 38,376 milk samples from which 25,886 triplicate sets of milk samples taken 1 wk apart were extracted. The second consisted of 784 quarters that were classified as infected or not based on a set of 3 milk samples collected at 2-d intervals. From these quarters, a total of 3,136 additional samples were evaluated. A total of 12 definitions (named A to L) based on combinations of the number of colonies isolated, whether or not the organism was recovered in pure or mixed culture, and the somatic cell count were evaluated for each organism (or group of organisms) with sufficient data. The sensitivity (ability of a definition to detect IMI) and the specificity (Sp; ability of a definition to correctly classify noninfected quarters) were both computed. For all species, except Staphylococcus aureus, the sensitivity of all definitions was <90% (and in many cases <50%). 
Consequently, if identifying as many existing infections as possible is important, then the criteria for considering a quarter positive should be a single colony (from a 0.01-mL milk sample) isolated (definition A). With the exception of "any organism" and coagulase-negative staphylococci, all Sp estimates were over 94% in the daily data and over 97% in the weekly data, suggesting that for most species, definition A may be acceptable. For coagulase-negative staphylococci, definitions B (2 colonies from a 0.01-mL milk sample) raised the Sp to 92 and 95% in the daily and weekly data, respectively. For "any organism," using definition B raised the Sp to 88 and 93% in the 2 data sets, respectively. The final choice of definition will depend on the objectives of study or control program for which the sample was collected. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
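
    The sensitivity and specificity calculations underlying the comparison above reduce to cross-tabulating each definition's single-sample call against the gold-standard status from the triplicate samples. A minimal sketch (the counts below are invented for illustration, not the study's data):

```python
import numpy as np

def se_sp(single_sample_positive, gold_standard_positive):
    """Sensitivity and specificity of a single-sample IMI definition
    against a gold standard based on a set of three milk samples."""
    test = np.asarray(single_sample_positive, dtype=bool)
    gold = np.asarray(gold_standard_positive, dtype=bool)
    tp = np.sum(test & gold)
    fn = np.sum(~test & gold)
    tn = np.sum(~test & ~gold)
    fp = np.sum(test & ~gold)
    sensitivity = tp / (tp + fn)   # infected quarters correctly called positive
    specificity = tn / (tn + fp)   # uninfected quarters correctly called negative
    return sensitivity, specificity

# Invented toy data: 10 gold-standard-infected and 10 uninfected quarters.
gold = [True] * 10 + [False] * 10
calls = [True] * 8 + [False] * 2 + [False] * 9 + [True]  # 8 TP, 2 FN, 9 TN, 1 FP
se, sp = se_sp(calls, gold)
```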

  5. 16 CFR 1616.4 - Sampling and acceptance procedures.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... specimen to one of the three samples. Test each set of three samples and accept or reject each seam design... all the test criteria of § 1616.3(b), accept the seam design. If one or more of the three additional.... Test the sets of three samples and accept or reject the type of trim and design on the same basis as...

  6. 16 CFR 1616.4 - Sampling and acceptance procedures.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... specimen to one of the three samples. Test each set of three samples and accept or reject each seam design... all the test criteria of § 1616.3(b), accept the seam design. If one or more of the three additional.... Test the sets of three samples and accept or reject the type of trim and design on the same basis as...

  7. 16 CFR 1616.4 - Sampling and acceptance procedures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... specimen to one of the three samples. Test each set of three samples and accept or reject each seam design... all the test criteria of § 1616.3(b), accept the seam design. If one or more of the three additional.... Test the sets of three samples and accept or reject the type of trim and design on the same basis as...

  8. Identification of potential serum peptide biomarkers of biliary tract cancer using MALDI MS profiling

    PubMed Central

    2014-01-01

    Background The aim of this discovery study was the identification of peptide serum biomarkers for detecting biliary tract cancer (BTC) using samples from healthy volunteers and benign cases of biliary disease as control groups. This work was based on the hypothesis that cancer-specific exopeptidases exist and that their activities in serum can generate cancer-predictive peptide fragments from circulating proteins during coagulation. Methods This case control study used a semi-automated platform incorporating polypeptide extraction linked to matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS) to profile 92 patient serum samples. Predictive models were generated to test a validation serum set from BTC cases and healthy volunteers. Results Several peptide peaks were found that could significantly differentiate BTC patients from healthy controls and benign biliary disease. A predictive model resulted in a sensitivity of 100% and a specificity of 93.8% in detecting BTC in the validation set, whilst another model gave a sensitivity of 79.5% and a specificity of 83.9% in discriminating BTC from benign biliary disease samples in the training set. Discriminatory peaks were identified by tandem MS as fragments of abundant clotting proteins. Conclusions Serum MALDI MS peptide signatures can accurately discriminate patients with BTC from healthy volunteers. PMID:24495412

  9. Prediction of physical-chemical properties of crude oils by 1H NMR analysis of neat samples and chemometrics.

    PubMed

    Masili, Alice; Puligheddu, Sonia; Sassu, Lorenzo; Scano, Paola; Lai, Adolfo

    2012-11-01

    In this work, we report the feasibility study to predict the properties of neat crude oil samples from 300-MHz NMR spectral data and partial least squares (PLS) regression models. The study was carried out on 64 crude oil samples obtained from 28 different extraction fields and aims at developing a rapid and reliable method for characterizing the crude oil in a fast and cost-effective way. The main properties generally employed for evaluating crudes' quality and behavior during refining were measured and used for calibration and testing of the PLS models. Among these, the UOP characterization factor K (K(UOP)) used to classify crude oils in terms of composition, density (D), total acidity number (TAN), sulfur content (S), and true boiling point (TBP) distillation yields were investigated. Test set validation with an independent set of data was used to evaluate model performance on the basis of standard error of prediction (SEP) statistics. Model performances are particularly good for K(UOP) factor, TAN, and TBP distillation yields, whose standard error of calibration and SEP values match the analytical method precision, while the results obtained for D and S are less accurate but still useful for predictions. Furthermore, a strategy that reduces spectral data preprocessing and sample preparation procedures has been adopted. The models developed with such an ample crude oil set demonstrate that this methodology can be applied with success to modern refining process requirements. Copyright © 2012 John Wiley & Sons, Ltd.

  10. Correlation between human maternal-fetal placental transfer and molecular weight of PCB and dioxin congeners/isomers.

    PubMed

    Mori, Chisato; Nakamura, Noriko; Todaka, Emiko; Fujisaki, Takeyoshi; Matsuno, Yoshiharu; Nakaoka, Hiroko; Hanazato, Masamichi

    2014-11-01

    Establishing methods for the assessment of fetal exposure to chemicals is important for the prevention or prediction of the child's future disease risk. In the present study, we aimed to determine the influence of molecular weight on the likelihood of chemical transfer from mother to fetus via the placenta. The correlation between molecular weight and placental transfer rates of congeners/isomers of polychlorinated biphenyls (PCBs) and dioxins was examined. Twenty-nine sample sets of maternal blood, umbilical cord, and umbilical cord blood were used to measure PCB concentration, and 41 sample sets were used to analyze dioxins. Placental transfer rates were calculated using the concentrations of PCBs, dioxins, and their congeners/isomers within these sample sets. Transfer rate correlated negatively with molecular weight for PCB congeners, normalized using wet and lipid weights. The transfer rates of PCB or dioxin congeners differed from those of total PCBs or dioxins. The transfer rate for dioxin congeners did not always correlate significantly with molecular weight, perhaps because of the small sample size or other factors. Further improvement of the analytical methods for dioxin congeners is required. The findings of the present study suggested that PCBs, dioxins, or their congeners with lower molecular weights are more likely to be transferred from mother to fetus via the placenta. Consideration of chemical molecular weight and transfer rate could therefore contribute to the assessment of fetal exposure. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Concordant integrative gene set enrichment analysis of multiple large-scale two-sample expression data sets.

    PubMed

    Lai, Yinglei; Zhang, Fanni; Nayak, Tapan K; Modarres, Reza; Lee, Norman H; McCaffrey, Timothy A

    2014-01-01

    Gene set enrichment analysis (GSEA) is an important approach to the analysis of coordinate expression changes at a pathway level. Although many statistical and computational methods have been proposed for GSEA, the issue of a concordant integrative GSEA of multiple expression data sets has not been well addressed. Among different related data sets collected for the same or similar study purposes, it is important to identify pathways or gene sets with concordant enrichment. We categorize the underlying true states of differential expression into three representative categories: no change, positive change and negative change. Due to data noise, what we observe from experiments may not indicate the underlying truth. Although these categories are not observed in practice, they can be considered in a mixture model framework. Then, we define the mathematical concept of concordant gene set enrichment and calculate its related probability based on a three-component multivariate normal mixture model. The related false discovery rate can be calculated and used to rank different gene sets. We used three published lung cancer microarray gene expression data sets to illustrate our proposed method. One analysis based on the first two data sets was conducted to compare our result with a previous published result based on a GSEA conducted separately for each individual data set. This comparison illustrates the advantage of our proposed concordant integrative gene set enrichment analysis. Then, with a relatively new and larger pathway collection, we used our method to conduct an integrative analysis of the first two data sets and also all three data sets. Both results showed that many gene sets could be identified with low false discovery rates. A consistency between both results was also observed. A further exploration based on the KEGG cancer pathway collection showed that a majority of these pathways could be identified by our proposed method. 
This study illustrates that we can improve detection power and discovery consistency through a concordant integrative analysis of multiple large-scale two-sample gene expression data sets.
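A minimal sketch of the mixture idea, under simplifying assumptions (synthetic two-study statistics, scikit-learn's generic Gaussian mixture in place of the authors' model, and a sign-of-means heuristic to label which components are concordant):

```python
# Sketch: per-gene-set statistics from two studies with three latent states
# (no change, positive change, negative change). A three-component bivariate
# Gaussian mixture yields a posterior probability of concordant enrichment;
# 1 - posterior then serves as a local-FDR-like ranking score.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
null_ = rng.normal(0, 1, size=(300, 2))    # no change in either study
pos = rng.normal(3, 1, size=(100, 2))      # concordant positive change
neg = rng.normal(-3, 1, size=(100, 2))     # concordant negative change
Z = np.vstack([null_, pos, neg])           # observed two-study statistics

gm = GaussianMixture(n_components=3, random_state=0).fit(Z)
post = gm.predict_proba(Z)                 # posterior over the 3 components

# Components with clearly nonzero means of the same sign in both studies
# represent concordant enrichment.
means = gm.means_
concordant = [k for k in range(3)
              if np.all(np.abs(means[k]) > 1) and means[k, 0] * means[k, 1] > 0]
p_concordant = post[:, concordant].sum(axis=1)
score = 1 - p_concordant                   # rank gene sets by this score
print("top-ranked sets:", np.argsort(score)[:5])
```

The paper works with gene-set enrichment statistics and a formally defined false discovery rate; this sketch only illustrates how a fitted mixture turns two-study statistics into a concordance ranking.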

  12. Surface sampling techniques for 3D object inspection

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong S.; Gerhardt, Lester A.

    1995-03-01

    While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy. One uses triangle patches while the other uses rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform point sets and non-uniform point sets, the latter first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that the initial point sets, when preprocessed by adaptive sampling using triangle patches, are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced from the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches. The adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
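The recursive-subdivision principle can be illustrated in one dimension (a simplified analogue, not the paper's 3D triangle-patch algorithm): split an interval wherever the surface profile deviates from a straight chord, so samples concentrate near sharp features such as edges:

```python
# 1-D analogue of recursive adaptive subdivision: keep only the endpoints
# where the profile is nearly flat; split where it bends, so samples
# accumulate near edges and corners, as in the paper's 3D results.

def adaptive_sample(f, a, b, tol=0.01, depth=8):
    """Return sample points on [a, b], denser where f deviates from a chord."""
    mid = (a + b) / 2
    chord_mid = (f(a) + f(b)) / 2            # planar (chord) prediction
    if depth == 0 or abs(f(mid) - chord_mid) < tol:
        return [a, b]                        # flat enough: endpoints suffice
    left = adaptive_sample(f, a, mid, tol, depth - 1)
    right = adaptive_sample(f, mid, b, tol, depth - 1)
    return left + right[1:]                  # merge, dropping shared midpoint

# A profile with a sharp "edge" at x = 0.5 attracts most of the samples.
profile = lambda x: 0.0 if x < 0.5 else 1.0
pts = adaptive_sample(profile, 0.0, 1.0)
near_edge = sum(1 for p in pts if 0.4 <= p <= 0.6)
print(f"{len(pts)} samples, {near_edge} near the edge")
```

The 3D versions replace the chord test with a flatness test on triangle or rectangle patches, but the recursion has the same shape.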

  13. Near infrared spectra are more sensitive to land use changes than physical, chemical and biological soil properties

    NASA Astrophysics Data System (ADS)

    Guerrero, C.; Zornoza, R.; Mataix-Solera, J.; Mataix-Beneyto, J.; Scow, K.

    2009-04-01

    We studied the sensitivity of the near infrared (NIR) spectra of soils to the changes caused by land use, and compared it with the sensitivity of different sets of physical, chemical and biological soil properties. For this purpose, we selected three land uses: forest, almond tree orchards, and orchards abandoned 10 to 15 years prior to sampling. Sampling was carried out in four different locations in the province of Alicante (SE Spain). We used discriminant analysis (DA) with different sets of soil properties. The different sets tested in this study using DA were: (1) physical and chemical properties (organic carbon, total nitrogen, available phosphorus, pH, electrical conductivity, cation exchange capacity, aggregate stability, water holding capacity, and available Ca, Mg, K and Na), (2) biochemical properties (microbial biomass carbon, basal respiration and urease, phosphatase and β-glucosidase activities), (3) phospholipid fatty acids (PLFAs), (4) physical, chemical and biochemical properties (all properties of the previous sets), and (5) the NIR spectra of soils (scores of the principal components). In general, all sets of properties were sensitive to land use. This was observed in the DAs by the more or less clear separation of samples into groups defined by land use (irrespective of site). The worst results were obtained using soil physical and chemical properties. The combination of physical, chemical and biological properties enhanced the separation of samples into groups, indicating higher sensitivity. It is accepted that combining properties of different natures is more effective for evaluating soil quality. The microbial community structure (PLFAs) was highly sensitive to land use, correctly grouping 100% of the samples according to land use. The NIR spectra were also sensitive to land use. The scores of the first 5 components, which explained 99.97% of the variance, correctly grouped 85% of the soil samples by land use, but could not group all of the samples correctly. Surprisingly, when the small amount of variance present in components 5 to 40 was also used, 100% of the samples were grouped by land use, as was observed with PLFAs. However, PLFA analysis is expensive and time-consuming (some weeks), whereas only a few minutes are needed to obtain the NIR spectra. Additionally, no chemicals are needed, reducing costs. The NIR spectrum of a soil contains relevant information about physical, chemical and biochemical properties. The NIR spectrum can be considered an integrated view of soil quality and, as a consequence, offers an integrated view of perturbations. Thus, NIR spectroscopy could be used as a tool for monitoring soil quality over large areas. Acknowledgements: The authors thank "Bancaja-UMH" for the financial support of the project "NIRPRO".
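The "DA on principal-component scores of spectra" step can be sketched as follows, with synthetic spectra standing in for the NIR data and three hypothetical land-use classes:

```python
# Sketch: compress spectra with PCA, then run a linear discriminant analysis
# on the PC scores to group samples by land use, mirroring the paper's
# workflow (synthetic data; class shifts are illustrative assumptions).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_per_use, n_bands = 20, 150
# Three hypothetical land uses shift the baseline spectrum slightly.
spectra = np.vstack([rng.normal(loc=shift, scale=1.0, size=(n_per_use, n_bands))
                     for shift in (0.0, 0.5, 1.0)])
land_use = np.repeat(["forest", "orchard", "abandoned"], n_per_use)

scores = PCA(n_components=5).fit_transform(spectra)   # first 5 PC scores
da = LinearDiscriminantAnalysis().fit(scores, land_use)
grouped_correctly = (da.predict(scores) == land_use).mean()
print(f"correctly grouped: {grouped_correctly:.0%}")
```

The paper's observation that components beyond the first few still carry discriminative information corresponds here to raising `n_components` before the DA step.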

  14. Hospital survey on patient safety culture: psychometric analysis on a Scottish sample.

    PubMed

    Sarac, Cakil; Flin, Rhona; Mearns, Kathryn; Jackson, Jeanette

    2011-10-01

    To investigate the psychometric properties of the Hospital Survey on Patient Safety Culture on a Scottish NHS data set. The data were collected from 1969 clinical staff (estimated 22% response rate) from one acute hospital from each of seven Scottish Health boards. Using a split-half validation technique, the data were randomly split; an exploratory factor analysis was conducted on the calibration data set, and confirmatory factor analyses were conducted on the validation data set to investigate and check the original US model fit in a Scottish sample. Following the split-half validation technique, exploratory factor analysis results showed a 10-factor optimal measurement model. The confirmatory factor analyses were then performed to compare the model fit of two competing models (10-factor alternative model vs 12-factor original model). A Satorra-Bentler scaled χ² difference test demonstrated that the original 12-factor model performed significantly better in a Scottish sample. Furthermore, reliability analyses of each component yielded satisfactory results. The mean scores on the climate dimensions in the Scottish sample were comparable with those found in other European countries. This study provided evidence that the original 12-factor structure of the Hospital Survey on Patient Safety Culture scale has been replicated in this Scottish sample. Therefore, no modifications are required to the original 12-factor model, which is suggested for use, since it would allow researchers the possibility of cross-national comparisons.

  15. Ecological tolerances of Miocene larger benthic foraminifera from Indonesia

    NASA Astrophysics Data System (ADS)

    Novak, Vibor; Renema, Willem

    2018-01-01

    To provide a comprehensive palaeoenvironmental reconstruction based on larger benthic foraminifera (LBF), a quantitative analysis of their assemblage composition is needed. Besides microfacies analysis, which includes environmental preferences of foraminiferal taxa, statistical analyses should also be employed. Therefore, detrended correspondence analysis and cluster analysis were performed on relative abundance data of identified LBF assemblages deposited in mixed carbonate-siliciclastic (MCS) systems and blue-water (BW) settings. Studied MCS system localities include ten sections from the central part of the Kutai Basin in East Kalimantan, ranging from late Burdigalian to Serravallian age. The BW samples were collected from eleven sections of the Bulu Formation on Central Java, dated as Serravallian. Results from detrended correspondence analysis reveal significant differences between these two environmental settings. Cluster analysis produced five clusters of samples: clusters 1 and 2 comprise dominantly MCS samples, clusters 3 and 4 are dominated by BW samples, and cluster 5 shows a mixed composition with both MCS and BW samples. The results of the cluster analysis were then subjected to indicator species analysis, which distinguished three groups among LBF taxa: typical assemblage indicators, regularly occurring taxa, and rare taxa. By interpreting the results of detrended correspondence analysis, cluster analysis and indicator species analysis, along with environmental preferences of identified LBF taxa, a palaeoenvironmental model is proposed for the distribution of LBF in Miocene MCS systems and adjacent BW settings of Indonesia.

  16. Sample holder for X-ray diffractometry

    DOEpatents

    Hesch, Victor L.

    1992-01-01

    A sample holder for use with X-ray diffractometers with the capability to rotate the sample, as well as to adjust the position of the sample in the x, y, and z directions. Adjustment in the x direction is accomplished through loosening set screws, moving a platform, and retightening the set screws. Motion translators are used for adjustment in the y and z directions. An electric motor rotates the sample, and receives power from the diffractometer.

  17. Breakfast Clubs: Starting the Day in a Positive Way

    PubMed Central

    Graham, Pamela Louise; Russo, Riccardo; Defeyter, Margaret Anne

    2015-01-01

    Breakfast clubs are widely promoted as having a beneficial impact on children’s behavior at the start of the school day, which can be conducive to their learning within the classroom. However, the few available studies that have considered the impact of breakfast club attendance on children’s behavior have yielded mixed results, and no studies to date have directly observed children’s behavior within the breakfast club setting. Using a combination of real-time observation and filmed breakfast club footage, the aims of the current study were to: (1) devise a set of observational criteria appropriate for use in the breakfast club setting; (2) investigate the occurrence of both positive and negative behaviors. A sample of 30 children aged between 3 and 11 years were recruited from 3 opportunistically sampled primary school breakfast clubs in the North East of England, UK. The behaviors they displayed within the breakfast club setting on two separate days were observed and coded for subsequent analysis. Results of the investigation showed that children’s behavior could be classified into three positive and three negative behavioral categories. Using these categories to code children’s behavior as they engaged in breakfast club showed that children displayed more positive than negative behaviors within the breakfast club setting, and this was the case regardless of the type of activity (i.e., quiet or boisterous) children were involved in. Findings are discussed in relation to breakfast club policy, implementation, and evaluation. PMID:26217653

  18. Planetary protection issues for sample return missions.

    PubMed

    DeVincenzi, D L; Klein, H P

    1989-01-01

    Sample return missions from a comet nucleus and the Mars surface are currently under study in the US, USSR, and by ESA. Guidance on Planetary Protection (PP) issues is needed by mission scientists and engineers for incorporation into various elements of mission design studies. Although COSPAR has promulgated international policy on PP for various classes of solar system exploration missions, the applicability of this policy to sample return missions, in particular, remains vague. In this paper, we propose a set of implementing procedures to maintain the scientific integrity of these samples. We also propose that these same procedures will automatically assure that COSPAR-derived PP guidelines are achieved. The recommendations discussed here are the first step toward development of official COSPAR implementation requirements for sample return missions.

  19. Improved high-dimensional prediction with Random Forests by the use of co-data.

    PubMed

    Te Beest, Dennis E; Mes, Steven W; Wilting, Saskia M; Brakenhoff, Ruud H; van de Wiel, Mark A

    2017-12-28

    Prediction in high-dimensional settings is difficult due to the large number of variables relative to the sample size. We demonstrate how auxiliary 'co-data' can be used to improve the performance of a Random Forest in such a setting. Co-data are incorporated in the Random Forest by replacing the uniform sampling probabilities that are used to draw candidate variables by co-data moderated sampling probabilities. Co-data here are defined as any type of information that is available on the variables of the primary data but does not use its response labels. Inspired by empirical Bayes, these moderated sampling probabilities are learned from the data at hand. We demonstrate the co-data moderated Random Forest (CoRF) with two examples. In the first example we aim to predict the presence of a lymph node metastasis with gene expression data. We demonstrate how a set of external p-values, a gene signature, and the correlation between gene expression and DNA copy number can improve the predictive performance. In the second example we demonstrate how the prediction of cervical (pre-)cancer with methylation data can be improved by including the location of the probe relative to the known CpG islands, the number of CpG sites targeted by a probe, and a set of p-values from a related study. The proposed method is able to utilize auxiliary co-data to improve the performance of a Random Forest.
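The central mechanism, replacing the uniform candidate-variable draw by co-data moderated probabilities, can be sketched as follows (not the CoRF package itself; the p-values and the weighting scheme are illustrative assumptions):

```python
# Sketch: turning co-data (hypothetical external p-values) into moderated
# sampling probabilities for the candidate-variable draw at a split.
# Variables with favourable co-data are drawn more often as candidates.
import numpy as np

rng = np.random.default_rng(0)
p_values = rng.uniform(size=1000)           # hypothetical co-data per variable

# Smaller p-value -> larger weight (one simple choice, not CoRF's fit).
weights = -np.log(p_values)
probs = weights / weights.sum()             # moderated sampling probabilities

mtry = 31                                   # candidate variables per split
uniform_draw = rng.choice(1000, size=mtry, replace=False)
moderated_draw = rng.choice(1000, size=mtry, replace=False, p=probs)

# Variables with small co-data p-values dominate the moderated draw.
print("mean co-data p, uniform:  ", round(p_values[uniform_draw].mean(), 2))
print("mean co-data p, moderated:", round(p_values[moderated_draw].mean(), 2))
```

In CoRF the mapping from co-data to probabilities is learned from the data rather than fixed as here; the draw itself is the same weighted sampling.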

  20. Detecting representative data and generating synthetic samples to improve learning accuracy with imbalanced data sets.

    PubMed

    Li, Der-Chiang; Hu, Susan C; Lin, Liang-Sian; Yeh, Chun-Wu

    2017-01-01

    It is difficult for learning models to achieve high classification performance with imbalanced data sets: when one of the classes is much larger than the others, most machine learning and data mining classifiers are overly influenced by the larger classes and ignore the smaller ones. As a result, classification algorithms often have poor learning performance due to slow convergence in the smaller classes. To balance such data sets, this paper presents a strategy that involves reducing the size of the majority data and generating synthetic samples for the minority data. In the reducing operation, we use the box-and-whisker plot approach to exclude outliers and the Mega-Trend-Diffusion method to find representative data from the majority data. To generate the synthetic samples, we propose a counterintuitive hypothesis to find the distributed shape of the minority data, and then produce samples according to this distribution. Four real datasets were used to examine the performance of the proposed approach. We used paired t-tests to compare the Accuracy, G-mean, and F-measure scores of the proposed data pre-processing (PPDP) method merged with the D3C method (PPDP+D3C) with those of one-sided selection (OSS), the well-known SMOTEBoost (SB) method, the normal distribution-based oversampling (NDO) approach, and the PPDP method alone. The results indicate that the classification performance of the proposed approach is better than that of the above-mentioned methods.
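The two balancing steps can be sketched on illustrative 1-D data (the paper's Mega-Trend-Diffusion step and its minority-distribution hypothesis are not reproduced here; a fitted normal stands in for the latter):

```python
# Sketch: (1) box-and-whisker (IQR) exclusion of outliers from the majority
# class, (2) synthetic minority samples drawn from an estimated distribution,
# so the two classes end up the same size. Data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
majority = np.concatenate([rng.normal(0, 1, 500), [8.0, -9.0, 12.0]])  # + outliers
minority = rng.normal(5, 0.5, 20)

# Box-and-whisker exclusion: keep values in [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
q1, q3 = np.percentile(majority, [25, 75])
iqr = q3 - q1
kept = majority[(majority >= q1 - 1.5 * iqr) & (majority <= q3 + 1.5 * iqr)]

# Synthetic minority samples from the minority class's fitted normal.
synthetic = rng.normal(minority.mean(), minority.std(ddof=1),
                       size=len(kept) - len(minority))
balanced_minority = np.concatenate([minority, synthetic])

print(len(kept), len(balanced_minority))   # classes are now balanced
```

A classifier trained on `kept` versus `balanced_minority` no longer sees the original 25:1 imbalance.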

  1. Benchmarking contactless acquisition sensor reproducibility for latent fingerprint trace evidence

    NASA Astrophysics Data System (ADS)

    Hildebrandt, Mario; Dittmann, Jana

    2015-03-01

    Optical, nanometer-range, contactless, non-destructive sensor devices are promising acquisition techniques in crime scene trace forensics, e.g. for digitizing latent fingerprint traces. Before new approaches are introduced in crime investigations, innovations need to be positively tested and quality ensured. In this paper we investigate sensor reproducibility by studying different scans from four sensors: two chromatic white light sensors (CWL600/CWL1mm), one confocal laser scanning microscope, and one NIR/VIS/UV reflection spectrometer. Firstly, we perform an intra-sensor reproducibility test for the CWL600 with a privacy-conform test set of artificial-sweat printed, computer-generated fingerprints. We use 24 different fingerprint patterns as original samples (printing samples/templates) for printing with artificial sweat (physical trace samples) and their acquisition with contactless sensory, resulting in 96 sensor images, called scans or acquired samples. The second test set, for inter-sensor reproducibility assessment, consists of the first three patterns from the first test set, acquired in two consecutive scans using each device. We suggest using a simple feature space set in the spatial and frequency domains known from signal processing and test its suitability for six different classifiers classifying scan data into small differences (reproducible) and large differences (non-reproducible). Furthermore, we suggest comparing the classification results with biometric verification scores (calculated with NBIS, with a threshold of 40) as a biometric reproducibility score. The Bagging classifier is in nearly all cases the most reliable classifier in our experiments, and the results are also confirmed by the biometric matching rates.

  2. Comparative Assessment of a Self-sampling Device and Gynecologist Sampling for Cytology and HPV DNA Detection in a Rural and Low Resource Setting: Malaysian Experience.

    PubMed

    Latiff, Latiffah A; Ibrahim, Zaidah; Pei, Chong Pei; Rahman, Sabariah Abdul; Akhtari-Zavare, Mehrnoosh

    2015-01-01

    This study was conducted to assess the agreement and differences between cervical self-sampling with a Kato device (KSSD) and gynecologist sampling for Pap cytology and human papillomavirus DNA (HPV DNA) detection. Women underwent self-sampling followed by gynecologist sampling during screening at two primary health clinics. Pap cytology of cervical specimens was evaluated for specimen adequacy, presence of endocervical cells or transformation zone cells, and cytological interpretation for cell abnormalities. Cervical specimens were also extracted and tested for HPV DNA detection. Positive HPV smears underwent gene sequencing and HPV genotyping by referring to the online NCBI GenBank. Results were compared between samplings by Kappa agreement and the McNemar test. For Pap specimen adequacy, KSSD showed 100% agreement with gynecologist sampling but had only 32.3% agreement for presence of endocervical cells. Both samplings showed 100% agreement for the cytology result, with only 1 case of detected HSIL favouring CIN2. HPV DNA detection showed 86.2% agreement (K=0.64, 95% CI 0.524-0.756, p=0.001) between samplings. KSSD and gynecologist sampling identified high-risk HPV in 17.3% and 23.9% respectively (p = 0.014). Self-sampling using the Kato device can serve as a tool in Pap cytology and HPV DNA detection in low resource settings in Malaysia. Self-sampling devices such as the KSSD can be used as an alternative technique to gynecologist sampling for cervical cancer screening among rural populations in Malaysia.
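The two statistics used above, Cohen's kappa for agreement and McNemar's test for paired differences, can be sketched with an illustrative 2x2 table (the counts below are hypothetical, not the study's raw data):

```python
# Sketch: kappa and an exact two-sided McNemar test for paired binary
# results (self-sampling vs gynecologist sampling), from made-up counts.
from math import comb

# Hypothetical 2x2 table: rows = self-sampling +/-, cols = gynecologist +/-
a, b, c, d = 30, 10, 25, 188   # a and d agree; b and c are discordant pairs

n = a + b + c + d
po = (a + d) / n                                  # observed agreement
p_self = (a + b) / n
p_gyn = (a + c) / n
pe = p_self * p_gyn + (1 - p_self) * (1 - p_gyn)  # chance agreement
kappa = (po - pe) / (1 - pe)

# Exact McNemar: two-sided binomial test on the discordant pairs (b vs c).
k, m = min(b, c), b + c
p_mcnemar = min(1.0, 2 * sum(comb(m, i) for i in range(k + 1)) / 2 ** m)

print(f"kappa = {kappa:.2f}, McNemar p = {p_mcnemar:.3f}")
```

With these invented counts, kappa indicates moderate agreement while the McNemar test flags a systematic difference between the two sampling routes, the same pattern of reporting as in the abstract (which found K=0.64 on its real data).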

  3. Rapid and Portable Methods for Identification of Bacterially Influenced Calcite: Application of Laser-Induced Breakdown Spectroscopy and AOTF Reflectance Spectroscopy, Fort Stanton Cave, New Mexico

    NASA Astrophysics Data System (ADS)

    McMillan, N. J.; Chavez, A.; Chanover, N.; Voelz, D.; Uckert, K.; Tawalbeh, R.; Gariano, J.; Dragulin, I.; Xiao, X.; Hull, R.

    2014-12-01

    Rapid, in-situ methods for identification of biologic and non-biologic mineral precipitation sites permit mapping of biological hot spots. Two portable spectrometers, Laser-Induced Breakdown Spectroscopy (LIBS) and Acousto-Optic Tunable Filter Reflectance Spectroscopy (AOTFRS), were used to differentiate between bacterially influenced and inorganically precipitated calcite specimens from Fort Stanton Cave, NM, USA. LIBS collects light emitted from the decay of excited electrons in a laser ablation plasma; the spectrum is a chemical fingerprint of the analyte. AOTFRS collects light reflected from the surface of a specimen and provides structural information about the material (i.e., the presence of O-H bonds). These orthogonal data sets provide a rigorous method to determine the origin of calcite in cave deposits. This study used a set of 48 calcite samples collected from Fort Stanton Cave. Samples were examined by SEM for the presence of biologic markers; these data were used to separate the samples into biologic and non-biologic groups. Spectra were modeled using the multivariate technique Partial Least Squares Regression (PLSR). Half of the spectra were used to train a PLSR model, in which biologic samples were assigned the independent variable "0" and non-biologic samples were assigned the variable "1". Values of the independent variable were calculated for each of the training samples, which were close to 0 for the biologic samples (-0.09 to 0.23) and close to 1 for the non-biologic samples (0.57 to 1.14). A Value of Apparent Distinction (VAD) of 0.55 was used to numerically distinguish between the two groups; any sample with an independent variable value < 0.55 was classified as having a biologic origin; a sample with a value > 0.55 was determined to be non-biologic in origin. After the model was trained, independent variable values for the remaining half of the samples were calculated. 
Biologic or non-biologic origin was assigned by comparison to the VAD. Using LIBS data alone, the model has a 92% success rate, correctly identifying 23 of 25 samples. Modeling of AOTFRS spectra and the combined LIBS-AOTFRS data set have similar success rates. This study demonstrates that rapid, portable LIBS and AOTFRS instruments can be used to map the spatial distribution of biologic precipitation in caves.

  4. Comparison of Collection Methods for Fecal Samples for Discovery Metabolomics in Epidemiologic Studies.

    PubMed

    Loftfield, Erikka; Vogtmann, Emily; Sampson, Joshua N; Moore, Steven C; Nelson, Heidi; Knight, Rob; Chia, Nicholas; Sinha, Rashmi

    2016-11-01

    The gut metabolome may be associated with the incidence and progression of numerous diseases. The composition of the gut metabolome can be captured by measuring metabolite levels in the feces. However, there are little data describing the effect of fecal sample collection methods on metabolomic measures. We collected fecal samples from 18 volunteers using four methods: no solution, 95% ethanol, fecal occult blood test (FOBT) cards, and fecal immunochemical test (FIT). One set of samples was frozen after collection (day 0), and for 95% ethanol, FOBT, and FIT, a second set was frozen after 96 hours at room temperature. We evaluated (i) technical reproducibility within sample replicates, (ii) stability after 96 hours at room temperature for 95% ethanol, FOBT, and FIT, and (iii) concordance of metabolite measures with the putative "gold standard," day 0 samples without solution. Intraclass correlation coefficients (ICC) estimating technical reproducibility were high for replicate samples for each collection method. ICCs estimating stability at room temperature were high for 95% ethanol and FOBT (median ICC > 0.87) but not FIT (median ICC = 0.52). Similarly, Spearman correlation coefficients (rs) estimating metabolite concordance with the "gold standard" were higher for 95% ethanol (median rs = 0.82) and FOBT (median rs = 0.70) than for FIT (median rs = 0.40). Metabolomic measurements appear reproducible and stable in fecal samples collected with 95% ethanol or FOBT. Concordance with the "gold standard" is highest with 95% ethanol and acceptable with FOBT. Future epidemiologic studies should collect feces using 95% ethanol or FOBT if interested in studying fecal metabolomics. Cancer Epidemiol Biomarkers Prev; 25(11); 1483-90. ©2016 AACR. ©2016 American Association for Cancer Research.
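The concordance measure used above, Spearman's rank correlation, can be sketched in a few lines with made-up metabolite levels:

```python
# Sketch: pure-Python Spearman rank correlation (r_s) between a collection
# method and the day-0 no-solution "gold standard". Values are invented.

def ranks(xs):
    """1-based average ranks, with tied values sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1              # mean rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation computed on the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

gold = [1.2, 3.4, 2.2, 5.0, 4.1, 0.9]      # hypothetical day-0 levels
ethanol = [1.1, 2.4, 3.0, 4.8, 4.3, 1.0]   # same metabolites, 95% ethanol
print(f"r_s = {spearman(gold, ethanol):.2f}")
```

Because it works on ranks, r_s is insensitive to any monotone shift between methods, which is why it suits concordance questions like the one in this abstract.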

  5. A meta-data based method for DNA microarray imputation.

    PubMed

    Jörnsten, Rebecka; Ouyang, Ming; Wang, Hui-Yu

    2007-03-29

    DNA microarray experiments are conducted in logical sets, such as time course profiling after a treatment is applied to the samples, or comparisons of the samples under two or more conditions. Due to cost and design constraints of spotted cDNA microarray experiments, each logical set commonly includes only a small number of replicates per condition. Despite the vast improvement of the microarray technology in recent years, missing values are prevalent. Intuitively, imputation of missing values is best done using many replicates within the same logical set. In practice, there are few replicates and thus reliable imputation within logical sets is difficult. However, it is in the case of few replicates that the presence of missing values, and how they are imputed, can have the most profound impact on the outcome of downstream analyses (e.g. significance analysis and clustering). This study explores the feasibility of imputation across logical sets, using the vast amount of publicly available microarray data to improve imputation reliability in the small sample size setting. We download all cDNA microarray data of Saccharomyces cerevisiae, Arabidopsis thaliana, and Caenorhabditis elegans from the Stanford Microarray Database. Through cross-validation and simulation, we find that, for all three species, our proposed imputation using data from public databases is far superior to imputation within a logical set, sometimes to an astonishing degree. Furthermore, the imputation root mean square error for significant genes is generally much lower than that of non-significant ones. Since downstream analysis of significant genes, such as clustering and network analysis, can be very sensitive to small perturbations of estimated gene effects, it is highly recommended that researchers apply reliable data imputation prior to further analysis. Our method can also be applied to cDNA microarray experiments from other species, provided good reference data are available.
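
    The core idea of borrowing gene relationships from a large public compendium to impute a small logical set can be sketched as follows. This is an illustrative simplification, not the paper's algorithm; the data shapes, neighbor count k, and correlation-weighted averaging are assumptions.

```python
import numpy as np

def impute_with_reference(target, reference, k=3):
    # Fill NaNs in `target` (genes x arrays) using gene-gene correlations
    # estimated from a large public `reference` compendium (same gene order).
    out = target.copy()
    corr = np.corrcoef(reference)            # correlations learned from the big data set
    for g, a in zip(*np.where(np.isnan(target))):
        order = np.argsort(-corr[g])         # most positively correlated genes first
        nbrs = [n for n in order if n != g and not np.isnan(target[n, a])][:k]
        w = corr[g, nbrs]
        out[g, a] = np.dot(w, target[nbrs, a]) / w.sum()
    return out

rng = np.random.default_rng(1)
signal_ref = rng.normal(size=100)                              # shared expression signal
reference = signal_ref + rng.normal(0, 0.1, size=(10, 100))    # 10 co-regulated genes, 100 arrays
signal_tgt = rng.normal(size=4)
target = signal_tgt + rng.normal(0, 0.1, size=(10, 4))         # a small logical set
truth = target[3, 2]
target[3, 2] = np.nan                                          # simulate a missing value
imputed = impute_with_reference(target, reference)[3, 2]
print(abs(imputed - truth))
```

    The key point mirrored here is that the correlation structure comes from the reference compendium, while the imputed value itself comes from the small logical set.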

  6. Multidrug Resistance among New Tuberculosis Cases: Detecting Local Variation through Lot Quality-Assurance Sampling

    PubMed Central

    Lynn Hedt, Bethany; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Viet Nhung, Nguyen; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-01-01

    Background: Current methodology for multidrug-resistant TB (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. Methods: We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems—two-way static, three-way static, and three-way truncated sequential sampling—at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. Results: The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Conclusions: Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired. PMID:22249242
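
    A two-way static LQAS design of the kind described reduces to choosing a decision threshold d for a fixed sample size n: classify a site as "high MDR" when more than d of n sampled new cases are multidrug resistant. A hedged sketch; the sample size and the 5% error limits are illustrative choices, not taken from the surveys.

```python
from scipy.stats import binom

def lqas_threshold(n, low=0.02, high=0.10, alpha=0.05, beta=0.05):
    # Pick the smallest threshold d that keeps both misclassification
    # risks of a two-way static LQAS design below the chosen limits.
    for d in range(n + 1):
        false_high = 1 - binom.cdf(d, n, low)   # P(classify high | prevalence is low)
        false_low = binom.cdf(d, n, high)       # P(classify low  | prevalence is high)
        if false_high <= alpha and false_low <= beta:
            return d
    return None                                  # no valid design at this sample size

d = lqas_threshold(200)   # thresholds 2% vs 10%, as in the first threshold set
print(d)
```

    Scanning n downward with this function gives the smallest lot size at which classification with the required error limits is possible.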

  7. Impact of hindcast length on estimates of seasonal climate predictability.

    PubMed

    Shi, W; Schaller, N; MacLeod, D; Palmer, T N; Weisheimer, A

    2015-03-16

    It has recently been argued that single-model seasonal forecast ensembles are overdispersive, implying that the real world is more predictable than indicated by estimates of so-called perfect model predictability, particularly over the North Atlantic. However, such estimates are based on relatively short forecast data sets comprising just 20 years of seasonal predictions. Here we study longer 40 year seasonal forecast data sets from multimodel seasonal forecast ensemble projects and show that sampling uncertainty due to the length of the hindcast periods is large. The skill of forecasting the North Atlantic Oscillation during winter varies within the 40 year data sets with high levels of skill found for some subperiods. It is demonstrated that while 20 year estimates of seasonal reliability can show evidence of overdispersive behavior, the 40 year estimates are more stable and show no evidence of overdispersion. Instead, the predominant feature on these longer time scales is underdispersion, particularly in the tropics. Key points: predictions can appear overdispersive due to hindcast length sampling error; longer hindcasts are more robust and underdispersive, especially in the tropics; twenty hindcasts are an inadequate sample size to assess seasonal forecast skill.
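
    The effect of hindcast length on sampling uncertainty can be illustrated with a toy Monte Carlo experiment: estimate a correlation skill from 20-year versus 40-year samples and compare the spread of the estimates. The true skill value and trial count below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
TRUE_SKILL = 0.5        # assumed true forecast-observation correlation
N_TRIALS = 2000

def skill_estimate_spread(n_years):
    # Std dev of the correlation skill estimated from n_years of hindcasts
    skills = []
    for _ in range(N_TRIALS):
        x = rng.standard_normal(n_years)      # ensemble-mean forecast index
        y = TRUE_SKILL * x + np.sqrt(1 - TRUE_SKILL**2) * rng.standard_normal(n_years)
        skills.append(np.corrcoef(x, y)[0, 1])
    return float(np.std(skills))

s20 = skill_estimate_spread(20)
s40 = skill_estimate_spread(40)
print(s20, s40)   # the 20-year spread is noticeably larger
```

    The wide spread at 20 years is the mechanism by which short hindcast periods can make skill, and apparent overdispersion, look larger or smaller than it really is.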

  8. Legacy and currently used pesticides in the atmospheric environment of Lake Victoria, East Africa.

    PubMed

    Arinaitwe, Kenneth; Kiremire, Bernard T; Muir, Derek C G; Fellin, Phil; Li, Henrik; Teixeira, Camilla; Mubiru, Drake N

    2016-02-01

    The Lake Victoria watershed has extensive agricultural activity with a long history of pesticide use but there is limited information on historical use or on environmental levels. To address this data gap, high volume air samples were collected from two sites close to the northern shore of Lake Victoria; Kakira (KAK) and Entebbe (EBB). The samples, to be analyzed for pesticides, were collected over various periods between 1999 and 2004 inclusive (KAK 1999-2000, KAK 2003-2004, EBB 2003 and EBB 2004 sample sets) and from 2008 to 2010 inclusive (EBB 2008, EBB 2009 and EBB 2010 sample sets). The latter sample sets (which also included precipitation samples) were also analyzed for currently used pesticides (CUPs) including chlorpyrifos, chlorothalonil, metribuzin, trifluralin, malathion and dacthal. Chlorpyrifos was the predominant CUP in air samples with average concentrations of 93.5, 26.1 and 3.54 ng m(-3) for the EBB 2008, 2009, 2010 sample sets, respectively. Average concentrations of total endosulfan (ΣEndo), total DDT related compounds (ΣDDTs) and hexachlorocyclohexanes (ΣHCHs) ranged from 12.3-282, 22.8-130 and 3.72-81.8 pg m(-3), respectively, for all the sample sets. Atmospheric prevalence of residues of persistent organic pollutants (POPs) increased with fresh emissions of endosulfan, DDT and lindane. Hexachlorobenzene (HCB), pentachlorobenzene (PeCB) and dieldrin were also detected in air samples. Transformation products, pentachloroanisole, 3,4,5-trichloroveratrole and 3,4,5,6-tetrachloroveratrole, were also detected. The five most prevalent compounds in the precipitation samples were in the order chlorpyrifos>chlorothalonil>ΣEndo>ΣDDTs>ΣHCHs with average fluxes of 1123, 396, 130, 41.7 and 41.3 ng m(-2)sample(-1), respectively. PeCB exceeded HCB in precipitation samples. The reverse was true for air samples. Backward air trajectories suggested transboundary and local emission sources of the analytes. The results underscore the need for concerted regional vigilance in the management of chemicals. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. [Outlier sample discriminating methods for building calibration model in melons quality detecting using NIR spectra].

    PubMed

    Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang

    2012-11-01

    Outlier samples strongly influence the precision of the calibration model in soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected from a calibration set of 85 fruit samples. Because the 9 suspicious outlier samples might contain some non-outlier samples, they were returned to the model one by one to see whether they influenced the model and prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 degrees Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 degrees Brix. This model performed better than the model developed with no outliers eliminated from the calibration set (r = 0.797, RMSEC = 0.849 degrees Brix, RMSEP = 1.19 degrees Brix), and was more representative and stable than the model with all 9 samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 degrees Brix, RMSEP = 0.862 degrees Brix).
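
    Of the three screens named, the leverage and studentized residual test is the easiest to sketch. The version below works on an ordinary least-squares fit with synthetic data; the paper applies the idea to an NIR calibration, so the data and model here are illustrative assumptions.

```python
import numpy as np

def leverage_and_studentized(X, y):
    # Hat-matrix leverage and internally studentized residuals for an OLS fit
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept column
    H = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T    # hat matrix
    h = np.diag(H)                               # leverage of each sample
    resid = y - H @ y
    dof = len(y) - X1.shape[1]
    s = np.sqrt((resid ** 2).sum() / dof)
    t = resid / (s * np.sqrt(1 - h))             # studentized residuals
    return h, t

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 40)
y[7] += 3.0                                      # inject one gross reference-value error
h, t = leverage_and_studentized(X, y)
print(int(np.argmax(np.abs(t))))                 # flags sample 7
```

    High leverage flags spectral outliers; a large studentized residual flags reference-value outliers, which is the case injected above.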

  10. Partial Least Squares Regression Can Aid in Detecting Differential Abundance of Multiple Features in Sets of Metagenomic Samples

    PubMed Central

    Libiger, Ondrej; Schork, Nicholas J.

    2015-01-01

    It is now feasible to examine the composition and diversity of microbial communities (i.e., “microbiomes”) that populate different human organs and orifices using DNA sequencing and related technologies. To explore the potential links between changes in microbial communities and various diseases in the human body, it is essential to test associations involving different species within and across microbiomes, environmental settings and disease states. Although a number of statistical techniques exist for carrying out relevant analyses, it is unclear which of these techniques exhibit the greatest statistical power to detect associations given the complexity of most microbiome datasets. We compared the statistical power of principal component regression, partial least squares regression, regularized regression, distance-based regression, Hill's diversity measures, and a modified test implemented in the popular and widely used microbiome analysis methodology “Metastats” across a wide range of simulated scenarios involving changes in feature abundance between two sets of metagenomic samples. For this purpose, simulation studies were used to change the abundance of microbial species in a real dataset from a published study examining human hands. Each technique was applied to the same data, and its ability to detect the simulated change in abundance was assessed. We hypothesized that a small subset of methods would outperform the rest in terms of the statistical power. Indeed, we found that the Metastats technique modified to accommodate multivariate analysis and partial least squares regression yielded high power under the models and data sets we studied. The statistical power of diversity measure-based tests, distance-based regression and regularized regression was significantly lower. 
Our results provide insight into powerful analysis strategies that utilize information on species counts from large microbiome data sets exhibiting skewed frequency distributions obtained on a small to moderate number of samples. PMID:26734061

  11. [Local Regression Algorithm Based on Net Analyte Signal and Its Application in Near Infrared Spectral Analysis].

    PubMed

    Zhang, Hong-guang; Lu, Jian-gang

    2016-02-01

    To overcome the problems of significant differences among samples and nonlinearity between the property and spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and unknown samples; then the Euclidean distance between the net analyte signal of each unknown sample and the net analyte signals of the calibration samples was calculated and used as the similarity index. According to this similarity index, a local calibration set was individually selected for each unknown sample. Finally, a local PLS regression model was built on the local calibration set for each unknown sample. The proposed method was applied to a set of near infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and the conventional local regression algorithm based on spectral Euclidean distance.

  12. Improved variance estimation of classification performance via reduction of bias caused by small sample size.

    PubMed

    Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders

    2006-03-13

    Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance on new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval for the error rate using repeated design and test sets selected from the available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore, different methods for small-sample performance estimation, such as the recently proposed Repeated Random Sampling (RSS) procedure, are also expected to result in heavily biased estimates, which in turn translate into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling from a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small-sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed, indicating that the method in its present form cannot be directly applied to small data sets.
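
    Why small test sets bias the between-design variance estimate can be seen in a few lines: the observed variance is the true between-design variance plus binomial measurement noise of order p(1-p)/n_test. A toy simulation in which all parameter values are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
TRUE_BETWEEN_VAR = 0.002      # assumed true variance of error rate across design sets
MEAN_ERROR = 0.2
N_DESIGNS = 4000

def observed_variance(n_test):
    # Each design set has its own true error rate p, but we only observe a
    # binomial estimate of p from a test set of size n_test.
    p = np.clip(rng.normal(MEAN_ERROR, np.sqrt(TRUE_BETWEEN_VAR), N_DESIGNS), 0, 1)
    estimates = rng.binomial(n_test, p) / n_test
    return float(estimates.var())

v_small, v_large = observed_variance(20), observed_variance(1000)
print(v_small, v_large)   # the small-test-set estimate is inflated far above 0.002
```

    Modeling this inflation term explicitly and subtracting it is, in spirit, the correction the abstract describes.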

  13. Impact of antibiotic administration on blood culture positivity at the beginning of sepsis: a prospective clinical cohort study.

    PubMed

    Scheer, Christian S; Fuchs, Christian; Gründling, Matthias; Vollmer, Marcus; Bast, Juliane; Bohnert, Jürgen A; Zimmermann, Kathrin; Hahnenkamp, Klaus; Rehberg, Sebastian; Kuhn, Sven-Olaf

    2018-06-04

    Sepsis guidelines recommend obtaining blood cultures before starting anti-infective therapy in patients with sepsis. However, little is known about how antibiotic treatment prior to sampling affects bacterial growth. The aim of this study was to compare the results of blood cultures drawn prior to and under antibiotic therapy in a prospective clinical cohort study of septic patients. Adult ICU patients with 2 or 3 blood culture (BC) sets at the beginning of sepsis between 2010 and 2017 were included. Patients with blood culture samplings obtained prior to antibiotic therapy were compared to patients with samplings under antibiotic therapy. Blood culture positivity, defined as a microbiological pathogen finding, was compared between the groups. Logistic regression was performed to adjust for the impact of different factors on blood culture positivity. In total, 559 patients with 1364 blood culture sets at the beginning of sepsis were analyzed. BC positivity was 50.6% (78/154) among septic patients who did not receive antibiotics and only 27.7% (112/405) in those who were already under antibiotics (P < 0.001). Logistic regression revealed antibiotic therapy as an independent factor for less pathogen identification (odds ratio 0.4; 95% CI 0.3-0.6). Gram-positive pathogens (28.3% (111/392) vs. 11.9% (116/972); P < 0.001) and gram-negative pathogens (16.3% (64/392) vs. 9.3% (90/972); P < 0.001) were more frequent in BC sets drawn prior to antibiotic therapy than in sets under antibiotics. Obtaining blood cultures under antibiotic therapy is associated with a significant loss of pathogen detection. This strongly emphasizes the current recommendation to obtain blood cultures prior to antibiotic administration in patients with sepsis. Copyright © 2018. Published by Elsevier Ltd.
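
    The unadjusted odds ratio implied by the abstract's patient counts can be recovered with a one-covariate logistic regression. The patient-level rows below are reconstructed from the aggregate counts; the published 0.4 is the covariate-adjusted estimate, so a slightly different value is expected here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Reconstructed patient-level data from the counts in the abstract:
# 154 patients sampled before antibiotics (78 BC-positive),
# 405 patients sampled under antibiotics (112 BC-positive).
under_abx = np.r_[np.zeros(154), np.ones(405)]
positive = np.r_[np.ones(78), np.zeros(76), np.ones(112), np.zeros(293)]

# very large C makes the fit effectively unpenalized maximum likelihood
model = LogisticRegression(C=1e9).fit(under_abx[:, None], positive)
odds_ratio = float(np.exp(model.coef_[0, 0]))
print(round(odds_ratio, 2))   # unadjusted OR, close to the adjusted 0.4 reported
```

    The same number falls out of the 2x2 table directly as (112/293)/(78/76); regression becomes necessary once further covariates are added, as in the study.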

  14. Methods, compounds and systems for detecting a microorganism in a sample

    DOEpatents

    Colston, Jr, Bill W.; Fitch, J. Patrick; Gardner, Shea N.; Williams, Peter L.; Wagner, Mark C.

    2016-09-06

    Methods to identify a set of probe polynucleotides suitable for detecting a set of targets, and in particular methods for identification of primers suitable for detection of target microorganisms; related polynucleotides, sets of polynucleotides, and compositions; and related methods and systems for detection and/or identification of microorganisms in a sample.

  15. Knowledge of Good Blood Culture Sampling Practice among Healthcare Staffs in An Emergency Department - Are We Getting It Right?

    PubMed

    Chew, K S; Mohd Hashairi, F; Jusoh, A F; Aziz, A A; Nik Hisamuddin, N A R; Siti Asma, H

    2013-08-01

    Although a vital test, blood culture is often plagued by the problem of contamination and false results, especially in a chaotic emergency department setting. The objective of this pilot study was to assess the level of understanding of good blood culture sampling practice among healthcare staff in the emergency department of Hospital Universiti Sains Malaysia (HUSM). All healthcare staff in the emergency department of HUSM who consented to this study were given a self-administered anonymous questionnaire to complete. More than half (53.1%) of the 64 participants were emergency medicine residents. The majority of them (75%) had been working in the emergency department of HUSM for more than 2 years. More than half of them were able to correctly state the blood volume needed for culture in adult and pediatric patients. When asked which factors are required to improve the true yield and to reduce the risk of culture contamination, the four commonest answers given were observing proper aseptic technique during blood sampling, donning sterile gloves, proper hand scrubbing, and ensuring the sterility of the equipment. This study suggests that there is a lack of proper knowledge of good blood culture sampling practice among healthcare staff in the emergency department.

  16. Photometric redshift analysis in the Dark Energy Survey Science Verification data

    DOE PAGES

    Sanchez, C.; Carrasco Kind, M.; Lin, H.; ...

    2014-10-09

    In this study, we present results from a study of the photometric redshift performance of the Dark Energy Survey (DES), using the early data from a Science Verification period of observations in late 2012 and early 2013 that provided science-quality images for almost 200 sq. deg. at the nominal depth of the survey. We assess the photometric redshift (photo-z) performance using about 15 000 galaxies with spectroscopic redshifts available from other surveys. These galaxies are used, in different configurations, as a calibration sample, and photo-z's are obtained and studied using most of the existing photo-z codes. A weighting method in a multidimensional colour–magnitude space is applied to the spectroscopic sample in order to evaluate the photo-z performance with sets that mimic the full DES photometric sample, which is on average significantly deeper than the calibration sample due to the limited depth of spectroscopic surveys. Empirical photo-z methods using, for instance, artificial neural networks or random forests yield the best performance in the tests, achieving core photo-z resolutions σ68 ~ 0.08. Moreover, the results from most of the codes, including template-fitting methods, comfortably meet the DES requirements on photo-z performance, providing an excellent precedent for future DES data sets.
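
    The σ68 resolution metric quoted above is commonly computed as the half-width of the central 68% of the scaled redshift residuals. A sketch on synthetic redshifts (the exact DES definition may differ in detail):

```python
import numpy as np

def sigma68(z_phot, z_spec):
    # Half-width of the interval containing the central 68% of the
    # scaled residuals (z_phot - z_spec) / (1 + z_spec)
    dz = (z_phot - z_spec) / (1 + z_spec)
    lo, hi = np.percentile(dz, [16, 84])
    return 0.5 * (hi - lo)

rng = np.random.default_rng(3)
z_spec = rng.uniform(0.2, 1.2, 15000)                 # mock spectroscopic redshifts
z_phot = z_spec + 0.08 * (1 + z_spec) * rng.standard_normal(15000)  # scatter of 0.08
print(round(sigma68(z_phot, z_spec), 3))
```

    By construction the recovered value is close to the injected scatter of 0.08, matching the core resolution quoted in the abstract.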

  17. 16 CFR § 1616.4 - Sampling and acceptance procedures.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... specimen to one of the three samples. Test each set of three samples and accept or reject each seam design... all the test criteria of § 1616.3(b), accept the seam design. If one or more of the three additional.... Test the sets of three samples and accept or reject the type of trim and design on the same basis as...

  18. Monitoring Progress in Vocal Development in Young Cochlear Implant Recipients: Relationships between Speech Samples and Scores from the Conditioned Assessment of Speech Production (CASP)

    PubMed Central

    Ertmer, David J.; Jung, Jongmin

    2012-01-01

    Background: Evidence of auditory-guided speech development can be heard as the prelinguistic vocalizations of young cochlear implant recipients become increasingly complex, phonetically diverse, and speech-like. In research settings, these changes are most often documented by collecting and analyzing speech samples. Sampling, however, may be too time-consuming and impractical for widespread use in clinical settings. The Conditioned Assessment of Speech Production (CASP; Ertmer & Stoel-Gammon, 2008) is an easily administered and time-efficient alternative to speech sample analysis. The current investigation examined the concurrent validity of the CASP and data obtained from speech samples recorded at the same intervals. Methods: Nineteen deaf children who received CIs before their third birthdays participated in the study. Speech samples and CASP scores were gathered at 6, 12, 18, and 24 months post-activation. Correlation analyses were conducted to assess the concurrent validity of CASP scores and data from samples. Results: CASP scores showed strong concurrent validity with scores from speech samples gathered across all recording sessions (6-24 months). Conclusions: The CASP was found to be a valid, reliable, and time-efficient tool for assessing progress in vocal development during young CI recipients' first 2 years of device experience. PMID:22628109

  19. Study on the Factors Affecting the Mechanical Behavior of Electron Beam Melted Ti6Al4V

    NASA Astrophysics Data System (ADS)

    Pirozzi, Carmine; Franchitti, Stefania; Borrelli, Rosario; Caiazzo, Fabrizia; Alfieri, Vittorio; Argenio, Paolo

    2017-09-01

    In this study, a mechanical characterization has been performed on EBM-built Ti-6Al-4V tensile samples. The results of tensile tests have shown a different behavior between two sets of specimens: as-built and machined ones. Supporting investigations have been carried out in order to physically explain the statistical difference in mechanical performance. Cylindrical samples representing the tensile specimen geometry have been EBM-manufactured and then investigated in their as-built condition from a macrostructural and microstructural point of view. To make this study robust, cylindrical samples have been EBM-manufactured with different sizes and at different heights from the build plate. This choice arose from the need to understand whether other factors, such as massivity and specific location, could affect the microstructure and defect generation, consequently influencing the mechanical behavior of the EBM components. The results of this study have proved that the irregularity of the external circular surfaces of the examined cylinders, which significantly reduces the true cross-section withstanding the applied load, gives a comprehensive physical explanation of the different tensile behavior of the two sets of tensile specimens.

  20. Sample entropy analysis of cervical neoplasia gene-expression signatures

    PubMed Central

    Botting, Shaleen K; Trzeciakowski, Jerome P; Benoit, Michelle F; Salama, Salama A; Diaz-Arrastia, Concepcion R

    2009-01-01

    Background: We introduce Approximate Entropy as a mathematical method of analysis for microarray data. Approximate entropy is applied here as a method to classify the complex gene expression patterns resulting from a clinical sample set. Since entropy is a measure of disorder in a system, we believe that by choosing genes which display minimum entropy in normal controls and maximum entropy in the cancerous sample set, we will be able to distinguish those genes which display the greatest variability in the cancerous set. Here we describe a method of utilizing Approximate Sample Entropy (ApSE) analysis to identify genes of interest with the highest probability of producing an accurate, predictive classification model from our data set. Results: In the development of a diagnostic gene-expression profile for cervical intraepithelial neoplasia (CIN) and squamous cell carcinoma of the cervix, we identified 208 genes which are unchanging in all normal tissue samples, yet exhibit a random pattern indicative of the genetic instability and heterogeneity of malignant cells. This may be measured in terms of the ApSE when compared to normal tissue. We have validated 10 of these genes on 10 normal and 20 cancer and CIN3 samples. We report that the predictive value of the sample entropy calculation for these 10 genes of interest is promising (75% sensitivity, 80% specificity for prediction of cervical cancer over CIN3). Conclusion: The success of the Approximate Sample Entropy approach in discerning alterations in complexity in a biological system with such a relatively small sample set, and in extracting biologically relevant genes of interest, holds great promise. PMID:19232110
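
    Sample entropy itself is compact to state in code: count template matches of length m and m+1 within tolerance r, and take the negative log of their ratio. The sketch below is a standard SampEn(m, r) implementation on synthetic series; the paper's exact ApSE formulation may differ.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # SampEn(m, r): low for ordered series, high for disordered ones
    x = np.asarray(x, dtype=float)
    r = r * x.std()                      # tolerance as a fraction of the std dev
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to all later templates
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(5)
regular = np.sin(np.linspace(0, 20 * np.pi, 300))   # ordered series: low entropy
noisy = rng.standard_normal(300)                    # disordered series: high entropy
print(sample_entropy(regular), sample_entropy(noisy))
```

    In the abstract's setting, the series for a gene would be its expression values across the samples in a group, with low entropy expected in normal controls and high entropy in the cancerous set.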

Top