Sample records for the query "sample set consisted"

  1. Detection and Genotyping of Human Papillomavirus in Self-Obtained Cervicovaginal Samples by Using the FTA Cartridge: New Possibilities for Cervical Cancer Screening

    PubMed Central

    Lenselink, Charlotte H.; de Bie, Roosmarie P.; van Hamont, Dennis; Bakkers, Judith M. J. E.; Quint, Wim G. V.; Massuger, Leon F. A. G.; Bekkers, Ruud L. M.; Melchers, Willem J. G.

    2009-01-01

    This study assesses human papillomavirus (HPV) detection and genotyping in self-sampled genital smears applied to an indicating FTA elute cartridge (FTA cartridge). The study group consisted of 96 women, divided into two sample sets. All samples were analyzed by the HPV SPF10-Line Blot 25. Set 1 consisted of 45 women attending the gynecologist; all obtained a self-sampled cervicovaginal smear, which was applied to an FTA cartridge. HPV results were compared to a cervical smear (liquid based) taken by a trained physician. Set 2 consisted of 51 women who obtained a self-sampled cervicovaginal smear at home, which was applied to an FTA cartridge and to a liquid-based medium. DNA was obtained from the FTA cartridges by simple elution as well as extraction. Of all self-obtained samples of set 1, 62.2% tested HPV positive. The overall agreement between self- and physician-obtained samples was 93.3%, in favor of the self-obtained samples. In sample set 2, 25.5% tested HPV positive. The overall agreement for high-risk HPV presence between the FTA cartridge and liquid-based medium and between DNA elution and extraction was 100%. This study shows that HPV detection and genotyping in self-obtained cervicovaginal samples applied to an FTA cartridge is highly reliable. It shows a high level of overall agreement with HPV detection and genotyping in physician-obtained cervical smears and liquid-based self-samples. DNA can be obtained by simple elution and is therefore easy, cheap, and fast. Furthermore, the FTA cartridge is a convenient medium for collection and safe transport at ambient temperatures. Therefore, this method may contribute to a new way of cervical cancer screening. PMID:19553570

  2. Detection and genotyping of human papillomavirus in self-obtained cervicovaginal samples by using the FTA cartridge: new possibilities for cervical cancer screening.

    PubMed

    Lenselink, Charlotte H; de Bie, Roosmarie P; van Hamont, Dennis; Bakkers, Judith M J E; Quint, Wim G V; Massuger, Leon F A G; Bekkers, Ruud L M; Melchers, Willem J G

    2009-08-01

    This study assesses human papillomavirus (HPV) detection and genotyping in self-sampled genital smears applied to an indicating FTA elute cartridge (FTA cartridge). The study group consisted of 96 women, divided into two sample sets. All samples were analyzed by the HPV SPF(10)-Line Blot 25. Set 1 consisted of 45 women attending the gynecologist; all obtained a self-sampled cervicovaginal smear, which was applied to an FTA cartridge. HPV results were compared to a cervical smear (liquid based) taken by a trained physician. Set 2 consisted of 51 women who obtained a self-sampled cervicovaginal smear at home, which was applied to an FTA cartridge and to a liquid-based medium. DNA was obtained from the FTA cartridges by simple elution as well as extraction. Of all self-obtained samples of set 1, 62.2% tested HPV positive. The overall agreement between self- and physician-obtained samples was 93.3%, in favor of the self-obtained samples. In sample set 2, 25.5% tested HPV positive. The overall agreement for high-risk HPV presence between the FTA cartridge and liquid-based medium and between DNA elution and extraction was 100%. This study shows that HPV detection and genotyping in self-obtained cervicovaginal samples applied to an FTA cartridge is highly reliable. It shows a high level of overall agreement with HPV detection and genotyping in physician-obtained cervical smears and liquid-based self-samples. DNA can be obtained by simple elution and is therefore easy, cheap, and fast. Furthermore, the FTA cartridge is a convenient medium for collection and safe transport at ambient temperatures. Therefore, this method may contribute to a new way of cervical cancer screening.

  3. Characterization of Full Set Material Constants and Their Temperature Dependence for Piezoelectric Materials Using Resonant Ultrasound Spectroscopy

    PubMed Central

    Tang, Liguo; Cao, Wenwu

    2016-01-01

    During the operation of high power electromechanical devices, a temperature rise is unavoidable due to mechanical and electrical losses, causing the degradation of device performance. In order to evaluate such degradations using computer simulations, full matrix material properties at elevated temperatures are needed as inputs. It is extremely difficult to measure such data for ferroelectric materials due to their strong anisotropic nature and property variation among samples of different geometries. Because the degree of depolarization is boundary condition dependent, data obtained by the IEEE (Institute of Electrical and Electronics Engineers) impedance resonance technique, which requires several samples with drastically different geometries, usually lack self-consistency. The resonant ultrasound spectroscopy (RUS) technique allows the full set material constants to be measured using only one sample, which can eliminate errors caused by sample-to-sample variation. A detailed RUS procedure is demonstrated here using a lead zirconate titanate (PZT-4) piezoceramic sample. In the example, the complete set of material constants was measured from room temperature to 120 °C. Measured free dielectric constants were compared with calculated ones based on the measured full set data, and piezoelectric constants d15 and d33 were also calculated using different formulas. Excellent agreement was found in the entire range of temperatures, which confirmed the self-consistency of the data set obtained by the RUS. PMID:27168336

  4. Adaptive web sampling.

    PubMed

    Thompson, Steven K

    2006-12-01

    A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
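The sequential, mixture-based selection described above can be illustrated with a minimal sketch. This is an illustrative toy only, not Thompson's actual design or estimator; the node labels, the `p_trace` parameter, and the rule defining the active set are assumptions:

```python
import random

def adaptive_web_sample(neighbors, values, n, p_trace=0.7, seed=0):
    """Toy adaptive web sampling: at each step, with probability p_trace
    follow a network link out of the active set (links from sampled
    high-value nodes to unsampled nodes); otherwise draw uniformly from
    the unsampled population.

    neighbors: dict node -> list of linked nodes
    values:    dict node -> sample value (truthy = adaptively interesting)
    """
    rng = random.Random(seed)
    sampled = []
    unsampled = set(neighbors)
    while len(sampled) < n and unsampled:
        # active set: links from high-valued sampled nodes to unsampled nodes
        active_links = [(u, v) for u in sampled if values[u]
                        for v in neighbors[u] if v in unsampled]
        if active_links and rng.random() < p_trace:
            _, nxt = rng.choice(active_links)    # adaptive (link-tracing) draw
        else:
            nxt = rng.choice(sorted(unsampled))  # conventional random draw
        sampled.append(nxt)
        unsampled.discard(nxt)
    return sampled
```

The mixture between link-tracing and uniform draws is what gives the design control over how much effort goes to adaptive selections.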

  5. Y-chromosomal diversity of the Valachs from the Czech Republic: model for isolated population in Central Europe

    PubMed Central

    Ehler, Edvard; Vaněk, Daniel; Stenzl, Vlastimil; Vančata, Václav

    2011-01-01

    Aim To evaluate Y-chromosomal diversity of the Moravian Valachs of the Czech Republic and compare them with a Czech population sample and other samples from Central and South-Eastern Europe, and to evaluate the effects of genetic isolation and sampling. Methods The first sample set of the Valachs consisted of 94 unrelated male donors from the Valach region in the border area of the northeastern Czech Republic. The second sample set of the Valachs consisted of 79 men who originated from 7 paternal lineages defined by surname. No close relatives were sampled. The third sample set consisted of 273 unrelated men from the whole of the Czech Republic and was used for comparison, as well as published data for 27 other populations. The total number of samples was 3244. Y-short tandem repeat (STR) markers were typed by standard methods using PowerPlex® Y System (Promega) and Yfiler® Amplification Kit (Applied Biosystems) kits. Y-chromosomal haplogroups were estimated from the haplotype information. Haplotype diversity and other intra- and inter-population statistics were computed. Results The Moravian Valachs showed a lower genetic variability of Y-STR markers than other Central European populations, resembling the isolated Balkan populations (Aromuns, Csango, Bulgarian, and Macedonian Roma) more than the surrounding populations (Czechs, Slovaks, Poles, Saxons). We illustrated the effect of sampling on Valach paternal lineages, which includes reduction of discrimination capacity and variability inside Y-chromosomal haplogroups. The Valach modal haplotype belongs to haplogroup R1a and was not detected in the Czech population. Conclusion The Moravian Valachs display strong substructure and isolation in their Y-chromosomal markers. They represent a unique Central European population model for population genetics. PMID:21674832

  6. Family Socioeconomic Status and Consistent Environmental Stimulation in Early Childhood

    PubMed Central

    Crosnoe, Robert; Leventhal, Tama; Wirth, R. J.; Pierce, Kim M.; Pianta, Robert

    2010-01-01

    The transition into school occurs at the intersection of multiple environmental settings. This study applied growth curve modeling to a sample of 1,364 American children, followed from birth through age six, who had been categorized by their exposure to cognitive stimulation at home and in preschool child care and first grade classrooms. Of special interest was the unique and combined contribution to early learning of these three settings. Net of socioeconomic selection into different settings, children had higher math achievement when they were consistently stimulated in all three, and they had higher reading achievement when consistently stimulated at home and in child care. The observed benefits of consistent environmental stimulation tended to be more pronounced for low-income children. PMID:20573117

  7. An historically consistent and broadly applicable MRV system based on LiDAR sampling and Landsat time-series

    Treesearch

    W. Cohen; H. Andersen; S. Healey; G. Moisen; T. Schroeder; C. Woodall; G. Domke; Z. Yang; S. Stehman; R. Kennedy; C. Woodcock; Z. Zhu; J. Vogelmann; D. Steinwand; C. Huang

    2014-01-01

    The authors are developing a REDD+ MRV system that tests different biomass estimation frameworks and components. Design-based inference from a costly field plot network was compared to sampling with LiDAR strips and a smaller set of plots in combination with Landsat for disturbance monitoring. Biomass estimation uncertainties associated with these different data sets...

  8. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--METALS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Blood data set contains analytical results for measurements of up to 2 metals in 86 blood samples over 86 households. Each sample was collected as a venous sample from the primary respondent within each household. The samples consisted of two 3-mL tubes. The prim...

  9. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--PESTICIDE METABOLITES IN URINE ANALYTICAL RESULTS

    EPA Science Inventory

    The Pesticide Metabolites in Urine data set contains the analytical results for measurements of up to 8 pesticide metabolites in 86 samples over 86 households. Each sample was collected from the primary respondent within each household. The sample consists of the first morning ...

  10. NHEXAS PHASE I ARIZONA STUDY--METALS IN URINE ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Urine data set contains analytical results for measurements of up to 6 metals in 176 urine samples over 176 households. Each sample was collected from the primary respondent within each household during Stage III of the NHEXAS study. The sample consists of the fir...

  11. Characterization and electron-energy-loss spectroscopy on NiV and NiMo superlattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahmood, S.H.

    1986-01-01

    NiV superlattices with periods (Λ) ranging from 15 to 80 Å, and NiMo superlattices with Λ from 14 to 110 Å, were studied using X-ray Diffraction (XRD), Electron Diffraction (ED), Energy-Dispersive X-Ray (EDX) microanalysis, and Electron Energy Loss Spectroscopy (EELS). Both of these systems have sharp superlattice-to-amorphous (S-A) transitions at about Λ = 17 Å. Superlattices with Λ around the S-A boundary were found to have large local variations in the in-plane grain sizes. Except for a few isolated regions, the chemical composition of the samples was found to be uniform. In samples prepared at Argonne National Laboratory (ANL), most places studied with EELS showed changes in the EELS spectrum with decreasing Λ. An observed growth in a plasmon peak at approximately 10 eV in both NiV and NiMo as Λ decreased down to 19 Å is attributed to excitation of interface plasmons. Consistent with this attribution, the peak height shrank in the amorphous samples. The width of this peak is consistent with the theory. The shift in this peak down to 9 eV with decreasing Λ in NiMo is not understood.

  12. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--METALS IN URINE ANALYTICAL RESULTS

    EPA Science Inventory

    The Metals in Urine data set contains analytical results for measurements of up to 7 metals in 86 urine samples over 86 households. Each sample was collected from the primary respondent within each household. The sample consists of the first morning void following the 24-hour d...

  13. Development of a nano-tesla magnetic field shielded chamber and highly precise AC-susceptibility measurement coil at μK temperatures

    NASA Astrophysics Data System (ADS)

    Kumar, Anil; Prakash, Om; Ramakrishanan, S.

    2014-04-01

    A special sample measurement chamber has been developed to perform experiments at ultralow temperatures and ultralow magnetic fields. A high-permeability material known as Cryoperm 10, together with Pb, is used to shield the measurement space consisting of the signal-detecting setup and the sample. The detecting setup consists of a very sensitive susceptibility coil wound on an OFHC Cu bobbin.

  14. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model

    USGS Publications Warehouse

    Ellefsen, Karl J.; Smith, David

    2016-01-01

    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
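The repeated two-way splitting step can be illustrated with a toy example. The sketch below partitions 1-D data with a two-component Gaussian mixture fitted by EM; this is a simplified stand-in assumption, not the authors' method, which uses multivariate Bayesian finite mixtures sampled with Hamiltonian Monte Carlo:

```python
import numpy as np

def two_cluster_em(x, iters=100):
    """Split 1-D data into two clusters via a 2-component Gaussian
    mixture fitted by EM (illustrative stand-in for the Bayesian fit)."""
    mu = np.array([x.min(), x.max()], dtype=float)  # spread-out init
    sigma = np.full(2, x.std()) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        d = (x[:, None] - mu) / sigma
        dens = pi * np.exp(-0.5 * d**2) / (sigma * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu)**2).sum(axis=0) / nk) + 1e-6
    return r.argmax(axis=1)  # hard cluster assignment

# hierarchical use: split once, then split each resulting cluster again
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(8.0, 1.0, 200)])
labels = two_cluster_em(x)
```

Feeding each cluster back into `two_cluster_em` builds the hierarchy of clusters and subclusters described in the abstract.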

  15. Evaluation of airborne asbestos exposure from routine handling of asbestos-containing wire gauze pads in the research laboratory.

    PubMed

    Garcia, Ediberto; Newfang, Daniel; Coyle, Jayme P; Blake, Charles L; Spencer, John W; Burrelli, Leonard G; Johnson, Giffe T; Harbison, Raymond D

    2018-07-01

    Three independent asbestos exposure evaluations were conducted using wire gauze pads, similar to standard practice in the laboratory setting. All testing occurred in a controlled atmosphere inside an enclosed chamber simulating a laboratory setting. Separate teams consisting of a laboratory technician, or technician and assistant, simulated common tasks involving wire gauze pads, including heating and direct wire gauze manipulation. Area and personal air samples were collected and evaluated for asbestos consistent with the National Institute for Occupational Safety and Health (NIOSH) methods 7400 and 7402, and the Asbestos Hazard Emergency Response Act (AHERA) method. Bulk gauze pad samples were analyzed by Polarized Light Microscopy and Transmission Electron Microscopy to determine asbestos content. Among air samples, chrysotile asbestos was the only fiber found in the first and third experiments, and tremolite asbestos for the second experiment. None of the air samples contained asbestos in concentrations above the current permissible regulatory levels promulgated by OSHA. These findings indicate that the level of asbestos exposure when working with wire gauze pads in the laboratory setting is much lower than levels associated with asbestosis or asbestos-related lung cancer and mesothelioma. Copyright © 2018. Published by Elsevier Inc.

  16. Organisational Capability in Internalising Quality Culture in Higher Institution

    ERIC Educational Resources Information Center

    Bello, Muhammad Ibrahim; Ibrahim, Burhan Muhammad Bn; Bularafa, Mohammed Waziri

    2015-01-01

    The study examines the influence of leadership roles related to organisational capability (direction setting, strategic and organisational processes, alignment, intervention, and strategic capability) on the dependent variable, internalising quality culture, in IIUM. The study used 100 samples consisting of lecturers, non-academic staff and…

  17. The Affective Reactivity Index: a concise irritability scale for clinical and research settings

    PubMed Central

    Stringaris, Argyris; Goodman, Robert; Ferdinando, Sumudu; Razdan, Varun; Muhrer, Eli; Leibenluft, Ellen; Brotman, Melissa A

    2012-01-01

    Background Irritable mood has recently become a matter of intense scientific interest. Here, we present data from two samples, one from the United States and the other from the United Kingdom, demonstrating the clinical and research utility of the parent- and self-report forms of the Affective Reactivity Index (ARI), a concise dimensional measure of irritability. Methods The US sample (n = 218) consisted of children and adolescents recruited at the National Institute of Mental Health who met criteria for bipolar disorder (BD, n = 39) or severe mood dysregulation (SMD, n = 67), children at family risk for BD (n = 35), and healthy volunteers (n = 77). The UK sample (n = 88) comprised children from a generic mental health setting and healthy volunteers from primary and secondary schools. Results Parent- and self-report scales of the ARI showed excellent internal consistencies and formed a single factor in the two samples. In the US sample, the ARI showed a gradation with irritability significantly increasing from healthy volunteers through to SMD. Irritability was significantly higher in SMD than in BD by parent-report, but this did not reach significance by self-report. In the UK sample, parent-rated irritability was differentially related to emotional problems. Conclusions Irritability can be measured using a concise instrument both in a highly specialized US, as well as a general UK child mental health setting. PMID:22574736

  18. Rotation Control In A Cylindrical Acoustic Levitator

    NASA Technical Reports Server (NTRS)

    Barmatz, M. B.; Allen, J. L.

    1988-01-01

    Second driver introduces net circulation around levitated sample. Two transducers produce two sets of equal counterrotating acoustic fields. By appropriate adjustment of amplitudes and phases in two transducers, total acoustic field made to consist of two unequal counterrotating fields, producing net torque on levitated sample.

  19. People Patterns: Statistics. Environmental Module for Use in a Mathematics Laboratory Setting.

    ERIC Educational Resources Information Center

    Zastrocky, Michael; Trojan, Arthur

    This module on statistics consists of 18 worksheets that cover such topics as sample spaces, mean, median, mode, taking samples, posting results, analyzing data, and graphing. The last four worksheets require the students to work with samples and use these to compare people's responses. A computer dating service is one result of this work.…

  20. Grain quality traits in a sorghum association mapping panel

    USDA-ARS?s Scientific Manuscript database

    Grain quality traits were analyzed in a diverse sorghum sample set which consisted of 174 sorghum lines (110 non-tannin lines and 64 tannin lines). These samples were previously grouped into five distinct genetic populations which made it possible to compare grain quality traits across the genetic g...

  2. 2169 steel waveform experiments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furnish, Michael David; Alexander, C. Scott; Reinhart, William Dodd

    2012-11-01

    In support of LLNL efforts to develop multiscale models of a variety of materials, we have performed a set of eight gas gun impact experiments on 2169 steel (21% Cr, 6% Ni, 9% Mn, balance predominantly Fe). These experiments provided carefully controlled shock, reshock and release velocimetry data, with initial shock stresses ranging from 10 to 50 GPa (particle velocities from 0.25 to 1.05 km/s). Both windowed and free-surface measurements were included in this experiment set to increase the utility of the data set, as were samples ranging in thickness from 1 to 5 mm. Target physical phenomena included the elastic/plastic transition (Hugoniot elastic limit), the Hugoniot, any phase transition phenomena, and the release path (windowed and free-surface). The Hugoniot was found to be nearly linear, with no indications of the Fe phase transition. Releases were non-hysteretic, and relatively consistent between 3- and 5-mm-thick samples (the 3 mm samples giving slightly lower wavespeeds on release). Reshock tests with explosively welded impactors produced clean results; those with glue bonds showed transient releases prior to the arrival of the reshock, reducing their usefulness for deriving strength information. The free-surface samples, which were steps on a single piece of steel, showed lower wavespeeds for thin (1 mm) samples than for thicker (2 or 4 mm) samples. A configuration used for the last three shots allows release information to be determined from these free surface samples. The sample strength appears to increase with stress from ~1 GPa to ~3 GPa over this range, consistent with other recent work but about 40% above the Steinberg model.

  3. Optimal condition sampling of infrastructure networks.

    DOT National Transportation Integrated Search

    2009-10-15

    Transportation infrastructure systems consist of spatially extensive and long-lived sets of interconnected facilities. Over the past two decades, several new nondestructive inspection technologies have been developed and applied in collectin...

  4. Trail making task performance in inpatients with anorexia nervosa and bulimia nervosa.

    PubMed

    Vall, Eva; Wade, Tracey D

    2015-07-01

    Set-shifting inefficiencies have been consistently identified in adults with anorexia nervosa (AN). It is less clear to what degree similar inefficiencies are present in those with bulimia nervosa (BN). It is also unknown whether perfectionism is related to set-shifting performance. We employed a commonly used set-shifting measure, the Trail Making Test (TMT), to compare the performance of inpatients with AN and BN with a healthy control sample. We also investigated whether perfectionism predicted TMT scores. Only the BN sample showed significantly suboptimal performance, while the AN sample was indistinguishable from controls on all measures. There were no differences between the AN subtypes (restrictive or binge/purge), but group sizes were small. Higher personal standards perfectionism was associated with better TMT scores across groups. Higher concern over mistakes perfectionism predicted better accuracy in the BN sample. Further research into the set-shifting profile of individuals with BN or binge/purge behaviours is needed. Copyright © 2015 John Wiley & Sons, Ltd and Eating Disorders Association.

  5. Fluorescence Excitation Spectroscopy for Phytoplankton Species Classification Using an All-Pairs Method: Characterization of a System with Unexpectedly Low Rank.

    PubMed

    Rekully, Cameron M; Faulkner, Stefan T; Lachenmyer, Eric M; Cunningham, Brady R; Shaw, Timothy J; Richardson, Tammi L; Myrick, Michael L

    2018-03-01

    An all-pairs method is used to analyze phytoplankton fluorescence excitation spectra. An initial set of nine phytoplankton species is analyzed in pairwise fashion to select two optical filter sets, and then the two filter sets are used to explore variations among a total of 31 species in a single-cell fluorescence imaging photometer. Results are presented in terms of pair analyses; we report that 411 of the 465 possible pairings of the larger group of 31 species can be distinguished using the initial nine-species-based selection of optical filters. A bootstrap analysis based on the larger data set shows that the distribution of possible pair separation results based on a randomly selected nine-species initial calibration set is strongly peaked in the 410-415 pair separation range, consistent with our experimental result. Further, the result for filter selection using all 31 species is also 411 pair separations. The set of phytoplankton fluorescence excitation spectra is intuitively high in rank due to the number and variety of pigments that contribute to the spectrum. However, the results in this report are consistent with an effective rank, as determined by a variety of heuristic and statistical methods, in the range of 2-3. These results are reviewed in consideration of how consistent the filter selections are from model to model for the data presented here. We discuss the common observation that rank is generally found to be relatively low even in many seemingly complex circumstances, so that it may be productive to assume a low rank from the beginning. If a low-rank hypothesis is valid, then relatively few samples are needed to explore an experimental space. Under very restricted circumstances for uniformly distributed samples, the minimum number for an initial analysis might be as low as 8-11 random samples for 1-3 factors.
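The idea of a low effective rank can be made concrete with an SVD: count how many singular values are needed to capture most of the squared signal. The sketch below is a generic heuristic under assumed inputs (the 99% energy threshold and the synthetic two-pigment spectra are illustrative choices, not the authors' statistical criteria):

```python
import numpy as np

def effective_rank(spectra, energy=0.99):
    """Number of singular values needed to capture a fraction `energy`
    of the total squared signal (a crude effective-rank heuristic)."""
    s = np.linalg.svd(spectra, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy) + 1)

# synthetic example: two underlying pigment spectra mixed in varying
# proportions -> the spectra matrix should have effective rank 2
grid = np.linspace(0, 1, 50)
comp1 = np.exp(-(grid - 0.3)**2 / 0.01)
comp2 = np.exp(-(grid - 0.7)**2 / 0.01)
mix = np.array([[a, 1 - a] for a in np.linspace(0, 1, 31)])
spectra = mix @ np.vstack([comp1, comp2])
```

With few effective factors, only a handful of well-chosen samples (or filter sets) are needed to span the space, which is the point the abstract makes.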

  6. Behavior of Aluminum in Solid Propellant Combustion

    DTIC Science & Technology

    1982-06-01

    dry pressing 30% Valley Met H-30 aluminum, 7% carnauba wax, and 63% 100 P AP. One sample was prepared using as-received H-30, a second sample used pre... "propellant" formulations. The formulations included dry-pressed AP/Al and AP/Al/wax samples. Sandwiches were also prepared consisting of an aluminum... Binder flame instead of by aluminum exposure during accumulate break-up. Combustion of AP/Al/Wax Samples: A set of propellant samples was prepared by

  7. Application of the criteria for classification of existing chemicals as dangerous for the environment.

    PubMed

    Knacker, T; Schallnaß, H J; Klaschka, U; Ahlers, J

    1995-11-01

    The criteria for classification and labelling of substances as "dangerous for the environment" agreed upon within the European Union (EU) were applied to two sets of existing chemicals. One set (sample A) consisted of 41 randomly selected compounds listed in the European Inventory of Existing Chemical Substances (EINECS). The other set (sample B) comprised 115 substances listed in Annex I of Directive 67/548/EEC which were classified by the EU Working Group on Classification and Labelling of Existing Chemicals. The aquatic toxicity (fish mortality, Daphnia immobilisation, algal growth inhibition), ready biodegradability and n-octanol/water partition coefficient were measured for sample A by one and the same laboratory. For sample B, the available ecotoxicological data originated from many different sources and therefore was rather heterogeneous. In both samples, algal toxicity was the most sensitive effect parameter for most substances. Furthermore, it was found that classification based on a single aquatic test result differs in many cases from classification based on a complete data set, although a correlation exists between the biological end-points of the aquatic toxicity test systems.

  8. Plasma Accelerator and Energy Conversion Research

    DTIC Science & Technology

    1982-10-29

    performance tests have been accomplished. A self-contained recirculating AMTEC device with a thermal-to-electric conversion efficiency of 19% has been... combined efficiency. These two match up particularly well, because thermionic conversion is a high-temperature technique, whereas AMTEC is limited to... EXPERIMENTAL: Samples: The samples were prepared with a high-rate DC magnetron sputtering apparatus (SFI model 1). The sample set consisted of four

  9. Some Tests of Response Membership in Acquired Equivalence Classes

    ERIC Educational Resources Information Center

    Urcuioli, Peter J.; Lionello-DeNolf, Karen; Michalek, Sarah; Vasconcelos, Marco

    2006-01-01

    Pigeons were trained on many-to-one matching in which pairs of samples, each consisting of a visual stimulus and a distinctive pattern of center-key responding, occasioned the same reinforced comparison choice. Acquired equivalence between the visual and response samples then was evaluated by reinforcing new comparison choices to one set of…

  10. Cross-Study Homogeneity of Psoriasis Gene Expression in Skin across a Large Expression Range

    PubMed Central

    Kerkof, Keith; Timour, Martin; Russell, Christopher B.

    2013-01-01

    Background In psoriasis, only limited overlap between sets of genes identified as differentially expressed (psoriatic lesional vs. psoriatic non-lesional) was found using statistical and fold-change cut-offs. To provide a framework for utilizing prior psoriasis data sets we sought to understand the consistency of those sets. Methodology/Principal Findings Microarray expression profiling and qRT-PCR were used to characterize gene expression in PP and PN skin from psoriasis patients. cDNA (three new data sets) and cRNA hybridization (four existing data sets) data were compared using a common analysis pipeline. Agreement between data sets was assessed using varying qualitative and quantitative cut-offs to generate a DEG list in a source data set and then using other data sets to validate the list. Concordance increased from 67% across all probe sets to over 99% across more than 10,000 probe sets when statistical filters were employed. The fold-change behavior of individual genes tended to be consistent across the multiple data sets. We found that genes with <2-fold change values were quantitatively reproducible between pairs of data sets. In a subset of transcripts with a role in inflammation, changes detected by microarray were confirmed by qRT-PCR with high concordance. For transcripts with both PN and PP levels within the microarray dynamic range, microarray and qRT-PCR were quantitatively reproducible, including minimal fold-changes in IL13, TNFSF11, and TNFRSF11B and genes with >10-fold changes in either direction such as CHRM3, IL12B and IFNG. Conclusions/Significance Gene expression changes in psoriatic lesions were consistent across different studies, despite differences in patient selection, sample handling, and microarray platforms, but between-study comparisons showed stronger agreement within than between platforms. We could use cut-offs as low as log10(ratio) = 0.1 (fold-change = 1.26), generating larger gene lists that validate on independent data sets. The reproducibility of PP signatures across data sets suggests that different sample sets can be productively compared. PMID:23308107

  11. An Application-Based Discussion of Construct Validity and Internal Consistency Reliability.

    ERIC Educational Resources Information Center

    Taylor, Dianne L.; Campbell, Kathleen T.

    Several techniques for conducting studies of measurement integrity are explained and illustrated using a heuristic data set from a study of teachers' participation in decision making (D. L. Taylor, 1991). The sample consisted of 637 teachers. It is emphasized that validity and reliability are characteristics of data, and do not inure to tests as…

  12. Population-Based Preference Weights for the EQ-5D Health States Using the Visual Analogue Scale (VAS) in Iran.

    PubMed

    Goudarzi, Reza; Zeraati, Hojjat; Akbari Sari, Ali; Rashidian, Arash; Mohammad, Kazem

    2016-02-01

    Health-related quality of life (HRQoL) is used as a measure to evaluate healthcare interventions and guide policy making. The EuroQol EQ-5D is a widely used generic preference-based instrument for measuring HRQoL. The objective of this study was to develop a value set of the EQ-5D health states for an Iranian population. This cross-sectional study sampled 869 Iranian participants, selected using a stratified probability sampling method. The sample was drawn from individuals living in the city of Tehran, stratified by age and gender, from July to November 2013. Respondents valued 13 health states using the visual analogue scale (VAS) of the EQ-5D. Several fixed-effects regression models were tested to predict the full set of health states. We selected the final model based on the logical consistency of the estimates, the sign and magnitude of the regression coefficients, goodness of fit, and parsimony. We also compared predicted values with value sets from similar studies in the UK and other countries. Our results show that HRQoL does not vary among socioeconomic groups. Modeling at the individual level resulted in an additive model with all coefficients statistically significant, R(2) = 0.55, a value of 0.75 for the best health state (11112), and a value of -0.074 for the worst health state (33333). The value set obtained for the study sample differs markedly from those elicited in developed countries. This study provides the first estimate of an EQ-5D value set based on the VAS in Iran. Given the importance of locally adapted value sets, the use of this value set can be recommended for future studies in Iran and the EMRO region.
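
    The additive model described above can be sketched in a few lines. This is a hedged illustration only: the intercept and per-dimension decrements below are made-up placeholders for the shape of such a model, not the fitted Iranian coefficients.

```python
# Illustrative additive EQ-5D VAS model: predicted value = intercept minus a
# decrement for each dimension at level 2 or 3. All numbers are placeholders,
# NOT the coefficients estimated in the study.

DIMENSIONS = ["mobility", "self_care", "usual_activities", "pain", "anxiety"]

# decrements[dim][level]: drop in VAS value for level 2 / level 3 (illustrative)
DECREMENTS = {
    "mobility":         {2: 0.06, 3: 0.21},
    "self_care":        {2: 0.05, 3: 0.17},
    "usual_activities": {2: 0.04, 3: 0.12},
    "pain":             {2: 0.07, 3: 0.24},
    "anxiety":          {2: 0.05, 3: 0.19},
}
INTERCEPT = 0.93  # illustrative value for full health (state 11111)

def predict_vas(state: str) -> float:
    """Predict the VAS value of a 5-digit EQ-5D state, e.g. '11112'."""
    assert len(state) == 5 and set(state) <= set("123")
    value = INTERCEPT
    for dim, ch in zip(DIMENSIONS, state):
        level = int(ch)
        if level > 1:
            value -= DECREMENTS[dim][level]
    return round(value, 3)
```

    Because the model is additive, any state's value is reconstructed from the five dimension codes alone, which is what makes a value set for all 243 EQ-5D-3L states estimable from 13 directly valued states.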

  13. Root location in random trees: a polarity property of all sampling consistent phylogenetic models except one.

    PubMed

    Steel, Mike

    2012-10-01

    Neutral macroevolutionary models, such as the Yule model, give rise to a probability distribution on the set of discrete rooted binary trees over a given leaf set. Such models can provide a signal as to the approximate location of the root when only the unrooted phylogenetic tree is known, and this signal becomes relatively more significant as the number of leaves grows. In this short note, we show that among models that treat all taxa equally, and are sampling consistent (i.e. the distribution on trees is not affected by taxa yet to be included), all such models, except one (the so-called PDA model), convey some information as to the location of the ancestral root in an unrooted tree. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Targeted Data Extraction of the MS/MS Spectra Generated by Data-independent Acquisition: A New Concept for Consistent and Accurate Proteome Analysis*

    PubMed Central

    Gillet, Ludovic C.; Navarro, Pedro; Tate, Stephen; Röst, Hannes; Selevsek, Nathalie; Reiter, Lukas; Bonner, Ron; Aebersold, Ruedi

    2012-01-01

    Most proteomic studies use liquid chromatography coupled to tandem mass spectrometry to identify and quantify the peptides generated by the proteolysis of a biological sample. However, with the current methods it remains challenging to rapidly, consistently, reproducibly, accurately, and sensitively detect and quantify large fractions of proteomes across multiple samples. Here we present a new strategy that systematically queries sample sets for the presence and quantity of essentially any protein of interest. It consists of using the information available in fragment ion spectral libraries to mine the complete fragment ion maps generated using a data-independent acquisition method. For this study, the data were acquired on a fast, high resolution quadrupole-quadrupole time-of-flight (TOF) instrument by repeatedly cycling through 32 consecutive 25-Da precursor isolation windows (swaths). This SWATH MS acquisition setup generates, in a single sample injection, time-resolved fragment ion spectra for all the analytes detectable within the 400–1200 m/z precursor range and the user-defined retention time window. We show that suitable combinations of fragment ions extracted from these data sets are sufficiently specific to confidently identify query peptides over a dynamic range of 4 orders of magnitude, even if the precursors of the queried peptides are not detectable in the survey scans. We also show that queried peptides are quantified with a consistency and accuracy comparable with that of selected reaction monitoring, the gold standard proteomic quantification method. Moreover, targeted data extraction enables ad libitum quantification refinement and dynamic extension of protein probing by iterative re-mining of the once-and-forever acquired data sets. 
This combination of unbiased, broad range precursor ion fragmentation and targeted data extraction alleviates most constraints of present proteomic methods and should be equally applicable to the comprehensive analysis of other classes of analytes, beyond proteomics. PMID:22261725

  15. Limits of diagnostic accuracy of anti-hepatitis C virus antibodies detection by ELISA and immunoblot assay.

    PubMed

    Suslov, Anatoly P; Kuzin, Stanislav N; Golosova, Tatiana V; Shalunova, Nina V; Malyshev, Nikolai A; Sadikova, Natalia V; Vavilova, Lubov M; Somova, Anna V; Musina, Elena E; Ivanova, Maria V; Kipor, Tatiana T; Timonin, Igor M; Kuzina, Lubov E; Godkov, Mihail A; Bajenov, Alexei I; Nesterenko, Vladimir G

    2002-07-01

    When human serum samples are tested for anti-hepatitis C virus (HCV) antibodies using different ELISA kits as well as immunoblot assay kits, discrepant results often occur, and the diagnosis of HCV infection in such sera remains unclear. The purpose of this investigation was to define the limits of HCV serodiagnostics. Overall, 7 different test kits from domestic and foreign manufacturers were used to test the sampled sera. A preliminary comparative study using seroconversion panels PHV905, PHV907, and PHV908 was performed, and a reference kit (Murex anti-HCV version 4) was chosen as the most sensitive on the basis of its results. Overall, 1640 serum samples were screened using different anti-HCV ELISA kits, and 667 of them gave discrepant results in at least two kits. These sera were then tested using three anti-HCV ELISA kits (first set of 377 samples) or four anti-HCV ELISA kits (second set of 290 samples) under reference-laboratory conditions. In the first set 17.2% of samples remained discrepant, and in the second set 13.4%. "Discrepant" sera were further tested with the RIBA 3.0 and INNO-LIA immunoblot confirmatory assays, but approximately 5-7% of them remained undetermined after all tests. For samples with a signal-to-cutoff ratio higher than 3.0, a high rate of consistency among the reference kit, routine ELISA, and the INNO-LIA immunoblot assay was observed. On the other hand, the results of testing 27 "problematic" sera in RIBA 3.0 and INNO-LIA were consistent in only 55.5% of cases. Analysis of the spectrum of antigens reactive with antibodies in "problematic" sera demonstrated a predominance of Core, NS3, and NS4 antigens for sera positive in RIBA 3.0, and of Core and NS3 antigens for sera positive in INNO-LIA. To overcome the problem of undetermined sera, methods based on other principles, as well as alternative criteria for the diagnosis of HCV infection, are discussed.

  16. Silica dust exposure: Effect of filter size to compliance determination

    NASA Astrophysics Data System (ADS)

    Amran, Suhaily; Latif, Mohd Talib; Khan, Md Firoz; Leman, Abdul Mutalib; Goh, Eric; Jaafar, Shoffian Amin

    2016-11-01

    Monitoring of respirable dust was performed using an integrated sampling system consisting of a sampling pump attached to filter media and a separating device such as a cyclone or special cassette. Depending on the selected method, the filter is either a 25 mm or a 37 mm polyvinyl chloride (PVC) filter. The aim of this study was to compare the performance of the two filter types during personal respirable dust sampling for silica dust under field conditions. The comparison focused on the final compliance judgment based on both data sets. Eight-hour parallel sampling of personal respirable dust exposure was performed on 30 crusher operators at six quarries. Each crusher operator carried a parallel set of integrated sampling trains containing either a 25 mm or a 37 mm PVC filter. Each set consisted of a standard-flow SKC sampler attached to an SKC GS3 cyclone and a two-piece cassette loaded with a 5.0 µm PVC filter. Samples were analyzed by the gravimetric technique. Personal respirable dust exposure between the two filter types showed a significant positive correlation (p < 0.05) with a moderate relationship (r2 = 0.6431). Personal exposure based on the 25 mm PVC filter indicated 0.1% non-compliance across the overall data, while the 37 mm PVC filter indicated a similar finding at 0.4%. Both data sets showed similar arithmetic means (AM) and geometric means (GM). Overall, we concluded that personal respirable dust exposure based on either a 25 mm or a 37 mm PVC filter gives a similar compliance determination. Both filters are reliable for respirable dust monitoring of silica-related exposure.

  17. Differential geometry techniques for sets of nonlinear partial differential equations

    NASA Technical Reports Server (NTRS)

    Estabrook, Frank B.

    1990-01-01

    An attempt is made to show that the Cartan theory of partial differential equations can be a useful technique for applied mathematics. Techniques for finding consistent subfamilies of solutions that are generically rich and well-posed and for introducing potentials or other usefully consistent auxiliary fields are introduced. An extended sample calculation involving the Korteweg-de Vries equation is given.

  18. Measuring wildland fire leadership: the crewmember perceived leadership scale

    Treesearch

    Alexis L. Waldron; David P. Schary; Bradley J. Cardinal

    2015-01-01

    The aims of this research were to develop and test a scale used to measure leadership in wildland firefighting using two samples of USA wildland firefighters. The first collection of data occurred in the spring and early summer and consisted of an online survey. The second set of data was collected towards late summer and early fall (autumn). The second set of...

  19. 40 CFR 53.34 - Test procedure for methods for PM10 and Class I methods for PM2.5.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... simultaneous PM10 or PM2.5 measurements as necessary (see table C-4 of this subpart), each set consisting of...) in appendix A to this subpart). (f) Sequential samplers. For sequential samplers, the sampler shall be configured for the maximum number of sequential samples and shall be set for automatic collection...

  20. Hierarchical cluster analysis of technical replicates to identify interferents in untargeted mass spectrometry metabolomics.

    PubMed

    Caesar, Lindsay K; Kvalheim, Olav M; Cech, Nadja B

    2018-08-27

    Mass spectral data sets often contain experimental artefacts, and data filtering prior to statistical analysis is crucial to extract reliable information. This is particularly true in untargeted metabolomics analyses, where the analyte(s) of interest are not known a priori. It is often assumed that chemical interferents (i.e. solvent contaminants such as plasticizers) are consistent across samples, and can be removed by background subtraction from blank injections. On the contrary, it is shown here that chemical contaminants may vary in abundance across each injection, potentially leading to their misidentification as relevant sample components. With this metabolomics study, we demonstrate the effectiveness of hierarchical cluster analysis (HCA) of replicate injections (technical replicates) as a methodology to identify chemical interferents and reduce their contaminating contribution to metabolomics models. Pools of metabolites with varying complexity were prepared from the botanical Angelica keiskei Koidzumi and spiked with known metabolites. Each set of pools was analyzed in triplicate and at multiple concentrations using ultraperformance liquid chromatography coupled to mass spectrometry (UPLC-MS). Before filtering, HCA failed to cluster replicates in the data sets. To identify contaminant peaks, we developed a filtering process that evaluated the relative peak area variance of each variable within triplicate injections. These interferent peaks were found across all samples, but did not show consistent peak area from injection to injection, even when evaluating the same chemical sample. This filtering process identified 128 ions that appear to originate from the UPLC-MS system. Data sets collected for a high number of pools with comparatively simple chemical composition were highly influenced by these chemical interferents, as were samples that were analyzed at a low concentration. 
When chemical interferent masses were removed, technical replicates clustered in all data sets. This work highlights the importance of technical replication in mass spectrometry-based studies, and presents a new application of HCA as a tool for evaluating the effectiveness of data filtering prior to statistical analysis. Copyright © 2018 Elsevier B.V. All rights reserved.
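
    The replicate-variance filter described above can be sketched as follows. This is a minimal stdlib illustration, not the authors' code: the RSD cutoff, the fraction-of-samples rule, and the peak-table layout are all assumptions made for the sketch.

```python
# Sketch of interferent filtering: for each m/z feature, compute the relative
# standard deviation (RSD) of peak areas across triplicate injections of the
# same sample. Features with high within-triplicate RSD in many samples behave
# like instrument interferents rather than sample components, and are dropped.
from statistics import mean, stdev

def rsd(areas):
    m = mean(areas)
    return stdev(areas) / m if m else float("inf")

def flag_interferents(peak_table, rsd_cutoff=0.3, sample_fraction=0.5):
    """peak_table: {feature_id: {sample_id: [area_rep1, area_rep2, area_rep3]}}.
    Returns the set of features whose within-triplicate RSD exceeds the cutoff
    in at least `sample_fraction` of samples."""
    flagged = set()
    for feature, samples in peak_table.items():
        n_bad = sum(1 for reps in samples.values() if rsd(reps) > rsd_cutoff)
        if n_bad >= sample_fraction * len(samples):
            flagged.add(feature)
    return flagged

# Erratic "plasticizer" peaks vary wildly across injections of the same sample;
# genuine metabolite peaks repeat closely.
table = {"plasticizer": {"s1": [10, 200, 50], "s2": [300, 5, 90]},
         "metabolite": {"s1": [100, 102, 98], "s2": [55, 50, 52]}}
interferents = flag_interferents(table)
```

    Removing the flagged features before clustering is what lets technical replicates group together, as reported above.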

  1. VizieR Online Data Catalog: Search for extraterrestrial intelligence (Isaacson+, 2017)

    NASA Astrophysics Data System (ADS)

    Isaacson, H.; Siemion, A. P. V.; Marcy, G. W.; Lebofsky, M.; Price, D. C.; MacMahon, D.; Croft, S.; Deboer, D.; Hickish, J.; Werthimer, D.; Sheikh, S.; Hellbourg, G.; Enriquez, J. E.

    2017-08-01

    The stellar sample is defined by two selection criteria. The first is a volume-limited sample of stars within 5pc of the Sun. The second is a spectral class complete sample consisting of stars across the main sequence and some giant branch stars, all within 50pc. We combined the two sub-samples (5pc and 5-50pc) to produce the final set of 1709 target stars that are listed in Table 1. (1 data file).

  2. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
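
    A toy version of the nested-entropy idea can be sketched as below. This is a simplified illustration under strong assumptions (one experimental parameter, binary outcomes, an ensemble of threshold models), not the authors' implementation; it keeps only the two key ingredients named in the abstract: a maintained set of experiment samples and a rising entropy threshold.

```python
# Nested-entropy-style search: keep a population of candidate experiments,
# repeatedly discard the least informative one, and draw replacements whose
# outcome entropy must exceed the rising threshold set by the discarded member.
import math, random

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def outcome_entropy(x, models):
    """Entropy of the binary outcome predicted at setting x by model ensemble."""
    p1 = sum(m(x) for m in models) / len(models)
    return shannon_entropy([p1, 1 - p1])

def nested_entropy_search(models, n_live=20, n_iter=200, rng=random.Random(0)):
    live = [rng.uniform(0, 1) for _ in range(n_live)]
    for _ in range(n_iter):
        worst = min(live, key=lambda x: outcome_entropy(x, models))
        threshold = outcome_entropy(worst, models)
        for _ in range(100):                  # rejection-sample above threshold
            cand = rng.uniform(0, 1)
            if outcome_entropy(cand, models) > threshold:
                live[live.index(worst)] = cand
                break
        else:
            break                             # no further improvement possible
    return max(live, key=lambda x: outcome_entropy(x, models))

# The threshold models disagree most near x = 0.5, so the most informative
# experiment (maximum predicted-outcome entropy) lies in that region.
models = [(lambda t: (lambda x: x > t))(t) for t in (0.4, 0.45, 0.5, 0.55, 0.6)]
best = nested_entropy_search(models)
```

    The rising threshold is what distinguishes this from plain random search: each replacement must be strictly more informative than the worst current member, so the live set climbs the entropy landscape.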

  3. A new statistical distance scale for planetary nebulae

    NASA Astrophysics Data System (ADS)

    Ali, Alaa; Ismail, H. A.; Alsolami, Z.

    2015-05-01

    In the first part of the present article we discuss the consistency among different individual distance methods for Galactic planetary nebulae, while in the second part we develop a new statistical distance scale based on a calibrating sample of well-determined distances. A set of 315 planetary nebulae with individual distances was extracted from the literature. Inspection of the data set indicates that the accuracy of the distances varies among the individual methods and also among different sources in which the same individual method was applied. Therefore, we derive a reliable weighted mean distance for each object by accounting for the distance error and the weight of each individual method. The results reveal that the individual methods discussed are consistent with each other, except for the gravity method, which yields larger distances than the other individual methods. From the initial data set, we construct a standard calibrating sample consisting of 82 objects. This sample is restricted to objects with distances determined by at least two different individual methods, except for a few objects with trusted distances determined by the trigonometric, spectroscopic, or cluster-membership methods. In addition to its well-determined distances, this sample offers several advantages over those used in prior distance scales. It is used to recalibrate the mass-radius and radio surface brightness temperature-radius relationships. An average error of ~30% is estimated for the new distance scale. The new scale is compared with the most widely used statistical scales in the literature, and the results show that it agrees with the majority of them to within ~±20%. Furthermore, the new scale yields a weighted mean distance to the Galactic center of 7.6±1.35 kpc, in good agreement with the recent measurement of Malkin (2013).
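
    The weighted-mean step can be illustrated with standard inverse-variance weighting. This is a sketch of the general technique only; the exact weighting scheme used in the paper (which also weights by method) may differ.

```python
# Inverse-variance weighted mean: each individual distance estimate contributes
# with weight 1/sigma^2, so precise methods dominate both the combined distance
# and its standard error.
import math

def weighted_mean_distance(estimates):
    """estimates: list of (distance_kpc, error_kpc) pairs from different methods.
    Returns (weighted mean distance, its standard error)."""
    weights = [1.0 / (err ** 2) for _, err in estimates]
    total = sum(weights)
    mean = sum(w * dist for w, (dist, _) in zip(weights, estimates)) / total
    return mean, math.sqrt(1.0 / total)

# Hypothetical trigonometric, spectroscopic, and statistical estimates for one
# nebula (invented numbers for illustration):
d, e = weighted_mean_distance([(1.30, 0.10), (1.45, 0.30), (1.20, 0.20)])
```

    Note how the combined value sits closest to the most precise (smallest-error) estimate, and the combined error is smaller than any individual error.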

  4. Breast Reference Set Application: Chris Li-FHCRC (2014) — EDRN Public Portal

    Cancer.gov

    This application proposes to use Reference Set #1. We request access to serum samples collected at the time of breast biopsy from subjects with IC (n=30) or benign disease without atypia (n=30). Statistical power: With 30 BC cases and 30 normal controls, a 25% difference in mean metabolite levels can be detected between groups with 80% power and α=0.05, assuming coefficients of variation of 30%, consistent with our past studies. These sample sizes appear sufficient to enable detection of changes similar in magnitude to those previously reported in pre-clinical (BC recurrence) specimens (20).

  5. Evaluation of setting time and flow properties of self-synthesize alginate impressions

    NASA Astrophysics Data System (ADS)

    Halim, Calista; Cahyanto, Arief; Sriwidodo; Harsatiningsih, Zulia

    2018-02-01

    Alginate is an elastic hydrocolloid dental impression material used to obtain a negative reproduction of the oral mucosa, such as recording soft-tissue and occlusal relationships. The aim of the present study was to synthesize alginate and to determine its setting time and flow properties. There were five groups of alginate, comprising fifty samples of self-synthesized alginate and a commercial alginate impression product. The fifty samples were divided between two tests, twenty-five each for the setting time and flow tests. Setting time was recorded in s, while flow was recorded in mm2. The fastest setting time was in group three (148.8 s) and the slowest in group four. The highest flow result was in group three (69.70 mm2) and the lowest in group one (58.34 mm2). Results were analyzed statistically by one-way ANOVA (α = 0.05), which showed a statistically significant difference in setting time but no statistically significant difference in flow properties between the self-synthesized alginates and the commercial alginate impression product. In conclusion, the alginate impression material was successfully self-synthesized, and variations in composition influence the setting time and flow properties. Group three most closely resembles the setting time of the control group; group four most closely resembles its flow.

  6. Bayesian Integration and Classification of Composition C-4 Plastic Explosives Based on Time-of-Flight-Secondary Ion Mass Spectrometry and Laser Ablation-Inductively Coupled Plasma Mass Spectrometry.

    PubMed

    Mahoney, Christine M; Kelly, Ryan T; Alexander, Liz; Newburn, Matt; Bader, Sydney; Ewing, Robert G; Fahey, Albert J; Atkinson, David A; Beagley, Nathaniel

    2016-04-05

    Time-of-flight-secondary ion mass spectrometry (TOF-SIMS) and laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS) were used for characterization and identification of unique signatures from a series of 18 Composition C-4 plastic explosives. The samples were obtained from various commercial and military sources around the country. Positive and negative ion TOF-SIMS data were acquired directly from the C-4 residue on Si surfaces, where the positive ion mass spectra obtained were consistent with the major composition of organic additives, and the negative ion mass spectra were more consistent with explosive content in the C-4 samples. Each series of mass spectra was subjected to partial least squares-discriminant analysis (PLS-DA), a multivariate statistical analysis approach which serves to first find the areas of maximum variance within different classes of C-4 and subsequently to classify unknown samples based on correlations between the unknown data set and the original data set (often referred to as a training data set). This method was able to successfully classify test samples of C-4, though with a limited degree of certainty. The classification accuracy of the method was further improved by integrating the positive and negative ion data using a Bayesian approach. The TOF-SIMS data was combined with a second analytical method, LA-ICPMS, which was used to analyze elemental signatures in the C-4. The integrated data were able to classify test samples with a high degree of certainty. Results indicate that this Bayesian integrated approach constitutes a robust classification method that should be employable even in dirty samples collected in the field.
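
    The integration step can be sketched as a naive-Bayes-style combination. Treating the two instruments as conditionally independent is our assumption for this illustration; the published Bayesian integration may differ in detail.

```python
# Fusing per-class probabilities from two classifiers (e.g. one trained on
# TOF-SIMS spectra, one on LA-ICPMS elemental signatures): under conditional
# independence, multiply the per-class scores with the prior and renormalize.

def fuse_posteriors(p_tofsims, p_laicpms, prior=None):
    """p_tofsims, p_laicpms: {source_class: probability} from each classifier.
    Returns the integrated posterior over source classes."""
    classes = p_tofsims.keys()
    prior = prior or {c: 1.0 / len(p_tofsims) for c in classes}
    unnorm = {c: prior[c] * p_tofsims[c] * p_laicpms[c] for c in classes}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# Each instrument alone is ambiguous about the source; together they single
# out source "A" with much higher certainty (invented numbers).
fused = fuse_posteriors({"A": 0.6, "B": 0.3, "C": 0.1},
                        {"A": 0.5, "B": 0.1, "C": 0.4})
```

    This is why the integrated classifier can reach a "high degree of certainty" even when each modality alone gives only a limited one: independent weak evidence multiplies.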

  7. Four hundred or more participants needed for stable contingency table estimates of clinical prediction rule performance.

    PubMed

    Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan

    2017-02-01

    To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
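
    The resampling experiment can be sketched with synthetic data. This is illustrative only: the prevalence and test sensitivity below are invented, and the authors resampled real cohorts rather than simulating one.

```python
# Repeated random sampling at a fixed sample size n: build the diseased margin
# of a 2x2 contingency table each time and record the sensitivity estimate.
# The spread of those estimates shows how unstable small-sample tables are.
import random
from statistics import pstdev

def sensitivity_spread(n, repeats=100, prevalence=0.3, sens=0.8,
                       rng=random.Random(1)):
    estimates = []
    for _ in range(repeats):
        tp = fn = 0
        for _ in range(n):
            if rng.random() < prevalence:          # a diseased participant
                positive = rng.random() < sens     # rule fires (true positive)?
                tp += positive
                fn += not positive
        if tp + fn:
            estimates.append(tp / (tp + fn))       # sensitivity for this sample
    return pstdev(estimates)

# Spread shrinks roughly as 1/sqrt(n): small cohorts give unstable rules.
spread_small, spread_large = sensitivity_spread(100), sensitivity_spread(1600)
```

    Running this shows the same qualitative picture as the paper: estimates from samples of a few hundred scatter widely around the true value, and the scatter stabilizes only as n grows into the hundreds.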

  8. Automation system for measurement of gamma-ray spectra of induced activity for multi-element high volume neutron activation analysis at the reactor IBR-2 of Frank Laboratory of Neutron Physics at the joint institute for nuclear research

    NASA Astrophysics Data System (ADS)

    Pavlov, S. S.; Dmitriev, A. Yu.; Chepurchenko, I. A.; Frontasyeva, M. V.

    2014-11-01

    The automation system for the measurement of gamma-ray spectra of induced activity for multi-element, high-volume neutron activation analysis (NAA) was designed, developed, and implemented at the IBR-2 reactor of the Frank Laboratory of Neutron Physics. The system consists of three automatic sample changers for three Canberra HPGe detector-based gamma spectrometry systems. Each sample changer consists of a two-axis linear positioning module (M202A, DriveSet) and a disk with 45 slots for sample containers. Each sample changer is controlled by a Xemo S360U controller (Systec). The positioning accuracy reaches 0.1 mm. Dedicated software performs automatic sample changing and gamma-spectrum measurement while continuously interacting with the NAA database.

  9. SABRE: a method for assessing the stability of gene modules in complex tissues and subject populations.

    PubMed

    Shannon, Casey P; Chen, Virginia; Takhar, Mandeep; Hollander, Zsuzsanna; Balshaw, Robert; McManus, Bruce M; Tebbutt, Scott J; Sin, Don D; Ng, Raymond T

    2016-11-14

    Gene network inference (GNI) algorithms can be used to identify sets of coordinately expressed genes, termed network modules from whole transcriptome gene expression data. The identification of such modules has become a popular approach to systems biology, with important applications in translational research. Although diverse computational and statistical approaches have been devised to identify such modules, their performance behavior is still not fully understood, particularly in complex human tissues. Given human heterogeneity, one important question is how the outputs of these computational methods are sensitive to the input sample set, or stability. A related question is how this sensitivity depends on the size of the sample set. We describe here the SABRE (Similarity Across Bootstrap RE-sampling) procedure for assessing the stability of gene network modules using a re-sampling strategy, introduce a novel criterion for identifying stable modules, and demonstrate the utility of this approach in a clinically-relevant cohort, using two different gene network module discovery algorithms. The stability of modules increased as sample size increased and stable modules were more likely to be replicated in larger sets of samples. Random modules derived from permutated gene expression data were consistently unstable, as assessed by SABRE, and provide a useful baseline value for our proposed stability criterion. Gene module sets identified by different algorithms varied with respect to their stability, as assessed by SABRE. Finally, stable modules were more readily annotated in various curated gene set databases. The SABRE procedure and proposed stability criterion may provide guidance when designing systems biology studies in complex human disease and tissues.
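
    The bootstrap-and-compare idea behind SABRE can be sketched as follows. This is our paraphrase with a deliberately trivial module discoverer; the paper's stability criterion and module-discovery algorithms are more involved.

```python
# SABRE-style stability check: re-run module discovery on bootstrap re-samples
# of the subjects, and score each original module by its best Jaccard overlap
# with the modules found in each re-sample.
import random

def jaccard(a, b):
    return len(a & b) / len(a | b)

def module_stability(samples, discover_modules, n_boot=50, rng=random.Random(0)):
    """samples: list of per-subject observations. discover_modules: callable
    mapping a sample list to a list of gene sets. Returns the mean best-match
    Jaccard score of each original module across bootstrap re-samples."""
    reference = discover_modules(samples)
    scores = [0.0] * len(reference)
    for _ in range(n_boot):
        boot = [rng.choice(samples) for _ in samples]   # resample subjects
        boot_modules = discover_modules(boot)
        for i, mod in enumerate(reference):
            scores[i] += max((jaccard(mod, m) for m in boot_modules), default=0.0)
    return [s / n_boot for s in scores]

def toy_discover(samples):
    """Toy 'module discovery': split genes by the sign of their summed values."""
    genes = samples[0].keys()
    up = {g for g in genes if sum(s[g] for s in samples) > 0}
    return [up, set(genes) - up]

# g1..g4 behave consistently across subjects; g5 is borderline, so the modules
# it joins flip between bootstrap re-samples and stability drops below 1.
data = [{"g1": 1, "g2": 2, "g3": -1, "g4": -2, "g5": 1 if i < 5 else -1}
        for i in range(10)]
scores = module_stability(data, toy_discover)
```

    Genuinely coherent modules keep high best-match overlap under resampling, while modules built on borderline genes (or on permuted data, as in the paper's baseline) do not.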

  10. Contrasting lexical similarity and formal definitions in SNOMED CT: consistency and implications.

    PubMed

    Agrawal, Ankur; Elhanan, Gai

    2014-02-01

    To quantify the presence of, and evaluate an approach for detecting, inconsistencies in the formal definitions of SNOMED CT (SCT) concepts utilizing a lexical method. Utilizing SCT's Procedure hierarchy, we algorithmically formulated similarity sets: groups of concepts with similar lexical structure in their fully specified names. We formulated five random samples, each with 50 similarity sets, based on the same parameters: number of parents, attributes, groups, all of the former, as well as a randomly selected control sample. All samples' sets were reviewed for types of formal definition inconsistencies: hierarchical, attribute assignment, attribute target values, groups, and definitional. For the Procedure hierarchy, 2111 similarity sets were formulated, covering 18.1% of eligible concepts. The evaluation revealed that 38% (control) to 70% (different relationships) of similarity sets within the samples exhibited significant inconsistencies. The rate of inconsistencies for the sample with different relationships was highly significant compared to control, as were the numbers of attribute assignment and hierarchical inconsistencies within their respective samples. While, at this time of the HITECH initiative, the formal definitions of SCT are only a minor consideration, they are essential in the grand scheme of sophisticated, meaningful use of captured clinical data. However, a significant portion of the concepts in the most semantically complex hierarchy of SCT, the Procedure hierarchy, are modeled inconsistently in a manner that affects their computability. Lexical methods can efficiently identify such inconsistencies and possibly allow for their algorithmic resolution. Copyright © 2013 Elsevier Inc. All rights reserved.
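
    The grouping step can be sketched as follows. This is a much-simplified illustration: the published algorithm for forming similarity sets is more nuanced than a bag-of-words match, and the stop-word handling here is an assumption.

```python
# Toy lexical similarity sets: strip the semantic tag from each fully specified
# name, normalize to a bag of words (ignoring a stop word), and group names that
# share the same bag, so word-order variants land in the same set.
from collections import defaultdict

def similarity_sets(fully_specified_names, stop_words=frozenset({"of"})):
    groups = defaultdict(list)
    for name in fully_specified_names:
        base = name.rsplit("(", 1)[0]                  # drop "(procedure)" etc.
        key = frozenset(w for w in base.lower().split() if w not in stop_words)
        groups[key].append(name)
    return [g for g in groups.values() if len(g) > 1]

# Hypothetical names for illustration only:
sets_found = similarity_sets([
    "Biopsy of lung (procedure)",
    "Lung biopsy (procedure)",
    "Excision of lesion of skin (procedure)",
])
```

    Concepts grouped this way are then candidates for review: lexically near-identical names whose formal definitions differ are exactly the inconsistencies the study counts.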

  11. Ability of Children with Learning Disabilities and Children with Autism Spectrum Disorder to Recognize Feelings from Facial Expressions and Body Language

    ERIC Educational Resources Information Center

    Girli, Alev; Dogmaz, Sila

    2018-01-01

    In this study, children with learning disability (LD) were compared with children with autism spectrum disorder (ASD) in terms of identifying emotions from photographs with certain face and body expressions. The sample consisted of a total of 82 children aged 7-19 years living in Izmir in Turkey. A total of 6 separate sets of slides, consisting of…

  12. Storm-water data for Bear Creek basin, Jackson County, Oregon 1977-78

    USGS Publications Warehouse

    Wittenberg, Loren A.

    1978-01-01

    Storm-water-quality samples were collected from four subbasins in the Bear Creek basin in southern Oregon. These subbasins vary in drainage size, channel slope, effective impervious area, and land use. Automatic water-quality samplers and precipitation and discharge gages were set up in each of the four subbasins. During the period October 1977 through May 1978, 19 sets of samples, including two base-flow samples, were collected. Fecal coliform bacteria colonies per 100-milliliter sample ranged from less than 1,000 to more than 1,000,000. Suspended-sediment concentrations ranged from less than 1 to more than 2,300 milligrams per liter. One subbasin consisting of downtown businesses and streets with heavy vehicular traffic was monitored for lead. Total lead values ranging from 100 to 1,900 micrograms per liter were measured during one storm event.

  13. The 2-degree Field Lensing Survey: photometric redshifts from a large new training sample to r < 19.5

    NASA Astrophysics Data System (ADS)

    Wolf, C.; Johnson, A. S.; Bilicki, M.; Blake, C.; Amon, A.; Erben, T.; Glazebrook, K.; Heymans, C.; Hildebrandt, H.; Joudaki, S.; Klaes, D.; Kuijken, K.; Lidman, C.; Marin, F.; Parkinson, D.; Poole, G.

    2017-04-01

    We present a new training set for estimating empirical photometric redshifts of galaxies, which was created as part of the 2-degree Field Lensing Survey project. This training set is located in a ˜700 deg2 area of the Kilo-Degree-Survey South field and is randomly selected and nearly complete at r < 19.5. We investigate the photometric redshift performance obtained with ugriz photometry from VST-ATLAS and W1/W2 from WISE, based on several empirical and template methods. The best redshift errors are obtained with kernel-density estimation (KDE), as are the lowest biases, which are consistent with zero within statistical noise. The 68th percentiles of the redshift scatter for magnitude-limited samples at r < (15.5, 17.5, 19.5) are (0.014, 0.017, 0.028). In this magnitude range, there are no known ambiguities in the colour-redshift map, consistent with a small rate of redshift outliers. In the fainter regime, the KDE method produces p(z) estimates per galaxy that represent unbiased and accurate redshift frequency expectations. The p(z) sum over any subsample is consistent with the true redshift frequency plus Poisson noise. Further improvements in redshift precision at r < 20 would mostly be expected from filter sets with narrower passbands to increase the sensitivity of colours to small changes in redshift.
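
    The kernel-density estimation (KDE) approach mentioned above produces a p(z) estimate per galaxy by weighting training galaxies according to their proximity in colour space. A minimal sketch, assuming a Gaussian kernel and illustrative bandwidths (the survey's actual KDE implementation is not specified here):

```python
import numpy as np

def photoz_pdf(colors, train_colors, train_z, zgrid, h_color=0.05, h_z=0.02):
    """KDE p(z): weight training galaxies by a Gaussian kernel in colour
    space, then smooth their redshifts onto a grid with bandwidth h_z.
    Bandwidths are illustrative assumptions."""
    d2 = np.sum((train_colors - colors) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / h_color ** 2)
    kern = np.exp(-0.5 * ((zgrid[None, :] - train_z[:, None]) / h_z) ** 2)
    pdf = (w[:, None] * kern).sum(axis=0)
    return pdf / (pdf.sum() * (zgrid[1] - zgrid[0]))  # normalize to unit area

# toy training set: one colour index, two redshift populations
train_colors = np.array([[0.0], [1.0]])
train_z = np.array([0.2, 0.6])
zgrid = np.linspace(0.0, 1.0, 501)
pdf = photoz_pdf(np.array([0.0]), train_colors, train_z, zgrid)
```

    Summing such p(z) estimates over a subsample yields a redshift frequency expectation, which is the property the abstract reports as unbiased up to Poisson noise.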

  14. Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method Implementation Research.

    PubMed

    Palinkas, Lawrence A; Horwitz, Sarah M; Green, Carla A; Wisdom, Jennifer P; Duan, Naihua; Hoagwood, Kimberly

    2015-09-01

    Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.

  15. Results Of Routine Strip Effluent Hold Tank, Decontaminated Salt Solution Hold Tank, And Caustic Wash Tank Samples From Modular Caustic-Side Solvent Extraction Unit During Macrobatch 4 Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, T. B.; Fink, S. D.

    Strip Effluent Hold Tank (SEHT), Decontaminated Salt Solution Hold Tank (DSSHT), and Caustic Wash Tank (CWT) samples from several of the "microbatches" of Integrated Salt Disposition Project (ISDP) Salt Batch ("Macrobatch") 4 have been analyzed for ²³⁸Pu, ⁹⁰Sr, and ¹³⁷Cs, and by inductively coupled plasma emission spectroscopy (ICPES). Furthermore, samples from the CWT have been analyzed by a variety of methods to investigate a decline in the decontamination factor (DF) of the cesium observed at MCU. The results indicate good decontamination performance within process design expectations. While the data set is sparse, the results of this set and the previous set of results for Macrobatch 3 samples indicate generally consistent operations. There is no indication of a disruption in plutonium and strontium removal. The average cesium DF and concentration factor (CF) for samples obtained from Macrobatch 4 are slightly lower than for Macrobatch 3, but still well within operating parameters. The DSSHT samples show continued presence of titanium, likely from leaching of the monosodium titanate in the Actinide Removal Process (ARP).

  16. Irradiation chamber and sample changer for biological samples

    NASA Astrophysics Data System (ADS)

    Kraft, G.; Daues, H. W.; Fischer, B.; Kopf, U.; Liebold, H. P.; Quis, D.; Stelzer, H.; Kiefer, J.; Schöpfer, F.; Schneider, E.; Weber, K.; Wulf, H.; Dertinger, H.

    1980-01-01

    This paper describes an irradiation system with which living cells of different origins are irradiated with heavy ion beams (18 ⩽ Z ⩽ 92) at energies up to 10 MeV/amu. The system consists of a beam monitor connected to the vacuum system of the accelerator and the irradiation chamber, which contains the biological samples under atmospheric pressure. The requirements and aims of the setup are discussed. The first results with Saccharomyces cerevisiae and Chinese hamster tissue cells are presented.

  17. Determination of the reduced matrix of the piezoelectric, dielectric, and elastic material constants for a piezoelectric material with C∞ symmetry.

    PubMed

    Sherrit, Stewart; Masys, Tony J; Wiederick, Harvey D; Mukherjee, Binu K

    2011-09-01

    We present a procedure for determining the reduced piezoelectric, dielectric, and elastic coefficients for a C(∞) material, including losses, from a single disk sample. Measurements have been made on a Navy III lead zirconate titanate (PZT) ceramic sample and the reduced matrix of coefficients for this material is presented. In addition, we present the transform equations, in reduced matrix form, to other consistent material constant sets. We discuss the propagation of errors in going from one material data set to another and look at the limitations inherent in direct calculations of other useful coefficients from the data.

  18. Wound healing outcomes: Using big data and a modified intent-to-treat method as a metric for reporting healing rates.

    PubMed

    Ennis, William J; Hoffman, Rachel A; Gurtner, Geoffrey C; Kirsner, Robert S; Gordon, Hanna M

    2017-08-01

    Chronic wounds are increasing in prevalence and are a costly problem for the US healthcare system and throughout the world. Typically, outcomes studies in the field of wound care have been limited to small clinical trials, comparative effectiveness cohorts, and attempts to extrapolate results from claims databases. As a result, outcomes in real-world clinical settings may differ from these published studies. This study presents a modified intent-to-treat framework for measuring wound outcomes and measures the consistency of population-based outcomes across two distinct settings. In this retrospective observational analysis, we describe the largest cohort of patient wound outcomes to date, derived from 626 hospital-based clinics and one academic tertiary care clinic. We present the results of a modified intent-to-treat analysis of wound outcomes as well as demographic and descriptive data. After applying the exclusion criteria, the final analytic sample includes the outcomes from 667,291 wounds in the national sample and 1,788 wounds in the academic sample. We found a consistent modified intent-to-treat healing rate of 74.6% from the 626 clinics and 77.6% in the academic center. We recommend that a standard modified intent-to-treat healing rate be used to report wound outcomes to allow for consistency and comparability in measurement across providers, payers, and healthcare systems. © 2017 by the Wound Healing Society.

  19. Portable detection system of vegetable oils based on laser induced fluorescence

    NASA Astrophysics Data System (ADS)

    Zhu, Li; Zhang, Yinchao; Chen, Siying; Chen, He; Guo, Pan; Mu, Taotao

    2015-11-01

    Food safety, especially for edible oils, has attracted increasing attention recently. Many methods and instruments have emerged to analyze edible oils, covering both oil classification and adulteration detection. It is well known that adulteration detection builds on classification. In this paper, a portable detection system based on laser-induced fluorescence is proposed and designed to classify various edible oils (olive, rapeseed, walnut, peanut, linseed, sunflower, and corn oils). A 532 nm laser module is used in this equipment, and all the components are assembled into a module (100 × 100 × 25 mm). A total of 700 sets of fluorescence data (100 sets for each type of oil) were collected. To classify the different edible oils, principal component analysis and a support vector machine were employed in the data analysis. The training set consisted of 560 sets of data (80 sets per oil) and the test set consisted of 140 sets of data (20 sets per oil). The recognition rate is up to 99%, which demonstrates the reliability of this portable system. Being nonintrusive and requiring no sample preparation, the portable system can be effectively applied to food inspection.
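
    The dimensionality-reduction-plus-classifier pipeline described above can be sketched as follows. The synthetic spectra, peak positions, train/test split sizes, and the nearest-centroid classifier (a dependency-free stand-in for the paper's support vector machine) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 200)

def spectra(center, n):
    """Synthetic stand-in for fluorescence spectra: a Gaussian peak plus noise."""
    peak = np.exp(-0.5 * ((grid - center) / 0.05) ** 2)
    return peak + 0.05 * rng.standard_normal((n, grid.size))

# 80 training and 20 test spectra per oil type, mimicking the 560/140 split
centers = (0.3, 0.5, 0.7)                      # hypothetical peak positions
X_train = np.vstack([spectra(c, 80) for c in centers])
y_train = np.repeat(np.arange(3), 80)
X_test = np.vstack([spectra(c, 20) for c in centers])
y_test = np.repeat(np.arange(3), 20)

# PCA via SVD on mean-centred training data
mean = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
P_train = (X_train - mean) @ Vt[:5].T          # keep 5 principal components
P_test = (X_test - mean) @ Vt[:5].T

# nearest-centroid classification in PC space (stand-in for the paper's SVM)
centroids = np.array([P_train[y_train == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((P_test[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
accuracy = (pred == y_test).mean()
```

    With well-separated fluorescence peaks, even this simple classifier approaches the near-perfect recognition rate reported for the real system.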

  20. The family assessment device: an update.

    PubMed

    Mansfield, Abigail K; Keitner, Gabor I; Dealy, Jennifer

    2015-03-01

    The current study set out to describe family functioning scores of a contemporary community sample, using the Family Assessment Device (FAD), and to compare this to a currently help-seeking sample. The community sample consisted of 151 families who completed the FAD. The help-seeking sample consisted of 46 families who completed the FAD at their first family therapy appointment as part of their standard care at an outpatient family therapy clinic at an urban hospital. Findings suggest that FAD means from the contemporary community sample indicate satisfaction with family functioning, while FAD scores from the help-seeking sample indicate dissatisfaction with family functioning. In addition, the General Functioning scale of the FAD continues to correlate highly with all other FAD scales, except Behavior Control. The cut-off scores for the FAD indicating satisfaction or dissatisfaction by family members with their family functioning continue to be relevant and the FAD continues to be a useful tool to assess family functioning in both clinical and research contexts. © 2014 Family Process Institute.

  1. Methods to characterize environmental settings of stream and groundwater sampling sites for National Water-Quality Assessment

    USGS Publications Warehouse

    Nakagaki, Naomi; Hitt, Kerie J.; Price, Curtis V.; Falcone, James A.

    2012-01-01

    Characterization of natural and anthropogenic features that define the environmental settings of sampling sites for streams and groundwater, including drainage basins and groundwater study areas, is an essential component of water-quality and ecological investigations being conducted as part of the U.S. Geological Survey's National Water-Quality Assessment program. Quantitative characterization of environmental settings, combined with physical, chemical, and biological data collected at sampling sites, contributes to understanding the status of, and influences on, water-quality and ecological conditions. To support studies for the National Water-Quality Assessment program, a geographic information system (GIS) was used to develop a standard set of methods to consistently characterize the sites, drainage basins, and groundwater study areas across the nation. This report describes three methods used for characterization-simple overlay, area-weighted areal interpolation, and land-cover-weighted areal interpolation-and their appropriate applications to geographic analyses that have different objectives and data constraints. In addition, this document records the GIS thematic datasets that are used for the Program's national design and data analyses.

  2. Principal coordinate analysis assisted chromatographic analysis of bacterial cell wall collection: A robust classification approach.

    PubMed

    Kumar, Keshav; Cava, Felipe

    2018-04-10

    In the present work, principal coordinate analysis (PCoA) is introduced to develop a robust model for classifying chromatographic data sets of peptidoglycan samples. PCoA captures the heterogeneity present in the data sets by using the dissimilarity matrix as input. Thus, in principle, it can capture even subtle differences in bacterial peptidoglycan composition and can provide a more robust and faster approach for classifying bacterial collections and identifying novel cell wall targets for further biological and clinical studies. The utility of the proposed approach is demonstrated by analyzing two different kinds of bacterial collections. The first set comprised peptidoglycan samples belonging to different subclasses of Alphaproteobacteria. The second set, relatively more intricate for chemometric analysis, consisted of different wild-type Vibrio cholerae strains and mutants with subtle differences in their peptidoglycan composition. The present work proposes a useful approach for classifying chromatographic data sets of peptidoglycan samples with subtle differences, and suggests that PCoA can be a method of choice in any data analysis workflow. Copyright © 2018 Elsevier Inc. All rights reserved.
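
    PCoA (classical metric multidimensional scaling) embeds samples from a dissimilarity matrix via double-centring and eigendecomposition. A minimal sketch, using a toy Euclidean example rather than real chromatographic dissimilarities:

```python
import numpy as np

def pcoa(D, k=2):
    """Principal coordinate analysis of a dissimilarity matrix D.

    Double-centres the squared dissimilarities (Gower's transform), then
    embeds samples using the top-k eigenvectors scaled by sqrt(eigenvalue).
    """
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred matrix
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1]              # largest eigenvalues first
    evals, evecs = evals[order], evecs[:, order]
    pos = np.clip(evals[:k], 0.0, None)          # guard against tiny negatives
    return evecs[:, :k] * np.sqrt(pos)

# toy check: pairwise Euclidean distances of 1-D points should be recovered
pts = np.array([0.0, 1.0, 3.0])
D = np.abs(pts[:, None] - pts[None, :])
coords = pcoa(D, k=1)[:, 0]
```

    For Euclidean input the embedding reproduces the original configuration up to rotation and reflection; for non-Euclidean chromatographic dissimilarities the leading coordinates give the best low-dimensional approximation, which is what the classification then operates on.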

  3. Reliability and construct validity of the Participation in Life Activities Scale for children and adolescents with asthma: an instrument evaluation study.

    PubMed

    Kintner, Eileen K; Sikorskii, Alla

    2008-06-04

    The purpose of this study was to evaluate the reliability and construct validity of the Participation in Life Activities Scale, an instrument designed to measure older school-age children's and early adolescents' level of involvement in chosen pursuits. A cross-sectional design was used. The convenience sample consisted of 313 school-age children and early adolescents with asthma, ages 9-15 years. The self-report summative scale of interest is a 3-indicator survey; higher scores reflect higher levels of participation. Internal consistency reliability and construct validity were evaluated for the entire sample and for subgroups. The instrument was deemed sound for the entire sample as well as for subgroups based on sex, race, age, socioeconomic status, and severity of illness. Cronbach's alpha coefficient for internal consistency reliability for the entire sample was .74. Exploratory factor analysis indicated a single-component solution (loadings .79-.85) accounting for 66% of the explained variance. Construct validity was established by testing the posed relationship between participation in life activities scores and severity of illness. Confirmatory factor analysis revealed a good fit between the data and the specified model, χ²(10, n = 302) = 8.074, p = .62. This instrument could be used (a) in clinical settings to diagnose restricted participation in desired activities, guide decision-making about treatment plans to increase participation, and motivate behavioral change in the management of asthma; and (b) in research settings to explore factors influencing, and consequences of, restricted and unrestricted participation, and as an outcome measure to evaluate the effectiveness of programs designed to foster child and early adolescent management of asthma.

  4. Multilaboratory trial for determination of ceftiofur residues in bovine and swine kidney and muscle, and bovine milk.

    PubMed

    Hornish, Rex E; Hamlow, Philip J; Brown, Scott A

    2003-01-01

    A multilaboratory trial for determining ceftiofur-related residues in bovine and swine kidney and muscle, and bovine milk was conducted following regulatory guidelines of the U.S. Food and Drug Administration, Center for Veterinary Medicine. The methods convert all desfuroylceftiofur-related residues containing the intact beta-lactam ring to desfuroylceftiofur acetamide to establish ceftiofur residues in tissues. Four laboratories analyzed 5 sets of samples for each tissue. Each sample set consisted of a control/blank sample and 3 control samples fortified with ceftiofur at 0.5 Rm, Rm, and 2 Rm, respectively, where Rm is the U.S. tolerance assigned for ceftiofur residue in each tissue/matrix: 0.100 microg/mL for milk, 8.0 microg/g for kidney (both species), 1.0 microg/g for bovine muscle, and 2.0 microg/g for swine muscle. Each sample set also contained 2 samples of incurred-residue tissues (one > Rm and one < Rm) from animals treated with ceftiofur hydrochloride. All laboratories completed the method trial after a familiarization phase and test of system suitability in which they demonstrated > 80% recovery in pretrial fortified test samples. Results showed that the methods met all acceptable performance criteria for recovery, accuracy, and precision. Although sample preparation was easy, solid-phase extraction cartridge performance must be carefully evaluated before samples are processed. The liquid chromatography detection system was easily set up; however, the elution profile may require slight modifications. The procedures could clearly differentiate between violative (> Rm) and nonviolative (< Rm) ceftiofur residues. Participating laboratories found the procedures suitable for ceftiofur residue determination.

  5. Diagnosing intramammary infections: evaluation of definitions based on a single milk sample.

    PubMed

    Dohoo, I R; Smith, J; Andersen, S; Kelton, D F; Godden, S

    2011-01-01

    Criteria for diagnosing intramammary infections (IMI) have been debated for many years. Factors that may be considered in making a diagnosis include the organism of interest being found on culture, the number of colonies isolated, whether the organism was recovered in pure or mixed culture, and whether concurrent evidence of inflammation existed (often measured by somatic cell count). However, research using these criteria has been hampered by the lack of a "gold standard" test (i.e., a perfect test against which the criteria can be evaluated) and the need for very large data sets of culture results to have sufficient numbers of quarters with infections with a variety of organisms. This manuscript used 2 large data sets of culture results to evaluate several definitions (sets of criteria) for classifying a quarter as having, or not having, an IMI by comparing the results from a single culture to a gold standard diagnosis based on a set of 3 milk samples. The first consisted of 38,376 milk samples from which 25,886 triplicate sets of milk samples taken 1 wk apart were extracted. The second consisted of 784 quarters that were classified as infected or not based on a set of 3 milk samples collected at 2-d intervals. From these quarters, a total of 3,136 additional samples were evaluated. A total of 12 definitions (named A to L), based on combinations of the number of colonies isolated, whether the organism was recovered in pure or mixed culture, and the somatic cell count, were evaluated for each organism (or group of organisms) with sufficient data. The sensitivity (ability of a definition to detect IMI) and the specificity (Sp; ability of a definition to correctly classify noninfected quarters) were both computed. For all species except Staphylococcus aureus, the sensitivity of all definitions was <90% (and in many cases <50%). Consequently, if identifying as many existing infections as possible is important, then the criterion for considering a quarter positive should be a single colony (from a 0.01-mL milk sample) isolated (definition A). With the exception of "any organism" and coagulase-negative staphylococci, all Sp estimates were over 94% in the daily data and over 97% in the weekly data, suggesting that for most species definition A may be acceptable. For coagulase-negative staphylococci, definition B (2 colonies from a 0.01-mL milk sample) raised the Sp to 92 and 95% in the daily and weekly data, respectively. For "any organism," using definition B raised the Sp to 88 and 93% in the 2 data sets, respectively. The final choice of definition will depend on the objectives of the study or control program for which the sample was collected. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
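
    Evaluating a single-sample definition against a gold standard reduces to tallying a 2×2 table. A minimal sketch; the record format and toy counts are hypothetical, not the study's data:

```python
def sensitivity_specificity(records, min_colonies=1):
    """Se/Sp of a single-culture IMI definition against a gold standard.

    Each record is (colony_count_on_single_sample, truly_infected_bool),
    where "truly infected" comes from the triplicate-sample gold standard.
    Definition A corresponds to min_colonies=1, definition B to 2.
    """
    tp = fp = tn = fn = 0
    for colonies, infected in records:
        positive = colonies >= min_colonies
        if infected:
            tp += positive
            fn += not positive
        else:
            fp += positive
            tn += not positive
    se = tp / (tp + fn) if tp + fn else float("nan")
    sp = tn / (tn + fp) if tn + fp else float("nan")
    return se, sp

# hypothetical records: (colonies on the single culture, gold-standard status)
records = [(3, True), (1, True), (0, True), (2, False), (0, False), (0, False)]
se_a, sp_a = sensitivity_specificity(records, min_colonies=1)  # definition A
se_b, sp_b = sensitivity_specificity(records, min_colonies=2)  # definition B
```

    Raising the colony threshold trades sensitivity for specificity, which is exactly the trade-off the abstract describes between definitions A and B.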

  6. Evidence supporting vertical transmission of Salmonella in dairy cattle

    USDA-ARS?s Scientific Manuscript database

    We set out to investigate whether Salmonella enterica could be recovered from various tissues of viable neonatal calves immediately following parturition. Eleven samples were aseptically collected from each of 20 calves and consisted of both left and right subiliac and prescapular lymph nodes (LN),...

  7. Designing testing service at baristand industri Medan’s liquid waste laboratory

    NASA Astrophysics Data System (ADS)

    Kusumawaty, Dewi; Napitupulu, Humala L.; Sembiring, Meilita T.

    2018-03-01

    Baristand Industri Medan is a technical implementation unit under the Industrial Research and Development Agency of the Ministry of Industry. One of the services most often used at Baristand Industri Medan is liquid waste testing. The company set a service standard of nine working days for testing. In 2015, 89.66% of liquid waste testing services did not meet this standard because many samples accumulated. The purpose of this research is to design an online service to schedule incoming liquid waste samples. The method used is information system design, consisting of model design, output design, input design, database design, and technology design. The resulting online liquid waste testing information system consists of three pages: one each for the customer, the sample recipient, and the laboratory. Simulation results with scheduled samples show that the nine-working-day service standard can be met.

  8. Identifying a Probabilistic Boolean Threshold Network From Samples.

    PubMed

    Melkman, Avraham A; Cheng, Xiaoqing; Ching, Wai-Ki; Akutsu, Tatsuya

    2018-04-01

    This paper studies the problem of exactly identifying the structure of a probabilistic Boolean network (PBN) from a given set of samples, where PBNs are probabilistic extensions of Boolean networks. Cheng et al. studied the problem while focusing on PBNs consisting of pairs of AND/OR functions. This paper considers PBNs consisting of Boolean threshold functions while focusing on those threshold functions that have unit coefficients. The treatment of Boolean threshold functions, and triplets and -tuplets of such functions, necessitates a deepening of the theoretical analyses. It is shown that wide classes of PBNs with such threshold functions can be exactly identified from samples under reasonable constraints, which include: 1) PBNs in which any number of threshold functions can be assigned provided that all have the same number of input variables and 2) PBNs consisting of pairs of threshold functions with different numbers of input variables. It is also shown that the problem of deciding the equivalence of two Boolean threshold functions is solvable in pseudopolynomial time but remains co-NP complete.
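
    A Boolean threshold function with unit coefficients, as studied above, fires exactly when the number of active inputs reaches the threshold. A minimal sketch with hypothetical 3-input examples (the OR and majority functions, which are the threshold functions with thresholds 1 and 2):

```python
from itertools import product

def threshold_fn(inputs, theta):
    """Boolean threshold function with unit coefficients: outputs 1 when
    the number of active inputs reaches the threshold theta."""
    return int(sum(inputs) >= theta)

def truth_table(n, theta):
    """Enumerate the function over all 2**n input assignments."""
    return {x: threshold_fn(x, theta) for x in product((0, 1), repeat=n)}

# a PBN node chooses among such functions with fixed probabilities;
# here, a hypothetical 3-input node switching between OR and majority
or_table = truth_table(3, theta=1)
maj_table = truth_table(3, theta=2)
```

    Identification from samples amounts to finding, for each node, the set of thresholds (and selection probabilities) consistent with the observed state transitions; enumerating truth tables as above is the brute-force baseline for small n.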

  9. Downslope coarsening in aeolian grainflows of the Navajo Sandstone

    NASA Astrophysics Data System (ADS)

    Loope, David B.; Elder, James F.; Sweeney, Mark R.

    2012-07-01

    Downslope coarsening in grainflows has been observed on present-day dunes and generated in labs, but few previous studies have examined vertical sorting in ancient aeolian grainflows. We studied the grainflow strata of the Jurassic Navajo Sandstone in the southern Utah portion of its outcrop belt from Zion National Park (west) to Coyote Buttes and The Dive (east). At each study site, thick sets of grainflow-dominated cross-strata that were deposited by large transverse dunes comprise the bulk of the Navajo Sandstone. We studied three stratigraphic columns, one per site, composed almost exclusively of aeolian cross-strata. For each column, samples were obtained from one grainflow stratum in each consecutive set of the column, for a total of 139 samples from thirty-two sets of cross-strata. To investigate grading perpendicular to bedding within individual grainflows, we collected fourteen samples from four superimposed grainflow strata at The Dive. Samples were analyzed with a Malvern Mastersizer 2000 laser diffraction particle analyser. The median grain size of grainflow samples ranges from fine sand (164 μm) to coarse sand (617 μm). Using Folk and Ward criteria, samples are well-sorted to moderately-well-sorted. All but one of the twenty-eight sets showed at least slight downslope coarsening, but in general, downslope coarsening was not as well-developed or as consistent as that reported in laboratory subaqueous grainflows. Because coarse sand should be quickly sequestered within preserved cross-strata when bedforms climb, grain-size studies may help to test hypotheses for the stacking of sets of cross-strata.
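
    The Folk and Ward graphical statistics used above to describe sorting are computed from percentiles of the distribution in phi units. A minimal sketch using the standard graphical mean and inclusive graphic standard deviation formulas; the example grain sizes are illustrative, not the study's measurements:

```python
import numpy as np

def folk_ward(diam_um):
    """Folk & Ward graphical mean and sorting for a grain-size sample.

    diam_um: grain diameters in micrometres. Statistics are computed in
    phi units, phi = -log2(d_mm). Returns (mean_phi, sorting_phi).
    Sorting < 0.35 phi is "well sorted"; 0.5-0.71 is "moderately well sorted".
    """
    phi = -np.log2(np.asarray(diam_um, dtype=float) / 1000.0)
    phi5, phi16, phi50, phi84, phi95 = np.percentile(phi, [5, 16, 50, 84, 95])
    mean = (phi16 + phi50 + phi84) / 3.0
    sorting = (phi84 - phi16) / 4.0 + (phi95 - phi5) / 6.6
    return mean, sorting

# hypothetical fine-to-medium sand sample
mean_phi, sorting_phi = folk_ward([125.0, 177.0, 250.0, 350.0, 500.0])
```

    Comparing mean_phi between samples taken up-slope and down-slope on the same grainflow stratum is one way to quantify the downslope coarsening the study reports.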

  10. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research

    PubMed Central

    Palinkas, Lawrence A.; Horwitz, Sarah M.; Green, Carla A.; Wisdom, Jennifer P.; Duan, Naihua; Hoagwood, Kimberly

    2013-01-01

    Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research. PMID:24193818

  11. Provenance information as a tool for addressing engineered nanoparticle reproducibility challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baer, Donald R.; Munusamy, Prabhakaran; Thrall, Brian D.

    Nanoparticles of various types are of increasing research and technological importance in biological and other applications. Difficulties in producing and delivering nanoparticles with consistent and well-defined properties appear in many forms and have a variety of causes. Among several issues are those associated with incomplete information about the history of particles involved in research studies, including the synthesis method, the sample history after synthesis (the time and nature of storage), and the detailed nature of any sample processing or modification. In addition, the tendency of particles to change with time or environmental conditions suggests that the time between analysis and application is important and that some type of consistency check or verification process can be valuable. The essential history of a set of particles can be captured as provenance information, which tells the origin or source of a batch of nano-objects along with information related to handling and any changes that may have taken place since it originated. A record of sample provenance information for a set of particles can play a useful role in identifying sources of particle variability and decreasing the lack of reproducibility observed by many researchers.

  12. Bed-sediment grain-size and morphologic data from Suisun, Grizzly, and Honker Bays, CA, 1998-2002

    USGS Publications Warehouse

    Hampton, Margaret A.; Snyder, Noah P.; Chin, John L.; Allison, Dan W.; Rubin, David M.

    2003-01-01

    The USGS Place Based Studies Program for San Francisco Bay investigates this sensitive estuarine system to aid in resource management. As part of the inter-disciplinary research program, the USGS collected side-scan sonar data and bed-sediment samples from north San Francisco Bay to characterize bed-sediment texture and investigate temporal trends in sedimentation. The study area is located in central California and consists of Suisun Bay, and Grizzly and Honker Bays, sub-embayments of Suisun Bay. During the study (1998-2002), the USGS collected three side-scan sonar data sets and approximately 300 sediment samples. The side-scan data revealed predominantly fine-grained material on the bayfloor. We also mapped five different bottom types from the data set, categorized as featureless, furrows, sand waves, machine-made, and miscellaneous. We performed detailed grain-size and statistical analyses on the sediment samples. Overall, we found that grain size ranged from clay to fine sand, with the coarsest material in the channels and finer material located in the shallow bays. Grain-size analyses revealed high spatial variability in size distributions in the channel areas. In contrast, the shallow regions exhibited low spatial variability and consistent sediment size over time.

  13. Farsi version of social skills rating system-secondary student form: cultural adaptation, reliability and construct validity.

    PubMed

    Eslami, Ahmad Ali; Amidi Mazaheri, Maryam; Mostafavi, Firoozeh; Abbasi, Mohamad Hadi; Noroozi, Ensieh

    2014-01-01

    Assessment of social skills is a necessary requirement for developing and evaluating the effectiveness of cognitive and behavioral interventions. This paper reports the cultural adaptation and psychometric properties of the Farsi version of the Social Skills Rating System-Secondary Students form (SSRS-SS) questionnaire (Gresham and Elliot, 1990) in a normative sample of secondary school students. A two-phase design was used: phase 1 consisted of the linguistic adaptation, and in phase 2, using cross-sectional sample survey data, the construct validity and reliability of the Farsi version of the SSRS-SS were examined in a sample of 724 adolescents aged 13 to 19 years. The content validity index was excellent, and the floor/ceiling effects were low. After deleting five of the original SSRS-SS items, the findings gave support for item convergent and divergent validity. Factor analysis revealed four subscales. Results showed good internal consistency (0.89) and temporal stability (0.91) for the total scale score. The findings support use of the 27-item Farsi version in the school setting. Directions for future research regarding the applicability of the scale in other settings and populations of adolescents are discussed.

  14. Systematic investigation of the relationship between high myopia and polymorphisms of the MMP2, TIMP2, and TIMP3 genes by a DNA pooling approach.

    PubMed

    Leung, Kim Hung; Yiu, Wai Chi; Yap, Maurice K H; Ng, Po Wah; Fung, Wai Yan; Sham, Pak Chung; Yip, Shea Ping

    2011-06-01

    This study examined the relationship between high myopia and three myopia candidate genes--matrix metalloproteinase 2 (MMP2) and tissue inhibitors of metalloproteinase 2 and 3 (TIMP2 and TIMP3)--involved in scleral remodeling. Recruited for the study were unrelated adult Han Chinese who were high myopes (spherical equivalent ≤ -6.0 D in both eyes; cases) or emmetropes (within ±1.0 D in both eyes; controls). Sample set 1 had 300 cases and 300 controls, and sample set 2 had 356 cases and 354 controls. Forty-nine tag single-nucleotide polymorphisms (SNPs) were selected from these candidate genes. The first stage was an initial screen of six case pools and six control pools constructed from sample set 1, each pool consisting of 50 distinct subjects of the same affection status. In the second stage, positive SNPs from the first stage were confirmed by genotyping the individual samples forming the DNA pools. In the third stage, positive SNPs from stage 2 were replicated by individually genotyping sample set 2. Of the 49 SNPs screened by DNA pooling, three passed the lenient threshold of P < 0.10 (nested ANOVA) and were followed up by individual genotyping. Of the three SNPs genotyped, two TIMP3 SNPs were found to be significantly associated with high myopia by single-marker or haplotype analysis. However, the initial positive results could not be replicated in sample set 2. The MMP2, TIMP2, and TIMP3 genes were not associated with high myopia in this Chinese sample and hence are unlikely to play a major role in the genetic susceptibility to high myopia.

  15. 40 CFR 1066.410 - Dynamometer test procedure.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... drive mode. (For purposes of this paragraph (g), the term four-wheel drive includes other multiple drive... Dynamometer test procedure. (a) Dynamometer testing may consist of multiple drive cycles with both cold-start...-setting part identifies the driving schedules and the associated sample intervals, soak periods, engine...

  16. 77 FR 72829 - Marine Mammals; File No. 16305

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-06

    ... Toxicology, Maine Center for Toxicology and Environmental Health, University of Southern Maine, 478 Science... turtle biological samples for scientific research purposes. ADDRESSES: The permit and related documents... consistent with the purposes and policies set forth in section 2 of the ESA. Documents may be reviewed in the...

  17. Mass calibration and cosmological analysis of the SPT-SZ galaxy cluster sample using velocity dispersion σ v and x-ray Y X measurements

    DOE PAGES

    Bocquet, S.; Saro, A.; Mohr, J. J.; ...

    2015-01-30

    Here, we present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg2 of the survey along with 63 velocity dispersion (σ v) and 16 X-ray Y X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ v and Y X are consistent at the 0.6σ level, with the σ v calibration preferring ~16% higher masses. We use the full SPTCL data set (SZ clusters+σ v+Y X) to measure σ8(Ωm/0.27)0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is ∑m ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger ∑m ν further reconciles the results. When we combine the SPTCL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y X calibration and 0.8σ higher than the σ v calibration. Given the scale of these shifts (~44% and ~23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ωm = 0.299 ± 0.009 and σ8 = 0.829 ± 0.011. Within a νCDM model we find ∑m ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = -1.007 ± 0.065, demonstrating that the expansion and the growth histories are consistent with a ΛCDM universe (γ = 0.55; w = -1).

  18. Mass Calibration and Cosmological Analysis of the SPT-SZ Galaxy Cluster Sample Using Velocity Dispersion σ v and X-Ray Y X Measurements

    NASA Astrophysics Data System (ADS)

    Bocquet, S.; Saro, A.; Mohr, J. J.; Aird, K. A.; Ashby, M. L. N.; Bautz, M.; Bayliss, M.; Bazin, G.; Benson, B. A.; Bleem, L. E.; Brodwin, M.; Carlstrom, J. E.; Chang, C. L.; Chiu, I.; Cho, H. M.; Clocchiatti, A.; Crawford, T. M.; Crites, A. T.; Desai, S.; de Haan, T.; Dietrich, J. P.; Dobbs, M. A.; Foley, R. J.; Forman, W. R.; Gangkofner, D.; George, E. M.; Gladders, M. D.; Gonzalez, A. H.; Halverson, N. W.; Hennig, C.; Hlavacek-Larrondo, J.; Holder, G. P.; Holzapfel, W. L.; Hrubes, J. D.; Jones, C.; Keisler, R.; Knox, L.; Lee, A. T.; Leitch, E. M.; Liu, J.; Lueker, M.; Luong-Van, D.; Marrone, D. P.; McDonald, M.; McMahon, J. J.; Meyer, S. S.; Mocanu, L.; Murray, S. S.; Padin, S.; Pryke, C.; Reichardt, C. L.; Rest, A.; Ruel, J.; Ruhl, J. E.; Saliwanchik, B. R.; Sayre, J. T.; Schaffer, K. K.; Shirokoff, E.; Spieler, H. G.; Stalder, B.; Stanford, S. A.; Staniszewski, Z.; Stark, A. A.; Story, K.; Stubbs, C. W.; Vanderlinde, K.; Vieira, J. D.; Vikhlinin, A.; Williamson, R.; Zahn, O.; Zenteno, A.

    2015-02-01

    We present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg2 of the survey along with 63 velocity dispersion (σ v ) and 16 X-ray Y X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ v and Y X are consistent at the 0.6σ level, with the σ v calibration preferring ~16% higher masses. We use the full SPTCL data set (SZ clusters+σ v +Y X) to measure σ8(Ωm/0.27)0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is ∑m ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger ∑m ν further reconciles the results. When we combine the SPTCL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y X calibration and 0.8σ higher than the σ v calibration. Given the scale of these shifts (~44% and ~23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ωm = 0.299 ± 0.009 and σ8 = 0.829 ± 0.011. Within a νCDM model we find ∑m ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. 
Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = -1.007 ± 0.065, demonstrating that the expansion and the growth histories are consistent with a ΛCDM universe (γ = 0.55; w = -1).

  19. Validity of the SAT for Predicting First-Year Grades: 2008 SAT Validity Sample. Statistical Report No. 2011-5

    ERIC Educational Resources Information Center

    Patterson, Brian F.; Mattern, Krista D.

    2011-01-01

    The findings for the 2008 sample are largely consistent with the previous reports. SAT scores were found to be correlated with FYGPA (r = 0.54), with a magnitude similar to HSGPA (r = 0.56). The best set of predictors of FYGPA remains SAT scores and HSGPA (r = 0.63), as the addition of the SAT sections to the correlation of HSGPA alone with FYGPA…

  20. Measurements of heavy solar wind and higher energy solar particles during the Apollo 17 mission

    NASA Technical Reports Server (NTRS)

    Walker, R. M.; Zinner, E.; Maurette, M.

    1973-01-01

    The lunar surface cosmic ray experiment, consisting of sets of mica, glass, plastic, and metal foil detectors, was successfully deployed on the Apollo 17 mission. One set of detectors was exposed directly to sunlight and another set was placed in shade. Preliminary scanning of the mica detectors shows the expected registration of heavy solar wind ions in the sample exposed directly to the sun. The initial results indicate a depletion of very-heavy solar wind ions. The effect is probably not real but is caused by scanning inefficiencies. Despite the lack of any pronounced solar activity, energetic heavy particles with energies extending to 1 MeV/nucleon were observed. Equal track densities of approximately 6000 tracks/cm², 0.5 microns in length, were measured in mica samples exposed in both sunlight and shade.

  1. Concordant integrative gene set enrichment analysis of multiple large-scale two-sample expression data sets.

    PubMed

    Lai, Yinglei; Zhang, Fanni; Nayak, Tapan K; Modarres, Reza; Lee, Norman H; McCaffrey, Timothy A

    2014-01-01

    Gene set enrichment analysis (GSEA) is an important approach to the analysis of coordinate expression changes at a pathway level. Although many statistical and computational methods have been proposed for GSEA, the issue of a concordant integrative GSEA of multiple expression data sets has not been well addressed. Among different related data sets collected for the same or similar study purposes, it is important to identify pathways or gene sets with concordant enrichment. We categorize the underlying true states of differential expression into three representative categories: no change, positive change and negative change. Due to data noise, what we observe from experiments may not indicate the underlying truth. Although these categories are not observed in practice, they can be considered in a mixture model framework. Then, we define the mathematical concept of concordant gene set enrichment and calculate its related probability based on a three-component multivariate normal mixture model. The related false discovery rate can be calculated and used to rank different gene sets. We used three published lung cancer microarray gene expression data sets to illustrate our proposed method. One analysis based on the first two data sets was conducted to compare our result with a previously published result based on a GSEA conducted separately for each individual data set. This comparison illustrates the advantage of our proposed concordant integrative gene set enrichment analysis. Then, with a relatively new and larger pathway collection, we used our method to conduct an integrative analysis of the first two data sets and also all three data sets. Both results showed that many gene sets could be identified with low false discovery rates. A consistency between both results was also observed. A further exploration based on the KEGG cancer pathway collection showed that a majority of these pathways could be identified by our proposed method. 
This study illustrates that we can improve detection power and discovery consistency through a concordant integrative analysis of multiple large-scale two-sample gene expression data sets.
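
    To make the mixture-model idea concrete, here is a minimal numerical sketch (not the authors' implementation; the mixing weights and component means are assumed purely for illustration). Each study's gene-set statistic is modeled as a three-component normal mixture over the latent states no change, positive change, and negative change, and a concordance probability is computed as the posterior probability that two studies share the same non-null direction, treating the studies as independent given their latent states.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Assumed mixture parameters over the latent states:
    # [no change, positive change, negative change]
    weights = np.array([0.8, 0.1, 0.1])   # illustrative mixing proportions
    means = np.array([0.0, 2.0, -2.0])    # illustrative component means
    sd = 1.0

    def posterior(z):
        """Posterior probability of each latent state given one study's statistic z."""
        lik = weights * norm.pdf(z, loc=means, scale=sd)
        return lik / lik.sum()

    def concordant_prob(z1, z2):
        """Probability that two studies share the same non-null direction,
        treating the studies as independent given their latent states."""
        p1, p2 = posterior(z1), posterior(z2)
        return p1[1] * p2[1] + p1[2] * p2[2]
    ```

    Gene sets could then be ranked by this probability, with a false discovery rate estimated from the complementary probabilities, in the spirit of the mixture-model framework the abstract describes.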

  2. Water-quality data for water- and wastewater-treatment plants along the Red River of the North, North Dakota and Minnesota, January through October 2006

    USGS Publications Warehouse

    Damschen, William C.; Hansel, John A.; Nustad, Rochelle A.

    2008-01-01

    From January through October 2006, six sets of water-quality samples were collected at 28 sites, which included inflow and outflow from seven major municipal water-treatment plants (14 sites) and influent and effluent samples from seven major municipal wastewater-treatment plants (14 sites) along the Red River of the North in North Dakota and Minnesota. Samples were collected in cooperation with the Bureau of Reclamation for use in the development of return-flow boundary conditions in a 2006 water-quality model for the Red River of the North. All samples were analyzed for nutrients and major ions. For one set of effluent samples from each of the wastewater-treatment plants, water was analyzed for Escherichia coli, fecal coliform, 20-day biochemical oxygen demand, 20-day nitrogenous biochemical oxygen demand, total organic carbon, and dissolved organic carbon. In general, results from the field equipment blank and replicate samples indicate that the overall process of sample collection, processing, and analysis did not introduce substantial contamination and that consistent results were obtained.

  3. Tutorial: Crystal orientations and EBSD — Or which way is up?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Britton, T.B., E-mail: b.britton@imperial.ac.uk; Jiang, J.; Guo, Y.

    2016-07-15

    Electron backscatter diffraction (EBSD) is an automated technique that can measure the orientation of crystals in a sample very rapidly. There are many sophisticated software packages that present measured data. Unfortunately, due to crystal symmetry and differences in the set-up of microscope and EBSD software, there may be accuracy issues when linking the crystal orientation to a particular microstructural feature. In this paper we outline a series of conventions used to describe crystal orientations and coordinate systems. These conventions have been used to successfully demonstrate that a consistent frame of reference is used in the sample, unit cell, pole figure and diffraction pattern frames of reference. We establish a coordinate system rooted in measurement of the diffraction pattern and subsequently link this to all other coordinate systems. A fundamental outcome of this analysis is to note that the beamshift coordinate system needs to be precisely defined for consistent 3D microstructure analysis. This is supported through a series of case studies examining particular features of the microscope settings and/or unambiguous crystallographic features. These case studies can be generated easily in most laboratories and represent an opportunity to demonstrate confidence in use of recorded orientation data. Finally, we include a simple software tool, written in both MATLAB® and Python, which the reader can use to compare consistency with their own microscope set-up and which may act as a springboard for further offline analysis. - Highlights: • Presentation of conventions used to describe crystal orientations • Three case studies that outline how conventions are consistent • Demonstrates a pathway for calibration and validation of EBSD based orientation measurements • EBSD computer code supplied for validation by the reader.
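
    As a flavor of the kind of consistency check described (this is an independent sketch, not the authors' supplied MATLAB®/Python tool), the snippet below builds an orientation matrix from Bunge Euler angles under one commonly used Z-X-Z convention, an assumption that must be checked against each vendor's definition, and verifies that it is a proper rotation before frames are compared:

    ```python
    import numpy as np

    def bunge_to_matrix(phi1, Phi, phi2):
        """Orientation matrix from Bunge Euler angles (radians).

        Assumes the common Z-X'-Z'' (Bunge) convention; EBSD vendors differ,
        so this choice must be validated against your own microscope set-up.
        """
        c1, s1 = np.cos(phi1), np.sin(phi1)
        c, s = np.cos(Phi), np.sin(Phi)
        c2, s2 = np.cos(phi2), np.sin(phi2)
        return np.array([
            [ c1 * c2 - s1 * s2 * c,  s1 * c2 + c1 * s2 * c, s2 * s],
            [-c1 * s2 - s1 * c2 * c, -s1 * s2 + c1 * c2 * c, c2 * s],
            [ s1 * s,                -c1 * s,                c     ],
        ])

    g = bunge_to_matrix(0.3, 0.4, 0.5)
    # A proper rotation is orthonormal with determinant +1.
    is_proper = bool(np.allclose(g @ g.T, np.eye(3))
                     and np.isclose(np.linalg.det(g), 1.0))
    ```

    Feeding the same angles through two different software chains and comparing the resulting matrices (e.g., via the misorientation of g1 @ g2.T) is a quick way to expose a frame-of-reference mismatch.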

  4. Image reconstructions from super-sampled data sets with resolution modeling in PET imaging.

    PubMed

    Li, Yusheng; Matej, Samuel; Metzler, Scott D

    2014-12-01

    Spatial resolution in positron emission tomography (PET) is still a limiting factor in many imaging applications. To improve the spatial resolution for an existing scanner with fixed crystal sizes, mechanical movements such as scanner wobbling and object shifting have been considered for PET systems. Multiple acquisitions from different positions can provide complementary information and increased spatial sampling. The objective of this paper is to explore an efficient and useful reconstruction framework to reconstruct super-resolution images from super-sampled low-resolution data sets. The authors introduce a super-sampling data acquisition model based on the physical processes with tomographic, downsampling, and shifting matrices as its building blocks. Based on the model, the authors extend the MLEM and Landweber algorithms to reconstruct images from super-sampled data sets. The authors also derive a backprojection-filtration-like (BPF-like) method for the super-sampling reconstruction. Furthermore, they explore variant methods for super-sampling reconstructions: the separate super-sampling resolution-modeling reconstruction and the reconstruction without downsampling to further improve image quality at the cost of more computation. The authors use simulated reconstruction of a resolution phantom to evaluate the three types of algorithms with different super-samplings at different count levels. Contrast recovery coefficient (CRC) versus background variability, as an image-quality metric, is calculated at each iteration for all reconstructions. The authors observe that all three algorithms can significantly and consistently achieve increased CRCs at fixed background variability and reduce background artifacts with super-sampled data sets at the same count levels. For the same super-sampled data sets, the MLEM method achieves better image quality than the Landweber method, which in turn achieves better image quality than the BPF-like method. 
The authors also demonstrate that reconstructions from super-sampled data sets using a fine system matrix yield improved image quality compared to reconstructions using a coarse system matrix. Super-sampling reconstructions at different count levels showed that more spatial-resolution improvement can be obtained with higher counts at larger iteration numbers. The authors developed a super-sampling reconstruction framework that can reconstruct super-resolution images using the super-sampled data sets simultaneously with known acquisition motion. The super-sampling PET acquisition using the proposed algorithms provides an effective and economical way to improve image quality for PET imaging, which has important implications for preclinical and clinical region-of-interest PET imaging applications.
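
    The acquisition-and-reconstruction idea can be illustrated with a toy 1-D analogue (all sizes, operators, and the Poisson noise model below are illustrative assumptions, not the authors' system model): a fine-grid signal is shifted and block-summed into several low-resolution data sets, and an MLEM-style multiplicative update reconstructs on the fine grid using all data sets simultaneously.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1-D super-sampling model: shift on a fine grid, then block-sum
    # (downsample) by `factor`; four sub-bin shifts give four data sets.
    n_fine, factor, shifts = 32, 4, [0, 1, 2, 3]
    truth = np.zeros(n_fine)
    truth[10:14] = 5.0
    truth[22] = 8.0

    def forward(x, shift):
        """Shift on the fine grid, then sum blocks of `factor` bins."""
        return np.roll(x, shift).reshape(-1, factor).sum(axis=1)

    def back(y, shift):
        """Adjoint of `forward`: replicate each coarse bin, then unshift."""
        return np.roll(np.repeat(y, factor), -shift)

    data = [rng.poisson(forward(truth, s)) for s in shifts]   # noisy data sets

    x = np.ones(n_fine)                                       # MLEM start
    sens = sum(back(np.ones(n_fine // factor), s) for s in shifts)
    for _ in range(200):                                      # MLEM updates
        ratio = sum(back(d / np.maximum(forward(x, s), 1e-12), s)
                    for d, s in zip(data, shifts))
        x *= ratio / sens
    ```

    With all four shifts the box-sum is sampled at every offset, so the fine-grid estimate x recovers structure that any single downsampled data set would blur, which is the essence of the super-sampling gain.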

  5. Risk Assessment with Adolescent Sex Offenders

    ERIC Educational Resources Information Center

    Christodoulides, T. E.; Richardson, G.; Graham, F.; Kennedy, P. J.; Kelly, T. P.

    2005-01-01

    The paper describes an evaluation of a risk assessment tool's effectiveness in distinguishing adolescent sexual offenders who had committed further sexual offences from those who had not. The sample consisted of 50 male adolescent sexual offenders referred to a forensic outpatient service within a healthcare setting. The adolescents within the…

  6. O'Neal Training Manual.

    ERIC Educational Resources Information Center

    Alabama State Dept. of Education, Montgomery.

    This training manual provides 42 lessons developed for a workplace literacy program at O'Neal Steel. Each lesson consists of a summary sheet with activities and corresponding materials and time; handout(s); pretest; instructor materials and samples; and worksheet(s). Activities in each lesson are set induction, guided practice, applied practice,…

  7. Using Soil Seed Banks for Ecological Education in Primary School

    ERIC Educational Resources Information Center

    Ju, Eun Jeong; Kim, Jae Geun

    2011-01-01

    In this study, we developed an educational programme using soil seed banks to promote ecological literacy among primary school-aged children. The programme consisted of seven student activities, including sampling and setting soil seed banks around the school, watering, identifying seedlings, and making observations about the plants and their…

  8. The Relationship Between Self Concept and Marital Adjustment.

    ERIC Educational Resources Information Center

    Hall, William M., Jr.; Valine, Warren J.

    The purpose of this study was to investigate the relationship between self concept and marital adjustment for married students and their spouses in a commuter college setting. The sample consisted of a random selection of 50 "both spouses commuting" couples, 50 "husband only commuting" couples, and 50 "wife only…

  9. Mass Spectrometry and Fourier Transform Infrared Spectroscopy for Analysis of Biological Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Timothy J.

    Time-of-flight mass spectrometry along with statistical analysis was utilized to study metabolic profiles among rats fed resistant starch (RS) diets. Fischer 344 rats were fed four starch diets consisting of 55% (w/w, dbs) starch. A control starch diet consisting of corn starch was compared against three RS diets. The RS diets were high-amylose corn starch (HA7), HA7 chemically modified with octenyl succinic anhydride, and stearic-acid-complexed HA7 starch. A subgroup received antibiotic treatment to determine if perturbations in the gut microbiome were long lasting. A second subgroup was treated with azoxymethane (AOM), a carcinogen. At the end of the eight-week study, cecal and distal-colon content samples were collected from the sacrificed rats. Metabolites were extracted from cecal and distal colon samples into acetonitrile. The extracts were then analyzed on an accurate-mass time-of-flight mass spectrometer to obtain their metabolic profile. The data were analyzed using partial least-squares discriminant analysis (PLS-DA). The PLS-DA analysis utilized a training set and verification set to classify samples within diet and treatment groups. PLS-DA could reliably differentiate the diet treatments for both cecal and distal colon samples. The PLS-DA analyses of the antibiotic and no antibiotic treated subgroups were well classified for cecal samples and modestly separated for distal-colon samples. PLS-DA analysis had limited success separating distal colon samples for rats given AOM from those not treated; the cecal samples from AOM had very poor classification. Mass spectrometry profiling coupled with PLS-DA can readily classify metabolite differences among rats given RS diets.

  10. High-resolution time-of-flight mass spectrometry fingerprinting of metabolites from cecum and distal colon contents of rats fed resistant starch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Timothy J.; Jones, Roger W.; Ai, Yongfeng

    Time-of-flight mass spectrometry along with statistical analysis was utilized to study metabolic profiles among rats fed resistant starch (RS) diets. Fischer 344 rats were fed four starch diets consisting of 55% (w/w, dbs) starch. A control starch diet consisting of corn starch was compared against three RS diets. The RS diets were high-amylose corn starch (HA7), HA7 chemically modified with octenyl succinic anhydride, and stearic-acid-complexed HA7 starch. A subgroup received antibiotic treatment to determine if perturbations in the gut microbiome were long lasting. A second subgroup was treated with azoxymethane (AOM), a carcinogen. At the end of the 8-week study, cecal and distal colon content samples were collected from the sacrificed rats. Metabolites were extracted from cecal and distal colon samples into acetonitrile. The extracts were then analyzed on an accurate-mass time-of-flight mass spectrometer to obtain their metabolic profile. The data were analyzed using partial least-squares discriminant analysis (PLS-DA). The PLS-DA analysis utilized a training set and verification set to classify samples within diet and treatment groups. PLS-DA could reliably differentiate the diet treatments for both cecal and distal colon samples. The PLS-DA analyses of the antibiotic and no antibiotic-treated subgroups were well classified for cecal samples and modestly separated for distal colon samples. PLS-DA analysis had limited success separating distal colon samples for rats given AOM from those not treated; the cecal samples from AOM had very poor classification. Mass spectrometry profiling coupled with PLS-DA can readily classify metabolite differences among rats given RS diets.
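
    The PLS-DA step can be sketched with scikit-learn by regressing feature profiles onto one-hot class labels and assigning each sample to the class with the largest fitted response. The data below are synthetic stand-ins for the metabolite profiles; the class structure, sizes, and component count are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n_per, n_feat, n_class = 30, 50, 4          # e.g., 4 "diet" groups
    # Synthetic "metabolite intensities": class k shifted by k on every feature.
    X = np.vstack([rng.normal(loc=k, scale=1.0, size=(n_per, n_feat))
                   for k in range(n_class)])
    y = np.repeat(np.arange(n_class), n_per)
    Y = np.eye(n_class)[y]                      # one-hot coding for PLS-DA

    X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(
        X, Y, y, test_size=0.25, random_state=0, stratify=y)

    pls = PLSRegression(n_components=3).fit(X_tr, Y_tr)
    pred = pls.predict(X_te).argmax(axis=1)     # assign to highest response
    accuracy = (pred == y_te).mean()
    ```

    The held-out accuracy here plays the role of the training/verification-set classification reported in the abstract; in practice a cross-validated figure would be preferred.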

  11. A Trade Study and Metric for Penetration and Sampling Devices for Possible Use on the NASA 2003 and 2005 Mars Sample Return Missions

    NASA Technical Reports Server (NTRS)

    McConnell, Joshua B.

    2000-01-01

    The scientific exploration of Mars will require the collection and return of subterranean samples to Earth for examination. This necessitates the use of some type of device or devices that possesses the ability to effectively penetrate the Martian surface, collect suitable samples, and return them to the surface in a manner consistent with imposed scientific constraints. The first opportunity for such a device will occur on the 2003 and 2005 Mars Sample Return missions being performed by NASA. This paper reviews the work completed on the compilation of a database containing viable penetrating and sampling devices, the performance of a system-level trade study comparing selected devices to a set of prescribed parameters, and the employment of a metric for the evaluation and ranking of the traded penetration and sampling devices with respect to possible usage on the 2003 and 2005 sample return missions. The trade study performed is based on a select set of scientific, engineering, programmatic, and socio-political criteria. The use of a metric for the various penetration and sampling devices will act to expedite current and future device selection.

  12. Estimation of wood density and chemical composition by means of diffuse reflectance mid-infrared Fourier transform (DRIFT-MIR) spectroscopy.

    PubMed

    Nuopponen, Mari H; Birch, Gillian M; Sykes, Rob J; Lee, Steve J; Stewart, Derek

    2006-01-11

    Sitka spruce (Picea sitchensis) samples (491) from 50 different clones as well as 24 different tropical hardwoods and 20 Scots pine (Pinus sylvestris) samples were used to construct diffuse reflectance mid-infrared Fourier transform (DRIFT-MIR) based partial least squares (PLS) calibrations for lignin, cellulose, and wood resin contents and densities. Calibrations for density, lignin, and cellulose were established for all wood species combined into one data set as well as for the separate Sitka spruce data set. Relationships between wood resin and MIR data were constructed for the Sitka spruce data set as well as the combined Scots pine and Sitka spruce data sets. Calibrations containing only five wavenumbers instead of the spectral ranges 4000-2800 and 1800-700 cm(-1) were also established. In addition, chemical factors contributing to wood density were studied. Chemical composition and density assessed from the DRIFT-MIR calibrations had R2 and Q2 values in the ranges of 0.6-0.9 and 0.6-0.8, respectively. The PLS models gave root mean square error of prediction (RMSEP) values of 1.6-1.9, 2.8-3.7, and 0.4 for lignin, cellulose, and wood resin contents, respectively. Density test sets had RMSEP values ranging from 50 to 56. A reduced number of wavenumbers can thus be used to predict the chemical composition and density of a wood, which should allow measurement of these properties using a hand-held device. MIR spectral data indicated that low-density samples had somewhat higher lignin contents than high-density samples. Correspondingly, high-density samples contained slightly more polysaccharides than low-density samples. This observation was consistent with the wet chemical data.
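
    A minimal version of this calibration/validation workflow can be sketched as follows, with synthetic spectra and an assumed "lignin content" response standing in for the DRIFT-MIR data (all sizes and the noise level are illustrative); the RMSEP on a held-out set mirrors the prediction-error figures quoted above.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n, p = 200, 120                          # samples x spectral variables
    coef = rng.normal(size=p)                # hypothetical spectral loadings
    X = rng.normal(size=(n, p))              # synthetic "spectra"
    y = X @ coef * 0.1 + rng.normal(scale=0.5, size=n) + 25.0  # ~"lignin %"

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
    resid = y_te - pls.predict(X_te).ravel()
    rmsep = np.sqrt(np.mean(resid ** 2))     # root mean square error of prediction
    ```

    The wavenumber-reduction step described above would correspond to fitting the same model on a handful of selected columns of X rather than the full spectral range.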

  13. A Potential Natural Treatment for Attention-Deficit/Hyperactivity Disorder: Evidence From a National Study

    PubMed Central

    Kuo, Frances E.; Faber Taylor, Andrea

    2004-01-01

    Objectives. We examined the impact of relatively “green” or natural settings on attention-deficit/hyperactivity disorder (ADHD) symptoms across diverse subpopulations of children. Methods. Parents nationwide rated the aftereffects of 49 common after-school and weekend activities on children’s symptoms. Aftereffects were compared for activities conducted in green outdoor settings versus those conducted in both built outdoor and indoor settings. Results. In this national, nonprobability sample, green outdoor activities reduced symptoms significantly more than did activities conducted in other settings, even when activities were matched across settings. Findings were consistent across age, gender, and income groups; community types; geographic regions; and diagnoses. Conclusions. Green outdoor settings appear to reduce ADHD symptoms in children across a wide range of individual, residential, and case characteristics. PMID:15333318

  14. Dried blood spot measurement of pregnancy-associated plasma protein A (PAPP-A) and free β-subunit of human chorionic gonadotropin (β-hCG) from a low-resource setting.

    PubMed

    Browne, J L; Schielen, P C J I; Belmouden, I; Pennings, J L A; Klipstein-Grobusch, K

    2015-06-01

    The objectives of this article are to compare pregnancy-associated plasma protein A (PAPP-A) and free β-subunit of human chorionic gonadotropin (β-hCG) concentrations in dried blood spots (DBSs) with those in serum of samples obtained from a public hospital in a low-resource setting and to evaluate their stability. Serum and DBS samples were obtained by venipuncture and finger prick from 50 pregnant participants in a cohort study in a public hospital in Accra, Ghana. PAPP-A and β-hCG concentrations from serum and DBS were measured with an AutoDELFIA® (PerkinElmer, Turku, Finland) automatic immunoassay. Correlation and Passing-Bablok regression analyses were performed to compare marker levels. High correlation (>0.9) was observed for PAPP-A and β-hCG levels between the various sampling techniques. The β-hCG concentration was stable between DBS and serum, whereas the PAPP-A concentration was consistently lower in DBS. Our findings suggest that β-hCG can be reliably collected from DBS in low-resource tropical settings. The exact conditions of the clinical workflow necessary for reliable PAPP-A measurement in these settings need to be developed further. These findings could have implications for the feasibility of prenatal screening programs in low-income and middle-income countries, as DBS provides an alternative minimally invasive sampling method, with advantages in sampling technique, stability, logistics, and potential application in low-resource settings. © 2015 John Wiley & Sons, Ltd.
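
    For illustration, a simplified method-comparison fit in the spirit of Passing-Bablok regression (slope as a median of pairwise slopes, intercept as a median residual) can be sketched as below; the full procedure's slope-offset correction and confidence intervals are omitted, so a vetted statistics package should be used for real method-comparison work.

    ```python
    import numpy as np
    from itertools import combinations

    def median_slope_regression(x, y):
        """Simplified median-of-pairwise-slopes fit (illustrative only).

        Returns (intercept, slope). Pairs with equal x are skipped.
        """
        x, y = np.asarray(x, float), np.asarray(y, float)
        slopes = [(y[j] - y[i]) / (x[j] - x[i])
                  for i, j in combinations(range(len(x)), 2)
                  if x[j] != x[i]]
        b = np.median(slopes)
        a = np.median(y - b * x)
        return a, b
    ```

    For a DBS-versus-serum comparison, a slope near 1 with intercept near 0 indicates agreement between sampling methods, while a slope below 1 would be consistent with the systematically lower PAPP-A concentrations observed in DBS.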

  15. Dataset of producing and curing concrete using domestic treated wastewater

    PubMed Central

    Asadollahfardi, Gholamreza; Delnavaz, Mohammad; Rashnoiee, Vahid; Fazeli, Alireza; Gonabadi, Navid

    2015-01-01

    We tested the setting time of cement, slump, and compressive and tensile strength of 54 triplicate cubic samples and 9 cylindrical samples of concrete with and without a superplasticizer admixture. First, we produced concrete samples made with drinking water and with treated domestic wastewater (collected before chlorination) containing 300 and 400 kg/m3 of cement, and then cured concrete samples made with drinking water and treated wastewater. Second, concrete samples made with 350 kg/m3 of cement and a superplasticizer admixture were made with drinking water and treated wastewater and then cured with treated wastewater. The compressive strength of all the concrete samples made with treated wastewater had a high coefficient of determination with the control concrete samples. The 28-day tensile strength of all the samples was 96–100% of the tensile strength of the control samples, and the setting time was reduced by 30 min, which was consistent with the ASTM C191 standard. Production and curing with treated wastewater did not have a significant effect on water absorption, slump, or surface electrical resistivity tests. However, the 21-day compressive strength of concrete samples using 300 kg/m3 of cement under rapid freezing and thawing conditions was about 11% lower than that of concrete samples made with drinking water. PMID:26862577

  16. Dataset of producing and curing concrete using domestic treated wastewater.

    PubMed

    Asadollahfardi, Gholamreza; Delnavaz, Mohammad; Rashnoiee, Vahid; Fazeli, Alireza; Gonabadi, Navid

    2016-03-01

    We tested the setting time of cement, slump, and compressive and tensile strength of 54 triplicate cubic samples and 9 cylindrical samples of concrete with and without a superplasticizer admixture. First, we produced concrete samples made with drinking water and with treated domestic wastewater (collected before chlorination) containing 300 and 400 kg/m(3) of cement, and then cured concrete samples made with drinking water and treated wastewater. Second, concrete samples made with 350 kg/m(3) of cement and a superplasticizer admixture were made with drinking water and treated wastewater and then cured with treated wastewater. The compressive strength of all the concrete samples made with treated wastewater had a high coefficient of determination with the control concrete samples. The 28-day tensile strength of all the samples was 96-100% of the tensile strength of the control samples, and the setting time was reduced by 30 min, which was consistent with the ASTM C191 standard. Production and curing with treated wastewater did not have a significant effect on water absorption, slump, or surface electrical resistivity tests. However, the 21-day compressive strength of concrete samples using 300 kg/m(3) of cement under rapid freezing and thawing conditions was about 11% lower than that of concrete samples made with drinking water.

  17. MMPI-2 Symptom Validity (FBS) Scale: psychometric characteristics and limitations in a Veterans Affairs neuropsychological setting.

    PubMed

    Gass, Carlton S; Odland, Anthony P

    2014-01-01

The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) Symptom Validity (Fake Bad Scale [FBS]) Scale is widely used to assist in determining noncredible symptom reporting, despite a paucity of detailed research regarding its itemmetric characteristics. Originally designed for use in civil litigation, the FBS is often used in a variety of clinical settings. The present study explored its fundamental psychometric characteristics in a sample of 303 patients who were consecutively referred for a comprehensive examination in a Veterans Affairs (VA) neuropsychology clinic. FBS internal consistency (reliability) was .77. Its underlying factor structure consisted of three unitary dimensions (Tiredness/Distractibility, Stomach/Head Discomfort, and Claimed Virtue of Self/Others) accounting for 28.5% of the total variance. The FBS's internal structure showed factorial discordance, as Claimed Virtue was negatively related to most of the FBS and to its somatic complaint components. Scores on this 12-item FBS component reflected a denial of socially undesirable attitudes and behaviors (Antisocial Practices Scale) that is commonly expressed by the 1,138 males in the MMPI-2 normative sample. These 12 items significantly reduced FBS reliability, introducing systematic error variance. In this VA neuropsychological referral setting, scores on the FBS have ambiguous meaning because of its structural discordance.

  18. Hierarchical Structure of the Eysenck Personality Inventory in a Large Population Sample: Goldberg's Trait-Tier Mapping Procedure

    PubMed Central

    Chapman, Benjamin P.; Weiss, Alexander; Barrett, Paul; Duberstein, Paul

    2014-01-01

    The structure of the Eysenck Personality Inventory (EPI) is poorly understood, and applications have mostly been confined to the broad Neuroticism, Extraversion, and Lie scales. Using a hierarchical factoring procedure, we mapped the sequential differentiation of EPI scales from broad, molar factors to more specific, molecular factors, in a UK population sample of over 6500 persons. Replicable facets at the lowest tier of Neuroticism included emotional fragility, mood lability, nervous tension, and rumination. The lowest order set of replicable Extraversion facets consisted of social dynamism, sociotropy, decisiveness, jocularity, social information seeking, and impulsivity. The Lie scale consisted of an interpersonal virtue and a behavioral diligence facet. Users of the EPI may be well served in some circumstances by considering its broad Neuroticism, Extraversion, and Lie scales as multifactorial, a feature that was explicitly incorporated into subsequent Eysenck inventories and is consistent with other hierarchical trait structures. PMID:25983361

  19. Deriving Sediment Interstitial Water Remediation Goals ...

    EPA Pesticide Factsheets

This document contains a methodology for developing interstitial water remediation goals (IWRGs) for nonionic organic pollutants (toxicants) in sediments for the protection of benthic organisms. The document provides the basis for using the final chronic values (FCVs) from EPA’s aquatic water quality criteria (AWQC) for the protection of aquatic life to set the IWRGs for toxicants in sediments. Concentrations of the toxicants in the sediment interstitial water are measured using passive sampling. This document also discusses how to evaluate the consistency between passive sampling measurements and sediment toxicity test results. When these data are consistent, one can be reasonably assured that the causes of toxicity to benthic organisms in the sediment have been correctly identified and that the developed IWRGs for the toxicants will be protective of the benthic organisms at the site. The consistency evaluation is an important step in developing defensible IWRGs.

  20. Inference of combinatorial Boolean rules of synergistic gene sets from cancer microarray datasets.

    PubMed

    Park, Inho; Lee, Kwang H; Lee, Doheon

    2010-06-15

    Gene set analysis has become an important tool for the functional interpretation of high-throughput gene expression datasets. Moreover, pattern analyses based on inferred gene set activities of individual samples have shown the ability to identify more robust disease signatures than individual gene-based pattern analyses. Although a number of approaches have been proposed for gene set-based pattern analysis, the combinatorial influence of deregulated gene sets on disease phenotype classification has not been studied sufficiently. We propose a new approach for inferring combinatorial Boolean rules of gene sets for a better understanding of cancer transcriptome and cancer classification. To reduce the search space of the possible Boolean rules, we identify small groups of gene sets that synergistically contribute to the classification of samples into their corresponding phenotypic groups (such as normal and cancer). We then measure the significance of the candidate Boolean rules derived from each group of gene sets; the level of significance is based on the class entropy of the samples selected in accordance with the rules. By applying the present approach to publicly available prostate cancer datasets, we identified 72 significant Boolean rules. Finally, we discuss several identified Boolean rules, such as the rule of glutathione metabolism (down) and prostaglandin synthesis regulation (down), which are consistent with known prostate cancer biology. Scripts written in Python and R are available at http://biosoft.kaist.ac.kr/~ihpark/. The refined gene sets and the full list of the identified Boolean rules are provided in the Supplementary Material. Supplementary data are available at Bioinformatics online.
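The rule-scoring step described above lends itself to a small sketch. Below is a minimal, hypothetical Python illustration (not the authors' code) of scoring a candidate Boolean rule by the class entropy of the samples it selects; the gene-set names, activities, and labels are invented for the example.

```python
import math
from typing import Callable, Dict, List

def class_entropy(labels: List[str]) -> float:
    """Shannon entropy (bits) of the class labels of a sample subset."""
    if not labels:
        return 0.0
    counts: Dict[str, int] = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    n = len(labels)
    # "+ 0.0" normalises the -0.0 produced when all terms are zero.
    return -sum((c / n) * math.log2(c / n) for c in counts.values()) + 0.0

def rule_entropy(activity: List[Dict[str, bool]], labels: List[str],
                 rule: Callable[[Dict[str, bool]], bool]) -> float:
    """Class entropy of the samples selected by a Boolean rule over
    gene-set activities (lower = purer selection = more significant rule)."""
    selected = [labels[i] for i, a in enumerate(activity) if rule(a)]
    return class_entropy(selected)

# Invented toy data: True = gene set up, False = down, per sample.
activity = [
    {"glutathione": False, "prostaglandin": False},
    {"glutathione": False, "prostaglandin": False},
    {"glutathione": True,  "prostaglandin": True},
    {"glutathione": True,  "prostaglandin": False},
]
labels = ["cancer", "cancer", "normal", "normal"]
rule = lambda a: not a["glutathione"] and not a["prostaglandin"]
print(rule_entropy(activity, labels, rule))  # selects only cancer samples -> 0.0
```

A rule whose selected samples are all of one phenotype yields entropy 0; a rule that selects a 50/50 mixture yields 1 bit.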

  1. Studying the time trend of Methicillin-resistant Staphylococcus aureus (MRSA) in Norway by use of non-stationary γ-Poisson distributions.

    PubMed

    Moxnes, John F; Moen, Aina E Fossum; Leegaard, Truls Michael

    2015-10-05

We studied the time development of methicillin-resistant Staphylococcus aureus (MRSA) in Norway to forecast future behaviour. The major question: is the number of MRSA isolates in Norway increasing, and will it continue to increase? We performed a time trend analysis using non-stationary γ-Poisson distributions. Two data sets were analysed. The first data set (data set I) consists of all MRSA isolates collected in Oslo County from 1997 to 2010; the study area includes the Norwegian capital of Oslo and nearby surrounding areas, covering approximately 11% of the Norwegian population. The second data set (data set II) consists of all MRSA isolates collected in Health Region East from 2002 to 2011. Health Region East consists of Oslo County and four neighbouring counties, and is the most populated area of Norway. Both data sets I and II consist of all persons in the area and time period described above from whom MRSA was isolated. MRSA infections have been mandatorily notifiable in Norway since 1995, and MRSA colonisation since 2004. In the time period studied, all bacterial samples in Norway were sent to a medical microbiological laboratory at the regional hospital for testing. In collaboration with the regional hospitals in five counties, we collected all MRSA findings in the south-eastern part of Norway over long time periods. On average, a linear or exponential increase in MRSA numbers was observed in the data sets. A Poisson process with increasing intensity did not capture the dispersion of the time series, but a γ-Poisson process showed good agreement and captured the overdispersion. The numerical model was internally consistent. We find that the number of MRSA isolates is increasing in the most populated area of Norway during the time period studied, and we forecast a continued increase until the year 2017.
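The γ-Poisson (negative binomial) choice can be motivated with a quick overdispersion check. The sketch below is illustrative Python, not the authors' model: it computes the variance-to-mean ratio of a count series and method-of-moments parameters for a gamma-mixed Poisson; the counts are invented.

```python
from statistics import mean, pvariance

def overdispersion_index(counts):
    """Variance-to-mean ratio of a count series: ~1 for a homogeneous
    Poisson process, >1 signals overdispersion (the gamma-Poisson regime)."""
    m = mean(counts)
    return pvariance(counts, m) / m

def gamma_poisson_moments(counts):
    """Method-of-moments fit of a negative binomial (gamma-mixed Poisson):
    returns (r, p) with mean r*(1-p)/p and variance r*(1-p)/p**2."""
    m = mean(counts)
    v = pvariance(counts, m)
    if v <= m:
        raise ValueError("no overdispersion; a plain Poisson suffices")
    p = m / v                # since variance = mean / p
    r = m * p / (1 - p)      # equivalently m**2 / (v - m)
    return r, p

# Invented yearly isolate counts with a noisy, dispersed pattern.
counts = [2, 9, 3, 12, 4, 14]
print(overdispersion_index(counts))   # > 1: overdispersed
print(gamma_poisson_moments(counts))
```

When the index is close to 1 a plain Poisson model suffices; well above 1, the gamma mixing parameter absorbs the extra variance, which is the behaviour the abstract reports for the MRSA series.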

  2. Factor structure of the Japanese version of the Edinburgh Postnatal Depression Scale in the postpartum period.

    PubMed

    Kubota, Chika; Okada, Takashi; Aleksic, Branko; Nakamura, Yukako; Kunimoto, Shohko; Morikawa, Mako; Shiino, Tomoko; Tamaji, Ai; Ohoka, Harue; Banno, Naomi; Morita, Tokiko; Murase, Satomi; Goto, Setsuko; Kanai, Atsuko; Masuda, Tomoko; Ando, Masahiko; Ozaki, Norio

    2014-01-01

The Edinburgh Postnatal Depression Scale (EPDS) is a widely used screening tool for postpartum depression (PPD). Although the reliability and validity of the Japanese version of the EPDS have been confirmed, and the prevalence of PPD is about the same as in Western countries, the factor structure of the Japanese version has not yet been elucidated. A total of 690 Japanese mothers completed all items of the EPDS at 1 month postpartum. We divided them randomly into two sample sets. The first sample set (n = 345) was used for exploratory factor analysis, and the second sample set (n = 345) for confirmatory factor analysis. The exploratory factor analysis indicated a three-factor model consisting of anxiety, depression and anhedonia. The confirmatory factor analysis suggested that the anxiety and anhedonia factors existed for the EPDS in a sample of Japanese women at 1 month postpartum, whereas the depression factor varied across the models of acceptable fit. In conclusion, "anxiety" and "anhedonia" factors exist in the EPDS among postpartum women in Japan, as already reported in Western countries. Cross-cultural comparisons are needed in future research.

  3. The EIPeptiDi tool: enhancing peptide discovery in ICAT-based LC MS/MS experiments.

    PubMed

    Cannataro, Mario; Cuda, Giovanni; Gaspari, Marco; Greco, Sergio; Tradigo, Giuseppe; Veltri, Pierangelo

    2007-07-15

Isotope-coded affinity tags (ICAT) is a method for quantitative proteomics based on differential isotopic labeling, sample digestion and mass spectrometry (MS). The method allows the identification and relative quantification of proteins present in two samples and consists of the following phases. First, cysteine residues are either labeled using the ICAT Light or ICAT Heavy reagent (having identical chemical properties but different masses). Then, after whole sample digestion, the labeled peptides are captured selectively using the biotin tag contained in both ICAT reagents. Finally, the simplified peptide mixture is analyzed by nanoscale liquid chromatography-tandem mass spectrometry (LC-MS/MS). Nevertheless, the ICAT LC-MS/MS method still suffers from insufficient sample-to-sample reproducibility in peptide identification. In particular, the number and the type of peptides identified in different experiments can vary considerably and, thus, the statistical (comparative) analysis of sample sets is very challenging. Low information overlap at the peptide and, consequently, at the protein level, is very detrimental in situations where the number of samples to be analyzed is high. We designed a method for improving the data processing and peptide identification in sample sets subjected to ICAT labeling and LC-MS/MS analysis, based on cross validating MS/MS results. Such a method has been implemented in a tool, called EIPeptiDi, which boosts the ICAT data analysis software, improving peptide identification throughout the input data set. Heavy/Light (H/L) pairs quantified but not identified by the MS/MS routine are assigned to peptide sequences identified in other samples, by using similarity criteria based on chromatographic retention time and Heavy/Light mass attributes.
EIPeptiDi significantly improves the number of identified peptides per sample, proving that the proposed method has a considerable impact on the protein identification process and, consequently, on the amount of potentially critical information in clinical studies. The EIPeptiDi tool is available at http://bioingegneria.unicz.it/~veltri/projects/eipeptidi/ with a demo data set. EIPeptiDi significantly increases the number of peptides identified and quantified in analyzed samples, thus reducing the number of unassigned H/L pairs and allowing a better comparative analysis of sample data sets.
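The core matching idea (assigning quantified-but-unidentified H/L pairs to peptide sequences identified in other samples, using retention-time and mass similarity) can be sketched in Python. The tolerances and peptide data below are invented for illustration; this is not the published implementation.

```python
def assign_unidentified_pairs(unidentified, identified,
                              rt_tol=0.5, mass_tol=0.02):
    """Assign peptide sequences to quantified-but-unidentified Heavy/Light
    pairs by matching chromatographic retention time (minutes) and light
    peptide mass (Da) against pairs identified in other samples.
    Tolerances are invented for illustration."""
    assignments = {}
    for pair_id, (rt, mass) in unidentified.items():
        candidates = [
            (abs(rt - ident_rt), seq)
            for seq, (ident_rt, ident_mass) in identified.items()
            if abs(rt - ident_rt) <= rt_tol and abs(mass - ident_mass) <= mass_tol
        ]
        if candidates:
            assignments[pair_id] = min(candidates)[1]  # closest retention time wins
    return assignments

# Invented data: sequence -> (retention time, mass).
identified = {"LVNELTEFAK": (23.4, 1148.60), "YLYEIAR": (18.1, 926.49)}
unidentified = {"pair1": (23.6, 1148.61), "pair2": (40.0, 500.30)}
print(assign_unidentified_pairs(unidentified, identified))
# pair1 matches LVNELTEFAK; pair2 stays unassigned
```

Pairs with no candidate inside both tolerance windows remain unassigned, mirroring the fact that the tool reduces, but does not eliminate, the set of unassigned H/L pairs.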

  4. Set Up of an Automatic Water Quality Sampling System in Irrigation Agriculture

    PubMed Central

    Heinz, Emanuel; Kraft, Philipp; Buchen, Caroline; Frede, Hans-Georg; Aquino, Eugenio; Breuer, Lutz

    2014-01-01

    We have developed a high-resolution automatic sampling system for continuous in situ measurements of stable water isotopic composition and nitrogen solutes along with hydrological information. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring Down Spectrometry System (WS-CRDS) for stable water isotope analysis (δ2H and δ18O), a reagentless hyperspectral UV photometer (ProPS) for monitoring nitrate content and various water level sensors for hydrometric information. The automatic sampling system consists of different sampling stations equipped with pumps, a switch cabinet for valve and pump control and a computer operating the system. The complete system is operated via internet-based control software, allowing supervision from nearly anywhere. The system is currently set up at the International Rice Research Institute (Los Baños, The Philippines) in a diversified rice growing system to continuously monitor water and nutrient fluxes. Here we present the system's technical set-up and provide initial proof-of-concept with results for the isotopic composition of different water sources and nitrate values from the 2012 dry season. PMID:24366178

  5. Role of the urate transporter SLC2A9 gene in susceptibility to gout in New Zealand Māori, Pacific Island, and Caucasian case-control sample sets.

    PubMed

    Hollis-Moffatt, Jade E; Xu, Xin; Dalbeth, Nicola; Merriman, Marilyn E; Topless, Ruth; Waddell, Chloe; Gow, Peter J; Harrison, Andrew A; Highton, John; Jones, Peter B B; Stamp, Lisa K; Merriman, Tony R

    2009-11-01

To examine the role of genetic variation in the renal urate transporter SLC2A9 in gout in New Zealand sample sets of Māori, Pacific Island, and Caucasian ancestry and to determine if the Māori and Pacific Island samples could be useful for fine-mapping. Patients (n = 56 Māori, 69 Pacific Island, and 131 Caucasian) were recruited from rheumatology outpatient clinics and satisfied the American College of Rheumatology criteria for gout. The control samples comprised 125 Māori subjects, 41 Pacific Island subjects, and 568 Caucasian subjects without arthritis. SLC2A9 single-nucleotide polymorphisms rs16890979 (V253I), rs5028843, rs11942223, and rs12510549 were genotyped (possible etiologic variants in Caucasians). Association of the major allele of rs16890979, rs11942223, and rs5028843 with gout was observed in all sample sets (P = 3.7 × 10⁻⁷, 1.6 × 10⁻⁶, and 7.6 × 10⁻⁵ for rs11942223 in the Māori, Pacific Island, and Caucasian samples, respectively). One 4-marker haplotype (1/1/2/1; more prevalent in the Māori and Pacific Island control samples) was not observed in a single gout case. Our data confirm a role of SLC2A9 in gout susceptibility in a New Zealand Caucasian sample set, with the effect on risk (odds ratio >2.0) greater than previous estimates. We also demonstrate association of SLC2A9 with gout in samples of Māori and Pacific Island ancestry and a consistent pattern of haplotype association. The presence of both alleles of rs16890979 on susceptibility and protective haplotypes in the Māori and Pacific Island samples is evidence against a role for this nonsynonymous variant as the sole etiologic agent. More extensive linkage disequilibrium in Māori and Pacific Island samples suggests that Caucasian samples may be more useful for fine-mapping.

  6. The variability of pH in convective storms

    Treesearch

    Richard G. Semonin

    1976-01-01

    The rainwater pH was measured in a total of 22 storms which occurred in 1972 and 1974 in the METROMEX (METROpolitan Meteorological EXperiment) rainwater sampling network. The network consists of 81 collectors in an area of 1800 km² over and east of St. Louis, Missouri. The data set is composed of dry...

  7. Creating the spatial framework for National Aquatic Resource Surveys (NARS): Melding National Aquatic Data Sets with Survey Requirements

    EPA Science Inventory

    The U.S. EPA’s National Aquatic Resource Surveys (NARS) require a consistent spatial representation of the resource target populations being monitored (i.e., rivers and streams, lakes, coastal waters, and wetlands). A sample frame is the GIS representation of this target popula...

  8. The Role of Emotion Expectancies in Adolescents' Moral Decision Making

    ERIC Educational Resources Information Center

    Krettenauer, Tobias; Jia, Fanli; Mosleh, Maureen

    2011-01-01

    This study investigated the impact of emotion expectancies on adolescents' moral decision making in hypothetical situations. The sample consisted of 160 participants from three different grade levels (mean age=15.79 years, SD=2.96). Participants were confronted with a set of scenarios that described various emotional outcomes of (im)moral actions…

  9. Preschool Teachers' Professional Background, Process Quality, and Job Attitudes: A Person-Centered Approach

    ERIC Educational Resources Information Center

    Jeon, Lieny; Buettner, Cynthia K.; Hur, Eunhye

    2016-01-01

    Research Findings: This exploratory study identified preschool teacher quality profiles in early childhood education settings using 9 indicators across teachers' professional background, observed process quality, and job attitudes toward teaching (e.g., job-related stress, satisfaction, and intention to leave the job). The sample consisted of 96…

  10. Investigating the Incremental Validity of Cognitive Variables in Early Mathematics Screening

    ERIC Educational Resources Information Center

    Clarke, Ben; Shanley, Lina; Kosty, Derek; Baker, Scott K.; Cary, Mari Strand; Fien, Hank; Smolkowski, Keith

    2018-01-01

    The purpose of this study was to investigate the incremental validity of a set of domain general cognitive measures added to a traditional screening battery of early numeracy measures. The sample consisted of 458 kindergarten students of whom 285 were designated as severely at-risk for mathematics difficulty. Hierarchical multiple regression…

  11. User's Guide to the Stand Prognosis Model

    Treesearch

    William R. Wykoff; Nicholas L. Crookston; Albert R. Stage

    1982-01-01

    The Stand Prognosis Model is a computer program that projects the development of forest stands in the Northern Rocky Mountains. Thinning options allow for simulation of a variety of management strategies. Input consists of a stand inventory, including sample tree records, and a set of option selection instructions. Output includes data normally found in stand, stock,...

  12. Dynamic sample size detection in learning command line sequence for continuous authentication.

    PubMed

    Traore, Issa; Woungang, Isaac; Nakkabi, Youssef; Obaidat, Mohammad S; Ahmed, Ahmed Awad E; Khalilian, Bijan

    2012-10-01

    Continuous authentication (CA) consists of authenticating the user repetitively throughout a session with the goal of detecting and protecting against session hijacking attacks. While the accuracy of the detector is central to the success of CA, the detection delay or length of an individual authentication period is important as well since it is a measure of the window of vulnerability of the system. However, high accuracy and small detection delay are conflicting requirements that need to be balanced for optimum detection. In this paper, we propose the use of sequential sampling technique to achieve optimum detection by trading off adequately between detection delay and accuracy in the CA process. We illustrate our approach through CA based on user command line sequence and naïve Bayes classification scheme. Experimental evaluation using the Greenberg data set yields encouraging results consisting of a false acceptance rate (FAR) of 11.78% and a false rejection rate (FRR) of 1.33%, with an average command sequence length (i.e., detection delay) of 37 commands. When using the Schonlau (SEA) data set, we obtain FAR = 4.28% and FRR = 12%.
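One standard way to realise the sequential sampling trade-off between detection delay and accuracy is Wald's sequential probability ratio test. The sketch below is a toy Python illustration, not the paper's system: the invented per-command probabilities stand in for trained naive Bayes models of the genuine user and an impostor.

```python
import math

def sprt_authenticate(commands, p_user, p_impostor,
                      alpha=0.05, beta=0.05, floor=1e-6):
    """Wald's sequential probability ratio test over a command stream:
    accumulate the impostor-vs-user log-likelihood ratio command by command
    and decide as soon as a boundary is crossed; until then, keep sampling.
    Returns (decision, number of commands consumed)."""
    upper = math.log((1 - beta) / alpha)   # crossed upward  -> impostor
    lower = math.log(beta / (1 - alpha))   # crossed downward -> genuine user
    llr = 0.0
    for n, cmd in enumerate(commands, start=1):
        llr += math.log(p_impostor.get(cmd, floor) / p_user.get(cmd, floor))
        if llr >= upper:
            return "impostor", n
        if llr <= lower:
            return "genuine", n
    return "undecided", len(commands)

# Invented per-command probabilities standing in for trained models.
p_user = {"ls": 0.4, "cd": 0.3, "vim": 0.2, "make": 0.1}
p_impostor = {"ls": 0.1, "cd": 0.1, "wget": 0.4, "chmod": 0.4}
print(sprt_authenticate(["ls", "cd", "vim", "ls", "make"], p_user, p_impostor))
# -> ('genuine', 3): decided after only three commands
```

Tightening alpha and beta widens the decision boundaries, improving accuracy at the cost of a longer command sequence (detection delay), which is exactly the trade-off the abstract describes.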

  13. Genetic Analysis of Association Between Calcium Signaling and Hippocampal Activation, Memory Performance in the Young and Old, and Risk for Sporadic Alzheimer Disease.

    PubMed

    Heck, Angela; Fastenrath, Matthias; Coynel, David; Auschra, Bianca; Bickel, Horst; Freytag, Virginie; Gschwind, Leo; Hartmann, Francina; Jessen, Frank; Kaduszkiewicz, Hanna; Maier, Wolfgang; Milnik, Annette; Pentzek, Michael; Riedel-Heller, Steffi G; Spalek, Klara; Vogler, Christian; Wagner, Michael; Weyerer, Siegfried; Wolfsgruber, Steffen; de Quervain, Dominique J-F; Papassotiropoulos, Andreas

    2015-10-01

Human episodic memory performance is linked to the function of specific brain regions, including the hippocampus; declines as a result of increasing age; and is markedly disturbed in Alzheimer disease (AD), an age-associated neurodegenerative disorder that primarily affects the hippocampus. Exploring the molecular underpinnings of human episodic memory is key to the understanding of hippocampus-dependent cognitive physiology and pathophysiology. To determine whether biologically defined groups of genes are enriched in episodic memory performance across age, memory encoding-related brain activity, and AD. In this multicenter collaborative study, which began in August 2008 and is ongoing, gene set enrichment analysis was done by using primary and meta-analysis data from 57,968 participants. The Swiss cohorts consisted of 3,043 healthy young adults assessed for episodic memory performance. In a subgroup (n = 1,119) of one of these cohorts, functional magnetic resonance imaging was used to identify gene set-dependent differences in brain activity related to episodic memory. The German Study on Aging, Cognition, and Dementia in Primary Care Patients cohort consisted of 763 elderly participants without dementia who were assessed for episodic memory performance. The International Genomics of Alzheimer's Project case-control sample consisted of 54,162 participants (17,008 patients with sporadic AD and 37,154 control participants). Analyses were conducted between January 2014 and June 2015. Gene set enrichment analysis in all samples was done using genome-wide single-nucleotide polymorphism data. Episodic memory performance in the Swiss cohort and German Study on Aging, Cognition, and Dementia in Primary Care Patients cohort was quantified by picture and verbal delayed free recall tasks. In the functional magnetic resonance imaging experiment, activation of the hippocampus during encoding of pictures served as the phenotype of interest.
In the International Genomics of Alzheimer's Project sample, diagnosis of sporadic AD served as the phenotype of interest. In the discovery sample, we detected significant enrichment for genes constituting the calcium signaling pathway, especially those related to the elevation of cytosolic calcium (P = 2 × 10⁻⁴). This enrichment was replicated in 2 additional samples of healthy young individuals (P = .02 and .04, respectively) and a sample of healthy elderly participants (P = .004). Hippocampal activation (P = 4 × 10⁻⁴) and the risk for sporadic AD (P = .01) were also significantly enriched for genes related to the elevation of cytosolic calcium. By detecting consistent significant enrichment in independent cohorts of young and elderly participants, this study showed that calcium signaling plays a central role in hippocampus-dependent human memory processes in cognitive health and disease, contributing to the understanding and potential treatment of hippocampus-dependent cognitive pathology.

  14. Use of ProteinChip technology for identifying biomarkers of parasitic diseases: the example of porcine cysticercosis (Taenia solium).

    PubMed

    Deckers, N; Dorny, P; Kanobana, K; Vercruysse, J; Gonzalez, A E; Ward, B; Ndao, M

    2008-12-01

Taenia solium cysticercosis is a significant public health problem in endemic countries. The current serodiagnostic techniques are not able to differentiate between infections with viable cysts and infections with degenerated cysts. The objectives of this study were to identify specific novel biomarkers of these different disease stages in the serum of experimentally infected pigs using ProteinChip technology (Bio-Rad) and to validate these biomarkers by analyzing serum samples from naturally infected pigs. In the experimental sample set, 30 discriminating biomarkers (p < 0.05) were found: 13 specific for the viable phenotype, 9 for the degenerated phenotype and 8 for the infected phenotype (either viable or degenerated cysts). Only 3 of these biomarkers were also significant in the field samples; however, the peak profiles were not consistent between the two sample sets. Five biomarkers discovered in the sera from experimentally infected pigs were identified as clusterin, lecithin-cholesterol acyltransferase, vitronectin, haptoglobin and apolipoprotein A-I.

  15. Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.

    PubMed

    Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís

    2010-10-01

Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal-distribution setting or empirically in a distribution-free setting, when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference on the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence-interval precision and sample-size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
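Under the normal-distribution setting, the cost-minimising threshold can be found numerically. The following Python sketch is illustrative only: the prevalence, costs, and grid search are assumptions, and it ignores the sampling-uncertainty term of the paper's cost function.

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def optimal_threshold(mu0, sd0, mu1, sd1, prev=0.5, c_fp=1.0, c_fn=1.0,
                      grid=2000):
    """Grid search for the marker threshold minimising expected decision
    cost under normal marker distributions for non-diseased (mu0, sd0)
    and diseased (mu1, sd1) subjects."""
    lo = min(mu0, mu1) - 4 * max(sd0, sd1)
    hi = max(mu0, mu1) + 4 * max(sd0, sd1)
    best_t, best_cost = lo, float("inf")
    for i in range(grid + 1):
        t = lo + (hi - lo) * i / grid
        fp = 1.0 - norm_cdf(t, mu0, sd0)   # non-diseased above threshold
        fn = norm_cdf(t, mu1, sd1)         # diseased below threshold
        cost = (1 - prev) * c_fp * fp + prev * c_fn * fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Equal variances, costs, and priors: the optimum sits midway between means.
print(round(optimal_threshold(0.0, 1.0, 2.0, 1.0), 2))  # ≈ 1.0
```

Raising the false-negative cost pulls the threshold toward the non-diseased mean, trading specificity for sensitivity, as the cost-function formulation implies.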

  16. Method for outlier detection: a tool to assess the consistency between laboratory data and ultraviolet-visible absorbance spectra in wastewater samples.

    PubMed

    Zamora, D; Torres, A

    2014-01-01

Reliable estimation of the evolution of water quality parameters by using in situ technologies makes it possible to follow the operation of a wastewater treatment plant (WWTP), as well as improving the understanding and control of the operation, especially in the detection of disturbances. However, ultraviolet (UV)-Vis sensors have to be calibrated by means of a local data set that pairs spectral fingerprints with laboratory reference concentrations. The detection of outliers in these data sets is therefore important. This paper presents a method for detecting outliers in UV-Vis absorbances coupled to water quality reference laboratory concentrations for samples used for calibration purposes. Application to samples from the influent of the San Fernando WWTP (Medellín, Colombia) is shown. After the removal of outliers, improvements in the predictability of the influent concentrations using absorbance spectra were found.
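A simplified stand-in for such an outlier screen can be sketched in Python, assuming a single absorbance predictor and robust (median-absolute-deviation) residual scoring rather than the paper's actual method; the calibration data are invented.

```python
from statistics import median

def flag_outliers(absorbance, lab_conc, z_cut=3.5):
    """Fit lab concentration vs. absorbance by ordinary least squares and
    flag samples whose residual lies beyond z_cut robust standard
    deviations (median absolute deviation scaled by 1.4826)."""
    n = len(absorbance)
    mx = sum(absorbance) / n
    my = sum(lab_conc) / n
    sxx = sum((x - mx) ** 2 for x in absorbance)
    sxy = sum((x - mx) * (y - my) for x, y in zip(absorbance, lab_conc))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [y - (intercept + slope * x) for x, y in zip(absorbance, lab_conc)]
    med = median(resid)
    mad = median(abs(r - med) for r in resid)
    return [i for i, r in enumerate(resid) if abs(r - med) > z_cut * 1.4826 * mad]

# Invented calibration set; sample 3's lab value is corrupted.
absorb = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60]
conc = [12, 21, 33, 80, 51, 62]
print(flag_outliers(absorb, conc))  # -> [3]
```

The MAD-based scale is used instead of the plain residual standard deviation because a single gross error inflates the latter enough to mask itself.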

  17. Performance audits and laboratory comparisons for SCOS97-NARSTO measurements of speciated volatile organic compounds

    NASA Astrophysics Data System (ADS)

    Fujita, Eric M.; Harshfield, Gregory; Sheetz, Laurence

Performance audits and laboratory comparisons were conducted as part of the quality assurance program for the 1997 Southern California Ozone Study (SCOS97-NARSTO) to document potential measurement biases among laboratories measuring speciated nonmethane hydrocarbons (NMHC), carbonyl compounds, halogenated compounds, and biogenic hydrocarbons. The results show that measurements of volatile organic compounds (VOC) made during SCOS97-NARSTO are generally consistent with specified data quality objectives. The hydrocarbon comparison involved nine laboratories and consisted of two sets of collocated ambient samples. The coefficients of variation among laboratories for the sum of the 55 PAMS target compounds and total NMHC ranged from ±5 to 15 percent for ambient samples from Los Angeles and Azusa. Abundant hydrocarbons are consistently identified by all laboratories, but discrepancies occur for olefins greater than C4 and for hydrocarbons greater than C8. Laboratory comparisons for halogenated compounds and biogenic hydrocarbons consisted of both concurrent ambient sampling by different laboratories and round-robin analysis of ambient samples. The coefficients of variation among participating laboratories were about 10-20 percent. Performance audits were conducted for measurement of carbonyl compounds involving sampling from a standard mixture of carbonyl compounds. The values reported by most of the laboratories were within 10-20 percent of those of the reference laboratory. Results of field measurement comparisons showed larger variations among the laboratories, ranging from 20 to 40 percent for C1-C3 carbonyl compounds. The greater variations observed in the field measurement comparison may reflect potential sampling artifacts, which the performance audits did not address.

  18. Pb isotope compositions of modern deep sea turbidites

    NASA Astrophysics Data System (ADS)

    Hemming, S. R.; McLennan, S. M.

    2001-01-01

Modern deep sea turbidite muds and sands collected from Lamont piston cores represent a large range in age of detrital sources as well as a spectrum of tectonic settings. Pb isotope compositions of all but three of the 66 samples lie to the right of the 4.56 Ga Geochron, and most also lie along a slope consistent with a time-integrated κ (²³²Th/²³⁸U) between 3.8 and 4.2. Modern deep sea turbidites show a predictable negative correlation between both Pb and Sr isotope ratios and εNd and εHf, clearly related to the age of continental sources. However, the consistency between Pb and Nd isotopes breaks down for samples with very old provenance (εNd < −20) that are far less radiogenic than predicted by the negative correlation. The correlations among Sr, Nd and Hf isotopes also become more scattered in samples with very old provenance. The unradiogenic Pb isotopic character of modern sediments with Archean Nd model ages is consistent with a model where Th and U abundances of the Archean upper crust are significantly lower than the post-Archean upper crust.

  19. Measuring consistent masses for 25 Milky Way globular clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimmig, Brian; Seth, Anil; Ivans, Inese I.

    2015-02-01

We present central velocity dispersions, masses, mass-to-light ratios (M/Ls), and rotation strengths for 25 Galactic globular clusters (GCs). We derive radial velocities of 1951 stars in 12 GCs from single order spectra taken with Hectochelle on the MMT telescope. To this sample we add an analysis of available archival data of individual stars. For the full set of data we fit King models to derive consistent dynamical parameters for the clusters. We find good agreement between single-mass King models and the observed radial dispersion profiles. The large, uniform sample of dynamical masses we derive enables us to examine trends of M/L with cluster mass and metallicity. The overall values of M/L and the trends with mass and metallicity are consistent with existing measurements from a large sample of M31 clusters. This includes a clear trend of increasing M/L with cluster mass and lower than expected M/Ls for the metal-rich clusters. We find no clear trend of increasing rotation with increasing cluster metallicity, as suggested in previous work.

  20. Simultaneous Identification of Multiple Driver Pathways in Cancer

    PubMed Central

    Leiserson, Mark D. M.; Blokh, Dima

    2013-01-01

    Distinguishing the somatic mutations responsible for cancer (driver mutations) from random, passenger mutations is a key challenge in cancer genomics. Driver mutations generally target cellular signaling and regulatory pathways consisting of multiple genes. This heterogeneity complicates the identification of driver mutations by their recurrence across samples, as different combinations of mutations in driver pathways are observed in different samples. We introduce the Multi-Dendrix algorithm for the simultaneous identification of multiple driver pathways de novo in somatic mutation data from a cohort of cancer samples. The algorithm relies on two combinatorial properties of mutations in a driver pathway: high coverage and mutual exclusivity. We derive an integer linear program that finds sets of mutations exhibiting these properties. We apply Multi-Dendrix to somatic mutations from glioblastoma, breast cancer, and lung cancer samples. Multi-Dendrix identifies sets of mutations in genes that overlap with known pathways – including Rb, p53, PI(3)K, and cell cycle pathways – and also novel sets of mutually exclusive mutations, including mutations in several transcription factors or other genes involved in transcriptional regulation. These sets are discovered directly from mutation data with no prior knowledge of pathways or gene interactions. We show that Multi-Dendrix outperforms other algorithms for identifying combinations of mutations and is also orders of magnitude faster on genome-scale data. Software available at: http://compbio.cs.brown.edu/software. PMID:23717195
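
    The two combinatorial properties named above, high coverage and mutual exclusivity, can be made concrete with a toy brute-force sketch of a coverage-minus-overlap objective of the kind the Multi-Dendrix integer linear program maximizes at scale. The patient IDs, gene names, and mutation data below are hypothetical illustrations, not data from the study.

```python
from itertools import combinations

# Toy mutation data: patient -> set of mutated genes (hypothetical).
mutations = {
    "P1": {"TP53"}, "P2": {"RB1"}, "P3": {"TP53"},
    "P4": {"CDK4"}, "P5": {"TP53", "EGFR"}, "P6": {"EGFR"},
}

def coverage_overlap_weight(gene_set, mutations):
    """Dendrix-style weight: 2*|patients covered| minus the sum of per-gene
    patient counts. High coverage raises it; co-occurring (non-exclusive)
    mutations lower it."""
    covered = {p for p, genes in mutations.items() if genes & gene_set}
    per_gene = sum(sum(1 for gs in mutations.values() if g in gs)
                   for g in gene_set)
    return 2 * len(covered) - per_gene

def best_gene_set(mutations, k):
    """Exhaustively score all k-gene sets (feasible only for toy data;
    the ILP formulation handles genome-scale inputs)."""
    genes = set().union(*mutations.values())
    return max(combinations(sorted(genes), k),
               key=lambda s: coverage_overlap_weight(set(s), mutations))
```

    On this toy cohort, {TP53, RB1, CDK4} wins: it covers five of six patients with no patient mutated in two of its genes.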

  1. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables casts doubt on ecological interpretations derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
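
    The gap the abstract describes between resubstitution accuracy (scoring a model on its own training data) and cross-validation accuracy can be demonstrated with a minimal stdlib sketch. This is not the authors' classification-tree pipeline; a memorizing 1-nearest-neighbour classifier on deliberately unlearnable (random-label) data stands in to show why resubstitution is optimistic.

```python
import random

def nn1_predict(train, x):
    """1-nearest-neighbour: memorizes training data, so resubstitution
    accuracy is trivially perfect."""
    return min(train, key=lambda t: abs(t[0] - x))[1]

def accuracy(train, test):
    return sum(nn1_predict(train, x) == y for x, y in test) / len(test)

def cross_val_accuracy(data, k=5):
    """Plain k-fold cross-validation: each fold is scored by a model that
    never saw it, approximating true prediction accuracy."""
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        train = [d for j, f in enumerate(folds) if j != i for d in f]
        scores.append(accuracy(train, folds[i]))
    return sum(scores) / k

random.seed(0)
# Labels are pure coin flips: nothing is actually learnable.
data = [(random.random(), random.choice([0, 1])) for _ in range(200)]
resub = accuracy(data, data)   # evaluated on its own training set: 100%
cv = cross_val_accuracy(data)  # honest estimate, near chance (~0.5)
```

    Resubstitution reports perfect accuracy on noise, while cross-validation correctly reports near-chance performance, mirroring the paper's argument for reporting cross-validated rates.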

  2. The topology of large-scale structure. III - Analysis of observations

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.; Weinberg, David H.; Gammie, Charles; Polk, Kevin; Vogeley, Michael; Jeffrey, Scott; Bhavsar, Suketu P.; Melott, Adrian L.; Giovanelli, Riccardo; Haynes, Martha P.; Tully, R. Brent; Hamilton, Andrew J. S.

    1989-05-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.

  3. The topology of large-scale structure. III - Analysis of observations. [in universe

    NASA Technical Reports Server (NTRS)

    Gott, J. Richard, III; Weinberg, David H.; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.

    1989-01-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.

  4. MASS CALIBRATION AND COSMOLOGICAL ANALYSIS OF THE SPT-SZ GALAXY CLUSTER SAMPLE USING VELOCITY DISPERSION σ_v AND X-RAY Y_X MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bocquet, S.; Saro, A.; Mohr, J. J.

    2015-02-01

    We present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg² of the survey along with 63 velocity dispersion (σ_v) and 16 X-ray Y_X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ_v and Y_X are consistent at the 0.6σ level, with the σ_v calibration preferring ∼16% higher masses. We use the full SPT_CL data set (SZ clusters + σ_v + Y_X) to measure σ_8(Ω_m/0.27)^0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is ∑m_ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger ∑m_ν further reconciles the results. When we combine the SPT_CL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y_X calibration and 0.8σ higher than the σ_v calibration. Given the scale of these shifts (∼44% and ∼23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ω_m = 0.299 ± 0.009 and σ_8 = 0.829 ± 0.011. Within a νCDM model we find ∑m_ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = −1.007 ± 0.065, demonstrating that the expansion and the growth histories are consistent with a ΛCDM universe (γ = 0.55; w = −1).

  5. Short-Term Intra-Subject Variation in Exhaled Volatile Organic Compounds (VOCs) in COPD Patients and Healthy Controls and Its Effect on Disease Classification

    PubMed Central

    Phillips, Christopher; Mac Parthaláin, Neil; Syed, Yasir; Deganello, Davide; Claypole, Timothy; Lewis, Keir

    2014-01-01

    Exhaled volatile organic compounds (VOCs) are of interest for their potential to diagnose disease non-invasively. However, most breath VOC studies have analyzed single breath samples from an individual and assumed them to be wholly representative of the person. This provided the motivation for an investigation of the variability of breath profiles when three breath samples are taken over a short time period (two-minute intervals between samples) for 118 stable patients with Chronic Obstructive Pulmonary Disease (COPD) and 63 healthy controls and analyzed by gas chromatography and mass spectrometry (GC/MS). The extent of the variation in VOC levels differed between COPD and healthy subjects, and the patterns of variation differed for isoprene versus the bulk of other VOCs. In addition, machine learning approaches were applied to the breath data to establish whether these samples differed in their ability to discriminate COPD from healthy states and whether aggregation of multiple samples into single data sets could offer improved discrimination. The three breath samples gave similar classification accuracy to one another when evaluated separately (66.5% to 68.3% of subjects classified correctly depending on the breath repetition used). Combining multiple breath samples into single data sets gave better discrimination (73.4% of subjects classified correctly). Although this accuracy is not sufficient for COPD diagnosis in a clinical setting, enhanced sampling and analysis may improve it further. Variability in samples, and short-term effects of practice or exertion, need to be considered in any breath testing program to improve reliability and optimize discrimination. PMID:24957028

  6. Short-Term Intra-Subject Variation in Exhaled Volatile Organic Compounds (VOCs) in COPD Patients and Healthy Controls and Its Effect on Disease Classification.

    PubMed

    Phillips, Christopher; Mac Parthaláin, Neil; Syed, Yasir; Deganello, Davide; Claypole, Timothy; Lewis, Keir

    2014-05-09

    Exhaled volatile organic compounds (VOCs) are of interest for their potential to diagnose disease non-invasively. However, most breath VOC studies have analyzed single breath samples from an individual and assumed them to be wholly representative of the person. This provided the motivation for an investigation of the variability of breath profiles when three breath samples are taken over a short time period (two-minute intervals between samples) for 118 stable patients with Chronic Obstructive Pulmonary Disease (COPD) and 63 healthy controls and analyzed by gas chromatography and mass spectrometry (GC/MS). The extent of the variation in VOC levels differed between COPD and healthy subjects, and the patterns of variation differed for isoprene versus the bulk of other VOCs. In addition, machine learning approaches were applied to the breath data to establish whether these samples differed in their ability to discriminate COPD from healthy states and whether aggregation of multiple samples into single data sets could offer improved discrimination. The three breath samples gave similar classification accuracy to one another when evaluated separately (66.5% to 68.3% of subjects classified correctly depending on the breath repetition used). Combining multiple breath samples into single data sets gave better discrimination (73.4% of subjects classified correctly). Although this accuracy is not sufficient for COPD diagnosis in a clinical setting, enhanced sampling and analysis may improve it further. Variability in samples, and short-term effects of practice or exertion, need to be considered in any breath testing program to improve reliability and optimize discrimination.

  7. Honest Importance Sampling with Multiple Markov Chains

    PubMed Central

    Tan, Aixin; Doss, Hani; Hobert, James P.

    2017-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. 
The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
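
    The classical iid importance-sampling estimator the abstract builds on can be sketched in a few lines. The target π = N(0, 1), proposal π1 = N(0, 2), test function, and sample size below are illustrative choices, not the paper's examples; the wider proposal keeps the density-ratio weights bounded.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_sampling(f, target_pdf, proposal_pdf, draw, n):
    """Self-normalized importance sampling: estimate E_target[f] from draws
    of the proposal, reweighted by target/proposal density ratios."""
    xs = [draw() for _ in range(n)]
    ws = [target_pdf(x) / proposal_pdf(x) for x in xs]
    return sum(w * f(x) for w, x in zip(ws, xs)) / sum(ws)

random.seed(1)
# Target pi = N(0,1); proposal pi1 = N(0,2).
est = importance_sampling(
    f=lambda x: x * x,                              # E_pi[X^2] = 1
    target_pdf=lambda x: normal_pdf(x, 0.0, 1.0),
    proposal_pdf=lambda x: normal_pdf(x, 0.0, 2.0),
    draw=lambda: random.gauss(0.0, 2.0),
    n=50_000,
)
```

    The paper's contribution begins where this sketch ends: replacing the iid draws with Harris ergodic Markov chains and using regeneration to recover a CLT and a consistent variance estimator.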

  8. Honest Importance Sampling with Multiple Markov Chains.

    PubMed

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.

  9. Effective Identification of Low-Gliadin Wheat Lines by Near Infrared Spectroscopy (NIRS): Implications for the Development and Analysis of Foodstuffs Suitable for Celiac Patients.

    PubMed

    García-Molina, María Dolores; García-Olmo, Juan; Barro, Francisco

    2016-01-01

    The aim of this work was to assess the ability of Near Infrared Spectroscopy (NIRS) to distinguish wheat lines with low gliadin content, obtained by RNA interference (RNAi), from non-transgenic wheat lines. The discriminant analysis was performed using both whole grain and flour. The transgenic sample set included 409 samples for whole grain sorting and 414 samples for flour experiments, while the non-transgenic set consisted of 126 and 156 samples for whole grain and flour, respectively. Samples were scanned using a Foss-NIR Systems 6500 System II instrument. Discrimination models were developed using the entire spectral range (400-2500 nm) and the ranges 400-780 nm, 800-1098 nm and 1100-2500 nm, followed by partial least squares (PLS) analysis. Two external validations were made using samples from the years 2013 and 2014, and a minimum of 99% of the flour samples and 96% of the whole grain samples were classified correctly. The results demonstrate the ability of NIRS to successfully discriminate between wheat samples with low-gliadin content and wild types. These findings are important for the development and analysis of foodstuffs for celiac disease (CD) patients to achieve better dietary composition and a reduction in disease incidence.

  10. Rapid, Reliable Shape Setting of Superelastic Nitinol for Prototyping Robots

    PubMed Central

    Gilbert, Hunter B.; Webster, Robert J.

    2016-01-01

    Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10°C. PMID:27648473

  11. Rapid, Reliable Shape Setting of Superelastic Nitinol for Prototyping Robots.

    PubMed

    Gilbert, Hunter B; Webster, Robert J

    Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10°C.

  12. Improved Hip-Based Individual Recognition Using Wearable Motion Recording Sensor

    NASA Astrophysics Data System (ADS)

    Gafurov, Davrondzhon; Bours, Patrick

    In today's society the demand for reliable verification of a user identity is increasing. Although biometric technologies based on fingerprint or iris can provide accurate and reliable recognition performance, they are inconvenient for periodic or frequent re-verification. In this paper we propose a hip-based user recognition method which can be suitable for implicit and periodic re-verification of identity. In our approach we use a wearable accelerometer sensor attached to the hip of the person, and the measured hip motion signal is then analysed for identity verification purposes. The main analysis steps consist of detecting gait cycles in the signal and matching two sets of detected gait cycles. Evaluating the approach on a hip data set consisting of 400 gait sequences (samples) from 100 subjects, we obtained an equal error rate (EER) of 7.5%, and the identification rate at rank 1 was 81.4%. These numbers are improvements of 37.5% and 11.2%, respectively, over a previous study using the same data set.
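
    The equal error rate quoted above is the operating point where the false accept rate (impostors accepted) equals the false reject rate (genuine users rejected). A minimal sketch, using hypothetical genuine/impostor matching scores rather than the paper's gait data:

```python
def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds; accept when score >= threshold.
    FAR = fraction of impostor scores accepted, FRR = fraction of genuine
    scores rejected; report the rate where |FAR - FRR| is smallest."""
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]

# Hypothetical matching scores: genuine comparisons score higher.
genuine = [0.9, 0.8, 0.85, 0.7, 0.6, 0.95]
impostor = [0.3, 0.4, 0.2, 0.5, 0.65, 0.1]
eer = equal_error_rate(genuine, impostor)  # 1/6: one error on each side
```

    Lowering the threshold trades false rejects for false accepts; the EER summarizes the whole trade-off curve in one number, which is why gait-recognition studies like this one report it.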

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritychenko, B.

    The precision of double-beta (ββ) decay experimental half-lives and their uncertainties is reanalyzed. The method of Benford's distributions has been applied to nuclear reaction, structure and decay data sets. The first-digit distribution trend for ββ-decay T1/2(2ν) values is consistent with large nuclear reaction and structure data sets and provides validation of the experimental half-lives. A complementary analysis of the decay uncertainties indicates deficiencies due to the small size of statistical samples and incomplete collection of experimental information. Further experimental and theoretical efforts would lead toward more precise values of ββ-decay half-lives and nuclear matrix elements.
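
    Benford's law predicts that the leading significant digit d of many naturally occurring data sets appears with probability log10(1 + 1/d). A minimal sketch of the kind of first-digit comparison used in such validation work; the helper names are ours and no values from the cited data sets appear below:

```python
import math
from collections import Counter

def first_digit(x):
    """Leading significant digit of a positive number (e.g. 0.0042 -> 4)."""
    x = abs(x)
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

def benford_expected(d):
    """Benford's law: P(first digit = d) = log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

def first_digit_deviation(values):
    """Max absolute gap between observed and Benford first-digit frequencies;
    a small gap is consistent with Benford-like behavior."""
    counts = Counter(first_digit(v) for v in values)
    n = len(values)
    return max(abs(counts.get(d, 0) / n - benford_expected(d))
               for d in range(1, 10))
```

    Applied to a compilation of half-lives spanning many orders of magnitude, a first-digit distribution close to the Benford expectation supports the internal consistency of the data set, which is the check the abstract describes.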

  14. [Determination of chlorogenic acid, caffeic acid and linarin in Flos Chrysanthemi Indici from different places by RP-HPLC].

    PubMed

    Guo, Qiaosheng; Fang, Hailing; Shen, Haijin

    2010-05-01

    To evaluate the quality of Flos Chrysanthemi Indici produced in twenty-two different producing places. Chlorogenic acid and caffeic acid were analyzed on a Shim-pack C8 column (4.6 mm x 250 mm, 5 microm) eluted with a mobile phase consisting of acetonitrile-0.5% phosphoric acid (19:81). The detection wavelength was set at 326 nm. Linarin was eluted with a mobile phase consisting of methanol-water-acetic acid (26:23:1). The detection wavelength was set at 334 nm. The column temperature was 25 degrees C. The flow rate was 1.0 mL x min(-1). The linear response ranged within 2.5-50 microg for chlorogenic acid (r = 0.998), 2.5-25 microg for caffeic acid (r = 0.998) and 4.97-41.47 microg for linarin (r = 0.999), respectively. Recoveries were 100.8% with RSD 2.1% for chlorogenic acid, 96.2% with RSD 2.3% for caffeic acid and 103.7% with RSD 1.8% for linarin. There was a significant difference in the content of chlorogenic acid, caffeic acid and linarin among the samples. The content of chlorogenic acid was highest in the sample from Fengdou, Chongqing city. The content of caffeic acid in all the samples was very low. The content of linarin in the samples from Jiangsu province and Anhui province almost reached the national standard in the pharmacopoeia.
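
    The linearity (r) and precision (RSD) figures quoted in validation work like this come from standard formulas: the Pearson correlation coefficient of the calibration curve and the relative standard deviation of replicate recoveries. A minimal sketch with hypothetical calibration and recovery data (not the paper's measurements):

```python
import math
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Correlation coefficient used to judge calibration linearity."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs) *
                           sum((y - my) ** 2 for y in ys))

def rsd_percent(values):
    """Relative standard deviation (%), the precision figure quoted
    for replicate recoveries."""
    return 100 * stdev(values) / mean(values)

# Hypothetical calibration: amount injected (ug) vs. peak area.
amounts = [2.5, 5, 10, 25, 50]
areas = [130, 262, 498, 1260, 2485]
r = pearson_r(amounts, areas)

# Hypothetical replicate recoveries (%) for a spiked sample.
recoveries = [99.1, 101.5, 100.2, 102.0, 98.9, 100.9]
```

    An r near 1 over the working range supports linearity, and a recovery mean near 100% with a small RSD supports accuracy and precision, which is exactly how the figures in the abstract should be read.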

  15. Final Report of Outcome of Southeastern New Mexico Bilingual Program.

    ERIC Educational Resources Information Center

    McCracken, Wanda

    The Southeastern New Mexico Bilingual Program's final report analyzed performance objectives to determine the outcome of the goals set for academic growth in the standard curriculum, as well as in the English and Spanish language arts, and growth in social development of students. The random sample consisted of 20 third and fourth graders from the…

  16. School-Based Speech-Language Pathologists' Use of iPads

    ERIC Educational Resources Information Center

    Romane, Garvin Philippe

    2017-01-01

    This study explored school-based speech-language pathologists' (SLPs') use of iPads and apps for speech and language instruction, specifically for articulation, language, and vocabulary goals. A mostly quantitative-based survey was administered to approximately 2,800 SLPs in a K-12 setting; the final sample consisted of 189 licensed SLPs. Overall,…

  17. Children's Perceptions of Illness and Health: An Analysis of Drawings

    ERIC Educational Resources Information Center

    Mouratidi, Paraskevi-Stavroula; Bonoti, Fotini; Leondari, Angeliki

    2016-01-01

    Objective: The purpose of this study was to explore possible age differences in children's perceptions of illness and health and to what extent these differ from adults' perceptions. Design: Cross-sectional design. Setting: Selected nursery and primary schools in Greece. Method: The sample consisted of 347 children aged 5-11 years and 114…

  18. Prevalence of Trauma, PTSD, and Dissociation in Court-Referred Adolescents

    ERIC Educational Resources Information Center

    Brosky, Beverly A.; Lally, Stephen J.

    2004-01-01

    This study examines the prevalence of trauma, posttraumatic stress disorder (PTSD), and dissociative symptoms in adolescents. The sample consisted of 76 females and 76 males, between the ages of 12 and 18, referred to the Child Guidance Clinic of the Superior Court of the District of Columbia for a psychological evaluation. Two sets of analyses…

  19. Why Is There a Disequilibrium between Power and Trust in Educational Settings?

    ERIC Educational Resources Information Center

    Levent, Faruk; Özdemir, Nehir; Akpolat, Tuba

    2018-01-01

    The purpose of this paper is to explore the relationship between school administrators' power sources and teachers' organizational trust levels according to the teachers' perceptions. The sample of the study, which employed a survey research method, consisted of 401 school teachers, working in both the private and public sectors in Istanbul,…

  20. Back to the Basics: Socio-Economic, Gender, and Regional Disparities in Canada's Educational System

    ERIC Educational Resources Information Center

    Edgerton, Jason D.; Peter, Tracey; Roberts, Lance W.

    2008-01-01

    This study reassessed the extent to which socio-economic background, gender, and region endure as sources of educational inequality in Canada. The analysis utilized the 28,000 student Canadian sample from the data set of the OECD's 2003 "Programme for International Student Assessment (PISA)". Results, consistent with previous findings,…

  1. Factors Influencing the Second Language Acquisition of Spanish Vibrants

    ERIC Educational Resources Information Center

    Hurtado, Luz Marcela; Estrada, Chelsea

    2010-01-01

    This article examines the role of linguistic and sociolinguistic factors in the second language (L2) acquisition of Spanish vibrants. The data consist of 2 sets of recordings from 37 students enrolled in a Spanish pronunciation class. The statistical program VarbRul was used to analyze 7,597 samples. The vibration (simple or multiple) and the…

  2. Moored rainfall measurements during COARE

    NASA Technical Reports Server (NTRS)

    Mcphaden, Michael J.

    1994-01-01

    This presentation discusses mini-ORG rainfall estimates collected from an array of six moorings in the western equatorial Pacific during the TOGA-COARE experiment. The moorings were clustered in the vicinity of the COARE intensive flux array (IFA) centered near 2 deg S, 156 deg E. The basic data set consisted of hourly means computed from 5-second samples.

  3. Pre-Vocational Immersion as Risk Intervention in a Mainstream Setting: A Preliminary Evaluation of Project OASES.

    ERIC Educational Resources Information Center

    Morrow, Daniel Hibbs; And Others

    This study evaluates a 1-year, pre-vocational intervention--Project OASES (Occupational and Academic Skills for the Employment of Students)--for at-risk middle school students in the Pittsburgh (Pennsylvania) Public School District. The study sample consisted of 502 former participants and 148 active participants (1988-1989 school year) plus…

  4. Attitudes of Greek Physical Education Teachers toward Inclusion of Students with Disabilities

    ERIC Educational Resources Information Center

    Papadopoulou, Dionisia; Kokaridas, Dimitrios; Papanikolaou, Zisis; Patsiaouras, Asterios

    2004-01-01

    The purpose of this study was to examine the attitudes of Greek physical education teachers toward the inclusion of students with disabilities in regular education settings and to compare the results with the findings of similar studies. The sample consisted of 93 participants, all physical education teachers working at different schools of…

  5. Leptospira species in floodwater during the 2011 floods in the Bangkok Metropolitan Region, Thailand.

    PubMed

    Thaipadungpanit, Janjira; Wuthiekanun, Vanaporn; Chantratita, Narisara; Yimsamran, Surapon; Amornchai, Premjit; Boonsilp, Siriphan; Maneeboonyang, Wanchai; Tharnpoophasiam, Prapin; Saiprom, Natnaree; Mahakunkijcharoen, Yuvadee; Day, Nicholas P J; Singhasivanon, Pratap; Peacock, Sharon J; Limmathurotsakul, Direk

    2013-10-01

    Floodwater samples (N = 110) collected during the 2011 Bangkok floods were tested for Leptospira using culture and polymerase chain reaction (PCR); 65 samples were PCR-positive for putatively non-pathogenic Leptospira species, 1 sample contained a putatively pathogenic Leptospira, and 6 samples contained Leptospira clustering phylogenetically with the intermediate group. The low prevalence of pathogenic and intermediate Leptospira in floodwater was consistent with the low number of human leptospirosis cases reported to the Bureau of Epidemiology in Thailand. This study provides baseline information on environmental Leptospira in Bangkok together with a set of laboratory tests that could be readily deployed in the event of future flooding.

  6. Atmospheric Carbon Dioxide Mixing Ratios from the NOAA CMDL Carbon Cycle Cooperative Global Air Sampling Network (2009)

    DOE Data Explorer

    Conway, Thomas [NOAA Climate Monitoring and Diagnostics Laboratory, Boulder, CO (USA); Tans, Pieter [NOAA Climate Monitoring and Diagnostics Laboratory, Boulder, CO (USA)

    2009-01-01

    The National Oceanic and Atmospheric Administration's Climate Monitoring and Diagnostics Laboratory (NOAA/CMDL) has measured CO2 in air samples collected weekly at a global network of sites since the late 1960s. Atmospheric CO2 mixing ratios reported in these files were measured by a nondispersive infrared absorption technique in air samples collected in glass flasks. All CMDL flask samples are measured relative to standards traceable to the World Meteorological Organization (WMO) CO2 mole fraction scale. These measurements constitute the most geographically extensive, carefully calibrated, internally consistent atmospheric CO2 data set available and are essential for studies aimed at better understanding the global carbon cycle budget.

  7. Psychometrics of the preschool behavioral and emotional rating scale with children from early childhood special education settings.

    PubMed

    Lambert, Matthew C; Cress, Cynthia J; Epstein, Michael H

    2015-01-01

In a previous study with a nationally representative sample, researchers found that the items of the Preschool Behavioral and Emotional Rating Scale can best be described by a four-factor structure model (Emotional Regulation, School Readiness, Social Confidence, and Family Involvement). The findings of this investigation replicate and extend these previous results with a national sample of children (N = 1,075) with disabilities enrolled in early childhood special education programs. Data were analyzed using classical test theory, Rasch modeling, and confirmatory factor analysis. Results confirmed that, for the most part, individual items were internally consistent within a four-factor model and showed consistent item difficulty, discrimination, and fit relative to their respective subscale scores. © 2015 Michigan Association for Infant Mental Health.

  8. Effects of Php Gene-Associated versus Induced Resistance to Tobacco Cyst Nematode in Flue-Cured Tobacco

    PubMed Central

    Johnson, Charles S.; Eisenback, Jon D.

    2009-01-01

Effects of the systemic acquired resistance (SAR)-inducing compound acibenzolar-S-methyl (ASM) and the plant-growth-promoting rhizobacterial mixture of Bacillus subtilis A13 and B. amyloliquefaciens IN937a (GB99+GB122) were assessed on the reproduction of a tobacco cyst nematode (TCN; Globodera tabacum solanacearum) under greenhouse conditions. Two sets of two independent experiments were conducted, each involving soil or root sampling. Soil sample experiments included flue-cured tobacco cultivars with (Php+: NC71 and NC102) and without (Php-: K326 and K346) a gene (Php) suppressing TCN parasitism. Root sample experiments examined TCN root parasitism of NC71 and K326. Cultivars possessing the Php gene (Php+) were compared with Php- cultivars to assess the effects of resistance mediated via the Php gene vs. induced resistance to TCN. GB99+GB122 consistently reduced the nematode reproductive ratio on both Php+ and Php- cultivars, but similar effects of ASM across Php- cultivars were less consistent. In addition, ASM application resulted in leaf yellowing and reduced root weight. GB99+GB122 consistently reduced nematode development in roots of both Php+ and Php- cultivars, while similar effects of ASM were frequently less consistent. The results of this study indicate that GB99+GB122 consistently reduced TCN reproduction in all flue-cured tobacco cultivars tested, while the effects of ASM were consistent only in Php+ cultivars. Relative to the untreated control, GB99+GB122 suppressed nematode reproduction more consistently than ASM under most circumstances. PMID:22736824

  9. Time-domain terahertz spectroscopy of artificial skin

    NASA Astrophysics Data System (ADS)

    Corridon, Peter M.; Ascázubi, Ricardo; Krest, Courtney; Wilke, Ingrid

    2006-02-01

Time-domain terahertz (THz) spectroscopy and imaging are currently being evaluated as novel tools for medical imaging and diagnostics. The application of THz-pulse imaging to human skin tissues and related cancers has recently been demonstrated in vitro and in vivo. With this in mind, we present a time-domain THz-transmission study of artificial skin. The skin samples consist of a monolayer, a porous matrix of cross-linked bovine tendon collagen fibers and a glycosaminoglycan (chondroitin-6-sulfate), manufactured with a controlled porosity and a defined degradation rate. Another set of samples consists of the collagen monolayer covered with a silicone layer. We have measured the THz transmission and determined the index of refraction and absorption of our samples between 0.1 and 3 THz for various states of hydration in distilled water and saline solutions. The transmission of THz radiation through the artificial skin samples is modeled by electromagnetic wave theory. Moreover, the THz-optical properties of the artificial skin layers are compared to those of freshly excised human skin samples. Based on this comparison, the potential use of artificial skin samples as photo-medical phantoms for human skin is discussed.

  10. The Inverse Bagging Algorithm: Anomaly Detection by Inverse Bootstrap Aggregating

    NASA Astrophysics Data System (ADS)

    Vischia, Pietro; Dorigo, Tommaso

    2017-03-01

For data sets populated by a very well modeled process and by another process of unknown probability density function (PDF), a desired feature when manipulating the fraction of the unknown process (either enhancing or suppressing it) is to avoid modifying the kinematic distributions of the well modeled one. A bootstrap technique is used to identify sub-samples rich in the well modeled process and to classify each event according to the frequency with which it appears in such sub-samples. Comparisons with general MVA algorithms are shown, as well as a study of the asymptotic properties of the method, using a public domain data set that models a typical search for new physics as performed at hadronic colliders such as the Large Hadron Collider (LHC).
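
    The scoring idea can be sketched in a few lines of plain Python. This is a toy illustration, not the authors' implementation: the 1-D events, the mean-based selection cut, and all numbers below are invented for the example.

    ```python
    import random
    import statistics

    def inverse_bagging_scores(events, is_background_like, n_bootstrap=200, seed=0):
        """Toy sketch of inverse bootstrap aggregating: draw bootstrap
        sub-samples, keep those that look like the well modeled
        ('background') process, and score each event by how often it
        appears in the kept sub-samples."""
        rng = random.Random(seed)
        counts = [0] * len(events)
        kept = 0
        for _ in range(n_bootstrap):
            idx = [rng.randrange(len(events)) for _ in range(len(events))]
            sample = [events[i] for i in idx]
            if is_background_like(sample):      # e.g. a test-statistic cut
                kept += 1
                for i in set(idx):
                    counts[i] += 1
        return [c / kept if kept else 0.0 for c in counts]

    # Invented toy data: background near 0, two signal-like outliers near 5.
    events = [0.1, -0.2, 0.05, 0.3, 5.1, 4.8, -0.1, 0.2]
    scores = inverse_bagging_scores(
        events, lambda s: abs(statistics.mean(s)) < 1.0)
    ```

    Events that rarely appear in background-like sub-samples (the outliers here) receive lower scores, which is the anomaly-detection signal.
    
    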

  11. Time Clustered Sampling Can Inflate the Inferred Substitution Rate in Foot-And-Mouth Disease Virus Analyses.

    PubMed

    Pedersen, Casper-Emil T; Frandsen, Peter; Wekesa, Sabenzia N; Heller, Rasmus; Sangula, Abraham K; Wadsworth, Jemma; Knowles, Nick J; Muwanika, Vincent B; Siegismund, Hans R

    2015-01-01

With the emergence of analytical software for the inference of viral evolution, a number of studies have focused on estimating important parameters such as the substitution rate and the time to the most recent common ancestor (tMRCA) for rapidly evolving viruses. Coupled with an increasing abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates from viral sequence data when sampling is unevenly distributed on a temporal scale, through a study of the foot-and-mouth disease (FMD) virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer to the mutation rate than to the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully consider how samples are combined.

  12. Automation practices in large molecule bioanalysis: recommendations from group L5 of the global bioanalytical consortium.

    PubMed

    Ahene, Ago; Calonder, Claudio; Davis, Scott; Kowalchick, Joseph; Nakamura, Takahiro; Nouri, Parya; Vostiar, Igor; Wang, Yang; Wang, Jin

    2014-01-01

In recent years, automated sample handling instrumentation has come to the forefront of bioanalysis as a means to ensure greater assay consistency and throughput. Since robotic systems are becoming part of everyday analytical procedures, the need for consistent guidance across the pharmaceutical industry has become increasingly important. Pre-existing regulations do not go into sufficient detail regarding how robotic systems should be used with analytical methods, especially large molecule bioanalysis. As a result, Global Bioanalytical Consortium (GBC) Group L5 has put forth in the present white paper specific recommendations for the validation, qualification, and use of robotic systems as part of large molecule bioanalysis. The guidelines presented can be followed to ensure a consistent, transparent methodology under which robotic systems can be effectively used and documented in a regulated bioanalytical laboratory setting. This will allow for consistent use of robotic sample handling instrumentation as part of large molecule bioanalysis across the globe.

  13. Type 2 Diabetes Screening Test by Means of a Pulse Oximeter.

    PubMed

    Moreno, Enrique Monte; Lujan, Maria Jose Anyo; Rusinol, Montse Torrres; Fernandez, Paqui Juarez; Manrique, Pilar Nunez; Trivino, Cristina Aragon; Miquel, Magda Pedrosa; Rodriguez, Marife Alvarez; Burguillos, M Jose Gonzalez

    2017-02-01

In this paper, we propose a method for screening for the presence of type 2 diabetes by means of the signal obtained from a pulse oximeter. The screening system consists of two parts: a front end that analyzes the pulse oximeter signal and extracts a set of features from it, based on physiological considerations, and a machine-learning module. The set of features was the input of a machine-learning algorithm that determined the class of the input sample, i.e., whether the subject had diabetes or not. The machine-learning algorithms were random forests and gradient boosting, with linear discriminant analysis as a benchmark. The system was tested on a database of [Formula: see text] subjects (two samples per subject) collected from five community health centers. The mean receiver operating characteristic area found was [Formula: see text]% (median value [Formula: see text]% and range [Formula: see text]%), with a specificity = [Formula: see text]% for a threshold that gave a sensitivity = [Formula: see text]%. We present a screening method for detecting diabetes that has a performance comparable to the glycated haemoglobin (HbA1c) test, does not require blood extraction, and yields results in less than 5 min.

  14. Inference and quantification of peptidoforms in large sample cohorts by SWATH-MS

    PubMed Central

    Röst, Hannes L; Ludwig, Christina; Buil, Alfonso; Bensimon, Ariel; Soste, Martin; Spector, Tim D; Dermitzakis, Emmanouil T; Collins, Ben C; Malmström, Lars; Aebersold, Ruedi

    2017-01-01

The consistent detection and quantification of protein post-translational modifications (PTMs) across sample cohorts is an essential prerequisite for the functional analysis of biological processes. Data-independent acquisition (DIA), a bottom-up mass spectrometry based proteomic strategy exemplified by SWATH-MS, provides complete precursor and fragment ion information of a sample and thus, in principle, the information to identify peptidoforms, the modified variants of a peptide. However, due to the convoluted structure of DIA data sets, the confident and systematic identification and quantification of peptidoforms has remained challenging. Here we present IPF (Inference of PeptidoForms), a fully automated algorithm that uses spectral libraries to query, validate and quantify peptidoforms in DIA data sets. The method was developed on data acquired by SWATH-MS and benchmarked using a synthetic phosphopeptide reference data set and phosphopeptide-enriched samples. The data indicate that IPF reduced false site-localization by more than 7-fold in comparison to previous approaches, while recovering 85.4% of the true signals. IPF was applied to detect and quantify peptidoforms carrying ten different types of PTMs in DIA data acquired from more than 200 samples of undepleted blood plasma of a human twin cohort. The data apportioned, for the first time, the contribution of heritable, environmental and longitudinal effects on the observed quantitative variability of specific modifications in blood plasma of a human population. PMID:28604659

  15. Comparison of electrofishing techniques to detect larval lampreys in wadeable streams in the Pacific Northwest

    USGS Publications Warehouse

    Dunham, Jason B.; Chelgren, Nathan D.; Heck, Michael P.; Clark, Steven M.

    2013-01-01

We evaluated the probability of detecting larval lampreys using different methods of backpack electrofishing in wadeable streams in the U.S. Pacific Northwest. Our primary objective was to compare capture of lampreys using electrofishing with standard settings for salmon and trout to settings specifically adapted for capture of lampreys. Field work consisted of removal sampling by means of backpack electrofishing in 19 sites in streams representing a broad range of conditions in the region. Captures of lampreys at these sites were analyzed with a modified removal-sampling model and Bayesian estimation to measure the relative odds of capture using the lamprey-specific settings compared with the standard salmonid settings. We found that the odds of capture were 2.66 (95% credible interval, 0.87-78.18) times greater for the lamprey-specific settings relative to standard salmonid settings. When estimates of capture probability were applied to estimating the probabilities of detection, we found high (>0.80) detectability when the actual number of lampreys in a site was greater than 10 individuals and effort was at least two passes of electrofishing, regardless of the settings used. Further work is needed to evaluate key assumptions in our approach, including the evaluation of individual-specific capture probabilities and population closure. For now, our results suggest that comparable detection of lampreys is possible using backpack electrofishing with either salmonid- or lamprey-specific settings.
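
    Under the simplest removal-sampling assumptions (independent individuals, constant per-pass capture probability p), the detection probabilities described above follow directly: an individual evades k passes with probability (1-p)^k, so a site holding N individuals goes entirely undetected with probability (1-p)^(kN). A minimal sketch, with an invented per-pass capture probability of 0.10 (not a value from the study):

    ```python
    def detection_probability(p_capture, n_individuals, n_passes):
        """Probability of capturing at least one individual under a
        simple removal-sampling model: each individual independently
        evades each pass with probability (1 - p_capture)."""
        p_miss_all = (1.0 - p_capture) ** (n_passes * n_individuals)
        return 1.0 - p_miss_all

    # Even a modest per-pass capture probability yields high
    # detectability for a site with more than 10 larvae and two passes:
    print(round(detection_probability(0.10, 10, 2), 3))  # -> 0.878
    ```

    This simplified model ignores individual-specific capture probabilities and population closure, the very assumptions the abstract flags for further evaluation.
    
    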

  16. Standardizing electrofishing power for boat electrofishing: chapter 14

    USGS Publications Warehouse

    Miranda, L.E. (Steve); Bonar, Scott A.; Hubert, Wayne A.; Willis, David W.

    2009-01-01

Standardizing boat electrofishing entails achieving an accepted level of collection consistency by managing various factors, including (1) the temporal and spatial distribution of sampling effort, (2) boat operation, (3) equipment configuration, (4) characteristics of the waveform and energized field, and (5) power transferred to fish. This chapter focuses exclusively on factor 5; factors 1-4 have been addressed in earlier chapters. Additionally, while the concepts covered in this chapter address boat electrofishing in general, the power settings discussed were developed from tests with primarily warmwater fish communities. Others (see Chapter 9) recommend lower power settings for communities consisting primarily of coldwater fishes. For reviews of basic concepts of electricity, electrofishing theory and systems, fish behavior relative to diverse waveforms, and injury, the reader is referred to Novotny (1990), Reynolds (1996), and Snyder (2003).

  17. Evaluation of a Serum Lung Cancer Biomarker Panel.

    PubMed

    Mazzone, Peter J; Wang, Xiao-Feng; Han, Xiaozhen; Choi, Humberto; Seeley, Meredith; Scherer, Richard; Doseeva, Victoria

    2018-01-01

A panel of 3 serum proteins and 1 autoantibody has been developed to assist with the detection of lung cancer. We aimed to validate the accuracy of the biomarker panel in an independent test set and explore the impact of adding a fourth serum protein to the panel, as well as the impact of combining molecular and clinical variables. The training set of serum samples was purchased from commercially available biorepositories. The testing set was from a biorepository at the Cleveland Clinic. All lung cancer and control subjects were >50 years old and had smoked a minimum of 20 pack-years. A panel of biomarkers including CEA (carcinoembryonic antigen), CYFRA21-1 (cytokeratin-19 fragment 21-1), CA125 (carbohydrate antigen 125), HGF (hepatocyte growth factor), and NY-ESO-1 (New York esophageal cancer-1 antibody) was measured using immunoassay techniques. The multiple of the median method, multivariate logistic regression, and random forest modeling were used to analyze the results. The training set consisted of 604 patient samples (268 with lung cancer and 336 controls) and the testing set of 400 patient samples (155 with lung cancer and 245 controls). With a threshold established from the training set, the sensitivity and specificity of both the 4- and 5-biomarker panels on the testing set were 49% and 96%, respectively. Models built on the testing set using only clinical variables had an area under the receiver operating characteristic curve of 0.68, using the biomarker panel 0.81, and by combining clinical and biomarker variables 0.86. This study validates the accuracy of a panel of proteins and an autoantibody in a population relevant to lung cancer detection and suggests a benefit to combining clinical features with the biomarker results.
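
    The multiple-of-the-median (MoM) normalization named above is simple to state: each measured biomarker value is expressed as a multiple of the median of a reference (control) group, which makes thresholds comparable across assays. A sketch with invented control values, not the study's data:

    ```python
    import statistics

    def multiples_of_median(control_values, sample_values):
        """Express each sample value as a multiple of the median of
        the control group (multiple-of-the-median normalization)."""
        med = statistics.median(control_values)
        return [v / med for v in sample_values]

    controls = [2.0, 3.0, 4.0, 5.0, 6.0]   # hypothetical assay levels
    patients = [8.0, 4.0, 2.0]
    print(multiples_of_median(controls, patients))  # -> [2.0, 1.0, 0.5]
    ```
    
    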

  18. Evaluation of a Serum Lung Cancer Biomarker Panel

    PubMed Central

    Mazzone, Peter J; Wang, Xiao-Feng; Han, Xiaozhen; Choi, Humberto; Seeley, Meredith; Scherer, Richard; Doseeva, Victoria

    2018-01-01

Background: A panel of 3 serum proteins and 1 autoantibody has been developed to assist with the detection of lung cancer. We aimed to validate the accuracy of the biomarker panel in an independent test set and explore the impact of adding a fourth serum protein to the panel, as well as the impact of combining molecular and clinical variables. Methods: The training set of serum samples was purchased from commercially available biorepositories. The testing set was from a biorepository at the Cleveland Clinic. All lung cancer and control subjects were >50 years old and had smoked a minimum of 20 pack-years. A panel of biomarkers including CEA (carcinoembryonic antigen), CYFRA21-1 (cytokeratin-19 fragment 21-1), CA125 (carbohydrate antigen 125), HGF (hepatocyte growth factor), and NY-ESO-1 (New York esophageal cancer-1 antibody) was measured using immunoassay techniques. The multiple of the median method, multivariate logistic regression, and random forest modeling were used to analyze the results. Results: The training set consisted of 604 patient samples (268 with lung cancer and 336 controls) and the testing set of 400 patient samples (155 with lung cancer and 245 controls). With a threshold established from the training set, the sensitivity and specificity of both the 4- and 5-biomarker panels on the testing set were 49% and 96%, respectively. Models built on the testing set using only clinical variables had an area under the receiver operating characteristic curve of 0.68, using the biomarker panel 0.81, and by combining clinical and biomarker variables 0.86. Conclusions: This study validates the accuracy of a panel of proteins and an autoantibody in a population relevant to lung cancer detection and suggests a benefit to combining clinical features with the biomarker results. PMID:29371783

  19. Combining Satellite and in Situ Data with Models to Support Climate Data Records in Ocean Biology

    NASA Technical Reports Server (NTRS)

    Gregg, Watson

    2011-01-01

The satellite ocean color data record spans multiple decades and, like most long-term satellite observations of the Earth, comes from many sensors. Unfortunately, global and regional chlorophyll estimates from the overlapping missions show substantial biases, limiting their use in combination to construct consistent data records. SeaWiFS and MODIS-Aqua differed by 13% globally in overlapping time segments, 2003-2007. For perspective, the maximum change in annual means over the entire SeaWiFS mission era was about 3%, and this included an El Niño/La Niña transition. These discrepancies lead to different estimates of trends depending upon whether one uses SeaWiFS alone for 1998-2007 (no significant change) or whether MODIS is substituted for the 2003-2007 period (18% decline, P less than 0.05). Understanding the effects of climate change on the global oceans is difficult if different satellite data sets cannot be brought into conformity. The differences arise from two causes: 1) different sensors see chlorophyll differently, and 2) different sensors see different chlorophyll. In the first case, differences in sensor band locations, bandwidths, sensitivity, and time of observation lead to different estimates of chlorophyll even from the same location and day. In the second, differences in orbit and sensitivities to aerosols lead to sampling differences. A new approach to ocean color using in situ data from the public archives forces different satellite data to agree to within interannual variability. The global difference between SeaWiFS and MODIS is 0.6% for 2003-2007 using this approach. It also produces a trend using the combination of SeaWiFS and MODIS that agrees with SeaWiFS alone for 1998-2007. This is a major step to reducing errors produced by the first cause, sensor-related discrepancies. For differences that arise from sampling, data assimilation is applied.
The underlying geographically complete fields derived from a free-running model are unaffected by solar zenith angle requirements and obscuration from clouds and aerosols. Combined with the in situ data-enhanced satellite data, the model is forced into consistency using data assimilation. This approach eliminates sampling discrepancies from satellites. Combining the reduced differences of satellite data sets using in situ data, and the removal of sampling biases using data assimilation, we generate consistent data records of ocean color. These data records can support investigations of long-term effects of climate change on ocean biology over multiple satellites, and can improve the consistency of future satellite data sets.

  20. On the existence, uniqueness, and asymptotic normality of a consistent solution of the likelihood equations for nonidentically distributed observations: Applications to missing data problems

    NASA Technical Reports Server (NTRS)

    Peters, C. (Principal Investigator)

    1980-01-01

    A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations given a sequence of independent random vectors whose distributions are not identical but have the same parameter set. In addition, it is shown that the consistent solution is a MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and the other in which the parameters in a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.

  1. Standard Specimen Reference Set: Pancreatic — EDRN Public Portal

    Cancer.gov

The primary objective of the EDRN Pancreatic Cancer Working Group Proposal is to create a reference set consisting of well-characterized serum/plasma specimens to use as a resource for the development of biomarkers for the early detection of pancreatic adenocarcinoma. The testing of biomarkers on the same sample set permits direct comparison among them, thereby allowing the development of a biomarker panel that can be evaluated in a future validation study. Additionally, the establishment of an infrastructure with core data elements and standardized operating procedures for specimen collection, processing, and storage will provide the necessary preparatory platform for larger validation studies when the appropriate marker/panel for pancreatic adenocarcinoma has been identified.

  2. Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5

    DOE PAGES

    Qian, Yun; Yan, Huiping; Hou, Zhangshuan; ...

    2015-04-10

We investigate the sensitivity of precipitation characteristics (mean, extreme and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that, of the 22 parameters perturbed in the cloud ensemble, six have the greatest influence on the global mean precipitation; three of these (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approaches or concomitant parameters selected. Generally, the GLM is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows significant seasonal variability in the mid-latitude continental regions, but is very small in tropical continental regions.
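
    Latin hypercube sampling, one of the two designs used for these ensembles, stratifies each parameter's range so that every stratum is sampled exactly once per dimension. A minimal pure-Python sketch on the unit hypercube (scaling the points to physical parameter ranges, as a real perturbed-parameter ensemble would require, is omitted):

    ```python
    import random

    def latin_hypercube(n_samples, n_dims, seed=0):
        """Latin hypercube sample on the unit cube: each dimension is
        split into n_samples equal strata, one uniform point is drawn
        inside each stratum, and the strata are shuffled independently
        per dimension."""
        rng = random.Random(seed)
        cols = []
        for _ in range(n_dims):
            col = [(i + rng.random()) / n_samples for i in range(n_samples)]
            rng.shuffle(col)
            cols.append(col)
        return [tuple(col[i] for col in cols) for i in range(n_samples)]

    # e.g. a 22-point design over 22 normalized parameters
    design = latin_hypercube(22, 22)
    ```

    The stratification guarantees that even a small design covers each parameter's full range, which is why it is favored over plain random sampling for expensive model ensembles.
    
    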

  3. Phylogenetics of moth-like butterflies (Papilionoidea: Hedylidae) based on a new 13-locus target capture probe set.

    PubMed

    Kawahara, Akito Y; Breinholt, Jesse W; Espeland, Marianne; Storer, Caroline; Plotkin, David; Dexter, Kelly M; Toussaint, Emmanuel F A; St Laurent, Ryan A; Brehm, Gunnar; Vargas, Sergio; Forero, Dimitri; Pierce, Naomi E; Lohman, David J

    2018-06-11

The Neotropical moth-like butterflies (Hedylidae) are perhaps the most unusual butterfly family. In addition to being species-poor, this family is predominantly nocturnal and has anti-bat ultrasound hearing organs. Evolutionary relationships among the 36 described species are largely unexplored. A new target-capture, anchored hybrid enrichment probe set ('BUTTERFLY2.0') was developed to infer relationships of hedylids and some of their butterfly relatives. The probe set includes 13 genes that have historically been used in butterfly phylogenetics. Our dataset comprised up to 10,898 aligned base pairs from 22 hedylid species and 19 outgroups. Eleven of the thirteen loci were successfully captured from all samples, and the remaining loci were captured from ≥94% of samples. The inferred phylogeny was consistent with recent molecular studies in placing Hedylidae sister to Hesperiidae, and the tree had robust support for 80% of nodes. Our results are also consistent with morphological studies, with Macrosoma tipulata as the sister species to all remaining hedylids, followed by M. semiermis sister to the remaining species in the genus. We tested the hypothesis that nocturnality evolved once from diurnality in Hedylidae, and demonstrate that the ancestral condition was likely diurnal, with a shift to nocturnality early in the diversification of this family. The BUTTERFLY2.0 probe set includes standard butterfly phylogenetics markers, captures sequences from decades-old museum specimens, and is a cost-effective technique to infer phylogenetic relationships across the butterfly tree of life. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Moderate resolution spectrophotometry of high redshift quasars

    NASA Technical Reports Server (NTRS)

    Schneider, Donald P.; Schmidt, Maarten; Gunn, James E.

    1991-01-01

    A uniform set of photometry and high signal-to-noise moderate resolution spectroscopy of 33 quasars with redshifts larger than 3.1 is presented. The sample consists of 17 newly discovered quasars (two with redshifts in excess of 4.4) and 16 sources drawn from the literature. The objects in this sample have r magnitudes between 17.4 and 21.4; their luminosities range from -28.8 to -24.9. Three of the 33 objects are broad absorption line quasars. A number of possible high redshift damped Ly-alpha systems were found.

  5. Application of the BMWP-Costa Rica biotic index in aquatic biomonitoring: sensitivity to collection method and sampling intensity.

    PubMed

    Gutiérrez-Fonseca, Pablo E; Lorion, Christopher M

    2014-04-01

    The use of aquatic macroinvertebrates as bio-indicators in water quality studies has increased considerably over the last decade in Costa Rica, and standard biomonitoring methods have now been formulated at the national level. Nevertheless, questions remain about the effectiveness of different methods of sampling freshwater benthic assemblages, and how sampling intensity may influence biomonitoring results. In this study, we compared the results of qualitative sampling using commonly applied methods with a more intensive quantitative approach at 12 sites in small, lowland streams on the southern Caribbean slope of Costa Rica. Qualitative samples were collected following the official protocol using a strainer during a set time period and macroinvertebrates were field-picked. Quantitative sampling involved collecting ten replicate Surber samples and picking out macroinvertebrates in the laboratory with a stereomicroscope. The strainer sampling method consistently yielded fewer individuals and families than quantitative samples. As a result, site scores calculated using the Biological Monitoring Working Party-Costa Rica (BMWP-CR) biotic index often differed greatly depending on the sampling method. Site water quality classifications using the BMWP-CR index differed between the two sampling methods for 11 of the 12 sites in 2005, and for 9 of the 12 sites in 2006. Sampling intensity clearly had a strong influence on BMWP-CR index scores, as well as perceived differences between reference and impacted sites. Achieving reliable and consistent biomonitoring results for lowland Costa Rican streams may demand intensive sampling and requires careful consideration of sampling methods.

  6. Benchmarking contactless acquisition sensor reproducibility for latent fingerprint trace evidence

    NASA Astrophysics Data System (ADS)

    Hildebrandt, Mario; Dittmann, Jana

    2015-03-01

Optical, nano-meter range, contactless, non-destructive sensor devices are promising acquisition techniques in crime scene trace forensics, e.g. for digitizing latent fingerprint traces. Before new approaches are introduced in crime investigations, innovations need to be positively tested and quality ensured. In this paper we investigate sensor reproducibility by studying different scans from four sensors: two chromatic white light sensors (CWL600/CWL1mm), one confocal laser scanning microscope, and one NIR/VIS/UV reflection spectrometer. Firstly, we perform an intra-sensor reproducibility test for the CWL600 with a privacy-conformant test set of artificial-sweat-printed, computer-generated fingerprints. We use 24 different fingerprint patterns as original samples (printing samples/templates) for printing with artificial sweat (physical trace samples) and their acquisition with contactless sensory, resulting in 96 sensor images, called scans or acquired samples. The second test set, for inter-sensor reproducibility assessment, consists of the first three patterns from the first test set, acquired in two consecutive scans using each device. We suggest using a simple feature space set in the spatial and frequency domains known from signal processing and test its suitability for six different classifiers classifying scan data into small differences (reproducible) and large differences (non-reproducible). Furthermore, we suggest comparing the classification results with biometric verification scores (calculated with NBIS, with a threshold of 40) as a biometric reproducibility score. The Bagging classifier is in nearly all cases the most reliable classifier in our experiments, and the results are also confirmed by the biometric matching rates.

  7. Spanish Adaptation and Validation of the Outcome Questionnaire OQ-30.2

    PubMed Central

    Errázuriz, Paula; Opazo, Sebastián; Behn, Alex; Silva, Oscar; Gloger, Sergio

    2017-01-01

This study assessed the psychometric properties of a Spanish version of the Shortened Outcome Questionnaire (OQ-30.2, Lambert et al., 2004) validated with a sample of 546 patients in an outpatient mental health clinic and 100 non-clinical adults in Chile. Our results show that this measure has similar normative data to the original measure, with a cutoff score for the Chilean population set at 43.36, and the reliable change index at 14. This Spanish OQ-30.2 has good internal consistency (α = 0.90), has concurrent validity with the Depressive, Anxious, and Somatoform disorders measuring scale (Alvarado and Vera, 1991), and is sensitive to change during psychotherapy. Consistent with previous studies, factorial analyses showed that both the one-factor solution (a general scale) and the three-factor solution (three theoretical scales) yielded poor fit estimates. Overall, our results are similar to past research on the OQ-45 and the OQ-30. The short version has adequate psychometric properties, comparable to those of the OQ-45, but provides a gain in application time that could be relevant in the setting of psychotherapy research with large samples, frequent assessments over time, and/or samples that may require more assistance completing items (e.g., low-literacy). We conclude that this measure will be a valuable instrument for research and clinical practice. PMID:28559857
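The reliable change index quoted above (RCI = 14) follows from the standard Jacobson-Truax construction, which depends on the instrument's standard deviation and test-retest reliability. The sketch below shows that computation; the numbers in the example call are illustrative placeholders, not the study's actual estimates.

```python
import math

def reliable_change_threshold(sd, reliability, z=1.96):
    """Jacobson-Truax reliable change threshold: the smallest pre-post
    difference unlikely (at the given z) to be measurement error alone."""
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2 * sem ** 2)        # standard error of a difference score
    return z * s_diff
```

For example, with a (hypothetical) SD of 10 points and reliability of 0.90, the threshold is about 8.8 points.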

  8. Integrative missing value estimation for microarray data.

    PubMed

    Hu, Jianjun; Li, Haifeng; Waterman, Michael S; Zhou, Xianghong Jasmine

    2006-10-12

Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in the Stanford Microarray Database contain fewer than eight samples. We present the integrative Missing Value Estimation method (iMISS), which incorporates information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking the reference datasets into consideration. To determine whether the given reference datasets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Squares (LLS) imputation algorithm by up to 15% in our benchmark tests. We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over state-of-the-art missing value estimation approaches such as LLS and are especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.
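The neighbor-gene idea can be sketched as a basic KNN-style imputation over a genes-by-samples matrix. This is a simplified stand-in for illustration only: the published iMISS additionally pools and reconciles neighbor lists across multiple reference datasets, which is not shown here.

```python
import numpy as np

def knn_impute(X, k=2):
    """For each gene (row) with NaNs, find the k nearest fully comparable
    genes over the observed columns and fill the missing entries with the
    neighbours' mean. Simplified KNN-style sketch, not the iMISS algorithm."""
    X = np.asarray(X, dtype=float)
    filled = X.copy()
    for g in range(X.shape[0]):
        miss = np.isnan(X[g])
        if not miss.any():
            continue
        obs = ~miss
        dists = []
        for h in range(X.shape[0]):
            # neighbour must be observed both where g is observed and where g is missing
            if h == g or np.isnan(X[h][obs]).any() or np.isnan(X[h][miss]).any():
                continue
            d = float(np.sqrt(np.mean((X[h][obs] - X[g][obs]) ** 2)))
            dists.append((d, h))
        dists.sort()
        neigh = [h for _, h in dists[:k]]
        if neigh:
            filled[g, miss] = X[neigh][:, miss].mean(axis=0)
    return filled
```

With noisy data one would typically distance-weight the neighbour average; the plain mean keeps the sketch short.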

  9. NIR spectroscopic measurement of moisture content in Scots pine seeds.

    PubMed

    Lestander, Torbjörn A; Geladi, Paul

    2003-04-01

When tree seeds are used for seedling production it is important that they are of high quality in order to be viable. One of the factors influencing viability is moisture content, and an ideal quality control system should be able to measure this factor quickly for each seed. Seed moisture content within the range 3-34% was determined by near-infrared (NIR) spectroscopy on Scots pine (Pinus sylvestris L.) single seeds and on bulk seed samples consisting of 40-50 seeds. The models for predicting water content from the spectra were built by partial least squares (PLS) and ordinary least squares (OLS) regression. Different measurement conditions were simulated, involving both using fewer wavelengths and going from bulk samples to single seeds. Reflectance and transmission measurements were used, and different spectral pretreatment methods were tested on the spectra. Including bias, the lowest prediction errors for PLS models based on reflectance within 780-2280 nm were 0.8% for bulk samples and 1.9% for single seeds. Reduction of the single-seed reflectance spectrum to 850-1048 nm gave higher biases and prediction errors in the test set. In transmission (850-1048 nm) the prediction error was 2.7% for single seeds. OLS models based on a simulated four-sensor single-seed system, consisting of optical filters with Gaussian transmission, showed prediction errors above 3.4%. A practical F-test based on test sets to differentiate models is introduced.

  10. Assessing the Consistency and Microbiological Effectiveness of Household Water Treatment Practices by Urban and Rural Populations Claiming to Treat Their Water at Home: A Case Study in Peru

    PubMed Central

    Rosa, Ghislaine; Huaylinos, Maria L.; Gil, Ana; Lanata, Claudio; Clasen, Thomas

    2014-01-01

Background Household water treatment (HWT) can improve drinking water quality and prevent disease if used correctly and consistently by vulnerable populations. Over 1.1 billion people report treating their water prior to drinking it. These estimates, however, are based on responses to household surveys that may exaggerate the consistency and microbiological performance of the practice, both key factors for reducing pathogen exposure and achieving health benefits. The objective of this study was to examine how HWT practices are actually performed by households identified as HWT users, according to international monitoring standards. Methods and Findings We conducted a 6-month case study in urban (n = 117 households) and rural (n = 115 households) Peru, a country in which 82.8% of households report treating their water at home. We used direct observation, in-depth interviews, surveys, spot-checks, and water sampling to assess water treatment practices among households that claimed to treat their drinking water at home. While consistency of reported practices was high in both urban (94.8%) and rural (85.3%) settings, availability of treated water (based on self-report) at the time of collection was low, with only 67.1% of urban and 23.0% of rural households having treated water at all three sampling visits. Self-reported consumption of untreated water in the home among adults and children <5 was common, and this was corroborated during home observations. Drinking water of self-reported users was significantly better than source water in the urban setting and negligibly but significantly better in the rural setting. However, only 46.3% and 31.6% of households had drinking water <1 CFU/100 mL at all follow-up visits. Conclusions Our results raise questions about current international monitoring of HWT practices and about their usefulness as a proxy indicator for drinking water quality. The lack of consistency and sub-optimal microbiological effectiveness also raise questions about the potential of HWT to prevent waterborne diseases. PMID:25522371

  12. The influence of sampling interval on the accuracy of trail impact assessment

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.L.

    1999-01-01

    Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
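The resampling-simulation approach described above can be illustrated in a few lines: thin a complete census of trail observations to coarser point-sampling intervals and compare the thinned estimates with the census values. The trail length, impact pattern, and intervals below are made-up illustrations, not the Great Smoky Mountains data; the sketch reproduces the qualitative finding that lineal extent survives thinning while frequency of occurrence does not.

```python
import numpy as np

# Synthetic "census": impact presence at every 1 m point along a 5 km trail.
rng = np.random.default_rng(42)
census = rng.random(5000) < 0.10

def lineal_extent(sample):
    """Proportion of sampled points with the impact present."""
    return float(sample.mean())

def occurrences(sample):
    """Number of distinct runs of consecutive impacted points."""
    s = sample.astype(int)
    return int(s[0] + np.sum((s[1:] == 1) & (s[:-1] == 0)))

# Thin the census to simulated point-sampling intervals (in metres).
for interval in (1, 20, 100, 500):
    sub = census[::interval]
    print(interval, round(lineal_extent(sub), 3), occurrences(sub))
```

As the interval grows, the extent estimate stays close to the census proportion, but the count of distinct occurrences collapses because short impact runs fall between sample points.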

  13. A Predictive Model for Toxicity Effects Assessment of Biotransformed Hepatic Drugs Using Iterative Sampling Method.

    PubMed

    Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella

    2016-12-09

Measuring toxicity is one of the main steps in drug development, so there is high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset covering four toxicity effects: mutagenic, tumorigenic, irritant and reproductive. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features, reducing classification time and improving classification performance. Because of the imbalanced class distribution, in the second phase different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to address the imbalance. An ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps: the first (sampling) step iteratively modifies the prior distribution of the minority and majority classes, and the second step applies a data cleaning method to remove the overlapping samples produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results showed that the proposed model performed well in classifying unknown samples for all toxic effects in the imbalanced datasets.
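For concreteness, here is the random over-sampling baseline the paper compares against: duplicate minority-class examples until every class matches the majority count. The ITS method iterates a more refined version of this resampling step and then cleans overlapping samples; that refinement is not shown here.

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Random over-sampling sketch: duplicate randomly chosen minority-class
    examples until all classes reach the majority-class count."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    Xb, yb = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target - n):
            j = rng.choice(idx)       # sample an existing example to copy
            Xb.append(X[j])
            yb.append(label)
    return Xb, yb
```

The drawback motivating ITS-style methods is visible here: duplicated points can overfit and can overlap with the majority class, which is why a cleaning step follows.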

  14. Immersion Education Outcomes and the Gaelic Community: Identities and Language Ideologies among Gaelic Medium-Educated Adults in Scotland

    ERIC Educational Resources Information Center

    Dunmore, Stuart S.

    2017-01-01

    Scholars have consistently theorised that language ideologies can influence the ways in which bilingual speakers in minority language settings identify and engage with the linguistic varieties available to them. Research conducted by the author examined the interplay of language use and ideologies among a purposive sample of adults who started in…

  15. Predicting Meaningful Employment for Refugees: The Influence of Personal Characteristics and Developmental Factors on Employment Status and Hourly Wages

    ERIC Educational Resources Information Center

    Codell, Jonathan D.; Hill, Robert D.; Woltz, Dan J.; Gore, Paul A.

    2011-01-01

    Refugee demographic and developmental variables were evaluated as predictors of employment outcomes following a six-month non-governmental organization (NGO) directed resettlement period. The sample consisted of 85 refugee adults (18 to 54 years) who were resettling in a medium sized urban setting in the western United States. Demographics…

  16. The Structure of the Atom: Teacher's Guide Levels A, B, and C. Preliminary Limited Edition.

    ERIC Educational Resources Information Center

    Cambridge Physics Outlet, Woburn, MA. Education Programs Dept.

    This is a two-part curriculum package for teaching the structure of atoms. The first part--the Teacher's Guide--contains information necessary for using the equipment in a typical classroom including learning goals, vocabulary, math skills, and sample data for each activity. The second part of the package consists of photocopy masters for a set of…

  17. Multiple Risks, Emotion Regulation Skill, and Cortisol in Low-Income African American Youth: A Prospective Study

    ERIC Educational Resources Information Center

    Kliewer, Wendy; Reid-Quinones, Kathryn; Shields, Brian J.; Foutz, Lauren

    2009-01-01

    Associations between multiple risks, emotion regulation skill, and basal cortisol levels were examined in a community sample of 69 African American youth (mean age = 11.30 years; 49% male) living in an urban setting. Multiple risks were assessed at Time 1 and consisted of 10 demographic and psychosocial risk factors including parent, child, and…

  18. Maids or Mentors? The Effects of Live-In Foreign Domestic Workers on Children's Educational Achievement in Hong Kong

    ERIC Educational Resources Information Center

    Tang, Sam Hak Kan; Yung, Linda Chor Wing

    2016-01-01

    This paper studies the effects of live-in foreign domestic workers (FDWs) on school children's educational outcomes using samples from two population censuses and a survey data set. The evidence consistently points to Filipino FDWs improving the educational outcomes of school children by decreasing their probability of late schooling or increasing…

  19. ADULT EDUCATION AND THE ADOPTION OF INNOVATIONS BY ORCHARDISTS IN THE OKANAGAN VALLEY OF BRITISH COLUMBIA.

    ERIC Educational Resources Information Center

    MILLERD, FRANK W.; VERNER, COOLIE

This study analyzed the general behavior of orchardists in the Okanagan Valley, British Columbia, and the factors related to adoption of innovations in this setting. Five percent samples were drawn from 19 districts consisting of 2,721 orchards, and data were gathered by resident agriculturists. The data were analyzed by stage in the adoption…

  20. The Contributions of Phonological and Morphological Awareness to Literacy Skills in the Adult Basic Education Population

    ERIC Educational Resources Information Center

    Fracasso, Lucille E.; Bangs, Kathryn; Binder, Katherine S.

    2016-01-01

    The Adult Basic Education (ABE) population consists of a wide range of abilities with needs that may be unique to this set of learners. The purpose of this study was to better understand the relative contributions of phonological decoding and morphological awareness to spelling, vocabulary, and comprehension across a sample of ABE students. In…

  1. Polarization Imaging Apparatus with Auto-Calibration

    NASA Technical Reports Server (NTRS)

    Zou, Yingyin Kevin (Inventor); Zhao, Hongzhi (Inventor); Chen, Qiushui (Inventor)

    2013-01-01

A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned at 22.5 deg, a second variable phase retarder with its optical axis aligned at 45 deg, a linear polarizer, an imaging sensor for sensing the intensity images of the sample, a controller and a computer. The two variable phase retarders are controlled independently by the computer through a controller unit, which generates a sequence of voltages to control the phase retardations of the first and second variable phase retarders. An auto-calibration procedure is incorporated into the polarization imaging apparatus to correct the misalignment of the first and second VPRs, as well as the half-wave voltage of the VPRs. A set of four intensity images, I0, I1, I2 and I3, of the sample is captured by the imaging sensor when the phase retardations of the VPRs are set at (0,0), (pi,0), (pi,pi) and (pi/2,pi), respectively. The four Stokes components of the Stokes image, S0, S1, S2 and S3, are then calculated from the four intensity images.
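The data-reduction step described above is, generically, a pixel-wise linear inversion: each of the four measured intensities is a known linear combination of the four Stokes components, so the Stokes image is recovered by inverting a 4x4 instrument matrix. The matrix A below is a made-up illustration, not the one implied by the patent's retarder settings; in the real apparatus it is fixed by the retarder/polarizer geometry and refined by the auto-calibration procedure.

```python
import numpy as np

# Hypothetical 4x4 instrument matrix: row k gives the weights with which
# the Stokes components (S0, S1, S2, S3) combine into measured intensity Ik.
A = np.array([[0.5,  0.5,  0.0,  0.0],
              [0.5, -0.5,  0.0,  0.0],
              [0.5,  0.0,  0.5,  0.0],
              [0.5,  0.0,  0.0,  0.5]])

def stokes_from_intensities(I):
    """I: (4, H, W) stack of intensity images -> (4, H, W) Stokes images,
    by applying the inverse instrument matrix at every pixel."""
    return np.tensordot(np.linalg.inv(A), I, axes=1)
```

Round-tripping a known Stokes image through the forward model and the inversion recovers it exactly, which is also a convenient sanity check for a calibrated matrix.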

  2. Polarization imaging apparatus with auto-calibration

    DOEpatents

    Zou, Yingyin Kevin; Zhao, Hongzhi; Chen, Qiushui

    2013-08-20

A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned at 22.5°, a second variable phase retarder with its optical axis aligned at 45°, a linear polarizer, an imaging sensor for sensing the intensity images of the sample, a controller and a computer. The two variable phase retarders are controlled independently by the computer through a controller unit, which generates a sequence of voltages to control the phase retardations of the first and second variable phase retarders. An auto-calibration procedure is incorporated into the polarization imaging apparatus to correct the misalignment of the first and second VPRs, as well as the half-wave voltage of the VPRs. A set of four intensity images, I0, I1, I2 and I3, of the sample is captured by the imaging sensor when the phase retardations of the VPRs are set at (0,0), (π,0), (π,π) and (π/2,π), respectively. The four Stokes components of the Stokes image, S0, S1, S2 and S3, are then calculated from the four intensity images.

  3. [Tobacco quality analysis of industrial classification of different years using near-infrared (NIR) spectrum].

    PubMed

    Wang, Yi; Xiang, Ma; Wen, Ya-Dong; Yu, Chun-Xia; Wang, Luo-Ping; Zhao, Long-Lian; Li, Jun-Hui

    2012-11-01

In this study, tobacco quality analysis of the main industrial classifications across different years was carried out using spectrum projection and correlation methods. The data were near-infrared (NIR) spectra from Hongta Tobacco (Group) Co., Ltd. A total of 5730 tobacco leaf industrial-classification samples from Yuxi in Yunnan province, collected from 2007 to 2010 by near-infrared spectroscopy, covered different parts and colors and all belonged to the tobacco variety HONGDA. The results showed that, when the samples from the same year were randomly divided into analysis and verification sets in a 2:1 ratio, the verification set corresponded with the analysis set under spectrum projection, with correlation coefficients above 0.98. The correlation coefficients between different years under spectrum projection were above 0.97; the highest was between 2008 and 2009, and the lowest between 2007 and 2010. The study also discussed a method for obtaining quantitative similarity values of different industrial-classification samples. These similarity and consistency values are instructive for the combination and replacement of tobacco leaf in blending.

  4. Still-to-video face recognition in unconstrained environments

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing

    2015-02-01

Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearances and the limit of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are employed to avoid overfitting. In order to deal with the single-image-per-person problem, we exploit face variations learned from training sets to synthesize virtual samples for gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approaches and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method outperforms the state-of-the-art methods impressively.

  5. Polarization imaging apparatus

    NASA Technical Reports Server (NTRS)

    Zou, Yingyin Kevin (Inventor); Chen, Qiushui (Inventor); Zhao, Hongzhi (Inventor)

    2010-01-01

A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set 11, a linear polarizer 14 with its optical axis 18, a first variable phase retarder 12 with its optical axis 16 aligned at 22.5° to axis 18, a second variable phase retarder 13 with its optical axis 17 aligned at 45° to axis 18, an imaging sensor 15 for sensing the intensity images of the sample, a controller 101 and a computer 102. The two variable phase retarders 12 and 13 are controlled independently by computer 102 through controller unit 101, which generates a sequence of voltages to control the phase retardations of VPRs 12 and 13. A set of four intensity images, I0, I1, I2 and I3, of the sample is captured by imaging sensor 15 when the phase retardations of VPRs 12 and 13 are set at (0,0), (π,0), (π,π) and (π/2,π), respectively. The four Stokes components of the Stokes image, S0, S1, S2 and S3, are then calculated from the four intensity images.

  6. A sampling approach for predicting the eating quality of apples using visible-near infrared spectroscopy.

    PubMed

    Martínez Vega, Mabel V; Sharifzadeh, Sara; Wulfsohn, Dvoralai; Skov, Thomas; Clemmensen, Line Harder; Toldam-Andersen, Torben B

    2013-12-01

Visible-near infrared spectroscopy remains a method of increasing interest as a fast alternative for the evaluation of fruit quality. The success of the method is assumed to be achieved by using large sets of samples to produce robust calibration models. In this study we used representative samples of an early and a late season apple cultivar to evaluate model robustness (in terms of prediction ability and error) for soluble solids content (SSC) and acidity prediction, in the wavelength range 400-1100 nm. A total of 196 middle-early season and 219 late season apple (Malus domestica Borkh.) samples of cvs 'Aroma' and 'Holsteiner Cox' were used to construct spectral models for SSC and acidity. Partial least squares (PLS), ridge regression (RR) and elastic net (EN) models were used to build prediction models. Furthermore, we compared three sub-sample arrangements for forming training and test sets ('smooth fractionator', by date of measurement after harvest, and random). Using the 'smooth fractionator' sampling method, fewer spectral bands (26) and elastic net resulted in improved performance for SSC models of 'Aroma' apples, with a coefficient of variation CVSSC = 13%. The model showed consistently low errors and bias (PLS/EN: R(2)cal = 0.60/0.60; SEC = 0.88/0.88 °Brix; Biascal = 0.00/0.00; R(2)val = 0.33/0.44; SEP = 1.14/1.03; Biasval = 0.04/0.03). However, prediction of acidity and of SSC (CV = 5%) for the late cultivar 'Holsteiner Cox' produced inferior results compared with 'Aroma'. It was possible to construct local SSC and acidity calibration models for early season apple cultivars with CVs of SSC and acidity around 10%. The overall model performance of these data sets also depends on the proper selection of training and test sets. The 'smooth fractionator' protocol provided an objective method for obtaining training and test sets that capture the existing variability of the fruit samples for construction of visible-NIR prediction models.
The implication is that by using such 'efficient' sampling methods for obtaining an initial sample of fruit that represents the variability of the population and for sub-sampling to form training and test sets it should be possible to use relatively small sample sizes to develop spectral predictions of fruit quality. Using feature selection and elastic net appears to improve the SSC model performance in terms of R(2), RMSECV and RMSEP for 'Aroma' apples. © 2013 Society of Chemical Industry.
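The calibration-and-validation protocol discussed above can be sketched with one of the three model families compared (here closed-form ridge regression): fit on a training set of spectra, then report the prediction error (RMSEP) on a held-out test set. The synthetic "spectra" and the simple first-80/last-40 split are assumptions for illustration; the paper's point is that how the split is formed matters as much as the model choice.

```python
import numpy as np

# Synthetic calibration data: n "spectra" of p bands, with the trait (e.g. SSC)
# depending on a few bands plus noise. Purely illustrative values.
rng = np.random.default_rng(1)
n, p = 120, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 1.0                               # trait depends on the first 5 bands
y = X @ beta + rng.normal(scale=0.3, size=n)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

b = ridge_fit(X[:80], y[:80], lam=1.0)       # train on the first 80 samples
rmsep = float(np.sqrt(np.mean((X[80:] @ b - y[80:]) ** 2)))
```

Swapping how the 80/40 split is drawn (random, by date, or a variability-balancing scheme like the 'smooth fractionator') changes the reported RMSEP even though the model is unchanged, which is the effect the paper quantifies.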

  7. Semantic transparency in free stems: The effect of Orthography-Semantics Consistency on word recognition.

    PubMed

    Marelli, Marco; Amenta, Simona; Crepaldi, Davide

    2015-01-01

    A largely overlooked side effect in most studies of morphological priming is a consistent main effect of semantic transparency across priming conditions. That is, participants are faster at recognizing stems from transparent sets (e.g., farm) in comparison to stems from opaque sets (e.g., fruit), regardless of the preceding primes. This suggests that semantic transparency may also be consistently associated with some property of the stem word. We propose that this property might be traced back to the consistency, throughout the lexicon, between the orthographic form of a word and its meaning, here named Orthography-Semantics Consistency (OSC), and that an imbalance in OSC scores might explain the "stem transparency" effect. We exploited distributional semantic models to quantitatively characterize OSC, and tested its effect on visual word identification relying on large-scale data taken from the British Lexicon Project (BLP). Results indicated that (a) the "stem transparency" effect is solid and reliable, insofar as it holds in BLP lexical decision times (Experiment 1); (b) an imbalance in terms of OSC can account for it (Experiment 2); and (c) more generally, OSC explains variance in a large item sample from the BLP, proving to be an effective predictor in visual word access (Experiment 3).
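An OSC-style score can be sketched as follows, under the assumption (an approximation of the paper's measure, not its exact definition) that a word's Orthography-Semantics Consistency is the frequency-weighted average semantic similarity between its vector and the vectors of all lexicon words that contain it orthographically. The toy vectors and frequencies in use are illustrative; the paper derives them from a distributional semantic model over a large corpus.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def osc(target, vectors, freqs):
    """Frequency-weighted mean similarity between `target` and every word
    containing it orthographically; None if no such word exists."""
    related = [w for w in vectors if target in w and w != target]
    if not related:
        return None
    total = sum(freqs[w] for w in related)
    return sum(freqs[w] * cosine(vectors[target], vectors[w])
               for w in related) / total
```

Under this toy measure a transparent stem like "farm" (whose containing words stay near it in meaning) scores higher than an opaque stem like "fruit" (where e.g. "fruitless" drifts away), mirroring the "stem transparency" contrast.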

  8. Sampling design for groundwater solute transport: Tests of methods and analysis of Cape Cod tracer test data

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.; Garabedian, Stephen P.

    1991-01-01

    Tests of a one-dimensional sampling design methodology on measurements of bromide concentration collected during the natural gradient tracer test conducted by the U.S. Geological Survey on Cape Cod, Massachusetts, demonstrate its efficacy for field studies of solute transport in groundwater and the utility of one-dimensional analysis. The methodology was applied to design of sparse two-dimensional networks of fully screened wells typical of those often used in engineering practice. In one-dimensional analysis, designs consist of the downstream distances to rows of wells oriented perpendicular to the groundwater flow direction and the timing of sampling to be carried out on each row. The power of a sampling design is measured by its effectiveness in simultaneously meeting objectives of model discrimination, parameter estimation, and cost minimization. One-dimensional models of solute transport, differing in processes affecting the solute and assumptions about the structure of the flow field, were considered for description of tracer cloud migration. When fitting each model using nonlinear regression, additive and multiplicative error forms were allowed for the residuals which consist of both random and model errors. The one-dimensional single-layer model of a nonreactive solute with multiplicative error was judged to be the best of those tested. Results show the efficacy of the methodology in designing sparse but powerful sampling networks. Designs that sample five rows of wells at five or fewer times in any given row performed as well for model discrimination as the full set of samples taken up to eight times in a given row from as many as 89 rows. Also, designs for parameter estimation judged to be good by the methodology were as effective in reducing the variance of parameter estimates as arbitrary designs with many more samples. 
Results further showed that estimates of velocity and longitudinal dispersivity in one-dimensional models based on data from only five rows of fully screened wells each sampled five or fewer times were practically equivalent to values determined from moments analysis of the complete three-dimensional set of 29,285 samples taken during 16 sampling times.

  9. Effective Identification of Low-Gliadin Wheat Lines by Near Infrared Spectroscopy (NIRS): Implications for the Development and Analysis of Foodstuffs Suitable for Celiac Patients

    PubMed Central

    García-Molina, María Dolores; García-Olmo, Juan; Barro, Francisco

    2016-01-01

Scope The aim of this work was to assess the ability of Near Infrared Spectroscopy (NIRS) to distinguish wheat lines with low gliadin content, obtained by RNA interference (RNAi), from non-transgenic wheat lines. The discriminant analysis was performed using both whole grain and flour. The transgenic sample set included 409 samples for whole-grain sorting and 414 samples for flour experiments, while the non-transgenic set consisted of 126 and 156 samples for whole grain and flour, respectively. Methods and Results Samples were scanned using a Foss-NIR Systems 6500 System II instrument. Discrimination models were developed using the entire spectral range (400–2500 nm) and the ranges 400–780 nm, 800–1098 nm and 1100–2500 nm, followed by analysis by means of partial least squares (PLS). Two external validations were made, using samples from the years 2013 and 2014, and a minimum of 99% of the flour samples and 96% of the whole-grain samples were classified correctly. Conclusions The results demonstrate the ability of NIRS to successfully discriminate between wheat samples with low gliadin content and wild types. These findings are important for the development and analysis of foodstuffs for celiac disease (CD) patients, to achieve better dietary composition and a reduction in disease incidence. PMID:27018786

  10. The SAMPL4 host-guest blind prediction challenge: an overview.

    PubMed

    Muddana, Hari S; Fenley, Andrew T; Mobley, David L; Gilson, Michael K

    2014-04-01

    Prospective validation of methods for computing binding affinities can help assess their predictive power and thus set reasonable expectations for their performance in drug design applications. Supramolecular host-guest systems are excellent model systems for testing such affinity prediction methods, because their small size and limited conformational flexibility, relative to proteins, allow higher throughput and better numerical convergence. The SAMPL4 prediction challenge therefore included a series of host-guest systems, based on two hosts, cucurbit[7]uril and octa-acid. Binding affinities in aqueous solution were measured experimentally for a total of 23 guest molecules. Participants submitted 35 sets of computational predictions for these host-guest systems, based on methods ranging from simple docking, to extensive free energy simulations, to quantum mechanical calculations. Over half of the predictions provided better correlations with experiment than two simple null models, but most methods underperformed the null models in terms of root mean squared error and linear regression slope. Interestingly, the overall performance across all SAMPL4 submissions was similar to that for the prior SAMPL3 host-guest challenge, although the experimentalists took steps to simplify the current challenge. While some methods performed fairly consistently across both hosts, no single approach emerged as a consistent top performer, and the nonsystematic nature of the various submissions made it impossible to draw definitive conclusions regarding the best choices of energy models or sampling algorithms. Salt effects emerged as an issue in the calculation of absolute binding affinities of cucurbit[7]uril-guest systems, but were not expected to affect the relative affinities significantly. 
Useful directions for future rounds of the challenge might involve encouraging participants to carry out some calculations that replicate each other's studies, and to systematically explore parameter options.

  11. Watershed-based survey designs

    USGS Publications Warehouse

    Detenbeck, N.E.; Cincotta, D.; Denver, J.M.; Greenlee, S.K.; Olsen, A.R.; Pitchford, A.M.

    2005-01-01

    Watershed-based sampling design and assessment tools help serve the multiple goals for water quality monitoring required under the Clean Water Act, including assessment of regional conditions to meet Section 305(b), identification of impaired water bodies or watersheds to meet Section 303(d), and development of empirical relationships between causes or sources of impairment and biological responses. Creation of GIS databases for hydrography, hydrologically corrected digital elevation models, and hydrologic derivatives such as watershed boundaries and upstream–downstream topology of subcatchments would provide a consistent seamless nationwide framework for these designs. The elements of a watershed-based sample framework can be represented either as a continuous infinite set defined by points along a linear stream network, or as a discrete set of watershed polygons. Watershed-based designs can be developed with existing probabilistic survey methods, including the use of unequal probability weighting, stratification, and two-stage frames for sampling. Case studies for monitoring of Atlantic Coastal Plain streams, West Virginia wadeable streams, and coastal Oregon streams illustrate three different approaches for selecting sites for watershed-based survey designs.
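The unequal-probability weighting these designs rely on can be illustrated with a toy site frame (all watershed areas hypothetical; numpy's weighted sampling stands in for a full GRTS-style survey design):

```python
import numpy as np

# Hypothetical frame of 8 candidate watersheds and their areas (km^2);
# inclusion probability is made proportional to area.
areas = np.array([5.0, 8.0, 12.0, 20.0, 35.0, 60.0, 110.0, 250.0])
p = areas / areas.sum()

rng = np.random.default_rng(42)
n_draws, n_sites = 20_000, 3
counts = np.zeros(len(areas))
for _ in range(n_draws):
    # One survey: 3 sites drawn without replacement, weighted by area.
    sample = rng.choice(len(areas), size=n_sites, replace=False, p=p)
    counts[sample] += 1

inclusion_freq = counts / n_draws   # empirical inclusion frequency per site
```

Sites representing larger watersheds are included far more often than small ones, which is the intended behavior when inclusion probability is proportional to a size measure.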

  12. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size and marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. 
We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
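The conditional-versus-marginal behavior described above can be reproduced with a toy simulation (a hypothetical two-stage design with N = n or N = 2n and a "stop early if the interim mean is positive" rule; this illustrates the phenomenon, not the paper's estimators):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, true_mean = 100, 100_000, 0.0
stop_means, cont_means = [], []
for _ in range(reps):
    x1 = rng.normal(true_mean, 1.0, n)
    if x1.mean() > 0:                      # hypothetical rule: stop at N = n
        stop_means.append(x1.mean())
    else:                                  # otherwise continue to N = 2n
        x2 = rng.normal(true_mean, 1.0, n)
        cont_means.append(np.concatenate([x1, x2]).mean())

# Conditional on the realized sample size, the sample average looks biased
# in opposite directions; marginally, the two effects largely cancel.
cond_bias_stop = float(np.mean(stop_means)) - true_mean   # about +0.08
cond_bias_cont = float(np.mean(cont_means)) - true_mean   # about -0.04
marginal_bias = float(np.mean(stop_means + cont_means)) - true_mean
```

The simulated conditional "biases" are each several times larger in magnitude than the marginal one, which is the false impression the paper warns about.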

  13. Universal sensor interface module (USIM)

    NASA Astrophysics Data System (ADS)

    King, Don; Torres, A.; Wynn, John

    1999-01-01

    A universal sensor interface module (USIM) is being developed by the Raytheon-TI Systems Company for use with fields of unattended distributed sensors. In its production configuration, the USIM will be a multichip module consisting of a set of common modules. The common-module USIM set consists of (1) a sensor adapter interface (SAI) module, (2) a digital signal processor (DSP) and associated memory module, and (3) an RF transceiver module. The multispectral sensor interface is designed around a low-power A/D converter whose input/output interface consists of eight buffered, sampled inputs from various devices, including environmental, acoustic, seismic, and magnetic sensors. The eight sensor inputs are each high-impedance, low-capacitance differential amplifiers. The inputs are ideally suited for interfacing with discrete or MEMS sensors, since the differential input allows direct connection with high-impedance bridge sensors and capacitive voltage sources. Each amplifier is connected to a 22-bit delta-sigma A/D converter to enable simultaneous sampling. The low-power delta-sigma converter provides 22-bit resolution at sample frequencies up to 142 hertz (used for magnetic sensors) and 16-bit resolution at frequencies up to 1168 hertz (used for acoustic and seismic sensors). The video interface module is based around the TMS320C5410 DSP. It can provide sensor array addressing, video data input, and data calibration and correction. The processor module is based upon an MPC555. It will be used for mode control, synchronization of complex sensors, sensor signal processing, array processing, target classification, and tracking. Many functions of the A/D, DSP, and transceiver can be powered down by using variable clock speeds under software command or chip power switches. They can be returned to intermediate or full operation by DSP command. 
Power management may be based on the USIM's internal timer, a command received by the USIM transceiver, or sleep-mode processing management. The low-power detection mode is implemented by monitoring any of the sensor analog outputs at reduced sample rates for detections above a software-controllable threshold.

  14. Gene expression pattern recognition algorithm inferences to classify samples exposed to chemical agents

    NASA Astrophysics Data System (ADS)

    Bushel, Pierre R.; Bennett, Lee; Hamadeh, Hisham; Green, James; Ableson, Alan; Misener, Steve; Paules, Richard; Afshari, Cynthia

    2002-06-01

    We present an analysis of pattern recognition procedures used to predict the classes of samples exposed to pharmacologic agents by comparing gene expression patterns from samples treated with two classes of compounds. Rat liver mRNA samples, collected 24 hours after exposure to phenobarbital or peroxisome proliferators, were analyzed using a 1700 rat cDNA microarray platform. Sets of genes that were consistently differentially expressed in the rat liver samples following treatment were stored in the MicroArray Project System (MAPS) database. MAPS identified 238 genes in common that possessed a low probability (P < 0.01) of being randomly detected as differentially expressed at the 95% confidence level. Hierarchical cluster analysis of the 238 genes revealed gene expression profiles that separated samples based on exposure to a particular class of compound.
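A minimal sketch of the final clustering step, using a synthetic expression matrix in place of the MAPS gene set (all data invented; scipy's average-linkage clustering with correlation distance is one common choice for expression profiles):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)

# Synthetic log-expression matrix: 6 samples x 10 genes (all values invented).
# Samples 0-2 mimic one compound class (genes 0-4 induced),
# samples 3-5 mimic the other class (genes 5-9 induced).
expr = rng.normal(0.0, 0.2, size=(6, 10))
expr[:3, :5] += 2.0
expr[3:, 5:] += 2.0

# Average-linkage hierarchical clustering with correlation distance.
Z = linkage(expr, method="average", metric="correlation")
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the tree into 2 clusters
```

With well-separated profiles, the cut recovers the two treatment classes exactly, mirroring the class separation reported in the abstract.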

  15. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    PubMed

    Li, Jiahui; Yu, Qiqing

    2016-01-01

    Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  16. An early-biomarker algorithm predicts lethal graft-versus-host disease and survival

    PubMed Central

    Hartwell, Matthew J.; Özbek, Umut; Holler, Ernst; Major-Monfried, Hannah; Reddy, Pavan; Aziz, Mina; Hogan, William J.; Ayuk, Francis; Efebera, Yvonne A.; Hexner, Elizabeth O.; Bunworasate, Udomsak; Qayed, Muna; Ordemann, Rainer; Wölfl, Matthias; Mielke, Stephan; Chen, Yi-Bin; Devine, Steven; Jagasia, Madan; Kitko, Carrie L.; Litzow, Mark R.; Kröger, Nicolaus; Locatelli, Franco; Morales, George; Nakamura, Ryotaro; Reshef, Ran; Rösler, Wolf; Weber, Daniela; Yanik, Gregory A.; Levine, John E.; Ferrara, James L.M.

    2017-01-01

    BACKGROUND. No laboratory test can predict the risk of nonrelapse mortality (NRM) or severe graft-versus-host disease (GVHD) after hematopoietic cellular transplantation (HCT) prior to the onset of GVHD symptoms. METHODS. Patient blood samples on day 7 after HCT were obtained from a multicenter set of 1,287 patients, and 620 samples were assigned to a training set. We measured the concentrations of 4 GVHD biomarkers (ST2, REG3α, TNFR1, and IL-2Rα) and used them to model 6-month NRM using rigorous cross-validation strategies to identify the best algorithm that defined 2 distinct risk groups. We then applied the final algorithm in an independent test set (n = 309) and validation set (n = 358). RESULTS. A 2-biomarker model using ST2 and REG3α concentrations identified patients with a cumulative incidence of 6-month NRM of 28% in the high-risk group and 7% in the low-risk group (P < 0.001). The algorithm performed equally well in the test set (33% vs. 7%, P < 0.001) and the multicenter validation set (26% vs. 10%, P < 0.001). Sixteen percent, 17%, and 20% of patients were at high risk in the training, test, and validation sets, respectively. GVHD-related mortality was greater in high-risk patients (18% vs. 4%, P < 0.001), as was severe gastrointestinal GVHD (17% vs. 8%, P < 0.001). The same algorithm can be successfully adapted to define 3 distinct risk groups at GVHD onset. CONCLUSION. A biomarker algorithm based on a blood sample taken 7 days after HCT can consistently identify a group of patients at high risk for lethal GVHD and NRM. FUNDING. The National Cancer Institute, American Cancer Society, and the Doris Duke Charitable Foundation. PMID:28194439
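The abstract does not give the fitted coefficients, but the general shape of such a two-biomarker rule can be sketched as a logistic score on log-transformed concentrations that is thresholded into risk groups (every number below is illustrative, not the published algorithm):

```python
import math

def gvhd_risk_group(st2, reg3a, b0=-8.0, b1=0.8, b2=0.6, cutoff=0.3):
    """Toy two-biomarker rule: logistic score on log concentrations (ng/ml).

    All coefficients and the cutoff are illustrative, NOT the published
    algorithm; only the overall form (log-transform, combine, threshold)
    follows the kind of model the study describes."""
    score = b0 + b1 * math.log(st2) + b2 * math.log(reg3a)
    prob = 1.0 / (1.0 + math.exp(-score))
    return ("high" if prob >= cutoff else "low"), prob

group_hi, p_hi = gvhd_risk_group(st2=400.0, reg3a=200.0)   # elevated markers
group_lo, p_lo = gvhd_risk_group(st2=20.0, reg3a=10.0)     # low markers
```

A rule of this shape is what allows the same algorithm to be "re-thresholded" into two or three risk groups, as the abstract notes.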

  17. An exploratory study of a text classification framework for Internet-based surveillance of emerging epidemics

    PubMed Central

    Torii, Manabu; Yin, Lanlan; Nguyen, Thang; Mazumdar, Chand T.; Liu, Hongfang; Hartley, David M.; Nelson, Noele P.

    2014-01-01

    Purpose: Early detection of infectious disease outbreaks is crucial to protecting the public health of a society. Online news articles provide timely information on disease outbreaks worldwide. In this study, we investigated automated detection of articles relevant to disease outbreaks using machine learning classifiers. In a real-life setting, it is expensive to prepare a training data set for classifiers, which usually consists of manually labeled relevant and irrelevant articles. To mitigate this challenge, we examined the use of randomly sampled unlabeled articles as well as labeled relevant articles. Methods: Naïve Bayes and Support Vector Machine (SVM) classifiers were trained on 149 relevant and 149 or more randomly sampled unlabeled articles. Diverse classifiers were trained by varying the number of sampled unlabeled articles and the number of word features. The trained classifiers were applied to 15,000 articles published over 15 days. Top-ranked articles from each classifier were pooled, and the resulting set of 1337 articles was reviewed by an expert analyst to evaluate the classifiers. Results: Daily averages of areas under ROC curves (AUCs) over the 15-day evaluation period were 0.841 and 0.836 for the naïve Bayes and SVM classifiers, respectively. We referenced a database of disease outbreak reports to confirm that the evaluation data set resulting from the pooling method indeed covered incidents recorded in the database during the evaluation period. Conclusions: The proposed text classification framework utilizing randomly sampled unlabeled articles can facilitate a cost-effective approach to training machine learning classifiers in a real-life Internet-based biosurveillance project. We plan to examine this framework further using larger data sets and articles in non-English languages. PMID:21134784
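A minimal sketch of the training scheme, labeled relevant articles plus randomly sampled unlabeled articles treated as the negative class, using a tiny invented corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import roc_auc_score
from sklearn.naive_bayes import MultinomialNB

# Labeled relevant articles (positive class) -- all text invented.
relevant = [
    "officials report an outbreak of avian influenza with new cases",
    "cholera outbreak spreads as hospitals report rising cases",
    "health ministry confirms measles outbreak and new infections",
]
# Randomly sampled unlabeled articles, treated as the negative class.
unlabeled = [
    "the home team won the championship game last night",
    "new smartphone model released with larger screen",
    "stock markets closed higher after earnings reports",
]
docs = relevant + unlabeled
y = [1] * len(relevant) + [0] * len(unlabeled)

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(docs), y)

# Rank two unseen articles by predicted relevance and score the ranking.
test_docs = [
    "ministry confirms influenza outbreak with rising cases and infections",
    "team won the game and the championship",
]
y_true = [1, 0]
scores = clf.predict_proba(vec.transform(test_docs))[:, 1]
auc = roc_auc_score(y_true, scores)
```

Because unlabeled articles are mostly irrelevant in practice, treating a random sample of them as negatives yields a usable ranking without any manual labeling of the negative class.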

  18. Association analysis of the beta-3 adrenergic receptor Trp64Arg (rs4994) polymorphism with urate and gout.

    PubMed

    Fatima, Tahzeeb; Altaf, Sara; Phipps-Green, Amanda; Topless, Ruth; Flynn, Tanya J; Stamp, Lisa K; Dalbeth, Nicola; Merriman, Tony R

    2016-02-01

    The Arg64 allele of variant rs4994 (Trp64Arg) in the β3-adrenergic receptor gene has been associated with increased serum urate and risk of gout. Our objective was to investigate the relationship of rs4994 with serum urate and gout in New Zealand European, Māori and Pacific subjects. A total of 1730 clinically ascertained gout cases and 2145 controls were genotyped for rs4994 by TaqMan®. Māori and Pacific subjects were subdivided into Eastern Polynesian (EP) and Western Polynesian (WP) sample sets. Publicly available genotype data from the Atherosclerosis Risk in Communities Study and the Framingham Heart Study were utilized for serum urate association analysis. Multivariate logistic and linear regression adjusted for potential confounders was carried out using R version 2.15.2. No significant association of the minor Arg64 (G) allele of rs4994 with gout was found in the combined Polynesian cohorts (OR = 0.98, P = 0.88), although there was evidence, after adjustment for renal disease, for association in both the WP (OR = 0.53, P = 0.03) and the lower-Polynesian-ancestry EP (OR = 1.86, P = 0.05) sample sets. There was no evidence for association with gout in the European sample set (OR = 1.11, P = 0.57). However, the Arg64 allele was positively associated with urate in the WP data set (β = 0.036, P = 0.004, Pcorrected = 0.032). Association of the Arg64 variant with increased urate in the WP sample set was consistent with the previous literature, although the protective effect of this variant against gout in WP was inconsistent with it. This association provides an etiological link between metabolic syndrome components and urate homeostasis.

  19. The clinical nurse specialist in an Irish hospital.

    PubMed

    Wickham, Sheelagh

    2011-01-01

    This study was set in an acute Irish health care setting and aimed to explore the activity of the clinical nurse specialist (CNS) in this setting. Quantitative methodology, using a valid and reliable questionnaire, provided descriptive statistics that gave accurate data on the total population of CNSs in the health care setting. The study was set in an acute-care 750-bed hospital with 25 CNSs in practice; the sample consisted of all 25, the total population of CNSs working in the institution. The findings show the CNS to be active in the roles of researcher, educator, communicator, change agent, leader, and clinical specialist, but the level of activity varies between roles and in the extent to which individual CNSs enact each role. The findings merit further study of CNS role activity and of the variables that may influence it.

  20. Nuclear Forensic Inferences Using Iterative Multidimensional Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robel, M; Kristo, M J; Heller, M A

    2009-06-09

    Nuclear forensics involves the analysis of interdicted nuclear material for specific material characteristics (referred to as 'signatures') that imply specific geographical locations, production processes, culprit intentions, etc. Predictive signatures rely on expert knowledge of physics, chemistry, and engineering to develop inferences from these material characteristics. Comparative signatures, on the other hand, rely on comparison of the material characteristics of the interdicted sample (the 'questioned sample' in FBI parlance) with those of a set of known samples. In the ideal case, the set of known samples would be a comprehensive nuclear forensics database, a database which does not currently exist. In fact, our ability to analyze interdicted samples and produce an extensive list of precise materials characteristics far exceeds our ability to interpret the results. Therefore, as we seek to develop the extensive databases necessary for nuclear forensics, we must also develop the methods necessary to produce the necessary inferences from comparison of our analytical results with these large, multidimensional sets of data. In the work reported here, we used a large, multidimensional dataset of results from quality control analyses of uranium ore concentrate (UOC, sometimes called 'yellowcake'). We have found that traditional multidimensional techniques, such as principal components analysis (PCA), are especially useful for understanding such datasets and drawing relevant conclusions. In particular, we have developed an iterative partial least squares-discriminant analysis (PLS-DA) procedure that has proven especially adept at identifying the production location of unknown UOC samples. By removing classes which fell far outside the initial decision boundary, and then rebuilding the PLS-DA model, we have consistently produced better and more definitive attributions than with a single-pass classification approach. 
Performance of the iterative PLS-DA method compared favorably to that of classification and regression tree (CART) and k-nearest-neighbor (KNN) algorithms, with the best combination of accuracy and robustness, as tested by classifying samples measured independently in our laboratories against the vendor-QC-based reference set.

  1. The Influence of Plasma-Based Nitriding and Oxidizing Treatments on the Mechanical and Corrosion Properties of CoCrMo Biomedical Alloy

    NASA Astrophysics Data System (ADS)

    Noli, Fotini; Pichon, Luc; Öztürk, Orhan

    2018-04-01

    Plasma-based nitriding and/or oxidizing treatments were applied to CoCrMo alloy to improve its surface mechanical properties and corrosion resistance for biomedical applications. Three treatments were performed. A first set of CoCrMo samples was subjected to nitriding at moderate temperatures (∼400 °C). A second set was oxidized at 395 °C in pure O2. The last set was nitrided and subsequently oxidized under the experimental conditions of the previous sets (double treatment). The microstructure and morphology of the layers formed on the CoCrMo alloy were investigated by X-ray diffraction, Atomic Force Microscopy, and Scanning Electron Microscopy. In addition, nitrogen and oxygen profiles were determined by Glow Discharge Optical Emission Spectroscopy, Rutherford Backscattering Spectroscopy, Energy-Dispersive X-ray analysis, and Nuclear Reaction Analysis. Significant improvement of the Vickers hardness of the CoCrMo samples after plasma nitriding was observed, due to the supersaturated nitrogen solution and the formation of an expanded FCC γN phase and CrN precipitates. In the case of the oxidized samples, the Vickers hardness improvement was minimal. The corrosion behavior of the samples was investigated in simulated body fluid (0.9 pct NaCl solution at 37 °C) using electrochemical techniques (potentiodynamic polarization and cyclic voltammetry). The concentration of metal ions released from the CoCrMo surfaces was determined by Instrumental Neutron Activation Analysis. The experimental results clearly indicate that the CoCrMo surface subjected to the double surface treatment, consisting of plasma nitriding followed by plasma oxidizing, exhibited lower deterioration and better resistance to corrosion compared to the nitrided, oxidized, and untreated samples. This enhancement is believed to be due to the formation of a thicker and more stable layer.

  2. Trace and major element pollution originating from coal ash suspension and transport processes.

    PubMed

    Popovic, A; Djordjevic, D; Polic, P

    2001-04-01

    Coal ash obtained by coal combustion in the "Nikola Tesla A" power plant in Obrenovac, near Belgrade, Yugoslavia, is mixed with water of the Sava river and transported to the dump. In order to assess pollution caused by leaching of some minor and major elements during ash transport through the pipeline, two sets of samples (six samples each) were subjected to a modified sequential extraction. The first set consisted of coal ash samples taken immediately after combustion, while the second set was obtained by extraction with river water, imitating the processes that occur in the pipeline. Samples were extracted consecutively with distilled water and a 1 M solution of KCl, pH 7, and the differences in extractability were compared in order to predict potential pollution. Considering concentrations of seven trace elements as well as five major elements in extracts from a total of 12 samples, it can be concluded that lead and cadmium do not present an environmental threat during and immediately after ash transport to the dump. Portions of zinc, nickel and chromium are released during the ash transport, and arsenic and manganese are released continuously. Copper and iron do not present an environmental threat due to element leaching during and immediately after the coal ash suspension and transport. On the contrary, these elements, as well as chromium, become concentrated during coal ash transport. Adsorbed portions of calcium, magnesium and potassium are also leached during coal ash transport.

  3. Petrographic and Vitrinite Reflectance Analyses of a Suite of High Volatile Bituminous Coal Samples from the United States and Venezuela

    USGS Publications Warehouse

    Hackley, Paul C.; Kolak, Jonathan J.

    2008-01-01

    This report presents vitrinite reflectance and detailed organic composition data for nine high volatile bituminous coal samples. These samples were selected to provide a single, internally consistent set of reflectance and composition analyses to facilitate the study of linkages among coal composition, bitumen generation during thermal maturation, and geochemical characteristics of generated hydrocarbons. Understanding these linkages is important for addressing several issues, including: the role of coal as a source rock within a petroleum system, the potential for conversion of coal resources to liquid hydrocarbon fuels, and the interactions between coal and carbon dioxide during enhanced coalbed methane recovery and(or) carbon dioxide sequestration in coal beds.

  4. Of Small Beauties and Large Beasts: The Quality of Distractors on Multiple-Choice Tests Is More Important than Their Quantity

    ERIC Educational Resources Information Center

    Papenberg, Martin; Musch, Jochen

    2017-01-01

    In multiple-choice tests, the quality of distractors may be more important than their number. We therefore examined the joint influence of distractor quality and quantity on test functioning by providing a sample of 5,793 participants with five parallel test sets consisting of items that differed in the number and quality of distractors.…

  5. Searching for the Exit in a Maze? Or Setting Sail for New Horizons? Metaphors by Twelfth Grade Students for Learning Mathematics

    ERIC Educational Resources Information Center

    Guner, Necdet

    2013-01-01

    This study examines and classifies the metaphors that twelfth grade students formulated to describe the concept of "learning mathematics". The sample of the study consists of 669 twelfth grade students (317 female, 352 male) of two Anatolian and two vocational high schools located in the city center of Denizli. The following questions…

  6. A Preliminary Version of a Scale to Measure Sex-Role Attitudes in the Army. Research Memorandum 76-3.

    ERIC Educational Resources Information Center

    Woelfel, John C.; And Others

    To measure the sex role attitudes of Army personnel, an initial set of 174 items was developed. These items were administered to 721 soldiers at three Army installations; the sample consisted of 540 men and 181 women--401 of these were officers and 320 were enlisted personnel. Factor analysis of these 174 items indicated one strong…

  7. The Effects of Project Based Learning on Undergraduate Students' Achievement and Self-Efficacy Beliefs towards Science Teaching

    ERIC Educational Resources Information Center

    Bilgin, Ibrahim; Karakuyu, Yunus; Ay, Yusuf

    2015-01-01

    The purpose of this study is to investigate the effects of the Project-Based Learning (PBL) method on undergraduate students' achievement and its association with these students' self-efficacy beliefs about science teaching and opinions about PBL. The sample of the study consisted of two randomly chosen classes from a set of seven classes enrolled…

  8. Negative Affect, Delinquency, and Alcohol Use among Rural and Urban African-American Adolescents: A Brief Report

    ERIC Educational Resources Information Center

    Taylor, Matthew J.; Merritt, Stephanie M.; Austin, Chammie C.

    2013-01-01

    A model of negative affect and alcohol use was replicated on a sample of African-American high school students. Participants (N = 5,086) were randomly selected from a previously collected data set and consisted of 2,253 males and 2,833 females residing in both rural and urban locations. Multivariate analysis of covariance and structural equation…

  9. Addressing fluorogenic real-time qPCR inhibition using the novel custom Excel file system 'FocusField2-6GallupqPCRSet-upTool-001' to attain consistently high fidelity qPCR reactions

    PubMed Central

    Ackermann, Mark R.

    2006-01-01

    The purpose of this manuscript is to discuss fluorogenic real-time quantitative polymerase chain reaction (qPCR) inhibition and to introduce/define a novel Microsoft Excel-based file system which provides a way to detect and avoid inhibition, and enables investigators to consistently design dynamically sound, truly LOG-linear qPCR reactions very quickly. The qPCR problems this invention solves are universal to all qPCR reactions, and it performs all necessary qPCR set-up calculations in about 52 seconds (using a Pentium 4 processor) for up to seven qPCR targets and seventy-two samples at a time – calculations that commonly take capable investigators days to finish. We have named this custom Excel-based file system "FocusField2-6GallupqPCRSet-upTool-001" (FF2-6-001 qPCR set-up tool), and are in the process of transforming it into professional qPCR set-up software to be made available in 2007. The current prototype is already fully functional. PMID:17033699

  10. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the n^(1/2−d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. 
We study these estimators in both the fixed dimensional and high dimensional sparse setups; in the latter, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation, which hold with asymptotic probability 1, thereby providing the ℓ1-consistency of the proposed estimator. We also establish model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.
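As a quick illustration of the second chapter's setting, the sketch below simulates a long-memory moving-average error process and fits scikit-learn's generic `Lasso`; the sample sizes, memory parameter d, coefficient values, and regularization level are illustrative assumptions, not the dissertation's settings.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, d = 200, 50, 0.3  # d: memory parameter (assumed value)

# Long-memory moving-average errors: truncated MA(inf) with slowly
# decaying weights ~ k^(d-1), then normalized to unit variance.
k = np.arange(1, 501)
weights = k ** (d - 1)
z = rng.standard_normal(n + 500)
eps = np.array([weights @ z[i:i + 500][::-1] for i in range(n)])
eps /= eps.std()

# Sparse truth: only the first three coefficients are nonzero.
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + eps

fit = Lasso(alpha=0.1).fit(X, y)
support = np.flatnonzero(np.abs(fit.coef_) > 0.5)  # estimated active set
```

Despite the dependent errors, the Lasso typically recovers the true support here, consistent with the sign-consistency result the chapter establishes.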

  11. Demonstrating the efficacy of the FoneAstra pasteurization monitor for human milk pasteurization in resource-limited settings.

    PubMed

    Naicker, Mageshree; Coutsoudis, Anna; Israel-Ballard, Kiersten; Chaudhri, Rohit; Perin, Noah; Mlisana, Koleka

    2015-03-01

    Human milk provides crucial nutrition and immunologic protection for infants. When a mother's own milk is unavailable, donated human milk, pasteurized to destroy bacteria and viruses, is a lifesaving replacement. Flash-heat pasteurization is a simple, low-cost, and commonly used method to make milk safe, but currently there is no system to monitor milk temperature, which challenges quality control. FoneAstra, a smartphone-based mobile pasteurization monitor, removes this barrier by guiding users through pasteurization and documenting consistent and safe practice. This study evaluated FoneAstra's efficacy as a quality control system, particularly in resource-limited settings, by comparing bacterial growth in donor milk flash-heated with and without the device at a neonatal intensive care unit in Durban, South Africa. For 100 samples of donor milk, one aliquot each of prepasteurized milk, milk flash-heated without FoneAstra, and milk pasteurized with FoneAstra was cultured on routine agar for bacterial growth. Isolated bacteria were identified and enumerated. In total, 300 samples (three from each donor sample) were analyzed. Bacterial growth was found in 86 of the 100 samples before any pasteurization and one of the 100 postpasteurized samples without FoneAstra. None of the samples pasteurized using FoneAstra showed bacterial growth. Both pasteurization methods were safe and effective. FoneAstra, however, provides the additional benefits of user-guided temperature monitoring and data tracking. By improving quality assurance and standardizing the pasteurization process, FoneAstra can support wide-scale implementation of human milk banks in resource-limited settings, increasing access and saving lives.

  12. A Data Cleaning Method for Big Trace Data Using Movement Consistency

    PubMed Central

    Tang, Luliang; Zhang, Xia; Li, Qingquan

    2018-01-01

Given the popularization of GPS technologies, the massive spatiotemporal GPS traces collected by vehicles are becoming a new kind of big data source for urban geographic information extraction. The growing volume of these datasets, however, creates processing and management difficulties, while their low quality introduces uncertainties when investigating human activities. Based on the error distribution law and position accuracy of GPS data, we propose in this paper a data cleaning method for this kind of spatial big data using movement consistency. First, a trajectory is partitioned into a set of sub-trajectories at its movement characteristic points: GPS points indicating that the motion of the vehicle has transformed from one state into another. Then, GPS data are cleaned based on the similarities of GPS points and the movement consistency model of the sub-trajectory. The movement consistency model is built using the random sample consensus (RANSAC) algorithm, exploiting the high spatial consistency of high-quality GPS data. The proposed method is evaluated in extensive experiments, using GPS trajectories generated by a sample of vehicles over a 7-day period in Wuhan city, China. The results show the effectiveness and efficiency of the proposed method. PMID:29522456
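The RANSAC step can be sketched on a toy sub-trajectory; a minimal sketch, assuming (as the paper does not fully specify in the abstract) a locally linear motion model and illustrative noise levels, using scikit-learn's `RANSACRegressor`:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor

rng = np.random.default_rng(1)
# One sub-trajectory: position x along time t, mostly high-quality points
t = np.linspace(0, 10, 100)
x = 2.0 * t + 1.0 + rng.normal(0, 0.05, 100)  # consistent motion + small noise
x[[10, 40, 70]] += 5.0                        # three gross GPS errors

# Fit a movement-consistency model robust to the outliers
model = RANSACRegressor(LinearRegression(), residual_threshold=0.5,
                        random_state=0)
model.fit(t.reshape(-1, 1), x)

clean = x[model.inlier_mask_]  # cleaned sub-trajectory keeps only inliers
```

Points far from the consensus motion model are flagged as low-quality and dropped, which is the essence of cleaning by movement consistency.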

  13. Prediction of tautomer ratios by embedded-cluster integral equation theory

    NASA Astrophysics Data System (ADS)

    Kast, Stefan M.; Heil, Jochen; Güssregen, Stefan; Schmidt, K. Friedemann

    2010-04-01

The "embedded cluster reference interaction site model" (EC-RISM) approach combines statistical-mechanical integral equation theory and quantum-chemical calculations for predicting thermodynamic data for chemical reactions in solution. The electronic structure of the solute is determined self-consistently with the structure of the solvent that is described by 3D RISM integral equation theory. The continuous solvent-site distribution is mapped onto a set of discrete background charges ("embedded cluster") that represent an additional contribution to the molecular Hamiltonian. The EC-RISM analysis of the SAMPL2 challenge set of tautomers proceeds in three stages. Firstly, the group of compounds for which quantitative experimental free energy data was provided was taken to determine appropriate levels of quantum-chemical theory for geometry optimization and free energy prediction. Secondly, the resulting workflow was applied to the full set, allowing for chemical interpretations of the results. Thirdly, disclosure of experimental data for parts of the compounds facilitated a detailed analysis of methodical issues and suggestions for future improvements of the model. Without specifically adjusting parameters, the EC-RISM model yields the smallest value of the root mean square error for the first set (0.6 kcal mol⁻¹) as well as for the full set of quantitative reaction data (2.0 kcal mol⁻¹) among the SAMPL2 participants.

  14. Beyond Picture Naming: Norms and Patient Data for a Verb Generation Task

    PubMed Central

    Kurland, Jacquie; Reber, Alisson; Stokes, Polly

    2014-01-01

    Purpose The current study aimed to: 1) acquire a set of verb generation-to-picture norms; and 2) probe the task's utility as an outcome measure in aphasia treatment. Method Fifty healthy volunteers participated in Phase I, the verb generation normative sample. They generated verbs for 218 pictures of common objects (ISI=5s). In Phase II, four persons with aphasia (PWA) generated verbs for 60 objects (ISI=10s). Their stimuli consisted of objects which were: 1) recently trained (for object naming; n=20); 2) untrained (a control set; n=20); or 3) from a set of pictures named correctly at baseline (n=20). Verb generation was assessed twice: two months into, and following, a six-month home practice program. Results No objects elicited perfect verb agreement in the normative sample. Stimuli with the highest percent agreement were mostly artifacts, and their dominant verbs were primary functional associates. Although verbs were not targeted in treatment or home practice, PWA mostly improved their verb generation performance post-practice. Conclusions A set of clinically and experimentally useful verb generation norms was acquired for a subset of the Snodgrass and Vanderwart (1980) picture set. More cognitively demanding than confrontation naming, this task may help to fill the sizeable gap between object picture naming and propositional speech. PMID:24686752

  15. High-throughput quantitative analysis by desorption electrospray ionization mass spectrometry.

    PubMed

    Manicke, Nicholas E; Kistler, Thomas; Ifa, Demian R; Cooks, R Graham; Ouyang, Zheng

    2009-02-01

    A newly developed high-throughput desorption electrospray ionization (DESI) source was characterized in terms of its performance in quantitative analysis. A 96-sample array, containing pharmaceuticals in various matrices, was analyzed in a single run with a total analysis time of 3 min. These solution-phase samples were examined from a hydrophobic PTFE ink printed on glass. The quantitative accuracy, precision, and limit of detection (LOD) were characterized. Chemical background-free samples of propranolol (PRN) with PRN-d7 as internal standard (IS) and carbamazepine (CBZ) with CBZ-d10 as IS were examined. So were two other sample sets consisting of PRN/PRN-d7 at varying concentration in a biological milieu of 10% urine or porcine brain total lipid extract, total lipid concentration 250 ng/μL. The background-free samples, examined in a total analysis time of 1.5 s/sample, showed good quantitative accuracy and precision, with a relative error (RE) and relative standard deviation (RSD) generally less than 3% and 5%, respectively. The samples in urine and the lipid extract required a longer analysis time (2.5 s/sample) and showed RSD values of around 10% for the samples in urine and 4% for the lipid extract samples and RE values of less than 3% for both sets. The LOD for PRN and CBZ when analyzed without chemical background was 10 and 30 fmol, respectively. The LOD of PRN increased to 400 fmol analyzed in 10% urine, and 200 fmol when analyzed in the brain lipid extract.
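The internal-standard quantitation used here follows the usual pattern: the analyte/IS intensity ratio is linear in concentration, so matrix effects that suppress both signals equally cancel out. A minimal sketch with assumed calibration numbers (not the study's data):

```python
import numpy as np

# Hypothetical calibration: PRN / PRN-d7 intensity ratio vs. concentration
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0])  # calibrant concentrations (a.u.)
ratio = 0.92 * conc + 0.01                   # measured analyte/IS ratios

# Fit the calibration line, then invert it for an unknown sample
slope, intercept = np.polyfit(conc, ratio, 1)
unknown_ratio = 2.77
estimate = (unknown_ratio - intercept) / slope  # estimated concentration
```

Because the deuterated IS co-desorbs and co-ionizes with the analyte, the ratio-based calibration is what keeps RE and RSD low even in urine or lipid matrices.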

  16. Rational-emotive behavior therapy and the formation of stimulus equivalence classes.

    PubMed

    Plaud, J J; Gaither, G A; Weller, L A; Bigwood, S J; Barth, J; von Duvillard, S P

    1998-08-01

    Stimulus equivalence is a behavioral approach to analyzing the "meaning" of stimulus sets and has implications for clinical psychology. The formation of three-member (A --> B --> C) stimulus equivalence classes was used to investigate the effects of three different sets of sample and comparison stimuli on emergent behavior. The three stimulus sets were composed of Rational-Emotive Behavior Therapy (REBT)-related words, non-REBT emotionally charged words, and a third, neutral category composed of flower labels. Sixty-two women and men participated in a modified matching-to-sample experiment. Using a mixed cross-over design, and controlling for serial order effects, participants received conditional training and emergent relationship training in the three stimulus set conditions. Results revealed a significant interaction between the formation of stimulus equivalence classes and stimulus meaning: participants consistently reached criterion responding more slowly for REBT-related and non-REBT emotionally charged words. Results were examined in the context of an analysis of the importance of stimulus meaning on behavior and the relation of stimulus meaning to behavioral and cognitive theories, with special appraisal given to the influence of fear-related discriminative stimuli on behavior.

  17. Assessing the Internal Consistency of the Marine Carbon Dioxide System at High Latitudes: The Labrador Sea AR7W Line Study Case

    NASA Astrophysics Data System (ADS)

    Raimondi, L.; Azetsu-Scott, K.; Wallace, D.

    2016-02-01

    This work assesses the internal consistency of ocean carbon dioxide through the comparison of discrete measurements and calculated values of four analytical parameters of the inorganic carbon system: Total Alkalinity (TA), Dissolved Inorganic Carbon (DIC), pH and Partial Pressure of CO2 (pCO2). The study is based on 486 seawater samples analyzed for TA, DIC and pH and 86 samples for pCO2 collected during the 2014 cruise along the AR7W line in the Labrador Sea. The internal consistency has been assessed using all combinations of input parameters and eight sets of thermodynamic constants (K1, K2) in calculating each parameter through the CO2SYS software. Residuals of each parameter have been calculated as the differences between measured and calculated values (reported as ΔTA, ΔDIC, ΔpH and ΔpCO2). Although differences between the selected sets of constants were observed, the largest were obtained using different pairs of input parameters. As expected, the pair pH-pCO2 produced the poorest results, suggesting that measurements of either TA or DIC are needed to define the carbonate system accurately and precisely. To identify a signature of organic alkalinity, we isolated the residuals in the bloom area; only ΔTA values from surface waters (0-30 m) along the Greenland side of the basin were selected. The residuals showed that no measured value was higher than the calculated one, and therefore we could not detect the presence of organic bases in the shallower water column. The internal consistency in characteristic water masses of the Labrador Sea (Denmark Strait Overflow Water, North East Atlantic Deep Water, Newly-ventilated Labrador Sea Water, Greenland and Labrador Shelf waters) will also be discussed.

  18. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

    Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that achieves good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on a movement and a sensory paradigm. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and its decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, making it possible to exploit large historical data when decoding current data. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved much better and more robust performance in all sessions than with the other three calibration methods in both monkeys. Significance. (1) This study brings transfer learning theory into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data here are an ultra-small sample set and are transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement and the sensory paradigm, indicating viable generalization.
By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
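The abstract does not fully specify the PDA algorithm, but the general idea of PCA-based alignment across sessions can be sketched on synthetic data. Everything below is an illustrative assumption (feature dimensions, session drift as a baseline shift, session-mean centering, and a `LogisticRegression` decoder), not the authors' method:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def session(shift, n):
    """Synthetic 2-class neural features with a session-specific baseline."""
    X = rng.normal(0, 1, (n, 20))
    y = rng.integers(0, 2, n)
    X[y == 1, :3] += 2.0          # class structure lives in 3 dimensions
    return X + shift, y           # 'shift' mimics recording drift

X_hist, y_hist = session(0.0, 400)   # large historical data set
X_curr, y_curr = session(1.5, 40)    # drifted current session

# Align domains: center each session, project onto historical PCs
pca = PCA(n_components=5).fit(X_hist - X_hist.mean(0))
Zh = pca.transform(X_hist - X_hist.mean(0))
Zc = pca.transform(X_curr - X_curr.mean(0))

# Recalibrate with only ten current trials (five per class in spirit),
# pooled with the full historical data
clf = LogisticRegression().fit(np.vstack([Zh, Zc[:10]]),
                               np.hstack([y_hist, y_curr[:10]]))
acc = clf.score(Zc[10:], y_curr[10:])
```

The point of the sketch is the economics: almost all training labels come from the historical session, and the tiny current sample only anchors the aligned subspace.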

  19. Space and time aliasing structure in monthly mean polar-orbiting satellite data

    NASA Technical Reports Server (NTRS)

    Zeng, Lixin; Levy, Gad

    1995-01-01

    Monthly mean wind fields from the European Remote Sensing Satellite (ERS1) scatterometer are presented. A banded structure which resembles the satellite subtrack is clearly and consistently apparent in the isotachs as well as the u and v components of the routinely produced fields. The structure also appears in the means of data from other polar-orbiting satellites and instruments. An experiment is designed to trace the cause of the banded structure. The European Centre for Medium-Range Weather Forecast (ECMWF) gridded surface wind analyses are used as a control set. These analyses are also sampled with the ERS1 temporal-spatial sampling pattern to form a simulated scatterometer wind set. Both sets are used to create monthly averages. The banded structures appear in the monthly mean simulated data but do not appear in the control set. It is concluded that the source of the banded structure lies in the spatial and temporal sampling of the polar-orbiting satellite, which results in undersampling. The problem involves multiple timescales and space scales, oversampling and undersampling in space, aliasing in the time and space domains, and preferentially sampled variability. It is shown that commonly used spatial smoothers (or filters), while producing visually pleasing results, also significantly bias the true mean. A three-dimensional spatial-temporal interpolator is designed and used to determine the mean field. It is found to produce satisfactory monthly means from both simulated and real ERS1 data. The implications for climate studies involving polar-orbiting satellite data are discussed.

  20. Abuse of reformulated OxyContin: Updated findings from a sentinel surveillance sample of individuals assessed for substance use disorder.

    PubMed

    Cassidy, Theresa A; Thorley, Eileen; Black, Ryan A; DeVeaugh-Geiss, Angela; Butler, Stephen F; Coplan, Paul

    To examine abuse prevalence for OxyContin and comparator opioids over a 6-year period before and after market entry of reformulated OxyContin, and to assess consistency in abuse across treatment settings and geographic regions. An observational study examining longitudinal changes using cross-sectional data from treatment centers for substance use disorder. A total of 874 facilities in 39 states in the United States within the National Addictions Vigilance Intervention and Prevention Program (NAVIPPRO®) surveillance system. Adults (n = 72,060) assessed for drug problems using the Addiction Severity Index-Multimedia Version (ASI-MV®) from January 2009 through December 2015 who abused prescription opioids. Percent change in past 30-day abuse. OxyContin had significantly lower abuse 5 years after reformulation compared to levels for original OxyContin. Reductions in OxyContin abuse were consistent in magnitude across geographic regions, ranging from 41 to 52 percent, although reductions differed across treatment setting categories. Changes in geographic region and treatment settings across study years did not bias the estimate of lower OxyContin abuse through confounding. In the postmarket setting, limitations and methodologic challenges in abuse measurement exist, and it is difficult to isolate the singular impact of any one intervention given the complexity of prescription opioid abuse. Expectations for a reasonable threshold of abuse for any one ADF product, or for ADF opioids as a class, are still uncertain and undefined. In this treatment sample of individuals assessed for substance use disorder, a significant decline in abuse prevalence of reformulated OxyContin was observed 5 years after its reformulation, to levels below those historically observed for the original formulation of the product.

  1. The CO₂ GAP Project--CO₂ GAP as a prognostic tool in emergency departments.

    PubMed

    Shetty, Amith L; Lai, Kevin H; Byth, Karen

    2010-12-01

    To determine whether the CO₂ GAP [(a-ET) PCO₂] value differs consistently in patients presenting to the ED with shortness of breath who require ventilatory support; to determine a cut-off value of the CO₂ GAP that is consistently associated with the measured outcome; and to compare its performance against other derived variables. This prospective observational study was conducted in the ED on a convenience sample of 412 of 759 patients who underwent concurrent arterial blood gas and ETCO₂ (end-tidal CO₂) measurement. They were randomized to a training set of 312 patients and a validation set of 100 patients. The primary outcome of interest was the need for ventilatory support; secondary outcomes were admission to a high dependency unit or death during the stay in the ED. The randomly selected training set was used to select cut-points for the possible predictors, that is, the CO₂ GAP, CO₂ gradient, physiologic dead space and A-a gradient. The sensitivity, specificity and predictive values of these predictors were then validated in the 100-patient validation set. Analysis of the receiver operating characteristic curves revealed that the CO₂ GAP performed significantly better than the arterial-alveolar gradient in patients requiring ventilatory support (area under the curve 0.950 vs 0.726). A CO₂ GAP ≥10 was associated with assisted ventilation outcomes when applied to the validation set (100% sensitivity, 70% specificity). The CO₂ GAP [(a-ET) PCO₂] differs significantly in patients requiring assisted ventilation when presenting with shortness of breath to EDs, and further research addressing the prognostic value of the CO₂ GAP in this specific respect is required. © 2010 The Authors. EMA © 2010 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
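The ROC analysis and cut-off evaluation used in studies like this follow a standard recipe; a minimal sketch with simulated gap values (the distributions and group sizes below are illustrative assumptions, not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
# Hypothetical CO2 GAP values (mmHg) for the two outcome groups
gap_vent = rng.normal(16, 3, 60)    # needed ventilatory support
gap_no = rng.normal(5, 3, 252)      # did not
gap = np.concatenate([gap_vent, gap_no])
needed_vent = np.concatenate([np.ones(60), np.zeros(252)])

# Discrimination of the continuous predictor
auc = roc_auc_score(needed_vent, gap)

# Operating characteristics of the CO2 GAP >= 10 cut-off
pred = gap >= 10
sens = pred[needed_vent == 1].mean()     # sensitivity
spec = (~pred)[needed_vent == 0].mean()  # specificity
```

Sweeping the cut-off over all observed values and plotting sensitivity against 1-specificity yields the ROC curve whose area the study reports.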

  2. Teaching calculus using module based on cooperative learning strategy

    NASA Astrophysics Data System (ADS)

    Arbin, Norazman; Ghani, Sazelli Abdul; Hamzah, Firdaus Mohamad

    2014-06-01

    The purpose of this research is to evaluate the effectiveness of a module which utilizes cooperative learning for teaching the Calculus topics of limits, derivatives and integrals. The sample consists of 50 semester 1 students from the Science Programme (AT 16) at Sultan Idris Education University. A set of questions on the related topics (pre and post) was used as the instrument to collect data. The data were analyzed using inferential statistics involving the paired sample t-test and the independent t-test. The result shows that students have a positive inclination towards the module in terms of understanding.
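The paired sample t-test used here compares each student's pre- and post-test scores; a minimal sketch with simulated scores (the means, spreads, and gain are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical pre/post scores for 50 students
pre = rng.normal(55, 10, 50)
post = pre + rng.normal(8, 5, 50)  # assumed average gain of 8 marks

# Paired test: the same students are measured twice
t, p = stats.ttest_rel(post, pre)
```

A significant positive t statistic indicates the post-test scores are reliably higher, which is the kind of evidence the module's effectiveness claim rests on; comparing two independent groups would instead use `stats.ttest_ind`.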

  3. Specific methodology for capacitance imaging by atomic force microscopy: A breakthrough towards an elimination of parasitic effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estevez, Ivan; Concept Scientific Instruments, ZA de Courtaboeuf, 2 rue de la Terre de Feu, 91940 Les Ulis; Chrétien, Pascal

    2014-02-24

    On the basis of a home-made nanoscale impedance measurement device associated with a commercial atomic force microscope, a specific operating process is proposed in order to improve absolute (in the sense of "nonrelative") capacitance imaging by drastically reducing the parasitic effects due to stray capacitance, surface topography, and sample tilt. The method, combining a two-pass image acquisition with the exploitation of approach curves, has been validated on sets of calibration samples consisting of square parallel plate capacitors for which theoretical capacitance values were numerically calculated.

  4. Predicting Reactive Intermediate Quantum Yields from Dissolved Organic Matter Photolysis Using Optical Properties and Antioxidant Capacity.

    PubMed

    Mckay, Garrett; Huang, Wenxi; Romera-Castillo, Cristina; Crouch, Jenna E; Rosario-Ortiz, Fernando L; Jaffé, Rudolf

    2017-05-16

    The antioxidant capacity and the formation of photochemically produced reactive intermediates (RI) were studied for water samples collected from the Florida Everglades with different spatial (marsh versus estuarine) and temporal (wet versus dry season) characteristics. Measured RI included triplet excited states of dissolved organic matter (³DOM*), singlet oxygen (¹O₂), and the hydroxyl radical (•OH). Single and multiple linear regression modeling were performed using a broad range of extrinsic (to predict RI formation rates, R_RI) and intrinsic (to predict RI quantum yields, Φ_RI) parameters. Multiple linear regression models consistently led to better predictions of R_RI and Φ_RI for our data set but poor prediction of Φ_RI for a previously published data set, probably because the predictors are intercorrelated (Pearson's r > 0.5). Single linear regression models were built with data compiled from previously published studies (n ≈ 120) in which E2:E3, S, and Φ_RI values were measured, which revealed a high degree of similarity between RI-optical property relationships across DOM samples of diverse sources. This study reveals that •OH formation is, in general, decoupled from ³DOM* and ¹O₂ formation, providing supporting evidence that ³DOM* is not a •OH precursor. Finally, Φ_RI for ¹O₂ and ³DOM* correlated negatively with antioxidant activity (a surrogate for electron donating capacity) for the collected samples, which is consistent with intramolecular oxidation of DOM moieties by ³DOM*.
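The intercorrelation caveat is easy to demonstrate: when two optical predictors carry overlapping information, a multiple regression can fit one data set well yet transfer poorly. A sketch with synthetic numbers (the coefficients, noise levels, and the linear tie between E2:E3 and the spectral slope S are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n = 120
# Hypothetical optical properties of DOM samples
e2e3 = rng.normal(5.0, 1.0, n)                  # E2:E3 absorbance ratio
S = 0.003 * e2e3 + rng.normal(0, 0.001, n)      # spectral slope, tied to E2:E3
phi = 0.002 * e2e3 + 0.5 * S + rng.normal(0, 0.001, n)  # toy quantum yield

# Predictors are intercorrelated (the abstract's transferability caveat)
r = np.corrcoef(e2e3, S)[0, 1]

# Multiple linear regression fits the in-sample data well regardless
X = np.column_stack([e2e3, S])
r2 = LinearRegression().fit(X, phi).score(X, phi)
```

With r well above 0.5, the individual coefficients are poorly identified even though in-sample R² is high, which is why such a model can fail on an independent data set.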

  5. The thought-action fusion scale: further evidence for its reliability and validity.

    PubMed

    Rassin, E; Merckelbach, H; Muris, P; Schmidt, H

    2001-05-01

    Thought-action fusion (TAF) refers to a set of cognitive biases that are thought to play a role in the development of obsessional phenomena. To measure these biases, R. Shafran, D. S. Thordarson, and S. Rachman (1996; Journal of Anxiety Disorders, 10, 379-391) developed the TAF-scale. They concluded that the TAF-scale possesses adequate psychometric qualities. The current study sought to further explore the reliability and validity of the TAF-scale. Results indicate that the TAF-scale has good internal consistency. TAF-scores correlated with self-reports of obsessional problems. Furthermore, mean scores in a mixed sample of anxiety disordered patients were higher than those in a normal sample. However, temporal consistency was somewhat disappointing. Also, the question remains whether TAF is specific to obsessive-compulsive disorder or taps more pervasive biases that play a role in a variety of disorders.

  6. Expression of Superparamagnetic Particles on FORC Diagrams

    NASA Astrophysics Data System (ADS)

    Hirt, A. M.; Kumari, M.; Crippa, F.; Petri-Fink, A.

    2015-12-01

    Identification of superparamagnetic (SP) particles in natural materials provides information on processes that lead to the new formation or dissolution of iron oxides. SP particles express themselves on first-order reversal curve (FORC) diagrams as a distribution centered near the origin of the diagram. Pike et al. (2001, GJI, 145, 721) demonstrated that thermal relaxation produces an upward shift in the FORC distribution, and attributed this to a pause encountered at each reversal field. In this study we examine the relationship between this upward shift and particle size in two sets of synthetic iron oxide nanoparticles. One set of coated magnetite particles has well-constrained particle sizes of 9, 16 and 20 nm in diameter. A second set from the FeraSpin™ Series, consisting of FeraSpinXS, M and XL, was also evaluated. Rock magnetic experiments indicate that the first set of samples is exclusively magnetite, whereas the FeraSpin samples contain predominantly magnetite with some degree of oxidation. Samples from both sets show that the upward shift of the FORC distribution at the origin increases with decreasing particle size. The shift in the FeraSpin series is smaller than in the samples from the first set; this is attributed to the effect of interaction, which counteracts the thermal relaxation behavior of the SP particles. The FeraSpin series also shows a broader FORC distribution on the vertical axis that appears to be related to non-saturation of the hysteresis curve at the maximum applied field. This non-saturation behavior can be due to spins of very fine particles or oxidation to hematite. AC susceptibility at low temperature indicates that particle interaction may affect the effective magnetic particle size. Our results suggest that the FORC distribution in pure SP particle systems provides information on the particle size distribution or oxidation, which can be further evaluated with low temperature techniques.

  7. Radioactivities induced in some LDEF samples

    NASA Technical Reports Server (NTRS)

    Reedy, Robert C.; Moss, Calvin E.

    1992-01-01

    Final activities are reported for gamma ray emitting isotopes measured in 35 samples from LDEF. In 26 steel trunnion samples, activities of Mn-54 and Co-57 were measured and limits set on other isotopes. In five Al end support retainer plates and two Al keel plate samples, Na-22 was measured. In two Ti clip samples, Na-22 was measured, limits for Sc-46 were obtained, and high activities for impurity uranium and daughter isotopes were observed. Four sets of depth vs. activity profiles were measured for the D sections of the trunnion. For all four profiles, the activities first decreased with increasing distance from the surface of the trunnion but were fairly flat near the center. These profiles are consistent with production by both the lower energy (approx. 100 MeV) trapped particles and high energy (approx. 10 GeV) galactic-cosmic ray particles. For the near surface samples, the earth quadrant had more Mn-54 than the space quadrant. For the D sections, there was less Mn-54 in the east trunnion than in the west trunnion. Comparisons are made among the samples and with activities measured by others. The limit for Sc-46 in the Ti clips is compared with the activities of Mn-54 produced in the steel pieces by similar reactions. Activities predicted by several models are compared with the measured activities.

  8. A method to eliminate the influence of incident light variations in spectral analysis

    NASA Astrophysics Data System (ADS)

    Luo, Yongshun; Li, Gang; Fu, Zhigang; Guan, Yang; Zhang, Shengzhao; Lin, Ling

    2018-06-01

    The intensity of the light source and consistency of the spectrum are the most important factors influencing the accuracy in quantitative spectrometric analysis. An efficient "measuring in layer" method was proposed in this paper to limit the influence of inconsistencies in the intensity and spectrum of the light source. In order to verify the effectiveness of this method, a light source with a variable intensity and spectrum was designed according to Planck's law and Wien's displacement law. Intra-lipid samples with 12 different concentrations were prepared and divided into modeling sets and prediction sets according to different incident lights and solution concentrations. The spectra of each sample were measured with five different light intensities. The experimental results showed that the proposed method was effective in eliminating the influence caused by incident light changes and was more effective than normalized processing.

  9. A Novel Tool Improves Existing Estimates of Recent Tuberculosis Transmission in Settings of Sparse Data Collection.

    PubMed

    Kasaie, Parastu; Mathema, Barun; Kelton, W David; Azman, Andrew S; Pennington, Jeff; Dowdy, David W

    2015-01-01

    In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission ("recent transmission proportion"), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional 'n-1' approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the 'n-1' technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the 'n-1' model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models' performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data.
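The traditional 'n-1' baseline against which the regression tools are compared can be computed directly from genotype cluster assignments: each cluster of size k is assumed to contain one reactivation case and k-1 recently transmitted cases. A minimal sketch (the cluster labels are hypothetical):

```python
from collections import Counter

def n_minus_one(cluster_ids):
    """'n-1' estimate of the recent transmission proportion:
    each cluster of size k contributes k-1 recently transmitted cases."""
    sizes = Counter(cluster_ids).values()
    n = sum(sizes)
    recent = sum(k - 1 for k in sizes)
    return recent / n

# 8 cases: one cluster of 3, one cluster of 2, three unique strains
print(n_minus_one(["A", "A", "A", "B", "B", "C", "D", "E"]))  # → 0.375
```

As the article shows, this estimator is biased when sampling coverage or study duration is limited, because unsampled cluster members make clusters look smaller than they are; that bias is what the regression tools correct.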

  10. A Novel Tool Improves Existing Estimates of Recent Tuberculosis Transmission in Settings of Sparse Data Collection

    PubMed Central

    Kasaie, Parastu; Mathema, Barun; Kelton, W. David; Azman, Andrew S.; Pennington, Jeff; Dowdy, David W.

    2015-01-01

    In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission (“recent transmission proportion”), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional ‘n-1’ approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the ‘n-1’ technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the ‘n-1’ model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models’ performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data. PMID:26679499

  11. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However in shales, substantial hydrogen content is associated with solid and fluid signals and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in material, medical and food sciences.
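    A minimal sketch of a simultaneous Gaussian-exponential fit, assuming a single exponential component (mobile, fluid-like hydrogen) and a single Gaussian component (rigid, solid-like hydrogen). The authors' SGE inversion recovers full relaxation-time distributions; the component count, amplitudes, and time constants here are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def sge_model(t, a_exp, t2_exp, a_gauss, t2_gauss):
    # Exponential decay (fluid signal) plus Gaussian decay (solid signal)
    return a_exp * np.exp(-t / t2_exp) + a_gauss * np.exp(-(t / t2_gauss) ** 2)

# Synthetic noiseless relaxation data (all values illustrative)
t = np.linspace(0.01, 10, 500)
true_params = (1.0, 2.0, 0.5, 0.3)
signal = sge_model(t, *true_params)

# Fit both decay shapes simultaneously rather than forcing an
# exponential-only model onto data that contains a Gaussian component
popt, _ = curve_fit(sge_model, t, signal, p0=(0.8, 1.5, 0.4, 0.5))
```

    Fitting an exponential-only model to such data is what produces the signal overcall at short T2 times that the SGE inversion mitigates.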

  12. A Novel Hybrid Dimension Reduction Technique for Undersized High Dimensional Gene Expression Data Sets Using Information Complexity Criterion for Cancer Classification

    PubMed Central

    Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan

    2015-01-01

    Gene expression data typically are large, complex, and highly noisy. Their dimension is high with several thousand genes (i.e., features) but with only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings in the case of data sets involving undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a new and novel approach using maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further introduce and develop celebrated Akaike's information criterion (AIC), consistent Akaike's information criterion (CAIC), and the information theoretic measure of complexity (ICOMP) criterion of Bozdogan. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate to perform the PPCA to reduce the dimension and to carry out supervised classification of cancer groups in high dimensions. PMID:25838836
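    The singularity problem described above, and the effect of a smoothed covariance estimator, can be illustrated with a rough sketch. Ledoit-Wolf shrinkage stands in here for the paper's maximum-entropy hybrid estimators, which are not reproducible from the abstract; the sample sizes are arbitrary.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
n, d = 20, 100                        # undersized: far fewer samples than features
X = rng.normal(size=(n, d))

S = np.cov(X, rowvar=False)           # sample covariance: d x d but rank <= n - 1
lw = LedoitWolf().fit(X).covariance_  # shrinkage ("smoothed") estimator

rank_s = np.linalg.matrix_rank(S)     # degenerate, singular
rank_lw = np.linalg.matrix_rank(lw)   # full rank, invertible
```

    A full-rank, well-conditioned covariance estimate is what makes the probabilistic PCA likelihood (and hence AIC/CAIC/ICOMP model selection) computable in the undersized-sample regime.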

  13. First finding of impact melt in the IIE Netschaëvo meteorite

    NASA Astrophysics Data System (ADS)

    Roosbroek, N.; Pittarello, L.; Greshake, A.; Debaille, V.; Claeys, P.

    2016-02-01

    About half of the IIE nonmagmatic iron meteorites contain silicate inclusions with a primitive to differentiated nature. The presence of preserved chondrules has been reported for two IIE meteorites so far, Netschaëvo and Mont Dieu, which represent the most primitive silicate material within this group. In this study, silicate inclusions from two samples of Netschaëvo were examined. Both silicate inclusions are characterized by a porphyritic texture dominated by clusters of coarse-grained olivine and pyroxene, set in a fine-grained groundmass that consists of new crystals of olivine and a glassy appearing matrix. This texture does not correspond to the description of the previously examined pieces of Netschaëvo, which consist of primitive chondrule-bearing angular clasts. Detailed petrographic observations and geochemical analyses suggest that the investigated samples of Netschaëvo consist of quenched impact melt. This implies that Netschaëvo is a breccia containing metamorphosed and impact-melt rock (IMR) clasts and that collisions played a major role in the formation of the IIE group.

  14. Correlation consistent basis sets for the atoms In–Xe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahler, Andrew; Wilson, Angela K., E-mail: akwilson@unt.edu

    In this work, the correlation consistent family of Gaussian basis sets has been expanded to include all-electron basis sets for In–Xe. The methodology for developing these basis sets is described, and several examples of the performance and utility of the new sets have been provided. Dissociation energies and bond lengths for both homonuclear and heteronuclear diatomics demonstrate the systematic convergence behavior with respect to increasing basis set quality expected by the family of correlation consistent basis sets in describing molecular properties. Comparison with recently developed correlation consistent sets designed for use with the Douglas-Kroll Hamiltonian is provided.

  15. Visitor employed photography: its potential and use in evaluating visitors' perceptions of resource impacts in trail and park settings

    Treesearch

    Catherine E. Dorwart; Roger L. Moore; Yu-Fai Leung

    2007-01-01

    The purpose of this study was to examine visitors' perceptions and to determine how their perceptions affected overall recreation experiences along a 2.9-mile segment of the Appalachian Trail in the Great Smoky Mountains National Park. A purposive sample of 28 visitors was selected for this study. The study consisted of three parts, including a trail impact...

  16. [Generalization of money-handling through training in equivalence relationships].

    PubMed

    Vives-Montero, Carmen; Valero-Aguayo, Luis; Ascanio, Lourdes

    2011-02-01

    This research used a matching-to-sample procedure and an equivalence learning paradigm with language and verbal tasks. The study applied equivalence relationships to money, using several kinds of euro coins. The sample consisted of 16 children aged 5 years (8 in the experimental group and 8 in the control group). The prerequisite behaviors, identification of coins and practical use of different euro coins, were assessed in the pre and post phases for both groups. The children in the experimental group performed an equivalence task using the matching-to-sample procedure, which consisted of a sample stimulus and four comparison stimuli, using a series of euro coins of equivalent value in each set. The children in the control group did not undergo this training. The results showed large variability in the children's performance on the equivalence tests. The experimental group showed the largest, statistically significant pre-post changes, as well as greater generalization in the identification of money and in the use of euro coins than the control group. The implications for educational training, and the characteristics of the procedure used here for coin equivalence, are discussed.

  17. The Social Interaction Anxiety Scale (SIAS) and the Social Phobia Scale (SPS): a comparison of two short-form versions.

    PubMed

    Fergus, Thomas A; Valentiner, David P; Kim, Hyun-Soo; McGrath, Patrick B

    2014-12-01

    The widespread use of Mattick and Clarke's (1998) Social Interaction Anxiety Scale (SIAS) and Social Phobia Scale (SPS) led 2 independent groups of researchers to develop short forms of these measures (Fergus, Valentiner, McGrath, Gier-Lonsway, & Kim, 2012; Peters, Sunderland, Andrews, Rapee, & Mattick, 2012). This 3-part study examined the psychometric properties of Fergus et al.'s and Peters et al.'s short forms of the SIAS and SPS using an American nonclinical adolescent sample in Study 1 (N = 98), American patient sample with an anxiety disorder in Study 2 (N = 117), and both a South Korean college student sample (N = 341) and an American college student sample (N = 550) in Study 3. Scores on both sets of short forms evidenced adequate internal consistency, interitem correlations, and measurement invariance. Scores on Fergus et al.'s short forms, particularly their SIAS short form, tended to capture more unique variance in scores of criterion measures than did scores on Peters et al.'s short forms. Implications for the use of these 2 sets of short forms are discussed. (c) 2014 APA, all rights reserved.

  18. Context effects in a temporal discrimination task: further tests of the Scalar Expectancy Theory and Learning-to-Time models.

    PubMed

    Arantes, Joana; Machado, Armando

    2008-07-01

    Pigeons were trained on two temporal bisection tasks, which alternated every two sessions. In the first task, they learned to choose a red key after a 1-s signal and a green key after a 4-s signal; in the second task, they learned to choose a blue key after a 4-s signal and a yellow key after a 16-s signal. Then the pigeons were exposed to a series of test trials in order to contrast two timing models, Learning-to-Time (LeT) and Scalar Expectancy Theory (SET). The models made substantially different predictions particularly for the test trials in which the sample duration ranged from 1 s to 16 s and the choice keys were Green and Blue, the keys associated with the same 4-s samples: LeT predicted that preference for Green should increase with sample duration, a context effect, but SET predicted that preference for Green should not vary with sample duration. The results were consistent with LeT. The present study adds to the literature the finding that the context effect occurs even when the two basic discriminations are never combined in the same session.

  19. The Work Role Functioning Questionnaire v2.0 Showed Consistent Factor Structure Across Six Working Samples.

    PubMed

    Abma, Femke I; Bültmann, Ute; Amick III, Benjamin C; Arends, Iris; Dorland, Heleen F; Flach, Peter A; van der Klink, Jac J L; van de Ven, Hardy A; Bjørner, Jakob Bue

    2017-09-09

    Objective The Work Role Functioning Questionnaire v2.0 (WRFQ) is an outcome measure linking a person's health to the ability to meet work demands in the twenty-first century. We aimed to examine the construct validity of the WRFQ in a heterogeneous set of working samples in the Netherlands with mixed clinical conditions and job types to evaluate the comparability of the scale structure. Methods Confirmatory factor and multi-group analyses were conducted in six cross-sectional working samples (total N = 2433) to evaluate and compare a five-factor model structure of the WRFQ (work scheduling demands, output demands, physical demands, mental and social demands, and flexibility demands). Model fit indices were calculated based on RMSEA ≤ 0.08 and CFI ≥ 0.95. After fitting the five-factor model, the multidimensional structure of the instrument was evaluated across samples using a second order factor model. Results The factor structure was robust across samples and a multi-group model had adequate fit (RMSEA = 0.063, CFI = 0.972). In sample-specific analyses, minor modifications were necessary in three samples (final RMSEA 0.055-0.080, final CFI between 0.955 and 0.989). Applying the previous first order specifications, a second order factor model had adequate fit in all samples. Conclusion A five-factor model of the WRFQ showed consistent structural validity across samples. A second order factor model showed adequate fit, but the second order factor loadings varied across samples. Therefore, subscale scores are recommended for comparisons across different clinical and working samples.
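    The fit indices quoted in this record follow standard formulas based on the model and baseline chi-square statistics; the abstract reports only the resulting values, so the sketch below simply shows how such values are conventionally computed.

```python
import math

def rmsea(chi2, df, n):
    # Root mean square error of approximation from the model chi-square,
    # degrees of freedom, and sample size
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    # Comparative fit index relative to the baseline (independence) model
    num = max(chi2 - df, 0)
    den = max(chi2_base - df_base, chi2 - df, 0)
    return 1 - num / den

# Illustrative statistics only (not the study's actual chi-square values)
fit_rmsea = rmsea(100, 50, 500)
fit_cfi = cfi(100, 50, 1000, 60)
```

    Values of RMSEA at or below 0.08 and CFI at or above 0.95, the cutoffs used in this study, are conventional thresholds for adequate fit.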

  20. Irisin and exercise training in humans - results from a randomized controlled training trial.

    PubMed

    Hecksteden, Anne; Wegmann, Melissa; Steffen, Anke; Kraushaar, Jochen; Morsch, Arne; Ruppenthal, Sandra; Kaestner, Lars; Meyer, Tim

    2013-11-05

    The recent discovery of a new myokine (irisin) potentially involved in health-related training effects has gained great attention, but evidence for a training-induced increase in irisin remains preliminary. Therefore, the present study aimed to determine whether irisin concentration is increased after regular exercise training in humans. In a randomized controlled design, two guideline-conforming training interventions were studied. Inclusion criteria were age 30 to 60 years, <1 hour/week regular activity, non-smoker, and absence of major diseases. 102 participants could be included in the analysis. Subjects in the training groups exercised 3 times per week for 26 weeks. The minimum compliance was defined at 70%. Aerobic endurance training (AET) consisted of 45 minutes of walking/running at 60% heart rate reserve. Strength endurance training (SET) consisted of 8 machine-based exercises (2 sets of 15 repetitions with 100% of the 20 repetition maximum). Serum irisin concentrations in frozen serum samples were determined in a single blinded measurement immediately after the end of the training study. Physical performance provided positive control for the overall efficacy of training. Differences between groups were tested for significance using analysis of variance. For post hoc comparisons with the control group, Dunnett's test was used. Maximum performance increased significantly in the training groups compared with controls (controls: +0.0 ± 0.7 km/h; AET: +1.1 ± 0.6 km/h, P < 0.01; SET: +0.5 ± 0.7 km/h, P = 0.01). Changes in irisin did not differ between groups (controls: 101 ± 81 ng/ml; AET: 44 ± 93 ng/ml; SET: 60 ± 92 ng/ml; in both cases: P = 0.99 (one-tailed testing), 1-β error probability = 0.7). The general upward trend was mainly accounted for by a negative association of irisin concentration with the storage duration of frozen serum samples (P < 0.01, β = -0.33). After arithmetically eliminating this confounder, the differences between groups remained non-significant. A training-induced increase in circulating irisin could not be confirmed, calling into question its proposed involvement in health-related training effects. Because frozen samples are prone to irisin degradation over time, positive results from uncontrolled trials might exclusively reflect the longer storage of samples from initial tests.
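    The confounder adjustment described here ("arithmetically eliminating" the storage-duration effect) can be sketched as ordinary residualization: regress the outcome on the confounder and keep the residuals. The slope, sample values, and variable names below are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
storage_days = rng.uniform(0, 400, 102)        # storage duration per sample
true_change = rng.normal(0, 50, 102)           # hypothetical irisin change
measured = true_change - 0.4 * storage_days    # degradation with storage (assumed slope)

# Fit a line to the confounder and subtract it: the residuals are the
# storage-adjusted irisin changes
slope, intercept = np.polyfit(storage_days, measured, 1)
adjusted = measured - (slope * storage_days + intercept)
```

    After this step the adjusted values are, by construction, uncorrelated with storage duration, so any remaining group differences cannot be driven by that confounder.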

  1. Low-power, low-cost urinalysis system with integrated dipstick evaluation and microscopic analysis.

    PubMed

    Smith, Gennifer T; Li, Linkai; Zhu, Yue; Bowden, Audrey K

    2018-06-21

    We introduce a coupled dipstick and microscopy device for analyzing urine samples. The device is capable of accurately assessing urine dipstick results while simultaneously imaging the microscopic contents within the sample. We introduce a long working distance, cellphone-based microscope in combination with an oblique illumination scheme to accurately visualize and quantify particles within the urine sample. To facilitate accurate quantification, we couple the imaging set-up with a power-free filtration system. The proposed device is reusable, low-cost, and requires very little power. We show that results obtained with the proposed device and custom-built app are consistent with those obtained with the standard clinical protocol, suggesting the potential clinical utility of the device.

  2. Understanding proenvironmental intentions and behaviors: The importance of considering both the behavior setting and the type of behavior.

    PubMed

    Maki, Alexander; Rothman, Alexander J

    2017-01-01

    To better understand the consistency of people's proenvironmental intentions and behaviors, we set out to examine two sets of research questions. First, do people perform (1) different types of proenvironmental behaviors consistently, and (2) the same proenvironmental behavior consistently across settings? Second, are there consistent predictors of proenvironmental behavioral intentions across behavior and setting type? Participants reported four recycling and conservation behaviors across three settings, revealing significant variability in rates of behaviors across settings. Prior behavior, attitudes toward the behavior, and importance of the behavior consistently predicted proenvironmental intentions. However, perceived behavioral control tended to predict intentions to perform proenvironmental behavior outside the home. Future research aimed at understanding and influencing different proenvironmental behaviors should carefully consider how settings affect intentions and behavior.

  3. Identifying Early Childhood Personality Dimensions Using the California Child Q-Set and Prospective Associations With Behavioral and Psychosocial Development.

    PubMed

    Wilson, Sylia; Schalet, Benjamin D; Hicks, Brian M; Zucker, Robert A

    2013-08-01

    The present study used an empirical, "bottom-up" approach to delineate the structure of the California Child Q-Set (CCQ), a comprehensive set of personality descriptors, in a sample of 373 preschool-aged children. This approach yielded two broad trait dimensions, Adaptive Socialization (emotional stability, compliance, intelligence) and Anxious Inhibition (emotional/behavioral introversion). Results demonstrate the value of using empirical derivation to investigate the structure of personality in young children, speak to the importance of early-evident personality traits for adaptive development, and are consistent with a growing body of evidence indicating that personality structure in young children is similar, but not identical to, that in adults, suggesting a model of broad personality dimensions in childhood that evolve into narrower traits in adulthood.

  4. Psychometric properties of the Depression Anxiety and Stress Scale-21 in older primary care patients.

    PubMed

    Gloster, Andrew T; Rhoades, Howard M; Novy, Diane; Klotsche, Jens; Senior, Ashley; Kunik, Mark; Wilson, Nancy; Stanley, Melinda A

    2008-10-01

    The Depression Anxiety Stress Scale (DASS) was designed to efficiently measure the core symptoms of anxiety and depression and has demonstrated positive psychometric properties in adult samples of anxiety and depression patients and student samples. Despite these findings, the psychometric properties of the DASS remain untested in older adults, for whom the identification of efficient measures of these constructs is especially important. To determine the psychometric properties of the DASS 21-item version in older adults, we analyzed data from 222 medical patients seeking treatment to manage worry. Consistent with younger samples, a three-factor structure best fit the data. Results also indicated good internal consistency, excellent convergent validity, and good discriminative validity, especially for the Depression scale. Receiver operating curve analyses indicated that the DASS-21 predicted the diagnostic presence of generalized anxiety disorder and depression as well as other commonly used measures. These data suggest that the DASS may be used with older adults in lieu of multiple scales designed to measure similar constructs, thereby reducing participant burden and facilitating assessment in settings with limited assessment resources.

  5. Hospital-based emergency nursing in rural settings.

    PubMed

    Brown, Jennifer F

    2008-01-01

    In 2006, the Institute of Medicine (IOM) released a series of reports that highlighted the urgent need for improvements in the nation's emergency health services. This news has provided new energy to a growing body of research about the development and implementation of best practices in emergency care. Despite evidence of geographical disparities in health services, relatively little attention has been focused on rural emergency services to identify environmental differences. The purpose of this chapter is to summarize the contributions of nursing research to the rural emergency services literature. The research resembles a so-called shotgun effect as the exploratory and interventional studies cover a wide range of topics without consistency or justification. Emergency nursing research has been conducted primarily in urban settings, with small samples and insufficient methodological rigor. This chapter will discuss the limitations of the research and set forth an agenda of critical topics that need to be explored related to emergency nursing in rural settings.

  6. The SPAI-18, a brief version of the social phobia and anxiety inventory: reliability and validity in clinically referred and non-referred samples.

    PubMed

    de Vente, Wieke; Majdandžić, Mirjana; Voncken, Marisol J; Beidel, Deborah C; Bögels, Susan M

    2014-03-01

    We developed a new version of the Social Phobia and Anxiety Inventory (SPAI) in order to have a brief instrument for measuring social anxiety and social anxiety disorder (SAD) with a strong conceptual foundation. In the construction phase, a set of items representing 5 core aspects of social anxiety was selected by a panel of social anxiety experts. The selected item pool was validated using factor analysis, reliability analysis, and diagnostic analysis in a sample of healthy participants (N = 188) and a sample of clinically referred participants diagnosed with SAD (N = 98). This procedure resulted in an abbreviated version of the Social Phobia Subscale of the SPAI consisting of 18 items (i.e. the SPAI-18), which correlated strongly with the Social Phobia Subscale of the original SPAI (both groups r = .98). Internal consistency and diagnostic characteristics using a clinical cut-off score > 48 were good to excellent (Cronbach's alpha healthy group = .93; patient group = .91; sensitivity: .94; specificity: .88). The SPAI-18 was further validated in a community sample of parents-to-be without SAD (N = 237) and with SAD (N = 65). Internal consistency was again excellent (both groups Cronbach's alpha = .93) and a screening cut-off of > 36 proved to result in good sensitivity and specificity. The SPAI-18 also correlated strongly with other social anxiety instruments, supporting convergent validity. In sum, the SPAI-18 is a psychometrically sound instrument with good screening capacity for social anxiety disorder in clinical as well as community samples. Copyright © 2013 Elsevier Ltd. All rights reserved.
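    The internal consistency and screening statistics reported for the SPAI-18 follow standard definitions; a sketch assuming raw item-level responses (the study's data are of course not available from the abstract, and the arrays below are illustrative).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def screen(total_scores, has_disorder, cutoff):
    """Sensitivity/specificity of classifying scores > cutoff as positive."""
    scores = np.asarray(total_scores)
    truth = np.asarray(has_disorder, dtype=bool)
    pred = scores > cutoff
    sensitivity = (pred & truth).sum() / truth.sum()
    specificity = (~pred & ~truth).sum() / (~truth).sum()
    return sensitivity, specificity
```

    Usage: `screen(scores, diagnoses, 48)` would evaluate the clinical cut-off reported above, and lowering the cutoff (here to 36 for screening) trades specificity for sensitivity.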

  7. Asthma and mental health among youth in high-risk service settings.

    PubMed

    Goodwin, Renee D; Hottinger, Kate; Pena, Lillian; Chacko, Anil; Feldman, Jonathan; Wamboldt, Marianne Z; Hoven, Christina

    2014-08-01

    To investigate the prevalence of asthma and mental health problems among representative samples of youth in high-risk service settings and the community, and to examine the relationship between asthma and mental health in these groups. Data were drawn from the Alternative Service Use Patterns of Youth with Serious Emotional Disturbance Study (SED) (n = 1181), a combined representative, cross-sectional sample of youth in various clinical settings and the community. Multiple logistic regression analyses were used to examine the association between asthma and mental disorders. Demographic characteristics were investigated as potential confounders. Asthma was present among 15.2% of youth in service settings and 18.8% of youth in the community. The prevalence of mental disorders was extremely high among youth with and without asthma in all service settings, and asthma was associated with increased prevalence of mental disorders among youth in the community, but not among youth in service settings. The relationship between asthma and internalizing disorders among youth in the community does not appear entirely attributable to confounding by demographics. Findings are consistent with and extend previous data by showing that both asthma and mental disorders are disproportionately common among youth in high-risk service settings. Almost half of youth with asthma in service settings meet diagnostic criteria for a mental disorder. Clinicians and policy makers who are responsible for the health care of youth in these high-risk groups should be aware that asthma is common, and that internalizing disorders are especially common among those with asthma.

  8. Demonstrating the Efficacy of the FoneAstra Pasteurization Monitor for Human Milk Pasteurization in Resource-Limited Settings

    PubMed Central

    Coutsoudis, Anna; Israel-Ballard, Kiersten; Chaudhri, Rohit; Perin, Noah; Mlisana, Koleka

    2015-01-01

    Human milk provides crucial nutrition and immunologic protection for infants. When a mother's own milk is unavailable, donated human milk, pasteurized to destroy bacteria and viruses, is a lifesaving replacement. Flash-heat pasteurization is a simple, low-cost, and commonly used method to make milk safe, but currently there is no system to monitor milk temperature, which challenges quality control. FoneAstra, a smartphone-based mobile pasteurization monitor, removes this barrier by guiding users through pasteurization and documenting consistent and safe practice. This study evaluated FoneAstra's efficacy as a quality control system, particularly in resource-limited settings, by comparing bacterial growth in donor milk flash-heated with and without the device at a neonatal intensive care unit in Durban, South Africa. Materials and Methods: For 100 samples of donor milk, one aliquot each of prepasteurized milk, milk flash-heated without FoneAstra, and milk pasteurized with FoneAstra was cultured on routine agar for bacterial growth. Isolated bacteria were identified and enumerated. Results: In total, 300 samples (three from each donor sample) were analyzed. Bacterial growth was found in 86 of the 100 samples before any pasteurization and one of the 100 postpasteurized samples without FoneAstra. None of the samples pasteurized using FoneAstra showed bacterial growth. Conclusions: Both pasteurization methods were safe and effective. FoneAstra, however, provides the additional benefits of user-guided temperature monitoring and data tracking. By improving quality assurance and standardizing the pasteurization process, FoneAstra can support wide-scale implementation of human milk banks in resource-limited settings, increasing access and saving lives. PMID:25668396

  9. A molecular identification system for grasses: a novel technology for forensic botany.

    PubMed

    Ward, J; Peakall, R; Gilmore, S R; Robertson, J

    2005-09-10

    Our present inability to rapidly, accurately and cost-effectively identify trace botanical evidence remains the major impediment to the routine application of forensic botany. Grasses are amongst the most likely plant species encountered as forensic trace evidence and have the potential to provide links between crime scenes and individuals or other vital crime scene information. We are designing a molecular DNA-based identification system for grasses consisting of several PCR assays that, like a traditional morphological taxonomic key, provide criteria that progressively identify an unknown grass sample to a given taxonomic rank. In a prior study of DNA sequences across 20 phylogenetically representative grass species, we identified a series of potentially informative indels in the grass mitochondrial genome. In this study we designed and tested five PCR assays spanning these indels and assessed the feasibility of these assays to aid identification of unknown grass samples. We confirmed that for our control set of 20 samples, on which the design of the PCR assays was based, the five primer combinations produced the expected results. Using these PCR assays in a 'blind test', we were able to identify 25 unknown grass samples with some restrictions. Species belonging to genera represented in our control set were all correctly identified to genus with one exception. Similarly, genera belonging to tribes in the control set were correctly identified to the tribal level. Finally, for those samples for which neither tribe- nor genus-specific PCR assays were designed, we could confidently exclude these samples from belonging to certain tribes and genera. The results confirmed the utility of the PCR assays and the feasibility of developing a robust, usable, full-scale grass identification system for forensic purposes.
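    A key of PCR assays that, "like a traditional morphological taxonomic key," progressively narrows the taxonomic rank can be sketched as a decision tree over assay outcomes. The assay names and taxa below are hypothetical placeholders, not the paper's actual assays or results.

```python
# Hypothetical decision key: each node tests one PCR assay
# (indel present = True / absent = False); leaves are identifications
KEY = {
    "assay_1": {
        True: {"assay_2": {True: "Tribe Poeae", False: "Tribe Andropogoneae"}},
        False: "rank undetermined (excluded from keyed tribes)",
    }
}

def identify(node, assay_results):
    """Walk the key using a dict of assay outcomes until a leaf is reached."""
    if isinstance(node, str):
        return node
    (assay, branches), = node.items()
    return identify(branches[assay_results[assay]], assay_results)
```

    As in the study, a sample that fails the keyed assays is not misassigned; it is confidently excluded from the tribes and genera the key covers.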

  10. A Protein Standard That Emulates Homology for the Characterization of Protein Inference Algorithms.

    PubMed

    The, Matthew; Edfors, Fredrik; Perez-Riverol, Yasset; Payne, Samuel H; Hoopmann, Michael R; Palmblad, Magnus; Forsström, Björn; Käll, Lukas

    2018-05-04

    A natural way to benchmark the performance of an analytical experimental setup is to use samples of known composition and see to what degree one can correctly infer the content of such a sample from the data. For shotgun proteomics, one of the inherent problems of interpreting data is that the measured analytes are peptides and not the actual proteins themselves. As some proteins share proteolytic peptides, there might be more than one possible causative set of proteins resulting in a given set of peptides and there is a need for mechanisms that infer proteins from lists of detected peptides. A weakness of commercially available samples of known content is that they consist of proteins that are deliberately selected for producing tryptic peptides that are unique to a single protein. Unfortunately, such samples do not expose any complications in protein inference. Hence, for a realistic benchmark of protein inference procedures, there is a need for samples of known content where the present proteins share peptides with known absent proteins. Here, we present such a standard, that is based on E. coli expressed human protein fragments. To illustrate the application of this standard, we benchmark a set of different protein inference procedures on the data. We observe that inference procedures excluding shared peptides provide more accurate estimates of errors compared to methods that include information from shared peptides, while still giving a reasonable performance in terms of the number of identified proteins. We also demonstrate that using a sample of known protein content without proteins with shared tryptic peptides can give a false sense of accuracy for many protein inference methods.
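    The class of inference procedures that "exclude shared peptides" can be sketched as follows: only peptides mapping to a single protein contribute evidence, so a protein shadowed entirely by shared peptides is never inferred. This is a generic illustration of the principle, not any specific benchmarked algorithm.

```python
def infer_proteins(peptide_to_proteins):
    """Infer proteins supported by at least one unique peptide.

    peptide_to_proteins maps each detected peptide to the list of
    proteins it could have come from; shared peptides are ignored.
    """
    inferred = set()
    for peptide, proteins in peptide_to_proteins.items():
        if len(proteins) == 1:
            inferred.add(proteins[0])
    return inferred

# Peptide p2 is shared between A and B and therefore contributes nothing;
# A and B are still inferred via their unique peptides p1 and p3
result = infer_proteins({"p1": ["A"], "p2": ["A", "B"], "p3": ["B"]})
```

    On a standard whose proteins all yield unique tryptic peptides, this approach looks trivially accurate, which is exactly the false sense of accuracy the homology-emulating standard is designed to expose.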

  11. Marital assortment for genetic similarity.

    PubMed

    Eckman, Ronael E; Williams, Robert; Nagoshi, Craig

    2002-10-01

The present study involved analyses of a Caucasian American sample (n=949) and a Japanese American sample (n=400) for factors supporting Genetic Similarity Theory (GST). The analyses found no evidence of genetic similarity between spouses in either sample for the blood group analyses of nine loci. All results indicated random mating for blood group genes. For a set of seventeen individual-differences variables, the results did not provide consistent support for the prediction that spousal similarity increases with the degree of genetic determination of a trait; only the Caucasian sample yielded significant correlations for this analysis. A third analysis, examining the correlation between spousal genetic similarity and spousal similarity on observable traits, was not performed because spousal genetic similarity was not observed in either sample. The overall implication of the study is that GST is not supported as an explanation for spousal similarity in humans.

  12. Gravel Transport Measured With Bedload Traps in Mountain Streams: Field Data Sets to be Published

    NASA Astrophysics Data System (ADS)

    Bunte, K.; Swingle, K. W.; Abt, S. R.; Ettema, R.; Cenderelli, D. A.

    2017-12-01

    Direct, accurate measurements of coarse bedload transport exist for only a few streams worldwide, because the task is laborious and requires a suitable device. However, sets of accurate field data would be useful for reference with unsampled sites and as a basis for model developments. The authors have carefully measured gravel transport and are compiling their data sets for publication. To ensure accurate measurements of gravel bedload in wadeable flow, the designed instrument consisted of an unflared aluminum frame (0.3 x 0.2 m) large enough for entry of cobbles. The attached 1 m or longer net with a 4 mm mesh held large bedload volumes. The frame was strapped onto a ground plate anchored onto the channel bed. This setup avoided involuntary sampler particle pick-up and enabled long sampling times, integrating over fluctuating transport. Beveled plates and frames facilitated easy particle entry. Accelerating flow over smooth plates compensated for deceleration within the net. Spacing multiple frames by 1 m enabled sampling much of the stream width. Long deployment, and storage of sampled bedload away from the frame's entrance, were attributes of traps rather than samplers; hence the name "bedload traps". The authors measured gravel transport with 4-6 bedload traps per cross-section at 10 mountain streams in CO, WY, and OR, accumulating 14 data sets (>1,350 samples). In 10 data sets, measurements covered much of the snowmelt high-flow season yielding 50-200 samples. Measurement time was typically 1 hour but ranged from 3 minutes to 3 hours, depending on transport intensity. Measuring back-to-back provided 6 to 10 samples over a 6 to 10-hour field day. Bedload transport was also measured with a 3-inch Helley-Smith sampler. The data set provides fractional (0.5 phi) transport rates in terms of particle mass and number for each bedload trap in the cross-section, the largest particle size, as well as total cross-sectional gravel transport rates. 
Ancillary field data include stage, discharge, long-term flow records if available, surface and subsurface sediment sizes, as well as longitudinal and cross-sectional site surveys. Besides transport relations, incipient motion conditions, hysteresis, and lateral variation, the data provide a reliable modeling basis to test insights and hypotheses regarding bedload transport.

  13. Silicide phases formation in Co/c-Si and Co/a-Si systems during thermal annealing

    NASA Astrophysics Data System (ADS)

    Novaković, M.; Popović, M.; Zhang, K.; Lieb, K. P.; Bibić, N.

    2014-03-01

The effect of the interface in cobalt-silicon bilayers on silicide phase formation and microstructure has been investigated. Thin cobalt films were deposited by electron beam evaporation to a thickness of 50 nm on crystalline silicon (c-Si) or silicon with a pre-amorphized surface (a-Si). After deposition, one set of samples was annealed for 2 h at 200, 300, 400, 500, 600 and 700 °C. Another set of samples was irradiated with 400 keV Xe+ ions and then annealed at the same temperatures. Phase transitions were investigated with Rutherford backscattering spectroscopy, X-ray diffraction and cross-sectional transmission electron microscopy. No silicide formation was observed up to 400 °C for either non-irradiated or ion-irradiated samples. With increasing annealing temperature, the non-irradiated and irradiated Co/c-Si samples showed a similar behaviour: at 500 °C, CoSi appeared as the dominant silicide, followed by the formation of CoSi2 at 600 and 700 °C. In the case of non-irradiated Co/a-Si samples, no silicide formation occurred up to 700 °C, while irradiated samples with a pre-amorphized substrate (Co/a-Si) showed a phase sequence similar to that in the Co/c-Si system. The observed phase transitions are found to be consistent with predictions of the effective heat of formation model.

  14. Ensemble representations: effects of set size and item heterogeneity on average size perception.

    PubMed

    Marchant, Alexander P; Simons, Daniel J; de Fockert, Jan W

    2013-02-01

    Observers can accurately perceive and evaluate the statistical properties of a set of objects, forming what is now known as an ensemble representation. The accuracy and speed with which people can judge the mean size of a set of objects have led to the proposal that ensemble representations of average size can be computed in parallel when attention is distributed across the display. Consistent with this idea, judgments of mean size show little or no decrement in accuracy when the number of objects in the set increases. However, the lack of a set size effect might result from the regularity of the item sizes used in previous studies. Here, we replicate these previous findings, but show that judgments of mean set size become less accurate when set size increases and the heterogeneity of the item sizes increases. This pattern can be explained by assuming that average size judgments are computed using a limited capacity sampling strategy, and it does not necessitate an ensemble representation computed in parallel across all items in a display. Copyright © 2012 Elsevier B.V. All rights reserved.
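The limited-capacity sampling account can be sketched in a few lines: estimate the display mean from a small random subsample, and observe that for a fixed subsample size the estimation error grows with item heterogeneity. The subsample size k = 2 and the item sizes below are illustrative assumptions, not values fitted to the study's data:

```python
import random
import statistics

def mean_size_error(item_sizes, k=2, trials=10000, seed=0):
    """Mean absolute error of estimating the display mean from k sampled items."""
    rng = random.Random(seed)
    true_mean = statistics.mean(item_sizes)
    errors = [abs(statistics.mean(rng.sample(item_sizes, k)) - true_mean)
              for _ in range(trials)]
    return statistics.mean(errors)

homogeneous = [10, 10, 11, 11, 10, 11, 10, 11]   # similar item sizes
heterogeneous = [2, 18, 5, 15, 3, 17, 6, 18]     # varied sizes, similar mean
assert mean_size_error(homogeneous) < mean_size_error(heterogeneous)
```

With regular item sizes, even a tiny subsample tracks the true mean closely, which is consistent with the lack of a set size effect in earlier studies.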

  15. Evaluating common de-identification heuristics for personal health information.

    PubMed

    El Emam, Khaled; Jabbouri, Sam; Sams, Scott; Drouet, Youenn; Power, Michael

    2006-11-21

    With the growing adoption of electronic medical records, there are increasing demands for the use of this electronic clinical data in observational research. A frequent ethics board requirement for such secondary use of personal health information in observational research is that the data be de-identified. De-identification heuristics are provided in the Health Insurance Portability and Accountability Act Privacy Rule, funding agency and professional association privacy guidelines, and common practice. The aim of the study was to evaluate whether the re-identification risks due to record linkage are sufficiently low when following common de-identification heuristics and whether the risk is stable across sample sizes and data sets. Two methods were followed to construct identification data sets. Re-identification attacks were simulated on these. For each data set we varied the sample size down to 30 individuals, and for each sample size evaluated the risk of re-identification for all combinations of quasi-identifiers. The combinations of quasi-identifiers that were low risk more than 50% of the time were considered stable. The identification data sets we were able to construct were the list of all physicians and the list of all lawyers registered in Ontario, using 1% sampling fractions. The quasi-identifiers of region, gender, and year of birth were found to be low risk more than 50% of the time across both data sets. The combination of gender and region was also found to be low risk more than 50% of the time. We were not able to create an identification data set for the whole population. Existing Canadian federal and provincial privacy laws help explain why it is difficult to create an identification data set for the whole population. That such examples of high re-identification risk exist for mainstream professions makes a strong case for not disclosing the high-risk variables and their combinations identified here. 
For professional subpopulations with published membership lists, many variables often needed by researchers would have to be excluded or generalized to ensure consistently low re-identification risk. Data custodians and researchers need to consider other statistical disclosure techniques for protecting privacy.
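The core risk computation in such simulations can be sketched as an equivalence-class count over quasi-identifiers: a record is high-risk when few individuals in the identification database share its quasi-identifier values. The field names and the k < 5 threshold below are illustrative assumptions, not the study's exact criteria:

```python
from collections import Counter

def high_risk_fraction(records, quasi_identifiers, k=5):
    """Fraction of records whose quasi-identifier combination is shared by < k people."""
    classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    risky = sum(n for n in classes.values() if n < k)
    return risky / len(records)

# 40 people share one combination (low risk); 3 share another (high risk).
population = (
    [{"region": "East", "gender": "F", "birth_year": 1970}] * 40
    + [{"region": "West", "gender": "M", "birth_year": 1958}] * 3
)
print(high_risk_fraction(population, ["region", "gender", "birth_year"]))
```

Dropping or generalizing a quasi-identifier merges equivalence classes, which is why coarser combinations such as region plus gender were more often low risk.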

  16. Evaluating Common De-Identification Heuristics for Personal Health Information

    PubMed Central

El Emam, Khaled; Jabbouri, Sam; Sams, Scott; Drouet, Youenn; Power, Michael

    2006-01-01

    Background With the growing adoption of electronic medical records, there are increasing demands for the use of this electronic clinical data in observational research. A frequent ethics board requirement for such secondary use of personal health information in observational research is that the data be de-identified. De-identification heuristics are provided in the Health Insurance Portability and Accountability Act Privacy Rule, funding agency and professional association privacy guidelines, and common practice. Objective The aim of the study was to evaluate whether the re-identification risks due to record linkage are sufficiently low when following common de-identification heuristics and whether the risk is stable across sample sizes and data sets. Methods Two methods were followed to construct identification data sets. Re-identification attacks were simulated on these. For each data set we varied the sample size down to 30 individuals, and for each sample size evaluated the risk of re-identification for all combinations of quasi-identifiers. The combinations of quasi-identifiers that were low risk more than 50% of the time were considered stable. Results The identification data sets we were able to construct were the list of all physicians and the list of all lawyers registered in Ontario, using 1% sampling fractions. The quasi-identifiers of region, gender, and year of birth were found to be low risk more than 50% of the time across both data sets. The combination of gender and region was also found to be low risk more than 50% of the time. We were not able to create an identification data set for the whole population. Conclusions Existing Canadian federal and provincial privacy laws help explain why it is difficult to create an identification data set for the whole population. 
That such examples of high re-identification risk exist for mainstream professions makes a strong case for not disclosing the high-risk variables and their combinations identified here. For professional subpopulations with published membership lists, many variables often needed by researchers would have to be excluded or generalized to ensure consistently low re-identification risk. Data custodians and researchers need to consider other statistical disclosure techniques for protecting privacy. PMID:17213047

  17. Methodological framework for projecting the potential loss of intraspecific genetic diversity due to global climate change

    PubMed Central

    2012-01-01

Background While research on the impact of global climate change (GCC) on ecosystems and species is flourishing, a fundamental component of biodiversity – molecular variation – has not yet received its due attention in such studies. Here we present a methodological framework for projecting the loss of intraspecific genetic diversity due to GCC. Methods The framework consists of multiple steps that combine 1) hierarchical genetic clustering methods to define comparable units of inference, 2) species accumulation curves (SAC) to infer sampling completeness, and 3) species distribution modelling (SDM) to project the genetic diversity loss under GCC. We suggest procedures for existing data sets as well as specifically designed studies. We illustrate the approach with two worked examples from a land snail (Trochulus villosus) and a caddisfly (Smicridea (S.) mucronata). Results Sampling completeness was diagnosed on the third coarsest haplotype clade level for T. villosus and the second coarsest for S. mucronata. For both species, a substantial species range loss was projected under the chosen climate scenario. However, despite substantial differences in data set quality concerning spatial sampling and sampling depth, no loss of haplotype clades due to GCC was predicted for either species. Conclusions The suggested approach presents a feasible method to tap the rich resources of existing phylogeographic data sets and to guide the design and analysis of studies explicitly intended to estimate the impact of GCC on a currently still neglected level of biodiversity. PMID:23176586

  18. A laboratory procedure for measuring and georeferencing soil colour

    NASA Astrophysics Data System (ADS)

    Marques-Mateu, A.; Balaguer-Puig, M.; Moreno-Ramon, H.; Ibanez-Asensio, S.

    2015-04-01

Remote sensing and geospatial applications very often require ground truth data to assess outcomes from spatial analyses or environmental models. Those data sets, however, may be difficult to collect in a proper format or may even be unavailable. In the particular case of soil colour, the collection of reliable ground data can be cumbersome due to measuring methods, colour communication issues, and other practical factors, which leads to a lack of a standard procedure for soil colour measurement and georeferencing. In this paper we present a laboratory procedure that provides colour coordinates of georeferenced soil samples, which become useful in later processing stages of soil mapping and classification from digital images. The procedure requires a laboratory setup consisting of a light booth and a trichromatic colorimeter, together with a computer program that performs colour measurement, storage, and colour space transformation tasks. Measurement tasks are automated by means of specific data logging routines which allow storing recorded colour data in a spatial format. A key feature of the system is the ability to transform between physically based colour spaces and the Munsell system, which is still the standard in soil science. The working scheme pursues the automation of routine tasks whenever possible and the avoidance of input mistakes by means of a convenient layout of the user interface. The program can readily manage colour and coordinate data sets, which eventually allows creating spatial data sets. All the tasks regarding data joining between colorimeter measurements and sample locations are executed by the software in the background, allowing users to concentrate on sample processing. As a result, we obtained a robust and fully functional computer-based procedure which has proven a very useful tool for sample classification and cataloguing purposes as well as for integrating soil colour data with other remotely sensed and spatial data sets.
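One colour-space transformation step such a program might perform is converting tristimulus CIE XYZ readings into CIELAB, a physically based space from which Munsell notation is commonly obtained via lookup tables. This is the standard textbook conversion, not the authors' code, and the illuminant C white point is an assumption:

```python
def xyz_to_lab(X, Y, Z, white=(98.074, 100.0, 118.232)):
    """CIE XYZ to CIELAB under an assumed illuminant C white point."""
    def f(t):
        # Standard CIELAB compression with its linear segment near zero.
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((X, Y, Z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# The white point itself maps to L* = 100, a* = b* = 0.
L, a, b = xyz_to_lab(98.074, 100.0, 118.232)
print(round(L), round(a), round(b))  # 100 0 0
```

The Munsell lookup itself typically works on such perceptually scaled coordinates rather than raw XYZ.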

  19. Multigenic Delineation of Lower Jaw Deformity in Triploid Atlantic Salmon (Salmo salar L.)

    PubMed Central

Amoroso, Gianluca; Ventura, Tomer; Cobcroft, Jennifer M.; Adams, Mark B.; Elizur, Abigail; Carter, Chris G.

    2016-01-01

    Lower jaw deformity (LJD) is a skeletal anomaly affecting farmed triploid Atlantic salmon (Salmo salar L.) which leads to considerable economic losses for industry and has animal welfare implications. The present study employed transcriptome analysis in parallel with real-time qPCR techniques to characterise for the first time the LJD condition in triploid Atlantic salmon juveniles using two independent sample sets: experimentally-sourced salmon (60 g) and commercially produced salmon (100 g). A total of eleven genes, some detected/identified through the transcriptome analysis (fbn2, gal and gphb5) and others previously determined to be related to skeletal physiology (alp, bmp4, col1a1, col2a1, fgf23, igf1, mmp13, ocn), were tested in the two independent sample sets. Gphb5, a recently discovered hormone, was significantly (P < 0.05) down-regulated in LJD affected fish in both sample sets, suggesting a possible hormonal involvement. In-situ hybridization detected gphb5 expression in oral epithelium, teeth and skin of the lower jaw. Col2a1 showed the same consistent significant (P < 0.05) down-regulation in LJD suggesting a possible cartilaginous impairment as a distinctive feature of the condition. Significant (P < 0.05) differential expression of other genes found in either one or the other sample set highlighted the possible effect of stage of development or condition progression on transcription and showed that anomalous bone development, likely driven by cartilage impairment, is more evident at larger fish sizes. The present study improved our understanding of LJD suggesting that a cartilage impairment likely underlies the condition and col2a1 may be a marker. In addition, the involvement of gphb5 urges further investigation of a hormonal role in LJD and skeletal physiology in general. PMID:27977809

  20. Multigenic Delineation of Lower Jaw Deformity in Triploid Atlantic Salmon (Salmo salar L.).

    PubMed

    Amoroso, Gianluca; Ventura, Tomer; Cobcroft, Jennifer M; Adams, Mark B; Elizur, Abigail; Carter, Chris G

    2016-01-01

    Lower jaw deformity (LJD) is a skeletal anomaly affecting farmed triploid Atlantic salmon (Salmo salar L.) which leads to considerable economic losses for industry and has animal welfare implications. The present study employed transcriptome analysis in parallel with real-time qPCR techniques to characterise for the first time the LJD condition in triploid Atlantic salmon juveniles using two independent sample sets: experimentally-sourced salmon (60 g) and commercially produced salmon (100 g). A total of eleven genes, some detected/identified through the transcriptome analysis (fbn2, gal and gphb5) and others previously determined to be related to skeletal physiology (alp, bmp4, col1a1, col2a1, fgf23, igf1, mmp13, ocn), were tested in the two independent sample sets. Gphb5, a recently discovered hormone, was significantly (P < 0.05) down-regulated in LJD affected fish in both sample sets, suggesting a possible hormonal involvement. In-situ hybridization detected gphb5 expression in oral epithelium, teeth and skin of the lower jaw. Col2a1 showed the same consistent significant (P < 0.05) down-regulation in LJD suggesting a possible cartilaginous impairment as a distinctive feature of the condition. Significant (P < 0.05) differential expression of other genes found in either one or the other sample set highlighted the possible effect of stage of development or condition progression on transcription and showed that anomalous bone development, likely driven by cartilage impairment, is more evident at larger fish sizes. The present study improved our understanding of LJD suggesting that a cartilage impairment likely underlies the condition and col2a1 may be a marker. In addition, the involvement of gphb5 urges further investigation of a hormonal role in LJD and skeletal physiology in general.

  1. Recognition Using Hybrid Classifiers.

    PubMed

    Osadchy, Margarita; Keren, Daniel; Raviv, Dolev

    2016-04-01

    A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply.

  2. Probing the underlying physics of ejecta production from shocked Sn samples

    NASA Astrophysics Data System (ADS)

    Zellner, M. B.; McNeil, W. Vogan; Hammerberg, J. E.; Hixson, R. S.; Obst, A. W.; Olson, R. T.; Payton, J. R.; Rigg, P. A.; Routley, N.; Stevens, G. D.; Turley, W. D.; Veeser, L.; Buttler, W. T.

    2008-06-01

This effort investigates the underlying physics of ejecta production for high explosive (HE) shocked Sn surfaces prepared with finishes typical of those roughened by tool marks left from machining processes. To investigate the physical mechanisms of ejecta production, we compiled and re-examined ejecta data from two experimental campaigns [W. S. Vogan et al., J. Appl. Phys. 98, 113508 (2005); M. B. Zellner et al., ibid. 102, 013522 (2007)] to form a self-consistent data set spanning a large parameter space. In the first campaign, ejecta created upon shock release at the back side of HE shocked Sn samples were characterized for samples with varying surface finishes but at similar shock-breakout pressures (P_SB). In the second campaign, ejecta were characterized for HE shocked Sn samples with a constant surface finish but at varying P_SB.

  3. MicroSIFT Courseware Evaluations [Set 15 (362-388) and Set 16 (389-441), with an Index Listing the Contents of Each Set (Sets 1-16) and a Cumulative Subject Index (Sets 1-16)].

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This document consists of 80 microcomputer software package evaluations prepared by the MicroSIFT (Microcomputer Software and Information for Teachers) Clearinghouse at the Northwest Regional Education Laboratory. Set 15 consists of 27 packages; set 16 consists of 53 packages. Each software review lists producer, time and place of evaluation,…

  4. MicroSIFT Courseware Evaluations. [Set 11 (223-259), Set 12 (260-293), and a Special Set of 99 LIBRA Reviews of Junior High School Science Software, Including Subject and Title Indexes Covering Sets 1-12 and Special Set L].

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This document consists of 170 microcomputer software package evaluations prepared by the MicroSIFT (Microcomputer Software and Information for Teachers) Clearinghouse at the Northwest Regional Education Laboratory. Set 11 consists of 37 packages. Set 12 consists of 34 packages. A special unnumbered set, entitled LIBRA Reviews, treats 99 packages…

  5. Radiation detection method and system using the sequential probability ratio test

    DOEpatents

    Nelson, Karl E [Livermore, CA; Valentine, John D [Redwood City, CA; Beauchamp, Brock R [San Ramon, CA

    2007-07-17

    A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations are consistent with a specified model within a given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection, by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
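As a rough sketch of the SPRT idea for count data (not the patented system's dynamic background estimation or model adjustment), one can accumulate a Poisson log-likelihood ratio between a background rate and an elevated rate and compare it with Wald's decision thresholds; the rates and error bounds below are hypothetical:

```python
import math

def sprt_poisson(counts, b, s, alpha=0.01, beta=0.01):
    """Return 'signal', 'background', or 'continue' after the given counts."""
    upper = math.log((1 - beta) / alpha)   # accept 'elevated rate' above this
    lower = math.log(beta / (1 - alpha))   # accept 'background' below this
    llr = 0.0
    for n in counts:
        # Poisson log-likelihood ratio for one counting interval:
        # log[P(n | b + s) / P(n | b)] = n * log((b + s) / b) - s
        llr += n * math.log((b + s) / b) - s
        if llr >= upper:
            return "signal"
        if llr <= lower:
            return "background"
    return "continue"

print(sprt_poisson([9, 11, 10, 12], b=5.0, s=5.0))  # prints "signal"
```

Because the test stops as soon as either bound is crossed, consistently elevated counts trigger a decision quickly, which is how the SPRT maximizes detection range for a given statistical significance.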

  6. Picking Deep Filter Responses for Fine-Grained Image Recognition (Open Access Author’s Manuscript)

    DTIC Science & Technology

    2016-12-16

stages. Our method explores a unified framework based on two steps of deep filter response picking. The first picking step is to find distinctive filters which respond to specific patterns significantly and consistently, and learn a set of part detectors via iteratively alternating between new positive sample mining and part model retraining. The second picking step is to pool deep filter responses via spatially weighted combination of Fisher

  7. Residence Halls Perceptions Study: A Report of the Perceptions of Students Regarding the Residence Halls at the University of South Carolina. Research Notes No. 33-76.

    ERIC Educational Resources Information Center

    Wertz, Richard D.; And Others

    In an effort to elicit student attitudes concerning residence hall living on campus a questionnaire was designed and administered to a random sample of 1,100 resident students at the University of South Carolina. The survey instrument consisted of a set of sixteen statements that required an "is" and a "should be" response. The…

  8. The Development of Open University New Generation Learning Model Using Research and Development for Atomic Physics Course PEFI4421

    ERIC Educational Resources Information Center

    Prayekti

    2017-01-01

    This research was aimed at developing printed teaching materials of Atomic Physics PEFI4421 Course using Research and Development (R & D) model; which consisted of three major set of activities. The first set consisted of seven stages, the second set consisted of one stage, and the third set consisted of seven stages. This research study was…

  9. Summary of results of frictional sliding studies, at confining pressures up to 6.98 kb, in selected rock materials

    USGS Publications Warehouse

    Summers, R.; Byerlee, J.

    1977-01-01

This report is a collection of stress-strain charts which were produced by deforming selected simulated fault gouge materials. Several sets of samples consisted of intact cylinders, 1.000 inch in diameter and 2.500 inches long. The majority of the samples consisted of thin layers of the selected sample material, inserted within a diagonal sawcut in a 1.000-inch by 2.500-inch Westerly Granite cylinder. Two sorts of inserts were used. The first consisted of thin wafers cut from 1.000-inch-diameter cores of the rock being tested. The other consisted of thin layers of crushed material packed onto the sawcut surface. In several groups of tests using various thicknesses (0.010 inch to 0.160 inch) of a given type material there were variations in the stress level and/or stability of sliding as a function of the fault zone width. Because of this we elected to use a standard 0.025-inch width fault zone to compare the frictional properties of many of the different types of rock materials. This 0.025-inch thickness was chosen partially because this thickness of crushed granite behaves approximately the same as a fractured sample of initially intact granite, and also because this is near the lower limit at which we could cut intact wafers for those samples that were prepared from thin slices of rock. One series of tests was done with sawcut granite cylinders without fault gouge inserts. All of these tests were done in a hydraulically operated triaxial testing machine. The confining pressure (σ3, the least principal stress) was applied by pumping petroleum ether into a pressure vessel. The differential stress (σ1-σ3) was applied by a hydraulically operated ram that could be advanced into the pressure vessel at any of several strain rates (10^-4 sec^-1, 10^-5 sec^-1, 10^-6 sec^-1, 10^-7 sec^-1, or 10^-8 sec^-1). All samples were jacketed in polyurethane tubing to exclude the confining pressure medium from the samples.
The majority of the samples, with the exception of some of the initially intact rocks, also had thin copper jackets. These served to hold the saw cut parts of the granite sample holders in alignment while the samples were handled and pushed into the polyurethane jackets.

  10. Age-related changes in the anticipatory coarticulation in the speech of young children

    NASA Astrophysics Data System (ADS)

    Parson, Mathew; Lloyd, Amanda; Stoddard, Kelly; Nissen, Shawn L.

    2003-10-01

This paper investigates the possible patterns of anticipatory coarticulation in the speech of young children. Speech samples were elicited from three groups of children between 3 and 6 years of age and one comparison group of adults. The utterances were recorded online in a quiet room environment using high quality microphones and direct analog-to-digital conversion to computer disk. Formant frequency measures (F1, F2, and F3) were extracted from a centralized and unstressed vowel (schwa) spoken prior to two different sets of productions. The first set of productions consisted of the target vowel followed by a series of real words containing an initial CV(C) syllable (voiceless obstruent-monophthongal vowel) in a range of phonetic contexts, while the second set consisted of a series of nonword productions with a relatively constrained phonetic context. An analysis of variance was utilized to determine if the formant frequencies varied systematically as a function of age, gender, and phonetic context. Results will also be discussed in association with spectral moment measures extracted from the obstruent segment immediately following the target vowel. [Work supported by research funding from Brigham Young University.]

  11. Distribution of verbal and physical violence for same and opposite genders among adolescents.

    PubMed

    Winstok, Zeev; Enosh, Guy

    2008-09-01

The present study was set up to test the perceived distribution of verbal and physical violent behaviors among same- and opposite-gender peers. More specifically, those perceived violent behaviors are examined as the outcome of adolescents' cost-risk goals. The study assumes two conflicting social goals: whereas the goal of risk reduction may motivate withdrawal from conflict and decrease the prevalence of violent events, the goal of pursuing social status may motivate initiation and/or retaliation, thus increasing the prevalence of violence. The study is based on a sample of 155 high-school students who recorded the frequency of observing violent events in their peer group over a one-week period. Findings demonstrate that for males, opponent gender had a primary effect on violence distribution. Males exhibited violence against males more frequently than against females. This result is consistent with the assumption that males set a higher priority on pursuing social status. For females, verbal violence was more frequent than physical forms of aggression. This is consistent with the assumption that females set a higher priority on avoiding risk. These results are discussed from an evolutionary cost-risk perspective.

  12. A standard bacterial isolate set for research on contemporary dairy spoilage.

    PubMed

    Trmčić, A; Martin, N H; Boor, K J; Wiedmann, M

    2015-08-01

    Food spoilage is an ongoing issue that could be dealt with more efficiently if some standardization and unification was introduced in this field of research. For example, research and development efforts to understand and reduce food spoilage can greatly be enhanced through availability and use of standardized isolate sets. To address this critical issue, we have assembled a standard isolate set of dairy spoilers and other selected nonpathogenic organisms frequently associated with dairy products. This publicly available bacterial set consists of (1) 35 gram-positive isolates including 9 Bacillus and 15 Paenibacillus isolates and (2) 16 gram-negative isolates including 4 Pseudomonas and 8 coliform isolates. The set includes isolates obtained from samples of pasteurized milk (n=43), pasteurized chocolate milk (n=1), raw milk (n=1), cheese (n=2), as well as isolates obtained from samples obtained from dairy-powder production (n=4). Analysis of growth characteristics in skim milk broth identified 16 gram-positive and 13 gram-negative isolates as psychrotolerant. Additional phenotypic characterization of isolates included testing for activity of β-galactosidase and lipolytic and proteolytic enzymes. All groups of isolates included in the isolate set exhibited diversity in growth and enzyme activity. Source data for all isolates in this isolate set are publicly available in the FoodMicrobeTracker database (http://www.foodmicrobetracker.com), which allows for continuous updating of information and advancement of knowledge on dairy-spoilage representatives included in this isolate set. This isolate set along with publicly available isolate data provide a unique resource that will help advance knowledge of dairy-spoilage organisms as well as aid industry in development and validation of new control strategies. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  13. Potential metabolite markers of schizophrenia.

    PubMed

    Yang, J; Chen, T; Sun, L; Zhao, Z; Qi, X; Zhou, K; Cao, Y; Wang, X; Qiu, Y; Su, M; Zhao, A; Wang, P; Yang, P; Wu, J; Feng, G; He, L; Jia, W; Wan, C

    2013-01-01

    Schizophrenia is a severe mental disorder that affects 0.5-1% of the population worldwide. Current diagnostic methods are based on psychiatric interviews, which are subjective in nature. The lack of disease biomarkers to support objective laboratory tests has been a long-standing bottleneck in the clinical diagnosis and evaluation of schizophrenia. Here we report a global metabolic profiling study involving 112 schizophrenic patients and 110 healthy subjects, who were divided into a training set and a test set, designed to identify metabolite markers. A panel of serum markers consisting of glycerate, eicosenoic acid, β-hydroxybutyrate, pyruvate and cystine was identified as an effective diagnostic tool, achieving an area under the receiver operating characteristic curve (AUC) of 0.945 in the training samples (62 patients and 62 controls) and 0.895 in the test samples (50 patients and 48 controls). Furthermore, a composite panel by the addition of urine β-hydroxybutyrate to the serum panel achieved a more satisfactory accuracy, which reached an AUC of 1 in both the training set and the test set. Multiple fatty acids and ketone bodies were found significantly (P<0.01) elevated in both the serum and urine of patients, suggesting an upregulated fatty acid catabolism, presumably resulting from an insufficiency of glucose supply in the brains of schizophrenia patients.
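The panel-plus-AUC evaluation described above can be sketched as follows. The data are synthetic stand-ins for the five serum markers, and the effect sizes, model choice (logistic regression), and in-sample evaluation are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a 5-marker serum panel (glycerate, eicosenoic acid,
# beta-hydroxybutyrate, pyruvate, cystine); effect sizes are hypothetical.
X_cases = rng.normal(loc=0.8, scale=1.0, size=(62, 5))  # 62 patients (training)
X_ctrls = rng.normal(loc=0.0, scale=1.0, size=(62, 5))  # 62 controls (training)
X = np.vstack([X_cases, X_ctrls])
y = np.array([1] * 62 + [0] * 62)

# Combine the markers into a single diagnostic score and summarize
# discrimination by the area under the ROC curve.
panel = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, panel.predict_proba(X)[:, 1])
```

In the study itself, the panel would be fit on the training set and the AUC reported on the held-out test set; here the AUC is computed in-sample for brevity.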

  14. Development and validation of a Response Bias Scale (RBS) for the MMPI-2.

    PubMed

    Gervais, Roger O; Ben-Porath, Yossef S; Wygant, Dustin B; Green, Paul

    2007-06-01

    This study describes the development of a Minnesota Multiphasic Personality Inventory (MMPI-2) scale designed to detect negative response bias in forensic neuropsychological or disability assessment settings. The Response Bias Scale (RBS) consists of 28 MMPI-2 items that discriminated between persons who passed or failed the Word Memory Test (WMT), Computerized Assessment of Response Bias (CARB), and/or Test of Memory Malingering (TOMM) in a sample of 1,212 nonhead-injury disability claimants. Incremental validity of the RBS was evaluated by comparing its ability to detect poor performance on four separate symptom validity tests with that of the F and F(P) scales and the Fake Bad Scale (FBS). The RBS consistently outperformed F, F(P), and FBS. Study results suggest that the RBS may be a useful addition to existing MMPI-2 validity scales and indices in detecting symptom complaints predominantly associated with cognitive response bias and overreporting in forensic neuropsychological and disability assessment settings.

  15. A multicenter study benchmarks software tools for label-free proteome quantification.

    PubMed

    Navarro, Pedro; Kuharev, Jörg; Gillet, Ludovic C; Bernhardt, Oliver M; MacLean, Brendan; Röst, Hannes L; Tate, Stephen A; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I; Aebersold, Ruedi; Tenzer, Stefan

    2016-11-01

    Consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH 2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from sequential window acquisition of all theoretical fragment-ion spectra (SWATH)-MS, which uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test data sets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation-window setups. For consistent evaluation, we developed LFQbench, an R package, to calculate metrics of precision and accuracy in label-free quantitative MS and report the identification performance, robustness and specificity of each software tool. Our reference data sets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics.
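The precision and accuracy metrics that LFQbench reports can be illustrated in miniature for a hybrid-proteome design, where one species' proteins are spiked at a known ratio between two samples. The numbers below are simulated; LFQbench itself is an R package and computes these metrics per species and per software tool.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hybrid-proteome benchmark: proteins from one species are present at a known
# 2:1 ratio between samples A and B, so the expected log2 ratio is 1.0.
expected_log2 = 1.0
measured_log2 = rng.normal(loc=expected_log2, scale=0.3, size=200)  # 200 proteins

accuracy = measured_log2.mean() - expected_log2  # systematic deviation (bias)
precision = measured_log2.std()                  # spread of ratios across proteins
```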

  16. A Finnish validation study of the SCL-90.

    PubMed

    Holi, M M; Sammallahti, P R; Aalberg, V A

    1998-01-01

    The Symptom Check-List-90 (SCL-90) is a widely used psychiatric questionnaire which has not yet been validated in Finland. We investigated the utility of the translated version of the SCL-90 in the Finnish population, and set community norms for it. The internal consistency of the original subscales was checked and found to be good. Discriminant function analysis, based on the nine original subscales, showed that the power of the SCL-90 to discriminate between patients and the community is good. Factor analysis of the items of the questionnaire yielded a very strong unrotated first factor, suggesting that a general factor may be present. This together with the fact that high intercorrelations were found between the nine original subscales suggests that the instrument is not multidimensional. The SCL-90 may be useful in a research setting as an instrument for measuring the change in symptomatic distress, or as a screening instrument. The American community norms should be used with caution, as the Finnish community sample scored consistently higher on all subscales.

  17. Control Software for Piezo Stepping Actuators

    NASA Technical Reports Server (NTRS)

    Shields, Joel F.

    2013-01-01

    A control system has been developed for the Space Interferometer Mission (SIM) piezo stepping actuator. Piezo stepping actuators are novel because they offer extreme dynamic range (centimeter stroke with nanometer resolution) with power, thermal, mass, and volume advantages over existing motorized actuation technology. These advantages come with the added benefit of greatly reduced complexity in the support electronics. The piezo stepping actuator consists of three fully redundant sets of piezoelectric transducers (PZTs), two sets of brake PZTs, and one set of extension PZTs. These PZTs are used to grasp and move a runner attached to the optic to be moved. By proper cycling of the two brake and extension PZTs, both forward and backward moves of the runner can be achieved. Each brake can be configured for either a power-on or power-off state. For SIM, the brakes and gate of the mechanism are configured in such a manner that, at the end of the step, the actuator is in a parked or power-off state. The control software uses asynchronous sampling of an optical encoder to monitor the position of the runner. These samples are timed to coincide with the end of the previous move, which may consist of a variable number of steps. This sampling technique linearizes the device by avoiding input saturation of the actuator and effectively eliminates plant latencies. The software also estimates, in real time, the scale factor of the device and a disturbance caused by cycling of the brakes. These estimates are used to actively cancel the brake disturbance. The control system also includes feedback and feedforward elements that regulate the position of the runner to a given reference position. Convergence to within 10 nanometers for small- and medium-sized reference positions (less than 200 microns) can be achieved in under 10 seconds. Convergence times for large moves (greater than 1 millimeter) are limited by the step rate.
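The real-time scale-factor estimate mentioned above can be sketched, in spirit, as a running average of observed encoder travel per commanded step. This is a hypothetical illustration, not the SIM flight software; the class name and smoothing gain are invented.

```python
class ScaleFactorEstimator:
    """Exponentially weighted estimate of actuator travel per step (e.g. nm/step)."""

    def __init__(self, alpha=0.1, initial=1.0):
        self.alpha = alpha        # smoothing gain (assumed value)
        self.estimate = initial   # current scale-factor estimate

    def update(self, encoder_delta, n_steps):
        """Fold in one asynchronous encoder sample taken at the end of a move."""
        if n_steps == 0:
            return self.estimate
        observed = encoder_delta / n_steps
        self.estimate += self.alpha * (observed - self.estimate)
        return self.estimate


est = ScaleFactorEstimator()
for _ in range(100):          # 100 moves of 50 steps, each traveling 100 units,
    est.update(100.0, 50)     # so the true scale factor is 2.0 units per step
```

An estimate updated only at move boundaries matches the asynchronous sampling scheme described above, since each encoder sample already reflects a completed move of a known number of steps.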

  18. Trace elemental analysis of glass and paint samples of forensic interest by ICP-MS using laser ablation solid sample introduction

    NASA Astrophysics Data System (ADS)

    Almirall, Jose R.; Trejos, Tatiana; Hobbs, Andria; Furton, Kenneth G.

    2003-09-01

    The importance of small amounts of glass and paint evidence as a means to associate a crime event to a suspect or a suspect to another individual has been demonstrated in many cases. Glass is a fragile material that is often found at the scenes of crimes such as burglaries, hit-and-run accidents and violent crime offenses. Previous work has demonstrated the utility of elemental analysis by solution ICP-MS of small amounts of glass for the comparison between a fragment found at a crime scene to a possible source of the glass. The multi-element capability and the sensitivity of ICP-MS combined with the simplified sample introduction of laser ablation prior to ion detection provides for an excellent and relatively non-destructive technique for elemental analysis of glass fragments. The direct solid sample introduction technique of laser ablation (LA) is reported as an alternative to the solution method. Direct solid sampling provides several advantages over solution methods and shows great potential for a number of solid sample analyses in forensic science. The advantages of laser ablation include the simplification of sample preparation, thereby reducing the time and complexity of the analysis, the elimination of handling acid dissolution reagents such as HF and the reduction of sources of interferences in the ionization plasma. Direct sampling also provides for essentially "non-destructive" sampling due to the removal of very small amounts of sample needed for analysis. The discrimination potential of LA-ICP-MS is compared with previously reported solution ICP-MS methods using external calibration with internal standardization and a newly reported solution isotope dilution (ID) method. A total of ninety-one different glass samples were used for the comparison study using the techniques mentioned. One set consisted of forty-five headlamps taken from a variety of automobiles representing a range of twenty years of manufacturing dates. 
A second set consisted of forty-six automotive glasses (side windows and windshields) representing casework glass from different vehicle manufacturers over several years; this set was also characterized by RI and elemental composition analysis. The solution sample introduction techniques (external calibration and isotope dilution) provide for excellent sensitivity and precision but have the disadvantages of destroying the sample and requiring complex sample preparation. The laser ablation method was simpler, faster and produced comparable discrimination to the EC-ICP-MS and ID-ICP-MS. LA-ICP-MS can provide for an excellent alternative to solution analysis of glass in forensic casework samples. Paints and coatings are frequently encountered as trace evidence samples submitted to forensic science laboratories. An LA-ICP-MS method has been developed to complement the commonly used techniques in forensic laboratories in order to better characterize these samples for forensic purposes. Time-resolved plots of each sample can be compared to associate samples to each other or to discriminate between samples. Additionally, the concentration of lead and the ratios of other elements have been determined in various automotive paints by the reported method. A sample set of eighteen (18) survey automotive paint samples has been analyzed with the developed method in order to determine the utility of LA-ICP-MS and to compare the method to the more commonly used scanning electron microscopy (SEM) method for elemental characterization of paint layers in forensic casework.

  19. The clustering of the SDSS-IV extended Baryon Oscillation Spectroscopic Survey DR14 quasar sample: first measurement of baryon acoustic oscillations between redshift 0.8 and 2.2

    NASA Astrophysics Data System (ADS)

    Ata, Metin; Baumgarten, Falk; Bautista, Julian; Beutler, Florian; Bizyaev, Dmitry; Blanton, Michael R.; Blazek, Jonathan A.; Bolton, Adam S.; Brinkmann, Jonathan; Brownstein, Joel R.; Burtin, Etienne; Chuang, Chia-Hsun; Comparat, Johan; Dawson, Kyle S.; de la Macorra, Axel; Du, Wei; du Mas des Bourboux, Hélion; Eisenstein, Daniel J.; Gil-Marín, Héctor; Grabowski, Katie; Guy, Julien; Hand, Nick; Ho, Shirley; Hutchinson, Timothy A.; Ivanov, Mikhail M.; Kitaura, Francisco-Shu; Kneib, Jean-Paul; Laurent, Pierre; Le Goff, Jean-Marc; McEwen, Joseph E.; Mueller, Eva-Maria; Myers, Adam D.; Newman, Jeffrey A.; Palanque-Delabrouille, Nathalie; Pan, Kaike; Pâris, Isabelle; Pellejero-Ibanez, Marcos; Percival, Will J.; Petitjean, Patrick; Prada, Francisco; Prakash, Abhishek; Rodríguez-Torres, Sergio A.; Ross, Ashley J.; Rossi, Graziano; Ruggeri, Rossana; Sánchez, Ariel G.; Satpathy, Siddharth; Schlegel, David J.; Schneider, Donald P.; Seo, Hee-Jong; Slosar, Anže; Streblyanska, Alina; Tinker, Jeremy L.; Tojeiro, Rita; Vargas Magaña, Mariana; Vivek, M.; Wang, Yuting; Yèche, Christophe; Yu, Liang; Zarrouk, Pauline; Zhao, Cheng; Zhao, Gong-Bo; Zhu, Fangzhou

    2018-02-01

    We present measurements of the Baryon Acoustic Oscillation (BAO) scale in redshift-space using the clustering of quasars. We consider a sample of 147 000 quasars from the extended Baryon Oscillation Spectroscopic Survey (eBOSS) distributed over 2044 square degrees with redshifts 0.8 < z < 2.2 and measure their spherically averaged clustering in both configuration and Fourier space. Our observational data set and the 1400 simulated realizations of the data set allow us to detect a preference for BAO that is greater than 2.8σ. We determine the spherically averaged BAO distance to z = 1.52 to 3.8 per cent precision: DV(z = 1.52) = 3843 ± 147(rd/rd, fid)Mpc. This is the first time the location of the BAO feature has been measured between redshifts 1 and 2. Our result is fully consistent with the prediction obtained by extrapolating the Planck flat ΛCDM best-fitting cosmology. All of our results are consistent with basic large-scale structure (LSS) theory, confirming quasars to be a reliable tracer of LSS, and provide a starting point for numerous cosmological tests to be performed with eBOSS quasar samples. We combine our result with previous, independent, BAO distance measurements to construct an updated BAO distance-ladder. Using these BAO data alone and marginalizing over the length of the standard ruler, we find ΩΛ > 0 at 6.6σ significance when testing a ΛCDM model with free curvature.
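The quoted spherically averaged distance, DV(z) = [D_M² · cz/H(z)]^(1/3), can be reproduced to within a few per cent by a short numerical sketch in flat ΛCDM. The parameter values below (H0 = 67.7 km/s/Mpc, Ωm = 0.31) approximate the Planck best fit and are illustrative, not the paper's exact fiducial cosmology.

```python
import numpy as np

c = 299792.458        # speed of light, km/s
H0, Om = 67.7, 0.31   # flat LCDM parameters (approximate Planck values)
z = 1.52

E = lambda zz: np.sqrt(Om * (1 + zz) ** 3 + (1 - Om))  # H(z)/H0

# Comoving distance via trapezoidal integration of dz/E(z)
zs = np.linspace(0.0, z, 2001)
integrand = 1.0 / E(zs)
dz = zs[1] - zs[0]
D_C = (c / H0) * dz * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

D_H = c / (H0 * E(z))                       # Hubble distance at z, Mpc
D_V = (D_C ** 2 * z * D_H) ** (1.0 / 3.0)   # spherically averaged distance, Mpc
```

With these assumed parameters the result lands close to the measured DV(z = 1.52) = 3843 ± 147 (rd/rd,fid) Mpc, consistent with the paper's statement that the measurement agrees with the extrapolated Planck flat ΛCDM prediction.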

  20. In silico pathway analysis in cervical carcinoma reveals potential new targets for treatment

    PubMed Central

    van Dam, Peter A.; van Dam, Pieter-Jan H. H.; Rolfo, Christian; Giallombardo, Marco; van Berckelaer, Christophe; Trinh, Xuan Bich; Altintas, Sevilay; Huizing, Manon; Papadimitriou, Kostas; Tjalma, Wiebren A. A.; van Laere, Steven

    2016-01-01

    An in silico pathway analysis was performed in order to improve current knowledge on the molecular drivers of cervical cancer and detect potential targets for treatment. Three publicly available Affymetrix gene expression data sets (GSE5787, GSE7803, GSE9750) were retrieved, comprising a total of 9 cervical cancer cell lines (CCCLs), 39 normal cervical samples, 7 CIN3 samples and 111 cervical cancer samples (CCSs). Prediction analysis of microarrays was performed in the Affymetrix sets to identify cervical cancer biomarkers. To select cancer cell-specific genes the CCSs were compared to the CCCLs. Validated genes were submitted to a gene set enrichment analysis (GSEA) and Expression2Kinases (E2K). In the CCSs a total of 1,547 probe sets were identified that were overexpressed (FDR < 0.1). Compared to the CCCLs, 560 probe sets (481 unique genes) had a cancer cell-specific expression profile, and 315 of these genes (65%) were validated. GSEA identified 5 cancer hallmarks enriched in CCSs (P < 0.01 and FDR < 0.25), showing that deregulation of the cell cycle is a major component of cervical cancer biology. E2K identified a protein-protein interaction (PPI) network of 162 nodes (including 20 druggable kinases) and 1626 edges. This PPI-network consists of 5 signaling modules associated with MYC signaling (Module 1), cell cycle deregulation (Module 2), TGFβ-signaling (Module 3), MAPK signaling (Module 4) and chromatin modeling (Module 5). Potential treatment targets identified included CDK1, CDK2, ABL1, ATM, AKT1, MAPK1 and MAPK3, among others. The present study identified important driver pathways in cervical carcinogenesis which should be assessed for their potential therapeutic druggability. PMID:26701206

  1. A mixture model-based approach to the clustering of microarray expression data.

    PubMed

    McLachlan, G J; Bean, R W; Peel, D

    2002-03-01

    This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic used in conjunction with a threshold on the size of a cluster allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to reduce effectively the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes are able to be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/
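The gene-selection step can be sketched with ordinary Gaussian mixtures (EMMIX-GENE itself fits mixtures of t distributions): each gene's expression profile across tissues is fit with one and with two components, and the likelihood-ratio statistic ranks the genes. The data below are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Synthetic expression of two genes across 60 tissues: one gene separates two
# tissue groups (bimodal), the other does not (unimodal).
informative = np.concatenate([rng.normal(-2, 0.5, 30), rng.normal(2, 0.5, 30)])
uninformative = rng.normal(0, 1, 60)

def lrt_statistic(x):
    """-2 log-likelihood ratio for one vs two mixture components."""
    x = x.reshape(-1, 1)
    # GaussianMixture.score returns the mean per-sample log-likelihood
    ll1 = GaussianMixture(n_components=1, random_state=0).fit(x).score(x) * len(x)
    ll2 = GaussianMixture(n_components=2, random_state=0).fit(x).score(x) * len(x)
    return 2.0 * (ll2 - ll1)
```

Genes whose statistic exceeds a chosen threshold (used together with a minimum cluster size, as described above) would be retained for the subsequent mixture-of-factor-analyzers clustering.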

  2. Factor structure of the Psychiatric Diagnostic Screening Questionnaire (PDSQ), a screening questionnaire for DSM-IV axis I disorders.

    PubMed

    Sheeran, T; Zimmerman, M

    2004-03-01

    We examined the factor structure of the Psychiatric Diagnostic Screening Questionnaire (PDSQ), a 125-item self-report scale that screens for 15 of the most common Axis I psychiatric disorders for which patients seek treatment in outpatient settings. The sample consisted of 2440 psychiatric outpatients. Thirteen factors were extracted. Ten mapped directly onto the DSM-IV diagnosis for which they were designed and one represented suicidal ideation. The remaining two factors reflected closely related disorders: Panic Disorder/Agoraphobia, and Somatization/Hypochondriasis. A psychosis factor was not extracted. Overall, the factor structure of the PDSQ was consistent with the DSM-IV nosology upon which it was developed.

  3. APIC position paper: safe injection, infusion, and medication vial practices in health care.

    PubMed

    Dolan, Susan A; Felizardo, Gwenda; Barnes, Sue; Cox, Tracy R; Patrick, Marcia; Ward, Katherine S; Arias, Kathleen Meehan

    2010-04-01

    Outbreaks involving the transmission of bloodborne pathogens or other microbial pathogens to patients in various types of health care settings due to unsafe injection, infusion, and medication vial practices are unacceptable. Each of the outbreaks could have been prevented by the use of proper aseptic technique in conjunction with basic infection prevention practices for handling parenteral medications, administration of injections, and procurement and sampling of blood. This document provides practice guidance for health care facilities on essential safe injection, infusion, and vial practices that should be consistently implemented in such settings. 2010 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  4. Air Flow and Pressure Drop Measurements Across Porous Oxides

    NASA Technical Reports Server (NTRS)

    Fox, Dennis S.; Cuy, Michael D.; Werner, Roger A.

    2008-01-01

    This report summarizes the results of air flow tests across eight porous, open-cell ceramic oxide samples. During ceramic specimen processing, the porosity was formed using the sacrificial template technique, with two different sizes of polystyrene beads used for the template. The samples were initially supplied with thicknesses ranging from 0.14 to 0.20 in. (0.35 to 0.50 cm) and nonuniform backside morphology (some areas dense, some porous). Samples were therefore ground to a thickness of 0.12 to 0.14 in. (0.30 to 0.35 cm) using dry 120 grit SiC paper. Pressure drop versus air flow is reported. Comparisons of samples with thickness variations are made, as are pressure drop estimates. As the density of the ceramic material increases, the maximum corrected flow decreases rapidly. Future sample sets should be supplied with samples of similar thickness and uniform surface morphology. This would allow a more consistent determination of air flow versus processing parameters and the resulting porosity size and distribution.

  5. Auditory proactive interference in monkeys: The role of stimulus set size and intertrial interval

    PubMed Central

    Bigelow, James; Poremba, Amy

    2013-01-01

    We conducted two experiments to examine the influence of stimulus set size (the number of stimuli that are used throughout the session) and intertrial interval (ITI, the elapsed time between trials) in auditory short-term memory in monkeys. We used an auditory delayed matching-to-sample task wherein the animals had to indicate whether two sounds separated by a 5-s retention interval were the same (match trials) or different (non-match trials). In Experiment 1, we randomly assigned a stimulus set size of 2, 4, 8, 16, 32, 64, or 192 (trial unique) for each session of 128 trials. Consistent with previous visual studies, overall accuracy was consistently lower when smaller stimulus set sizes were used. Further analyses revealed that these effects were primarily caused by an increase in incorrect “same” responses on non-match trials. In Experiment 2, we held the stimulus set size constant at four for each session and alternately set the ITI at 5, 10, or 20 s. Overall accuracy improved by increasing the ITI from 5 to 10 s, but the 10 and 20 s conditions were the same. As in Experiment 1, the overall decrease in accuracy during the 5-s condition was caused by a greater number of false “match” responses on non-match trials. Taken together, Experiments 1 and 2 show that auditory short-term memory in monkeys is highly susceptible to proactive interference (PI) caused by stimulus repetition. Additional analyses from Experiment 1 suggest that monkeys may make same/different judgments based on a familiarity criterion that is adjusted by error-related feedback. PMID:23526232

  6. Evaluating data mining algorithms using molecular dynamics trajectories.

    PubMed

    Tatsis, Vasileios A; Tjortjis, Christos; Tzirakis, Panagiotis

    2013-01-01

    Molecular dynamics simulations provide a sample of a molecule's conformational space. Experiments on the μs time scale, resulting in large amounts of data, are nowadays routine. Data mining techniques such as classification provide a way to analyse such data. In this work, we evaluate and compare several classification algorithms using three data sets resulting from computer simulations of a potential enzyme-mimetic biomolecule. We evaluated 65 classifiers available in the well-known data mining toolkit Weka, using 'classification' errors to assess algorithmic performance. Results suggest that: (i) 'meta' classifiers perform better than the other groups when applied to molecular dynamics data sets; (ii) Random Forest and Rotation Forest are the best classifiers for all three data sets; and (iii) classification via clustering yields the highest classification error. Our findings are consistent with bibliographic evidence, suggesting a 'roadmap' for dealing with such data.
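An analogous comparison can be run outside Weka. The sketch below scores a Random Forest against a simple classification-via-clustering baseline (KMeans clusters labeled by majority vote) on a synthetic data set; the data and the two-classifier comparison are illustrative, not the paper's 65-classifier benchmark.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Ensemble classifier, representative of the well-performing "meta"/forest group
rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
rf_error = 1.0 - rf.score(X_te, y_te)

# Classification via clustering: label each KMeans cluster by majority vote
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_tr)
cluster_label = {c: np.bincount(y_tr[km.labels_ == c]).argmax() for c in (0, 1)}
pred = np.array([cluster_label[c] for c in km.predict(X_te)])
cvc_error = float(np.mean(pred != y_te))
```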

  7. Identifying Early Childhood Personality Dimensions Using the California Child Q-Set and Prospective Associations With Behavioral and Psychosocial Development

    PubMed Central

    Wilson, Sylia; Schalet, Benjamin D.; Hicks, Brian M.; Zucker, Robert A.

    2013-01-01

    The present study used an empirical, “bottom-up” approach to delineate the structure of the California Child Q-Set (CCQ), a comprehensive set of personality descriptors, in a sample of 373 preschool-aged children. This approach yielded two broad trait dimensions, Adaptive Socialization (emotional stability, compliance, intelligence) and Anxious Inhibition (emotional/behavioral introversion). Results demonstrate the value of using empirical derivation to investigate the structure of personality in young children, speak to the importance of early-evident personality traits for adaptive development, and are consistent with a growing body of evidence indicating that personality structure in young children is similar, but not identical to, that in adults, suggesting a model of broad personality dimensions in childhood that evolve into narrower traits in adulthood. PMID:24223448

  8. IPO: a tool for automated optimization of XCMS parameters.

    PubMed

    Libiseller, Gunnar; Dvorzak, Michaela; Kleb, Ulrike; Gander, Edgar; Eisenberg, Tobias; Madeo, Frank; Neumann, Steffen; Trausinger, Gert; Sinner, Frank; Pieber, Thomas; Magnes, Christoph

    2015-04-16

    Untargeted metabolomics generates a huge amount of data. Software packages for automated data processing are crucial to successfully process these data. A variety of such software packages exist, but the outcome of data processing strongly depends on algorithm parameter settings. If they are not carefully chosen, suboptimal parameter settings can easily lead to biased results. Therefore, parameter settings also require optimization. Several parameter optimization approaches have already been proposed, but a software package for parameter optimization which is free of intricate experimental labeling steps, fast and widely applicable is still missing. We implemented the software package IPO ('Isotopologue Parameter Optimization') which is fast and free of labeling steps, and applicable to data from different kinds of samples and data from different methods of liquid chromatography - high resolution mass spectrometry and data from different instruments. IPO optimizes XCMS peak picking parameters by using natural, stable (13)C isotopic peaks to calculate a peak picking score. Retention time correction is optimized by minimizing relative retention time differences within peak groups. Grouping parameters are optimized by maximizing the number of peak groups that show one peak from each injection of a pooled sample. The different parameter settings are achieved by design of experiments, and the resulting scores are evaluated using response surface models. IPO was tested on three different data sets, each consisting of a training set and test set. IPO resulted in an increase of reliable groups (146% - 361%), a decrease of non-reliable groups (3% - 8%) and a decrease of the retention time deviation to one third. IPO was successfully applied to data derived from liquid chromatography coupled to high resolution mass spectrometry from three studies with different sample types and different chromatographic methods and devices. 
We were also able to show the potential of IPO to increase the reliability of metabolomics data. The source code is implemented in R, tested on Linux and Windows and it is freely available for download at https://github.com/glibiseller/IPO . The training sets and test sets can be downloaded from https://health.joanneum.at/IPO .
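The design-of-experiments / response-surface idea behind IPO can be shown in miniature: score a small design of parameter settings, fit a quadratic response surface, and take its stationary point as the optimized setting. The score function here is a hypothetical stand-in; IPO's real score is computed from reliable (13)C isotopologue peaks.

```python
import numpy as np

def peak_picking_score(ppm_tolerance):
    """Hypothetical stand-in for IPO's isotopologue-based peak-picking score."""
    return 100.0 - (ppm_tolerance - 12.0) ** 2  # assumed optimum at 12 ppm

design = np.array([5.0, 10.0, 15.0, 20.0])       # small experimental design
responses = np.array([peak_picking_score(p) for p in design])

a, b, c0 = np.polyfit(design, responses, 2)      # quadratic response surface
best = -b / (2.0 * a)                            # stationary point of the surface
```

In IPO this evaluate-fit-refine loop repeats over several parameters at once, with each response surface suggesting the next design region.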

  9. Targeted quantitative analysis of Streptococcus pyogenes virulence factors by multiple reaction monitoring.

    PubMed

    Lange, Vinzenz; Malmström, Johan A; Didion, John; King, Nichole L; Johansson, Björn P; Schäfer, Juliane; Rameseder, Jonathan; Wong, Chee-Hong; Deutsch, Eric W; Brusniak, Mi-Youn; Bühlmann, Peter; Björck, Lars; Domon, Bruno; Aebersold, Ruedi

    2008-08-01

    In many studies, particularly in the field of systems biology, it is essential that identical protein sets are precisely quantified in multiple samples such as those representing differentially perturbed cell states. The high degree of reproducibility required for such experiments has not been achieved by classical mass spectrometry-based proteomics methods. In this study we describe the implementation of a targeted quantitative approach by which predetermined protein sets are first identified and subsequently quantified at high sensitivity reliably in multiple samples. This approach consists of three steps. First, the proteome is extensively mapped out by multidimensional fractionation and tandem mass spectrometry, and the data generated are assembled in the PeptideAtlas database. Second, based on this proteome map, peptides uniquely identifying the proteins of interest, proteotypic peptides, are selected, and multiple reaction monitoring (MRM) transitions are established and validated by MS2 spectrum acquisition. This process of peptide selection, transition selection, and validation is supported by a suite of software tools, TIQAM (Targeted Identification for Quantitative Analysis by MRM), described in this study. Third, the selected target protein set is quantified in multiple samples by MRM. Applying this approach we were able to reliably quantify low abundance virulence factors from cultures of the human pathogen Streptococcus pyogenes exposed to increasing amounts of plasma. The resulting quantitative protein patterns enabled us to clearly define the subset of virulence proteins that is regulated upon plasma exposure.

  10. Alchemical prediction of hydration free energies for SAMPL

    PubMed Central

    Mobley, David L.; Liu, Shaui; Cerutti, David S.; Swope, William C.; Rice, Julia E.

    2013-01-01

    Hydration free energy calculations have become important tests of force fields. Alchemical free energy calculations based on molecular dynamics simulations provide a rigorous way to calculate these free energies for a particular force field, given sufficient sampling. Here, we report results of alchemical hydration free energy calculations for the set of small molecules comprising the 2011 Statistical Assessment of Modeling of Proteins and Ligands (SAMPL) challenge. Our calculations are largely based on the Generalized Amber Force Field (GAFF) with several different charge models, and we achieved RMS errors in the 1.4-2.2 kcal/mol range depending on charge model, marginally higher than what we typically observed in previous studies [1-5]. The test set consists of ethane, biphenyl, and a dibenzyl dioxin, as well as a series of chlorinated derivatives of each. We found that, for this set, using high-quality partial charges from MP2/cc-PVTZ SCRF RESP fits provided marginally improved agreement with experiment over using AM1-BCC partial charges as we have more typically done, in keeping with our recent findings [5]. Switching to OPLS Lennard-Jones parameters with AM1-BCC charges also improves agreement with experiment. We also find a number of chemical trends within each molecular series which we can explain, but there are also some surprises, including some that are captured by the calculations and some that are not. PMID:22198475
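The RMS error metric used above is straightforward to compute; the free-energy values below are made-up placeholders, not SAMPL results.

```python
import numpy as np

# Hypothetical calculated vs experimental hydration free energies (kcal/mol)
calculated = np.array([-2.1, -4.3, -1.0, -3.8])
experimental = np.array([-1.8, -5.0, -0.2, -3.1])

# Root-mean-square error over the molecule set
rms_error = np.sqrt(np.mean((calculated - experimental) ** 2))
```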

  11. Biases in the OSSOS Detection of Large Semimajor Axis Trans-Neptunian Objects

    NASA Astrophysics Data System (ADS)

    Gladman, Brett; Shankman, Cory; OSSOS Collaboration

    2017-10-01

    The accumulating but small set of large semimajor axis trans-Neptunian objects (TNOs) shows an apparent clustering in the orientations of their orbits. This clustering must either be representative of the intrinsic distribution of these TNOs, or else have arisen as a result of observation biases and/or statistically expected variations for such a small set of detected objects. The clustered TNOs were detected across different and independent surveys, which has led to claims that the detections are therefore free of observational bias. This apparent clustering has led to the so-called “Planet 9” hypothesis that a super-Earth currently resides in the distant solar system and causes this clustering. The Outer Solar System Origins Survey (OSSOS) is a large program that ran on the Canada-France-Hawaii Telescope from 2013 to 2017, discovering more than 800 new TNOs. One of the primary design goals of OSSOS was the careful determination of observational biases that would manifest within the detected sample. We demonstrate the striking and non-intuitive biases that exist for the detection of TNOs with large semimajor axes. The eight large semimajor axis OSSOS detections are an independent data set, of comparable size to the conglomerate samples used in previous studies. We conclude that the orbital distribution of the OSSOS sample is consistent with being detected from a uniform underlying angular distribution.

  12. OSSOS. VI. Striking Biases in the Detection of Large Semimajor Axis Trans-Neptunian Objects

    NASA Astrophysics Data System (ADS)

    Shankman, Cory; Kavelaars, J. J.; Bannister, Michele T.; Gladman, Brett J.; Lawler, Samantha M.; Chen, Ying-Tung; Jakubik, Marian; Kaib, Nathan; Alexandersen, Mike; Gwyn, Stephen D. J.; Petit, Jean-Marc; Volk, Kathryn

    2017-08-01

    The accumulating but small set of large semimajor axis trans-Neptunian objects (TNOs) shows an apparent clustering in the orientations of their orbits. This clustering must either be representative of the intrinsic distribution of these TNOs, or else have arisen as a result of observation biases and/or statistically expected variations for such a small set of detected objects. The clustered TNOs were detected across different and independent surveys, which has led to claims that the detections are therefore free of observational bias. This apparent clustering has led to the so-called “Planet 9” hypothesis that a super-Earth currently resides in the distant solar system and causes this clustering. The Outer Solar System Origins Survey (OSSOS) is a large program that ran on the Canada–France–Hawaii Telescope from 2013 to 2017, discovering more than 800 new TNOs. One of the primary design goals of OSSOS was the careful determination of observational biases that would manifest within the detected sample. We demonstrate the striking and non-intuitive biases that exist for the detection of TNOs with large semimajor axes. The eight large semimajor axis OSSOS detections are an independent data set, of comparable size to the conglomerate samples used in previous studies. We conclude that the orbital distribution of the OSSOS sample is consistent with being detected from a uniform underlying angular distribution.
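    A toy Monte Carlo makes the selection-bias argument concrete: draw orbital angles from a uniform intrinsic distribution, keep only those inside an assumed survey window, and the detected sample looks tightly clustered. The window geometry below is invented for illustration and is not the real OSSOS pointing history.

```python
import numpy as np

rng = np.random.default_rng(42)

# Intrinsic population: 100,000 orbital angles, uniform on [0, 360) degrees.
intrinsic = rng.uniform(0.0, 360.0, 100_000)

def detectable(angles, center=60.0, half_width=45.0):
    # Toy survey window: an object is "detected" only if its angle lies
    # within half_width degrees of the window center (assumed geometry).
    dist = np.abs((angles - center + 180.0) % 360.0 - 180.0)
    return angles[dist < half_width]

detected = detectable(intrinsic)
# Recentered on the window, the detected sample spans less than 90 degrees,
# i.e. it appears clustered despite a perfectly uniform parent population.
spread = np.ptp((detected - 60.0 + 180.0) % 360.0 - 180.0)
print(spread < 90.0, round(len(detected) / len(intrinsic), 2))
```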

  13. Graph-Based Semi-Supervised Hyperspectral Image Classification Using Spatial Information

    NASA Astrophysics Data System (ADS)

    Jamshidpour, N.; Homayouni, S.; Safari, A.

    2017-09-01

    Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, some problems still need specific attention. For example, the lack of enough labeled samples and the high dimensionality problem are the two most important issues, which dramatically degrade the performance of supervised classification. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method, which uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationships among pixels in the spectral and spatial spaces, respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods, such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples per class, performance improved by 5.92% and 10.76% over spatial graph-based SSL for the AVIRIS Indian Pines and Pavia University data sets, respectively.
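    The graph-merging step described above can be sketched with simple combinatorial Laplacians and an arbitrary mixing weight; the affinity values and the weight are assumptions for illustration, not the paper's construction.

```python
import numpy as np

# Toy spectral and spatial affinity matrices for 4 pixels
# (symmetric, zero diagonal); values are illustrative.
W_spec = np.array([[0., .8, .1, 0.],
                   [.8, 0., .2, 0.],
                   [.1, .2, 0., .9],
                   [0., 0., .9, 0.]])
W_spat = np.array([[0., .5, 0., 0.],
                   [.5, 0., .5, 0.],
                   [0., .5, 0., .5],
                   [0., 0., .5, 0.]])

def laplacian(W):
    """Combinatorial graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

alpha = 0.6  # assumed mixing weight between the two graphs
L_joint = alpha * laplacian(W_spec) + (1 - alpha) * laplacian(W_spat)

# Every row of a graph Laplacian sums to zero; the joint graph inherits this.
print(np.allclose(L_joint.sum(axis=1), 0))  # → True
```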

  14. Exposure data from multi-application, multi-industry maintenance of surfaces and joints sealed with asbestos-containing gaskets and packing.

    PubMed

    Boelter, Fred; Simmons, Catherine; Hewett, Paul

    2011-04-01

    Fluid sealing devices (gaskets and packing) containing asbestos are manufactured and blended with binders such that the asbestos fibers are locked in a matrix that limits the potential for fiber release. Occasionally, fluid sealing devices fail and need to be replaced or are removed during preventive maintenance activities. This is the first study known to pool over a decade's worth of exposure assessments involving fluid sealing devices used in a variety of applications. Twenty-one assessments of work activities and air monitoring were performed under conditions with no mechanical ventilation and work scenarios described as "worst-case" conditions. Frequently, the work was conducted using aggressive techniques, along with dry removal practices. Personal and area samples were collected and analyzed in accordance with the National Institute for Occupational Safety and Health Methods 7400 and 7402. A total of 782 samples were analyzed by phase contrast microscopy, and 499 samples were analyzed by transmission electron microscopy. The statistical data analysis focused on the overall data sets, which were personal full-shift time-weighted average (TWA) exposures, personal 30-min exposures, and area full-shift TWA values. Each data set contains three estimates of exposure: (1) total fibers; (2) asbestos fibers only, substituting a value of 0.0035 f/cc for censored data; and (3) asbestos fibers only, substituting the limit of quantification value for censored data. Censored data in the various data sets ranged from 7% to just over 95%. Because all the data sets were censored, the geometric mean and geometric standard deviation were estimated using the maximum likelihood estimation method. Nonparametric, Kaplan-Meier, and lognormal statistics were applied and found to be consistent and reinforcing. All three sets of statistics suggest that the mean and median exposures were less than 25% of the corresponding limit (0.1 f/cc for 8-hr TWA samples; 1.0 f/cc for 30-min samples), and that there is at least 95% confidence that the true 95th percentile exposures are less than 0.1 f/cc as an 8-hr TWA.
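    The censored-lognormal maximum likelihood fit mentioned above can be sketched as follows: detected values contribute the lognormal density, and each non-detect contributes the probability of falling below the limit of quantification (LOQ). All numeric values here are invented for illustration, not the study's data.

```python
import numpy as np
from scipy import stats, optimize

# Illustrative airborne fiber results (f/cc); three additional results were
# censored ("non-detect") below an assumed LOQ.
loq = 0.0035
detected = np.array([0.004, 0.010, 0.007, 0.021, 0.005])
n_censored = 3

def neg_loglik(params):
    mu, log_sigma = params          # lognormal parameters on the log scale
    sigma = np.exp(log_sigma)       # keeps sigma positive during the search
    # Detected values contribute the lognormal density; each censored value
    # contributes the probability of lying below the LOQ.
    ll = stats.norm.logpdf(np.log(detected), mu, sigma).sum()
    ll += n_censored * stats.norm.logcdf((np.log(loq) - mu) / sigma)
    return -ll

res = optimize.minimize(neg_loglik, x0=[np.log(loq), 0.0], method="Nelder-Mead")
gm, gsd = np.exp(res.x[0]), np.exp(np.exp(res.x[1]))  # geometric mean and GSD
print(res.success, gm < 0.008)
```

Note how the three non-detects pull the estimated geometric mean below the geometric mean of the detected values alone.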

  15. The effects of inference method, population sampling, and gene sampling on species tree inferences: an empirical study in slender salamanders (Plethodontidae: Batrachoseps).

    PubMed

    Jockusch, Elizabeth L; Martínez-Solano, Iñigo; Timpe, Elizabeth K

    2015-01-01

    Species tree methods are now widely used to infer the relationships among species from multilocus data sets. Many methods have been developed, which differ in whether gene and species trees are estimated simultaneously or sequentially, and in how gene trees are used to infer the species tree. While these methods perform well on simulated data, less is known about what impacts their performance on empirical data. We used a data set including five nuclear genes and one mitochondrial gene for 22 species of Batrachoseps to compare the effects of method of analysis, within-species sampling and gene sampling on species tree inferences. For this data set, the choice of inference method had the largest effect on the species tree topology. Exclusion of individual loci had large effects in *BEAST and STEM, but not in MP-EST. Different loci carried the greatest leverage in these different methods, showing that the causes of their disproportionate effects differ. Even though substantial information was present in the nuclear loci, the mitochondrial gene dominated the *BEAST species tree. This leverage is inherent to the mtDNA locus and results from its high variation and lower assumed ploidy. This mtDNA leverage may be problematic when mtDNA has undergone introgression, as is likely in this data set. By contrast, the leverage of RAG1 in STEM analyses does not reflect properties inherent to the locus, but rather results from a gene tree that is strongly discordant with all others, and is best explained by introgression between distantly related species. Within-species sampling was also important, especially in *BEAST analyses, as shown by differences in tree topology across 100 subsampled data sets. Despite the sensitivity of the species tree methods to multiple factors, five species groups, the relationships among these, and some relationships within them, are generally consistently resolved for Batrachoseps. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. Character of High Temperature Mylonitic Shear Zones Associated with Oceanic Detachment Faults at the Ultra-Slow Mid-Cayman Rise

    NASA Astrophysics Data System (ADS)

    Marr, C.; John, B. E.; Cheadle, M. J.; German, C. R.

    2014-12-01

    Two well-preserved core complexes at the Mid-Cayman Rise (MCR), Mt Dent and Mt Hudson, provide an opportunity to examine the deformation history and rheology of detachment faults at an ultra-slow spreading ridge. Samples from the CAYTROUGH (1976-77) project and the Nautilus NA034 cruise (2013) were selected for detailed petrographic and microstructural study. Surface samples from Mt. Dent (near the center of the MCR) provide insight into lateral variation in footwall rock type and deformation history across a core complex in both the across- and down-dip directions. In contrast, sampling of Mt. Hudson (SE corner of the MCR) focuses on a high-angle, crosscutting normal fault scarp, which provides a cross section of the detachment fault system. Sampling across Mt Dent reveals that the footwall is composed of heterogeneously distributed gabbro (47%) and peridotite (20%), with basaltic cover (33%) dominating the top of the core complex. Sampling of Mt Hudson is restricted to the normal fault scarp cutting the core complex and suggests the interior is dominated by gabbro (85% gabbro, 11% peridotite, 4% basalt). At Mt. Dent, peridotite is exposed within ~4 km of the breakaway, indicating that the Mt. Dent detachment does not cut Penrose-style oceanic crust. The sample set provides evidence of a full down-temperature sequence of detachment-related fault rocks, from possible granulite-facies and clear amphibolite-facies mylonitization to prehnite-pumpellyite brittle deformation. Both detachments show low-temperature brittle deformation overprinting higher-temperature plastic fabrics. Fe-Ti oxide gabbro mylonites dominate the sample set, and plastic deformation of plagioclase is recorded in samples collected as near as ~4 km from the inferred breakaway along the southern flank of Mt. Dent, suggesting the brittle-plastic transition was initially at ~3 km depth. Recovered samples suggest strain associated with both detachment systems is localized into discrete mylonitic shear zones (~1-10 cm thick), implying that the plastic portion of the fault consists of a broad zone of thin, anastomosing shear zones. Concentrations of Ti-rich magmatic hornblende and interstitial Fe-Ti oxides in the high-strain horizons are consistent with the lowermost part of the fault(s) localizing in the margins of the mush zone of a shallow magma chamber.

  17. Gluten contamination of naturally gluten-free flours and starches used by Canadians with celiac disease.

    PubMed

    Koerner, Terence B; Cleroux, Chantal; Poirier, Christine; Cantin, Isabelle; La Vieille, Sébastien; Hayward, Stephen; Dubois, Sheila

    2013-01-01

    A large national investigation into the extent of gluten cross-contamination of naturally gluten-free ingredients (flours and starches) sold in Canada was performed. Samples (n = 640) were purchased from eight Canadian cities and via the internet during the period 2010-2012 and analysed for gluten contamination. The results showed that 61 of the 640 (9.5%) samples were contaminated above the Codex-recommended maximum level for gluten-free products (20 mg kg⁻¹), with a range of 5-7995 mg kg⁻¹. For the ingredients that were labelled gluten-free, both the contamination range (5-141 mg kg⁻¹) and the number of contaminated samples (3 of 268) were lower. This picture was consistent over time, with approximately the same percentage of samples above 20 mg kg⁻¹ in both the initial set and the subsequent lot. Looking at the total mean (composite) contamination for specific ingredients, the largest and most consistent contamination came from higher-fibre ingredients such as soy (902 mg kg⁻¹), millet (272 mg kg⁻¹) and buckwheat (153 mg kg⁻¹). Of the naturally gluten-free flours and starches tested that do not carry a gluten-free label, the higher-fibre ingredients have the greatest probability of being contaminated with gluten above 20 mg kg⁻¹.
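    The headline contamination rate follows directly from the reported counts:

```python
# 61 of the 640 samples exceeded the Codex 20 mg/kg threshold for
# gluten-free products, giving the 9.5% rate quoted in the abstract.
contaminated, total = 61, 640
rate_percent = 100 * contaminated / total
print(round(rate_percent, 1))  # → 9.5
```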

  18. Rapid assessment of agents of biological terrorism: defining the differential diagnosis of inhalational anthrax using electronic communication in a practice-based research network.

    PubMed

    Temte, Jonathan L; Anderson, Anna Lisa

    2004-01-01

    Early detection of bioterrorism requires assessment of diagnoses assigned to cases of rare diseases with which clinicians have little experience. In this study, we evaluated the process of defining the differential diagnosis for inhalational anthrax using electronic communication within a practice-based research network (PBRN) and compared the results with those obtained from a nationwide random sample of family physicians with a mailed instrument. We distributed survey instruments by e-mail to 55 physician members of the Wisconsin Research Network (WReN), a regional PBRN. The instruments consisted of 3 case vignettes randomly drawn from a set describing 11 patients with inhalational anthrax, 2 with influenza A, and 1 with Legionella pneumonia. Physicians provided their most likely nonanthrax diagnosis, along with their responses to 4 yes-or-no management questions for each case. Physicians who had not responded at 1 week received a second e-mail with the survey instrument. The comparison group consisted of the nationwide sample of physicians who completed mailed survey instruments. Primary outcome measures were response rate, median response time, and frequencies of diagnostic categories assigned to cases of inhalational anthrax. The PBRN response rate compared favorably with that of the national sample (47.3% vs 37.0%; P = not significant). The median response time for the PBRN was significantly shorter than that for the national sample (2 vs 28 days; P < .001). No significant differences were found between the PBRN and the Midwest subset of the national sample in the frequencies of major diagnostic categories or in case management. Electronic means of creating differential diagnoses for rare infectious diseases of national significance is feasible within PBRNs. Information is much more rapidly acquired and is consistent with that obtained by conventional methods.

  19. Multicommuted flow system for the determination of glucose in animal blood serum exploiting enzymatic reaction and chemiluminescence detection

    PubMed Central

    Pires, Cherrine K.; Martelli, Patrícia B.; Lima, José L. F. C.; Saraiva, Maria Lúcia M. F. S.

    2003-01-01

    An automatic flow procedure based on multicommutation, dedicated to the determination of glucose in animal blood serum using glucose oxidase with chemiluminescence detection, is described. The flow manifold consisted of a set of three-way solenoid valves assembled to implement multicommutation. A microcomputer furnished with an electronic interface and software written in Quick BASIC 4.5 controlled the manifold and performed data acquisition. Glucose oxidase was immobilized on porous silica beads (glass aminopropyl) and packed in a minicolumn (15 × 5 mm). The procedure was based on the enzymatic degradation of glucose, producing hydrogen peroxide, which oxidized luminol in the presence of hexacyanoferrate(III), causing the chemiluminescence. The system was tested by analysing a set of animal serum samples without previous treatment. Results were in agreement with those obtained with the conventional method (LABTEST Kit) at the 95% confidence level. The detection limit and variation coefficient were estimated as 12.0 mg l−1 (99.7% confidence level) and 3.5% (n = 20), respectively. The sampling rate was about 60 determinations h−1 with sample concentrations ranging from 50 to 600 mg l−1 glucose. The consumptions of serum sample, hexacyanoferrate(III) and luminol were 46 μl, 10.0 mg and 0.2 mg/determination, respectively. PMID:18924619
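    A common way to arrive at a 99.7%-confidence detection limit like the one quoted is the 3-sigma-of-blank rule; the blank readings and calibration slope below are assumptions for illustration, not the paper's data.

```python
import numpy as np

# Replicate blank chemiluminescence readings (arbitrary units) and an
# assumed calibration slope (signal units per mg/L of glucose).
blank_signals = np.array([10.2, 10.5, 9.8, 10.1, 10.4])
slope = 0.05

# 3-sigma rule: the smallest concentration distinguishable from the blank
# at roughly 99.7% confidence.
lod_mg_per_l = 3 * blank_signals.std(ddof=1) / slope
print(round(lod_mg_per_l, 1))  # → 16.4
```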

  20. Effect of crystallinity and irradiation on thermal properties and specific heat capacity of LDPE & LDPE/EVA.

    PubMed

    Borhani zarandi, Mahmoud; Amrollahi Bioki, Hojjat; Mirbagheri, Zahra-alsadat; Tabbakh, Farshid; Mirjalili, Ghazanfar

    2012-01-01

    In this paper, a series of low-density polyethylene (LDPE) blends with different percentages (10%, 20%, and 30%) of EVA and sets of low-density polyethylene sheets were prepared. This set consists of four subsets, which were made under different cooling methods: fast cooling in liquid nitrogen, cooling with cassette, exposing in open air, and cooling in an oven, to investigate the effects of crystallinity. All of the samples were irradiated with a 10 MeV electron beam in the dose range of 0-250 kGy using a Rhodotron accelerator system. The variations of thermal conductivity (k) and specific heat capacity (C(p)) of all of the samples were measured. We found that, for absorbed doses below 150 kGy, k of the LDPE samples in a prescribed temperature range decreased with increasing dose, but beyond that the change was insignificant. With increasing crystallinity, k of the LDPE samples increased, whereas C(p) decreased. In the case of LDPE/EVA blends, for doses below 150 kGy, C(p) (at 40°C) and k (on average) decreased, but beyond that the change was insignificant. With an increasing amount of additive (EVA), C(p) and k increased. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Paleoenvironmental reconstruction based on palynofacies analyses of the Cansona Formation (Late Cretaceous), Sinú-San Jacinto Basin, northwest Colombia

    NASA Astrophysics Data System (ADS)

    Juliao-Lemus, Tatiana; Carvalho, Marcelo de Araujo; Torres, Diego; Plata, Angelo; Parra, Carlos

    2016-08-01

    To reconstruct the paleoenvironments of the Cansona Formation, a Cretaceous succession in Colombia whose paleoenvironmental interpretation is controversial (variously deep marine or shallow marine), palynofacies analyses were conducted on 93 samples from four sections of the Sinú-San Jacinto Basin in the north, midwest, and southwest sectors. For the palynofacies analyses, the kerogen categories were counted and subjected to cluster analyses. Four palynofacies associations were revealed for the four sections: Palynofacies Association I (PA I), which consisted of microforaminiferal linings, scolecodonts, dinoflagellate cysts, pollen grains, and fungal hyphae; PA II, which consisted of translucent non-biostructured and biostructured phytoclasts and opaque phytoclasts (equidimensional and lath shaped); PA III, which consisted of pseudoamorphous particles, cuticles, resin, and fungal spores; and PA IV, which consisted of fluorescent and non-fluorescent amorphous organic matter and the fresh-water alga Botryococcus. In contrast to early studies that generalized the depositional environment of the Cansona Formation (deep or shallow conditions), this study suggests that the formation reflects conspicuous stratigraphic and lateral changes and hence different depositional environments. The Cerro Cansona (CC4 section) and Chalán (AP section) areas represent more proximal marine settings (Early Campanian-Maastrichtian), with an intermediate setting for the Lorica area (SC section) and deeper conditions for the Montería area (CP2 section).
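    The count-then-cluster workflow above can be sketched with hierarchical clustering on relative kerogen abundances; the counts, category labels, and Ward linkage choice are assumptions for illustration, not the paper's data or exact method.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy kerogen-category counts (rows: samples; columns: e.g. dinocysts,
# phytoclasts, AOM, cuticles). Values are illustrative.
counts = np.array([
    [60, 10,  5, 25],
    [55, 15, 10, 20],
    [58, 12,  8, 22],
    [ 5, 70, 15, 10],
    [ 8, 65, 20,  7],
    [10, 68, 12, 10],
])

# Convert to relative abundances, then cluster with Ward linkage and cut
# the dendrogram into two palynofacies associations.
rel = counts / counts.sum(axis=1, keepdims=True)
Z = linkage(rel, method="ward")
assoc = fcluster(Z, t=2, criterion="maxclust")
print(assoc)
```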

  2. Fiber-Content Measurement of Wool-Cashmere Blends Using Near-Infrared Spectroscopy.

    PubMed

    Zhou, Jinfeng; Wang, Rongwu; Wu, Xiongying; Xu, Bugao

    2017-10-01

    Cashmere and wool are two protein fibers with analogous geometrical attributes but distinct physical properties. Due to its scarcity and unique features, cashmere is a much more expensive fiber than wool. In textile production, cashmere is often intentionally blended with fine wool in order to reduce the material cost. Identifying the fiber content of a wool-cashmere blend is therefore important for quality control and product classification. The goal of this study is to develop a reliable method for estimating fiber contents in wool-cashmere blends based on near-infrared (NIR) spectroscopy. In this study, we prepared two sets of cashmere-wool blends by using either whole fibers or fiber snippets in 11 different blend ratios of the two fibers and collected the NIR spectra of all 22 samples. Of the 11 samples in each set, six were used as a subset for calibration and five as a subset for validation. By referencing the NIR band assignment to chemical bonds in protein, we identified six characteristic wavelength bands where the NIR absorbance powers of the two fibers were significantly different. We then performed chemometric analysis with two multilinear regression (MLR) equations to predict the cashmere content (CC) in a blended sample. The experiment with these samples demonstrated that the predicted CCs from the MLR models were consistent with the CCs given in the preparations of the two sample sets (whole fiber or snippet), and the errors of the predicted CCs could be limited to 0.5% if the testing was performed over at least 25 locations. The MLR models appear reliable and accurate enough for estimating the cashmere content in a wool-cashmere blend and have the potential to be used for tackling the cashmere adulteration problem.
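    The MLR calibration step can be sketched as ordinary least squares on absorbances at a few characteristic bands. Three bands and the numeric values here are assumptions for illustration (the study used six bands and two MLR equations).

```python
import numpy as np

# Illustrative NIR absorbances at three characteristic bands for six
# calibration blends; neither the band choice nor the values are the paper's.
A = np.array([[0.10, 0.30, 0.220],
              [0.14, 0.28, 0.250],
              [0.18, 0.26, 0.280],
              [0.22, 0.24, 0.310],
              [0.26, 0.22, 0.340],
              [0.30, 0.20, 0.370]])
cc = np.array([0., 20., 40., 60., 80., 100.])  # cashmere content (%)

# Multilinear regression with an intercept: cc ≈ X @ beta
X = np.column_stack([np.ones(len(cc)), A])
beta, *_ = np.linalg.lstsq(X, cc, rcond=None)

def predict_cc(absorbances):
    """Predict cashmere content (%) from band absorbances."""
    return float(np.array([1.0, *absorbances]) @ beta)

# A hypothetical 50/50 blend whose absorbances lie on the calibration line.
print(round(predict_cc([0.20, 0.25, 0.295]), 1))  # → 50.0
```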

  3. Development and Validation of a Spanish Version of the Grit-S Scale

    PubMed Central

    Arco-Tirado, Jose L.; Fernández-Martín, Francisco D.; Hoyle, Rick H.

    2018-01-01

    This paper describes the development and initial validation of a Spanish version of the Short Grit (Grit-S) Scale. The Grit-S Scale was adapted and translated into Spanish using the Translation, Review, Adjudication, Pre-testing, and Documentation model and responses to a preliminary set of items from a large sample of university students (N = 1,129). The resultant measure was validated using data from a large stratified random sample of young adults (N = 1,826). Initial validation involved evaluating the internal consistency of the adapted scale and its subscales and comparing the factor structure of the adapted version to that of the original scale. The results were comparable to results from similar analyses of the English version of the scale. Although the internal consistency of the subscales was low, the internal consistency of the full scale was well within the acceptable range. A two-factor model offered an acceptable account of the data; however, when a single correlated error involving two highly similar items was included, a single factor model fit the data very well. The results support the use of overall scores from the Spanish Grit-S Scale in future research. PMID:29467705

  4. Receiving treatment, labor force activity, and work performance among people with psychiatric disorders: results from a population survey.

    PubMed

    Waghorn, Geoffrey; Chant, David

    2011-12-01

    Standard treatments for psychiatric disorders such as schizophrenia, depression and anxiety disorders are generally expected to benefit individuals, employers, and the wider community through improvements in work-functioning and productivity. We repeated a previous secondary investigation of receiving treatment, labor force activity and self-reported work performance among people with ICD-10 psychiatric disorders, in comparison to people with other types of health conditions. Data were collected by the Australian Bureau of Statistics in 2003 repeating a survey administered in 1998 using representative multistage sampling strategies. The 2003 household probability sample consisted of 36,241 working age individuals. Consistent with the previous secondary investigation based on the 1998 survey administration, receiving treatment was consistently associated with non-participation in the labor force, and was negatively associated with work performance. At a population level, receiving treatment was negatively associated with labor force activity and work performance. The stability of these results in two independent surveys highlights the need to investigate the longitudinal relationships between evidence-based treatments for psychiatric conditions as applied in real-world settings, and labor force participation and work performance outcomes.

  5. Measuring African-American parents' cultural mistrust while in a healthcare setting: a pilot study.

    PubMed Central

    Moseley, Kathryn L.; Freed, Gary L.; Bullard, Charrell M.; Goold, Susan D.

    2007-01-01

    BACKGROUND: African Americans' mistrust of healthcare is often cited as a cause of racial disparities in health and has been linked to cultural mistrust. African-American parents' level of cultural mistrust while in a general healthcare setting has not been previously measured. OBJECTIVE: To determine the performance, participant acceptance, feasibility of administration and demographic associations of a measure of cultural mistrust, the Cultural Mistrust Inventory (CMI), in African-American parents seeking healthcare. METHODS: A cross-sectional sample of 69 self-identified African-American parents of minor children recruited in a university-affiliated, urban pediatric/family practice outpatient clinic completed an anonymous, self-administered questionnaire containing demographic items and the CMI. RESULTS: The response rate was 91% (n=63), and 49 (78%) answered all questions. Measured mistrust did not vary with gender, insurance or education. The CMI's internal consistency was similar to previously published studies of the instrument (alpha=0.92). Parents indicating discomfort with the CMI's questions reported significantly less mistrust than parents who did not indicate discomfort (p=0.01). CONCLUSIONS: The CMI is feasible to administer in a clinic setting and demonstrates good internal consistency. It can be a useful tool to assess the effect of cultural mistrust on the healthcare decisions African-American parents make for their children. However, when measuring cultural mistrust in a healthcare setting, respondents' comfort with the survey questions should be assessed. PMID:17304964

  6. Co-expression network analysis identified six hub genes in association with metastasis risk and prognosis in hepatocellular carcinoma

    PubMed Central

    Feng, Juerong; Zhou, Rui; Chang, Ying; Liu, Jing; Zhao, Qiu

    2017-01-01

    Hepatocellular carcinoma (HCC) has a high incidence and mortality worldwide, and its carcinogenesis and progression are influenced by a complex network of gene interactions. A weighted gene co-expression network was constructed to identify gene modules associated with the clinical traits in HCC (n = 214). Among the 13 modules, high correlation was only found between the red module and metastasis risk (classified by the HCC metastasis gene signature) (R2 = −0.74). Moreover, in the red module, 34 network hub genes for metastasis risk were identified, six of which (ABAT, AGXT, ALDH6A1, CYP4A11, DAO and EHHADH) were also hub nodes in the protein-protein interaction network of the module genes. Thus, a total of six hub genes were identified. In validation, all hub genes showed a negative correlation with the four-stage HCC progression (P for trend < 0.05) in the test set. Furthermore, in the training set, HCC samples with any hub gene lowly expressed demonstrated a higher recurrence rate and poorer survival rate (hazard ratios with 95% confidence intervals > 1). RNA-sequencing data of 142 HCC samples showed consistent results in the prognosis. Gene set enrichment analysis (GSEA) demonstrated that in the samples with any hub gene highly expressed, a total of 24 functional gene sets were enriched, most of which focused on amino acid metabolism and oxidation. In conclusion, co-expression network analysis identified six hub genes in association with HCC metastasis risk and prognosis, which might improve the prognosis by influencing amino acid metabolism and oxidation. PMID:28430663
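    The co-expression hub-gene idea can be sketched with a correlation-based adjacency matrix and a connectivity ranking; the toy expression matrix and the soft-threshold power of 6 are assumptions in the style of WGCNA, not the study's actual pipeline or data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy expression matrix (6 genes x 40 samples): genes 0-2 share a common
# driver signal, genes 3-5 are independent noise. Illustrative only.
driver = rng.normal(size=40)
expr = np.vstack([driver + 0.3 * rng.normal(size=40) for _ in range(3)] +
                 [rng.normal(size=40) for _ in range(3)])

# WGCNA-style adjacency: absolute correlation raised to an assumed
# soft-threshold power, with self-connections removed.
adj = np.abs(np.corrcoef(expr)) ** 6
np.fill_diagonal(adj, 0.0)

connectivity = adj.sum(axis=1)        # crude intramodular-connectivity proxy
hub = int(np.argmax(connectivity))    # most connected gene = "hub"
print(hub in (0, 1, 2))
```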

  7. PERSONAL CHARACTERISTICS OF OLDER PRIMARY CARE PATIENTS WHO PROVIDE A BUCCAL SWAB FOR APOE TESTING AND BANKING OF GENETIC MATERIAL: THE SPECTRUM STUDY

    PubMed Central

    Bogner, Hillary R.; Wittink, Marsha N.; Merz, Jon F.; Straton, Joseph B.; Cronholm, Peter F.; Rabins, Peter V.; Gallo, Joseph J.

    2009-01-01

    OBJECTIVE To determine the personal characteristics and reasons associated with providing a buccal swab for APOE genetic testing in a primary care study. METHODS The study sample consisted of 342 adults aged 65 years and older recruited from primary care settings. RESULTS In all, 88% of patients agreed to provide a DNA sample for APOE genotyping and 78% of persons providing a sample agreed to banking of the DNA. Persons aged 80 years and older and African-Americans were less likely to participate in APOE genotyping. Concern about confidentiality was the most common reason for not wanting to provide a DNA sample or to have DNA banked. CONCLUSION We found stronger relationships between sociodemographic variables of age and ethnicity with participation in genetic testing than we did between level of educational attainment, gender, function, cognition, and affect. PMID:15692195

  8. Psychometric evaluation of the Revised Professional Practice Environment (RPPE) scale.

    PubMed

    Erickson, Jeanette Ives; Duffy, Mary E; Ditomassi, Marianne; Jones, Dorothy

    2009-05-01

    The purpose was to examine the psychometric properties of the Revised Professional Practice Environment (RPPE) scale. Despite renewed focus on studying health professionals' practice environments, there are still few reliable and valid instruments available to assist nurse administrators in decision making. A psychometric evaluation using a random-sample cross-validation procedure (calibration sample [CS], n = 775; validation sample [VS], n = 775) was undertaken. Cronbach alpha internal consistency reliability of the total score (r = 0.93 [CS] and 0.92 [VS]), resulting subscale scores (r range: 0.80-0.87 [CS], 0.81-0.88 [VS]), and principal components analyses with Varimax rotation and Kaiser normalization (8 components, 59.2% variance [CS], 59.7% [VS]) produced almost identical results in both samples. The multidimensional RPPE is a psychometrically sound measure of 8 components of the professional practice environment in the acute care setting and sufficiently reliable and valid for use as independent subscales in healthcare research.
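    Cronbach's alpha, the internal-consistency statistic reported above, is straightforward to compute from an item-score matrix; the 5-respondent, 3-item data here are invented, not the RPPE data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative Likert-style responses (rows: respondents; cols: items).
scores = [[4, 5, 4], [3, 3, 2], [5, 4, 5], [2, 2, 3], [4, 4, 4]]
print(round(cronbach_alpha(scores), 2))  # → 0.9
```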

  9. Studies in astronomical time series analysis. III - Fourier transforms, autocorrelation functions, and cross-correlation functions of unevenly spaced data

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.

    1989-01-01

    This paper develops techniques to evaluate the discrete Fourier transform (DFT), the autocorrelation function (ACF), and the cross-correlation function (CCF) of time series which are not evenly sampled. The series may consist of quantized point data (e.g., yes/no processes such as photon arrival). The DFT, which can be inverted to recover the original data and the sampling, is used to compute correlation functions by means of a procedure which is effectively, but not explicitly, an interpolation. The CCF can be computed for two time series not even sampled at the same set of times. Techniques for removing the distortion of the correlation functions caused by the sampling, determining the value of a constant component to the data, and treating unequally weighted data are also discussed. FORTRAN code for the Fourier transform algorithm and numerical examples of the techniques are given.
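    The key observation is that the DFT generalizes directly to arbitrary sample times. As a hedged illustration only (not Scargle's full estimator, which additionally treats the sampling window, inversion, and correlation functions), the generalized DFT can be evaluated at any set of frequencies:

```python
import cmath
import math

def uneven_dft(times, values, freqs):
    # Generalized DFT: F(w) = sum_k x_k * exp(-i * w * t_k),
    # valid for arbitrary (unevenly spaced) sample times t_k.
    return [sum(x * cmath.exp(-1j * w * t) for t, x in zip(times, values))
            for w in freqs]

# Usage: recover the frequency of a cosine sampled at irregular times.
times = [0.1 * k + 0.03 * math.sin(7.0 * k) for k in range(300)]
values = [math.cos(2.0 * t) for t in times]
freqs = [0.5 + 0.0175 * i for i in range(200)]
power = [abs(F) ** 2 for F in uneven_dft(times, values, freqs)]
peak = freqs[max(range(len(freqs)), key=power.__getitem__)]
print(round(peak, 2))  # close to the true angular frequency of 2.0
```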

  10. Assessing Intimate Relationships of Chinese Couples in Taiwan Using the Marital Satisfaction Inventory-Revised.

    PubMed

    Lou, Yu-Chiung; Lin, Chien-Heng; Chen, Chien-Min; Balderrama-Durbin, Christina; Snyder, Douglas K

    2016-06-01

    The current study examined the psychometric characteristics of the Chinese translation of the Marital Satisfaction Inventory-Revised (MSI-R) in a community sample of 117 couples from Taiwan. The Chinese MSI-R demonstrated moderate to strong internal consistency. Confirmatory factor analysis revealed similar scale factor structures in the Taiwanese and U.S. standardization samples. Mean profile comparisons between the current Taiwanese sample and the original MSI-R standardization sample revealed statistically significant but small differences on several subscales. Overall, the psychometric characteristics of the Chinese MSI-R lend support to its use with couples from diverse cultural backgrounds whose sole or preferred language is Chinese. It may also be appropriate to use the MSI-R in clinical settings for prevention or intervention efforts directed at Chinese-speaking couples. The implications of these findings for clinical and research purposes are discussed. © The Author(s) 2015.

  11. Development and Preliminary Validation of Refugee Trauma History Checklist (RTHC)—A Brief Checklist for Survey Studies

    PubMed Central

    Gottvall, Maria; Vaez, Marjan

    2017-01-01

    A high proportion of refugees have been subjected to potentially traumatic experiences (PTEs), including torture. PTEs, and torture in particular, are powerful predictors of mental ill health. This paper reports the development and preliminary validation of a brief refugee trauma checklist applicable for survey studies. Methods: A pool of 232 items was generated based on pre-existing instruments. Conceptualization, item selection and item refinement were conducted based on the existing literature and in collaboration with experts. Ten cognitive interviews using a Think Aloud Protocol (TAP) were performed in a clinical setting, and field testing of the proposed checklist was performed in a total sample of n = 137 asylum seekers from Syria. Results: The proposed refugee trauma history checklist (RTHC) consists of 2 × 8 items, concerning PTEs that occurred before and during the respondents’ flight, respectively. Results show low item non-response and adequate psychometric properties. Conclusions: RTHC is a usable tool for collecting self-report data on refugee trauma history in surveys of community samples. The core set of included events can be augmented, and slight modifications can be applied to RTHC for use in other refugee populations and settings. PMID:28976937

  12. Psychometric Evaluation of the MMPI-2/MMPI-2-RF Restructured Clinical Scales in an Israeli Sample.

    PubMed

    Shkalim, Eleanor

    2015-10-01

    The current study cross-culturally evaluated the psychometric properties of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2)/MMPI-2-Restructured Form Restructured Clinical (RC) Scales in psychiatric settings in Israel with a sample of 100 men and 133 women. Participants were administered the MMPI-2 and were rated by their therapists on a 188-item Patient Description Form. Results indicated that in most instances the RC Scales demonstrated equivalent or better internal consistencies and improved intercorrelation patterns relative to their clinical counterparts. Furthermore, external analyses revealed comparable or improved convergent validity (with the exceptions of Antisocial Behavior [RC4] and Ideas of Persecution [RC6] among men), and mostly greater discriminant validity. Overall, the findings indicate that consistent with previous findings, the RC Scales generally exhibit comparable to improved psychometric properties over the Clinical Scales. Implications of the results, limitations, and recommendations for future research are discussed. © The Author(s) 2014.

  13. A new passive radon-thoron discriminative measurement system.

    PubMed

    Sciocchetti, G; Sciocchetti, A; Giovannoli, P; DeFelice, P; Cardellini, F; Cotellessa, G; Pagliari, M

    2010-10-01

    A new passive radon-thoron discriminative measurement system has been developed for monitoring radon and thoron individually. It consists of a pair of passive integrating devices, each with a CR39 nuclear track detector (NTD). The experimental prototype is based on a new NTD instrument concept developed at ENEA, named Alpha-PREM (an acronym for piston radon exposure meter), which controls detector exposure with a patented sampling technique (Int. Eu. Pat. and US Pat.). The 'twin diffusion chambers system' is based on two A-PREM devices: the standard device, named NTD-Rn, and a modified version, named NTD-Rn/Tn, set up to improve the thoron sampling efficiency of the diffusion chamber without changing the geometry or the start/stop function of the NTD-Rn device. Coupling fittings on each device yield a system that works as a double-chamber structure when deployed at the monitoring position. In this paper both technical and physical aspects are considered.

  14. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    PubMed

    Meng, Yilin; Roux, Benoît

    2015-08-11

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
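    The paper's full multivariate regression framework is described in the original; the hypothetical one-dimensional sketch below illustrates only the underlying idea of linking per-window free-energy estimates without iterating the WHAM equations. Each window's curve is known up to an additive constant, and the offsets that make adjacent windows agree in their overlap have a closed-form least-squares solution (the mean difference); the sequential stitching used here is a simplification of the paper's global regression:

```python
# Each umbrella window i yields a local free-energy curve known only
# up to an additive constant C_i: A_i(x) = A(x) + C_i on its support.
# Choose offsets minimizing the squared mismatch on each pairwise
# overlap (closed form: the mean difference), then merge.

def stitch(windows):
    # windows: list of dicts {x: A_i(x)} over overlapping x-grids
    offsets = [0.0]
    for left, right in zip(windows, windows[1:]):
        overlap = sorted(set(left) & set(right))
        shift = sum(left[x] - right[x] for x in overlap) / len(overlap)
        offsets.append(offsets[-1] + shift)
    # merge into one global profile (averaging where windows overlap)
    merged = {}
    for win, c in zip(windows, offsets):
        for x, a in win.items():
            merged.setdefault(x, []).append(a + c)
    return {x: sum(v) / len(v) for x, v in merged.items()}

# Usage: true A(x) = x**2, three windows with arbitrary constants
grid = [i / 10 for i in range(-20, 21)]
wins = [{x: x * x + c for x in grid[lo:hi]}
        for (lo, hi, c) in [(0, 18, 5.0), (12, 30, -3.0), (24, 41, 9.0)]]
global_A = stitch(wins)
# global_A reproduces x**2 up to the first window's constant (5.0)
```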

  15. Recent and Past Intimate Partner Abuse and HIV Risk Among Young Women

    PubMed Central

    Teitelman, Anne M.; Ratcliffe, Sarah J.; Dichter, Melissa E.; Sullivan, Cris M.

    2011-01-01

    Objective To examine the associations between past intimate partner abuse experienced during adolescence (verbal and physical), recent intimate partner abuse (verbal, physical, and sexual), and HIV risk (as indicated by lack of condom use) for sexually active young adult women in relationships with male partners. Design Secondary data analysis of waves II and III of the National Longitudinal Study of Adolescent Health (Add Health). Setting The Add Health Study is a longitudinal, in-home survey of a nationally representative sample of adolescents. Sample Analyses involved 2,058 sexually active young adult women. Main Outcome Measures HIV risk was measured by consistent condom use over the past 12 months. Results Physical and verbal abuse experienced in adolescence were associated with physical/verbal abuse experienced in young adulthood. Young, sexually active women experiencing no abuse in their relationships were more likely to consistently use condoms in the past 12 months than were their abused counterparts. Conclusion A causal pathway may exist between prior abuse, current abuse, and HIV risk. PMID:18336447

  16. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression

    PubMed Central

    2015-01-01

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost. PMID:26574437

  17. A microfluidics-based technique for automated and rapid labeling of cells for flow cytometry

    NASA Astrophysics Data System (ADS)

    Patibandla, Phani K.; Estrada, Rosendo; Kannan, Manasaa; Sethu, Palaniappan

    2014-03-01

    Flow cytometry is a powerful technique capable of simultaneous multi-parametric analysis of heterogeneous cell populations for research and clinical applications. In recent years, the flow cytometer has been miniaturized and made portable for application in clinical and resource-limited settings. The sample preparation procedure, i.e. labeling of cells with antibodies conjugated to fluorescent labels, is a time-consuming (~45 min) and labor-intensive procedure. Microfluidics provides enabling technologies to accomplish rapid and automated sample preparation. Using an integrated microfluidic device consisting of a labeling and a washing module, we demonstrate a new protocol that eliminates sample handling and accomplishes sample and reagent metering, high-efficiency mixing, labeling and washing in a rapid, automated fashion. The labeling module consists of a long microfluidic channel with an integrated chaotic mixer. Samples and reagents are precisely metered into this device to accomplish rapid and high-efficiency mixing. The mixed sample and reagents are collected in a holding syringe and held for up to 8 min, after which the mixture is introduced into an inertial washing module to obtain 'analysis-ready' samples. The washing module consists of a high-aspect-ratio channel capable of focusing cells to equilibrium positions close to the channel walls. By introducing the cells and labeling reagents in a narrow stream at the center of the channel, flanked on both sides by a wash buffer, the cells are eluted into the wash buffer away from the free unbound antibodies. After initial calibration experiments to determine the appropriate 'holding time' to allow antibody binding, both modules were used in conjunction to label MOLT-3 cells (a T-lymphoblast cell line) with three different antibodies simultaneously. 
Results confirm no significant difference in mean fluorescence intensity values for all three antibody labels (p < 0.01) between the conventional procedure (45 min) and our microfluidic approach (12 min).

  18. DNA barcode identification of black cohosh herbal dietary supplements.

    PubMed

    Baker, David A; Stevenson, Dennis W; Little, Damon P

    2012-01-01

    Black cohosh (Actaea racemosa) herbal dietary supplements are commonly consumed to treat menopausal symptoms, but there are reports of adverse events and toxicities associated with their use. Accidental misidentification and/or deliberate adulteration results in the harvesting of other related species that are then marketed as black cohosh. Some of these species are known to be toxic to humans. We have identified two matK nucleotides that consistently distinguish black cohosh from related species. An assay based on these nucleotides correctly identified all of the black cohosh samples in the validation set. None of the other Actaea species in the validation set were falsely identified as black cohosh. Of 36 dietary supplements sequenced, 27 (75%) had a sequence that exactly matched black cohosh. The remaining nine samples (25%) had a sequence identical to that of three Asian Actaea species (A. cimicifuga, A. dahurica, and A. simplex). Manufacturers should routinely test plant material using a reliable assay to ensure accurate labeling.
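    The shape of such a diagnostic-position check can be sketched as follows. The actual matK positions and expected bases are specified in the paper, not here; the positions and nucleotides below are placeholders only:

```python
# Hypothetical sketch; the real diagnostic matK positions and bases
# are given in the paper, NOT these placeholder values.
DIAGNOSTIC = {120: "A", 480: "T"}   # position -> expected nucleotide

def matches_black_cohosh(seq, diagnostic=DIAGNOSTIC):
    # Flag a sample as black cohosh only if every diagnostic
    # position carries the expected base.
    return all(pos < len(seq) and seq[pos] == base
               for pos, base in diagnostic.items())

# Usage with a toy sequence carrying the placeholder diagnostic bases
seq = "G" * 120 + "A" + "G" * 359 + "T" + "G" * 100
print(matches_black_cohosh(seq))  # True
```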

  19. Computational tools for exact conditional logistic regression.

    PubMed

    Corcoran, C; Mehta, C; Patel, N; Senchaudhuri, P

    Logistic regression analyses are often challenged by the inability of unconditional likelihood-based approximations to yield consistent, valid estimates and p-values for model parameters. This can be due to sparseness or separability in the data. Conditional logistic regression, though useful in such situations, can also be computationally infeasible when the sample size or number of explanatory covariates is large. We review recent developments that allow efficient approximate conditional inference, including Monte Carlo sampling and saddlepoint approximations. We demonstrate through real examples that these methods enable the analysis of significantly larger and more complex data sets. We find in this investigation that for these moderately large data sets Monte Carlo seems the better alternative, as it provides unbiased estimates of the exact results and can be executed in less CPU time than the single saddlepoint approximation. Moreover, the double saddlepoint approximation, while computationally the easiest to obtain, offers little practical advantage. It produces unreliable results and cannot be computed when a maximum likelihood solution does not exist. Copyright 2001 John Wiley & Sons, Ltd.
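    A minimal sketch of the Monte Carlo flavor of conditional inference, for the simplest case of a single binary covariate (this is an illustration of the general idea, not the network algorithms implemented in the software the paper reviews). Conditioning on the number of cases, the sufficient statistic for the covariate effect is T = Σ x_i y_i, and its conditional null distribution can be sampled by permuting the response:

```python
import random

def mc_conditional_pvalue(x, y, n_perm=2000, seed=42):
    # Conditional on sum(y), estimate P(T_perm >= T_obs) for
    # T = sum(x_i * y_i) by permuting y (which preserves sum(y)).
    # The add-one correction keeps the estimate strictly positive,
    # as is standard for Monte Carlo tests.
    rng = random.Random(seed)
    t_obs = sum(a * b for a, b in zip(x, y))
    y = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y)
        if sum(a * b for a, b in zip(x, y)) >= t_obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Usage: perfectly separated data, where unconditional ML estimates
# do not exist but conditional inference still gives a p-value
x = [0] * 10 + [1] * 10
y = [0] * 10 + [1] * 10
p = mc_conditional_pvalue(x, y)
print(p)  # very small: such separation is unlikely under permutation
```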

  20. Constraints on Inner Core Anisotropy Using Array Observations of P'P'

    NASA Astrophysics Data System (ADS)

    Frost, Daniel A.; Romanowicz, Barbara

    2017-11-01

    Recent studies of PKPdf travel times suggest strong anisotropy (4% or more) in the quasi-western inner core hemisphere. However, the availability of paths sampling at low angles to the Earth's rotation axis (the fast axis) is limited. To augment this sampling, we collected a travel time data set for the phase P'P'df (PKPPKPdf), for which at least one inner core leg is quasi-polar, at two high latitude seismic arrays. We find that the inferred anisotropy is weak (on the order of 0.5 to 1.5%), confirming previous results based on a much smaller P'P' data set. While previous models of inner core anisotropy required very strong alignment of anisotropic iron grains, our results are more easily explained by current dynamic models of inner core growth. We observe large travel time anomalies when one leg of P'P'df is along the South Sandwich to Alaska path, consistent with PKPdf observations, and warranting further investigation.

  1. Sampling solution traces for the problem of sorting permutations by signed reversals

    PubMed Central

    2012-01-01

    Background Traditional algorithms to solve the problem of sorting by signed reversals output just one optimal solution, while the space of all optimal solutions can be huge. A so-called trace represents a group of solutions which share the same set of reversals that must be applied to sort the original permutation following a partial ordering. By using traces, we can therefore represent the set of optimal solutions in a more compact way. Algorithms for enumerating the complete set of traces of solutions have been developed. However, due to their exponential complexity, their practical use is limited to small permutations. A partial enumeration of traces is a sampling of the complete set of traces and can be an alternative for the study of distinct evolutionary scenarios of large permutations. Ideally, the sampling should be done uniformly from the space of all optimal solutions. This is, however, conjectured to be #P-complete. Results We propose and evaluate three algorithms for producing a sampling of the complete set of traces that instead can be shown in practice to preserve some of the characteristics of the space of all solutions. The first algorithm (RA) performs the construction of traces through a random selection of reversals on the list of optimal 1-sequences. The second algorithm (DFALT) consists of a slight modification of an algorithm that performs the complete enumeration of traces. Finally, the third algorithm (SWA) is based on a sliding window strategy to improve the enumeration of traces. All proposed algorithms were able to enumerate traces for permutations with up to 200 elements. Conclusions We analysed the distribution of the enumerated traces with respect to their height and average reversal length. Various works indicate that the reversal length can be an important aspect in genome rearrangements. The algorithms RA and SWA show a tendency to lose traces with high average reversal length. 
Such traces are however rare, and qualitatively our results show that, for testable-sized permutations, the algorithms DFALT and SWA produce distributions which approximate the reversal length distributions observed with a complete enumeration of the set of traces. PMID:22704580

  2. Constraining Ω0 with the Angular Size-Redshift Relation of Double-lobed Quasars in the FIRST Survey

    NASA Astrophysics Data System (ADS)

    Buchalter, Ari; Helfand, David J.; Becker, Robert H.; White, Richard L.

    1998-02-01

    In previous attempts to measure cosmological parameters from the angular size-redshift (θ-z) relation of double-lobed radio sources, the observed data have generally been consistent with a static Euclidean universe rather than with standard Friedmann models, and past authors have disagreed significantly as to what effects are responsible for this observation. These results and different interpretations may be due largely to a variety of selection effects and differences in the sample definitions destroying the integrity of the data sets, and inconsistencies in the analysis undermining the results. Using the VLA FIRST survey, we investigate the θ-z relation for a new sample of double-lobed quasars. We define a set of 103 sources, carefully addressing the various potential problems that, we believe, have compromised past work, including a robust definition of size and the completeness and homogeneity of the sample, and further devise a self-consistent method to assure accurate morphological classification and account for finite resolution effects in the analysis. Before focusing on cosmological constraints, we investigate the possible impact of correlations among the intrinsic properties of these sources over the entire assumed range of allowed cosmological parameter values. For all cases, we find apparent size evolution of the form l ~ (1 + z)^c, with c ~ -0.8 ± 0.4, which is found to arise mainly from a power-size correlation of the form l ~ P^β (β ~ -0.13 ± 0.06) coupled with a power-redshift correlation. Intrinsic size evolution is consistent with zero. We also find that in all cases, a subsample with c ~ 0 can be defined, whose θ-z relation should therefore arise primarily from cosmological effects. These results are found to be independent of orientation effects, although other evidence indicates that orientation effects are present and consistent with predictions of the unified scheme for radio-loud active galactic nuclei. 
The above results are all confirmed by nonparametric analysis. Contrary to past work, we find that the observed θ-z relation for our sample is more consistent with standard Friedmann models than with a static Euclidean universe. Though the current data cannot distinguish with high significance between various Friedmann models, significant constraints on the cosmological parameters within a given model are obtained. In particular, we find that a flat, matter-dominated universe (Ω0 = 1), a flat universe with a cosmological constant, and an open universe all provide comparably good fits to the data, with the latter two models both yielding Ω0 ~ 0.35 with 1 σ ranges including values between ~0.25 and 1.0; the c ~ 0 subsamples yield values of Ω0 near unity in these models, though with even greater error ranges. We also examine the values of H0 implied by the data, using plausible assumptions about the intrinsic source sizes, and find these to be consistent with the currently accepted range of values. We determine the sample size needed to improve significantly the results and outline future strategies for such work.

  3. Quantitative Assessment of Molecular Dynamics Sampling for Flexible Systems.

    PubMed

    Nemec, Mike; Hoffmann, Daniel

    2017-02-14

    Molecular dynamics (MD) simulation is a natural method for the study of flexible molecules but at the same time is limited by the large size of the conformational space of these molecules. We ask by how much the MD sampling quality for flexible molecules can be improved by two means: the use of diverse sets of trajectories starting from different initial conformations to detect deviations between samples, and sampling with enhanced methods such as accelerated MD (aMD) or scaled MD (sMD) that distort the energy landscape in controlled ways. To this end, we test the effects of these approaches on MD simulations of two flexible biomolecules in aqueous solution, Met-Enkephalin (5 amino acids) and HIV-1 gp120 V3 (a cycle of 35 amino acids). We assess the convergence of the sampling quantitatively with known, extensive measures of cluster number N_c and cluster distribution entropy S_c and with two new quantities, conformational overlap O_conf and density overlap O_dens, both conveniently ranging from 0 to 1. These new overlap measures quantify self-consistency of sampling in multitrajectory MD experiments, a necessary condition for converged sampling. A comprehensive assessment of sampling quality of MD experiments identifies the combination of diverse trajectory sets and aMD as the most efficient approach among those tested. However, analysis of O_dens between conventional and aMD trajectories also reveals that we have not completely corrected aMD sampling for the distorted energy landscape. Moreover, for V3, the courses of N_c and O_dens indicate that much higher resources than those generally invested today will probably be needed to achieve convergence. The comparative analysis also shows that conventional MD simulations with insufficient sampling can be easily misinterpreted as being converged.
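    The paper defines its own overlap quantities; as a loose illustration of a density-overlap measure that likewise ranges from 0 to 1 (this is a common histogram-overlap statistic, not necessarily the paper's exact definition):

```python
def density_overlap(p, q):
    # Overlap of two normalized histograms over the same bins:
    # sum of bin-wise minima; 1.0 for identical densities, 0.0 for
    # disjoint ones. (Illustrative measure, not the paper's O_dens.)
    return sum(min(a, b) for a, b in zip(p, q))

# Usage: conformational histograms from two independent trajectories
p = [0.1, 0.4, 0.4, 0.1]
q = [0.2, 0.3, 0.3, 0.2]
print(density_overlap(p, q))  # ≈ 0.8
```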

  4. The measurement of childbearing motivation in couples considering the use of assisted reproductive technology.

    PubMed

    Miller, Warren B; Millstein, Susan G; Pasta, David J

    2008-01-01

    Relatively little is known about the motivational antecedents to the use of assisted reproductive technology (ART). In this paper we measure the fertility motivations of infertile couples who are considering the use of ART, using an established instrument, the Childbearing Questionnaire (CBQ). Our sample consists of 214 men and 216 women who were interviewed at home after an initial screening for ART but before making a final decision. We conducted two sets of analyses with the obtained data. In one set, we compared the scores on scales and subscales of the CBQ for the males and females in our sample with the scores for males and females from a comparable normative sample. For these analyses we first examined sample and gender differences with a four-group analysis of variance. We then conducted a series of linear models that included background characteristics as covariates and interactions between sample, gender, and age and between those three variables and the background characteristics. The results showed the expected higher positive and lower negative motivations in the ART sample and a significant effect on positive motivations of the interaction between sample and age. In the second set of analyses, we developed several new subscales relevant to facets of the desire for a child that appear to be important in ART decision-making. These facets include the desire to be genetically related to the child and the desire to experience pregnancy and childbirth. A third facet, the desire for parenthood, is already well covered by the existing subscales. The results showed the new subscales to have satisfactory reliability and validity. The results also showed that the original and new subscales predicted the three facets of the desire for a child in a multivariate context. We conclude with a general discussion of the way our findings relate both to ART decision-making and to further research on the motivations that drive it.

  5. Preparation, certification and interlaboratory analysis of workplace air filters spiked with high-fired beryllium oxide.

    PubMed

    Oatts, Thomas J; Hicks, Cheryl E; Adams, Amy R; Brisson, Michael J; Youmans-McDonald, Linda D; Hoover, Mark D; Ashley, Kevin

    2012-02-01

    Occupational sampling and analysis for multiple elements is generally approached using various approved methods from authoritative government sources such as the National Institute for Occupational Safety and Health (NIOSH), the Occupational Safety and Health Administration (OSHA) and the Environmental Protection Agency (EPA), as well as consensus standards bodies such as ASTM International. The constituents of a sample can exist as unidentified compounds requiring sample preparation to be chosen appropriately, as in the case of beryllium in the form of beryllium oxide (BeO). An interlaboratory study was performed to collect analytical data from volunteer laboratories to examine the effectiveness of methods currently in use for preparation and analysis of samples containing calcined BeO powder. NIST SRM® 1877 high-fired BeO powder (1100 to 1200 °C calcining temperature; count median primary particle diameter 0.12 μm) was used to spike air filter media as a representative form of beryllium particulate matter present in workplace sampling that is known to be resistant to dissolution. The BeO powder standard reference material was gravimetrically prepared in a suspension and deposited onto 37 mm mixed cellulose ester air filters at five different levels between 0.5 μg and 25 μg of Be (as BeO). Sample sets consisting of five BeO-spiked filters (in duplicate) and two blank filters, for a total of twelve unique air filter samples per set, were submitted as blind samples to each of 27 participating laboratories. Participants were instructed to follow their current process for sample preparation and utilize their normal analytical methods for processing samples containing substances of this nature. Laboratories using more than one sample preparation and analysis method were provided with more than one sample set. Results from 34 data sets ultimately received from the 27 volunteer laboratories were subjected to applicable statistical analyses. 
The observed performance data show that sample preparations using nitric acid alone, or combinations of nitric and hydrochloric acids, are not effective for complete extraction of Be from the SRM 1877 refractory BeO particulate matter spiked on air filters; but that effective recovery can be achieved by using sample preparation procedures utilizing either sulfuric or hydrofluoric acid, or by using methodologies involving ammonium bifluoride with heating. Laboratories responsible for quantitative determination of Be in workplace samples that may contain high-fired BeO should use quality assurance schemes that include BeO-spiked sampling media, rather than solely media spiked with soluble Be compounds, and should ensure that methods capable of quantitative digestion of Be from the actual material present are used.

  6. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
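    The sampling scheme described above can be sketched as a mask generator: pick a random center pixel, then include nearby pixels with probability decaying in their distance from the center. The exponential decay profile and the specific parameters below are assumptions for illustration, not the paper's calibrated choices:

```python
import math
import random

def localized_random_mask(rows, cols, n_centers, radius, seed=0):
    # Boolean sampling mask: for each randomly chosen center pixel,
    # nearby pixels are included with probability exp(-d / radius),
    # where d is the distance to the center (assumed decay profile).
    rng = random.Random(seed)
    mask = [[False] * cols for _ in range(rows)]
    for _ in range(n_centers):
        ci, cj = rng.randrange(rows), rng.randrange(cols)
        mask[ci][cj] = True  # the initially selected pixel is measured
        for i in range(rows):
            for j in range(cols):
                d = math.hypot(i - ci, j - cj)
                if rng.random() < math.exp(-d / radius):
                    mask[i][j] = True
    return mask

# Usage: a 32x32 mask built from 10 localized measurement clusters
mask = localized_random_mask(32, 32, n_centers=10, radius=1.5)
coverage = sum(map(sum, mask)) / (32 * 32)
print(round(coverage, 2))  # fraction of pixels sampled
```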

  7. Towards a sampling strategy for the assessment of forest condition at European level: combining country estimates.

    PubMed

    Travaglini, Davide; Fattorini, Lorenzo; Barbati, Anna; Bottalico, Francesca; Corona, Piermaria; Ferretti, Marco; Chirici, Gherardo

    2013-04-01

    A correct characterization of the status and trend of forest condition is essential to support reporting processes at national and international level. An international forest condition monitoring has been implemented in Europe since 1987 under the auspices of the International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests). The monitoring is based on harmonized methodologies, with individual countries being responsible for its implementation. Due to inconsistencies and problems in sampling design, however, the ICP Forests network is not able to produce reliable quantitative estimates of forest condition at European and sometimes at country level. This paper proposes (1) a set of requirements for status and change assessment and (2) a harmonized sampling strategy able to provide unbiased and consistent estimators of forest condition parameters and of their changes at both country and European level. Under the assumption that a common definition of forest holds among European countries, monitoring objectives, parameters of concern and accuracy indexes are stated. On the basis of fixed-area plot sampling performed independently in each country, an unbiased and consistent estimator of forest defoliation indexes is obtained at both country and European level, together with conservative estimators of their sampling variance and power in the detection of changes. The strategy adopts a probabilistic sampling scheme based on fixed-area plots selected by means of systematic or stratified schemes. Operative guidelines for its application are provided.
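    A minimal sketch of the combination step under simplifying assumptions (area-weighted averaging of independent country means, simple random sampling within countries); the paper's actual estimators handle plot-based sampling designs and change detection, which this toy version does not:

```python
def combine_country_estimates(countries):
    # countries: list of (forest_area, mean_defoliation, sample_variance, n)
    # European mean: area-weighted mean of country means; its variance
    # sums the independent country variances of those means.
    total_area = sum(a for a, _, _, _ in countries)
    est = sum(a * m for a, m, _, _ in countries) / total_area
    var = sum((a / total_area) ** 2 * s2 / n for a, _, s2, n in countries)
    return est, var

# Usage: three hypothetical countries (area in kha, defoliation in %)
countries = [(1000.0, 20.0, 16.0, 400), (500.0, 30.0, 25.0, 250),
             (250.0, 25.0, 9.0, 100)]
est, var = combine_country_estimates(countries)
print(round(est, 2), round(var, 4))
```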

  8. Accounting for missing data in the estimation of contemporary genetic effective population size (N(e)).

    PubMed

    Peel, D; Waples, R S; Macbeth, G M; Do, C; Ovenden, J R

    2013-03-01

    Theoretical models are often applied to population genetic data sets without fully considering the effect of missing data. Researchers can deal with missing data by removing individuals that have failed to yield genotypes and/or by removing loci that have failed to yield allelic determinations, but despite their best efforts, most data sets still contain some missing data. As a consequence, realized sample size differs among loci, and this poses a problem for unbiased methods that must explicitly account for random sampling error. One commonly used solution for the calculation of contemporary effective population size (N(e) ) is to calculate the effective sample size as an unweighted mean or harmonic mean across loci. This is not ideal because it fails to account for the fact that loci with different numbers of alleles have different information content. Here we consider this problem for genetic estimators of contemporary effective population size (N(e) ). To evaluate bias and precision of several statistical approaches for dealing with missing data, we simulated populations with known N(e) and various degrees of missing data. Across all scenarios, one method of correcting for missing data (fixed-inverse variance-weighted harmonic mean) consistently performed the best for both single-sample and two-sample (temporal) methods of estimating N(e) and outperformed some methods currently in widespread use. The approach adopted here may be a starting point to adjust other population genetics methods that include per-locus sample size components. © 2012 Blackwell Publishing Ltd.
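    The idea of combining unequal per-locus sample sizes into one effective sample size via a weighted harmonic mean can be illustrated as follows. The weights argument is a placeholder for per-locus weighting; the paper's fixed-inverse-variance weights are not reproduced here.

```python
import numpy as np

def weighted_harmonic_mean(n_per_locus, weights=None):
    """Effective sample size across loci as a (weighted) harmonic mean.
    With no weights this is the plain harmonic mean; per-locus weights
    sketch the inverse-variance weighting the abstract recommends."""
    n = np.asarray(n_per_locus, dtype=float)
    w = np.ones_like(n) if weights is None else np.asarray(weights, dtype=float)
    return w.sum() / (w / n).sum()

# three loci genotyped in 50, 40, and 10 individuals: the harmonic mean
# is pulled down strongly by the poorly genotyped locus
hm = weighted_harmonic_mean([50, 40, 10])
```

    The harmonic mean penalizes loci with many missing genotypes far more than the arithmetic mean would, which is why the choice of averaging matters for the sampling-error correction in N(e) estimators.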

  9. Manual vs. computer-assisted sperm analysis: can CASA replace manual assessment of human semen in clinical practice?

    PubMed

    Talarczyk-Desole, Joanna; Berger, Anna; Taszarek-Hauke, Grażyna; Hauke, Jan; Pawelczyk, Leszek; Jedrzejczak, Piotr

    2017-01-01

    The aim of the study was to assess the quality of a computer-assisted sperm analysis (CASA) system in comparison with the reference manual method, as well as the standardization of computer-assisted semen assessment. The study was conducted between January and June 2015 at the Andrology Laboratory of the Division of Infertility and Reproductive Endocrinology, Poznań University of Medical Sciences, Poland. The study group consisted of 230 men who gave sperm samples for the first time in our center as part of an infertility investigation. The samples underwent manual and computer-assisted assessment of concentration, motility and morphology. A total of 184 samples were examined twice: manually, according to the 2010 WHO recommendations, and with CASA, using the program settings provided by the manufacturer. Additionally, 46 samples underwent two manual analyses and two computer-assisted analyses. A p-value of less than 0.05 was considered statistically significant. Statistically significant differences were found between all of the investigated sperm parameters, except for non-progressive motility, measured with CASA and manually. In the group of patients where all analyses with each method were performed twice on the same sample, we found no significant differences between the two assessments of the same sample, either in the samples analyzed manually or in those analyzed with CASA, although the standard deviation was higher in the CASA group. Our results suggest that computer-assisted sperm analysis requires further improvement for a wider application in clinical practice.

  10. Psychometric properties of the Brunel Mood Scale in Chinese adolescents and adults.

    PubMed

    Zhang, Chun-Qing; Si, Gangyan; Chung, Pak-Kwong; Du, Mengmeng; Terry, Peter C

    2014-01-01

    Building on the work of Terry and colleagues (Terry, P. C., Lane, A. M., Lane, H. J., & Keohane, L. (1999). Development and validation of a mood measure for adolescents. Journal of Sports Sciences, 17, 861-872; Terry, P. C., Lane, A. M., & Fogarty, G. J. (2003). Construct validity of the Profile of Mood States-Adolescents for use with adults. Psychology of Sport & Exercise, 4, 125-139.), the present study examined the validity and internal consistency reliability of the Chinese version of the Brunel Mood Scale (BRUMS-C) among 2,548 participants, comprising adolescent athletes (n = 520), adult athletes (n = 434), adolescent students (n = 673), and adult students (n = 921). Both adolescent and adult athletes completed the BRUMS-C before, during, or after regular training and both adolescent and adult students completed the BRUMS-C in a classroom setting. Confirmatory factor analyses (CFAs) provided support for the factorial validity of a 23-item six-factor model, with one item removed from the hypothesised measurement model. Internal consistency reliabilities were satisfactory for all subscales across each of the four samples. Criterion validity was supported with strong relationships between the BRUMS-C, abbreviated POMS, and Chinese Affect Scale consistent with theoretical predictions. Multi-sample CFAs showed the BRUMS-C to be invariant at the configural, metric, strong, and structural levels for all samples. Furthermore, latent mean difference analyses showed that athletes reported significantly higher levels of fatigue than students while maintaining almost the same levels of vigour, and adolescent students reported significantly higher levels of depressed mood than the other three samples.

  11. Effect of Tricalcium Aluminate on the Physicochemical Properties, Bioactivity, and Biocompatibility of Partially Stabilized Cements

    PubMed Central

    Chang, Kai-Chun; Chang, Chia-Chieh; Huang, Ying-Chieh; Chen, Min-Hua; Lin, Feng-Huei; Lin, Chun-Pin

    2014-01-01

    Background/Purpose Mineral Trioxide Aggregate (MTA) is widely used as a root-end filling material and for vital pulp therapy. A significant disadvantage of MTA is its prolonged setting time, which has limited its application in endodontic treatments. This study examined the physicochemical properties and biological performance of novel partially stabilized cements (PSCs) prepared to address some of the drawbacks of MTA without altering its biological properties. PSC has great potential as a vital pulp therapy material in dentistry. Methods This study examined three experimental groups consisting of samples that were fabricated using sol-gel processes in C3S/C3A molar ratios of 9/1, 7/3, and 5/5 (denoted as PSC-91, PSC-73, and PSC-55, respectively). The comparison group consisted of MTA samples. The setting times, pH variation, compressive strength, morphology, phase composition of hydration products, and ex vivo bioactivity were evaluated. Moreover, biocompatibility was assessed by using lactate dehydrogenase to determine the cytotoxicity and a cell proliferation (WST-1) assay kit to determine cell viability. Mineralization was evaluated using Alizarin Red S staining. Results Crystalline phases, which were determined using X-ray diffraction analysis, confirmed that the C3A contents of the material powders differed. The initial setting times of PSC-73 and PSC-55 ranged between 15 and 25 min; these values are significantly (p<0.05, ANOVA and post-hoc test) lower than those obtained for MTA (165 min) and PSC-91 (80.5 min). All of the PSCs exhibited ex vivo bioactivity when immersed in simulated body fluid. The biocompatibility results for all of the tested cements were as favorable as those of the negative control, except for PSC-55, which exhibited mild cytotoxicity. Conclusion PSC-91 is a favorable material for vital pulp therapy because it exhibits optimal compressive strength, a short setting time, and high biocompatibility and bioactivity.
PMID:25247808

  12. The Validity of a New Structured Assessment of Gastrointestinal Symptoms Scale (SAGIS) for Evaluating Symptoms in the Clinical Setting.

    PubMed

    Koloski, N A; Jones, M; Hammer, J; von Wulffen, M; Shah, A; Hoelz, H; Kutyla, M; Burger, D; Martin, N; Gurusamy, S R; Talley, N J; Holtmann, G

    2017-08-01

    The clinical assessment of patients with gastrointestinal symptoms can be time-consuming, and the symptoms captured during the consultation may be influenced by a variety of patient and non-patient factors. To facilitate standardized symptom assessment in the routine clinical setting, we developed the Structured Assessment of Gastrointestinal Symptom (SAGIS) instrument to precisely characterize patients' symptoms. We aimed to validate the SAGIS, including its reliability, construct and discriminant validity, and utility in the clinical setting. Development of the SAGIS consisted of initial interviews with patients referred for the diagnostic work-up of digestive symptoms, from which relevant complaints were identified. The final instrument consisted of 22 items as well as questions on extraintestinal symptoms and was given to 1120 consecutive patients attending a gastroenterology clinic, randomly split into derivation (n = 596) and validation (n = 551) datasets. Discriminant validity along with test-retest reliability was assessed. The time taken to perform a clinical assessment with and without the SAGIS was recorded, along with doctor satisfaction with this tool. Exploratory factor analysis conducted on the derivation sample suggested five symptom constructs labeled as abdominal pain/discomfort (seven items), gastroesophageal reflux disease/regurgitation symptoms (four items), nausea/vomiting (three items), diarrhea/incontinence (five items), and difficult defecation and constipation (two items). Confirmatory factor analysis conducted on the validation sample supported the initially developed five-factor measurement model ([Formula: see text], p < 0.0001, χ2/df = 4.6, CFI = 0.90, TLI = 0.88, RMSEA = 0.08). All symptom groups demonstrated differentiation between disease groups. The SAGIS was shown to be reliable over time and resulted in a 38% reduction in the time required for clinical assessment.
The SAGIS instrument has excellent psychometric properties and supports the clinical assessment of and symptom-based categorization of patients with a wide spectrum of gastrointestinal symptoms.

  13. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    NASA Astrophysics Data System (ADS)

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. 
It is proved that the preconditioning procedure minimizes the remaining transportation cost among all admissible affine maps. The procedure can be used on both continuous measures and finite sample sets from distributions. In numerical examples, the procedure is applied to multivariate normal distributions, to a two-dimensional shape transform problem and to color transfer problems. For the second topic, we present an extension to anisotropic flows of the recently developed Helmholtz and wave-vortex decomposition method for one-dimensional spectra measured along ship or aircraft tracks in Buhler et al. (J. Fluid Mech., vol. 756, 2014, pp. 1007-1026). While in the original method the flow was assumed to be homogeneous and isotropic in the horizontal plane, we allow the flow to have a simple kind of horizontal anisotropy that is chosen in a self-consistent manner and can be deduced from the one-dimensional power spectra of the horizontal velocity fields and their cross-correlation. The key result is that an exact and robust Helmholtz decomposition of the horizontal kinetic energy spectrum can be achieved in this anisotropic flow setting, which then also allows the subsequent wave-vortex decomposition step. The new method is developed theoretically and tested with encouraging results on challenging synthetic data as well as on ocean data from the Gulf Stream.
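    The affine preconditioning idea from chapter 2 can be illustrated with a whitening-style sketch: map each sample set to zero mean and identity covariance so the two measures are closer before transport. This only matches first and second moments and is not the thesis's optimal affine map pair; the covariances used here are arbitrary examples.

```python
import numpy as np

def affine_precondition(x, y):
    """Whitening-style sketch of affine preconditioning for optimal
    transport: each sample set is mapped to zero mean and identity
    covariance, bringing the two empirical measures closer together."""
    def whiten(z):
        mu = z.mean(axis=0)
        cov = np.cov(z, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)          # cov = V diag(vals) V^T
        w = vecs @ np.diag(vals**-0.5) @ vecs.T   # symmetric cov^{-1/2}
        return (z - mu) @ w
    return whiten(x), whiten(y)

rng = np.random.default_rng(0)
x = rng.multivariate_normal([0, 0], [[4, 1], [1, 2]], size=5000)
y = rng.multivariate_normal([3, -1], [[1, -0.5], [-0.5, 3]], size=5000)
xw, yw = affine_precondition(x, y)
```

    After this step a numerical transport solver would operate on `xw` and `yw`, and the affine maps would be composed back into the final transport map.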

  14. The development of miniplex primer sets for the analysis of degraded DNA

    NASA Astrophysics Data System (ADS)

    McCord, Bruce; Opel, Kerry; Chung, Denise; Drabek, Jiri; Tatarek, Nancy; Meadows Jantz, Lee; Butler, John

    2005-05-01

    In this project, a new set of multiplexed PCR reactions has been developed for the analysis of degraded DNA. These DNA markers, known as Miniplexes, utilize primers that have shorter amplicons for use in short tandem repeat (STR) analysis of degraded DNA. In our work we have defined six of these new STR multiplexes, each of which consists of 3 to 4 reduced-size STR loci, each labeled with a different fluorescent dye. When compared to commercially available STR systems, reductions in size of up to 300 base pairs are possible. In addition, these newly designed amplicons consist of loci that are fully compatible with the national computer DNA database known as CODIS. To demonstrate compatibility with commercial STR kits, a concordance study of 532 DNA samples of Caucasian, African American, and Hispanic origin was undertaken. There was 99.77% concordance between allele calls with the two methods. Of these 532 samples, only 15 samples showed discrepancies at one of 12 loci. These occurred predominantly at 2 loci, vWA and D13S317. DNA sequencing revealed that these locations had deletions between the two primer binding sites. Uncommon deletions like these can be expected in certain samples and will not affect the utility of the Miniplexes as tools for degraded DNA analysis. The Miniplexes were also applied to enzymatically digested DNA to assess their potential in degraded DNA analysis. The results demonstrated a greatly improved efficiency in the analysis of degraded DNA when compared to commercial STR genotyping kits. A series of human skeletal remains that had been exposed to a variety of environmental conditions were also examined. Sixty-four percent of the samples generated full profiles when amplified with the Miniplexes, while only sixteen percent of the samples tested generated full profiles with a commercial kit. In addition, complete profiles were obtained for eleven of the twelve Miniplex loci which had amplicon size ranges less than 200 base pairs. 
These data clearly demonstrate that smaller PCR amplicons provide an attractive alternative to mitochondrial DNA for forensic analysis of degraded DNA.

  15. Thellier GUI: An integrated tool for analyzing paleointensity data from Thellier-type experiments

    NASA Astrophysics Data System (ADS)

    Shaar, Ron; Tauxe, Lisa

    2013-03-01

    Thellier-type experiments are a method used to estimate the intensity of the ancient geomagnetic field from samples carrying thermoremanent magnetization. The analysis of Thellier-type experimental data is conventionally done by manually interpreting data from each specimen individually. The main limitations of this approach are: (1) manual interpretation is highly subjective and can be biased by misleading concepts, (2) the procedure is time consuming, and (3) unless the measurement data are published, the final results cannot be reproduced by readers. These issues compound when trying to combine paleointensity data from a collection of studies. Here, we address these problems by introducing the Thellier GUI: a comprehensive tool for interpreting Thellier-type experimental data. The tool presents a graphical user interface, which allows manual interpretation of the data, but also includes two new interpretation tools: (1) Thellier Auto Interpreter: an automatic interpretation procedure based on a given set of experimental requirements, and (2) Consistency Test: a self-test for the consistency of the results assuming groups of samples that should have the same paleointensity values. We apply the new tools to data from two case studies. These demonstrate that interpretation of non-ideal Arai plots is nonunique and different selection criteria can lead to significantly different conclusions. Hence, we recommend adopting the automatic interpretation approach, as it allows a more objective interpretation, which can be easily repeated or revised by others. When the analysis is combined with a Consistency Test, the credibility of the interpretations is enhanced. We also make the case that published paleointensity studies should include the measurement data (as supplementary files or as contributions to the MagIC database) so that results based on a particular data set can be reproduced and assessed by others.

  16. Predictions of LDEF radioactivity and comparison with measurements

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.; Harmon, B. A.; Laird, C. E.

    1995-01-01

    As part of the program to utilize LDEF data for evaluation and improvement of current ionizing radiation environmental models and related predictive methods for future LEO missions, calculations have been carried out to compare with the induced radioactivity measured in metal samples placed on LDEF. The predicted activation is about a factor of two lower than observed, which is attributed to deficiencies in the AP8 trapped proton model. It is shown that this finding based on activation sample data is consistent with comparisons made with other LDEF activation and dose data. Plans for confirming these results utilizing additional LDEF data sets, and plans for model modifications to improve the agreement with LDEF data, are discussed.

  17. Construction and performance of a dilution-refrigerator based spectroscopic-imaging scanning tunneling microscope.

    PubMed

    Singh, U R; Enayat, M; White, S C; Wahl, P

    2013-01-01

    We report on the set-up and performance of a dilution-refrigerator based spectroscopic-imaging scanning tunneling microscope. It operates at temperatures below 10 mK and in magnetic fields up to 14 T. The system allows for sample transfer and in situ cleavage. We present first results demonstrating atomic resolution and the multi-gap structure of the superconducting gap of NbSe(2) at base temperature. To determine the energy resolution of our system, we have measured a normal metal/vacuum/superconductor tunneling junction consisting of an aluminum tip on a gold sample. Our system allows for continuous measurements at base temperature on time scales of up to ≈170 h.

  18. Antepartum fetal heart rate feature extraction and classification using empirical mode decomposition and support vector machine

    PubMed Central

    2011-01-01

    Background Cardiotocography (CTG) is the most widely used tool for fetal surveillance. The visual analysis of fetal heart rate (FHR) traces largely depends on the expertise and experience of the clinician involved. Several approaches have been proposed for the effective interpretation of FHR. In this paper, a new approach for FHR feature extraction based on empirical mode decomposition (EMD) is proposed, which was used along with a support vector machine (SVM) for the classification of FHR recordings as 'normal' or 'at risk'. Methods FHR traces were recorded from 15 subjects at a sampling rate of 4 Hz, and a dataset consisting of 90 randomly selected records of 20 minutes duration was formed from these. All records were labelled as 'normal' or 'at risk' by two experienced obstetricians. A training set was formed by 60 records, with the remaining 30 forming the testing set. The standard deviations of the EMD components were used as input features to a support vector machine (SVM) to classify the FHR samples. Results For the training set, a five-fold cross-validation test resulted in an accuracy of 86%, whereas the overall geometric mean of sensitivity and specificity was 94.8%. The Kappa value for the training set was .923. Application of the proposed method to the testing set (30 records) resulted in a geometric mean of 81.5%. The Kappa value for the testing set was .684. Conclusions Based on the overall performance of the system, it can be stated that the proposed methodology is a promising new approach for the feature extraction and classification of FHR signals. PMID:21244712
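    The classification step described above can be sketched as follows, assuming the EMD components have already been computed so that each record is summarized by the standard deviations of its components. The feature values and class separation below are synthetic stand-ins, not study data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in features: each row holds the standard deviations of
# one recording's EMD components (the decomposition itself is assumed to
# have been computed elsewhere; these numbers are simulated).
rng = np.random.default_rng(0)
X_normal = rng.normal(1.0, 0.2, size=(45, 5))
X_at_risk = rng.normal(1.6, 0.2, size=(45, 5))
X = np.vstack([X_normal, X_at_risk])
y = np.array([0] * 45 + [1] * 45)  # 0 = 'normal', 1 = 'at risk'

clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X, y, cv=5)  # five-fold cross-validation, as in the study
```

    On real data the feature matrix would come from the EMD of each 20-minute FHR trace, and the labels from the obstetricians' annotations.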

  19. The Relevance of Big Five Trait Content in Behavior to Subjective Authenticity: Do High Levels of Within-Person Behavioral Variability Undermine or Enable Authenticity Achievement?

    PubMed Central

    Fleeson, William; Wilt, Joshua

    2010-01-01

    Individuals vary their behavior from moment to moment a great deal, often acting “out of character” for their traits. This article investigates the consequences for authenticity. We compared two hypotheses—trait consistency: individuals feel most authentic when acting in a way consistent with their traits; and state-content significance: some ways of acting feel more authentic because of their content and consequences, regardless of the actor’s corresponding traits. Three studies using experience-sampling methodology in laboratory and natural settings, with participants aged 18–51, strongly supported the state-content significance hypothesis and did not support the trait-consistency hypothesis. Authenticity was consistently associated with acting highly extraverted, agreeable, conscientious, emotionally stable, and intellectual, regardless of the actor’s traits. Discussion focuses on possible implications for within-person variability in behavior and for the nature of the self-concept. PMID:20545814

  20. A new set-up for simultaneous high-precision measurements of CO2, δ13C-CO2 and δ18O-CO2 on small ice core samples

    NASA Astrophysics Data System (ADS)

    Jenk, Theo Manuel; Rubino, Mauro; Etheridge, David; Ciobanu, Viorela Gabriela; Blunier, Thomas

    2016-08-01

    Palaeoatmospheric records of carbon dioxide and its stable carbon isotope composition (δ13C) obtained from polar ice cores provide important constraints on the natural variability of the carbon cycle. However, the measurements are both analytically challenging and time-consuming; thus only data exist from a limited number of sampling sites and time periods. Additional analytical resources with high analytical precision and throughput are thus desirable to extend the existing datasets. Moreover, consistent measurements derived by independent laboratories and a variety of analytical systems help to further increase confidence in the global CO2 palaeo-reconstructions. Here, we describe our new set-up for simultaneous measurements of atmospheric CO2 mixing ratios and atmospheric δ13C and δ18O-CO2 in air extracted from ice core samples. The centrepiece of the system is a newly designed needle cracker for the mechanical release of air entrapped in ice core samples of 8-13 g operated at -45 °C. The small sample size allows for high resolution and replicate sampling schemes. In our method, CO2 is cryogenically and chromatographically separated from the bulk air and its isotopic composition subsequently determined by continuous flow isotope ratio mass spectrometry (IRMS). In combination with thermal conductivity measurement of the bulk air, the CO2 mixing ratio is calculated. The analytical precision determined from standard air sample measurements over ice is ±1.9 ppm for CO2 and ±0.09 ‰ for δ13C. In a laboratory intercomparison study with CSIRO (Aspendale, Australia), good agreement between CO2 and δ13C results is found for Law Dome ice core samples. Replicate analysis of these samples resulted in a pooled standard deviation of 2.0 ppm for CO2 and 0.11 ‰ for δ13C. These numbers are good, though they are rather conservative estimates of the overall analytical precision achieved for single ice sample measurements. 
Facilitated by the small sample requirement, replicate measurements are feasible, potentially allowing the method precision to be improved. In addition, new analytical approaches are introduced for the accurate correction of the procedural blank and for consistent detection of measurement outliers, based on δ18O-CO2 and the exchange of oxygen between CO2 and the surrounding ice (H2O).
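    The pooled standard deviation reported for the replicate analyses can be computed as follows; the replicate CO2 values shown are hypothetical examples, not measurements from the study.

```python
import numpy as np

def pooled_sd(replicate_groups):
    """Pooled standard deviation across replicate groups, the statistic the
    abstract reports (2.0 ppm for CO2, 0.11 permil for delta-13C). Each
    inner list holds replicate measurements of one ice sample."""
    num, dof = 0.0, 0
    for g in replicate_groups:
        g = np.asarray(g, dtype=float)
        num += (len(g) - 1) * g.var(ddof=1)  # within-group sum of squares
        dof += len(g) - 1
    return float(np.sqrt(num / dof))

# hypothetical replicate CO2 values (ppm) for three ice samples
sd = pooled_sd([[280.1, 282.0], [265.3, 264.1], [300.2, 297.9]])
```

    Pooling the within-sample variances in this way uses every replicate pair while keeping samples with different true CO2 levels from inflating the precision estimate.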

  1. The Kormendy relation of galaxies in the Frontier Fields clusters: Abell S1063 and MACS J1149.5+2223

    NASA Astrophysics Data System (ADS)

    Tortorelli, Luca; Mercurio, Amata; Paolillo, Maurizio; Rosati, Piero; Gargiulo, Adriana; Gobat, Raphael; Balestra, Italo; Caminha, G. B.; Annunziatella, Marianna; Grillo, Claudio; Lombardi, Marco; Nonino, Mario; Rettura, Alessandro; Sartoris, Barbara; Strazzullo, Veronica

    2018-06-01

    We analyse the Kormendy relations (KRs) of the two Frontier Fields clusters, Abell S1063, at z = 0.348, and MACS J1149.5+2223, at z = 0.542, exploiting very deep Hubble Space Telescope photometry and Very Large Telescope (VLT)/Multi Unit Spectroscopic Explorer (MUSE) integral field spectroscopy. With this novel data set, we are able to investigate how the KR parameters depend on the cluster galaxy sample selection and how this affects studies of galaxy evolution based on the KR. We define and compare four different galaxy samples according to (a) Sérsic indices: early-type (`ETG'), (b) visual inspection: `ellipticals', (c) colours: `red', (d) spectral properties: `passive'. The classification is performed for a complete sample of galaxies with mF814W ≤ 22.5 ABmag (M* ≳ 1010.0 M⊙). To derive robust galaxy structural parameters, we use two methods: (1) an iterative estimate of structural parameters using images of increasing size, in order to deal with closely separated galaxies and (2) different background estimations, to deal with the intracluster light contamination. The comparison between the KRs obtained from the different samples suggests that the sample selection could affect the estimate of the best-fitting KR parameters. The KR built with ETGs is fully consistent with the one obtained for ellipticals and passive. On the other hand, the KR slope built on the red sample is only marginally consistent with those obtained with the other samples. We also release the photometric catalogue with structural parameters for the galaxies included in the present analysis.

  2. Estimating time-dependent ROC curves using data under prevalent sampling.

    PubMed

    Li, Shanshan

    2017-04-15

    Prevalent sampling is frequently a convenient and economical sampling technique for the collection of time-to-event data and thus is commonly used in studies of the natural history of a disease. However, it is biased by design because it tends to recruit individuals with longer survival times. This paper considers estimation of time-dependent receiver operating characteristic curves when data are collected under prevalent sampling. To correct the sampling bias, we develop both nonparametric and semiparametric estimators using extended risk sets and the inverse probability weighting techniques. The proposed estimators are consistent and converge to Gaussian processes, while substantial bias may arise if standard estimators for right-censored data are used. To illustrate our method, we analyze data from an ovarian cancer study and estimate receiver operating characteristic curves that assess the accuracy of the composite markers in distinguishing subjects who died within 3-5 years from subjects who remained alive. Copyright © 2016 John Wiley & Sons, Ltd.
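    The core length-bias correction behind the inverse probability weighting can be illustrated on a toy mean-estimation problem: under prevalent sampling a subject is recruited with probability proportional to its survival time T, so weighting each observation by 1/T recovers incident-population quantities. This is only the weighting idea; the paper's time-dependent ROC estimators additionally handle censoring via extended risk sets, which this sketch omits.

```python
import numpy as np

def length_bias_corrected_mean(t):
    """Inverse-probability-weighted mean under length-biased sampling:
    each subject's weight is 1/T, so the estimate reduces to n / sum(1/T).
    Simplified sketch with no censoring."""
    t = np.asarray(t, dtype=float)
    w = 1.0 / t
    return float(np.sum(w * t) / np.sum(w))

rng = np.random.default_rng(1)
true_t = 0.5 + rng.exponential(1.5, size=200_000)     # true mean 2.0
# prevalent-style draw: selection probability proportional to T
biased = rng.choice(true_t, size=100_000, p=true_t / true_t.sum())
naive = biased.mean()                                 # inflated by the design
corrected = length_bias_corrected_mean(biased)        # close to 2.0
```

    The naive sample mean converges to E[T^2]/E[T] rather than E[T], which is the bias the weighting removes.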

  3. Comet nucleus sample return mission

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A comet nucleus sample return mission is discussed in terms of its relevant science objectives, candidate mission concepts, key design/technology requirements, and programmatic issues. The primary objective was to collect a sample of undisturbed comet material from beneath the surface of an active comet, to preserve its chemical and, if possible, its physical integrity, and to return it to Earth in a minimally altered state. The secondary objectives are to: (1) characterize the comet to a level consistent with a rendezvous mission; (2) monitor the comet dynamics through perihelion and aphelion with a long-lived lander; and (3) determine the subsurface properties of the nucleus in an area local to the sampled core. A set of candidate comets is discussed. The hazards which the spacecraft would encounter in the vicinity of the comet are also discussed. The encounter strategy, the sampling hardware, the thermal control of the pristine comet material during the return to Earth, the flight performance of various spacecraft systems, and the cost estimates of such a mission are presented.

  4. The effectiveness of multi-component goal setting interventions for changing physical activity behaviour: a systematic review and meta-analysis.

    PubMed

    McEwan, Desmond; Harden, Samantha M; Zumbo, Bruno D; Sylvester, Benjamin D; Kaulius, Megan; Ruissen, Geralyn R; Dowd, A Justine; Beauchamp, Mark R

    2016-01-01

    Drawing from goal setting theory (Latham & Locke, 1991; Locke & Latham, 2002; Locke et al., 1981), the purpose of this study was to conduct a systematic review and meta-analysis of multi-component goal setting interventions for changing physical activity (PA) behaviour. A literature search returned 41,038 potential articles. Included studies consisted of controlled experimental trials wherein participants in the intervention conditions set PA goals and their PA behaviour was compared to participants in a control group who did not set goals. A meta-analysis was ultimately carried out across 45 articles (comprising 52 interventions, 126 effect sizes, n = 5912) that met eligibility criteria using a random-effects model. Overall, a medium, positive effect (Cohen's d(SE) = .552(.06), 95% CI = .43-.67, Z = 9.03, p < .001) of goal setting interventions in relation to PA behaviour was found. Moderator analyses across 20 variables revealed several noteworthy results with regard to features of the study, sample characteristics, PA goal content, and additional goal-related behaviour change techniques. In conclusion, multi-component goal setting interventions represent an effective method of fostering PA across a diverse range of populations and settings. Implications for effective goal setting interventions are discussed.
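    The random-effects pooling used in such meta-analyses can be sketched with the DerSimonian-Laird estimator. The effect sizes and standard errors below are hypothetical, and the study's exact software, effect-size corrections, and moderator analyses are not reproduced.

```python
import numpy as np

def dersimonian_laird(d, se):
    """Pool standardized effect sizes under a random-effects model
    (DerSimonian-Laird method-of-moments estimate of tau^2)."""
    d, se = np.asarray(d, float), np.asarray(se, float)
    w = 1.0 / se**2                              # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)      # between-study variance
    w_re = 1.0 / (se**2 + tau2)                  # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)
    return d_re, float(np.sqrt(1.0 / np.sum(w_re)))

# hypothetical per-study Cohen's d values and standard errors
d_pooled, se_pooled = dersimonian_laird([0.4, 0.7, 0.55], [0.10, 0.12, 0.08])
```

    With 126 effect sizes, as in the review above, the same pooling would be applied after accounting for the dependence of effects nested within interventions.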

  5. Individualization of pubic hair bacterial communities and the effects of storage time and temperature.

    PubMed

    Williams, Diana W; Gibson, Greg

    2017-01-01

    A potential application of microbial genetics in forensic science is detection of transfer of the pubic hair microbiome between individuals during sexual intercourse using high-throughput sequencing. In addition to the primary need to show whether the pubic hair microbiome is individualizing, one aspect that must be addressed before using the microbiome in criminal casework is the impact of storage on the microbiome of samples recovered for forensic testing. To test the effects of short-term storage, pubic hair samples were collected from volunteers and stored at room temperature (∼20°C), refrigerated (4°C), and frozen (-20°C) for 1 week, 2 weeks, 4 weeks, and 6 weeks, along with a baseline sample. Individual microbial profiles (R² = 0.69) and gender (R² = 0.17) were the greatest sources of variation between samples. Because of this variation, individual and gender could be predicted using Random Forests supervised classification in this sample set, with overall error rates of 2.7% ± 5.8% and 1.7% ± 5.2%, respectively. There was no statistically significant difference attributable to time of sampling or temperature of storage within individuals. Further work on larger sample sets will quantify the temporal consistency of individual profiles and define whether it is plausible to detect transfer between sexual partners. For short-term storage (≤6 weeks), recovery of the microbiome was not affected significantly by either storage time or temperature, suggesting that investigators and crime laboratories can use existing evidence storage methods. Published by Elsevier Ireland Ltd.
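
    R² values of the kind quoted above (fraction of between-sample variation explained by individual or by gender) are commonly obtained by a PERMANOVA-style partition of a between-sample distance matrix. A self-contained sketch of that variance partition, using an assumed toy distance matrix rather than the study's data:

```python
def permanova_r2(dist, labels):
    """Fraction of distance-matrix variation explained by a grouping
    (the R^2 of a one-factor PERMANOVA, without the permutation test)."""
    n = len(labels)
    ss_total = sum(dist[i][j] ** 2 for i in range(n) for j in range(i + 1, n)) / n
    ss_within = 0.0
    for g in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == g]
        ss_g = sum(dist[idx[a]][idx[b]] ** 2
                   for a in range(len(idx)) for b in range(a + 1, len(idx)))
        ss_within += ss_g / len(idx)
    return (ss_total - ss_within) / ss_total

# Toy 4-sample distance matrix: samples 0-1 from one person, 2-3 from another
dist = [[0.0, 0.1, 1.0, 1.0],
        [0.1, 0.0, 1.0, 1.0],
        [1.0, 1.0, 0.0, 0.1],
        [1.0, 1.0, 0.1, 0.0]]
r2 = permanova_r2(dist, ["A", "A", "B", "B"])  # near 1: individual dominates
```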

  6. Analysis of NASA Common Research Model Dynamic Data

    NASA Technical Reports Server (NTRS)

    Balakrishna, S.; Acheson, Michael J.

    2011-01-01

    Recent NASA Common Research Model (CRM) tests at the Langley National Transonic Facility (NTF) and Ames 11-foot Transonic Wind Tunnel (11-foot TWT) have generated an experimental database for CFD code validation. The database consists of force and moment, surface pressures and wideband wing-root dynamic strain/wing Kulite data from continuous sweep pitch polars. The dynamic data sets, acquired at 12,800 Hz sampling rate, are analyzed in this study to evaluate CRM wing buffet onset and potential CRM wing flow separation.

  7. Comparison of the petrography, palynology, and paleobotany of the Little Fire Creek coal bed, southwestern Virginia, USA

    USGS Publications Warehouse

    Pierce, B.S.; Eble, C.F.; Stanton, R.W.

    1995-01-01

    The proximate, petrographic, palynologic, and plant tissue data from two sets of samples indicate a high-ash, gelocollinite- and liptinite-rich coal containing a relatively diverse paleoflora, including lycopsid trees, small lycopsids, tree ferns, small ferns, pteridosperms, and rare calamites and cordaites. The relatively high ash yields, the relatively thin subunits, and the large-scale vertical variations in palynomorph floras suggest that the study area was at the edge of the paleopeat-forming environment. -from Authors

  8. Consistency of Use and Effectiveness of Household Water Treatment among Indian Households Claiming to Treat Their Water.

    PubMed

    Rosa, Ghislaine; Clasen, Thomas

    2017-07-01

    Household water treatment (HWT) can improve drinking water quality and prevent disease if used correctly and consistently by populations at risk. Current international monitoring estimates by the Joint Monitoring Programme for water and sanitation suggest that at least 1.1 billion people practice HWT. These estimates, however, are based on surveys that may overstate the level of consistent use and do not address microbial effectiveness. We sought to assess how HWT is practiced among households identified as HWT users according to these monitoring standards. After a baseline survey (urban: 189 households, rural: 210 households) to identify HWT users, 83 urban and 90 rural households were followed up for 6 weeks. Consistency of reported HWT practices was high in both urban (100%) and rural (93.3%) settings, as was availability of treated water (based on self-report) at all three sampling points (urban: 98.8%, rural: 76.0%). Nevertheless, only 13.7% of urban and 25.8% of rural households identified at baseline as users of adequate HWT had water free of thermotolerant coliforms at all three water sampling points. Our findings raise questions about the value of the data gathered through the international monitoring of HWT as predictors of water quality in the home, and about the ability of HWT, as actually practiced by vulnerable populations, to reduce exposure to waterborne diseases.

  9. SPIRiT: Iterative Self-consistent Parallel Imaging Reconstruction from Arbitrary k-Space

    PubMed Central

    Lustig, Michael; Pauly, John M.

    2010-01-01

    A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self-consistency. The reconstruction problem is formulated as an optimization that yields the solution most consistent with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off-resonance correction and regularization terms that appear in compressed sensing. Several iterative strategies to solve the posed reconstruction problem in both the image and k-space domains are presented. These are based on projection over convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and nonlinear ℓ1-wavelet regularization are also demonstrated. PMID:20665790
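
    The POCS strategy can be illustrated at toy scale as alternating projections between a data-consistency set (acquired k-space samples are restored on every pass) and an image-domain constraint set. Note the hedge: this sketch uses a known image support as the second constraint, whereas SPIRiT itself enforces calibration consistency with a learned kernel; the 1D signal, sizes, and names here are purely illustrative.

```python
import cmath

def dft(x, inverse=False):
    """Direct O(n^2) discrete Fourier transform; fine at toy sizes."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * f * k / n)
               for k in range(n)) for f in range(n)]
    return [v / n for v in out] if inverse else out

def pocs_reconstruct(acquired, support, n, iters=300):
    """Alternating projections: (1) restore the acquired k-space samples,
    (2) zero the image outside its known support."""
    img = [0j] * n
    for _ in range(iters):
        ksp = dft(img)
        for idx, val in acquired.items():        # projection onto data consistency
            ksp[idx] = val
        img = dft(ksp, inverse=True)
        img = [v if i in support else 0j         # projection onto support constraint
               for i, v in enumerate(img)]
    return img

# Toy 1D "image" supported on two pixels, undersampled to 6 of 8 k-space points
true = [0, 0, 1, 2, 0, 0, 0, 0]
full = dft(true)
acquired = {j: full[j] for j in (0, 1, 2, 3, 5, 6)}
recon = pocs_reconstruct(acquired, support={2, 3}, n=8)
```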

  10. Implicit leadership theories in applied settings: factor structure, generalizability, and stability over time.

    PubMed

    Epitropaki, Olga; Martin, Robin

    2004-04-01

    The present empirical investigation had a 3-fold purpose: (a) to cross-validate L. R. Offermann, J. K. Kennedy, and P. W. Wirtz's (1994) scale of Implicit Leadership Theories (ILTs) in several organizational settings and to further provide a shorter scale of ILTs in organizations; (b) to assess the generalizability of ILTs across different employee groups; and (c) to evaluate ILTs' change over time. Two independent samples were used for the scale validation (N1 = 500 and N2 = 439). A 6-factor structure (Sensitivity, Intelligence, Dedication, Dynamism, Tyranny, and Masculinity) was found to most accurately represent ILTs in organizational settings. Regarding the generalizability of ILTs, although the 6-factor structure was consistent across different employee groups, there was only partial support for total factorial invariance. Finally, evaluation of gamma, beta, and alpha change provided support for ILTs' stability over time.

  11. A self-organizing Lagrangian particle method for adaptive-resolution advection-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Reboux, Sylvain; Schrader, Birte; Sbalzarini, Ivo F.

    2012-05-01

    We present a novel adaptive-resolution particle method for continuous parabolic problems. In this method, particles self-organize in order to adapt to local resolution requirements. This is achieved by pseudo forces that are designed so as to guarantee that the solution is always well sampled and that no holes or clusters develop in the particle distribution. The particle sizes are locally adapted to the length scale of the solution. Differential operators are consistently evaluated on the evolving set of irregularly distributed particles of varying sizes using discretization-corrected operators. The method does not rely on any global transforms or mapping functions. After presenting the method and its error analysis, we demonstrate its capabilities and limitations on a set of two- and three-dimensional benchmark problems. These include advection-diffusion, the Burgers equation, the Buckley-Leverett five-spot problem, and curvature-driven level-set surface refinement.

  12. Estimation of the left ventricular shape and motion with a limited number of slices

    NASA Astrophysics Data System (ADS)

    Robert, Anne; Schmitt, Francis J. M.; Mousseaux, Elie

    1996-04-01

    In this paper, we describe a method for the reconstruction of the surface of the left ventricle from a set of lacunary data (that is, an incomplete, unevenly sampled, and unstructured data set). Global models, because they compress the properties of a surface into a small set of parameters, have a strong regularizing power and are therefore very well suited to lacunary data. Globally deformable superquadrics are particularly attractive because of their simplicity. This model can be fitted to the data using the Levenberg-Marquardt algorithm for non-linear optimization. However, the difficulties we experienced in obtaining temporally consistent solutions, as well as the intrinsic 4D character of the data, led us to generalize the classical 3D superquadric model to 4D. We present results on a 4D sequence from the Dynamic Spatial Reconstructor of the Mayo Clinic, and on a 4D MRI sequence.
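
    The paper fits a full 4D globally deformable superquadric; as a deliberately minimal illustration of the same Levenberg-Marquardt idea, the sketch below fits only the exponent m of a 2D superellipse |x/a|^m + |y/b|^m = 1 to noiseless points. All names and data are made up for illustration.

```python
import math

def fit_superellipse_exponent(pts, a, b, m0=2.0, lam=1e-3, iters=100):
    """Levenberg-Marquardt (damped Gauss-Newton) fit of the exponent m in
    the implicit superellipse equation |x/a|^m + |y/b|^m = 1."""
    m = m0
    for _ in range(iters):
        resid, jac = [], []
        for x, y in pts:
            u, v = abs(x / a), abs(y / b)       # assumed nonzero (points off-axis)
            resid.append(u**m + v**m - 1.0)
            jac.append(u**m * math.log(u) + v**m * math.log(v))  # d(resid)/dm
        g = sum(j * r for j, r in zip(jac, resid))   # J^T r
        h = sum(j * j for j in jac)                  # J^T J (a scalar here)
        m -= g / (h + lam)                           # damped update
    return m

# Points sampled from a superellipse with a=2, b=1 and true exponent m=4
m_true = 4.0
pts = [(2.0 * math.cos(t) ** (2 / m_true), math.sin(t) ** (2 / m_true))
       for t in (0.3, 0.6, 0.9, 1.2)]
m_hat = fit_superellipse_exponent(pts, a=2.0, b=1.0)
```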

  13. The Grism Lens-Amplified Survey from Space (GLASS). V. Extent and Spatial Distribution of Star Formation in z ~ 0.5 Cluster Galaxies

    NASA Astrophysics Data System (ADS)

    Vulcani, Benedetta; Treu, Tommaso; Schmidt, Kasper B.; Poggianti, Bianca M.; Dressler, Alan; Fontana, Adriano; Bradač, Marusa; Brammer, Gabriel B.; Hoag, Austin; Huang, Kuan-Han; Malkan, Matthew; Pentericci, Laura; Trenti, Michele; von der Linden, Anja; Abramson, Louis; He, Julie; Morris, Glenn

    2015-12-01

    We present the first study of the spatial distribution of star formation in z ˜ 0.5 cluster galaxies. The analysis is based on data taken with the Wide Field Camera 3 as part of the Grism Lens-Amplified Survey from Space (GLASS). We illustrate the methodology by focusing on two clusters (MACS 0717.5+3745 and MACS 1423.8+2404) with different morphologies (one relaxed and one merging) and use foreground and background galaxies as a field control sample. The cluster+field sample consists of 42 galaxies with stellar masses in the range 10^8-10^11 M⊙ and star formation rates in the range 1-20 M⊙ yr^-1. Both in clusters and in the field, Hα is more extended than the rest-frame UV continuum in 60% of the cases, consistent with diffuse star formation and inside-out growth. In ˜20% of the cases, the Hα emission appears more extended in cluster galaxies than in the field, pointing perhaps to ionized gas being stripped and/or star formation being enhanced at large radii. The peak of the Hα emission and that of the continuum are offset by less than 1 kpc. We investigate trends with the hot gas density as traced by the X-ray emission, and with the surface mass density as inferred from gravitational lens models, and find no conclusive results. The diversity of morphologies and sizes observed in Hα illustrates the complexity of the environmental processes that regulate star formation. Upcoming analysis of the full GLASS data set will increase our sample size by almost an order of magnitude, verifying and strengthening the inference from this initial data set.

  14. Magnetic monopole search with the MoEDAL test trapping detector

    NASA Astrophysics Data System (ADS)

    Katre, Akshay

    2016-11-01

    MoEDAL is designed to search for monopoles produced in high-energy Large Hadron Collider (LHC) collisions, based on two complementary techniques: nuclear-track detectors for high-ionisation signatures and other highly ionising avatars of new physics, and trapping volumes for direct magnetic charge measurements with a superconducting magnetometer. The MoEDAL test trapping detector array deployed in 2012, consisting of over 600 aluminium samples, was analysed and found to be consistent with zero trapped magnetic charge. Stopping acceptances are obtained from a simulation of monopole propagation in matter for a range of charges and masses, allowing model-independent and model-dependent limits to be set on monopole production cross sections. Multiples of the fundamental Dirac magnetic charge are probed for the first time at the LHC.

  15. RNA-seq reveals more consistent reference genes for gene expression studies in human non-melanoma skin cancers

    PubMed Central

    Tan, Jean-Marie; Payne, Elizabeth J.; Lin, Lynlee L.; Sinnya, Sudipta; Raphael, Anthony P.; Lambie, Duncan; Frazer, Ian H.; Dinger, Marcel E.; Soyer, H. Peter

    2017-01-01

    Identification of appropriate reference genes (RGs) is critical to accurate data interpretation in quantitative real-time PCR (qPCR) experiments. In this study, we have utilised next generation RNA sequencing (RNA-seq) to analyse the transcriptome of a panel of non-melanoma skin cancer lesions, identifying genes that are consistently expressed across all samples. Genes encoding ribosomal proteins were amongst the most stable in this dataset. The RNA-seq data were validated by qPCR, confirming the suitability of a set of highly stable genes for use as qPCR RGs. These genes will provide a valuable resource for the normalisation of qPCR data for the analysis of non-melanoma skin cancer. PMID:28852586
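
    A common first-pass stability screen of this kind ranks candidate genes by their coefficient of variation (CV) across samples; dedicated stability metrics such as geNorm or NormFinder refine the idea. A sketch with made-up expression values; the gene choices are placeholders (RPL13A simply stands in for a ribosomal protein gene), not genes reported by this study.

```python
import math

def rank_by_stability(expr):
    """Rank candidate reference genes by coefficient of variation across
    samples (lower CV = more stably expressed)."""
    scored = []
    for gene, values in expr.items():
        mean = sum(values) / len(values)
        sd = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
        scored.append((sd / mean, gene))
    return [gene for cv, gene in sorted(scored)]

# Hypothetical normalized read counts across four lesions
expr = {
    "RPL13A": [100, 102, 98, 101],   # stable: good RG candidate
    "GAPDH":  [50, 85, 40, 95],      # variable: poor RG candidate
    "MKI67":  [10, 60, 5, 80],       # proliferation marker, highly variable
}
ranking = rank_by_stability(expr)
```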

  16. A multi-year data set on aerosol-cloud-precipitation-meteorology interactions for marine stratocumulus clouds.

    PubMed

    Sorooshian, Armin; MacDonald, Alexander B; Dadashazar, Hossein; Bates, Kelvin H; Coggon, Matthew M; Craven, Jill S; Crosbie, Ewan; Hersey, Scott P; Hodas, Natasha; Lin, Jack J; Negrón Marty, Arnaldo; Maudlin, Lindsay C; Metcalf, Andrew R; Murphy, Shane M; Padró, Luz T; Prabhakar, Gouri; Rissman, Tracey A; Shingler, Taylor; Varutbangkul, Varuntida; Wang, Zhen; Woods, Roy K; Chuang, Patrick Y; Nenes, Athanasios; Jonsson, Haflidi H; Flagan, Richard C; Seinfeld, John H

    2018-02-27

    Airborne measurements of meteorological, aerosol, and stratocumulus cloud properties have been harmonized from six field campaigns during July-August months between 2005 and 2016 off the California coast. A consistent set of core instruments was deployed on the Center for Interdisciplinary Remotely-Piloted Aircraft Studies Twin Otter for 113 flight days, amounting to 514 flight hours. A unique aspect of the compiled data set is detailed measurements of aerosol microphysical properties (size distribution, composition, bioaerosol detection, hygroscopicity, optical), cloud water composition, and different sampling inlets to distinguish between clear air aerosol, interstitial in-cloud aerosol, and droplet residual particles in cloud. Measurements and data analysis follow documented methods for quality assurance. The data set is suitable for studies associated with aerosol-cloud-precipitation-meteorology-radiation interactions, especially owing to sharp aerosol perturbations from ship traffic and biomass burning. The data set can be used for model initialization and synergistic application with meteorological models and remote sensing data to improve understanding of the very interactions that comprise the largest uncertainty in the effect of anthropogenic emissions on radiative forcing.

  17. Linking Family Characteristics with Poor Peer Relations: The Mediating Role of Conduct Problems

    PubMed Central

    Bierman, Karen Linn; Smoot, David L.

    2012-01-01

    Parent, teacher, and peer ratings were collected for 75 grade school boys to test the hypothesis that certain family interaction patterns would be associated with poor peer relations. Path analyses provided support for a mediational model, in which punitive and ineffective discipline was related to child conduct problems in home and school settings which, in turn, predicted poor peer relations. Further analyses suggested that distinct subgroups of boys could be identified who exhibited conduct problems at home only, at school only, in both settings, or in neither setting. Boys who exhibited cross-situational conduct problems were more likely to experience multiple concurrent problems (e.g., in both home and school settings) and were more likely than any other group to experience poor peer relations. However, only about one-third of the boys with poor peer relations in this sample exhibited problem profiles consistent with the proposed model (e.g., experienced high rates of punitive/ineffective home discipline and exhibited conduct problems in home and school settings), suggesting that the proposed model reflects one common (but not exclusive) pathway to poor peer relations. PMID:1865049

  18. A multi-year data set on aerosol-cloud-precipitation-meteorology interactions for marine stratocumulus clouds

    PubMed Central

    Sorooshian, Armin; MacDonald, Alexander B.; Dadashazar, Hossein; Bates, Kelvin H.; Coggon, Matthew M.; Craven, Jill S.; Crosbie, Ewan; Hersey, Scott P.; Hodas, Natasha; Lin, Jack J.; Negrón Marty, Arnaldo; Maudlin, Lindsay C.; Metcalf, Andrew R.; Murphy, Shane M.; Padró, Luz T.; Prabhakar, Gouri; Rissman, Tracey A.; Shingler, Taylor; Varutbangkul, Varuntida; Wang, Zhen; Woods, Roy K.; Chuang, Patrick Y.; Nenes, Athanasios; Jonsson, Haflidi H.; Flagan, Richard C.; Seinfeld, John H.

    2018-01-01

    Airborne measurements of meteorological, aerosol, and stratocumulus cloud properties have been harmonized from six field campaigns during July-August months between 2005 and 2016 off the California coast. A consistent set of core instruments was deployed on the Center for Interdisciplinary Remotely-Piloted Aircraft Studies Twin Otter for 113 flight days, amounting to 514 flight hours. A unique aspect of the compiled data set is detailed measurements of aerosol microphysical properties (size distribution, composition, bioaerosol detection, hygroscopicity, optical), cloud water composition, and different sampling inlets to distinguish between clear air aerosol, interstitial in-cloud aerosol, and droplet residual particles in cloud. Measurements and data analysis follow documented methods for quality assurance. The data set is suitable for studies associated with aerosol-cloud-precipitation-meteorology-radiation interactions, especially owing to sharp aerosol perturbations from ship traffic and biomass burning. The data set can be used for model initialization and synergistic application with meteorological models and remote sensing data to improve understanding of the very interactions that comprise the largest uncertainty in the effect of anthropogenic emissions on radiative forcing. PMID:29485627

  19. A multi-year data set on aerosol-cloud-precipitation-meteorology interactions for marine stratocumulus clouds

    NASA Astrophysics Data System (ADS)

    Sorooshian, Armin; MacDonald, Alexander B.; Dadashazar, Hossein; Bates, Kelvin H.; Coggon, Matthew M.; Craven, Jill S.; Crosbie, Ewan; Hersey, Scott P.; Hodas, Natasha; Lin, Jack J.; Negrón Marty, Arnaldo; Maudlin, Lindsay C.; Metcalf, Andrew R.; Murphy, Shane M.; Padró, Luz T.; Prabhakar, Gouri; Rissman, Tracey A.; Shingler, Taylor; Varutbangkul, Varuntida; Wang, Zhen; Woods, Roy K.; Chuang, Patrick Y.; Nenes, Athanasios; Jonsson, Haflidi H.; Flagan, Richard C.; Seinfeld, John H.

    2018-02-01

    Airborne measurements of meteorological, aerosol, and stratocumulus cloud properties have been harmonized from six field campaigns during July-August months between 2005 and 2016 off the California coast. A consistent set of core instruments was deployed on the Center for Interdisciplinary Remotely-Piloted Aircraft Studies Twin Otter for 113 flight days, amounting to 514 flight hours. A unique aspect of the compiled data set is detailed measurements of aerosol microphysical properties (size distribution, composition, bioaerosol detection, hygroscopicity, optical), cloud water composition, and different sampling inlets to distinguish between clear air aerosol, interstitial in-cloud aerosol, and droplet residual particles in cloud. Measurements and data analysis follow documented methods for quality assurance. The data set is suitable for studies associated with aerosol-cloud-precipitation-meteorology-radiation interactions, especially owing to sharp aerosol perturbations from ship traffic and biomass burning. The data set can be used for model initialization and synergistic application with meteorological models and remote sensing data to improve understanding of the very interactions that comprise the largest uncertainty in the effect of anthropogenic emissions on radiative forcing.

  20. Diagnosis and molecular characterization of Trichomonas vaginalis in sex workers in the Philippines

    PubMed Central

    Queza, Macario Ireneo P; Rivera, Windell L

    2013-01-01

    Trichomonas vaginalis is a pathogenic protozoon which causes the sexually transmitted infection trichomoniasis. The absence or non-specificity of symptoms often leads to misdiagnosis of the infection. In this study, 969 samples consisting of vaginal swabs and urine were collected and screened from social hygiene clinics across the Philippines. Of the 969 samples, 216 were used for a comparative analysis of diagnostic tools: wet mount microscopy, culture, and PCR utilizing the universal trichomonad primers TFR1/2 and the species-specific primers TVK3/7 and TV1/2. PCR demonstrated a higher sensitivity of 100%, compared with 77% for wet mount microscopy. The PCR primer set TVK3/7 and culture showed the same, and the best, expected average performance [receiver-operating characteristic (ROC): 0.98]. Prevalence of infection in the sample population was 6.8%. PMID:23683368
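
    The sensitivity comparison above reduces to simple confusion-matrix arithmetic against a reference standard. A sketch with made-up counts (toy data chosen only to mirror the 100% vs 77% pattern, not the study's actual tallies):

```python
def sensitivity_specificity(test, reference):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), with the
    reference standard defining true infection status."""
    tp = sum(t and r for t, r in zip(test, reference))
    fn = sum(r and not t for t, r in zip(test, reference))
    tn = sum(not r and not t for t, r in zip(test, reference))
    fp = sum(t and not r for t, r in zip(test, reference))
    return tp / (tp + fn), tn / (tn + fp)

# 10 truly infected and 10 uninfected samples (hypothetical)
reference = [True] * 10 + [False] * 10
pcr       = [True] * 10 + [False] * 10   # detects every infection
wet_mount = [True] * 8 + [False] * 12    # misses 2 of 10 infections
```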

  1. Neutron, fluorescence, and optical imaging: An in situ combination of complementary techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, D.; Egelhaaf, S. U.; Hermes, H. E.

    2015-09-15

    An apparatus which enables the simultaneous combination of three complementary imaging techniques, optical imaging, fluorescence imaging, and neutron radiography, is presented. While each individual technique can provide information on certain aspects of the sample and their time evolution, a combination of the three techniques in one setup provides a more complete and consistent data set. The setup can be used in transmission and reflection modes and thus with optically transparent as well as opaque samples. Its capabilities are illustrated with two examples. A polymer hydrogel represents a transparent sample, and the diffusion of fluorescent particles into and through this polymer matrix is followed. In reflection mode, the absorption of solvent by a nile red-functionalized mesoporous silica powder and the corresponding change in fluorescent signal are studied.

  2. A global analysis of Y-chromosomal haplotype diversity for 23 STR loci

    PubMed Central

    Purps, Josephine; Siegert, Sabine; Willuweit, Sascha; Nagy, Marion; Alves, Cíntia; Salazar, Renato; Angustia, Sheila M.T.; Santos, Lorna H.; Anslinger, Katja; Bayer, Birgit; Ayub, Qasim; Wei, Wei; Xue, Yali; Tyler-Smith, Chris; Bafalluy, Miriam Baeta; Martínez-Jarreta, Begoña; Egyed, Balazs; Balitzki, Beate; Tschumi, Sibylle; Ballard, David; Court, Denise Syndercombe; Barrantes, Xinia; Bäßler, Gerhard; Wiest, Tina; Berger, Burkhard; Niederstätter, Harald; Parson, Walther; Davis, Carey; Budowle, Bruce; Burri, Helen; Borer, Urs; Koller, Christoph; Carvalho, Elizeu F.; Domingues, Patricia M.; Chamoun, Wafaa Takash; Coble, Michael D.; Hill, Carolyn R.; Corach, Daniel; Caputo, Mariela; D’Amato, Maria E.; Davison, Sean; Decorte, Ronny; Larmuseau, Maarten H.D.; Ottoni, Claudio; Rickards, Olga; Lu, Di; Jiang, Chengtao; Dobosz, Tadeusz; Jonkisz, Anna; Frank, William E.; Furac, Ivana; Gehrig, Christian; Castella, Vincent; Grskovic, Branka; Haas, Cordula; Wobst, Jana; Hadzic, Gavrilo; Drobnic, Katja; Honda, Katsuya; Hou, Yiping; Zhou, Di; Li, Yan; Hu, Shengping; Chen, Shenglan; Immel, Uta-Dorothee; Lessig, Rüdiger; Jakovski, Zlatko; Ilievska, Tanja; Klann, Anja E.; García, Cristina Cano; de Knijff, Peter; Kraaijenbrink, Thirsa; Kondili, Aikaterini; Miniati, Penelope; Vouropoulou, Maria; Kovacevic, Lejla; Marjanovic, Damir; Lindner, Iris; Mansour, Issam; Al-Azem, Mouayyad; Andari, Ansar El; Marino, Miguel; Furfuro, Sandra; Locarno, Laura; Martín, Pablo; Luque, Gracia M.; Alonso, Antonio; Miranda, Luís Souto; Moreira, Helena; Mizuno, Natsuko; Iwashima, Yasuki; Neto, Rodrigo S. 
Moura; Nogueira, Tatiana L.S.; Silva, Rosane; Nastainczyk-Wulf, Marina; Edelmann, Jeanett; Kohl, Michael; Nie, Shengjie; Wang, Xianping; Cheng, Baowen; Núñez, Carolina; Pancorbo, Marian Martínez de; Olofsson, Jill K.; Morling, Niels; Onofri, Valerio; Tagliabracci, Adriano; Pamjav, Horolma; Volgyi, Antonia; Barany, Gusztav; Pawlowski, Ryszard; Maciejewska, Agnieszka; Pelotti, Susi; Pepinski, Witold; Abreu-Glowacka, Monica; Phillips, Christopher; Cárdenas, Jorge; Rey-Gonzalez, Danel; Salas, Antonio; Brisighelli, Francesca; Capelli, Cristian; Toscanini, Ulises; Piccinini, Andrea; Piglionica, Marilidia; Baldassarra, Stefania L.; Ploski, Rafal; Konarzewska, Magdalena; Jastrzebska, Emila; Robino, Carlo; Sajantila, Antti; Palo, Jukka U.; Guevara, Evelyn; Salvador, Jazelyn; Ungria, Maria Corazon De; Rodriguez, Jae Joseph Russell; Schmidt, Ulrike; Schlauderer, Nicola; Saukko, Pekka; Schneider, Peter M.; Sirker, Miriam; Shin, Kyoung-Jin; Oh, Yu Na; Skitsa, Iulia; Ampati, Alexandra; Smith, Tobi-Gail; Calvit, Lina Solis de; Stenzl, Vlastimil; Capal, Thomas; Tillmar, Andreas; Nilsson, Helena; Turrina, Stefania; De Leo, Domenico; Verzeletti, Andrea; Cortellini, Venusia; Wetton, Jon H.; Gwynne, Gareth M.; Jobling, Mark A.; Whittle, Martin R.; Sumita, Denilce R.; Wolańska-Nowak, Paulina; Yong, Rita Y.Y.; Krawczak, Michael; Nothnagel, Michael; Roewer, Lutz

    2014-01-01

    In a worldwide collaborative effort, 19,630 Y-chromosomes were sampled from 129 different populations in 51 countries. These chromosomes were typed for 23 short-tandem repeat (STR) loci (DYS19, DYS389I, DYS389II, DYS390, DYS391, DYS392, DYS393, DYS385ab, DYS437, DYS438, DYS439, DYS448, DYS456, DYS458, DYS635, GATAH4, DYS481, DYS533, DYS549, DYS570, DYS576, and DYS643) using the PowerPlex Y23 System (PPY23, Promega Corporation, Madison, WI). Locus-specific allelic spectra of these markers were determined and a consistently high level of allelic diversity was observed. A considerable number of null, duplicate and off-ladder alleles were revealed. Standard single-locus and haplotype-based parameters were calculated and compared between subsets of Y-STR markers established for forensic casework. The PPY23 marker set provides substantially stronger discriminatory power than other available kits but at the same time reveals the same general patterns of population structure as other marker sets. A strong correlation was observed between the number of Y-STRs included in a marker set and some of the forensic parameters under study. Interestingly, a weak but consistent trend toward smaller genetic distances resulting from larger numbers of markers became apparent. PMID:24854874
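
    Two of the standard haplotype-based forensic parameters mentioned, discrimination capacity and random match probability, follow directly from haplotype frequencies. A sketch on toy haplotypes (repeat numbers invented for illustration, not drawn from the study's database):

```python
from collections import Counter

def haplotype_stats(haplotypes):
    """Discrimination capacity (distinct haplotypes / sample size) and
    random match probability (sum of squared haplotype frequencies)."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    dc = len(counts) / n
    rmp = sum((c / n) ** 2 for c in counts.values())
    return dc, rmp

# Toy haplotypes as tuples of repeat numbers at a few Y-STR loci
haps = [(14, 12, 23), (14, 12, 23), (15, 12, 24), (16, 13, 22), (14, 11, 23)]
dc, rmp = haplotype_stats(haps)
```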

  3. Auditory proactive interference in monkeys: the roles of stimulus set size and intertrial interval.

    PubMed

    Bigelow, James; Poremba, Amy

    2013-09-01

    We conducted two experiments to examine the influences of stimulus set size (the number of stimuli that are used throughout the session) and intertrial interval (ITI, the elapsed time between trials) in auditory short-term memory in monkeys. We used an auditory delayed matching-to-sample task wherein the animals had to indicate whether two sounds separated by a 5-s retention interval were the same (match trials) or different (nonmatch trials). In Experiment 1, we randomly assigned stimulus set sizes of 2, 4, 8, 16, 32, 64, or 192 (trial-unique) for each session of 128 trials. Consistent with previous visual studies, overall accuracy was consistently lower when smaller stimulus set sizes were used. Further analyses revealed that these effects were primarily caused by an increase in incorrect "same" responses on nonmatch trials. In Experiment 2, we held the stimulus set size constant at four for each session and alternately set the ITI at 5, 10, or 20 s. Overall accuracy improved when the ITI was increased from 5 to 10 s, but it was the same across the 10- and 20-s conditions. As in Experiment 1, the overall decrease in accuracy during the 5-s condition was caused by a greater number of false "match" responses on nonmatch trials. Taken together, Experiments 1 and 2 showed that auditory short-term memory in monkeys is highly susceptible to proactive interference caused by stimulus repetition. Additional analyses of the data from Experiment 1 suggested that monkeys may make same-different judgments on the basis of a familiarity criterion that is adjusted by error-related feedback.

  4. Assessing Agreement between Multiple Raters with Missing Rating Information, Applied to Breast Cancer Tumour Grading

    PubMed Central

    Ellis, Ian O.; Green, Andrew R.; Hanka, Rudolf

    2008-01-01

    Background We consider the problem of assessing inter-rater agreement when there are missing data and a large number of raters. Previous studies have shown only ‘moderate’ agreement between pathologists in grading breast cancer tumour specimens. We analyse a large but incomplete data-set consisting of 24177 grades, on a discrete 1–3 scale, provided by 732 pathologists for 52 samples. Methodology/Principal Findings We review existing methods for analysing inter-rater agreement for multiple raters and demonstrate two further methods. Firstly, we examine a simple non-chance-corrected agreement score based on the observed proportion of agreements with the consensus for each sample, which makes no allowance for missing data. Secondly, treating grades as lying on a continuous scale representing tumour severity, we use a Bayesian latent trait method to model cumulative probabilities of assigning grade values as functions of the severity and clarity of the tumour and of rater-specific parameters representing boundaries between grades 1–2 and 2–3. We simulate from the fitted model to estimate, for each rater, the probability of agreement with the majority. Both methods suggest that there are differences between raters in terms of rating behaviour, most often caused by consistent over- or under-estimation of the grade boundaries, and also considerable variability in the distribution of grades assigned to many individual samples. The Bayesian model addresses the tendency of the agreement score to be biased upwards for raters who, by chance, see a relatively ‘easy’ set of samples. Conclusions/Significance Latent trait models can be adapted to provide novel information about the nature of inter-rater agreement when the number of raters is large and there are missing data. In this large study there is substantial variability between pathologists and uncertainty in the identity of the ‘true’ grade of many of the breast cancer tumours, a fact often ignored in clinical studies. 
PMID:18698346
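
    The first of the two methods above, the non-chance-corrected agreement score, is simple enough to sketch directly: take the modal (consensus) grade per sample over whatever ratings exist, then score each rater over only the samples they actually graded, so missing ratings need no imputation. Names and data below are illustrative.

```python
from collections import Counter

def agreement_with_consensus(ratings):
    """Per-rater proportion of agreement with the per-sample consensus
    (modal) grade. `ratings[rater][sample]` is a grade on a 1-3 scale;
    missing ratings are simply absent from the inner dict."""
    by_sample = {}
    for rater, graded in ratings.items():
        for sample, grade in graded.items():
            by_sample.setdefault(sample, []).append(grade)
    consensus = {s: Counter(g).most_common(1)[0][0] for s, g in by_sample.items()}
    scores = {}
    for rater, graded in ratings.items():
        agree = sum(grade == consensus[s] for s, grade in graded.items())
        scores[rater] = agree / len(graded)   # denominator: samples this rater saw
    return scores

# Three hypothetical pathologists; p2 did not grade sample s3
ratings = {"p1": {"s1": 1, "s2": 2, "s3": 3},
           "p2": {"s1": 1, "s2": 2},
           "p3": {"s1": 2, "s2": 2, "s3": 3}}
scores = agreement_with_consensus(ratings)
```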

  5. Cosmology and astrophysics from relaxed galaxy clusters - II. Cosmological constraints

    NASA Astrophysics Data System (ADS)

    Mantz, A. B.; Allen, S. W.; Morris, R. G.; Rapetti, D. A.; Applegate, D. E.; Kelly, P. L.; von der Linden, A.; Schmidt, R. W.

    2014-05-01

    This is the second in a series of papers studying the astrophysics and cosmology of massive, dynamically relaxed galaxy clusters. The data set employed here consists of Chandra observations of 40 such clusters, identified in a comprehensive search of the Chandra archive for hot (kT ≳ 5 keV), massive, morphologically relaxed systems, as well as high-quality weak gravitational lensing data for a subset of these clusters. Here we present cosmological constraints from measurements of the gas mass fraction, fgas, for this cluster sample. By incorporating a robust gravitational lensing calibration of the X-ray mass estimates, and restricting our measurements to the most self-similar and accurately measured regions of clusters, we significantly reduce systematic uncertainties compared to previous work. Our data for the first time constrain the intrinsic scatter in fgas, 7.4 ± 2.3 per cent in a spherical shell at radii 0.8-1.2 r2500 (˜1/4 of the virial radius), consistent with the expected level of variation in gas depletion and non-thermal pressure for relaxed clusters. From the lowest redshift data in our sample, five clusters at z < 0.16, we obtain a constraint on a combination of the Hubble parameter and cosmic baryon fraction, h^{3/2} Ωb/Ωm = 0.089 ± 0.012, that is insensitive to the nature of dark energy. Combining this with standard priors on h and Ωb h^2 provides a tight constraint on the cosmic matter density, Ωm = 0.27 ± 0.04, which is similarly insensitive to dark energy. Using the entire cluster sample, extending to z > 1, we obtain consistent results for Ωm and interesting constraints on dark energy: Ω_Λ = 0.65^{+0.17}_{-0.22} for non-flat ΛCDM (cosmological constant) models, and w = -0.98 ± 0.26 for flat models with a constant dark energy equation of state. Our results are both competitive and consistent with those from recent cosmic microwave background, Type Ia supernova and baryon acoustic oscillation data.
We present constraints on more complex models of evolving dark energy from the combination of fgas data with these external data sets, and comment on the possibilities for improved fgas constraints using current and next-generation X-ray observatories and lensing data.
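    The degeneracy combination quoted above can be illustrated numerically. A minimal sketch (not the paper's likelihood analysis), assuming representative priors h ≈ 0.738 and Ω_b h² ≈ 0.0222, both chosen here purely for illustration:

```python
# Illustrative: solve h^{3/2} * Omega_b / Omega_m = 0.089 for Omega_m,
# under assumed (hypothetical) priors on h and Omega_b * h^2.
combo = 0.089          # h^{3/2} * Omega_b / Omega_m, from the fgas data
h = 0.738              # assumed Hubble prior (HST-like value)
omega_b_h2 = 0.0222    # assumed baryon density prior (BBN/CMB-like value)

omega_b = omega_b_h2 / h**2
omega_m = h**1.5 * omega_b / combo
print(round(omega_m, 3))  # → 0.29, consistent with the quoted 0.27 ± 0.04
```

The result depends only weakly on dark energy because fgas at low redshift is nearly cosmology-independent, which is the point of the constraint.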

  6. The Discovery of Novel Biomarkers Improves Breast Cancer Intrinsic Subtype Prediction and Reconciles the Labels in the METABRIC Data Set

    PubMed Central

    Milioli, Heloisa Helena; Vimieiro, Renato; Riveros, Carlos; Tishchenko, Inna; Berretta, Regina; Moscato, Pablo

    2015-01-01

    Background The prediction of breast cancer intrinsic subtypes has been introduced as a valuable strategy to determine patient diagnosis and prognosis, and therapy response. The PAM50 method, based on the expression levels of 50 genes, uses a single sample predictor model to assign subtype labels to samples. Intrinsic errors reported within this assay demonstrate the challenge of identifying and understanding the breast cancer groups. In this study, we aim to: a) identify novel biomarkers for subtype individuation by exploring the performance of a newly proposed method, the CM1 score, and b) apply ensemble learning, as opposed to the use of a single classifier, for sample subtype assignment. The overarching objective is to improve class prediction. Methods and Findings The microarray transcriptome data sets used in this study are: the METABRIC breast cancer data recorded for over 2000 patients, and the public integrated source from the ROCK database with 1570 samples. We first computed the CM1 score to identify the probes with highly discriminative patterns of expression across samples of each intrinsic subtype. We further assessed the ability of 42 selected probes to assign correct subtype labels using 24 different classifiers from the Weka software suite. For comparison, the same method was applied to the list of 50 genes from the PAM50 method. Conclusions The CM1 score portrayed 30 novel biomarkers for predicting breast cancer subtypes, with the confirmation of the role of 12 well-established genes. Intrinsic subtypes assigned using the CM1 list and the ensemble of classifiers are more consistent and homogeneous than the original PAM50 labels. The new subtypes show accurate distributions of current clinical markers ER, PR and HER2, and survival curves in the METABRIC and ROCK data sets.
Remarkably, the paradoxical attribution of the original labels reinforces the limitations of employing a single sample classifier to predict breast cancer intrinsic subtypes. PMID:26132585

  7. STATISTICAL ANALYSIS OF TANK 5 FLOOR SAMPLE RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E.

    2012-03-14

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, radionuclide, inorganic, and anion concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs.
The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed in Appendix A, and the results of this analysis are reported in Appendix B. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.
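    For normally distributed data such as those described above, the UCL95 on the mean follows from the Student-t distribution. The sketch below is an illustrative normal-theory calculation with hypothetical analyte concentrations, not the report's data or the full EPA/ProUCL protocol:

```python
# Normal-theory one-sided 95% upper confidence limit (UCL95) on the mean,
# for hypothetical analyte measurements (values and units are made up).
import math
import statistics

conc = [1.2, 1.4, 1.1, 1.3, 1.5, 1.2, 1.4, 1.3, 1.6]  # hypothetical mg/L
n = len(conc)
mean = statistics.mean(conc)
sd = statistics.stdev(conc)            # sample standard deviation
t_95 = 1.860                           # Student-t, one-sided 95%, df = n-1 = 8
ucl95 = mean + t_95 * sd / math.sqrt(n)
print(round(mean, 3), round(ucl95, 3))  # → 1.333 1.431
```

When the data are not normal, the guidance cited in the report selects other UCL95 procedures (e.g., for lognormal or nonparametric cases), which is why distribution identification precedes the UCL choice.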

  8. Statistical Analysis of Tank 5 Floor Sample Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E. P.

    2013-01-31

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs.
The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed, and the results of this analysis are reported. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.

  9. Statistical Analysis Of Tank 5 Floor Sample Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E. P.

    2012-08-01

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs.
The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed in Appendix A, and the results of this analysis are reported in Appendix B. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.

  10. Assessment of systems for paying health care providers in Vietnam: implications for equity, efficiency and expanding effective health coverage.

    PubMed

    Phuong, Nguyen Khanh; Oanh, Tran Thi Mai; Phuong, Hoang Thi; Tien, Tran Van; Cashin, Cheryl

    2015-01-01

    Provider payment arrangements are currently a core concern for Vietnam's health sector and a key lever for expanding effective coverage and improving the efficiency and equity of the health system. This study describes how different provider payment systems are designed and implemented in practice across a sample of provinces and districts in Vietnam. Key informant interviews were conducted with over 100 health policy-makers, purchasers and providers using a structured interview guide. The results of the different payment methods were scored by respondents and assessed against a set of health system performance criteria. Overall, the public health insurance agency, Vietnam Social Security (VSS), is focused on managing expenditures through a complicated set of reimbursement policies and caps, but the incentives for providers are unclear and do not consistently support Vietnam's health system objectives. The results of this study are being used by the Ministry of Health and VSS to reform the provider payment systems to be more consistent with international definitions and good practices and to better support Vietnam's health system objectives.

  11. Ridges and tidal stress on Io

    USGS Publications Warehouse

    Bart, G.D.; Turtle, E.P.; Jaeger, W.L.; Keszthelyi, L.P.; Greenberg, R.

    2004-01-01

    Sets of ridges of uncertain origin are seen in twenty-nine high-resolution Galileo images, which sample seven locales on Io. These ridges are on the order of a few kilometers in length with a spacing of about a kilometer. Within each locale, the ridges have a consistent orientation, but the orientations vary from place to place. We investigate whether these ridges could be a result of tidal flexing of Io by comparing their orientations with the peak tidal stress orientations at the same locations. We find that ridges grouped near the equator are aligned either north-south or east-west, as are the predicted principal stress orientations there. It is not clear why particular groups run north-south and others east-west. The one set of ridges observed far from the equator (52° S) has an oblique azimuth, as do the tidal stresses at those latitudes. Therefore, all observed ridges have similar orientations to the tidal stress in their region. This correlation is consistent with the hypothesis that tidal flexing of Io plays an important role in ridge formation. © 2004 Elsevier Inc. All rights reserved.

  12. GOES Type III Loop Heat Pipe Life Test Results

    NASA Technical Reports Server (NTRS)

    Ottenstein, Laura

    2011-01-01

    The GOES Type III Loop Heat Pipe (LHP) was built as a life test unit for the loop heat pipes on the GOES N-Q series satellites. This propylene LHP was built by Dynatherm Corporation in 2000 and tested continuously for approximately 14 months. It was then put into storage for 3 years. Following the storage period, the LHP was tested at Swales Aerospace to verify that the loop performance hadn't changed. Most test results were consistent with earlier results. At the conclusion of testing at Swales, the LHP was transferred to NASA/GSFC for continued periodic testing. The LHP has been set up for testing in the Thermal Lab at GSFC since 2006. A group of tests consisting of start-ups, power cycles, and a heat transport limit test have been performed every six to nine months since March 2006. Test results have shown no change in the loop performance over the five years of testing. This presentation will discuss the test hardware, test set-up, and tests performed. Test results to be presented include sample plots from individual tests, along with conductance measurements for all tests performed.

  13. Inter-rater reliability and review of the VA unresolved narratives.

    PubMed Central

    Eagon, J. C.; Hurdle, J. F.; Lincoln, M. J.

    1996-01-01

    To better understand how VA clinicians use medical vocabulary in everyday practice, we set out to characterize terms generated in the Problem List module of the VA's DHCP system that were not mapped to terms in the controlled-vocabulary lexicon of DHCP. When entered terms fail to match those in the lexicon, a note is sent to a central repository. When our study started, the volume in that repository had reached 16,783 terms. We wished to characterize the potential reasons why these terms failed to match terms in the lexicon. After examining two small samples of randomly selected terms, we used group consensus to develop a set of rating criteria and a rating form. To be sure that the results of multiple reviewers could be confidently compared, we analyzed the inter-rater agreement of our rating process. Two raters used this form to rate the same 400 terms. We found that modifiers and numeric data were common and consistent reasons for failure to match, while others such as use of synonyms and absence of the concept from the lexicon were common but less consistently selected. PMID:8947642
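    Inter-rater agreement of the kind analyzed above is commonly summarized with Cohen's kappa, which corrects raw agreement for chance. The abstract does not specify which statistic the authors used, so the sketch below is a generic illustration with hypothetical failure-reason codes:

```python
# Cohen's kappa for two raters assigning categorical codes to the same items.
# The ratings are hypothetical, standing in for failure-reason codes.
from collections import Counter

rater_a = ["modifier", "numeric", "synonym", "modifier", "absent", "modifier"]
rater_b = ["modifier", "numeric", "absent",  "modifier", "absent", "synonym"]

n = len(rater_a)
p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n       # observed agreement
ca, cb = Counter(rater_a), Counter(rater_b)
p_chance = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n**2  # chance agreement
kappa = (p_obs - p_chance) / (1 - p_chance)
print(round(kappa, 3))  # → 0.538
```

Kappa near 1 indicates agreement well beyond chance; values near 0 indicate agreement no better than chance, matching the abstract's distinction between "consistent" and "less consistently selected" reasons.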

  14. Inter-rater reliability and review of the VA unresolved narratives.

    PubMed

    Eagon, J C; Hurdle, J F; Lincoln, M J

    1996-01-01

    To better understand how VA clinicians use medical vocabulary in everyday practice, we set out to characterize terms generated in the Problem List module of the VA's DHCP system that were not mapped to terms in the controlled-vocabulary lexicon of DHCP. When entered terms fail to match those in the lexicon, a note is sent to a central repository. When our study started, the volume in that repository had reached 16,783 terms. We wished to characterize the potential reasons why these terms failed to match terms in the lexicon. After examining two small samples of randomly selected terms, we used group consensus to develop a set of rating criteria and a rating form. To be sure that the results of multiple reviewers could be confidently compared, we analyzed the inter-rater agreement of our rating process. Two raters used this form to rate the same 400 terms. We found that modifiers and numeric data were common and consistent reasons for failure to match, while others such as use of synonyms and absence of the concept from the lexicon were common but less consistently selected.

  15. Non-linear patterns in age-related DNA methylation may reflect CD4+ T cell differentiation

    PubMed Central

    Johnson, Nicholas D.; Wiener, Howard W.; Smith, Alicia K.; Nishitani, Shota; Absher, Devin M.; Arnett, Donna K.; Aslibekyan, Stella; Conneely, Karen N.

    2017-01-01

    ABSTRACT DNA methylation (DNAm) is an important epigenetic process involved in the regulation of gene expression. While many studies have identified thousands of loci associated with age, few have differentiated between linear and non-linear DNAm trends with age. Non-linear trends could indicate early- or late-life gene regulatory processes. Using data from the Illumina 450K array on 336 human peripheral blood samples, we identified 21 CpG sites that associated with age (P<1.03E-7) and exhibited changing rates of DNAm change with age (P<1.94E-6). For 2 of these CpG sites (cg07955995 and cg22285878), DNAm increased with age at an increasing rate, indicating that differential DNAm was greatest among elderly individuals. We observed significant replication for both CpG sites (P<5.0E-8) in a second set of peripheral blood samples. In 8 of 9 additional data sets comprising samples of monocytes, T cell subtypes, and brain tissue, we observed a pattern directionally consistent with DNAm increasing with age at an increasing rate, which was nominally significant in the 3 largest data sets (4.3E-15
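    Distinguishing linear from non-linear DNAm trends with age, as described above, amounts to testing a second-order term. A minimal sketch (not the study's code) with synthetic data, where an accelerating increase shows up as a positive quadratic coefficient:

```python
# Fit a quadratic DNAm-vs-age model to synthetic data; a positive
# second-order coefficient means DNAm rises at an increasing rate with age.
import numpy as np

age = np.arange(20, 81, dtype=float)       # hypothetical ages 20-80
dnam = 0.20 + 1e-3 * age + 1e-4 * age**2   # synthetic, accelerating rise

c2, c1, c0 = np.polyfit(age, dnam, deg=2)  # coefficients, highest order first
print(c2 > 0)  # → True
```

In the study's framing, CpG sites like cg07955995 and cg22285878 correspond to the c2 > 0 case, with the steepest differential methylation among the elderly.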

  16. Micro X-ray Fluorescence Study of Late Pre-Hispanic Ceramics from the Western Slopes of the South Central Andes Region in the Arica y Parinacota Region, Chile: A New Methodological Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flewett, S.; Saintenoy, T.; Sepulveda, M.

    Archeological ceramic paste material typically consists of a mix of a clay matrix and various millimeter and sub-millimeter sized mineral inclusions. Micro X-ray Fluorescence (μXRF) is a standard compositional classification tool, and in this work we propose and demonstrate an improved fluorescence map processing protocol where the mineral inclusions are automatically separated from the clay matrix to allow independent statistical analysis of the two parts. Application of this protocol allowed us to enhance the discrimination between different ceramic shards compared with the standard procedure of working with only the spatially averaged elemental concentrations. Using the new protocol, we performed an initial compositional classification of a set of 83 ceramic shards from the western slopes of the south central Andean region in the Arica y Parinacota region of present-day far northern Chile. Comparing the classifications obtained using the new versus the old (average concentrations only) protocols, we found that some samples were erroneously classified with the old protocol. From an archaeological perspective, a very broad and heterogeneous sample set was used in this study because this was the first such study to be performed on ceramics from this region. This allowed a general overview to be obtained; however, further work on more specific sample sets will be necessary to extract concrete archaeological conclusions.
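    The core of the map-processing idea above is segmenting inclusion pixels from matrix pixels before computing statistics. A hypothetical sketch (the paper's actual segmentation criterion is not given; a simple intensity threshold is assumed here):

```python
# Segment a synthetic fluorescence map into "inclusion" and "matrix" pixels
# by thresholding one element's intensity, then summarize each part separately.
import numpy as np

rng = np.random.default_rng(1)
fe_map = rng.normal(1.0, 0.1, size=(64, 64))   # synthetic Fe intensity map
fe_map[20:30, 20:30] += 2.0                    # a bright 10x10 mineral inclusion

threshold = fe_map.mean() + 2 * fe_map.std()   # assumed segmentation rule
inclusion = fe_map > threshold                 # boolean mask of inclusion pixels
matrix_mean = fe_map[~inclusion].mean()
inclusion_mean = fe_map[inclusion].mean()
print(inclusion_mean > matrix_mean)  # → True
```

Averaging the two populations separately, rather than over the whole map, is what keeps a few large inclusions from dominating the matrix chemistry, which is the paper's stated motivation.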

  17. Sensitive and specific detection of early gastric cancer with DNA methylation analysis of gastric washes.

    PubMed

    Watanabe, Yoshiyuki; Kim, Hyun Soo; Castoro, Ryan J; Chung, Woonbok; Estecio, Marcos R H; Kondo, Kimie; Guo, Yi; Ahmed, Saira S; Toyota, Minoru; Itoh, Fumio; Suk, Ki Tae; Cho, Mee-Yon; Shen, Lanlan; Jelinek, Jaroslav; Issa, Jean-Pierre J

    2009-06-01

    Aberrant DNA methylation is an early and frequent process in gastric carcinogenesis and could be useful for detection of gastric neoplasia. We hypothesized that methylation analysis of DNA recovered from gastric washes could be used to detect gastric cancer. We studied 51 candidate genes in 7 gastric cancer cell lines and 24 samples (training set) and identified 6 for further studies. We examined the methylation status of these genes in a test set consisting of 131 gastric neoplasias at various stages. Finally, we validated the 6 candidate genes in a different population of 40 primary gastric cancer samples and 113 nonneoplastic gastric mucosa samples. Six genes (MINT25, RORA, GDNF, ADAM23, PRDM5, MLF1) showed frequent differential methylation between gastric cancer and normal mucosa in the training, test, and validation sets. GDNF and MINT25 were the most sensitive molecular markers of early stage gastric cancer, whereas PRDM5 and MLF1 were markers of a field defect. There was a close correlation (r = 0.5-0.9, P = .03-.001) between methylation levels in tumor biopsy and gastric washes. MINT25 methylation had the best sensitivity (90%), specificity (96%), and area under the receiver operating characteristic curve (0.961) in terms of tumor detection in gastric washes. These findings suggest MINT25 is a sensitive and specific marker for screening in gastric cancer. Additionally, we have developed a new method for gastric cancer detection by DNA methylation in gastric washes.

  18. Turning publicly available gene expression data into discoveries using gene set context analysis.

    PubMed

    Ji, Zhicheng; Vokes, Steven A; Dang, Chi V; Ji, Hongkai

    2016-01-08

    Gene Set Context Analysis (GSCA) is an open source software package to help researchers use massive amounts of publicly available gene expression data (PED) to make discoveries. Users can interactively visualize and explore gene and gene set activities in 25,000+ consistently normalized human and mouse gene expression samples representing diverse biological contexts (e.g. different cells, tissues and disease types, etc.). By providing one or multiple genes or gene sets as input and specifying a gene set activity pattern of interest, users can query the expression compendium to systematically identify biological contexts associated with the specified gene set activity pattern. In this way, researchers with new gene sets from their own experiments may discover previously unknown contexts of gene set functions and hence increase the value of their experiments. GSCA has a graphical user interface (GUI). The GUI makes the analysis convenient and customizable. Analysis results can be conveniently exported as publication quality figures and tables. GSCA is available at https://github.com/zji90/GSCA. This software significantly lowers the bar for biomedical investigators to use PED in their daily research for generating and screening hypotheses, which was previously difficult because of the complexity, heterogeneity and size of the data. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Improving cell mixture deconvolution by identifying optimal DNA methylation libraries (IDOL).

    PubMed

    Koestler, Devin C; Jones, Meaghan J; Usset, Joseph; Christensen, Brock C; Butler, Rondi A; Kobor, Michael S; Wiencke, John K; Kelsey, Karl T

    2016-03-08

    Confounding due to cellular heterogeneity represents one of the foremost challenges currently facing Epigenome-Wide Association Studies (EWAS). Statistical methods leveraging the tissue-specificity of DNA methylation for deconvoluting the cellular mixture of heterogenous biospecimens offer a promising solution, however the performance of such methods depends entirely on the library of methylation markers being used for deconvolution. Here, we introduce a novel algorithm for Identifying Optimal Libraries (IDOL) that dynamically scans a candidate set of cell-specific methylation markers to find libraries that optimize the accuracy of cell fraction estimates obtained from cell mixture deconvolution. Application of IDOL to a training set consisting of samples with both whole-blood DNA methylation data (Illumina HumanMethylation450 BeadArray (HM450)) and flow cytometry measurements of cell composition revealed an optimized library comprised of 300 CpG sites. When compared to existing libraries, the library identified by IDOL demonstrated significantly better overall discrimination of the entire immune cell landscape (p = 0.038), and resulted in improved discrimination of 14 out of the 15 pairs of leukocyte subtypes. Estimates of cell composition across the samples in the training set using the IDOL library were highly correlated with their respective flow cytometry measurements, with all cell-specific R^2 > 0.99 and root mean square errors (RMSEs) ranging from 0.97% to 1.33% across leukocyte subtypes. Independent validation of the optimized IDOL library using two additional HM450 data sets showed similarly strong prediction performance, with all cell-specific R^2 > 0.90 and RMSE < 4.00%.
In simulation studies, adjustments for cell composition using the IDOL library resulted in uniformly lower false positive rates compared to competing libraries, while also demonstrating an improved capacity to explain epigenome-wide variation in DNA methylation within two large publicly available HM450 data sets. Despite consisting of half as many CpGs compared to existing libraries for whole blood mixture deconvolution, the optimized IDOL library identified herein resulted in outstanding prediction performance across all considered data sets and demonstrated potential to improve the operating characteristics of EWAS involving adjustments for cell distribution. In addition to providing the EWAS community with an optimized library for whole blood mixture deconvolution, our work establishes a systematic and generalizable framework for the assembly of libraries that improve the accuracy of cell mixture deconvolution.
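    The deconvolution step that an IDOL-selected library feeds into can be sketched as constrained least squares: given a reference matrix of cell-type-specific methylation values at the library CpGs, solve for the mixture fractions. This is a hedged illustration with made-up data and dimensions, not the IDOL algorithm itself:

```python
# Reference-based cell-mixture deconvolution via non-negative least squares.
# X: methylation of library CpGs in pure cell types; y: mixed-sample profile.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 3))   # 300 library CpGs x 3 cell types (synthetic)
w_true = np.array([0.6, 0.3, 0.1])     # true cell fractions
y = X @ w_true                         # observed whole-blood methylation profile

w_hat, _ = nnls(X, y)                  # non-negativity-constrained solve
w_hat /= w_hat.sum()                   # renormalize to proportions
print(np.round(w_hat, 3))
```

The accuracy of w_hat depends on how well the library's CpGs separate the cell types, which is exactly the property IDOL optimizes when choosing the 300 sites.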

  20. Psychosis prediction in secondary mental health services. A broad, comprehensive approach to the "at risk mental state" syndrome.

    PubMed

    Francesconi, M; Minichino, A; Carrión, R E; Delle Chiaie, R; Bevilacqua, A; Parisi, M; Rullo, S; Bersani, F Saverio; Biondi, M; Cadenhead, K

    2017-02-01

    Accuracy of risk algorithms for psychosis prediction in "at risk mental state" (ARMS) samples may differ according to the recruitment setting. Standardized criteria used to detect ARMS individuals may lack specificity if the recruitment setting is a secondary mental health service. The authors tested a modified strategy to predict psychosis conversion in this setting by using a systematic selection of trait-markers of the psychosis prodrome in a sample with a heterogeneous ARMS status. 138 non-psychotic outpatients (aged 17-31) were consecutively recruited in secondary mental health services and followed-up for up to 3 years (mean follow-up time, 2.2 years; SD=0.9). Baseline ARMS status, clinical, demographic, cognitive, and neurological soft signs measures were collected. Cox regression was used to derive a risk index. 48% of individuals met ARMS criteria (ARMS-Positive, ARMS+). Conversion rate to psychosis was 21% for the overall sample, 34% for ARMS+, and 9% for ARMS-Negative (ARMS-). The final predictor model with a positive predictive value of 80% consisted of four variables: Disorder of Thought Content, visuospatial/constructional deficits, sensory-integration, and theory-of-mind abnormalities. Removing Disorder of Thought Content from the model only slightly modified the predictive accuracy (-6.2%), but increased the sensitivity (+9.5%). These results suggest that in a secondary mental health setting the use of trait-markers of the psychosis prodrome may predict psychosis conversion with great accuracy despite the heterogeneity of the ARMS status. The use of the proposed predictive algorithm may enable a selective recruitment, potentially reducing duration of untreated psychosis and improving prognostic outcomes. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
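    The accuracy figures quoted above are confusion-matrix quantities. A sketch with a hypothetical 2x2 table, chosen only so the totals roughly match the abstract's sample size and rates (the study's actual cell counts are not reported here):

```python
# Positive predictive value and sensitivity from hypothetical counts of
# converters flagged by a risk index (n = 138 total, ~21% converters).
tp, fp, fn, tn = 16, 4, 13, 105   # hypothetical true/false positives/negatives
ppv = tp / (tp + fp)              # positive predictive value
sensitivity = tp / (tp + fn)      # proportion of converters correctly flagged
print(ppv, round(sensitivity, 3))  # → 0.8 0.552
```

As the abstract notes for Disorder of Thought Content, dropping a predictor can trade a little PPV for sensitivity: fewer positives are correct, but more converters are caught.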

  1. Emotion dysregulation and autonomic responses to film, rumination, and body awareness: Extending psychophysiological research to a naturalistic clinical setting and a chemically dependent female sample.

    PubMed

    Crowell, Sheila E; Price, Cynthia J; Puzia, Megan E; Yaptangco, Mona; Cheng, Sunny Chieh

    2017-05-01

    Substance use is a complex clinical problem characterized by emotion dysregulation and daily challenges that can interfere with laboratory research. Thus, few psychophysiological studies examine autonomic and self-report measures of emotion dysregulation with multidiagnostic, chemically dependent samples or extend this work into naturalistic settings. In this study, we used a within-subject design to examine changes in respiratory sinus arrhythmia (RSA), electrodermal activity (EDA), and self-reported affect across three tasks designed to elicit distinct psychophysiological and emotional response patterns. We also examined emotion dysregulation as a moderator of psychophysiological responses. Participants included 116 women with multiple comorbid mental health conditions enrolled in substance use treatment, many of whom also reported high emotion dysregulation. Participants were assessed in the treatment setting and completed three tasks: watching a sad movie clip, rumination on a stressful event, and a mindful interoceptive awareness meditation. Multilevel models were used to examine changes from resting baselines to the tasks. During the film, results indicate a significant decrease in RSA and an increase in EDA. For the rumination task, participants showed a decrease in RSA but no EDA response. For the body awareness task, there was an increase in RSA and a decrease in EDA. Emotion dysregulation was associated with differences in baseline RSA but not with EDA or with the slope of response patterns across tasks. Self-reported affect was largely consistent with autonomic patterns. Findings add to the literature on emotion dysregulation, substance use, and the translation of psychophysiological measurements into clinical settings with complex samples. © 2017 Society for Psychophysiological Research.

  2. Epidemiology and Impact of Campylobacter Infection in Children in 8 Low-Resource Settings: Results From the MAL-ED Study.

    PubMed

    Amour, Caroline; Gratz, Jean; Mduma, Estomih; Svensen, Erling; Rogawski, Elizabeth T; McGrath, Monica; Seidman, Jessica C; McCormick, Benjamin J J; Shrestha, Sanjaya; Samie, Amidou; Mahfuz, Mustafa; Qureshi, Shahida; Hotwani, Aneeta; Babji, Sudhir; Trigoso, Dixner Rengifo; Lima, Aldo A M; Bodhidatta, Ladaporn; Bessong, Pascal; Ahmed, Tahmeed; Shakoor, Sadia; Kang, Gagandeep; Kosek, Margaret; Guerrant, Richard L; Lang, Dennis; Gottlieb, Michael; Houpt, Eric R; Platts-Mills, James A

    2016-11-01

    Enteropathogen infections have been associated with enteric dysfunction and impaired growth in children in low-resource settings. In a multisite birth cohort study (MAL-ED), we describe the epidemiology and impact of Campylobacter infection in the first 2 years of life. Children were actively followed up until 24 months of age. Diarrheal and nondiarrheal stool samples were collected and tested by enzyme immunoassay for Campylobacter. Stool and blood samples were assayed for markers of intestinal permeability and inflammation. A total of 1892 children had 7601 diarrheal and 26,267 nondiarrheal stool samples tested for Campylobacter. We describe a high prevalence of infection, with most children (n = 1606; 84.9%) having a Campylobacter-positive stool sample by 1 year of age. Factors associated with a reduced risk of Campylobacter detection included exclusive breastfeeding (risk ratio, 0.57; 95% confidence interval, .47-.67), treatment of drinking water (0.76; 0.70-0.83), access to an improved latrine (0.89; 0.82-0.97), and recent macrolide antibiotic use (0.68; 0.63-0.74). A high Campylobacter burden was associated with a lower length-for-age Z score at 24 months (-1.82; 95% confidence interval, -1.94 to -1.70) compared with a low burden (-1.49; -1.60 to -1.38). This association was robust to confounders and consistent across sites. Campylobacter infection was also associated with increased intestinal permeability and intestinal and systemic inflammation. Campylobacter was prevalent across diverse settings and associated with growth shortfalls. Promotion of exclusive breastfeeding, drinking water treatment, improved latrines, and targeted antibiotic treatment may reduce the burden of Campylobacter infection and improve growth in children in these settings. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America.

  3. Epidemiology and Impact of Campylobacter Infection in Children in 8 Low-Resource Settings: Results From the MAL-ED Study

    PubMed Central

    Amour, Caroline; Gratz, Jean; Mduma, Estomih; Svensen, Erling; Rogawski, Elizabeth T.; McGrath, Monica; Seidman, Jessica C.; McCormick, Benjamin J. J.; Shrestha, Sanjaya; Samie, Amidou; Mahfuz, Mustafa; Qureshi, Shahida; Hotwani, Aneeta; Babji, Sudhir; Trigoso, Dixner Rengifo; Lima, Aldo A. M.; Bodhidatta, Ladaporn; Bessong, Pascal; Ahmed, Tahmeed; Shakoor, Sadia; Kang, Gagandeep; Kosek, Margaret; Guerrant, Richard L.; Lang, Dennis; Gottlieb, Michael; Houpt, Eric R.; Platts-Mills, James A.; Acosta, Angel Mendez; de Burga, Rosa Rios; Chavez, Cesar Banda; Flores, Julian Torres; Olotegui, Maribel Paredes; Pinedo, Silvia Rengifo; Salas, Mery Siguas; Trigoso, Dixner Rengifo; Vasquez, Angel Orbe; Ahmed, Imran; Alam, Didar; Ali, Asad; Bhutta, Zulfiqar A.; Qureshi, Shahida; Rasheed, Muneera; Soofi, Sajid; Turab, Ali; Zaidi, Anita K.M.; Bodhidatta, Ladaporn; Mason, Carl J.; Babji, Sudhir; Bose, Anuradha; George, Ajila T.; Hariraju, Dinesh; Jennifer, M. Steffi; John, Sushil; Kaki, Shiny; Kang, Gagandeep; Karunakaran, Priyadarshani; Koshy, Beena; Lazarus, Robin P.; Muliyil, Jayaprakash; Raghava, Mohan Venkata; Raju, Sophy; Ramachandran, Anup; Ramadas, Rakhi; Ramanujam, Karthikeyan; Rose, Anuradha; Roshan, Reeba; Sharma, Srujan L.; Sundaram, Shanmuga; Thomas, Rahul J.; Pan, William K.; Ambikapathi, Ramya; Carreon, J. Daniel; Charu, Vivek; Doan, Viyada; Graham, Jhanelle; Hoest, Christel; Knobler, Stacey; Lang, Dennis R.; McCormick, Benjamin J.J.; McGrath, Monica; Miller, Mark A.; Mohale, Archana; Nayyar, Gaurvika; Psaki, Stephanie; Rasmussen, Zeba; Richard, Stephanie A.; Seidman, Jessica C.; Wang, Vivian; Blank, Rebecca; Gottlieb, Michael; Tountas, Karen H.; Amour, Caroline; Bayyo, Eliwaza; Mduma, Estomih R.; Mvungi, Regisiana; Nshama, Rosemary; Pascal, John; Swema, Buliga Mujaga; Yarrot, Ladislaus; Ahmed, Tahmeed; Ahmed, A.M. 
Shamsir; Haque, Rashidul; Hossain, Iqbal; Islam, Munirul; Mahfuz, Mustafa; Mondal, Dinesh; Tofail, Fahmida; Chandyo, Ram Krishna; Shrestha, Prakash Sunder; Shrestha, Rita; Ulak, Manjeswori; Bauck, Aubrey; Black, Robert; Caulfield, Laura; Checkley, William; Kosek, Margaret N.; Lee, Gwenyth; Schulze, Kerry; Yori, Pablo Peñataro; Murray-Kolb, Laura E.; Ross, A. Catharine; Schaefer, Barbara; Simons, Suzanne; Pendergast, Laura; Abreu, Cláudia B.; Costa, Hilda; Di Moura, Alessandra; Filho, José Quirino; Havt, Alexandre; Leite, Álvaro M.; Lima, Aldo A.M.; Lima, Noélia L.; Lima, Ila F.; Maciel, Bruna L.L.; Medeiros, Pedro H.Q.S.; Moraes, Milena; Mota, Francisco S.; Oriá, Reinaldo B.; Quetz, Josiane; Soares, Alberto M.; Mota, Rosa M.S.; Patil, Crystal L.; Bessong, Pascal; Mahopo, Cloupas; Maphula, Angelina; Nyathi, Emanuel; Samie, Amidou; Barrett, Leah; Dillingham, Rebecca; Gratz, Jean; Guerrant, Richard L.; Houpt, Eric; Petri, William A.; Platts-Mills, James; Scharf, Rebecca; Shrestha, Binob; Shrestha, Sanjaya Kumar; Strand, Tor; Svensen, Erling

    2016-01-01

    Abstract Background. Enteropathogen infections have been associated with enteric dysfunction and impaired growth in children in low-resource settings. In a multisite birth cohort study (MAL-ED), we describe the epidemiology and impact of Campylobacter infection in the first 2 years of life. Methods. Children were actively followed up until 24 months of age. Diarrheal and nondiarrheal stool samples were collected and tested by enzyme immunoassay for Campylobacter. Stool and blood samples were assayed for markers of intestinal permeability and inflammation. Results. A total of 1892 children had 7601 diarrheal and 26 267 nondiarrheal stool samples tested for Campylobacter. We describe a high prevalence of infection, with most children (n = 1606; 84.9%) having a Campylobacter-positive stool sample by 1 year of age. Factors associated with a reduced risk of Campylobacter detection included exclusive breastfeeding (risk ratio, 0.57; 95% confidence interval, .47–.67), treatment of drinking water (0.76; 0.70–0.83), access to an improved latrine (0.89; 0.82–0.97), and recent macrolide antibiotic use (0.68; 0.63–0.74). A high Campylobacter burden was associated with a lower length-for-age Z score at 24 months (−1.82; 95% confidence interval, −1.94 to −1.70) compared with a low burden (−1.49; −1.60 to −1.38). This association was robust to confounders and consistent across sites. Campylobacter infection was also associated with increased intestinal permeability and intestinal and systemic inflammation. Conclusions. Campylobacter was prevalent across diverse settings and associated with growth shortfalls. Promotion of exclusive breastfeeding, drinking water treatment, improved latrines, and targeted antibiotic treatment may reduce the burden of Campylobacter infection and improve growth in children in these settings. PMID:27501842
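
    The protective factors above are reported as risk ratios with 95% confidence intervals. As a minimal sketch of how such an interval is typically computed (the log-scale Wald method; the counts below are hypothetical, not the MAL-ED data):

```python
import math

def risk_ratio_ci(a, n1, c, n0, z=1.96):
    """Risk ratio for exposed (a events / n1 total) vs. unexposed
    (c events / n0 total), with a log-scale Wald confidence interval."""
    rr = (a / n1) / (c / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts for illustration only:
# 120/400 infected among exclusively breastfed, 210/400 among the rest.
rr, lo, hi = risk_ratio_ci(120, 400, 210, 400)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

    A risk ratio below 1 with an interval excluding 1, as for exclusive breastfeeding above, indicates a protective association.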

  4. Development of the 3-SET 4P questionnaire for evaluating former ICU patients' physical and psychosocial problems over time: a pilot study.

    PubMed

    Akerman, Eva; Fridlund, Bengt; Ersson, Anders; Granberg-Axéll, Anetth

    2009-04-01

    Current studies reveal a lack of consensus on the evaluation of physical and psychosocial problems after ICU stay and their changes over time. The aim was to develop and evaluate the validity and reliability of a questionnaire for assessing physical and psychosocial problems over time in patients following ICU recovery. Thirty-nine patients completed the questionnaire; 17 were retested. The questionnaire was constructed in three sets: physical problems, psychosocial problems and follow-up care. Face and content validity were tested by nurses, researchers and patients. The questionnaire showed good construct validity, with strong factor loadings (explained variance >70%, factor loadings >0.5) in all three sets. There was good concurrent validity compared with the SF-12 (r(s)>0.5). Internal consistency was reliable (Cronbach's alpha 0.70-0.85). Stability on retesting was good for the physical and psychosocial sets (r(s)>0.5). The 3-set 4P questionnaire is a first step in developing an instrument for assessing former ICU patients' problems over time. The sample size was small, and further studies are therefore needed to confirm these findings.

  5. Dissociative Global and Local Task-Switching Costs Across Younger Adults, Middle-Aged Adults, Older Adults, and Very Mild Alzheimer Disease Individuals

    PubMed Central

    Huff, Mark J.; Balota, David A.; Minear, Meredith; Aschenbrenner, Andrew J.; Duchek, Janet M.

    2015-01-01

    A task-switching paradigm was used to examine differences in attentional control across younger adults, middle-aged adults, healthy older adults, and individuals classified in the earliest detectable stage of Alzheimer's disease (AD). A large sample of participants (570) completed a switching task in which participants were cued to classify the letter (consonant/vowel) or number (odd/even) task-set dimension of a bivalent stimulus (e.g., A 14), respectively. A Pure block consisting of single-task trials and a Switch block consisting of nonswitch and switch trials were completed. Local (switch vs. nonswitch trials) and global (nonswitch vs. pure trials) costs in mean error rates, mean response latencies, underlying reaction time distributions, along with stimulus-response congruency effects were computed. Local costs in errors were group invariant, but global costs in errors systematically increased as a function of age and AD. Response latencies yielded a strong dissociation: Local costs decreased across groups whereas global costs increased across groups. Vincentile distribution analyses revealed that the dissociation of local and global costs primarily occurred in the slowest response latencies. Stimulus-response congruency effects within the Switch block were particularly robust in accuracy in the very mild AD group. We argue that the results are consistent with the notion that the impaired groups show a reduced local cost because the task sets are not as well tuned, and hence produce minimal cost on switch trials. In contrast, global costs increase because of the additional burden on working memory of maintaining two task sets. PMID:26652720
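
    The two cost measures are simple contrasts of mean latencies: the local cost compares switch with nonswitch trials inside the Switch block, and the global cost compares nonswitch trials with Pure-block trials. A minimal sketch with made-up response latencies (not the study's data):

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical response latencies in ms, for illustration only.
pure_rt      = [520, 540, 510, 530]   # single-task trials (Pure block)
nonswitch_rt = [700, 720, 690, 710]   # Switch block, task repeats
switch_rt    = [820, 800, 830, 810]   # Switch block, task changes

local_cost = mean(switch_rt) - mean(nonswitch_rt)    # switch vs. nonswitch
global_cost = mean(nonswitch_rt) - mean(pure_rt)     # nonswitch vs. pure

print(local_cost, global_cost)  # 110.0 180.0
```

    The dissociation reported above corresponds to the local contrast shrinking across groups while the global contrast grows.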

  6. Development of a Social Skills Assessment Screening Scale for Psychiatric Rehabilitation Settings: A Pilot Study

    PubMed Central

    Bhola, Poornima; Basavarajappa, Chethan; Guruprasad, Deepti; Hegde, Gayatri; Khanam, Fatema; Thirthalli, Jagadisha; Chaturvedi, Santosh K.

    2016-01-01

    Context: Deficits in social skills may present in a range of psychiatric disorders, particularly in the more serious and persistent conditions, and have an influence on functioning across various domains. Aims: This pilot study aimed at developing a brief measure, for structured evaluation and screening for social skills deficits, which can be easily integrated into routine clinical practice. Settings and Design: The sample consisted of 380 inpatients and their accompanying caregivers, referred to Psychiatric Rehabilitation Services at a tertiary care government psychiatric hospital. Materials and Methods: The evaluation included an Inpatient intake Proforma and the 20-item Social Skills Assessment Screening Scale (SSASS). Disability was assessed using the Indian Disability Evaluation and Assessment Scale (IDEAS) for a subset of 94 inpatients. Statistical Analysis Used: The analysis included means and standard deviations, frequency and percentages, Cronbach's alpha to assess internal consistency, t-tests to assess differences in social skills deficits between select subgroups, and correlation between SSASS and IDEAS scores. Results: The results indicated the profile of social skills deficits assessed among the inpatients with varied psychiatric diagnoses. The “psychosis” group exhibited significantly higher deficits than the “mood disorder” group. Results indicated high internal consistency of the SSASS and adequate criterion validity demonstrated by correlations with select IDEAS domains. Modifications were made to the SSASS following the pilot study. Conclusions: The SSASS has potential value as a measure for screening and individualised intervention plans for social skills training in mental health and rehabilitation settings. The implications for future work on the psychometric properties and clinical applications are discussed. PMID:27833220

  7. Use of the Physician Orders for Life-Sustaining Treatment program for patients being discharged from the hospital to the nursing facility.

    PubMed

    Hickman, Susan E; Nelson, Christine A; Smith-Howell, Esther; Hammes, Bernard J

    2014-01-01

    The Physician Orders for Life-Sustaining Treatment (POLST) form documents patient preferences as medical orders that transfer across settings with patients. The objectives were to pilot test methods and gather preliminary data about POLST, including (1) use at time of hospital discharge, (2) transfers across settings, and (3) consistency with prior decisions. The design was descriptive, with chart abstraction and interviews. Participants were hospitalized patients discharged to a nursing facility and/or their surrogates in La Crosse County, Wisconsin. POLST forms were abstracted from hospital records for 151 patients. Hospital and nursing facility chart data were abstracted and interviews were conducted with an additional 39 patients/surrogates. Overall, 176 patients had valid POLST forms at the time of discharge from the hospital, and many (38.6%; 68/176) documented only code status. When the whole POLST was completed, orders were more often marked as based on a discussion with the patient and/or surrogate than when the form was used just for code status (95.1% versus 13.8%, p<.001). In the follow-up and interview sample, a majority (90.6%; 29/32) of POLST forms written in the hospital were unchanged up to three weeks after nursing facility admission. Most (71.9%; 23/32) appeared consistent with patient or surrogate recall of prior treatment decisions. POLST forms generated in the hospital do transfer with patients across settings, but are often used only to document code status. POLST orders appeared largely consistent with prior treatment decisions. Further research is needed to assess the quality of POLST decisions.

  8. Evaluation of Wet Chemical ICP-AES Elemental Analysis Methods usingSimulated Hanford Waste Samples-Phase I Interim Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, Charles J.; Edwards, Thomas B.

    2005-04-30

    The wet chemistry digestion method development for providing process control elemental analyses of the Hanford Tank Waste Treatment and Immobilization Plant (WTP) Melter Feed Preparation Vessel (MFPV) samples is divided into two phases: Phase I consists of: (1) optimizing digestion methods as a precursor to elemental analyses by ICP-AES techniques; (2) selecting methods with the desired analytical reliability and speed to support the nine-hour or less turnaround time requirement of the WTP; and (3) providing baseline comparison to the laser ablation (LA) sample introduction technique for ICP-AES elemental analyses that is being developed at the Savannah River National Laboratory (SRNL). Phase II consists of: (1) Time-and-Motion study of the selected methods from Phase I with actual Hanford waste or waste simulants in shielded cell facilities to ensure that the methods can be performed remotely and maintain the desired characteristics; and (2) digestion of glass samples prepared from actual Hanford Waste tank sludge for providing comparative results to the LA Phase II study. Based on the Phase I testing discussed in this report, a tandem digestion approach consisting of sodium peroxide fusion digestions carried out in nickel crucibles and warm mixed-acid digestions carried out in plastic bottles has been selected for Time-and-Motion study in Phase II. SRNL experience with performing this analytical approach in laboratory hoods indicates that well-trained cell operator teams will be able to perform the tandem digestions in five hours or less. The selected approach will produce two sets of solutions for analysis by ICP-AES techniques. Four hours would then be allocated for performing the ICP-AES analyses and reporting results to meet the nine-hour or less turnaround time requirement. The tandem digestion approach will need to be performed in two separate shielded analytical cells by two separate cell operator teams in order to achieve the nine-hour or less turnaround time. Because of the simplicity of the warm mixed-acid method, a well-trained cell operator team may in time be able to perform both sets of digestions. However, having separate shielded cells for each of the methods is prudent to avoid overcrowding problems that would impede a minimal turnaround time.

  9. Robust estimation of microbial diversity in theory and in practice

    PubMed Central

    Haegeman, Bart; Hamelin, Jérôme; Moriarty, John; Neal, Peter; Dushoff, Jonathan; Weitz, Joshua S

    2013-01-01

    Quantifying diversity is of central importance for the study of structure, function and evolution of microbial communities. The estimation of microbial diversity has received renewed attention with the advent of large-scale metagenomic studies. Here, we consider what the diversity observed in a sample tells us about the diversity of the community being sampled. First, we argue that one cannot reliably estimate the absolute and relative number of microbial species present in a community without making unsupported assumptions about species abundance distributions. The reason for this is that sample data do not contain information about the number of rare species in the tail of species abundance distributions. We illustrate the difficulty in comparing species richness estimates by applying Chao's estimator of species richness to a set of in silico communities: they are ranked incorrectly in the presence of large numbers of rare species. Next, we extend our analysis to a general family of diversity metrics ('Hill diversities'), and construct lower and upper estimates of diversity values consistent with the sample data. The theory generalizes Chao's estimator, which we retrieve as the lower estimate of species richness. We show that Shannon and Simpson diversity can be robustly estimated for the in silico communities. We analyze nine metagenomic data sets from a wide range of environments, and show that our findings are relevant for empirically sampled communities. Hence, we recommend the use of Shannon and Simpson diversity rather than species richness in efforts to quantify and compare microbial diversity. PMID:23407313
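
    Chao's richness estimator and the Hill diversities discussed above can be sketched as follows; the abundance vector is hypothetical and chosen only to illustrate the formulas:

```python
import math

def chao1(counts):
    """Chao's lower-bound richness estimate: S_obs + f1^2 / (2 * f2),
    where f1 and f2 are the numbers of singleton and doubleton species."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2  # bias-corrected form when f2 == 0

def hill(counts, q):
    """Hill diversity of order q: q=1 is the exponential of Shannon
    entropy, q=2 the inverse Simpson concentration."""
    n = sum(counts)
    p = [c / n for c in counts if c > 0]
    if q == 1:
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1 / (1 - q))

abundances = [50, 30, 10, 5, 2, 2, 1, 1, 1]  # hypothetical sample
print(chao1(abundances))      # 9 observed + 3^2/(2*2) = 11.25
print(hill(abundances, q=1))  # effective number of "common" species
print(hill(abundances, q=2))  # weighted further toward dominant species
```

    Higher orders of q discount rare species more heavily, which is why the paper finds Shannon (q=1) and Simpson (q=2) diversity far more robust to unseen rare species than raw richness (q=0).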

  10. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    PubMed Central

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis. PMID:26125967
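
    For the designs where clustering can be ignored, the standard binomial model yields the decision rule directly: accept the lot if at least d of n sampled units are positive, and evaluate the two misclassification risks as binomial tail probabilities at the upper and lower coverage thresholds. A minimal sketch, with hypothetical design parameters (n = 19, d = 13, thresholds of 80% and 50% coverage) rather than any design from the paper:

```python
from math import comb

def binom_tail(n, d, p):
    """P(X >= d) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d, n + 1))

n, d = 19, 13  # hypothetical sample size and decision rule
alpha = 1 - binom_tail(n, d, 0.80)  # risk of failing a lot with 80% coverage
beta = binom_tail(n, d, 0.50)       # risk of passing a lot with 50% coverage
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```

    Designs are chosen so that both risks fall below agreed limits; the cluster LQAS methods compared in the paper differ in how they inflate n and d to account for within-cluster correlation.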

  11. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    PubMed

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  12. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting.

    PubMed

    Rashed-Ul Islam, S M; Jahan, Munira; Tabassum, Shahina

    2015-01-01

    Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infections. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected from 81% samples by one-step PCR method with median HBV DNA viral load (VL) of 7.50 × 10³ IU/ml. In contrast, 72% samples were detected by the two-step PCR system with median HBV DNA of 3.71 × 10³ IU/ml. The one-step method showed strong linear correlation with two-step PCR method (r = 0.89; p < 0.0001). Both methods showed good agreement at Bland-Altman plot, with a mean difference of 0.61 log10 IU/ml and limits of agreement of -1.82 to 3.03 log10 IU/ml. The intra-assay and interassay coefficients of variation (CV%) of plasma samples (4-7 log10 IU/ml) for the one-step PCR method ranged between 0.33 to 0.59 and 0.28 to 0.48 respectively, thus demonstrating a high level of concordance between the two methods. Moreover, elimination of the DNA extraction step in the one-step PCR kit allowed time-efficient and significant labor and cost savings for the quantification of HBV DNA in a resource-limited setting. Rashed-Ul Islam SM, Jahan M, Tabassum S. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting. Euroasian J Hepato-Gastroenterol 2015;5(1):11-15.
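
    The Bland-Altman statistics quoted above (mean difference and limits of agreement) come from the paired per-sample differences on the log10 scale, with limits set at the mean difference ± 1.96 standard deviations. A minimal sketch with hypothetical paired viral loads, not the study's data:

```python
import math

def bland_altman(x, y, z=1.96):
    """Mean difference (bias) and limits of agreement for paired
    measurements, e.g. log10 viral loads from two assays."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d, mean_d - z * sd, mean_d + z * sd

# Hypothetical log10 IU/ml values for illustration only:
one_step = [4.2, 5.1, 6.3, 3.9, 7.0, 5.6]
two_step = [4.0, 4.8, 5.9, 3.8, 6.2, 5.3]
bias, lo, hi = bland_altman(one_step, two_step)
print(f"bias = {bias:.2f} log10 IU/ml, LoA = ({lo:.2f}, {hi:.2f})")
```

    A clinically acceptable pair of assays shows a small bias and limits of agreement narrow enough that switching assays would not change patient management.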

  13. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting

    PubMed Central

    Jahan, Munira; Tabassum, Shahina

    2015-01-01

    Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infections. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected from 81% samples by one-step PCR method with median HBV DNA viral load (VL) of 7.50 × 10³ IU/ml. In contrast, 72% samples were detected by the two-step PCR system with median HBV DNA of 3.71 × 10³ IU/ml. The one-step method showed strong linear correlation with two-step PCR method (r = 0.89; p < 0.0001). Both methods showed good agreement at Bland-Altman plot, with a mean difference of 0.61 log10 IU/ml and limits of agreement of -1.82 to 3.03 log10 IU/ml. The intra-assay and interassay coefficients of variation (CV%) of plasma samples (4-7 log10 IU/ml) for the one-step PCR method ranged between 0.33 to 0.59 and 0.28 to 0.48 respectively, thus demonstrating a high level of concordance between the two methods. Moreover, elimination of the DNA extraction step in the one-step PCR kit allowed time-efficient and significant labor and cost savings for the quantification of HBV DNA in a resource-limited setting. How to cite this article: Rashed-Ul Islam SM, Jahan M, Tabassum S. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting. Euroasian J Hepato-Gastroenterol 2015;5(1):11-15. PMID:29201678

  14. Artist Material BRDF Database for Computer Graphics Rendering

    NASA Astrophysics Data System (ADS)

    Ashbaugh, Justin C.

    The primary goal of this thesis was to create a physical library of artist material samples. This collection provides necessary data for the development of a gonio-imaging system for use in museums to more accurately document their collections. A sample set was produced consisting of 25 panels and containing nearly 600 unique samples. Selected materials are representative of those commonly used by artists both past and present. These take into account the variability in visual appearance resulting from the materials and application techniques used. Five attributes of variability were identified including medium, color, substrate, application technique and overcoat. Combinations of these attributes were selected based on those commonly observed in museum collections and suggested by surveying experts in the field. For each sample material, image data is collected and used to measure an average bi-directional reflectance distribution function (BRDF). The results are available as a public-domain image and optical database of artist materials at art-si.org. Additionally, the database includes specifications for each sample along with other information useful for computer graphics rendering such as the rectified sample images and normal maps.

  15. VizieR Online Data Catalog: Lupus YSOs X-shooter spectroscopy (Alcala+, 2017)

    NASA Astrophysics Data System (ADS)

    Alcala, J. M.; Manara, C. F.; Natta, A.; Frasca, A.; Testi, L.; Nisini, B.; Stelzer, B.; Williams, J. P.; Antoniucci, S.; Biazzo, K.; Covino, E.; Esposito, M.; Getman, F.; Rigliaco, E.

    2017-07-01

    All the data used in this paper were acquired with the X-shooter spectrograph at the VLT. The capabilities of X-shooter in terms of wide spectral coverage (310-2500nm), resolution and limiting magnitudes allow us to assess simultaneously the mass accretion and outflow, and disc diagnostics, from the UV and optical to the near IR. The sample studied in this paper consists mainly of two sets of low-mass class II YSOs in the aforementioned Lupus clouds. The first one comprises the 36 objects published in Alcala et al. (2014, Cat. J/A+A/561/A2), observed within the context of the X-shooter INAF/GTO (Alcala et al. 2011AN....332..242A) project; for simplicity we will refer to it as the "GTO sample" throughout the paper. One additional source, namely Sz105, was investigated with X-shooter during the GTO but rejected as a legitimate YSO (see below). The second sample consists of 49 objects observed during ESO periods 95 and 97 (1 April-30 September 2015 and 1 April-30 September 2016, respectively). In addition, we include here six objects observed with X-shooter in other programmes taken from the ESO archive. In total, 55 objects were newly analysed here and we will refer to them as the "new sample". (12 data files).

  16. Hair MDMA samples are consistent with reported ecstasy use: findings from a study investigating effects of ecstasy on mood and memory.

    PubMed

    Scholey, A B; Owen, L; Gates, J; Rodgers, J; Buchanan, T; Ling, J; Heffernan, T; Swan, P; Stough, C; Parrott, A C

    2011-01-01

    Our group has conducted several Internet investigations into the biobehavioural effects of self-reported recreational use of MDMA (3,4-methylenedioxymethamphetamine or Ecstasy) and other psychoactive drugs. Here we report a new study examining the relationship between self-reported Ecstasy use and traces of MDMA found in hair samples. In a laboratory setting, 49 undergraduate volunteers performed an Internet-based assessment which included mood scales and the University of East London Drug Use Questionnaire, which asks for history and current drug use. They also provided a hair sample for determination of exposure to MDMA over the previous month. Self-report of Ecstasy use and presence in hair samples were consistent (p < 0.00001). Both subjective and objective measures predicted lower self-reported ratings of happiness and higher self-reported stress. Self-reported Ecstasy use, but not presence in hair, was also associated with decreased tension. Different psychoactive drugs can influence long-term mood and cognition in complex and dynamically interactive ways. Here we have shown a good correspondence between self-report and objective assessment of exposure to MDMA. These data suggest that the Internet has potentially high utility as a useful medium to complement traditional laboratory studies into the sequelae of recreational drug use. Copyright © 2010 S. Karger AG, Basel.

  17. The relationship between observational scale and explained variance in benthic communities

    PubMed Central

    Flood, Roger D.; Frisk, Michael G.; Garza, Corey D.; Lopez, Glenn R.; Maher, Nicole P.

    2018-01-01

    This study addresses the impact of spatial scale on explaining variance in benthic communities. In particular, the analysis estimated the fraction of community variation that occurred at a spatial scale smaller than the sampling interval (i.e., the geographic distance between samples). This estimate is important because it sets a limit on the amount of community variation that can be explained based on the spatial configuration of a study area and sampling design. Six benthic data sets were examined that consisted of faunal abundances, common environmental variables (water depth, grain size, and surficial percent cover), and sonar backscatter treated as a habitat proxy (categorical acoustic provinces). Redundancy analysis was coupled with spatial variograms generated by multiscale ordination to quantify the explained and residual variance at different spatial scales and within and between acoustic provinces. The amount of community variation below the sampling interval of the surveys (< 100 m) was estimated to be 36–59% of the total. Once adjusted for this small-scale variation, > 71% of the remaining variance was explained by the environmental and province variables. Furthermore, these variables effectively explained the spatial structure present in the infaunal community. Overall, no scale problems remained to compromise inferences, and unexplained infaunal community variation had no apparent spatial structure within the observational scale of the surveys (> 100 m), although small-scale gradients (< 100 m) below the observational scale may be present. PMID:29324746

  18. Are Health State Valuations from the General Public Biased? A Test of Health State Reference Dependency Using Self-assessed Health and an Efficient Discrete Choice Experiment.

    PubMed

    Jonker, Marcel F; Attema, Arthur E; Donkers, Bas; Stolk, Elly A; Versteegh, Matthijs M

    2017-12-01

    Health state valuations of patients and non-patients are not the same, whereas health state values obtained from general population samples are a weighted average of both. The latter constitutes an often-overlooked source of bias. This study investigates the resulting bias and tests for the impact of reference dependency on health state valuations using an efficient discrete choice experiment administered to a Dutch nationally representative sample of 788 respondents. A Bayesian discrete choice experiment design consisting of eight sets of 24 (matched pairwise) choice tasks was developed, with each set providing full identification of the included parameters. Mixed logit models were used to estimate health state preferences with respondents' own health included as an additional predictor. Our results indicate that respondents with impaired health worse than or equal to the health state levels under evaluation have approximately 30% smaller health state decrements. This confirms that reference dependency can be observed in general population samples and affirms the relevance of prospect theory in health state valuations. At the same time, the limited number of respondents with severe health impairments does not appear to bias social tariffs as obtained from general population samples. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Environmental swabs as a tool in norovirus outbreak investigation, including outbreaks on cruise ships.

    PubMed

    Boxman, Ingeborg L A; Dijkman, Remco; te Loeke, Nathalie A J M; Hägele, Geke; Tilburg, Jeroen J H C; Vennema, Harry; Koopmans, Marion

    2009-01-01

    In this study, we investigated whether environmental swabs can be used to demonstrate the presence of norovirus in outbreak settings. First, a procedure was set up based on viral RNA extraction using guanidinium isothiocyanate buffer and binding of nucleic acids to silica. Subsequently, environmental swabs were taken at 23 Dutch restaurants and four cruise ships involved in outbreaks of gastroenteritis. Outbreaks were selected based on clinical symptoms consistent with viral gastroenteritis and time between consumption of suspected food and onset of clinical symptoms (>12 h). Norovirus RNA was demonstrated by real-time reverse transcriptase PCR in 51 of 86 (59%) clinical specimens from 12 of 14 outbreaks (86%), in 13 of 90 (14%) food specimens from 4 of 18 outbreaks (22%), and in 48 of 119 (40%) swab specimens taken from 14 of 27 outbreaks (52%). Positive swab samples agreed with positive clinical samples in seven outbreaks, showing identical sequences. Furthermore, norovirus was detected on swabs taken from kitchen and bathroom surfaces in five outbreaks in which no clinical samples were collected and two outbreaks with negative fecal samples. The detection rate was highest for outbreaks associated with catered meals and lowest for restaurant-associated outbreaks. The use of environmental swabs may be a useful tool in addition to testing of food and clinical specimens, particularly when viral RNA is detected on surfaces used for food preparation.

  20. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  1. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  2. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  3. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  4. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  5. The ExoMars Sample Preparation and Distribution System

    NASA Astrophysics Data System (ADS)

    Schulte, Wolfgang; Hofmann, Peter; Baglioni, Pietro; Richter, Lutz; Redlich, Daniel; Notarnicola, Marco; Durrant, Stephen

    2012-07-01

    The Sample Preparation and Distribution System (SPDS) is a key element of the ESA ExoMars Rover. It is a set of complex mechanisms designed to receive Mars soil samples acquired from the subsurface with a drill, to crush them and to distribute the obtained soil powder to the scientific instruments of the `Pasteur Payload', in the Rover Analytical Laboratory Drawer (ALD). In particular, the SPDS consists of: (1) a Core Sample Handling System (CSHS), including a Core Sample Transportation Mechanism (CSTM) and a Blank Sample Dispenser; (2) a Crushing Station (CS); (3) a Powder Sample Dosing and Distribution System (PSDDS); and (4) a Powder Sample Handling System (PSHS), which is a carousel carrying pyrolysis ovens, a re-fillable sample container and a tool to flatten the powder sample surface. Kayser-Threde has developed, under contract with the ExoMars prime contractor Thales Alenia Space Italy, breadboards and an engineering model of the SPDS mechanisms. Tests of individual mechanisms, namely the CSTM, CS and PSDDS, were conducted both in laboratory ambient conditions and in a simulated Mars environment, using dedicated facilities. The SPDS functionalities and performances were measured and evaluated. In the course of 2011 the SPDS Dosing Station (part of the PSDDS) was also tested in simulated Mars gravity conditions during a parabolic flight campaign. By the time of the conference, an elegant breadboard of the Powder Sample Handling System will have been built and tested. The next step, planned by mid-2012, will be a complete end-to-end test of the sample handling and processing chain, combining all four SPDS mechanisms. The possibility of verifying interface and operational aspects between the SPDS and the ALD scientific instruments using the available instrument breadboards with the end-to-end set-up is currently being evaluated.
This paper illustrates the most recent design status of the SPDS mechanisms, summarizes the test results and highlights future development activities, including potential involvement of the ExoMars science experiments.

  6. Detection of Planetary Transits of the Star HD 209458 in the Hipparcos Data Set.

    PubMed

    Castellano; Jenkins; Trilling; Doyle; Koch

    2000-03-20

    A search of the Hipparcos satellite photometry data for the star HD 209458 reveals evidence for a planetary transit signature consistent with the planetary properties reported by Henry et al. and Charbonneau et al. and allows further refinement of the planet's orbital period. The long time baseline (about 2926 days or 830 periods) from the best Hipparcos transit-like event to the latest transit reported by Henry et al. for the night of 1999 November 15 (UT) allows for an orbital period determination of 3.524736 days with an uncertainty of 0.000045 days (3.9 s). The transit events observed by Charbonneau et al. fall at the interim times expected to within the errors of this newly derived period. A series of statistical tests was performed to assess the likelihood of these events occurring by chance. This was crucial given the ill-conditioned problem presented by the sparse sampling of the light curve and the non-Gaussian distribution of the points. Monte Carlo simulations using bootstrap methods with the actual Hipparcos HD 209458 data set indicate that the transit-like signals of the depth observed would only be produced by chance in 21 out of 1 million trials. The transit durations and depths obtained from the Hipparcos data are also consistent with those determined by Charbonneau et al. and Henry et al. within the limitations of the sampling intervals and photometric precision of the Hipparcos data.
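The period refinement rests on simple arithmetic: a long baseline divided by an integer number of elapsed orbits. A sketch with the round numbers quoted above; the per-epoch timing error below is an assumed value, chosen so that error / cycles reproduces the quoted ~0.000045-day uncertainty:

```python
# Period from baseline / integer cycle count, using the round numbers in
# the abstract; timing_error_days is an assumed combined epoch error.
baseline_days = 2926.0   # Hipparcos transit-like event to the 1999 Nov 15 transit
n_cycles = 830           # integer number of orbits spanned by the baseline

period = baseline_days / n_cycles                 # ~3.5253 d
timing_error_days = 0.037                         # assumed combined epoch error
period_uncertainty = timing_error_days / n_cycles

print(f"P ~ {period:.6f} d, sigma_P ~ {period_uncertainty:.6f} d")
```

Because the uncertainty scales as 1/n_cycles, an 830-orbit baseline turns a roughly 50-minute epoch error into a few-second period error, which is why the sparse Hipparcos photometry is still so constraining.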

  7. Solid film lubricants and thermal control coatings flown aboard the EOIM-3 MDA sub-experiment

    NASA Technical Reports Server (NTRS)

    Murphy, Taylor J.; David, Kaia E.; Babel, Hank W.

    1995-01-01

    Additional experimental data were desired to support the selection of candidate thermal control coatings and solid film lubricants for the McDonnell Douglas Aerospace (MDA) Space Station hardware. The third Evaluation of Oxygen Interactions With Materials Mission (EOIM-3) flight experiment presented an opportunity to study the effects of the low Earth orbit environment on thermal control coatings and solid film lubricants. MDA provided five solid film lubricants and two anodic thermal control coatings for EOIM-3. The lubricant sample set consisted of three solid film lubricants with organic binders, one solid film lubricant with an inorganic binder, and one solid film lubricant with no binder. The anodize coating sample set consisted of undyed sulfuric acid anodize and cobalt sulfide dyed sulfuric acid anodize, each on two different substrate aluminum alloys. The organic and inorganic binders in the solid film lubricants experienced erosion, and the lubricating pigments experienced oxidation. MDA is continuing to assess the effect of exposure to the low Earth orbit environment on the life and friction properties of the lubricants. Results to date support the design practice of shielding solid film lubricants from the low Earth orbit environment. Post-flight optical property analysis of the anodized specimens indicated that there were limited contamination effects and some atomic oxygen and ultraviolet radiation effects. These effects appeared to be within the values predicted by simulated ground testing and analysis of these materials, and they were different for each coating and substrate.

  8. Salivary testosterone levels in men at a U.S. sex club.

    PubMed

    Escasa, Michelle J; Casey, Jacqueline F; Gray, Peter B

    2011-10-01

    Vertebrate males commonly experience elevations in testosterone levels in response to sexual stimuli, such as presentation of a novel mating partner. Some previous human studies have shown that watching erotic movies increases testosterone levels in males, although studies measuring testosterone changes during actual sexual intercourse or masturbation have yielded mixed results. Small sample sizes, "unnatural" lab-based settings, and invasive techniques may help account for mixed human findings. Here, we investigated salivary testosterone levels in men watching (n = 26) versus participating (n = 18) in sexual activity at a large U.S. sex club. The present study entailed minimally invasive sample collection (measuring testosterone in saliva), a naturalistic setting, and a larger number of subjects than previous work to test three hypotheses related to men's testosterone responses to sexual stimuli. Subjects averaged 40 years of age and participated between 11:00 pm and 2:10 am. Consistent with expectations, results revealed that testosterone levels increased 36% among men during a visit to the sex club, with the magnitude of testosterone change significantly greater among participants (72%) compared with observers (11%). Contrary to expectation, men's testosterone changes were unrelated to their age. These findings were generally consistent with vertebrate studies indicating elevated male testosterone in response to sexual stimuli, but they also point out the importance of study context, since participation in sexual behavior had a stronger effect on testosterone increases in this study, unlike some previous human lab-based studies.

  9. The reliability of the German version of the Richards Campbell Sleep Questionnaire.

    PubMed

    Krotsetis, Susanne; Richards, Kathy C; Behncke, Anja; Köpke, Sascha

    2017-07-01

    The assessment of sleep quality in critically ill patients is a relevant factor of high-quality care. Despite the fact that sleep disturbances and insufficient sleep management carry an increased risk of severe morbidity for these patients, a translated and applicable instrument to evaluate sleep is not available for German-speaking intensive care settings. This study aimed to translate the Richards Campbell Sleep Questionnaire (RCSQ), a simple and validated instrument eligible for measuring sleep quality in critically ill patients, and subsequently to evaluate the internal consistency of the German version of the RCSQ. Furthermore, it also aimed to inquire into the perception of sleep in a sample of critically ill patients. The RCSQ was translated following established methodological standards. Data were collected cross-sectionally in a sample of 51 patients at 3 intensive care units at a university hospital in Germany. The German version of the RCSQ showed an overall internal consistency (Cronbach's alpha) of 0.88. The mean of the RCSQ in the sample was 47.00 (SD ± 27.57). Depth of sleep was rated the lowest and falling asleep again the highest of the RCSQ sleep items. The study demonstrated very good internal consistency of the German version of the RCSQ, allowing for its application in practice and research in German-speaking countries. Quality of sleep perception was generally low in this sample, emphasizing the need for enhanced care concepts regarding the sleep management of critically ill patients. Relevance to clinical practice: assessment of self-perception of sleep is crucial in order to plan an individually tailored care process. © 2017 British Association of Critical Care Nurses.
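The internal-consistency statistic reported here, Cronbach's alpha, is straightforward to compute from an item-score matrix: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with invented scores (five respondents, the five RCSQ items on a 0-100 scale; not the study data):

```python
import statistics

# Invented item-score matrix: rows are respondents, columns are the five
# RCSQ items on a 0-100 scale (illustrative values, not the study data).
scores = [
    [70, 65, 80, 60, 75],
    [40, 35, 30, 45, 38],
    [90, 85, 88, 92, 86],
    [55, 60, 50, 58, 62],
    [20, 25, 30, 22, 28],
]

k = len(scores[0])                                      # number of items
item_vars = [statistics.variance(col) for col in zip(*scores)]
total_var = statistics.variance([sum(row) for row in scores])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))
```

Alpha approaches 1 as the items covary strongly relative to their individual variances; the study's 0.88 therefore indicates high internal consistency.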

  10. Clusternomics: Integrative context-dependent clustering for heterogeneous datasets

    PubMed Central

    Wernisch, Lorenz

    2017-01-01

    Integrative clustering is used to identify groups of samples by jointly analysing multiple datasets describing the same set of biological samples, such as gene expression, copy number, methylation etc. Most existing algorithms for integrative clustering assume that there is a shared consistent set of clusters across all datasets, and most of the data samples follow this structure. However in practice, the structure across heterogeneous datasets can be more varied, with clusters being joined in some datasets and separated in others. In this paper, we present a probabilistic clustering method to identify groups across datasets that do not share the same cluster structure. The proposed algorithm, Clusternomics, identifies groups of samples that share their global behaviour across heterogeneous datasets. The algorithm models clusters on the level of individual datasets, while also extracting global structure that arises from the local cluster assignments. Clusters on both the local and the global level are modelled using a hierarchical Dirichlet mixture model to identify structure on both levels. We evaluated the model both on simulated and on real-world datasets. The simulated data exemplifies datasets with varying degrees of common structure. In such a setting Clusternomics outperforms existing algorithms for integrative and consensus clustering. In a real-world application, we used the algorithm for cancer subtyping, identifying subtypes of cancer from heterogeneous datasets. We applied the algorithm to TCGA breast cancer dataset, integrating gene expression, miRNA expression, DNA methylation and proteomics. The algorithm extracted clinically meaningful clusters with significantly different survival probabilities. We also evaluated the algorithm on lung and kidney cancer TCGA datasets with high dimensionality, again showing clinically significant results and scalability of the algorithm. PMID:29036190

  11. Clusternomics: Integrative context-dependent clustering for heterogeneous datasets.

    PubMed

    Gabasova, Evelina; Reid, John; Wernisch, Lorenz

    2017-10-01

    Integrative clustering is used to identify groups of samples by jointly analysing multiple datasets describing the same set of biological samples, such as gene expression, copy number, methylation etc. Most existing algorithms for integrative clustering assume that there is a shared consistent set of clusters across all datasets, and most of the data samples follow this structure. However in practice, the structure across heterogeneous datasets can be more varied, with clusters being joined in some datasets and separated in others. In this paper, we present a probabilistic clustering method to identify groups across datasets that do not share the same cluster structure. The proposed algorithm, Clusternomics, identifies groups of samples that share their global behaviour across heterogeneous datasets. The algorithm models clusters on the level of individual datasets, while also extracting global structure that arises from the local cluster assignments. Clusters on both the local and the global level are modelled using a hierarchical Dirichlet mixture model to identify structure on both levels. We evaluated the model both on simulated and on real-world datasets. The simulated data exemplifies datasets with varying degrees of common structure. In such a setting Clusternomics outperforms existing algorithms for integrative and consensus clustering. In a real-world application, we used the algorithm for cancer subtyping, identifying subtypes of cancer from heterogeneous datasets. We applied the algorithm to TCGA breast cancer dataset, integrating gene expression, miRNA expression, DNA methylation and proteomics. The algorithm extracted clinically meaningful clusters with significantly different survival probabilities. We also evaluated the algorithm on lung and kidney cancer TCGA datasets with high dimensionality, again showing clinically significant results and scalability of the algorithm.
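A heavily simplified illustration of the core idea: global clusters arise as the observed combinations of per-dataset ("local") assignments, so the datasets need not share one cluster structure. The real method infers the local labels with a hierarchical Dirichlet mixture; here they are simply given as hypothetical values:

```python
from collections import Counter

# Local cluster labels per dataset for six samples (hypothetical values;
# the real method infers these with a hierarchical Dirichlet mixture).
expression_labels = ["A", "A", "B", "B", "B", "A"]    # dataset 1
methylation_labels = ["x", "y", "x", "x", "y", "y"]   # dataset 2

# A global cluster is the combination of local assignments, so clusters
# joined in one dataset can be separated in the other.
global_clusters = list(zip(expression_labels, methylation_labels))
print(Counter(global_clusters))
```

With two local clusters per dataset, up to four global clusters can appear, which is how the model captures structure that is shared in one dataset but split in another.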

  12. Analysing the 21 cm signal from the epoch of reionization with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Shimabukuro, Hayato; Semelin, Benoit

    2017-07-01

    The 21 cm signal from the epoch of reionization should be observed within the next decade. While a simple statistical detection is expected with Square Kilometre Array (SKA) pathfinders, the SKA will hopefully produce a full 3D mapping of the signal. To extract from the observed data constraints on the parameters describing the underlying astrophysical processes, inversion methods must be developed. For example, the Markov Chain Monte Carlo method has been successfully applied. Here, we test another possible inversion method: artificial neural networks (ANNs). We produce a training set that consists of 70 individual samples. Each sample is made of the 21 cm power spectrum at different redshifts produced with the 21cmFast code plus the value of three parameters used in the seminumerical simulations that describe astrophysical processes. Using this set, we train the network to minimize the error between the parameter values it produces as an output and the true values. We explore the impact of the architecture of the network on the quality of the training. Then we test the trained network on the new set of 54 test samples with different values of the parameters. We find that the quality of the parameter reconstruction depends on the sensitivity of the power spectrum to the different parameters at a given redshift, that including thermal noise and sample variance decreases the quality of the reconstruction and that using the power spectrum at several redshifts as an input to the ANN improves the quality of the reconstruction. We conclude that ANNs are a viable inversion method whose main strength is that they require a sparse exploration of the parameter space and thus should be usable with full numerical simulations.
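The inversion setup above can be sketched as a small regression network mapping a power-spectrum vector to a parameter value. Everything below is synthetic and assumed for illustration (four made-up band powers, one fake parameter, a tiny single-hidden-layer network), standing in for the 21cmFast training set of 70 samples:

```python
import math
import random

random.seed(0)

# Synthetic stand-in for the training set: each sample maps a 21 cm
# power-spectrum vector (four made-up band powers) to one astrophysical
# parameter. Values are illustrative, not 21cmFast output.
def make_sample():
    theta = random.uniform(0.0, 1.0)              # hidden parameter
    spectrum = [theta * b + random.gauss(0.0, 0.01) for b in (0.5, 1.0, 1.5, 2.0)]
    return spectrum, theta

train = [make_sample() for _ in range(70)]        # 70 samples, as in the paper

# One-hidden-layer network: 4 inputs -> 3 tanh units -> 1 linear output.
H = 3
w1 = [[random.gauss(0.0, 0.5) for _ in range(4)] for _ in range(H)]
w2 = [random.gauss(0.0, 0.5) for _ in range(H)]

def forward(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return sum(v * h for v, h in zip(w2, hidden)), hidden

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in train) / len(train)

lr = 0.05
loss_start = mse()
for _ in range(300):                              # plain stochastic gradient descent
    for x, y in train:
        out, hidden = forward(x)
        err = out - y
        for j in range(H):
            grad_hidden = err * w2[j] * (1.0 - hidden[j] ** 2)
            w2[j] -= lr * err * hidden[j]
            for i in range(4):
                w1[j][i] -= lr * grad_hidden * x[i]
loss_end = mse()
print(f"MSE before: {loss_start:.4f}, after: {loss_end:.4f}")
```

The real analysis concatenates spectra from several redshifts into one input vector, which the abstract reports improves the parameter reconstruction.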

  13. The Second SeaWiFS HPLC Analysis Round-Robin Experiment (SeaHARRE-2)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Eight international laboratories specializing in the determination of marine pigment concentrations using high performance liquid chromatography (HPLC) were intercompared using in situ samples and a variety of laboratory standards. The field samples were collected primarily from eutrophic waters, although mesotrophic waters were also sampled to create a dynamic range in chlorophyll concentration spanning approximately two orders of magnitude (0.3–25.8 mg m-3). The intercomparisons were used to establish the following: a) the uncertainties in quantitating individual pigments and higher-order variables (sums, ratios, and indices); b) an evaluation of spectrophotometric versus HPLC uncertainties in the determination of total chlorophyll a; and c) the reduction in uncertainties as a result of applying quality assurance (QA) procedures associated with extraction, separation, injection, degradation, detection, calibration, and reporting (particularly limits of detection and quantitation). In addition, the remote sensing requirements for the in situ determination of total chlorophyll a were investigated to determine whether or not the average uncertainty for this measurement is being satisfied. The culmination of the activity was a validation of the round-robin methodology plus the development of the requirements for validating an individual HPLC method. The validation process includes the measurements required to initially demonstrate a pigment is validated, and the measurements that must be made during sample analysis to confirm a method remains validated. The so-called performance-based metrics developed here describe a set of thresholds for a variety of easily-measured parameters with a corresponding set of performance categories. The aggregate set of performance parameters and categories establish a) the overall performance capability of the method, and b) whether or not the capability is consistent with the required accuracy objectives.
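The limits of detection and quantitation mentioned among the QA procedures are conventionally taken as LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the standard deviation of blank (or low-level) responses and S is the calibration slope. A generic sketch with invented numbers, not SeaHARRE-2 data:

```python
import statistics

# Conventional (ICH-style) detection/quantitation limits from a
# calibration line. All numbers below are invented for illustration.
blank_responses = [0.8, 1.1, 0.9, 1.0, 1.2, 0.95]   # detector counts
sigma = statistics.stdev(blank_responses)
slope = 120.0                                       # counts per (mg m-3)

lod = 3.3 * sigma / slope                           # limit of detection
loq = 10.0 * sigma / slope                          # limit of quantitation
print(f"LOD = {lod:.4f} mg m-3, LOQ = {loq:.4f} mg m-3")
```

Reporting concentrations below the LOQ is one of the places where inter-laboratory uncertainty accumulates, which is why the round robin treats these limits as part of method validation.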

  14. Consistency of Angoff-Based Standard-Setting Judgments: Are Item Judgments and Passing Scores Replicable across Different Panels of Experts?

    ERIC Educational Resources Information Center

    Tannenbaum, Richard J.; Kannan, Priya

    2015-01-01

    Angoff-based standard setting is widely used, especially for high-stakes licensure assessments. Nonetheless, some critics have claimed that the judgment task is too cognitively complex for panelists, whereas others have explicitly challenged the consistency in (replicability of) standard-setting outcomes. Evidence of consistency in item judgments…

  15. The geochemical landscape of northwestern Wisconsin and adjacent parts of northern Michigan and Minnesota (geochemical data files)

    USGS Publications Warehouse

    Cannon, William F.; Woodruff, Laurel G.

    2003-01-01

    This data set consists of nine files of geochemical information on various types of surficial deposits in northwestern Wisconsin and immediately adjacent parts of Michigan and Minnesota. The files are presented in two formats: as dbase files in dbaseIV form and Microsoft Excel form. The data present multi-element chemical analyses of soils, stream sediments, and lake sediments. Latitude and longitude values are provided in each file so that the dbf files can be readily imported to GIS applications. Metadata files are provided in outline form, question and answer form and text form. The metadata includes information on procedures for sample collection, sample preparation, and chemical analyses including sensitivity and precision.

  16. Acetaminophen-cysteine adducts during therapeutic dosing and following overdose

    PubMed Central

    2011-01-01

    Background Acetaminophen-cysteine adducts (APAP-CYS) are a specific biomarker of acetaminophen exposure. APAP-CYS concentrations have been described in the setting of acute overdose, and a concentration >1.1 nmol/ml has been suggested as a marker of hepatic injury from acetaminophen overdose in patients with an ALT >1000 IU/L. However, the concentrations of APAP-CYS during therapeutic dosing, in cases of acetaminophen toxicity from repeated dosing and in cases of hepatic injury from non-acetaminophen hepatotoxins have not been well characterized. The objective of this study is to describe APAP-CYS concentrations in these clinical settings as well as to further characterize the concentrations observed following acetaminophen overdose. Methods Samples were collected during three clinical trials in which subjects received 4 g/day of acetaminophen and during an observational study of acetaminophen overdose patients. Trial 1 consisted of non-drinkers who received APAP for 10 days, Trial 2 consisted of moderate drinkers dosed for 10 days and Trial 3 included subjects who chronically abuse alcohol dosed for 5 days. Patients in the observational study were categorized by type of acetaminophen exposure (single or repeated). Serum APAP-CYS was measured using high pressure liquid chromatography with electrochemical detection. Results Trial 1 included 144 samples from 24 subjects; Trial 2 included 182 samples from 91 subjects and Trial 3 included 200 samples from 40 subjects. In addition, we collected samples from 19 subjects with acute acetaminophen ingestion, 7 subjects with repeated acetaminophen exposure and 4 subjects who ingested another hepatotoxin. The mean (SD) peak APAP-CYS concentrations for the Trials were: Trial 1- 0.4 (0.20) nmol/ml, Trial 2- 0.1 (0.09) nmol/ml and Trial 3- 0.3 (0.12) nmol/ml. APAP-CYS concentrations varied substantially among the patients with acetaminophen toxicity (0.10 to 27.3 nmol/ml). 
No subject had detectable APAP-CYS following exposure to a non-acetaminophen hepatotoxin. Conclusions Lower concentrations of APAP-CYS are detectable after exposure to therapeutic doses of acetaminophen and higher concentrations are detected after acute acetaminophen overdose and in patients with acetaminophen toxicity following repeated exposure. PMID:21401949

  17. Improved Dark Energy Constraints From ~ 100 New CfA Supernova Type Ia Light Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hicken, Malcolm (Harvard-Smithsonian Center for Astrophysics / Harvard University); Wood-Vasey, W. Michael

    2012-04-06

    We combine the CfA3 supernovae Type Ia (SN Ia) sample with samples from the literature to calculate improved constraints on the dark energy equation of state parameter, w. The CfA3 sample is added to the Union set of Kowalski et al. to form the Constitution set and, combined with a BAO prior, produces 1 + w = 0.013 +0.066/-0.068 (0.11 syst), consistent with the cosmological constant. The CfA3 addition makes the cosmologically useful sample of nearby SN Ia between 2.6 and 2.9 times larger than before, reducing the statistical uncertainty to the point where systematics play the largest role. We use four light-curve fitters to test for systematic differences: SALT, SALT2, MLCS2k2 (R_V = 3.1), and MLCS2k2 (R_V = 1.7). SALT produces high-redshift Hubble residuals with systematic trends versus color and larger scatter than MLCS2k2. MLCS2k2 overestimates the intrinsic luminosity of SN Ia with 0.7 < Δ < 1.2. MLCS2k2 with R_V = 3.1 overestimates host-galaxy extinction while R_V ≈ 1.7 does not. Our investigation is consistent with no Hubble bubble. We also find that, after light-curve correction, SN Ia in Scd/Sd/Irr hosts are intrinsically fainter than those in E/S0 hosts by 2σ, suggesting that they may come from different populations. We also find that SN Ia in Scd/Sd/Irr hosts have low scatter (0.1 mag) and reddening. Current systematic errors can be reduced by improving SN Ia photometric accuracy, by including the CfA3 sample to retrain light-curve fitters, by combining optical SN Ia photometry with near-infrared photometry to understand host-galaxy extinction, and by determining if different environments give rise to different intrinsic SN Ia luminosity after correction for light-curve shape and color.

  18. Improved Dark Energy Constraints from ~100 New CfA Supernova Type Ia Light Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hicken, Malcolm; Challis, Peter; Kirshner, Robert P.

    2009-08-01

    We combine the CfA3 supernovae Type Ia (SN Ia) sample with samples from the literature to calculate improved constraints on the dark energy equation of state parameter, w. The CfA3 sample is added to the Union set of Kowalski et al. to form the Constitution set and, combined with a BAO prior, produces 1 + w = 0.013 +0.066/-0.068 (0.11 syst), consistent with the cosmological constant. The CfA3 addition makes the cosmologically useful sample of nearby SN Ia between 2.6 and 2.9 times larger than before, reducing the statistical uncertainty to the point where systematics play the largest role. We use four light-curve fitters to test for systematic differences: SALT, SALT2, MLCS2k2 (R_V = 3.1), and MLCS2k2 (R_V = 1.7). SALT produces high-redshift Hubble residuals with systematic trends versus color and larger scatter than MLCS2k2. MLCS2k2 overestimates the intrinsic luminosity of SN Ia with 0.7 < Δ < 1.2. MLCS2k2 with R_V = 3.1 overestimates host-galaxy extinction while R_V ≈ 1.7 does not. Our investigation is consistent with no Hubble bubble. We also find that, after light-curve correction, SN Ia in Scd/Sd/Irr hosts are intrinsically fainter than those in E/S0 hosts by 2σ, suggesting that they may come from different populations. We also find that SN Ia in Scd/Sd/Irr hosts have low scatter (0.1 mag) and reddening. Current systematic errors can be reduced by improving SN Ia photometric accuracy, by including the CfA3 sample to retrain light-curve fitters, by combining optical SN Ia photometry with near-infrared photometry to understand host-galaxy extinction, and by determining if different environments give rise to different intrinsic SN Ia luminosity after correction for light-curve shape and color.

  19. Cross-national Consistency in the Relationship Between Bullying Behaviors and Psychosocial Adjustment

    PubMed Central

    Nansel, Tonja R.; Craig, Wendy; Overpeck, Mary D.; Saluja, Gitanjali; Ruan, W. June

    2008-01-01

    Objective To determine whether the relationship between bullying and psychosocial adjustment is consistent across countries by standard measures and methods. Design Cross-sectional self-report surveys were obtained from nationally representative samples of students in 25 countries. Involvement in bullying, as bully, victim, or both bully and victim, was assessed. Setting Surveys were conducted at public and private schools throughout the participating countries. Participants Participants included all consenting students in sampled classrooms, for a total of 113200 students at average ages of 11.5, 13.5, and 15.5 years. Main Outcome Measures Psychosocial adjustment dimensions assessed included health problems, emotional adjustment, school adjustment, relationships with classmates, alcohol use, and weapon carrying. Results Involvement in bullying varied dramatically across countries, ranging from 9% to 54% of youth. However, across all countries, involvement in bullying was associated with poorer psychosocial adjustment (P<.05). In all or nearly all countries, bullies, victims, and bully-victims reported greater health problems and poorer emotional and social adjustment. Victims and bully-victims consistently reported poorer relationships with classmates, whereas bullies and bully-victims reported greater alcohol use and weapon carrying. Conclusions The association of bullying with poorer psychosocial adjustment is remarkably similar across countries. Bullying is a critical issue for the health of youth internationally. PMID:15289243

  20. Consistency of biological networks inferred from microarray and sequencing data.

    PubMed

    Vinciotti, Veronica; Wit, Ernst C; Jansen, Rick; de Geus, Eco J C N; Penninx, Brenda W J H; Boomsma, Dorret I; 't Hoen, Peter A C

    2016-06-24

    Sparse Gaussian graphical models are popular for inferring biological networks, such as gene regulatory networks. In this paper, we investigate the consistency of these models across different data platforms, such as microarray and next generation sequencing, on the basis of a rich dataset containing samples that are profiled under both techniques as well as a large set of independent samples. Our analysis shows that individual node variances can have a remarkable effect on the connectivity of the resulting network. Their inconsistency across platforms, and the fact that the variability level of a node may not be linked to its regulatory role, mean that failing to scale the data prior to the network analysis leads to networks that are not reproducible across different platforms and that may be misleading. Moreover, we show how the reproducibility of networks across different platforms is significantly higher if networks are summarised in terms of enrichment amongst functional groups of interest, such as pathways, rather than at the level of individual edges. Careful pre-processing of transcriptional data and summaries of networks beyond individual edges can improve the consistency of network inference across platforms. However, caution is needed at this stage in the (over)interpretation of gene regulatory networks inferred from biological data.
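The scaling step the authors recommend, standardising each node to unit variance before inferring the network so that platform-specific variance levels do not drive connectivity, can be sketched as follows (a toy expression matrix with invented values, not the microarray/sequencing data):

```python
import statistics

# Toy expression matrix: rows are samples, columns are genes (nodes).
# Column variances differ wildly, as they might between platforms.
samples = [
    [10.0, 0.1, 5.0],
    [12.0, 0.3, 4.0],
    [9.0, 0.2, 6.0],
    [11.0, 0.4, 5.5],
]

def standardise(matrix):
    """Centre each column and rescale it to unit (sample) variance."""
    cols = list(zip(*matrix))
    means = [statistics.mean(c) for c in cols]
    sds = [statistics.stdev(c) for c in cols]
    return [[(x - m) / s for x, m, s in zip(row, means, sds)] for row in matrix]

scaled = standardise(samples)
for col in zip(*scaled):
    print(round(statistics.stdev(col), 6))  # each node now has unit variance
```

After this step the sparsity penalty of a graphical lasso treats all nodes on an equal footing, which is what makes the inferred edges comparable across platforms.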

  1. Salivary characteristics and dental caries: Evidence from general dental practices

    PubMed Central

    Cunha-Cruz, Joana; Scott, JoAnna; Rothen, Marilynn; Mancl, Lloyd; Lawhorn, Timothy; Brossel, Kenneth; Berg, Joel

    2013-01-01

    Background Saliva is one of the intraoral host factors that influence caries development. The authors conducted a study to investigate whether salivary characteristics are associated with recent dental caries experience. Methods Dentist-investigators and dental staff members collected data pertaining to a two-year cumulative incidence of dental caries (previous 24 months) and salivary characteristics during baseline assessment in an ongoing longitudinal study. The systematic random sample consisted of patients (n = 1,763) visiting general dental practices (n = 63) within the Northwest Practice-based REsearch Collaborative in Evidence-based DENTistry (PRECEDENT). The authors estimated adjusted rate ratios (RRs) by using generalized estimating equations log-linear regression to relate salivary characteristics to coronal carious lesions into dentin. Results Low resting pH (≤ 6.0) in the overall sample and low stimulated salivary flow rate (≤ 0.6 milliliter/minute) in older adults (≥ 65 years old) were associated with increased dental caries (RR, 1.6; 95 percent confidence interval [CI], 1.1–2.2; RR, 2.4; 95 percent CI, 1.5–3.8, respectively). Low buffering capacity was associated with decreased dental caries in children and adolescents (RR, 0.3; 95 percent CI, 0.1–1.0; RR, 0.2; 95 percent CI, 0.1–0.7, respectively). A thick, sticky or frothy salivary consistency also was associated with decreased dental caries in adults (RR, 0.6; 95 percent CI, 0.4–1.0). Associations between other salivary characteristics and dental caries for the overall sample and within each age group were not statistically significant. Conclusions Salivary characteristics were associated weakly with previous dental caries experience, but the authors did not find consistent trends among the three age groups. Different salivary characteristics were associated with an increased caries experience in older adults and a lowered caries experience in children and adolescents and adults. 
Practical Implications Further investigations are needed in this population setting to understand the study’s conflicting results. The study findings cannot support the use of salivary tests to determine caries risk in actual clinical settings. PMID:23633704

  2. Trends in groundwater quality in principal aquifers of the United States, 1988-2012

    USGS Publications Warehouse

    Lindsey, Bruce D.; Rupert, Michael G.

    2014-01-01

The U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) Program analyzed trends in groundwater quality throughout the nation for the sampling period of 1988-2012. Trends were determined for networks (sets of wells routinely monitored by the USGS) for a subset of constituents by statistical analysis of paired water-quality measurements collected on a near-decadal time scale. The data set for chloride, dissolved solids, and nitrate consisted of 1,511 wells in 67 networks, whereas the data set for methyl tert-butyl ether (MTBE) consisted of 1,013 wells in 46 networks. The 25 principal aquifers represented by these networks account for about 75 percent of withdrawals of groundwater used for drinking-water supply for the nation. Statistically significant changes in chloride, dissolved-solids, or nitrate concentrations were found in many well networks over a decadal period. Concentrations increased significantly in 48 percent of networks for chloride, 42 percent of networks for dissolved solids, and 21 percent of networks for nitrate. Chloride, dissolved solids, and nitrate concentrations decreased significantly in 3, 3, and 10 percent of the networks, respectively. The magnitude of change in concentrations was typically small in most networks; however, the magnitude of change in networks with statistically significant increases was typically much larger than the magnitude of change in networks with statistically significant decreases. The largest increases of chloride concentrations were in urban areas in the northeastern and north central United States. The largest increases of nitrate concentrations were in networks in agricultural areas. Statistical analysis showed 42 of the 46 networks had no statistically significant changes in MTBE concentrations. The four networks with statistically significant changes in MTBE concentrations were in the northeastern United States, where MTBE was widely used. 
Two networks had increasing concentrations, and two networks had decreasing concentrations. Production and use of MTBE peaked in about 2000 and has been effectively banned in many areas since about 2006. The two networks that had increasing concentrations were sampled for the second time close to the peak of MTBE production, whereas the two networks that had decreasing concentrations were sampled for the second time 10 years after the peak of MTBE production.
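As a minimal illustration of decadal paired-trend testing of the kind described here (the well values below are invented, and a simple sign test stands in for the statistical analysis used in the USGS study):

```python
from math import comb

# Hypothetical chloride concentrations (mg/L) from the same eight wells
# sampled roughly a decade apart.
first_decade = [12.0, 8.5, 15.2, 9.9, 11.3, 14.8, 10.1, 13.6]
second_decade = [14.1, 8.2, 17.9, 11.5, 12.0, 16.3, 10.9, 15.8]

n = len(first_decade)
increases = sum(b > a for a, b in zip(first_decade, second_decade))

# Two-sided binomial sign test of H0: P(increase) = 0.5.
k = max(increases, n - increases)
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** (n - 1)
```

Here 7 of 8 wells increased yet p ≈ 0.07, which illustrates why a network needs enough paired wells before a decadal change is declared statistically significant.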

  3. Global High Resolution Mineral Maps Of The Moon Using Data From the Kaguya Multiband Imager and LRO Diviner Lunar Radiometer

    NASA Astrophysics Data System (ADS)

    Lucey, P. G.; Lemelin, M.; Ohtake, M.; Gaddis, L. R.; Greenhagen, B. T.; Yamamoto, S.; Hare, T. M.; Taylor, J.; Martel, L.; Norman, J.

    2016-12-01

We combine visible and near-IR multispectral data from the Kaguya Multiband Imager (MI) with thermal infrared multispectral data from the LRO Diviner Lunar Radiometer Experiment to produce global mineral abundance data at 60-m resolution. The base data set applies a radiative transfer mixing model to the Kaguya MI data to produce global maps of plagioclase, low-Ca pyroxene, high-Ca pyroxene and olivine. Diviner thermal multispectral data are highly sensitive to the ratio of plagioclase to mafic minerals and provide independent data to both validate and improve confidence in the derived mineral abundances. The data set is validated using a new set of mineral abundances derived for lunar soils from all lunar sampling sites resolvable using MI data. Modal abundances are derived using X-ray diffraction patterns analyzed with quantitative Rietveld analysis. Modal abundances were derived from 124 soils from 47 individual Apollo sampling stations. Some individual soil locations within sampling stations can be resolved, increasing the total number of resolved locations to 56. With quantitative mineral abundances we can examine the distribution of classically defined lunar rock types in unprecedented detail. In the Feldspathic Highlands Terrane (FHT) the crust is dominated in surface area by noritic anorthosite, consistent with a highly mixed composition. Classically defined anorthosite is widespread in the FHT, but much less abundant than the mafic anorthosites. The Procellarum KREEP Terrane and the South Pole Aitken Basin are more noritic than the FHT, as previously recognized, with abundant norite exposed. While dunite is not found, varieties of troctolitic rocks are widespread in basin rings, especially Crisium, Humorum and Moscoviense, and also occur in the core of the FHT. Only troctolites and anorthosites appear consistently concentrated in basin rings. 
We have barely scratched the surface of the full resolution data, but have completed an inventory of rock types on basin rings and find in most cases they are dominated by mixed anorthositic rocks similar to the rest of the crust suggesting the rings may be partly mantled by background noritic anorthosite. The major exception is Orientale with its highly anorthositic inner ring.

  4. A novel atmospheric tritium sampling system

    NASA Astrophysics Data System (ADS)

    Qin, Lailai; Xia, Zhenghai; Gu, Shaozhong; Zhang, Dongxun; Bao, Guangliang; Han, Xingbo; Ma, Yuhua; Deng, Ke; Liu, Jiayu; Zhang, Qin; Ma, Zhaowei; Yang, Guo; Liu, Wei; Liu, Guimin

    2018-06-01

The health hazard of tritium depends on its chemical form, so sampling the different chemical forms of tritium simultaneously is significant. Here a novel atmospheric tritium sampling system (TS-212) was developed to collect tritiated water (HTO), tritiated hydrogen (HT) and tritiated methane (CH3T) simultaneously. It consisted of an air inlet system, three parallel-connected sampling channels, a hydrogen supply module, a methane supply module and a remote control system. It operated at air flow rates of 1 L/min to 5 L/min, with the catalyst furnace temperature at 200 °C for HT sampling and 400 °C for CH3T sampling. Conversion rates of both HT and CH3T to HTO were greater than 99%. The collection efficiency of the two-stage trap sets for HTO was greater than 96% over 12 h of operation without blockage. The overall collection efficiencies of TS-212 are therefore greater than 95% for each chemical form of environmental tritium. In addition, the remote control system makes sampling more intelligent and reduces the operator's workload. Based on the performance parameters described above, TS-212 can be used to sample atmospheric tritium in its different chemical forms.
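The overall figure quoted for TS-212 follows from multiplying the two stage efficiencies reported in the abstract; a one-line check:

```python
# Catalytic conversion of HT/CH3T to HTO, then two-stage water-vapor
# trapping (both values are the lower bounds reported for TS-212).
conversion_rate = 0.99
trapping_efficiency = 0.96

# Overall collection efficiency for the oxidized channels: ~0.9504,
# consistent with the ">95%" overall figure.
overall_efficiency = conversion_rate * trapping_efficiency
```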

  5. Automated protein identification by the combination of MALDI MS and MS/MS spectra from different instruments.

    PubMed

    Levander, Fredrik; James, Peter

    2005-01-01

    The identification of proteins separated on two-dimensional gels is most commonly performed by trypsin digestion and subsequent matrix-assisted laser desorption ionization (MALDI) with time-of-flight (TOF). Recently, atmospheric pressure (AP) MALDI coupled to an ion trap (IT) has emerged as a convenient method to obtain tandem mass spectra (MS/MS) from samples on MALDI target plates. In the present work, we investigated the feasibility of using the two methodologies in line as a standard method for protein identification. In this setup, the high mass accuracy MALDI-TOF spectra are used to calibrate the peptide precursor masses in the lower mass accuracy AP-MALDI-IT MS/MS spectra. Several software tools were developed to automate the analysis process. Two sets of MALDI samples, consisting of 142 and 421 gel spots, respectively, were analyzed in a highly automated manner. In the first set, the protein identification rate increased from 61% for MALDI-TOF only to 85% for MALDI-TOF combined with AP-MALDI-IT. In the second data set the increase in protein identification rate was from 44% to 58%. AP-MALDI-IT MS/MS spectra were in general less effective than the MALDI-TOF spectra for protein identification, but the combination of the two methods clearly enhanced the confidence in protein identification.
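The calibration step described above, using high-accuracy MALDI-TOF masses to correct the lower-accuracy AP-MALDI-IT precursor masses, can be sketched as a linear refit (all m/z values and error terms below are invented for illustration):

```python
import numpy as np

# High-accuracy MALDI-TOF peptide masses (hypothetical).
tof_mz = np.array([842.5094, 1045.5642, 1296.6848, 1479.7954, 2211.1040])

# Matching ion-trap precursor readings with a small systematic gain error,
# a constant offset, and a little noise (all made up).
noise = np.array([0.02, -0.03, 0.01, -0.02, 0.03])
it_mz = tof_mz * 1.0004 + 0.25 + noise

# Fit a linear correction mapping ion-trap masses onto the TOF mass scale.
slope, offset = np.polyfit(it_mz, tof_mz, 1)

def recalibrate(mz):
    """Apply the fitted linear correction to ion-trap precursor masses."""
    return slope * mz + offset

residuals = recalibrate(it_mz) - tof_mz
max_residual = float(np.max(np.abs(residuals)))
```

After the fit, the remaining error is dominated by the random noise rather than the systematic gain/offset drift, which is the point of anchoring the lower-accuracy precursor masses to the TOF scale.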

  6. Makeup and uses of a basic magnet laboratory for characterizing high-temperature permanent magnets

    NASA Technical Reports Server (NTRS)

    Niedra, Janis M.; Schwarze, Gene E.

    1991-01-01

    A set of instrumentation for making basic magnetic measurements was assembled in order to characterize high intrinsic coercivity, rare earth permanent magnets with respect to short term demagnetization resistance and long term aging at temperatures up to 300 C. The major specialized components of this set consist of a 13 T peak field, capacitor discharge pulse magnetizer; a 10 in. pole size, variable gap electromagnet; a temperature controlled oven equipped with iron cobalt pole piece extensions and a removable paddle that carries the magnetization and field sensing coils; associated electronic integrators; and sensor standards for field intensity H and magnetic moment M calibration. A 1 cm cubic magnet sample, carried by the paddle, fits snugly between the pole piece extensions within the electrically heated aluminum oven, where fields up to 3.2 T can be applied by the electromagnet at temperatures up to 300 C. A sample set of demagnetization data for the high energy Sm2Co17 type of magnet is given for temperatures up to 300 C. These data are reduced to the temperature dependence of the M-H knee field and of the field for a given magnetic induction swing, and they are interpreted to show the limits of safe operation.

  7. Determination of longevities, chamber building rates and growth functions for Operculina complanata from long term cultivation experiments

    NASA Astrophysics Data System (ADS)

Woeger, Julia; Kinoshita, Shunichi; Eder, Wolfgang; Briguglio, Antonino; Hohenegger, Johann

    2016-04-01

Operculina complanata was collected at 20 and 50 m depth around Sesoko Island in Okinawa, Japan's southernmost prefecture, in a series of monthly samplings over a period of 16 months (Apr. 2014-July 2015). A minimum of 8 specimens (4 among the smallest and 4 among the largest) per sampling were cultured in a long-term experiment set up to approximate conditions in the field as closely as possible. A setup allowing recognition of individual specimens enabled consistent documentation of chamber formation, which, in combination with μ-CT scanning after the investigation period, permitted the assignment of growth steps to specific time periods. These data were used to fit various mathematical models describing growth (exponential, logistic, generalized logistic and Gompertz functions) and chamber building rate (Michaelis-Menten and Bertalanffy functions) of Operculina complanata. The mathematically retrieved maximum lifespan and mean chamber building rate found in cultured Operculina complanata were further compared to first results obtained by the simultaneously conducted "natural laboratory" approach. Even though these comparisons hint at somewhat stunted growth and truncated life spans of Operculina complanata in culture, they represent a possibility to assess and improve the quality of further cultivation setups, opening new prospects for a better understanding of their theoretical niches.
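For reference, the growth and chamber-building-rate models named above have simple closed forms; a sketch with hypothetical parameter values (the fitted values from the cultures are not reproduced here):

```python
import numpy as np

def gompertz(t, asymptote, b, c):
    """Gompertz growth: size approaches `asymptote`; b sets the initial
    displacement and c the growth rate."""
    return asymptote * np.exp(-b * np.exp(-c * t))

def michaelis_menten(t, vmax, k):
    """Michaelis-Menten chamber building rate: saturates at vmax; k is the
    age at which half the maximum rate is reached."""
    return vmax * t / (k + t)

t = np.linspace(0.0, 400.0, 9)  # age in days (hypothetical range)
sizes = gompertz(t, asymptote=10.0, b=3.0, c=0.02)  # e.g. test diameter, mm
rates = michaelis_menten(t, vmax=2.0, k=50.0)       # e.g. chambers per step

sizes_monotone = bool(np.all(np.diff(sizes) > 0))
rates_monotone = bool(np.all(np.diff(rates) > 0))
```

Both curves are monotone and saturating, which is what makes them natural candidates for the observed slow-down of growth and chamber formation with age.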

  8. Shrinkage covariance matrix approach based on robust trimmed mean in gene sets detection

    NASA Astrophysics Data System (ADS)

    Karjanto, Suryaefiza; Ramli, Norazan Mohamed; Ghani, Nor Azura Md; Aripin, Rasimah; Yusop, Noorezatty Mohd

    2015-02-01

Microarray technology involves placing an orderly arrangement of thousands of gene sequences in a grid on a suitable surface. The technology has enabled novel discoveries since its development and has attracted increasing attention among researchers. Its widespread use is largely due to its ability to analyse thousands of genes simultaneously, in a massively parallel manner, in one experiment; hence, it provides valuable knowledge on gene interaction and function. A microarray data set typically consists of tens of thousands of genes (variables) measured on just dozens of samples, owing to various constraints. As a result, the sample covariance matrix in Hotelling's T2 statistic is not positive definite and becomes singular, so it cannot be inverted. In this research, Hotelling's T2 statistic is combined with a shrinkage approach as an alternative way to estimate the covariance matrix when detecting significant gene sets. The shrinkage covariance matrix overcomes the singularity problem by converting the unbiased sample estimator into an improved, biased but invertible one. A robust trimmed mean is integrated into the shrinkage matrix to reduce the influence of outliers and consequently increase its efficiency. The performance of the proposed method is measured using several simulation designs. The results are expected to outperform existing techniques under many of the tested conditions.
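A numpy sketch of the two ingredients: shrinking the covariance toward a scaled identity so it becomes invertible when genes outnumber samples, and a robust trimmed mean. A fixed shrinkage intensity is used here for simplicity; the paper's estimator and simulation designs are not reproduced.

```python
import numpy as np

def trimmed_mean(x, prop=0.1):
    """Mean after symmetrically trimming a proportion of extreme values."""
    x = np.sort(np.asarray(x))
    k = int(prop * len(x))
    return x[k:len(x) - k].mean() if k else x.mean()

def shrinkage_covariance(X, lam=0.2):
    """Shrink the sample covariance toward a scaled identity target:
    (1 - lam) * S + lam * mu * I, positive definite for lam > 0 even
    when variables outnumber samples."""
    S = np.cov(X, rowvar=False)
    mu = np.trace(S) / S.shape[0]
    return (1 - lam) * S + lam * mu * np.eye(S.shape[0])

# "Large p, small n": 40 genes, 12 samples -- the sample covariance
# matrix is singular, but the shrunk estimate is invertible.
rng = np.random.default_rng(1)
X = rng.normal(size=(12, 40))
S = np.cov(X, rowvar=False)
S_shrunk = shrinkage_covariance(X)

rank_S = int(np.linalg.matrix_rank(S))       # < 40: singular
min_eig = float(np.linalg.eigvalsh(S_shrunk).min())  # > 0: invertible
```

The invertible `S_shrunk` is what makes a Hotelling-type statistic computable at all in this regime; the trimmed mean replaces the ordinary mean wherever outlying expression values would otherwise dominate.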

  9. PERSONNEL NEUTRON DOSIMETER

    DOEpatents

    Fitzgerald, J.J.; Detwiler, C.G. Jr.

    1960-05-24

    A description is given of a personnel neutron dosimeter capable of indicating the complete spectrum of the neutron dose received as well as the dose for each neutron energy range therein. The device consists of three sets of indium foils supported in an aluminum case. The first set consists of three foils of indium, the second set consists of a similar set of indium foils sandwiched between layers of cadmium, whereas the third set is similar to the second set but is sandwiched between layers of polyethylene. By analysis of all the foils the neutron spectrum and the total dose from neutrons of all energy levels can be ascertained.
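The "analysis of all the foils" amounts to unfolding coarse energy-group fluences from the measured foil activities; with an illustrative (entirely hypothetical) response matrix this is a 3x3 linear solve:

```python
import numpy as np

# Hypothetical 3x3 response matrix: activation of each foil set (bare
# indium, Cd-covered, Cd + polyethylene) per unit fluence in each of three
# coarse neutron-energy groups (thermal, epithermal, fast).
R = np.array([
    [0.80, 0.15, 0.05],  # bare indium responds mainly to thermal neutrons
    [0.02, 0.70, 0.28],  # the cadmium cover cuts the thermal response
    [0.05, 0.25, 0.70],  # polyethylene moderates fast neutrons
])

# Measured activities of the three foil sets (made-up numbers).
activities = np.array([4.1, 2.9, 3.3])

# Unfold the group fluences by solving R @ phi = activities.
phi = np.linalg.solve(R, activities)

reconstruction_ok = bool(np.allclose(R @ phi, activities))
all_positive = bool(np.all(phi > 0))
```

With the group fluences in hand, the total dose follows by weighting each group with its dose-conversion factor, which is the sense in which the dosimeter yields both a spectrum and a total dose.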

  10. Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Li, X.; Xiao, W.

    2018-05-01

Increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades, and increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Because of inconsistent training sites and training samples, traditional pixel-based image classification methods cannot achieve comparable results across different organizations. Object-oriented image classification techniques show great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used to fulfill this requirement. Firstly, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Secondly, we performed the multi-scale segmentation procedure, taking the scale, hue, shape, compactness and smoothness of the image into account to obtain appropriate parameters; using a top-down region-merging algorithm starting from the single-pixel level, the optimal texture segmentation scale for the different feature types was confirmed. Then, the segmented objects were used as classification units, for which spectral information (mean, maximum, minimum, brightness and normalized values), spatial features (area, length, tightness and shape rule) and texture features (mean, variance and entropy of the image objects) were calculated as classification features of the training samples. Based on the reference images and on-the-spot sampling points, typical training samples were selected uniformly and randomly for each type of ground object, and the value ranges of the spectral, texture and spatial characteristics of each feature type in each feature layer were used to create the decision tree repository. 
Finally, with the help of high-resolution reference images, a random sampling method was used for the field investigation, achieving an overall accuracy of 90.31% with a Kappa coefficient of 0.88. The classification method based on decision tree threshold values and the rule set developed from the repository outperforms the results obtained from the traditional methodology. Our decision-tree-repository and rule-set based object-oriented classification technique is an effective method for producing comparable and consistent wetland data sets.
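The rule-set idea, mapping value ranges of per-object features to classes, can be caricatured in a few lines (feature names, thresholds and classes below are hypothetical, not the paper's actual repository):

```python
# Each rule maps value ranges of object features (mean NIR reflectance,
# texture entropy -- all hypothetical) to a wetland class; rules are
# evaluated in order, mirroring a decision-tree threshold cascade.
RULES = [
    # (class label, {feature: (low, high)})
    ("open water", {"mean_nir": (0.00, 0.15), "entropy": (0.0, 2.0)}),
    ("mudflat",    {"mean_nir": (0.15, 0.35), "entropy": (0.0, 3.0)}),
    ("marsh",      {"mean_nir": (0.35, 0.80), "entropy": (2.0, 6.0)}),
]

def classify(obj):
    """Return the first rule whose feature ranges all contain the object."""
    for label, ranges in RULES:
        if all(lo <= obj[f] < hi for f, (lo, hi) in ranges.items()):
            return label
    return "unclassified"

segment = {"mean_nir": 0.08, "entropy": 1.1}  # one segmented image object
```

Because the rules live in a repository rather than in per-analyst training sites, two organizations applying the same rule set to the same features obtain the same labels, which is the reproducibility argument the abstract makes.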

  11. STBase: one million species trees for comparative biology.

    PubMed

    McMahon, Michelle M; Deepak, Akshay; Fernández-Baca, David; Boss, Darren; Sanderson, Michael J

    2015-01-01

    Comprehensively sampled phylogenetic trees provide the most compelling foundations for strong inferences in comparative evolutionary biology. Mismatches are common, however, between the taxa for which comparative data are available and the taxa sampled by published phylogenetic analyses. Moreover, many published phylogenies are gene trees, which cannot always be adapted immediately for species level comparisons because of discordance, gene duplication, and other confounding biological processes. A new database, STBase, lets comparative biologists quickly retrieve species level phylogenetic hypotheses in response to a query list of species names. The database consists of 1 million single- and multi-locus data sets, each with a confidence set of 1000 putative species trees, computed from GenBank sequence data for 413,000 eukaryotic taxa. Two bodies of theoretical work are leveraged to aid in the assembly of multi-locus concatenated data sets for species tree construction. First, multiply labeled gene trees are pruned to conflict-free singly-labeled species-level trees that can be combined between loci. Second, impacts of missing data in multi-locus data sets are ameliorated by assembling only decisive data sets. Data sets overlapping with the user's query are ranked using a scheme that depends on user-provided weights for tree quality and for taxonomic overlap of the tree with the query. Retrieval times are independent of the size of the database, typically a few seconds. Tree quality is assessed by a real-time evaluation of bootstrap support on just the overlapping subtree. Associated sequence alignments, tree files and metadata can be downloaded for subsequent analysis. STBase provides a tool for comparative biologists interested in exploiting the most relevant sequence data available for the taxa of interest. 
It may also serve as a prototype for future species tree oriented databases and as a resource for assembly of larger species phylogenies from precomputed trees.
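The ranking scheme described, user-provided weights for tree quality and for taxonomic overlap with the query, can be sketched as a weighted score (dataset records and weights below are hypothetical; this is not STBase's actual scoring code):

```python
def rank_datasets(datasets, query, w_quality=0.5, w_overlap=0.5):
    """Sort candidate data sets by a weighted sum of tree quality (0-1)
    and the fraction of query species present in the tree."""
    query = set(query)

    def score(ds):
        overlap = len(query & ds["taxa"]) / len(query)
        return w_quality * ds["quality"] + w_overlap * overlap

    return sorted(datasets, key=score, reverse=True)

# Two toy candidates: one high-quality tree with partial overlap, one
# lower-quality tree covering the whole query.
datasets = [
    {"name": "locus_A", "quality": 0.9, "taxa": {"sp1", "sp2"}},
    {"name": "locus_B", "quality": 0.5, "taxa": {"sp1", "sp2", "sp3", "sp4"}},
]
query = ["sp1", "sp2", "sp3", "sp4"]
best = rank_datasets(datasets, query)[0]
```

With equal weights the fully overlapping tree wins; shifting all weight onto quality reverses the ranking, which is why the weights are left to the user.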

  12. Ensemble coding of face identity is present but weaker in congenital prosopagnosia.

    PubMed

    Robson, Matthew K; Palermo, Romina; Jeffery, Linda; Neumann, Markus F

    2018-03-01

Individuals with congenital prosopagnosia (CP) are impaired at identifying individual faces but do not appear to show impairments in extracting the average identity from a group of faces (known as ensemble coding). However, possible deficits in ensemble coding in a previous study (n = 4 CPs) may have been masked because CPs relied on pictorial (image) cues rather than identity cues. Here we asked whether a larger sample of CPs (n = 11) would show intact ensemble coding of identity when availability of image cues was minimised. Participants viewed a "set" of four faces and then judged whether a subsequent individual test face, either an exemplar or a "set average", was in the preceding set. Ensemble coding occurred when matching (vs. mismatching) averages were mistakenly endorsed as set members. We assessed both image- and identity-based ensemble coding, by varying whether test faces were either the same or different images of the identities in the set. CPs showed significant ensemble coding in both tasks, indicating that their performance was independent of image cues. As a group, CPs' ensemble coding was weaker than controls in both tasks, consistent with evidence that perceptual processing of face identity is disrupted in CP. This effect was driven by CPs (n = 3) who, in addition to having impaired face memory, also performed particularly poorly on a measure of face perception (CFPT). Future research, using larger samples, should examine whether deficits in ensemble coding may be restricted to CPs who also have substantial face perception deficits. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Training set optimization and classifier performance in a top-down diabetic retinopathy screening system

    NASA Astrophysics Data System (ADS)

    Wigdahl, J.; Agurto, C.; Murray, V.; Barriga, S.; Soliz, P.

    2013-03-01

    Diabetic retinopathy (DR) affects more than 4.4 million Americans age 40 and over. Automatic screening for DR has shown to be an efficient and cost-effective way to lower the burden on the healthcare system, by triaging diabetic patients and ensuring timely care for those presenting with DR. Several supervised algorithms have been developed to detect pathologies related to DR, but little work has been done in determining the size of the training set that optimizes an algorithm's performance. In this paper we analyze the effect of the training sample size on the performance of a top-down DR screening algorithm for different types of statistical classifiers. Results are based on partial least squares (PLS), support vector machines (SVM), k-nearest neighbor (kNN), and Naïve Bayes classifiers. Our dataset consisted of digital retinal images collected from a total of 745 cases (595 controls, 150 with DR). We varied the number of normal controls in the training set, while keeping the number of DR samples constant, and repeated the procedure 10 times using randomized training sets to avoid bias. Results show increasing performance in terms of area under the ROC curve (AUC) when the number of DR subjects in the training set increased, with similar trends for each of the classifiers. Of these, PLS and k-NN had the highest average AUC. Lower standard deviation and a flattening of the AUC curve gives evidence that there is a limit to the learning ability of the classifiers and an optimal number of cases to train on.
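The classifier comparisons reported here rest on the AUC, which equals the probability that a randomly chosen DR case is scored above a randomly chosen control; a compact numpy version via the Mann-Whitney U statistic (the scores below are synthetic, merely mimicking the 150 DR / 595 control split):

```python
import numpy as np

def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive (DR) score exceeds a random negative (control) score, with
    ties counted as half."""
    pos = np.asarray(pos_scores)[:, None]
    neg = np.asarray(neg_scores)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.shape[0] * neg.shape[1])

# Synthetic classifier outputs: DR cases score higher on average.
rng = np.random.default_rng(3)
dr_scores = rng.normal(loc=1.5, scale=1.0, size=150)
control_scores = rng.normal(loc=0.0, scale=1.0, size=595)
observed_auc = auc(dr_scores, control_scores)
```

Repeating such an AUC computation over randomized training subsets of increasing size is exactly the kind of curve the paper uses to locate the point where additional training cases stop helping.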

  14. Testing whether the DSM-5 personality disorder trait model can be measured with a reduced set of items: An item response theory investigation of the Personality Inventory for DSM-5.

    PubMed

    Maples, Jessica L; Carter, Nathan T; Few, Lauren R; Crego, Cristina; Gore, Whitney L; Samuel, Douglas B; Williamson, Rachel L; Lynam, Donald R; Widiger, Thomas A; Markon, Kristian E; Krueger, Robert F; Miller, Joshua D

    2015-12-01

The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) includes an alternative model of personality disorders (PDs) in Section III, consisting in part of a pathological personality trait model. To date, the 220-item Personality Inventory for DSM-5 (PID-5; Krueger, Derringer, Markon, Watson, & Skodol, 2012) is the only extant self-report instrument explicitly developed to measure this pathological trait model. The present study used item response theory-based analyses in a large sample (n = 1,417) to investigate whether a reduced set of 100 items could be identified from the PID-5 that could measure the 25 traits and 5 domains. This reduced set of PID-5 items was then tested in a community sample of adults currently receiving psychological treatment (n = 109). Across a wide range of criterion variables including NEO PI-R domains and facets, DSM-5 Section II PD scores, and externalizing and internalizing outcomes, the correlational profiles of the original and reduced versions of the PID-5 were nearly identical (rICC = .995). These results provide strong support for the hypothesis that an abbreviated set of PID-5 items can be used to reliably, validly, and efficiently assess these personality disorder traits. The ability to assess the DSM-5 Section III traits using only 100 items has important implications in that it suggests these traits could still be measured in settings in which assessment-related resources (e.g., time, compensation) are limited. (c) 2015 APA, all rights reserved.

  15. The redshift distribution of cosmological samples: a forward modeling approach

    NASA Astrophysics Data System (ADS)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-08-01

Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic-shear-like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
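The ABC logic described, simulate under candidate parameters and keep those whose summaries match the data, can be caricatured in a few lines (a one-parameter toy model; the actual UFig image simulation and survey cuts are far richer):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "forward model": a hypothetical one-parameter redshift distribution
# standing in for image simulation + sample cuts; the sample mean stands
# in for the summary statistic compared against the data.
def simulator(mean_z, n=500):
    """Draw a synthetic redshift sample with the given mean (toy form)."""
    return rng.gamma(shape=2.0, scale=mean_z / 2.0, size=n)

observed = simulator(0.8)          # pretend this is the real survey sample
obs_summary = observed.mean()

# ABC rejection: keep prior draws whose simulated summary is close to
# the observed one; the accepted set approximates the posterior.
prior_draws = rng.uniform(0.2, 2.0, size=2000)
accepted = [p for p in prior_draws
            if abs(simulator(p).mean() - obs_summary) < 0.05]
posterior_mean = float(np.mean(accepted))
```

The accepted parameter values play the role of the paper's "acceptable models", and the n(z) curves they generate form the posterior set of redshift distributions.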

  16. Surface Sampling Collection and Culture Methods for Escherichia coli in Household Environments with High Fecal Contamination

    PubMed Central

    Kosek, Margaret N.; Schwab, Kellogg J.

    2017-01-01

    Empiric quantification of environmental fecal contamination is an important step toward understanding the impact that water, sanitation, and hygiene interventions have on reducing enteric infections. There is a need to standardize the methods used for surface sampling in field studies that examine fecal contamination in low-income settings. The dry cloth method presented in this manuscript improves upon the more commonly used swabbing technique that has been shown in the literature to have a low sampling efficiency. The recovery efficiency of a dry electrostatic cloth sampling method was evaluated using Escherichia coli and then applied to household surfaces in Iquitos, Peru, where there is high fecal contamination and enteric infection. Side-by-side measurements were taken from various floor locations within a household at the same time over a three-month period to compare for consistency of quantification of E. coli bacteria. The dry cloth sampling method in the laboratory setting showed 105% (95% Confidence Interval: 98%, 113%) E. coli recovery efficiency off of the cloths. The field application demonstrated strong agreement of side-by-side results (Pearson correlation coefficient for dirt surfaces was 0.83 (p < 0.0001) and 0.91 (p < 0.0001) for cement surfaces) and moderate agreement for results between entrance and kitchen samples (Pearson (0.53, p < 0.0001) and weighted Kappa statistic (0.54, p < 0.0001)). Our findings suggest that this method can be utilized in households with high bacterial loads using either continuous (quantitative) or categorical (semi-quantitative) data. The standardization of this low-cost, dry electrostatic cloth sampling method can be used to measure differences between households in intervention and non-intervention arms of randomized trials. PMID:28829392
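The two agreement metrics used, recovery efficiency against a known spike and the Pearson correlation of paired side-by-side samples, are straightforward to compute (the counts below are invented, not the study's data):

```python
import numpy as np

# Hypothetical side-by-side E. coli counts (log10 CFU) from paired floor
# swatches sampled at the same time.
sample_a = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.1])
sample_b = np.array([2.3, 3.2, 1.9, 4.2, 2.7, 3.3])

# Pearson correlation quantifies agreement of the paired measurements.
r = np.corrcoef(sample_a, sample_b)[0, 1]

# Recovery efficiency: recovered count relative to the spiked amount
# (a value slightly above 100% is possible, as in the reported 105%).
spiked, recovered = 1.0e4, 1.05e4
efficiency = 100.0 * recovered / spiked
```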

  17. Surface Sampling Collection and Culture Methods for Escherichia coli in Household Environments with High Fecal Contamination.

    PubMed

    Exum, Natalie G; Kosek, Margaret N; Davis, Meghan F; Schwab, Kellogg J

    2017-08-22

Empiric quantification of environmental fecal contamination is an important step toward understanding the impact that water, sanitation, and hygiene interventions have on reducing enteric infections. There is a need to standardize the methods used for surface sampling in field studies that examine fecal contamination in low-income settings. The dry cloth method presented in this manuscript improves upon the more commonly used swabbing technique that has been shown in the literature to have a low sampling efficiency. The recovery efficiency of a dry electrostatic cloth sampling method was evaluated using Escherichia coli and then applied to household surfaces in Iquitos, Peru, where there is high fecal contamination and enteric infection. Side-by-side measurements were taken from various floor locations within a household at the same time over a three-month period to compare for consistency of quantification of E. coli bacteria. The dry cloth sampling method in the laboratory setting showed 105% (95% Confidence Interval: 98%, 113%) E. coli recovery efficiency off of the cloths. The field application demonstrated strong agreement of side-by-side results (Pearson correlation coefficient for dirt surfaces was 0.83 (p < 0.0001) and 0.91 (p < 0.0001) for cement surfaces) and moderate agreement for results between entrance and kitchen samples (Pearson (0.53, p < 0.0001) and weighted Kappa statistic (0.54, p < 0.0001)). Our findings suggest that this method can be utilized in households with high bacterial loads using either continuous (quantitative) or categorical (semi-quantitative) data. The standardization of this low-cost, dry electrostatic cloth sampling method can be used to measure differences between households in intervention and non-intervention arms of randomized trials.

  18. The redshift distribution of cosmological samples: a forward modeling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam

Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear-like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
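
The ABC step at the heart of such a loop can be illustrated with a toy one-parameter forward model. The sketch below shows only generic rejection ABC, not the paper's UFig pipeline; the exponential redshift model, the prior range, the tolerance, and all numbers are invented for illustration:

```python
import random
import statistics

random.seed(0)

def simulate_mean_z(alpha, n=500):
    """Toy forward model: mean redshift of a sample drawn from an
    exponential distribution with scale alpha (a stand-in for the
    full image-simulation + sample-cut pipeline)."""
    return statistics.mean(random.expovariate(1.0 / alpha) for _ in range(n))

observed = simulate_mean_z(0.8)  # pretend this summary came from real data

accepted = []
for _ in range(2000):
    alpha = random.uniform(0.2, 2.0)                   # draw from the prior
    if abs(simulate_mean_z(alpha) - observed) < 0.05:  # ABC tolerance
        accepted.append(alpha)

# Accepted alphas approximate the posterior; each implies an n(z) curve.
post_mean = statistics.mean(accepted)
```

Rejection ABC keeps exactly those parameter draws whose simulated summary statistic lands within the tolerance of the observed one, so the spread of `accepted` carries the uncertainty on n(z).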

  19. Perceptions of clients on awareness and the geographical location of a South African university sexual health clinic

    PubMed Central

    2017-01-01

    Background The Campus Health Service at Stellenbosch University has a sub-division, a sexual health clinic, which provides sexual health services. The clients of the sexual health clinic consist of staff members and students. Aim This article reports on the perceptions of clients that relate to awareness and the geographical location of the clinic. Setting The Campus Health Service at Stellenbosch University’s main campus. Method A descriptive qualitative approach was applied utilising in-depth interviews. A sample of n = 15 was drawn through purposive sampling and data saturation was achieved with the sample. Results The following themes emerged from the data: location of the clinic, awareness of sexual health services and marketing and advertising. Conclusion The findings of the study revealed that accessibility of the clinic is influenced by the geographical location of the clinic and that marketing and awareness of services require attention. PMID:29041801

  20. Effective atomic numbers in some food materials and medicines for γ-ray attenuation using ¹³⁷Cs γ-rays

    NASA Astrophysics Data System (ADS)

    Revathy, J. S.; Anooja, J.; Krishnaveni, R. B.; Gangadathan, M. P.; Varier, K. M.

    2018-06-01

A light-weight multichannel analyser (MCA)-based γ-ray spectrometer, developed earlier at the Inter University Accelerator Centre, New Delhi, has been used as part of the PG curriculum to determine the effective atomic numbers for attenuation of ¹³⁷Cs γ-rays in different types of samples. The samples used are mixtures of graphite, aluminum and selenium powders in different proportions, commercial and home-made edible powders, fruit and vegetable juices, as well as certain allopathic and ayurvedic medications. A narrow-beam good-geometry set-up has been used in the experiments. The measured attenuation coefficients have been used to extract effective atomic numbers of the samples. The results are consistent with XCOM values wherever available. The present results suggest that the γ attenuation technique can be used as an effective non-destructive method for finding adulteration of food materials.
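
In a narrow-beam good-geometry measurement the attenuation coefficient follows the Beer-Lambert law, I = I0·exp(−μt). A minimal sketch (the counts and thickness below are invented for illustration; the further step of mapping μ to an effective atomic number via tabulated cross sections is omitted):

```python
import math

def attenuation_coefficient(counts_open, counts_sample, thickness_cm):
    """Linear attenuation coefficient mu (1/cm) from narrow-beam counts
    recorded without (I0) and with (I) the sample in the beam."""
    return math.log(counts_open / counts_sample) / thickness_cm

# Hypothetical detector counts for a 2 cm thick sample
mu = attenuation_coefficient(10000, 6065, 2.0)
```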

  1. Water-quality assessment of part of the Upper Mississippi River basin, Minnesota and Wisconsin, environmental setting and study design

    USGS Publications Warehouse

    Stark, J.R.; Andrews, W.J.; Fallon, J.D.; Fong, A.L.; Goldstein, R.M.; Hanson, P.E.; Kroening, S.E.; Lee, K.E.

    1996-01-01

    Environmental stratification consists of dividing the study unit into subareas with homogeneous characteristics to assess natural and anthropogenic factors affecting water quality. The assessment of water quality in streams and in aquifers is based on the sampling design that compares water quality within homogeneous subareas defined by subbasins or aquifer boundaries. The study unit is stratified at four levels for the surface-water component: glacial deposit composition, surficial geology, general land use and land cover, and secondary land use. Ground-water studies emphasize shallow ground water where quality is most likely influenced by overlying land use and land cover. Stratification for ground-water sampling is superimposed on the distribution of shallow aquifers. For each aquifer and surface-water basin this stratification forms the basis for the proposed sampling design used in the Upper Mississippi River Basin National Water-Quality Assessment.

  2. Decoding memory features from hippocampal spiking activities using sparse classification models.

    PubMed

    Dong Song; Hampson, Robert E; Robinson, Brian S; Marmarelis, Vasilis Z; Deadwyler, Sam A; Berger, Theodore W

    2016-08-01

To understand how memory information is encoded in the hippocampus, we build classification models to decode memory features from hippocampal CA3 and CA1 spatio-temporal patterns of spikes recorded from epilepsy patients performing a memory-dependent delayed match-to-sample task. The classification model consists of a set of B-spline basis functions for extracting memory features from the spike patterns, and a sparse logistic regression classifier for generating binary categorical output of memory features. Results show that classification models can extract a significant amount of memory information with respect to the types of memory tasks and the categories of sample images used in the task, despite the high level of variability in prediction accuracy due to the small sample size. These results support the hypothesis that memories are encoded in hippocampal activities and have important implications for the development of hippocampal memory prostheses.
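
The decoding approach pairs feature extraction with an L1-penalized (sparse) logistic regression. The sketch below illustrates only the sparsity mechanism, using proximal gradient descent with soft-thresholding on synthetic data; it omits the B-spline feature stage and is not the authors' implementation, and all dimensions and constants are invented:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny synthetic decoding problem: 20 "spike pattern" features, of which
# only the first two actually carry the binary memory label.
n, d = 200, 20
X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
y = [1 if sigmoid(2 * x[0] - 2 * x[1]) > random.random() else 0 for x in X]

w = [0.0] * d
lam, lr = 0.05, 0.1
for _ in range(300):                 # proximal gradient descent (ISTA)
    grad = [0.0] * d
    for xi, yi in zip(X, y):
        err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi))) - yi
        for j in range(d):
            grad[j] += err * xi[j] / n
    for j in range(d):
        wj = w[j] - lr * grad[j]
        # soft-thresholding is what drives irrelevant weights to exactly zero
        w[j] = math.copysign(max(abs(wj) - lr * lam, 0.0), wj)

nonzero = sum(1 for wj in w if wj != 0.0)
```

The L1 penalty zeroes out most of the uninformative features, which is the property that makes such classifiers usable when the sample size is small relative to the feature count.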

  3. Near-infrared diffuse reflection systems for chlorophyll content of tomato leaves measurement

    NASA Astrophysics Data System (ADS)

    Jiang, Huanyu; Ying, Yibin; Lu, Huishan

    2006-10-01

In this study, two measuring systems for the chlorophyll content of tomato leaves were developed based on near-infrared spectral techniques. The system mainly consists of an FT-IR spectrum analyzer, fiber-optic diffuse reflectance accessories and a data acquisition card. Diffuse reflectance of intact tomato leaves was measured with a fiber-optic diffuse reflectance accessory and a smart diffuse reflectance accessory. Calibration models were developed from spectral and constituent measurements. 90 samples served as the calibration set and 30 samples served as the validation set. Partial least squares (PLS) and principal component regression (PCR) techniques were used to develop the prediction models under different data preprocessing. The best model for chlorophyll content had a high correlation coefficient of 0.9348 and a low standard error of prediction (RMSEP) of 4.79, obtained when selecting the full range (12500-4000 cm⁻¹) with MSC path-length correction applied to the log(1/R) spectra. The results of this study suggest that the FT-NIR method is feasible for detecting the chlorophyll content of tomato leaves rapidly and nondestructively.
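
Two quantities quoted above are straightforward to compute once a model produces predictions: the log(1/R) absorbance transform used as preprocessing, and the RMSEP over the validation set. A minimal sketch (the PLS/PCR model fitting itself is omitted):

```python
import math

def absorbance(reflectance):
    """log(1/R) transform commonly applied to diffuse-reflectance spectra."""
    return math.log10(1.0 / reflectance)

def rmsep(actual, predicted):
    """Root-mean-square error of prediction over a validation set."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
```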

  4. Planetary image conversion task

    NASA Technical Reports Server (NTRS)

    Martin, M. D.; Stanley, C. L.; Laughlin, G.

    1985-01-01

The Planetary Image Conversion Task group processed 12,500 magnetic tapes containing raw imaging data from JPL planetary missions and produced an image data base in a consistent format on 1200 fully packed 6250-bpi tapes. The output tapes will remain at JPL. A copy of the entire tape set was delivered to the US Geological Survey, Flagstaff, Ariz. A secondary task converted computer datalogs, which had been stored in project-specific MARK IV File Management System data types and structures, to flat-file, text format that is processable on any modern computer system. The conversion processing took place at JPL's Image Processing Laboratory on an IBM 370-158 with existing software modified slightly to meet the needs of the conversion task. More than 99% of the original digital image data was successfully recovered by the conversion task. However, processing data tapes recorded before 1975 was destructive. This discovery is of critical importance to facilities responsible for maintaining digital archives, since normal periodic random sampling techniques would be unlikely to detect this phenomenon, and entire data sets could be wiped out in the act of generating seemingly positive sampling results. Recommended follow-on activities are also included.

  5. Comparative analysis of monocytic and granulocytic myeloid-derived suppressor cell subsets in patients with gastrointestinal malignancies.

    PubMed

    Duffy, Austin; Zhao, Fei; Haile, Lydia; Gamrekelashvili, Jaba; Fioravanti, Suzanne; Ma, Chi; Kapanadze, Tamar; Compton, Kathryn; Figg, William D; Greten, Tim F

    2013-02-01

Myeloid-derived suppressor cells (MDSC) are a heterogeneous population of cells comprising myeloid progenitor cells and immature myeloid cells, which have the ability to suppress the effector immune response. In humans, MDSC have not been well characterized owing to the lack of specific markers, although it is possible to broadly classify the MDSC phenotypes described in the literature as being predominantly granulocytic (expressing markers such as CD15, CD66, CD33) or monocytic (expressing CD14). In this study, we set out to perform a direct comparative analysis across both granulocytic and monocytic MDSC subsets in terms of their frequency, absolute number, and function in the peripheral blood of patients with advanced GI cancer. We also set out to determine the optimal method of sample processing, given that this is an additional source of heterogeneity. Our findings demonstrate consistent changes across sample processing methods for monocytic MDSC, suggesting that reliance upon cryopreserved PBMC is acceptable. Although we did not see an increase in the population of granulocytic MDSC, these cells were found to be more suppressive than their monocytic counterparts.

  6. Dose relations between goal setting, theory-based correlates of goal setting and increases in physical activity during a workplace trial.

    PubMed

    Dishman, Rod K; Vandenberg, Robert J; Motl, Robert W; Wilson, Mark G; DeJoy, David M

    2010-08-01

    The effectiveness of an intervention depends on its dose and on moderators of dose, which usually are not studied. The purpose of the study is to determine whether goal setting and theory-based moderators of goal setting had dose relations with increases in goal-related physical activity during a successful workplace intervention. A group-randomized 12-week intervention that included personal goal setting was implemented in fall 2005, with a multiracial/ethnic sample of employees at 16 geographically diverse worksites. Here, we examined dose-related variables in the cohort of participants (N = 664) from the 8 worksites randomized to the intervention. Participants in the intervention exceeded 9000 daily pedometer steps and 300 weekly minutes of moderate-to-vigorous physical activity (MVPA) during the last 6 weeks of the study, which approximated or exceeded current public health guidelines. Linear growth modeling indicated that participants who set higher goals and sustained higher levels of self-efficacy, commitment and intention about attaining their goals had greater increases in pedometer steps and MVPA. The relation between change in participants' satisfaction with current physical activity and increases in physical activity was mediated by increases in self-set goals. The results show a dose relation of increased physical activity with changes in goal setting, satisfaction, self-efficacy, commitment and intention, consistent with goal-setting theory.

  7. Assessment of the hygienic performances of hamburger patty production processes.

    PubMed

    Gill, C O; Rahn, K; Sloan, K; McMullen, L M

    1997-05-20

The hygienic conditions of hamburger patties collected from three patty manufacturing plants and six retail outlets were examined. At each manufacturing plant, a sample from newly formed, chilled patties and one from frozen patties were collected from each of 25 batches of patties selected at random. At three, two or one retail outlets, respectively, 25 samples from frozen, chilled, or both frozen and chilled patties were collected at random. Each sample consisted of 30 g of meat obtained from five or six patties. Total aerobic, coliform and Escherichia coli counts per gram were enumerated for each sample. The mean (x) and standard deviation (s) of the log10 values were calculated for each set of 25 counts, on the assumption that the distribution of counts approximated the log normal. A value for the log10 of the arithmetic mean (log A) was calculated for each set from the values of x and s. A chi-square statistic was calculated for each set as a test of the assumption of the log normal distribution. The chi-square statistic was calculable for 32 of the 39 sets. Four of the sets gave chi-square values indicative of gross deviation from log normality. On inspection of those sets, distributions obviously differing from the log normal were apparent in two. Log A values for total, coliform and E. coli counts for chilled patties from manufacturing plants ranged from 4.4 to 5.1, 1.7 to 2.3 and 0.9 to 1.5, respectively. Log A values for frozen patties from manufacturing plants were between < 0.1 and 0.5 log10 units less than the equivalent values for chilled patties. Log A values for total, coliform and E. coli counts for frozen patties on retail sale ranged from 3.8 to 8.5, < 0.5 to 3.6 and < 0 to 1.9, respectively. The equivalent ranges for chilled patties on retail sale were 4.8 to 8.5, 1.8 to 3.7 and 1.4 to 2.7, respectively.
The findings indicate that the general hygienic condition of hamburger patties could be improved by manufacturing them only from beef of superior hygienic quality, and by better management of chilled patties at retail outlets.
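
The abstract does not spell out how log A is obtained from x and s, but under the stated log-normal assumption the standard identity gives log10 A = x + (ln 10 / 2)·s². A sketch assuming that is the relation used:

```python
import math

def log_arith_mean(x, s):
    """log10 of the arithmetic mean A of counts whose log10 values are
    normally distributed with mean x and standard deviation s
    (log-normal identity: log10 A = x + (ln 10 / 2) * s**2)."""
    return x + (math.log(10) / 2.0) * s ** 2

# Hypothetical x and s for one set of 25 counts
log_A = log_arith_mean(4.4, 0.6)
```

Because the correction term grows with s², log A exceeds the mean of the log10 counts whenever the counts are dispersed, which is why log A rather than x is reported as the summary of contamination level.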

  8. Dynamic variable selection in SNP genotype autocalling from APEX microarray data.

    PubMed

    Podder, Mohua; Welch, William J; Zamar, Ruben H; Tebbutt, Scott J

    2006-11-30

    Single nucleotide polymorphisms (SNPs) are DNA sequence variations, occurring when a single nucleotide--adenine (A), thymine (T), cytosine (C) or guanine (G)--is altered. Arguably, SNPs account for more than 90% of human genetic variation. Our laboratory has developed a highly redundant SNP genotyping assay consisting of multiple probes with signals from multiple channels for a single SNP, based on arrayed primer extension (APEX). This mini-sequencing method is a powerful combination of a highly parallel microarray with distinctive Sanger-based dideoxy terminator sequencing chemistry. Using this microarray platform, our current genotype calling system (known as SNP Chart) is capable of calling single SNP genotypes by manual inspection of the APEX data, which is time-consuming and exposed to user subjectivity bias. Using a set of 32 Coriell DNA samples plus three negative PCR controls as a training data set, we have developed a fully-automated genotyping algorithm based on simple linear discriminant analysis (LDA) using dynamic variable selection. The algorithm combines separate analyses based on the multiple probe sets to give a final posterior probability for each candidate genotype. We have tested our algorithm on a completely independent data set of 270 DNA samples, with validated genotypes, from patients admitted to the intensive care unit (ICU) of St. Paul's Hospital (plus one negative PCR control sample). Our method achieves a concordance rate of 98.9% with a 99.6% call rate for a set of 96 SNPs. By adjusting the threshold value for the final posterior probability of the called genotype, the call rate reduces to 94.9% with a higher concordance rate of 99.6%. We also reversed the two independent data sets in their training and testing roles, achieving a concordance rate up to 99.8%. The strength of this APEX chemistry-based platform is its unique redundancy having multiple probes for a single SNP. 
Our model-based genotype calling algorithm captures the redundancy in the system considering all the underlying probe features of a particular SNP, automatically down-weighting any 'bad data' corresponding to image artifacts on the microarray slide or failure of a specific chemistry. In this regard, our method is able to automatically select the probes which work well and reduce the effect of other so-called bad performing probes in a sample-specific manner, for any number of SNPs.
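
The reported call-rate/concordance trade-off comes from thresholding the final posterior probability: raising the threshold refuses the uncertain calls, lowering the call rate but raising concordance. A small illustration with invented posteriors (not the study's data):

```python
def call_and_concordance(posteriors, correct, threshold):
    """Call rate and concordance when a genotype is only called if its
    top posterior probability reaches `threshold`.

    posteriors: top posterior probability per sample.
    correct: 1 if the called genotype matches the validated one, else 0.
    """
    called = [c for p, c in zip(posteriors, correct) if p >= threshold]
    call_rate = len(called) / len(posteriors)
    concordance = sum(called) / len(called) if called else float("nan")
    return call_rate, concordance

# Hypothetical posteriors and agreement with validated genotypes
post = [0.99, 0.95, 0.80, 0.60, 0.97, 0.55]
correct = [1, 1, 1, 0, 1, 0]
low = call_and_concordance(post, correct, 0.5)   # permissive threshold
high = call_and_concordance(post, correct, 0.9)  # strict threshold
```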

  9. Representations of time coordinates in FITS. Time and relative dimension in space

    NASA Astrophysics Data System (ADS)

    Rots, Arnold H.; Bunclark, Peter S.; Calabretta, Mark R.; Allen, Steven L.; Manchester, Richard N.; Thompson, William T.

    2015-02-01

    Context. In a series of three previous papers, formulation and specifics of the representation of world coordinate transformations in FITS data have been presented. This fourth paper deals with encoding time. Aims: Time on all scales and precisions known in astronomical datasets is to be described in an unambiguous, complete, and self-consistent manner. Methods: Employing the well-established World Coordinate System (WCS) framework, and maintaining compatibility with the FITS conventions that are currently in use to specify time, the standard is extended to describe rigorously the time coordinate. Results: World coordinate functions are defined for temporal axes sampled linearly and as specified by a lookup table. The resulting standard is consistent with the existing FITS WCS standards and specifies a metadata set that achieves the aims enunciated above.

  10. The geologic setting of the Luna 16 landing site

    USGS Publications Warehouse

    McCauley, J.F.; Scott, D.H.

    1972-01-01

The Luna 16 landing site is similar in its geologic setting to those of Apollos 11 and 12. All three sites are located on basaltic mare fill which occurs mostly within multi-ring basins formed by impact earlier in the moon's history. A regolith developed by impact bombardment is present at each of these sites. The regolith is composed mostly of locally derived volcanic material, but also contains exotic fine fragments that have been ballistically transported into the landing sites by large impact events which formed craters such as Langrenus and Copernicus. These exotic fragments probably consist mostly of earlier reworked multi-ring basin debris and, although not directly traceable to individual sources, they do represent a good statistical sample of the composition of most of the premare terra regions. © 1972.

  11. Deep, Staged Transcriptomic Resources for the Novel Coleopteran Models Atrachya menetriesi and Callosobruchus maculatus.

    PubMed

    Benton, Matthew A; Kenny, Nathan J; Conrads, Kai H; Roth, Siegfried; Lynch, Jeremy A

    2016-01-01

    Despite recent efforts to sample broadly across metazoan and insect diversity, current sequence resources in the Coleoptera do not adequately describe the diversity of the clade. Here we present deep, staged transcriptomic data for two coleopteran species, Atrachya menetriesi (Faldermann 1835) and Callosobruchus maculatus (Fabricius 1775). Our sampling covered key stages in ovary and early embryonic development in each species. We utilized this data to build combined assemblies for each species which were then analysed in detail. The combined A. menetriesi assembly consists of 228,096 contigs with an N50 of 1,598 bp, while the combined C. maculatus assembly consists of 128,837 contigs with an N50 of 2,263 bp. For these assemblies, 34.6% and 32.4% of contigs were identified using Blast2GO, and 97% and 98.3% of the BUSCO set of metazoan orthologs were present, respectively. We also carried out manual annotation of developmental signalling pathways and found that nearly all expected genes were present in each transcriptome. Our analyses show that both transcriptomes are of high quality. Lastly, we performed read mapping utilising our timed, stage specific RNA samples to identify differentially expressed contigs. The resources presented here will provide a firm basis for a variety of experimentation, both in developmental biology and in comparative genomic studies.

  12. [Development of a software standardizing optical density with operation settings related to several limitations].

    PubMed

    Tu, Xiao-Ming; Zhang, Zuo-Heng; Wan, Cheng; Zheng, Yu; Xu, Jin-Mei; Zhang, Yuan-Yuan; Luo, Jian-Ping; Wu, Hai-Wei

    2012-12-01

To develop a software tool that standardizes optical density, normalizing the procedures and results of standardization, in order to effectively solve several problems that arise during the standardization of indirect ELISA results. The software was designed based on the I-STOD method, with operation settings to solve the problems that one might encounter during the standardization. Matlab GUI was used as the development tool. The software was tested with the results of the detection of sera of persons from schistosomiasis japonica endemic areas. I-STOD V1.0 (WINDOWS XP/WIN 7, 0.5 GB) was successfully developed to standardize optical density. A series of serum samples from schistosomiasis japonica endemic areas was used to examine the operational effects of the I-STOD V1.0 software. The results indicated that the software successfully overcame several problems, including reliability of the standard curve, the applicable scope of samples, and determination of dilution for samples outside that scope, so that I-STOD was performed more conveniently and the results of standardization were more consistent. I-STOD V1.0 is professional software based on the I-STOD method. It can be easily operated and can effectively standardize the testing results of indirect ELISA.

  13. Refining and validating the Social Interaction Anxiety Scale and the Social Phobia Scale.

    PubMed

    Carleton, R Nicholas; Collimore, Kelsey C; Asmundson, Gordon J G; McCabe, Randi E; Rowa, Karen; Antony, Martin M

    2009-01-01

The Social Interaction Anxiety Scale and Social Phobia Scale are companion measures for assessing symptoms of social anxiety and social phobia. The scales have good reliability and validity across several samples; however, exploratory and confirmatory factor analyses have yielded solutions comprising substantially different item content and factor structures. These discrepancies are likely the result of analyzing items from each scale separately or simultaneously. The current investigation sets out to assess items from those scales, both simultaneously and separately, using exploratory and confirmatory factor analyses in an effort to resolve the factor structure. Participants consisted of a clinical sample (n = 353; 54% women) and an undergraduate sample (n = 317; 75% women) who completed the Social Interaction Anxiety Scale and Social Phobia Scale, along with additional fear-related measures to assess convergent and discriminant validity. A three-factor solution with a reduced set of items was found to be most stable, irrespective of whether the items from each scale are assessed together or separately. Items from the Social Interaction Anxiety Scale represented one factor, whereas items from the Social Phobia Scale represented two other factors. Initial support for scale and factor validity, along with implications and recommendations for future research, is provided. (c) 2009 Wiley-Liss, Inc.

  14. Effect of short chain inulin on the rheological and sensory characteristics of reduced fat set coconut milk yoghurt.

    PubMed

    Adegoke, Samuel Chetachukwu; Thongraung, Chakree; Yupanqui, Chutha Takahashi

    2018-06-23

The effect of short-chain inulin on the rheological and sensory properties of reduced-fat set coconut milk yoghurt was studied, with whole-fat coconut milk yoghurt as reference. The concentration of short-chain inulin was varied at 0, 5, 10, 15, and 20% w/v. All the yoghurt samples displayed a higher elastic modulus G' than viscous modulus G"; the 15% inulin yoghurt had the highest values for G' and G". The 15 and 20% inulin yoghurts displayed high yield stresses (1036.7 ± 2.39 and 368.23 ± 0.30 Pa). An addition threshold of 15% was established; beyond this level there was a significant decrease in the yield stress, firmness, cohesiveness and consistency values of the reduced-fat yoghurts. Using Pearson correlation analysis, no correlation was observed between firmness and yield stress; however, there was a significant correlation between yield stress and instrumental viscosity (r = 0.957; p < 0.01). Furthermore, all yoghurt samples except the whole-fat yoghurt displayed strain-thinning behavior. Carbohydrate content was affected by inulin incorporation. Addition of short-chain inulin improved sensory characteristics such as taste and flavor, but did not produce a significant difference in the color and odor of the yoghurt samples. This article is protected by copyright. All rights reserved.

  15. Predictive control of hollow-fiber bioreactors for the production of monoclonal antibodies.

    PubMed

    Dowd, J E; Weber, I; Rodriguez, B; Piret, J M; Kwok, K E

    1999-05-20

    The selection of medium feed rates for perfusion bioreactors represents a challenge for process optimization, particularly in bioreactors that are sampled infrequently. When the present and immediate future of a bioprocess can be adequately described, predictive control can minimize deviations from set points in a manner that can maximize process consistency. Predictive control of perfusion hollow-fiber bioreactors was investigated in a series of hybridoma cell cultures that compared operator control to computer estimation of feed rates. Adaptive software routines were developed to estimate the current and predict the future glucose uptake and lactate production of the bioprocess at each sampling interval. The current and future glucose uptake rates were used to select the perfusion feed rate in a designed response to deviations from the set point values. The routines presented a graphical user interface through which the operator was able to view the up-to-date culture performance and assess the model description of the immediate future culture performance. In addition, fewer samples were taken in the computer-estimated cultures, reducing labor and analytical expense. The use of these predictive controller routines and the graphical user interface decreased the glucose and lactate concentration variances up to sevenfold, and antibody yields increased by 10% to 43%. Copyright 1999 John Wiley & Sons, Inc.
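
As a conceptual sketch of the predictive step, here is a deliberately simplified single-compartment mass balance that picks a perfusion feed rate returning glucose to its set point at the next sample. The adaptive uptake estimation and graphical interface of the actual system are omitted, and the fixed 1 L working volume and the parameter names are assumptions for illustration:

```python
def choose_feed_rate(glucose_now, uptake_rate, set_point, interval_h, feed_conc):
    """Perfusion feed rate (L/h) so predicted glucose returns to set point
    by the next sample.

    glucose_now, set_point: g/L; uptake_rate: g/h; feed_conc: g/L.
    Assumes a fixed 1 L working volume where feed replaces spent medium.
    """
    # Glucose deficit predicted over the interval if nothing were fed,
    # plus whatever is needed to close the gap to the set point.
    deficit = uptake_rate * interval_h + (set_point - glucose_now)
    feed_volume = max(deficit, 0.0) / feed_conc  # litres of feed required
    return feed_volume / interval_h
```

With glucose at set point, an uptake of 0.5 g/h, a 24 h sampling interval and a 4 g/L glucose feed, this yields 0.125 L/h; a predictive controller recomputes this at every sample as its uptake estimate is updated.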

  16. Probe Heating Method for the Analysis of Solid Samples Using a Portable Mass Spectrometer

    PubMed Central

    Kumano, Shun; Sugiyama, Masuyuki; Yamada, Masuyoshi; Nishimura, Kazushige; Hasegawa, Hideki; Morokuma, Hidetoshi; Inoue, Hiroyuki; Hashimoto, Yuichiro

    2015-01-01

    We previously reported on the development of a portable mass spectrometer for the onsite screening of illicit drugs, but our previous sampling system could only be used for liquid samples. In this study, we report on an attempt to develop a probe heating method that also permits solid samples to be analyzed using a portable mass spectrometer. An aluminum rod is used as the sampling probe. The powdered sample is affixed to the sampling probe or a droplet of sample solution is placed on the tip of the probe and dried. The probe is then placed on a heater to vaporize the sample. The vapor is then introduced into the portable mass spectrometer and analyzed. With the heater temperature set to 130°C, the developed system detected 1 ng of methamphetamine, 1 ng of amphetamine, 3 ng of 3,4-methylenedioxymethamphetamine, 1 ng of 3,4-methylenedioxyamphetamine, and 0.3 ng of cocaine. Even from mixtures consisting of clove powder and methamphetamine powder, methamphetamine ions were detected by tandem mass spectrometry. The developed probe heating method provides a simple method for the analysis of solid samples. A portable mass spectrometer incorporating this method would thus be useful for the onsite screening of illicit drugs. PMID:26819909

  17. The Drosophila genome nexus: a population genomic resource of 623 Drosophila melanogaster genomes, including 197 from a single ancestral range population.

    PubMed

    Lack, Justin B; Cardeno, Charis M; Crepeau, Marc W; Taylor, William; Corbett-Detig, Russell B; Stevens, Kristian A; Langley, Charles H; Pool, John E

    2015-04-01

    Hundreds of wild-derived Drosophila melanogaster genomes have been published, but rigorous comparisons across data sets are precluded by differences in alignment methodology. The most common approach to reference-based genome assembly is a single round of alignment followed by quality filtering and variant detection. We evaluated variations and extensions of this approach and settled on an assembly strategy that utilizes two alignment programs and incorporates both substitutions and short indels to construct an updated reference for a second round of mapping prior to final variant detection. Utilizing this approach, we reassembled published D. melanogaster population genomic data sets and added unpublished genomes from several sub-Saharan populations. Most notably, we present aligned data from phase 3 of the Drosophila Population Genomics Project (DPGP3), which provides 197 genomes from a single ancestral range population of D. melanogaster (from Zambia). The large sample size, high genetic diversity, and potentially simpler demographic history of the DPGP3 sample will make this a highly valuable resource for fundamental population genetic research. The complete set of assemblies described here, termed the Drosophila Genome Nexus, presently comprises 623 consistently aligned genomes and is publicly available in multiple formats with supporting documentation and bioinformatic tools. This resource will greatly facilitate population genomic analysis in this model species by reducing the methodological differences between data sets. Copyright © 2015 by the Genetics Society of America.

  18. A convolutional neural network-based screening tool for X-ray serial crystallography

    PubMed Central

    Ke, Tsung-Wei; Brewster, Aaron S.; Yu, Stella X.; Ushizima, Daniela; Yang, Chao; Sauter, Nicholas K.

    2018-01-01

A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. The automatic image-processing algorithms described can enable the classification of large data sets acquired under realistic conditions, consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization. PMID:29714177

  19. Optical air-coupled NDT system with ultra-broad frequency bandwidth (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Fischer, Balthasar; Rohringer, Wolfgang; Heine, Thomas

    2017-05-01

    We present a novel optical, air-coupled ultrasound testing setup exhibiting a frequency bandwidth of 1 MHz in air. The sound waves are detected by a miniaturized Fabry-Pérot interferometer (2 mm cavity), whilst the sender consists of a thermoacoustic emitter or a short laser pulse. We discuss characterization measurements and C-scans of a selected set of samples, including carbon fiber reinforced polymer (CFRP). The high detector sensitivity allows for an increased penetration depth. The high frequency and the small transducer dimensions lead to a compelling image resolution.

  20. Note: The performance of new density functionals for a recent blind test of non-covalent interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardirossian, Narbe; Head-Gordon, Martin

    Benchmark datasets of non-covalent interactions are essential for assessing the performance of density functionals and other quantum chemistry approaches. In a recent blind test, Taylor et al. benchmarked 14 methods on a new dataset consisting of 10 dimer potential energy curves calculated using coupled cluster with singles, doubles, and perturbative triples (CCSD(T)) at the complete basis set (CBS) limit (80 data points in total). The dataset is particularly interesting because compressed, near-equilibrium, and stretched regions of the potential energy surface are extensively sampled.

  1. Note: The performance of new density functionals for a recent blind test of non-covalent interactions

    DOE PAGES

    Mardirossian, Narbe; Head-Gordon, Martin

    2016-11-09

    Benchmark datasets of non-covalent interactions are essential for assessing the performance of density functionals and other quantum chemistry approaches. In a recent blind test, Taylor et al. benchmarked 14 methods on a new dataset consisting of 10 dimer potential energy curves calculated using coupled cluster with singles, doubles, and perturbative triples (CCSD(T)) at the complete basis set (CBS) limit (80 data points in total). The dataset is particularly interesting because compressed, near-equilibrium, and stretched regions of the potential energy surface are extensively sampled.

  2. A convolutional neural network-based screening tool for X-ray serial crystallography.

    PubMed

    Ke, Tsung Wei; Brewster, Aaron S; Yu, Stella X; Ushizima, Daniela; Yang, Chao; Sauter, Nicholas K

    2018-05-01

    A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. The automatic image processing algorithms described can enable the classification of large data sets acquired under realistic conditions, consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization.

  3. A convolutional neural network-based screening tool for X-ray serial crystallography

    DOE PAGES

    Ke, Tsung-Wei; Brewster, Aaron S.; Yu, Stella X.; ...

    2018-04-24

    A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. The automatic image processing algorithms described can enable the classification of large data sets acquired under realistic conditions, consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization.

  4. A convolutional neural network-based screening tool for X-ray serial crystallography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ke, Tsung-Wei; Brewster, Aaron S.; Yu, Stella X.

    A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. The automatic image processing algorithms described can enable the classification of large data sets acquired under realistic conditions, consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization.

  5. Device Control Using Gestures Sensed from EMG

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.

    2003-01-01

    In this paper, we present neuro-electric interfaces for virtual device control. The examples presented rely upon sampling electromyogram data from a participant's forearm. This data is then fed into pattern recognition software that has been trained to distinguish gestures from a given gesture set. The pattern recognition software consists of hidden Markov models, which are used to recognize the gestures as they are being performed in real time. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard.
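
    The recognition step described above can be sketched with toy models: one HMM per gesture, each scored with the forward algorithm, and the highest-likelihood model wins. The two tiny discrete-observation models below are invented for illustration, not trained on real EMG data.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """log P(obs | model) for a discrete-observation HMM (forward algorithm)."""
    states = range(len(start))
    alpha = [start[s] * emit[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                 for s in states]
    return math.log(sum(alpha))

# Gesture "A" emits mostly symbol 0; gesture "B" emits mostly symbol 1.
model_a = dict(start=[0.6, 0.4], trans=[[0.7, 0.3], [0.3, 0.7]],
               emit=[[0.9, 0.1], [0.8, 0.2]])
model_b = dict(start=[0.6, 0.4], trans=[[0.7, 0.3], [0.3, 0.7]],
               emit=[[0.1, 0.9], [0.2, 0.8]])

def classify(obs):
    scores = {name: forward_log_likelihood(obs, **m)
              for name, m in [("A", model_a), ("B", model_b)]}
    return max(scores, key=scores.get)

print(classify([0, 0, 1, 0]), classify([1, 1, 1, 0]))  # A B
```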

  6. Attendance and alcohol use at parties and bars in college: a national survey of current drinkers.

    PubMed

    Harford, Thomas C; Wechsler, Henry; Seibring, Mark

    2002-11-01

    This study examines attendance and alcohol use at parties and bars among college students by gender, residence, year in school and legal drinking age. The study participants were respondents in the 1997 and 1999 Harvard School of Public Health College Alcohol Study (CAS). The combined sample consisted of 12,830 students (61% women) who reported use of alcohol in the 30 days prior to interview. Their responses provided information on attendance and alcohol use at parties (dormitory, fraternity, off campus) and off-campus bars. Logistic regression analyses examined the influence of gender, residence, year in school and legal drinking age on attendance, drinking/non-drinking and heavy drinking (5 or more drinks) in each selected setting. Consistent with the literature, fraternity/sorority parties were occasions of heavy drinking (49%) among drinkers in those settings, yet they drew smaller proportions of students (36%) than off-campus parties (75%) and off-campus bars (68%). Off-campus parties (45%) and bars (37%) were also occasions for heavy drinking among drinkers in these settings. College residence was shown to relate to differential exposure to drinking settings, but residence had less impact on the decision to drink and the level of heavy drinking. Attendance at parties decreased with advancing year in school, but attendance at off-campus bars increased. Although heavy drinking at off-campus bars decreased with advancing year in school, slightly higher proportions of under-age students (41%) compared to students of legal drinking age (35%) exhibited heavy drinking at off-campus bars. The identification of high-risk settings and their correlates helps to better understand the development of heavy drinking on college campuses. Off-campus parties, as compared to campus parties and bars, may pose greater difficulties for successful intervention.

  7. The Consumer Assessment of Healthcare Providers and Systems (CAHPS) cultural competence (CC) item set.

    PubMed

    Weech-Maldonado, Robert; Carle, Adam; Weidmer, Beverly; Hurtado, Margarita; Ngo-Metzger, Quyen; Hays, Ron D

    2012-09-01

    There is a need for reliable and valid measures of cultural competence (CC) from the patient's perspective. This paper evaluates the reliability and validity of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) CC item set. Using 2008 survey data, we assessed the internal consistency of the CAHPS CC scales using Cronbach's α and examined the validity of the measures using exploratory and confirmatory factor analysis, multitrait scaling analysis, and regression analysis. A random stratified sample (based on race/ethnicity and language) of 991 enrollees, younger than 65 years, was drawn from 2 Medicaid managed care plans in California and New York. The measure was the CAHPS CC item set after excluding screener items and ratings. Confirmatory factor analysis (Comparative Fit Index=0.98, Tucker-Lewis Index=0.98, and Root Mean Square Error of Approximation=0.06) provided support for a 7-factor structure: Doctor Communication--Positive Behaviors, Doctor Communication--Negative Behaviors, Doctor Communication--Health Promotion, Doctor Communication--Alternative Medicine, Shared Decision-Making, Equitable Treatment, and Trust. Item-total correlations (corrected for item overlap) for the 7 scales exceeded 0.40. Exploratory factor analysis showed support for 1 additional factor: Access to Interpreter Services. Internal consistency reliability estimates ranged from 0.58 (Alternative Medicine) to 0.92 (Positive Behaviors) and were 0.70 or higher for 4 of the 8 composites. All composites were positively and significantly associated with the overall doctor rating. The CAHPS CC 26-item set demonstrates adequate measurement properties and can be used as a supplemental item set to the CAHPS Clinician and Group Surveys in assessing culturally competent care from the patient's perspective.
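
    The internal consistency estimates reported above follow Cronbach's formula, α = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch with invented item scores (not CAHPS data):

```python
def variance(xs):
    """Population variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: list of k item-score lists, one score per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

items = [[3, 4, 4, 2, 5],   # item 1, five respondents
         [3, 5, 4, 2, 4],   # item 2
         [2, 4, 5, 1, 5]]   # item 3
print(round(cronbach_alpha(items), 3))  # 0.922
```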

  8. Gene Network Rewiring to Study Melanoma Stage Progression and Elements Essential for Driving Melanoma

    PubMed Central

    Kaushik, Abhinav; Bhatia, Yashuma; Ali, Shakir; Gupta, Dinesh

    2015-01-01

    Metastatic melanoma patients have a poor prognosis, mainly attributable to the underlying heterogeneity in melanoma driver genes and altered gene expression profiles. These characteristics of melanoma also make the development of drugs and identification of novel drug targets for metastatic melanoma a daunting task. Systems biology offers an alternative approach to re-explore the genes or gene sets that display dysregulated behaviour without being differentially expressed. In this study, we have performed systems biology studies to enhance our knowledge about the conserved property of disease genes or gene sets among mutually exclusive datasets representing melanoma progression. We meta-analysed 642 microarray samples to generate melanoma reconstructed networks representing four different stages of melanoma progression and to extract genes with altered molecular circuitry wiring as compared to a normal cellular state. Intriguingly, a majority of the melanoma network-rewired genes are not differentially expressed, and the disease genes involved in melanoma progression consistently modulate their activity by rewiring network connections. We found that the shortlisted disease genes in the study show strong and abnormal network connectivity, which increases with disease progression. Moreover, the deviated network properties of the disease gene sets allow ranking/prioritization of different enriched, dysregulated and conserved pathway terms in metastatic melanoma, in agreement with previous findings. Our analysis also reveals the presence of distinct network hubs in different stages of the metastasizing tumor for the same set of pathways in the statistically conserved gene sets. The study results are also presented as a freely available database at http://bioinfo.icgeb.res.in/m3db/. The web-based database resource consists of results from the analysis presented here, integrated with Cytoscape Web and user-friendly tools for visualization, retrieval and further analysis. PMID:26558755

  9. The effects of a pulsed Nd:YAG laser on subgingival bacterial flora and on cementum: an in vivo study.

    PubMed

    Ben Hatit, Y; Blum, R; Severin, C; Maquin, M; Jabro, M H

    1996-06-01

    The purpose of this study was to compare the effects of scaling and Nd:YAG laser treatment with those of scaling alone on cementum and on levels of Actinobacillus actinomycetemcomitans, Bacteroides forsythus, Porphyromonas gingivalis, and Treponema denticola. The study sample consisted of 14 patients (8 females, 6 males) aged 30 to 75 years, with a total of 150 periodontally involved sites with probing depth > or = 5 mm. Group A consisted of 100 pockets subdivided into 4 equal groups treated with conventional scaling and a pulsed Nd:YAG laser using a 300-micron optic fiber at 4 different power levels as follows: Group 1: P = 0.8 W, f = 10 Hz, E = 100 mJ/pulse; Group 2: P = 1.0 W, f = 10 Hz, E = 100 mJ/pulse; Group 3: P = 1.2 W, f = 12 Hz, E = 100 mJ/pulse; and Group 4: P = 1.5 W, f = 15 Hz, E = 100 mJ/pulse. The treatment time was 60 sec per pocket in all 4 groups. Group B consisted of 50 pockets treated by conventional scaling alone and served as a control group. Microbiological samples from group A were collected before scaling; after scaling (= before laser); just after laser; and 2, 6, and 10 weeks later. Microbiological samples from group B were collected before scaling, after scaling, and 6 and 10 weeks later. Microbiological analysis of all samples was done by the Institut für Angewandte Immunologie (IAI) method. The effects of the laser on root surfaces were assessed by SEM examination of a sample of 13 teeth from 5 different patients. Four sets of 3 teeth each were treated with the Nd:YAG laser at 0.8, 1.0, 1.2, and 1.5 W, respectively; one tooth was scaled only, without laser treatment, to serve as a control. Microbiological analysis of Group A samples indicated post-treatment reductions in levels of all 4 bacterial types tested compared to pretreatment levels and Group B controls. SEM examination of the specimens treated with the Nd:YAG laser at the different power levels exhibited different features of root surface alterations.

  10. Development and validation of the Australian version of the Birth Satisfaction Scale-Revised (BSS-R).

    PubMed

    Jefford, Elaine; Hollins Martin, Caroline J; Martin, Colin R

    2018-02-01

    The 10-item Birth Satisfaction Scale-Revised (BSS-R) has recently been endorsed by international expert consensus for global use as the birth satisfaction outcome measure of choice. English-language versions of the tool include validated UK and US versions; however, the instrument has not, to date, been contextualised and validated in an Australian English-language version. The current investigation sought to develop and validate an English-language version of the tool for use within the Australian context. A two-stage study was conducted. Following review and modification by an expert panel, the Australian BSS-R (A-BSS-R) was (Stage 1) evaluated for factor structure, internal consistency, known-groups discriminant validity and divergent validity. Stage 2 directly compared the A-BSS-R data set with the original UK data set to determine the invariance characteristics of the new instrument. Participants were a purposive sample of Australian postnatal women (n = 198). The A-BSS-R offered a good fit to data consistent with the BSS-R tridimensional measurement model and was found to be conceptually and measurement equivalent to the UK version. The A-BSS-R demonstrated excellent known-groups discriminant validity, generally good divergent validity and overall good internal consistency. The A-BSS-R represents a robust and valid measure of the birth satisfaction concept suitable for use within Australia and appropriate for application to international comparative studies.

  11. Efficient Cancer Detection Using Multiple Neural Networks.

    PubMed

    Shell, John; Gregory, William D

    2017-01-01

    The inspection of live excised tissue specimens to ascertain malignancy is a challenging task in dermatopathology and generally in histopathology. We introduce a portable desktop prototype device that provides highly accurate neural network classification of malignant and benign tissue. The handheld device collects 47 impedance data samples from 1 Hz to 32 MHz via tetrapolar blackened platinum electrodes. The data analysis was implemented with six different backpropagation neural networks (BNN). A data set consisting of 180 malignant and 180 benign breast tissue data files in an approved IRB study at the Aurora Medical Center, Milwaukee, WI, USA, was utilized as neural network input. The BNN structure consisted of a multi-tiered consensus approach autonomously selecting four of six neural networks to determine a malignant or benign classification. The BNN analysis was then compared with the histology results with consistent sensitivity of 100% and a specificity of 100%. This implementation successfully relied solely on statistical variation between the benign and malignant impedance data and intricate neural network configuration. This device and BNN implementation provides a novel approach that could be a valuable tool to augment current medical practice assessment of the health of breast, squamous, and basal cell carcinoma and other excised tissue without requisite tissue specimen expertise. It has the potential to provide clinical management personnel with a fast, non-invasive, accurate assessment of biopsied or sectioned excised tissue in various clinical settings.
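
    The "multi-tiered consensus" can be sketched as selection plus majority vote; the selection rule here (keep the four networks with the best validation accuracy) is an assumption for illustration, not necessarily the paper's exact criterion:

```python
def consensus(votes, accuracies, keep=4):
    """Majority vote among the `keep` networks with the highest accuracy.

    votes: per-network labels ('malignant'/'benign');
    accuracies: per-network validation accuracy (assumed selection score).
    """
    ranked = sorted(range(len(votes)), key=lambda i: accuracies[i], reverse=True)
    selected = [votes[i] for i in ranked[:keep]]
    return max(set(selected), key=selected.count)

votes = ["malignant", "benign", "malignant", "malignant", "benign", "benign"]
accuracies = [0.95, 0.80, 0.93, 0.91, 0.97, 0.78]
print(consensus(votes, accuracies))  # malignant
```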

  12. Efficient Cancer Detection Using Multiple Neural Networks

    PubMed Central

    Gregory, William D.

    2017-01-01

    The inspection of live excised tissue specimens to ascertain malignancy is a challenging task in dermatopathology and generally in histopathology. We introduce a portable desktop prototype device that provides highly accurate neural network classification of malignant and benign tissue. The handheld device collects 47 impedance data samples from 1 Hz to 32 MHz via tetrapolar blackened platinum electrodes. The data analysis was implemented with six different backpropagation neural networks (BNN). A data set consisting of 180 malignant and 180 benign breast tissue data files in an approved IRB study at the Aurora Medical Center, Milwaukee, WI, USA, was utilized as neural network input. The BNN structure consisted of a multi-tiered consensus approach autonomously selecting four of six neural networks to determine a malignant or benign classification. The BNN analysis was then compared with the histology results with consistent sensitivity of 100% and a specificity of 100%. This implementation successfully relied solely on statistical variation between the benign and malignant impedance data and intricate neural network configuration. This device and BNN implementation provides a novel approach that could be a valuable tool to augment current medical practice assessment of the health of breast, squamous, and basal cell carcinoma and other excised tissue without requisite tissue specimen expertise. It has the potential to provide clinical management personnel with a fast, non-invasive, accurate assessment of biopsied or sectioned excised tissue in various clinical settings. PMID:29282435

  13. Analysis and comparison of glass fragments by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and ICP-MS.

    PubMed

    Trejos, Tatiana; Montero, Shirly; Almirall, José R

    2003-08-01

    The discrimination potential of laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) is compared with previously reported solution ICP-MS methods using external calibration (EC) with internal standardization and a newly reported solution isotope dilution (ID) method for the analysis of two different glass populations. A total of 91 different glass samples were used for the comparison study; refractive index and elemental composition were measured by the techniques mentioned above. One set consisted of 45 headlamps taken from a variety of automobiles, representing a range of 20 years of manufacturing dates. A second set consisted of 46 automotive glasses (side windows, rear windows, and windshields) representing casework glass from different vehicle manufacturers over several years. The element menu for the LA-ICP-MS and EC-ICP-MS methods includes Mg, Al, Ca, Mn, Ce, Ti, Zr, Sb, Ga, Ba, Rb, Sm, Sr, Hf, La, and Pb. The ID method was limited to the analysis of two isotopes each of Mg, Sr, Zr, Sb, Ba, Sm, Hf, and Pb. Laser ablation analyses were performed with a Q-switched, 266 nm Nd:YAG laser with 6 mJ output energy. The laser was used in depth-profile mode while sampling with a 50 microm spot size for 50 sec at 10 Hz (500 shots). The typical bias for the analysis of NIST 612 by LA-ICP-MS was less than 5% in all cases and typically better than 5% for most isotopes. The precision for the vast majority of the element menu was generally better than 10% for all the methods when NIST 612 (40 microg x g(-1)) was measured. Method detection limits (MDL) for the EC and LA-ICP-MS methods were similar, generally less than 1 microg x g(-1) for the analysis of NIST 612. While the solution sample introduction methods using EC and ID presented excellent sensitivity and precision, these methods have the disadvantages of destroying the sample and involving complex sample preparation. The laser ablation method was simpler and faster, and produced discrimination comparable to EC-ICP-MS and ID-ICP-MS. LA-ICP-MS can offer an excellent alternative to solution analysis of glass in forensic casework samples.
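
    Elemental discrimination between fragments is often operationalized with a match-interval rule; the sketch below uses a generic ±k·SD overlap criterion with invented concentrations, not necessarily the exact comparison criterion of this study:

```python
def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    """Sample standard deviation (n - 1 denominator)."""
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

def fragments_match(frag_a, frag_b, k=3):
    """frag_a, frag_b: dicts mapping element -> replicate concentrations.

    Fragments 'match' if, for every element, the means differ by less than
    k standard deviations (the larger of the two fragments' SDs).
    """
    for element in frag_a:
        interval = k * max(sd(frag_a[element]), sd(frag_b[element]))
        if abs(mean(frag_a[element]) - mean(frag_b[element])) > interval:
            return False
    return True

a = {"Sr": [88.0, 90.0, 89.0], "Zr": [41.0, 43.0, 42.0]}
b = {"Sr": [91.0, 89.5, 90.5], "Zr": [42.5, 41.5, 43.5]}
c = {"Sr": [120.0, 122.0, 121.0], "Zr": [41.0, 42.0, 43.0]}
print(fragments_match(a, b), fragments_match(a, c))  # True False
```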

  14. A global analysis of Y-chromosomal haplotype diversity for 23 STR loci.

    PubMed

    Purps, Josephine; Siegert, Sabine; Willuweit, Sascha; Nagy, Marion; Alves, Cíntia; Salazar, Renato; Angustia, Sheila M T; Santos, Lorna H; Anslinger, Katja; Bayer, Birgit; Ayub, Qasim; Wei, Wei; Xue, Yali; Tyler-Smith, Chris; Bafalluy, Miriam Baeta; Martínez-Jarreta, Begoña; Egyed, Balazs; Balitzki, Beate; Tschumi, Sibylle; Ballard, David; Court, Denise Syndercombe; Barrantes, Xinia; Bäßler, Gerhard; Wiest, Tina; Berger, Burkhard; Niederstätter, Harald; Parson, Walther; Davis, Carey; Budowle, Bruce; Burri, Helen; Borer, Urs; Koller, Christoph; Carvalho, Elizeu F; Domingues, Patricia M; Chamoun, Wafaa Takash; Coble, Michael D; Hill, Carolyn R; Corach, Daniel; Caputo, Mariela; D'Amato, Maria E; Davison, Sean; Decorte, Ronny; Larmuseau, Maarten H D; Ottoni, Claudio; Rickards, Olga; Lu, Di; Jiang, Chengtao; Dobosz, Tadeusz; Jonkisz, Anna; Frank, William E; Furac, Ivana; Gehrig, Christian; Castella, Vincent; Grskovic, Branka; Haas, Cordula; Wobst, Jana; Hadzic, Gavrilo; Drobnic, Katja; Honda, Katsuya; Hou, Yiping; Zhou, Di; Li, Yan; Hu, Shengping; Chen, Shenglan; Immel, Uta-Dorothee; Lessig, Rüdiger; Jakovski, Zlatko; Ilievska, Tanja; Klann, Anja E; García, Cristina Cano; de Knijff, Peter; Kraaijenbrink, Thirsa; Kondili, Aikaterini; Miniati, Penelope; Vouropoulou, Maria; Kovacevic, Lejla; Marjanovic, Damir; Lindner, Iris; Mansour, Issam; Al-Azem, Mouayyad; Andari, Ansar El; Marino, Miguel; Furfuro, Sandra; Locarno, Laura; Martín, Pablo; Luque, Gracia M; Alonso, Antonio; Miranda, Luís Souto; Moreira, Helena; Mizuno, Natsuko; Iwashima, Yasuki; Neto, Rodrigo S Moura; Nogueira, Tatiana L S; Silva, Rosane; Nastainczyk-Wulf, Marina; Edelmann, Jeanett; Kohl, Michael; Nie, Shengjie; Wang, Xianping; Cheng, Baowen; Núñez, Carolina; Pancorbo, Marian Martínez de; Olofsson, Jill K; Morling, Niels; Onofri, Valerio; Tagliabracci, Adriano; Pamjav, Horolma; Volgyi, Antonia; Barany, Gusztav; Pawlowski, Ryszard; Maciejewska, Agnieszka; Pelotti, Susi; Pepinski, Witold; Abreu-Glowacka, 
Monica; Phillips, Christopher; Cárdenas, Jorge; Rey-Gonzalez, Danel; Salas, Antonio; Brisighelli, Francesca; Capelli, Cristian; Toscanini, Ulises; Piccinini, Andrea; Piglionica, Marilidia; Baldassarra, Stefania L; Ploski, Rafal; Konarzewska, Magdalena; Jastrzebska, Emila; Robino, Carlo; Sajantila, Antti; Palo, Jukka U; Guevara, Evelyn; Salvador, Jazelyn; Ungria, Maria Corazon De; Rodriguez, Jae Joseph Russell; Schmidt, Ulrike; Schlauderer, Nicola; Saukko, Pekka; Schneider, Peter M; Sirker, Miriam; Shin, Kyoung-Jin; Oh, Yu Na; Skitsa, Iulia; Ampati, Alexandra; Smith, Tobi-Gail; Calvit, Lina Solis de; Stenzl, Vlastimil; Capal, Thomas; Tillmar, Andreas; Nilsson, Helena; Turrina, Stefania; De Leo, Domenico; Verzeletti, Andrea; Cortellini, Venusia; Wetton, Jon H; Gwynne, Gareth M; Jobling, Mark A; Whittle, Martin R; Sumita, Denilce R; Wolańska-Nowak, Paulina; Yong, Rita Y Y; Krawczak, Michael; Nothnagel, Michael; Roewer, Lutz

    2014-09-01

    In a worldwide collaborative effort, 19,630 Y-chromosomes were sampled from 129 different populations in 51 countries. These chromosomes were typed for 23 short-tandem repeat (STR) loci (DYS19, DYS389I, DYS389II, DYS390, DYS391, DYS392, DYS393, DYS385ab, DYS437, DYS438, DYS439, DYS448, DYS456, DYS458, DYS635, GATAH4, DYS481, DYS533, DYS549, DYS570, DYS576, and DYS643) using the PowerPlex Y23 System (PPY23, Promega Corporation, Madison, WI). Locus-specific allelic spectra of these markers were determined and a consistently high level of allelic diversity was observed. A considerable number of null, duplicate and off-ladder alleles were revealed. Standard single-locus and haplotype-based parameters were calculated and compared between subsets of Y-STR markers established for forensic casework. The PPY23 marker set provides substantially stronger discriminatory power than other available kits but at the same time reveals the same general patterns of population structure as other marker sets. A strong correlation was observed between the number of Y-STRs included in a marker set and some of the forensic parameters under study. Interestingly, a weak but consistent trend toward smaller genetic distances resulting from larger numbers of markers became apparent. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
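
    The haplotype-based discriminatory power compared above is commonly summarized by Nei's haplotype (gene) diversity, h = n/(n−1) · (1 − Σ pᵢ²), where the pᵢ are sample haplotype frequencies among n chromosomes. A sketch with invented haplotype labels:

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    """Nei's unbiased haplotype diversity for a list of haplotype labels."""
    n = len(haplotypes)
    freqs = [count / n for count in Counter(haplotypes).values()]
    return n / (n - 1) * (1 - sum(p * p for p in freqs))

# Ten sampled chromosomes; H1 and H8 are each seen twice.
sample = ["H1", "H1", "H2", "H3", "H4", "H5", "H6", "H7", "H8", "H8"]
print(round(haplotype_diversity(sample), 3))  # 0.956
```

    Adding loci to a marker set splits shared haplotypes apart, driving h toward 1, which is why the 23-locus PPY23 set discriminates better than smaller kits.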

  15. Wide-Field Imaging Interferometry Spatial-Spectral Image Synthesis Algorithms

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.; Leisawitz, David T.; Rinehart, Stephen A.; Memarsadeghi, Nargess; Sinukoff, Evan J.

    2012-01-01

    Developed is an algorithmic approach for wide field-of-view interferometric spatial-spectral image synthesis. The data collected from the interferometer consist of a set of double-Fourier image data cubes, one cube per baseline. These cubes are each three-dimensional, consisting of arrays of two-dimensional detector counts versus delay-line position. For each baseline, a moving delay line allows collection of a large set of interferograms over the 2D wide-field detector grid: one sampled interferogram per detector pixel per baseline. This aggregate set of interferograms is algorithmically processed to construct a single spatial-spectral cube with angular resolution approaching the ratio of the wavelength to the longest baseline. The wide-field imaging is accomplished by ensuring that the range of motion of the delay line encompasses the zero optical path difference fringe for each detector pixel in the desired field of view. Each baseline cube is incoherent relative to all other baseline cubes and thus has only phase information relative to itself. This lost phase information is recovered by having point, or otherwise known, sources within the field of view. The reference source phase is known and utilized as a constraint to recover the coherent phase relation between the baseline cubes, and is key to the image synthesis. Described will be the mathematical formalism with phase referencing, and results will be shown using data collected from the NASA/GSFC Wide-Field Imaging Interferometry Testbed (WIIT).
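
    The per-pixel core of the double-Fourier idea can be sketched directly: detector counts versus delay-line position form an interferogram whose Fourier transform recovers the source spectrum. One monochromatic source and a pure-Python DFT; the sampling and units below are illustrative only.

```python
import math

def dft_magnitudes(signal):
    """Magnitudes of the first n/2 + 1 DFT bins of a real signal."""
    n = len(signal)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

n = 64
k_true = 5  # source wavenumber, in DFT bins
# Interferogram: constant background plus a fringe at the source wavenumber.
interferogram = [1 + math.cos(2 * math.pi * k_true * t / n) for t in range(n)]
mags = dft_magnitudes(interferogram)
peak = max(range(1, len(mags)), key=lambda k: mags[k])  # skip the DC bin
print(peak)  # 5
```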

  16. The valuation of the EQ-5D in Portugal.

    PubMed

    Ferreira, Lara N; Ferreira, Pedro L; Pereira, Luis N; Oppe, Mark

    2014-03-01

    The EQ-5D is a preference-based measure widely used in cost-utility analysis (CUA). Several countries have conducted surveys to derive value sets, but this was not the case for Portugal. The purpose of this study was to estimate a value set for the EQ-5D for Portugal using the time trade-off (TTO). A representative sample of the Portuguese general population (n = 450) stratified by age and gender valued 24 health states. Face-to-face interviews were conducted by trained interviewers. Each respondent ranked and valued seven health states using the TTO. Several models were estimated at both the individual and aggregated levels to predict health state valuations. Alternative functional forms were considered to account for the skewed distribution of these valuations. The models were analyzed in terms of their coefficients, overall fit and the ability for predicting the TTO values. Random effects models were estimated using generalized least squares and were robust across model specification. The results are generally consistent with other value sets. This research provides the Portuguese EQ-5D value set based on the preferences of the Portuguese general population as measured by the TTO. This value set is recommended for use in CUA conducted in Portugal.
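
    The TTO valuations modelled above rest on a simple utility calculation at the respondent's indifference point: if living the full horizon in the target health state is judged equal to x years in full health, the state's utility is x divided by the horizon. A minimal sketch assuming the usual 10-year TTO frame (states worse than dead use a different transformation, omitted here):

```python
def tto_utility(years_full_health, horizon=10.0):
    """Utility of a health state from a TTO indifference point.

    years_full_health: years in full health judged equivalent to `horizon`
    years in the target state (0 <= years_full_health <= horizon).
    """
    return years_full_health / horizon

# A respondent indifferent between 7.5 years healthy and 10 years in the state:
print(tto_utility(7.5))  # 0.75
```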

  17. Point Analysis in Java applied to histological images of the perforant pathway: a user's account.

    PubMed

    Scorcioni, Ruggero; Wright, Susan N; Patrick Card, J; Ascoli, Giorgio A; Barrionuevo, Germán

    2008-01-01

    The freeware Java tool Point Analysis in Java (PAJ), created to perform 3D point analysis, was tested in an independent laboratory setting. The input data consisted of images of the hippocampal perforant pathway from serial immunocytochemical localizations of the rat brain in multiple views at different resolutions. The low magnification set (x2 objective) comprised the entire perforant pathway, while the high magnification set (x100 objective) allowed the identification of individual fibers. A preliminary stereological study revealed a striking linear relationship between the fiber count at high magnification and the optical density at low magnification. PAJ enabled fast analysis for down-sampled data sets and a friendly interface with automated plot drawings. Noted strengths included the multi-platform support as well as the free availability of the source code, conducive to a broad user base and maximum flexibility for ad hoc requirements. PAJ has great potential to extend its usability by (a) improving its graphical user interface, (b) increasing its input size limit, (c) improving response time for large data sets, and (d) potentially being integrated with other Java graphical tools such as ImageJ.

  18. Four-gene Pan-African Blood Signature Predicts Progression to Tuberculosis.

    PubMed

    Suliman, Sara; Thompson, Ethan; Sutherland, Jayne; Weiner Rd, January; Ota, Martin O C; Shankar, Smitha; Penn-Nicholson, Adam; Thiel, Bonnie; Erasmus, Mzwandile; Maertzdorf, Jeroen; Duffy, Fergal J; Hill, Philip C; Hughes, E Jane; Stanley, Kim; Downing, Katrina; Fisher, Michelle L; Valvo, Joe; Parida, Shreemanta K; van der Spuy, Gian; Tromp, Gerard; Adetifa, Ifedayo M O; Donkor, Simon; Howe, Rawleigh; Mayanja-Kizza, Harriet; Boom, W Henry; Dockrell, Hazel; Ottenhoff, Tom H M; Hatherill, Mark; Aderem, Alan; Hanekom, Willem A; Scriba, Thomas J; Kaufmann, Stefan He; Zak, Daniel E; Walzl, Gerhard

    2018-04-06

    Contacts of tuberculosis (TB) patients constitute an important target population for preventative measures as they are at high risk of infection with Mycobacterium tuberculosis and progression to disease. We investigated biosignatures with predictive ability for incident tuberculosis. In a case-control study nested within the Grand Challenges 6-74 longitudinal HIV-negative African cohort of exposed household contacts, we employed RNA sequencing, polymerase chain reaction (PCR) and the Pair Ratio algorithm in a training/test set approach. Overall, 79 progressors, who developed tuberculosis between 3 and 24 months following exposure, and 328 matched non-progressors, who remained healthy during 24 months of follow-up, were investigated. A four-transcript signature (RISK4), derived from samples in a South African and Gambian training set, predicted progression up to two years before onset of disease in blinded test set samples from South Africa, The Gambia and Ethiopia with little population-associated variability and also validated on an external cohort of South African adolescents with latent Mycobacterium tuberculosis infection. By contrast, published diagnostic or prognostic tuberculosis signatures predicted on samples from some but not all 3 countries, indicating site-specific variability. Post-hoc meta-analysis identified a single gene pair, C1QC/TRAV27, that would consistently predict TB progression in household contacts from multiple African sites but not in infected adolescents without known recent exposure events. Collectively, we developed a simple whole blood-based PCR test to predict tuberculosis in household contacts from diverse African populations, with potential for implementation in national TB contact investigation programs.
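
    A gene-pair classifier like the C1QC/TRAV27 pair described can be sketched as a ratio threshold: predict progression when the expression ratio of the pair exceeds a cutoff learned on a training set. The expression values and the midpoint cutoff rule below are invented for illustration; this is not the published RISK4 model or Pair Ratio algorithm.

```python
def learn_cutoff(ratios_progressors, ratios_nonprogressors):
    """Midpoint between group means: a deliberately simple training rule."""
    m1 = sum(ratios_progressors) / len(ratios_progressors)
    m0 = sum(ratios_nonprogressors) / len(ratios_nonprogressors)
    return (m1 + m0) / 2

def predict(c1qc, trav27, cutoff):
    """Classify a contact from the expression of the two genes."""
    return "progressor" if c1qc / trav27 > cutoff else "non-progressor"

# Hypothetical training ratios for progressors and non-progressors:
cutoff = learn_cutoff([2.1, 1.8, 2.4], [0.7, 0.9, 0.8])
print(predict(3.0, 1.2, cutoff), predict(1.0, 1.5, cutoff))
```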

  19. Sensitive and Specific Detection of Early Gastric Cancer Using DNA Methylation Analysis of Gastric Washes

    PubMed Central

    Watanabe, Yoshiyuki; Kim, Hyun Soo; Castoro, Ryan J.; Chung, Woonbok; Estecio, Marcos R. H.; Kondo, Kimie; Guo, Yi; Ahmed, Saira S.; Toyota, Minoru; Itoh, Fumio; Suk, Ki Tae; Cho, Mee-Yon; Shen, Lanlan; Jelinek, Jaroslav; Issa, Jean-Pierre J.

    2009-01-01

    Background & Aims: Aberrant DNA methylation is an early and frequent process in gastric carcinogenesis and could be useful for detection of gastric neoplasia. We hypothesized that methylation analysis of DNA recovered from gastric washes could be used to detect gastric cancer. Methods: We studied 51 candidate genes in 7 gastric cancer cell lines and 24 samples (training set) and identified 6 for further studies. We examined the methylation status of these genes in a test set consisting of 131 gastric neoplasias at various stages. Finally, we validated the 6 candidate genes in a different population of 40 primary gastric cancer samples and 113 non-neoplastic gastric mucosa samples. Results: Six genes (MINT25, RORA, GDNF, ADAM23, PRDM5, MLF1) showed frequent differential methylation between gastric cancer and normal mucosa in the training, test and validation sets. GDNF and MINT25 were the most sensitive molecular markers of early-stage gastric cancer, while PRDM5 and MLF1 were markers of a field defect. There was a close correlation (r=0.5 to 0.9, p=0.03 to 0.001) between methylation levels in tumor biopsies and gastric washes. MINT25 methylation had the best sensitivity (90%), specificity (96%), and area under the ROC curve (0.961) for tumor detection in gastric washes. Conclusions: These findings suggest MINT25 is a sensitive and specific marker for gastric cancer screening. Additionally, we have developed a new methodology for gastric cancer detection by DNA methylation analysis of gastric washes. PMID:19375421
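    The area under the ROC curve reported for MINT25 (0.961) has a useful probabilistic reading: it equals the probability that a randomly chosen tumor sample shows higher methylation than a randomly chosen normal sample. A minimal sketch of that computation, on made-up methylation percentages:

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    random positive scores higher than a random negative (ties 1/2)."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy methylation levels (%): tumors tend high, normal mucosa low
labels = [1, 1, 1, 0, 0, 0, 0]
meth   = [80, 65, 20, 35, 10, 5, 2]
auc = auc_mann_whitney(labels, meth)
print(auc)
```

    Sensitivity and specificity then follow from choosing one operating threshold on the same score.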

  20. Development of modern human subadult age and sex estimation standards using multi-slice computed tomography images from medical examiner's offices

    NASA Astrophysics Data System (ADS)

    Stock, Michala K.; Stull, Kyra E.; Garvin, Heather M.; Klales, Alexandra R.

    2016-10-01

    Forensic anthropologists are routinely asked to estimate a biological profile (i.e., age, sex, ancestry and stature) from a set of unidentified remains. In contrast to the abundance of collections and techniques associated with adult skeletons, there is a paucity of modern, documented subadult skeletal material, which limits the creation and validation of appropriate forensic standards. Many practitioners are forced to use antiquated methods derived from small sample sizes, which, given documented secular changes in the growth and development of children, are not appropriate for application in the medico-legal setting. Therefore, the aim of this project is to use multi-slice computed tomography (MSCT) data from a large, diverse sample of modern subadults to develop new methods to estimate subadult age and sex for practical forensic applications. The research sample will consist of over 1,500 full-body MSCT scans of modern subadult individuals (aged birth to 20 years) obtained from two U.S. medical examiner's offices. Statistical analysis of epiphyseal union scores, long bone osteometrics, and os coxae landmark data will be used to develop modern subadult age and sex estimation standards. This project will result in a database of information gathered from the MSCT scans, as well as the creation of modern, statistically rigorous standards for skeletal age and sex estimation in subadults. Furthermore, the research and methods developed in this project will be applicable to dry bone specimens, MSCT scans, and radiographic images, thus providing both tools and continued access to data for forensic practitioners in a variety of settings.

  1. Development of a direct observation Measure of Environmental Qualities of Activity Settings.

    PubMed

    King, Gillian; Rigby, Patty; Batorowicz, Beata; McMain-Klein, Margot; Petrenchik, Theresa; Thompson, Laura; Gibson, Michelle

    2014-08-01

    The aim of this study was to develop an observer-rated measure of aesthetic, physical, social, and opportunity-related qualities of leisure activity settings for young people (with or without disabilities). Eighty questionnaires were completed by sets of raters who independently rated 22 community/home activity settings. The scales of the 32-item Measure of Environmental Qualities of Activity Settings (MEQAS; Opportunities for Social Activities, Opportunities for Physical Activities, Pleasant Physical Environment, Opportunities for Choice, Opportunities for Personal Growth, and Opportunities to Interact with Adults) were determined using principal components analyses. Test-retest reliability was determined for eight activity settings, rated twice (4-6 wk interval) by a trained rater. The factor structure accounted for 80% of the variance. The Kaiser-Meyer-Olkin Measure of Sampling Adequacy was 0.73. Cronbach's alphas for the scales ranged from 0.76 to 0.96, and interrater reliabilities (ICCs) ranged from 0.60 to 0.93. Test-retest reliabilities ranged from 0.70 to 0.90. Results suggest that the MEQAS has a sound factor structure and preliminary evidence of internal consistency, interrater reliability, and test-retest reliability. The MEQAS is the first observer-completed measure of environmental qualities of activity settings. The MEQAS allows researchers to comprehensively assess the qualities and affordances of activity settings, and can be used to design and assess environmental qualities of programs for young people. © 2014 Mac Keith Press.

  2. Evaluating the consistency of gene sets used in the analysis of bacterial gene expression data.

    PubMed

    Tintle, Nathan L; Sitarik, Alexandra; Boerema, Benjamin; Young, Kylie; Best, Aaron A; Dejongh, Matthew

    2012-08-08

    Statistical analyses of whole genome expression data require functional information about genes in order to yield meaningful biological conclusions. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) are common sources of functionally grouped gene sets. For bacteria, the SEED and MicrobesOnline provide alternative, complementary sources of gene sets. To date, no comprehensive evaluation of the data obtained from these resources has been performed. We define a series of gene set consistency metrics directly related to the most common classes of statistical analyses for gene expression data, and then perform a comprehensive analysis of 3581 Affymetrix® gene expression arrays across 17 diverse bacteria. We find that gene sets obtained from GO and KEGG demonstrate lower consistency than those obtained from the SEED and MicrobesOnline, regardless of gene set size. Despite the widespread use of GO and KEGG gene sets in bacterial gene expression data analysis, the SEED and MicrobesOnline provide more consistent sets for a wide variety of statistical analyses. Increased use of the SEED and MicrobesOnline gene sets in the analysis of bacterial gene expression data may improve statistical power and utility of expression data.
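    The abstract does not reproduce the paper's specific consistency metrics, so as an illustrative stand-in only: one natural way to quantify how "consistent" a gene set is for expression analysis is the average pairwise correlation of its member genes across arrays, since co-annotated genes are expected to be co-expressed. Toy data below.

```python
import numpy as np

def mean_pairwise_correlation(expr):
    """Illustrative gene set consistency score (a stand-in, not the
    paper's metric): average pairwise Pearson correlation between
    member genes. Rows = genes, columns = arrays."""
    r = np.corrcoef(expr)              # gene-by-gene correlation matrix
    iu = np.triu_indices_from(r, k=1)  # upper triangle, no diagonal
    return r[iu].mean()

# A coherent 3-gene set: co-regulated profiles across 5 arrays
coherent = np.array([[1., 2., 3., 4., 5.],
                     [2., 4., 6., 8., 10.],
                     [1., 2., 3., 4., 6.]])
score = mean_pairwise_correlation(coherent)
print(round(score, 3))
```

    Under a metric of this kind, a gene set drawn from a resource with tighter functional grouping (as the paper reports for the SEED and MicrobesOnline) would score closer to 1 than a looser GO or KEGG grouping.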

  3. Recursive SVM biomarker selection for early detection of breast cancer in peripheral blood.

    PubMed

    Zhang, Fan; Kaufman, Howard L; Deng, Youping; Drabier, Renee

    2013-01-01

    Breast cancer is the second most common cancer worldwide after lung cancer. Traditional mammography and tissue microarrays have been studied for early cancer detection and cancer prediction. However, there is a need for more reliable diagnostic tools for early detection of breast cancer, which is a challenge due to a number of factors and logistics. First, obtaining tissue biopsies can be difficult. Second, mammography may not detect small tumors and is often unsatisfactory for younger women, who typically have dense breast tissue. Lastly, breast cancer is not a single homogeneous disease but consists of multiple disease states, each arising from a distinct molecular mechanism and having a distinct clinical progression path, which makes the disease difficult to detect and predict in early stages. In this paper, we present a Support Vector Machine based on Recursive Feature Elimination and Cross Validation (SVM-RFE-CV) algorithm for early detection of breast cancer in peripheral blood and show how to use SVM-RFE-CV to model the classification and prediction problem of early detection of breast cancer in peripheral blood. The training set, consisting of 32 healthy and 33 cancer samples, and the testing set, consisting of 31 healthy and 34 cancer samples, were randomly separated from a peripheral-blood breast cancer dataset downloaded from the Gene Expression Omnibus. First, we identified 42 differentially expressed biomarkers between "normal" and "cancer". Then, with SVM-RFE-CV we extracted 15 biomarkers that yield a zero cross-validation error. Lastly, we compared the classification and prediction performance of SVM-RFE-CV with that of SVM and SVM Recursive Feature Elimination (SVM-RFE). We found that 1) SVM-RFE-CV is suitable for analyzing noisy high-throughput microarray data, 2) it outperforms SVM-RFE in robustness to noise and in the ability to recover informative features, and 3) it improves the prediction performance (area under the curve) on the testing data set from 0.5826 to 0.7879. Further pathway analysis showed that the biomarkers are associated with signaling, hemostasis, hormones, and the immune system, consistent with previous findings. Our prediction model can serve as a general model for biomarker discovery in early detection of other cancers. In the future, polymerase chain reaction (PCR) validation is planned to assess the ability of these potential biomarkers for early detection of breast cancer.
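    The general SVM-RFE-CV pattern (not the authors' exact implementation) can be sketched with scikit-learn's `RFECV`, which wraps recursive feature elimination around a linear SVM and lets cross-validation choose how many features to keep. The synthetic data below stands in for the expression matrix.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC

# Synthetic stand-in for the expression data: 65 samples, 42 candidate
# biomarkers, of which only a few carry class information.
X, y = make_classification(n_samples=65, n_features=42, n_informative=5,
                           n_redundant=0, random_state=0)

# Linear SVM + recursive feature elimination; cross-validation picks
# the number of retained features (the SVM-RFE-CV idea).
selector = RFECV(SVC(kernel="linear"), step=1, cv=5).fit(X, y)
print(selector.n_features_, "features retained of", X.shape[1])
```

    `selector.support_` is a boolean mask over the original features, so the reduced biomarker panel is simply `X[:, selector.support_]`.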

  4. KIKI-net: cross-domain convolutional neural networks for reconstructing undersampled magnetic resonance images.

    PubMed

    Eo, Taejoon; Jun, Yohan; Kim, Taeseong; Jang, Jinseong; Lee, Ho-Joon; Hwang, Dosik

    2018-04-06

    To demonstrate accurate MR image reconstruction from undersampled k-space data using cross-domain convolutional neural networks (CNNs). Cross-domain CNNs consist of 3 components: (1) a deep CNN operating on k-space (KCNN), (2) a deep CNN operating on the image domain (ICNN), and (3) interleaved data consistency operations. These components are alternately applied, and each CNN is trained to minimize the loss between the reconstructed and corresponding fully sampled k-spaces. The final reconstructed image is obtained by forward-propagating the undersampled k-space data through the entire network. Performances of K-net (KCNN with inverse Fourier transform), I-net (ICNN with interleaved data consistency), and various combinations of the 2 different networks were tested. The test results indicated that K-net and I-net have different advantages/disadvantages in terms of tissue-structure restoration. Consequently, the combination of K-net and I-net is superior to single-domain CNNs. Three MR data sets, the T2 fluid-attenuated inversion recovery (T2-FLAIR) set from the Alzheimer's Disease Neuroimaging Initiative and 2 data sets acquired at our local institute (T2-FLAIR and T1-weighted), were used to evaluate the performance of 7 conventional reconstruction algorithms and the proposed cross-domain CNNs, hereafter referred to as KIKI-net. KIKI-net outperforms conventional algorithms with mean improvements of 2.29 dB in peak SNR and 0.031 in structure similarity. KIKI-net exhibits superior performance over state-of-the-art conventional algorithms in terms of restoring tissue structures and removing aliasing artifacts. The results demonstrate that KIKI-net is applicable up to a reduction factor of 3 to 4 based on variable-density Cartesian undersampling. © 2018 International Society for Magnetic Resonance in Medicine.
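    The interleaved data consistency step has a simple, standard form (this is the generic operation, not the paper's network code): transform the current reconstruction to k-space, overwrite the sampled locations with the acquired data, and transform back, so the CNN output never contradicts the measurements.

```python
import numpy as np

def data_consistency(image, k_acquired, mask):
    """Enforce consistency with the measurements: replace the sampled
    k-space locations of the current reconstruction with acquired data."""
    k = np.fft.fft2(image)
    k[mask] = k_acquired[mask]
    return np.fft.ifft2(k)

rng = np.random.default_rng(0)
truth = rng.standard_normal((8, 8))
k_full = np.fft.fft2(truth)
mask = rng.random((8, 8)) < 0.4  # ~40% variable-density-style sampling
recon = data_consistency(np.zeros((8, 8)), k_full, mask)
# At the sampled k-space locations the output now matches the data exactly.
print(np.allclose(np.fft.fft2(recon)[mask], k_full[mask]))
```

    In a cross-domain network this operation is applied between each KCNN/ICNN stage, which is why aliasing removed by the CNNs cannot drift away from the acquired samples.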

  5. The Swedish version of the multidimensional scale of perceived social support (MSPSS)--a psychometric evaluation study in women with hirsutism and nursing students.

    PubMed

    Ekbäck, Maria; Benzein, Eva; Lindberg, Magnus; Arestedt, Kristofer

    2013-10-10

    The Multidimensional Scale of Perceived Social Support (MSPSS) is a short instrument developed to assess perceived social support. The original English version has been widely used and has demonstrated satisfactory psychometric properties in different settings, but no validated Swedish version has been available. The aim was therefore to translate, adapt and psychometrically evaluate the Multidimensional Scale of Perceived Social Support for use in a Swedish context. In total, 281 participants agreed to take part in the study: a main sample of 127 women with hirsutism and a reference sample of 154 nursing students. The MSPSS was translated and culturally adapted according to the rigorous official process approved by the WHO. The psychometric evaluation included item analysis, evaluation of factor structure, known-group validity, internal consistency and reproducibility. The original three-factor structure was reproduced in the main sample of women with hirsutism. An equivalent factor structure was demonstrated in a cross-validation based on the reference sample of nursing students. Known-group validity was supported and internal consistency was good for all scales (α = 0.91-0.95). The test-retest showed acceptable to very good reproducibility for the items (κw = 0.58-0.85) and the scales (ICC = 0.89-0.92; CCC = 0.89-0.92). The Swedish version of the MSPSS is a multidimensional scale with sound psychometric properties in the present study sample. The simple and short format makes it a useful tool for measuring perceived social support.
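    The internal consistency figures quoted above are Cronbach's alpha values, which can be computed directly from an item-score matrix; the rating data below is invented for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Toy 4-item subscale rated 1-7 by five respondents
scores = np.array([[7, 6, 7, 6],
                   [5, 5, 6, 5],
                   [3, 3, 2, 3],
                   [6, 6, 6, 7],
                   [2, 1, 2, 2]])
alpha = cronbach_alpha(scores)
print(round(alpha, 2))
```

    Values above roughly 0.9, as reported for the Swedish MSPSS scales, indicate that the items of a subscale move together across respondents.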

  6. Population clustering based on copy number variations detected from next generation sequencing data.

    PubMed

    Duan, Junbo; Zhang, Ji-Gang; Wan, Mingxi; Deng, Hong-Wen; Wang, Yu-Ping

    2014-08-01

    Copy number variations (CNVs) can be used as significant biomarkers, and next-generation sequencing (NGS) provides high-resolution detection of these CNVs. However, how to extract features from CNVs and apply them to genomic studies such as population clustering has become a major challenge. In this paper, we propose a novel method for population clustering based on CNVs from NGS. First, CNVs are extracted from each sample to form a feature matrix. Then, this feature matrix is decomposed into a source matrix and a weight matrix with non-negative matrix factorization (NMF). The source matrix consists of common CNVs that are shared by all the samples from the same group, and the weight matrix indicates the corresponding level of CNVs in each sample. Therefore, using NMF of CNVs one can differentiate samples from different ethnic groups, i.e., perform population clustering. To validate the approach, we applied it to both simulated data and two real data sets from the 1000 Genomes Project. The results on simulated data demonstrate that the proposed method can recover the true common CNVs with high quality. The first real data analysis shows that the proposed method can cluster two family trios with different ancestries into two ethnic groups, and the second shows that the method can be applied to whole-genome data with a large sample size consisting of multiple groups. Both results demonstrate the potential of the proposed method for population clustering.
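    The decomposition described above can be sketched with scikit-learn's NMF (a generic implementation, not the authors' code): the factorization V ≈ WH yields shared CNV patterns in H (the "source") and per-sample loadings in W, so clustering by each sample's dominant loading recovers the groups. The CNV matrix below is simulated.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Toy CNV feature matrix: 20 samples x 50 loci, two groups that each
# share a disjoint block of common CNVs, plus small positive noise.
base = np.zeros((2, 50))
base[0, :10] = 2.0
base[1, 25:35] = 2.0
group = np.repeat([0, 1], 10)
X = base[group] + 0.1 * rng.random((20, 50))

# V ~ W H: H holds the shared CNV patterns, W the per-sample weights.
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)
labels = W.argmax(axis=1)   # cluster = dominant source component
print(labels)
```

    Non-negativity is what makes the factors interpretable here: each source row reads directly as a set of co-occurring copy number gains.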

  7. Validation of reference genes for RT-qPCR studies of gene expression in banana fruit under different experimental conditions.

    PubMed

    Chen, Lei; Zhong, Hai-ying; Kuang, Jian-fei; Li, Jian-guo; Lu, Wang-jin; Chen, Jian-ye

    2011-08-01

    Reverse transcription quantitative real-time PCR (RT-qPCR) is a sensitive technique for quantifying gene expression, but its success depends on the stability of the reference gene(s) used for data normalization. Only a few studies validating reference genes have been conducted in fruit trees, and none yet in banana. In the present work, 20 candidate reference genes were selected, and their expression stability in 144 banana samples was evaluated and analyzed using two algorithms, geNorm and NormFinder. The samples consisted of eight sample sets collected under different experimental conditions, including various tissues, developmental stages, postharvest ripening, stresses (chilling, high temperature, and pathogen), and hormone treatments. Our results showed that different suitable reference gene(s), or combinations of reference genes, should be selected for normalization depending on the experimental conditions. The RPS2 and UBQ2 genes were validated as the most suitable reference genes across all tested samples. More importantly, our data further showed that the widely used reference genes ACT and GAPDH were not the most suitable reference genes in many banana sample sets. In addition, the expression of MaEBF1, a gene of interest that plays an important role in regulating fruit ripening, under different experimental conditions was used to further confirm the validated reference genes. Taken together, our results provide guidelines for reference gene selection under different experimental conditions and a foundation for more accurate and widespread use of RT-qPCR in banana.
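    geNorm's core idea can be sketched in a few lines (a simplified rendition, not the published implementation): a gene's stability measure M is the average standard deviation of its log-ratio against every other candidate across samples, so a stably expressed gene keeps near-constant ratios and gets a low M. The expression values below are invented.

```python
import numpy as np

def genorm_m(expr):
    """geNorm-style stability measure M per gene: the mean standard
    deviation of its log2 expression ratio against every other
    candidate across samples. Lower M = more stable.
    Rows = genes, columns = samples (relative expression > 0)."""
    log_expr = np.log2(np.asarray(expr, float))
    n = log_expr.shape[0]
    m = np.empty(n)
    for j in range(n):
        ratios = log_expr[j] - log_expr     # log-ratios vs every gene
        sds = ratios.std(axis=1, ddof=1)
        m[j] = np.delete(sds, j).mean()     # exclude self (sd = 0)
    return m

# Three candidates over six samples: gene 2 fluctuates the most
expr = np.array([[1.0, 1.1, 0.9, 1.0, 1.05, 0.95],
                 [1.0, 0.9, 1.1, 1.0, 1.00, 1.00],
                 [1.0, 2.0, 0.5, 4.0, 0.25, 1.00]])
m = genorm_m(expr)
print(m.argmax())  # index of the least stable candidate
```

    geNorm proper iterates this: the highest-M gene is dropped and M is recomputed until the most stable pair remains, which is how winners like RPS2 and UBQ2 emerge from a candidate panel.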

  8. Concentration of ions in selected bottled water samples sold in Malaysia

    NASA Astrophysics Data System (ADS)

    Aris, Ahmad Zaharin; Kam, Ryan Chuan Yang; Lim, Ai Phing; Praveena, Sarva Mangala

    2013-03-01

    Many consumers around the world, including Malaysians, have turned to bottled water as their main source of drinking water. The aim of this study is to determine the physical and chemical properties of bottled water samples sold in Selangor, Malaysia. A total of 20 bottled water brands consisting of 'natural mineral (NM)' and 'packaged drinking (PD)' types were randomly collected and analyzed for their physical-chemical characteristics: hydrogen ion concentration (pH), electrical conductivity (EC) and total dissolved solids (TDS); selected major ions: calcium (Ca), potassium (K), magnesium (Mg) and sodium (Na); and minor trace constituents: copper (Cu) and zinc (Zn), to ascertain their suitability for human consumption. The results obtained were compared with guideline values recommended by the World Health Organization (WHO) and the Malaysian Ministry of Health (MMOH). It was found that all bottled water samples were in accordance with the guidelines set by WHO and MMOH except for one sample (D3), which was below the pH limit of 6.5. Both NM and PD bottled water were dominated by Na + K > Ca > Mg. Low values for EC and TDS in the bottled water samples showed that the water was deficient in essential elements, likely an indication that these were removed by water treatment. Major ions were present in very low concentrations, which could pose a risk to individuals who consume this water on a regular basis. Generally, the overall quality of the supplied bottled water was in accordance with the standards and guidelines set by WHO and MMOH and safe for consumption.

  9. Generalizability of findings from randomized controlled trials: application to the National Institute of Drug Abuse Clinical Trials Network.

    PubMed

    Susukida, Ryoko; Crum, Rosa M; Ebnesajjad, Cyrus; Stuart, Elizabeth A; Mojtabai, Ramin

    2017-07-01

    To compare randomized controlled trial (RCT) sample treatment effects with the population effects of substance use disorder (SUD) treatment. Statistical weighting was used to re-compute the effects from 10 RCTs such that the participants in the trials had characteristics that resembled those of patients in the target populations. Multi-site RCTs and usual SUD treatment settings in the United States. A total of 3592 patients in 10 RCTs and 1,602,226 patients from usual SUD treatment settings between 2001 and 2009. Three outcomes of SUD treatment were examined: retention, urine toxicology and abstinence. We weighted the RCT sample treatment effects using propensity scores representing the conditional probability of participating in RCTs. Weighting the samples changed the significance of estimated sample treatment effects. Most commonly, positive effects of trials became statistically non-significant after weighting (three trials for retention and urine toxicology and one trial for abstinence); also, non-significant effects became significantly positive (one trial for abstinence) and significantly negative effects became non-significant (two trials for abstinence). There was suggestive evidence of treatment effect heterogeneity in subgroups that are under- or over-represented in the trials, some of which was consistent with the differences in average treatment effects between weighted and unweighted results. The findings of randomized controlled trials (RCTs) for substance use disorder treatment do not appear to be directly generalizable to target populations when the RCT samples do not adequately reflect the target populations and there is treatment effect heterogeneity across patient subgroups. © 2017 Society for the Study of Addiction.
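    The weighting scheme described above can be sketched generically (this is the standard propensity-of-participation approach, not the authors' exact model): fit a model for the probability of being in the trial given covariates, then weight trial participants by the odds of non-participation so their covariate distribution moves toward the target population. All data below is simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Covariates (e.g. age, severity) for 200 trial and 2000 population
# patients; trial patients are systematically shifted in this toy setup.
X_trial = rng.normal(loc=-0.5, size=(200, 2))
X_pop   = rng.normal(loc=0.0,  size=(2000, 2))
X = np.vstack([X_trial, X_pop])
in_trial = np.r_[np.ones(200), np.zeros(2000)]

# Propensity of trial participation; weighting trial patients by the
# odds of NON-participation re-balances them toward the population.
p = LogisticRegression().fit(X, in_trial).predict_proba(X_trial)[:, 1]
w = (1 - p) / p

# Weighted trial covariate means move toward the population means.
raw_gap = np.abs(X_trial.mean(0) - X_pop.mean(0)).sum()
wt_gap = np.abs(np.average(X_trial, axis=0, weights=w) - X_pop.mean(0)).sum()
print(wt_gap < raw_gap)
```

    Applying the same weights when estimating the treatment effect yields the re-weighted (generalized) effect that the study compares against the unweighted trial result.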

  10. Standardization, evaluation and early-phase method validation of an analytical scheme for batch-consistency N-glycosylation analysis of recombinant produced glycoproteins.

    PubMed

    Zietze, Stefan; Müller, Rainer H; Brecht, René

    2008-03-01

    In order to set up a batch-to-batch consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole scheme was standardized, evaluated and validated according to the requirements for analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). The methods were standardized through clearly defined standard operating procedures. During evaluation of the methods, the main focus was on determining the loss of oligosaccharides within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.

  11. Diversity-based reasoning in children.

    PubMed

    Heit, E; Hahn, U

    2001-12-01

    One of the hallmarks of inductive reasoning by adults is the diversity effect, namely that people draw stronger inferences from a diverse set of evidence than from a more homogenous set of evidence. However, past developmental work has not found consistent diversity effects with children age 9 and younger. We report robust sensitivity to diversity in children as young as 5, using everyday stimuli such as pictures of objects with people. Experiment 1 showed the basic diversity effect in 5- to 9-year-olds. Experiment 2 showed that, like adults, children restrict their use of diversity information when making inferences about remote categories. Experiment 3 used other stimulus sets to overcome an alternate explanation in terms of sample size rather than diversity effects. Finally, Experiment 4 showed that children more readily draw on diversity when reasoning about objects and their relations with people than when reasoning about objects' internal, hidden properties, thus partially explaining the negative findings of previous work. Relations to cross-cultural work and models of induction are discussed. Copyright 2001 Academic Press.

  12. Prediction of Mutagenicity of Chemicals from Their Calculated Molecular Descriptors: A Case Study with Structurally Homogeneous versus Diverse Datasets.

    PubMed

    Basak, Subhash C; Majumdar, Subhabrata

    2015-01-01

    Variation in high-dimensional data is often driven by a few latent factors, and hence dimension reduction or variable selection techniques are often useful for extracting information from the data. In this paper we consider two such recent methods: interrelated two-way clustering and envelope models. We couple these methods with traditional statistical procedures such as ridge regression and linear discriminant analysis, and apply them to two data sets which have more predictors than samples (i.e., the n < p scenario) and several types of molecular descriptors. One of these datasets consists of a congeneric group of amines, while the other contains a much more diverse collection of compounds. The difference in prediction results between these two datasets for both methods supports the hypothesis that for a congeneric set of compounds, descriptors of a single type are enough to provide good QSAR models, but as the data set grows more diverse, including a variety of descriptors can improve model quality considerably.
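    In the n < p setting described here, ordinary least squares is ill-posed (infinitely many exact fits), which is why a penalized method like ridge regression is used; a minimal sketch on simulated descriptor data showing the stabilizing effect of the penalty:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, p = 40, 300                       # more descriptors than compounds
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 1.0                       # a few latent descriptors drive activity
y = X @ beta + 0.1 * rng.standard_normal(n)

# The L2 penalty keeps the n < p problem well-posed; increasing alpha
# shrinks the coefficient vector toward zero.
small = Ridge(alpha=0.1).fit(X, y)
large = Ridge(alpha=100.0).fit(X, y)
print(np.linalg.norm(large.coef_) < np.linalg.norm(small.coef_))
```

    Envelope models and two-way clustering play a complementary role in the paper: they reduce the descriptor space before (or instead of) relying solely on shrinkage.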

  13. A Brief Measure of Narcissism Among Female Juvenile Delinquents and Community Youths: The Narcissistic Personality Inventory-13.

    PubMed

    Pechorro, Pedro; Maroco, João; Ray, James V; Gonçalves, Rui Abrunhosa; Nunes, Cristina

    2018-06-01

    Research on narcissism has a long tradition, but there is limited knowledge regarding its application among female youth, especially for forensic samples of incarcerated female youth. Drawing on 377 female adolescents (103 selected from forensic settings and 274 selected from school settings) from Portugal, the current study is the first to examine simultaneously the psychometric properties of a brief version of the Narcissistic Personality Inventory (NPI-13) among females drawn from incarcerated and community settings. The results support the three-factor structure model of narcissism after the removal of one item due to its low factor loading. Internal consistency, convergent validity, and discriminant validity showed promising results. In terms of criterion-related validity, significant associations were found with criterion-related variables such as age of criminal onset, conduct disorder, crime severity, violent crimes, and alcohol and drug use. The findings provide support for use of the NPI-13 among female juveniles.

  14. Complementary and Alternative Medicine Use in Infertility: Cultural and Religious Influences in a Multicultural Canadian Setting

    PubMed Central

    Read, Suzanne C.; Carrier, Marie-Eve; Whitley, Rob; Gold, Ian; Tulandi, Togas

    2014-01-01

    Abstract Objectives: To explore the use of complementary and alternative medicine (CAM) for infertility in a multicultural healthcare setting and to compare Western and non-Western infertility patients' reasons for using CAM and the meanings they attribute to CAM use. Design: Qualitative semi-structured interviews using thematic analysis. Settings/location: Two infertility clinics in Montreal, Quebec, Canada. Participants: An ethnoculturally varied sample of 32 heterosexual infertile couples. Results: CAM used included lifestyle changes (e.g., changing diet, exercise), alternative medicine (e.g., acupuncture, herbal medicines), and religious methods (e.g., prayers, religious talismans). Patients expressed three attitudes toward CAM: desperate hope, casual optimism, and amused skepticism. Participants' CAM use was consistent with cultural traditions of health and fertility: Westerners relied primarily on biomedicine and used CAM mainly for relaxation, whereas non-Westerners' CAM use was often influenced by culture-specific knowledge of health, illness and fertility. Conclusions: Understanding patients' CAM use may help clinicians provide culturally sensitive, patient-centered care. PMID:25127071

  15. Reconstruction of Microraptor and the evolution of iridescent plumage.

    PubMed

    Li, Quanguo; Gao, Ke-Qin; Meng, Qingjin; Clarke, Julia A; Shawkey, Matthew D; D'Alba, Liliana; Pei, Rui; Ellison, Mick; Norell, Mark A; Vinther, Jakob

    2012-03-09

    Iridescent feather colors involved in displays of many extant birds are produced by nanoscale arrays of melanin-containing organelles (melanosomes). Data relevant to the evolution of these colors and the properties of melanosomes involved in their generation have been limited. A data set sampling variables of extant avian melanosomes reveals that those forming most iridescent arrays are distinctly narrow. Quantitative comparison of these data with melanosome imprints densely sampled from a previously unknown specimen of the Early Cretaceous feathered Microraptor predicts that its plumage was predominantly iridescent. The capacity for simple iridescent arrays is thus minimally inferred in paravian dinosaurs. This finding and estimation of Microraptor feathering consistent with an ornamental function for the tail suggest a centrality for signaling in early evolution of plumage and feather color.

  16. Surface development of a brazing alloy during heat treatment-a comparison between UHV and APXPS

    NASA Astrophysics Data System (ADS)

    Rullik, L.; Johansson, N.; Bertram, F.; Evertsson, J.; Stenqvist, T.; Lundgren, E.

    2018-01-01

    In an attempt to bridge the pressure gap, APXPS was used to follow the surface development of an aluminum brazing sheet during heating in an ambient oxygen pressure mimicking the environment of an industrial brazing furnace. The studied aluminum alloy brazing sheet is a composite material consisting of two standard aluminum alloys whose surface is covered by a native aluminum oxide film. To emphasize the necessity of studying this system in ambient sample environments, it is compared to measurements in UHV. Changes in the thickness and composition of the surface oxide were followed after heating to 300 °C, 400 °C, and 500 °C. The two data sets presented in this paper show that the surface development strongly depends on the environment in which the sample is heated.

  17. Mathematics beliefs and instructional strategies in achievement of elementary-school students in Japan: results from the TIMSS 2003 assessment.

    PubMed

    House, J Daniel

    2007-04-01

    Recent findings concerning mathematics assessment indicate that students in Japan consistently score above international averages. Researchers have examined specific mathematics beliefs and instructional strategies associated with mathematics achievement for students in Japan. This study examined relationships among self-beliefs, classroom instructional strategies, and mathematics achievement for a large national sample of students (N=4,207) from the TIMSS 2003 international sample of fourth graders in Japan. Several significant relationships between mathematics beliefs and test scores were found; a number of classroom teaching strategies were also significantly associated with test scores. However, multiple regression using the complete set of five mathematics beliefs and five instructional strategies explained only 25.1% of the variance in mathematics achievement test scores.

  18. Indications of a spatial variation of the fine structure constant.

    PubMed

    Webb, J K; King, J A; Murphy, M T; Flambaum, V V; Carswell, R F; Bainbridge, M B

    2011-11-04

    We previously reported Keck telescope observations suggesting a smaller value of the fine structure constant α at high redshift. New Very Large Telescope (VLT) data, probing a different direction in the Universe, shows an inverse evolution; α increases at high redshift. Although the pattern could be due to as yet undetected systematic effects, with the systematics as presently understood the combined data set fits a spatial dipole, significant at the 4.2 σ level, in the direction right ascension 17.5 ± 0.9 h, declination -58 ± 9 deg. The independent VLT and Keck samples give consistent dipole directions and amplitudes, as do high and low redshift samples. A search for systematics, using observations duplicated at both telescopes, reveals none so far which emulate this result.
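    A dipole model of the kind fitted above is linear in its parameters once each absorber's sightline is written as a unit vector: Δα/α = m + B·n̂, where |B| is the dipole amplitude and B/|B| its direction. A sketch of that fit on simulated sightlines (the axis, amplitude, and noise level below are invented, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(1)
# Unit sightline vectors for 100 simulated absorbers
v = rng.standard_normal((100, 3))
n_hat = v / np.linalg.norm(v, axis=1, keepdims=True)

# Simulate a monopole + dipole signal in da/a along a known axis
true_axis = np.array([0.0, 0.0, 1.0])
da = 0.1 + 0.5 * (n_hat @ true_axis) + 0.01 * rng.standard_normal(100)

# da/a = m + B . n_hat is linear in (m, B): ordinary least squares
A = np.column_stack([np.ones(100), n_hat])
coef, *_ = np.linalg.lstsq(A, da, rcond=None)
m, B = coef[0], coef[1:]
amplitude = np.linalg.norm(B)
direction = B / amplitude
print(amplitude, direction)
```

    In the actual analysis each measurement is inverse-variance weighted and the significance of the recovered amplitude is assessed against the no-dipole model, which is the origin of the quoted 4.2σ.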

  19. Long-slit Spectroscopy of Edge-on Low Surface Brightness Galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Wei; Wu, Hong; Zhu, Yinan

    2017-03-10

We present long-slit optical spectra of 12 edge-on low surface brightness galaxies (LSBGs) positioned along their major axes. After performing reddening corrections for the emission-line fluxes measured from the extracted integrated spectra, we measured the gas-phase metallicities of our LSBG sample using both the [N II]/Hα and the R23 diagnostics. Both sets of oxygen abundances show good agreement with each other, giving a median value of 12 + log(O/H) = 8.26 dex. In the luminosity–metallicity plot, our LSBG sample is consistent with the behavior of normal galaxies. In the mass–metallicity diagram, our LSBG sample has lower metallicities for lower stellar mass, similar to normal galaxies. The stellar masses estimated from z-band luminosities are comparable to those of prominent spirals. In a plot of the gas mass fraction versus metallicity, our LSBG sample generally agrees with other samples in the high gas mass fraction space. Additionally, we have studied stellar populations of three LSBGs, which have relatively reliable spectral continua and high signal-to-noise ratios, and qualitatively conclude that they have a potential dearth of stars with ages <1 Gyr instead of being dominated by stellar populations with ages >1 Gyr. Regarding the chemical evolution of our sample, the LSBG data appear to allow for up to 30% metal loss, but we cannot completely rule out the closed-box model. Additionally, we find evidence that our galaxies retain up to about three times as much of their metals compared with dwarfs, consistent with metal retention being related to galaxy mass. In conclusion, our data support the view that LSBGs are probably just normal disk galaxies continuously extending to the low end of surface brightness.
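The [N II]/Hα diagnostic mentioned above is typically applied through a strong-line calibration such as the Pettini & Pagel (2004) N2 relation, 12 + log(O/H) = 8.90 + 0.57 × log₁₀([N II]λ6584/Hα). A minimal sketch with a hypothetical, reddening-corrected flux ratio (the numbers below are invented for illustration, not taken from the paper):

```python
# Hedged sketch: Pettini & Pagel (2004) N2 strong-line calibration.
# The flux ratio used here is hypothetical, chosen only to illustrate
# how a median abundance of 12 + log(O/H) ~ 8.26 could arise.
import math

def oxygen_abundance_n2(f_nii: float, f_halpha: float) -> float:
    """Gas-phase 12 + log(O/H) from the [N II]6584/Halpha flux ratio."""
    n2 = math.log10(f_nii / f_halpha)
    return 8.90 + 0.57 * n2

# A ratio of ~0.075 yields 12 + log(O/H) ~ 8.26, matching the
# median abundance quoted for the LSBG sample.
print(round(oxygen_abundance_n2(0.075, 1.0), 2))  # → 8.26
```

Because N2 uses two lines close in wavelength, it is nearly insensitive to reddening, which is one reason it is favored for faint, low surface brightness targets.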

  20. Psychometric Properties of the Young Children’s Participation and Environment Measure

    PubMed Central

    Khetani, Mary A.; Graham, James E.; Davies, Patricia L.; Law, Mary C.; Simeonsson, Rune J.

    2014-01-01

Objective To evaluate the psychometric properties of the newly developed Young Children’s Participation and Environment Measure (YC-PEM). Design Cross-sectional study. Setting Data were collected online and by telephone. Participants Convenience and snowball sampling methods were used to survey caregivers of 395 children (93 children with developmental disabilities and delays, 302 without) aged 0–5 years (mean = 35.33 months, SD = 20.29) and residing in North America. Interventions Not applicable. Main Outcome Measure(s) The YC-PEM includes three participation scales and one environment scale. Each scale is assessed across three settings: home, daycare/preschool, and community. Data were analyzed to derive estimates of internal consistency, test-retest reliability, and construct validity. Results Internal consistency ranged from .68 to .96 for the participation scales and from .92 to .96 for the environment scale. Test-retest reliability (2–4 weeks) ranged from .31 to .93 for the participation scales and from .91 to .94 for the environment scale. One of three participation scales and the environment scale demonstrated significant group differences by disability status across all three settings, and all four scales discriminated between disability groups for the daycare/preschool setting. The participation scales exhibited small to moderate positive associations with functional performance scores. Conclusion(s) Results lend initial support for the use of the YC-PEM in research to assess the participation of young children with disabilities and delays in terms of 1) home, daycare/preschool, and community participation patterns, 2) perceived environmental supports and barriers to participation, and 3) activity-specific parent strategies to promote participation. PMID:25449189
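Internal-consistency values like the .68–.96 range above are conventionally estimated with Cronbach's alpha. A minimal sketch on simulated item responses (the item count, sample size, and noise level are assumptions for illustration, not the YC-PEM data):

```python
# Hedged sketch: Cronbach's alpha, the usual estimator behind
# internal-consistency coefficients. Responses below are simulated:
# six items sharing one latent trait plus independent noise.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (respondents, items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(300, 1))                       # shared construct
items = trait + rng.normal(scale=0.8, size=(300, 6))    # 6 noisy items
print(round(float(cronbach_alpha(items)), 2))
```

Alpha rises as items correlate more strongly with one another, which is why scales with few or heterogeneous items (like some participation subscales) tend toward the lower end of the range.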
