Sample records for sufficiently large sample

  1. Static versus dynamic sampling for data mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John, G.H.; Langley, P.

    1996-12-31

    As data warehouses grow to the point where one hundred gigabytes is considered small, the computational efficiency of data-mining algorithms on large databases becomes increasingly important. Using a sample from the database can speed up the data-mining process, but this is only acceptable if it does not reduce the quality of the mined knowledge. To this end, we introduce the "Probably Close Enough" criterion to describe the desired properties of a sample. Sampling usually refers to the use of static statistical tests to decide whether a sample is sufficiently similar to the large database, in the absence of any knowledge of the tools the data miner intends to use. We discuss dynamic sampling methods, which take into account the mining tool being used and can thus give better samples. We describe dynamic schemes that observe a mining tool's performance on training samples of increasing size and use these results to determine when a sample is sufficiently large. We evaluate these sampling methods on data from the UCI repository and conclude that dynamic sampling is preferable.
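
    A minimal sketch of the dynamic-sampling idea described above: grow the training sample on a fixed schedule and stop once the mining tool's held-out accuracy stops improving. The geometric schedule, tolerance, and choice of classifier are illustrative assumptions, not the authors' exact scheme.

      # Dynamic (progressive) sampling sketch: enlarge the training sample until
      # the mining tool's held-out accuracy gain falls below a tolerance.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)
      X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

      def dynamic_sample_size(n0=100, growth=2.0, tol=0.005):
          """Return the smallest scheduled size whose accuracy gain is below tol."""
          n, prev_acc = n0, -np.inf
          while n <= len(X_pool):
              clf = DecisionTreeClassifier(random_state=0).fit(X_pool[:n], y_pool[:n])
              acc = clf.score(X_test, y_test)
              if acc - prev_acc < tol:          # "probably close enough"
                  return n, acc
              prev_acc, n = acc, int(n * growth)
          return len(X_pool), prev_acc

      n_needed, acc = dynamic_sample_size()
      print(f"a sample of {n_needed} records reaches accuracy {acc:.3f}")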

  2. A COMPARISON OF SIX BENTHIC MACROINVERTEBRATE SAMPLING METHODS IN FOUR LARGE RIVERS

    EPA Science Inventory

    In 1999, a study was conducted to compare six macroinvertebrate sampling methods in four large (boatable) rivers that drain into the Ohio River. Two methods each were adapted from existing methods used by the USEPA, USGS and Ohio EPA. Drift nets were unable to collect a suffici...

  3. Accuracy assessment with complex sampling designs

    Treesearch

    Raymond L. Czaplewski

    2010-01-01

    A reliable accuracy assessment of remotely sensed geospatial data requires a sufficiently large probability sample of expensive reference data. Complex sampling designs reduce cost or increase precision, especially with regional, continental and global projects. The General Restriction (GR) Estimator and the Recursive Restriction (RR) Estimator separate a complex...

  4. Evaluation of residual uranium contamination in the dirt floor of an abandoned metal rolling mill.

    PubMed

    Glassford, Eric; Spitz, Henry; Lobaugh, Megan; Spitler, Grant; Succop, Paul; Rice, Carol

    2013-02-01

    A single, large, bulk sample of uranium-contaminated material from the dirt floor of an abandoned metal rolling mill was separated into different types and sizes of aliquots to simulate samples that would be collected during site remediation. The facility rolled approximately 11,000 tons of hot-forged ingots of uranium metal approximately 60 y ago, and it has not been used since that time. Thirty small mass (≈ 0.7 g) and 15 large mass (≈ 70 g) samples were prepared from the heterogeneously contaminated bulk material to determine how measurements of the uranium contamination vary with sample size. Aliquots of bulk material were also resuspended in an exposure chamber to produce six samples of respirable particles that were obtained using a cascade impactor. Samples of removable surface contamination were collected by wiping 100 cm² of the interior surfaces of the exposure chamber with 47-mm-diameter fiber filters. Uranium contamination in each of the samples was measured directly using high-resolution gamma ray spectrometry. As expected, results for isotopic uranium (i.e., ²³⁸U and ²³⁵U) measured with the large-mass and small-mass samples are significantly different (p < 0.001), and the coefficient of variation (COV) for the small-mass samples was greater than for the large-mass samples. The uranium isotopic concentrations measured in the air and on the wipe samples were not significantly different and were also not significantly different (p > 0.05) from results for the large- or small-mass samples. Large-mass samples are more reliable for characterizing heterogeneously distributed radiological contamination than small-mass samples since they exhibit the least variation compared to the mean. Thus, samples should be sufficiently large in mass to ensure that the results are truly representative of the heterogeneously distributed uranium contamination present at the facility. Monitoring exposure of workers and the public as a result of uranium contamination resuspended during site remediation should be evaluated using samples of sufficient size and type to accommodate the heterogeneous distribution of uranium in the bulk material.

  5. Radiocarbon dating of extinct fauna in the Americas recovered from tar pits

    NASA Astrophysics Data System (ADS)

    Jull, A. J. T.; Iturralde-Vinent, M.; O'Malley, J. M.; MacPhee, R. D. E.; McDonald, H. G.; Martin, P. S.; Moody, J.; Rincón, A.

    2004-08-01

    We have obtained radiocarbon dates by accelerator mass spectrometry on bones of extinct large mammals from tar pits. Results on some samples of Glyptodon and Holmesina (extinct large mammals similar to armadillos) yielded ages of >25 and >21 ka, respectively. We also studied the radiocarbon ages of three different samples of bones from the extinct Cuban ground sloth, Parocnus browni, which yielded dates ranging from 4960 ± 280 to 11 880 ± 420 yr BP. In order to remove the tar component and pretreat the samples sufficiently to obtain reliable dates, we cleaned the samples by Soxhlet extraction in benzene. The resulting samples of collagenous material were often small.

  6. Assays for the activities of polyamine biosynthetic enzymes using intact tissues

    Treesearch

    Rakesh Minocha; Stephanie Long; Hisae Maki; Subhash C. Minocha

    1999-01-01

    Traditionally, most enzyme assays utilize homogenized cell extracts with or without dialysis. Homogenization and centrifugation of large numbers of samples for screening of mutants and transgenic cell lines is quite cumbersome and generally requires sufficiently large amounts (hundreds of milligrams) of tissue. However, in situations where the tissue is available in...

  7. Characterizing the zenithal night sky brightness in large territories: how many samples per square kilometre are needed?

    NASA Astrophysics Data System (ADS)

    Bará, Salvador

    2018-01-01

    A recurring question arises when trying to characterize, by means of measurements or theoretical calculations, the zenithal night sky brightness throughout a large territory: how many samples per square kilometre are needed? The optimum sampling distance should allow reconstructing, with sufficient accuracy, the continuous zenithal brightness map across the whole region, whilst at the same time avoiding unnecessary and redundant oversampling. This paper attempts to provide some tentative answers to this issue, using two complementary tools: the luminance structure function and the Nyquist-Shannon spatial sampling theorem. The analysis of several regions of the world, based on the data from the New world atlas of artificial night sky brightness, suggests that, as a rule of thumb, about one measurement per square kilometre could be sufficient for determining the zenithal night sky brightness of artificial origin at any point in a region to within ±0.1 magV arcsec-2 (in the root-mean-square sense) of its true value in the Johnson-Cousins V band. The exact reconstruction of the zenithal night sky brightness maps from samples taken at the Nyquist rate seems to be considerably more demanding.
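
    A small illustration of the Nyquist-Shannon criterion invoked above: if the shortest spatial wavelength to be reconstructed in the brightness map is lambda_min, samples must be spaced at most lambda_min/2 apart. The example wavelengths are assumptions for illustration, not values from the paper.

      # Nyquist-Shannon spatial sampling sketch: required sampling density on a
      # square grid when the smallest wavelength of interest is lambda_min (km).
      def samples_per_km2(lambda_min_km: float) -> float:
          spacing = lambda_min_km / 2.0      # maximum allowed sample spacing (km)
          return 1.0 / spacing ** 2          # samples per square kilometre

      for lam in (1.0, 2.0, 5.0):
          print(f"lambda_min = {lam:.0f} km -> {samples_per_km2(lam):.2f} samples/km^2")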

  8. Fatigue Crack Propagation in Rail Steels

    DOT National Transportation Integrated Search

    1977-06-01

    In order to establish safe inspection periods of railroad rails, information on fatigue crack growth rates is required. These data should come from a sufficiently large sample of rails presently in service. The reported research consisted of the gene...

  9. Planning Community-Based Assessments of HIV Educational Intervention Programs in Sub-Saharan Africa

    ERIC Educational Resources Information Center

    Kelcey, Ben; Shen, Zuchao

    2017-01-01

    A key consideration in planning studies of community-based HIV education programs is identifying a sample size large enough to ensure a reasonable probability of detecting program effects if they exist. Sufficient sample sizes for community- or group-based designs are proportional to the correlation or similarity of individuals within communities.…
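
    A minimal sketch of the design-effect calculation that underlies this kind of planning: the sample size required under simple random sampling is inflated by 1 + (m - 1) * rho for communities of size m with intraclass correlation rho. The example numbers are illustrative, not taken from the article.

      # Design-effect sketch for community- or group-based designs.
      import math

      def communities_required(n_srs: float, m: int, rho: float) -> int:
          deff = 1.0 + (m - 1) * rho          # design effect
          n_total = n_srs * deff              # individuals needed in total
          return math.ceil(n_total / m)       # number of communities

      print(communities_required(n_srs=128, m=20, rho=0.05))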

  10. Application of laboratory permeability data

    USGS Publications Warehouse

    Johnson, A.I.

    1963-01-01

    Some of the basic material contained in this report originally was prepared in 1952 as instructional handouts for ground-water short courses and for training of foreign participants. The material has been revised and expanded and is presented in the present form to make it more readily available to the field hydrologist. Illustrations now present published examples of the applications suggested in the 1952 material. For small areas, a field pumping test is sufficient to predict the characteristics of an aquifer. With a large area under study, the aquifer properties must be determined at many different locations and it is not usually economically feasible to make sufficient field tests to define the aquifer properties in detail for the whole aquifer. By supplementing a few field tests with laboratory permeability data and geologic interpretation, more point measurements representative of the hydrologic properties of the aquifer may be obtained. A sufficient number of samples seldom can be obtained to completely identify the permeability or transmissibility in detail for a project area. However, a few judiciously chosen samples of high quality, combined with good geologic interpretation, often will permit the extrapolation of permeability information over a large area with a fair degree of reliability. The importance of adequate geologic information, as well as the importance of collecting samples representative of at least all major textural units lying within the section or area of study, cannot be overemphasized.

  11. Errors in Measuring Water Potentials of Small Samples Resulting from Water Adsorption by Thermocouple Psychrometer Chambers 1

    PubMed Central

    Bennett, Jerry M.; Cortes, Peter M.

    1985-01-01

    The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios. PMID:16664367

  12. Errors in measuring water potentials of small samples resulting from water adsorption by thermocouple psychrometer chambers.

    PubMed

    Bennett, J M; Cortes, P M

    1985-09-01

    The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios.

  13. Dynamics of airborne fungal populations in a large office building

    NASA Technical Reports Server (NTRS)

    Burge, H. A.; Pierson, D. L.; Groves, T. O.; Strawn, K. F.; Mishra, S. K.

    2000-01-01

    The increasing concern with bioaerosols in large office buildings prompted this prospective study of airborne fungal concentrations in a newly constructed building on the Gulf coast. We collected volumetric culture plate air samples on 14 occasions over the 18-month period immediately following building occupancy. On each sampling occasion, we collected duplicate samples from three sites on three floors of this six-story building, and an outdoor sample. Fungal concentrations indoors were consistently below those outdoors, and no sample clearly indicated fungal contamination in the building, although visible growth appeared in the ventilation system during the course of the study. We conclude that modern mechanically ventilated buildings prevent the intrusion of most of the outdoor fungal aerosol, and that even relatively extensive air sampling protocols may not sufficiently document the microbial status of buildings.

  14. DESCARTES' RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA.

    PubMed

    Bhaskar, Anand; Song, Yun S

    2014-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.
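
    As a point of reference for the demographic models discussed above, the expected SFS under a constant population size follows the classical coalescent result that the expected count of sites with i derived copies is proportional to 1/i. The sketch below computes only this constant-size baseline; it is not the identifiability bound derived in the paper.

      # Expected (normalized) SFS for n sequences under a constant population size.
      def expected_sfs(n: int) -> list:
          weights = [1.0 / i for i in range(1, n)]   # i = 1, ..., n-1 derived copies
          total = sum(weights)
          return [w / total for w in weights]

      print([round(p, 3) for p in expected_sfs(10)])   # singletons are most frequent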

  15. DESCARTES’ RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA

    PubMed Central

    Bhaskar, Anand; Song, Yun S.

    2016-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the “folded” SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes’ rule of signs for polynomials to the Laplace transform of piecewise continuous functions. PMID:28018011

  16. The X-ray luminosity functions of Abell clusters from the Einstein Cluster Survey

    NASA Technical Reports Server (NTRS)

    Burg, R.; Giacconi, R.; Forman, W.; Jones, C.

    1994-01-01

    We have derived the present epoch X-ray luminosity function of northern Abell clusters using luminosities from the Einstein Cluster Survey. The sample is sufficiently large that we can determine the luminosity function for each richness class separately with sufficient precision to study and compare the different luminosity functions. We find that, within each richness class, the range of X-ray luminosity is quite large and spans nearly a factor of 25. Characterizing the luminosity function for each richness class with a Schechter function, we find that the characteristic X-ray luminosity, L*, scales with richness class as L* ∝ N*^γ, where N* is the corrected mean number of galaxies in a richness class, and the best-fitting exponent is γ = 1.3 ± 0.4. Finally, our analysis suggests that there is a lower limit to the X-ray luminosity of clusters which is determined by the integrated emission of the cluster member galaxies, and this also scales with richness class. The present sample forms a baseline for testing cosmological evolution of Abell-like clusters when an appropriate high-redshift cluster sample becomes available.
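
    For concreteness, the sketch below evaluates a Schechter luminosity function and the reported richness scaling L* ∝ N*^γ with γ = 1.3. The normalization, faint-end slope, and reference richness are placeholders, not values fitted in the survey.

      # Schechter luminosity function with a richness-dependent characteristic L*.
      import numpy as np

      def schechter(L, L_star, phi_star=1.0, alpha=-1.0):
          """Differential luminosity function phi(L)."""
          x = L / L_star
          return phi_star * x ** alpha * np.exp(-x) / L_star

      def L_star_for_richness(N_star, L_star_ref=1.0, N_star_ref=50.0, gamma=1.3):
          """Characteristic luminosity scaling as N*^gamma."""
          return L_star_ref * (N_star / N_star_ref) ** gamma

      L = np.logspace(-2, 1, 5)
      print(schechter(L, L_star=L_star_for_richness(N_star=100.0)))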

  17. Rapid DNA extraction protocol for detection of alpha-1 antitrypsin deficiency from dried blood spots by real-time PCR.

    PubMed

    Struniawski, R; Szpechcinski, A; Poplawska, B; Skronski, M; Chorostowska-Wynimko, J

    2013-01-01

    Dried blood spot (DBS) specimens have been successfully employed for the large-scale diagnostics of α1-antitrypsin (AAT) deficiency as an easy-to-collect and easy-to-transport alternative to plasma/serum. In the present study we propose a fast, efficient, and cost-effective protocol for DNA extraction from DBS samples that provides sufficient quantity and quality of DNA and effectively eliminates any natural PCR inhibitors, allowing for successful AAT genotyping by real-time PCR and direct sequencing. DNA extracted from 84 DBS samples from chronic obstructive pulmonary disease patients was genotyped for AAT deficiency variants by real-time PCR. The results of DBS AAT genotyping were validated by serum IEF phenotyping and AAT concentration measurement. The proposed protocol allowed successful DNA extraction from all analyzed DBS samples. Both quantity and quality of DNA were sufficient for further real-time PCR and, if necessary, for genetic sequence analysis. A 100% concordance between AAT DBS genotypes and serum phenotypes in positive detection of the two major deficiency S- and Z-alleles was achieved. Both assays, DBS AAT genotyping by real-time PCR and serum AAT phenotyping by IEF, positively identified the PI*S and PI*Z alleles in 8 out of the 84 (9.5%) and 16 out of 84 (19.0%) patients, respectively. In conclusion, the proposed protocol noticeably reduces the costs and the hands-on time of DBS sample preparation, providing genomic DNA of sufficient quantity and quality for further real-time PCR or genetic sequence analysis. Consequently, it is ideally suited for large-scale AAT deficiency screening programs and should be the method of choice.

  18. Microarrays for Undergraduate Classes

    ERIC Educational Resources Information Center

    Hancock, Dale; Nguyen, Lisa L.; Denyer, Gareth S.; Johnston, Jill M.

    2006-01-01

    A microarray experiment is presented that, in six laboratory sessions, takes undergraduate students from the tissue sample right through to data analysis. The model chosen, the murine erythroleukemia cell line, can be easily cultured in sufficient quantities for class use. Large changes in gene expression can be induced in these cells by…

  19. Annual design-based estimation for the annualized inventories of forest inventory and analysis: sample size determination

    Treesearch

    Hans T. Schreuder; Jin-Mann S. Lin; John Teply

    2000-01-01

    The Forest Inventory and Analysis units in the USDA Forest Service have been mandated by Congress to go to an annualized inventory where a certain percentage of plots, say 20 percent, will be measured in each State each year. Although this will result in an annual sample size that will be too small for reliable inference for many areas, it is a sufficiently large...

  20. A spinner magnetometer for large Apollo lunar samples.

    PubMed

    Uehara, M; Gattacceca, J; Quesnel, Y; Lepaulard, C; Lima, E A; Manfredi, M; Rochette, P

    2017-10-01

    We developed a spinner magnetometer to measure the natural remanent magnetization of large Apollo lunar rocks in the storage vault of the Lunar Sample Laboratory Facility (LSLF) of NASA. The magnetometer mainly consists of a commercially available three-axial fluxgate sensor and a hand-rotating sample table with an optical encoder recording the rotation angles. The distance between the sample and the sensor is adjustable according to the sample size and magnetization intensity. The sensor and the sample are placed in a two-layer mu-metal shield to measure the sample natural remanent magnetization. The magnetic signals are acquired together with the rotation angle to obtain stacking of the measured signals over multiple revolutions. The developed magnetometer has a sensitivity of 5 × 10⁻⁷ Am² at the standard sensor-to-sample distance of 15 cm. This sensitivity is sufficient to measure the natural remanent magnetization of almost all the lunar basalt and breccia samples with mass above 10 g in the LSLF vault.

  1. A spinner magnetometer for large Apollo lunar samples

    NASA Astrophysics Data System (ADS)

    Uehara, M.; Gattacceca, J.; Quesnel, Y.; Lepaulard, C.; Lima, E. A.; Manfredi, M.; Rochette, P.

    2017-10-01

    We developed a spinner magnetometer to measure the natural remanent magnetization of large Apollo lunar rocks in the storage vault of the Lunar Sample Laboratory Facility (LSLF) of NASA. The magnetometer mainly consists of a commercially available three-axial fluxgate sensor and a hand-rotating sample table with an optical encoder recording the rotation angles. The distance between the sample and the sensor is adjustable according to the sample size and magnetization intensity. The sensor and the sample are placed in a two-layer mu-metal shield to measure the sample natural remanent magnetization. The magnetic signals are acquired together with the rotation angle to obtain stacking of the measured signals over multiple revolutions. The developed magnetometer has a sensitivity of 5 × 10⁻⁷ Am² at the standard sensor-to-sample distance of 15 cm. This sensitivity is sufficient to measure the natural remanent magnetization of almost all the lunar basalt and breccia samples with mass above 10 g in the LSLF vault.
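
    A rough sketch of the stacking step described above: fluxgate readings are binned by encoder angle, averaged over many revolutions, and a one-cycle sinusoid is fitted to recover the amplitude and phase of the rotating moment's signal. The synthetic signal, bin count, and noise level are assumptions for illustration; converting the fitted amplitude to a magnetic moment would additionally require the instrument's calibration and geometry.

      # Stack fluxgate readings by rotation angle and fit a one-cycle sinusoid.
      import numpy as np

      rng = np.random.default_rng(0)
      angles = rng.uniform(0, 2 * np.pi, 5000)                  # encoder angles (rad)
      signal = 3.0 * np.cos(angles - 0.7) + rng.normal(0, 2.0, angles.size)  # nT

      n_bins = 36
      edges = np.linspace(0, 2 * np.pi, n_bins + 1)
      centers = 0.5 * (edges[:-1] + edges[1:])
      stacked = np.array([signal[(angles >= lo) & (angles < hi)].mean()
                          for lo, hi in zip(edges[:-1], edges[1:])])

      # Linear least squares for the amplitude and phase of cos(theta - phase).
      A = np.column_stack([np.cos(centers), np.sin(centers), np.ones_like(centers)])
      coef, *_ = np.linalg.lstsq(A, stacked, rcond=None)
      amplitude, phase = np.hypot(coef[0], coef[1]), np.arctan2(coef[1], coef[0])
      print(f"amplitude = {amplitude:.2f} nT, phase = {phase:.2f} rad")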

  2. Characterizing dispersal patterns in a threatened seabird with limited genetic structure

    Treesearch

    Laurie A. Hall; Per J. Palsboll; Steven R. Beissinger; James T. Harvey; Martine Berube; Martin G. Raphael; Kim Nelson; Richard T. Golightly; Laura McFarlane-Tranquilla; Scott H. Newman; M. Zachariah Peery

    2009-01-01

    Genetic assignment methods provide an appealing approach for characterizing dispersal patterns on ecological time scales, but require sufficient genetic differentiation to accurately identify migrants and a large enough sample size of migrants to, for example, compare dispersal between sexes or age classes. We demonstrate that assignment methods can be rigorously used...

  3. Assessment of fish assemblages and minimum sampling effort required to determine biotic integrity of large rivers in southern Idaho, 2002

    USGS Publications Warehouse

    Maret, Terry R.; Ott, D.S.

    2004-01-01

    width was determined to be sufficient for collecting an adequate number of fish to estimate species richness and evaluate biotic integrity. At most sites, about 250 fish were needed to effectively represent 95 percent of the species present. Fifty-three percent of the sites assessed, using an IBI developed specifically for large Idaho rivers, received scores of less than 50, indicating poor biotic integrity.

  4. A false sense of security? Can tiered approach be trusted to accurately classify immunogenicity samples?

    PubMed

    Jaki, Thomas; Allacher, Peter; Horling, Frank

    2016-09-05

    Detecting and characterizing anti-drug antibodies (ADA) against a protein therapeutic are crucially important to monitor the unwanted immune response. Usually a multi-tiered approach, which initially rapidly screens for positive samples that are subsequently confirmed in a separate assay, is employed for testing patient samples for ADA activity. In this manuscript we evaluate the ability of different methods to classify subjects with screening and competition-based confirmatory assays. We find that for the overall performance of the multi-stage process the method used for confirmation is most important, and that a t-test is best when differences are moderate to large. Moreover, we find that, when differences between positive and negative samples are not sufficiently large, using a competition-based confirmation step yields poor classification of positive samples.
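
    A minimal sketch of the two-tier workflow evaluated above: tier 1 screens each sample against a cut point derived from negative controls, and tier 2 confirms by testing whether excess drug suppresses the signal, here with a one-sided t-test since the paper finds the t-test works best for moderate-to-large differences. The 95th-percentile cut point and the simulated replicate data are illustrative assumptions.

      # Two-tier ADA classification sketch: screening cut point, then t-test confirmation.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      negatives = rng.normal(1.0, 0.1, 200)              # negative-control signals
      cut_point = np.percentile(negatives, 95)           # screening cut point

      def classify(sample_reps, competed_reps, alpha=0.05):
          if np.mean(sample_reps) <= cut_point:
              return "negative (screen)"
          t, p = stats.ttest_ind(sample_reps, competed_reps, alternative="greater")
          return "positive" if p < alpha else "negative (confirm)"

      sample = rng.normal(1.6, 0.1, 6)      # replicates without added drug
      competed = rng.normal(1.1, 0.1, 6)    # replicates with excess drug
      print(classify(sample, competed))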

  5. A direct observation method for auditing large urban centers using stratified sampling, mobile GIS technology and virtual environments.

    PubMed

    Lafontaine, Sean J V; Sawada, M; Kristjansson, Elizabeth

    2017-02-16

    With the expansion and growth of research on neighbourhood characteristics, there is an increased need for direct observational field audits. Herein, we introduce a novel direct observational audit method and systematic social observation instrument (SSOI) for efficiently assessing neighbourhood aesthetics over large urban areas. Our audit method uses spatial random sampling stratified by residential zoning and incorporates both mobile geographic information systems technology and virtual environments. The reliability of our method was tested in two ways: first, in 15 Ottawa neighbourhoods, we compared results at audited locations over two subsequent years; and second, we audited every residential block (167 blocks) in one neighbourhood and compared the distribution of SSOI aesthetics index scores with results from the randomly audited locations. Finally, we present interrater reliability and consistency results on all observed items. The observed neighbourhood average aesthetics index score estimated from four or five stratified random audit locations is sufficient to characterize the average neighbourhood aesthetics. The SSOI was internally consistent and demonstrated good to excellent interrater reliability. At the neighbourhood level, aesthetics is positively related to SES and physical activity and negatively correlated with BMI. The proposed approach to direct neighbourhood auditing performs sufficiently well and has the advantage of financial and temporal efficiency when auditing a large city.

  6. DNA nanomechanics allows direct digital detection of complementary DNA and microRNA targets.

    PubMed

    Husale, Sudhir; Persson, Henrik H J; Sahin, Ozgur

    2009-12-24

    Techniques to detect and quantify DNA and RNA molecules in biological samples have had a central role in genomics research. Over the past decade, several techniques have been developed to improve detection performance and reduce the cost of genetic analysis. In particular, significant advances in label-free methods have been reported. Yet detection of DNA molecules at concentrations below the femtomolar level requires amplified detection schemes. Here we report a unique nanomechanical response of hybridized DNA and RNA molecules that serves as an intrinsic molecular label. Nanomechanical measurements on a microarray surface have sufficient background signal rejection to allow direct detection and counting of hybridized molecules. The digital response of the sensor provides a large dynamic range that is critical for gene expression profiling. We have measured differential expressions of microRNAs in tumour samples; such measurements have been shown to help discriminate between the tissue origins of metastatic tumours. Two hundred picograms of total RNA is found to be sufficient for this analysis. In addition, the limit of detection in pure samples is found to be one attomolar. These results suggest that nanomechanical read-out of microarrays promises attomolar-level sensitivity and large dynamic range for the analysis of gene expression, while eliminating biochemical manipulations, amplification and labelling.

  7. An assessment of re-randomization methods in bark beetle (Scolytidae) trapping bioassays

    Treesearch

    Christopher J. Fettig; Christopher P. Dabney; Stepehen R. McKelvey; Robert R. Borys

    2006-01-01

    Numerous studies have explored the role of semiochemicals in the behavior of bark beetles (Scolytidae). Multiple funnel traps are often used to elucidate these behavioral responses. Sufficient sample sizes are obtained by using large numbers of traps to which treatments are randomly assigned once, or by frequent collection of trap catches and subsequent re-...

  8. Whale sharks target dense prey patches of sergestid shrimp off Tanzania

    PubMed Central

    Rohner, Christoph A.; Armstrong, Amelia J.; Pierce, Simon J.; Prebble, Clare E. M.; Cagua, E. Fernando; Cochran, Jesse E. M.; Berumen, Michael L.; Richardson, Anthony J.

    2015-01-01

    Large planktivores require high-density prey patches to make feeding energetically viable. This is a major challenge for species living in tropical and subtropical seas, such as whale sharks Rhincodon typus. Here, we characterize zooplankton biomass, size structure and taxonomic composition from whale shark feeding events and background samples at Mafia Island, Tanzania. The majority of whale sharks were feeding (73%, 380 of 524 observations), with the most common behaviour being active surface feeding (87%). We used 20 samples collected from immediately adjacent to feeding sharks and an additional 202 background samples for comparison to show that plankton biomass was ∼10 times higher in patches where whale sharks were feeding (25 vs. 2.6 mg m⁻³). Taxonomic analyses of samples showed that the large sergestid Lucifer hanseni (∼10 mm) dominated while sharks were feeding, accounting for ∼50% of identified items, while copepods (<2 mm) dominated background samples. The size structure was skewed towards larger animals representative of L. hanseni in feeding samples. Thus, whale sharks at Mafia Island target patches of dense, large zooplankton dominated by sergestids. Large planktivores, such as whale sharks, which generally inhabit warm oligotrophic waters, aggregate in areas where they can feed on dense prey to obtain sufficient energy. PMID:25814777

  9. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    PubMed

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

    The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.

  10. Determination of ²⁴¹Am in soil using an automated nuclear radiation measurement laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engstrom, D.E.; White, M.G.; Dunaway, P.B.

    The recent completion of REECo's Automated Laboratory and associated software systems has provided a significant increase in capability while reducing manpower requirements. The system is designed to perform gamma spectrum analyses on the large numbers of samples required by the current Nevada Applied Ecology Group (NAEG) and Plutonium Distribution Inventory Program (PDIP) soil sampling programs while maintaining sufficient sensitivities as defined by earlier investigations of the same type. The hardware and systems are generally described in this paper, with emphasis being placed on spectrum reduction and the calibration procedures used for soil samples.

  11. Multiplex titration RT-PCR: rapid determination of gene expression patterns for a large number of genes

    NASA Technical Reports Server (NTRS)

    Nebenfuhr, A.; Lomax, T. L.

    1998-01-01

    We have developed an improved method for determination of gene expression levels with RT-PCR. The procedure is rapid and does not require extensive optimization or densitometric analysis. Since the detection of individual transcripts is PCR-based, small amounts of tissue samples are sufficient for the analysis of expression patterns in large gene families. Using this method, we were able to rapidly screen nine members of the Aux/IAA family of auxin-responsive genes and identify those genes which vary in message abundance in a tissue- and light-specific manner. While not offering the accuracy of conventional semi-quantitative or competitive RT-PCR, our method allows quick screening of large numbers of genes in a wide range of RNA samples with just a thermal cycler and standard gel analysis equipment.

  12. Optimal Design in Three-Level Block Randomized Designs with Two Levels of Nesting: An ANOVA Framework with Random Effects

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2013-01-01

    Large-scale experiments that involve nested structures may assign treatment conditions either to subgroups such as classrooms or to individuals such as students within subgroups. Key aspects of the design of such experiments include knowledge of the variance structure in higher levels and the sample sizes necessary to reach sufficient power to…

  13. Health Outcomes among Hispanic Subgroups: Data from the National Health Interview Survey, 1992-95. Advance Data, Number 310.

    ERIC Educational Resources Information Center

    Hajat, Anjum; Lucas, Jacqueline B.; Kington, Raynard

    In this report, various health measures are compared across Hispanic subgroups in the United States. National Health Interview Survey (NHIS) data aggregated from 1992 through 1995 were analyzed. NHIS is one of the few national surveys that has a sample sufficiently large to allow such comparisons. Both age-adjusted and unadjusted estimates…

  14. The large sample size fallacy.

    PubMed

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature.
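
    The point is easy to demonstrate: with a very large sample, a trivial true difference produces an extreme p-value while the standardized effect size stays negligible. The simulated means, standard deviation, and sample size below are illustrative only.

      # Large sample size fallacy sketch: tiny effect, huge n, extreme p-value.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      n = 200_000
      a = rng.normal(50.0, 10.0, n)
      b = rng.normal(50.2, 10.0, n)          # a 0.2-unit (trivial) true difference

      t, p = stats.ttest_ind(a, b)
      pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
      cohens_d = (b.mean() - a.mean()) / pooled_sd
      print(f"p = {p:.2e}, Cohen's d = {cohens_d:.3f}")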

  15. Omega from the anisotropy of the redshift correlation function

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    1993-01-01

    Peculiar velocities distort the correlation function of galaxies observed in redshift space. In the large-scale, linear regime, the distortion takes a characteristic quadrupole plus hexadecapole form, with the amplitude of the distortion depending on the cosmological density parameter omega. Preliminary measurements are reported here of the harmonics of the correlation function in the CfA, SSRS, and IRAS 2 Jansky redshift surveys. The observed behavior of the harmonics agrees qualitatively with the predictions of linear theory on large scales in every survey. However, real anisotropy in the galaxy distribution induces large fluctuations in samples which do not yet probe a sufficiently fair volume of the Universe. In the CfA 14.5 sample in particular, the Great Wall induces a large negative quadrupole, which taken at face value implies an unrealistically large omega of about 20. The IRAS 2 Jy survey, which covers a substantially larger volume than the optical surveys and is less affected by fingers-of-god, yields a more reliable and believable value, omega = 0.5 (+0.5, −0.25).
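
    In linear theory the distortion is usually expressed through beta ≈ omega^0.6 / b, which fixes the quadrupole-to-monopole ratio (4β/3 + 4β²/7)/(1 + 2β/3 + β²/5). The sketch below simply inverts a measured ratio for beta under that standard formula; it is a textbook-level illustration, not the estimator used in the paper, and the example ratio is made up.

      # Invert a quadrupole-to-monopole ratio for beta, then omega (assuming b = 1).
      from scipy.optimize import brentq

      def quad_to_mono(beta):
          return (4 * beta / 3 + 4 * beta ** 2 / 7) / (1 + 2 * beta / 3 + beta ** 2 / 5)

      def beta_from_ratio(ratio):
          return brentq(lambda b: quad_to_mono(b) - ratio, 1e-6, 5.0)

      def omega_from_beta(beta, bias=1.0):
          return (beta * bias) ** (1.0 / 0.6)

      beta = beta_from_ratio(0.55)           # illustrative measured ratio
      print(f"beta = {beta:.2f}, omega (b = 1) = {omega_from_beta(beta):.2f}")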

  16. Calculating p-values and their significances with the Energy Test for large datasets

    NASA Astrophysics Data System (ADS)

    Barter, W.; Burr, C.; Parkes, C.

    2018-04-01

    The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
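
    For reference, the sketch below computes the T-value for two small samples with the Gaussian distance weighting ψ(d) = exp(-d²/2σ²) and estimates a p-value by permutation; the σ value and the normalization convention are assumptions, and this brute-force O(n²) version is only practical for small samples, which is exactly the limitation the paper's scaling method addresses.

      # Energy-test T statistic with a permutation p-value (small-sample sketch).
      import numpy as np
      from scipy.spatial.distance import cdist

      def energy_T(x, y, sigma=0.5):
          psi = lambda d: np.exp(-d ** 2 / (2 * sigma ** 2))
          n1, n2 = len(x), len(y)
          t1 = (psi(cdist(x, x)).sum() - n1) / (2 * n1 * (n1 - 1))  # i < j pairs only
          t2 = (psi(cdist(y, y)).sum() - n2) / (2 * n2 * (n2 - 1))
          t3 = psi(cdist(x, y)).sum() / (n1 * n2)
          return t1 + t2 - t3

      rng = np.random.default_rng(3)
      x = rng.normal(0.0, 1.0, (200, 2))
      y = rng.normal(0.1, 1.0, (200, 2))
      t_obs = energy_T(x, y)

      pooled = np.vstack([x, y])
      perm = []
      for _ in range(200):                   # null distribution by permutation
          rng.shuffle(pooled)
          perm.append(energy_T(pooled[:200], pooled[200:]))
      print(f"T = {t_obs:.4f}, permutation p = {np.mean(np.array(perm) >= t_obs):.3f}")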

  17. Cancer classification through filtering progressive transductive support vector machine based on gene expression data

    NASA Astrophysics Data System (ADS)

    Lu, Xinguo; Chen, Dan

    2017-08-01

    Traditional supervised classifiers neglect a large amount of data which not have sufficient follow-up information, only work with labeled data. Consequently, the small sample size limits the advancement of design appropriate classifier. In this paper, a transductive learning method which combined with the filtering strategy in transductive framework and progressive labeling strategy is addressed. The progressive labeling strategy does not need to consider the distribution of labeled samples to evaluate the distribution of unlabeled samples, can effective solve the problem of evaluate the proportion of positive and negative samples in work set. Our experiment result demonstrate that the proposed technique have great potential in cancer prediction based on gene expression.

  18. Hα and Gaia-RVS domain spectroscopy of Be stars and interacting binaries with Ondřejov 2m telescope

    NASA Astrophysics Data System (ADS)

    Koubský, P.; Kotková, L.; Votruba, V.

    2011-12-01

    A long term project to investigate the spectral appearance over the Gaia RVS domain of a large sample of Be stars and interacting binaries has been undertaken. The aim of the Ondřejov project is to create sufficient amounts of training data in the RVS wavelength domain to complement the Bp/Rp classification of Be stars which may be observed with Gaia. The project's current status is described and sample spectra in both the Hα and RVS wavelength domains are presented and discussed.

  19. Internal pilots for a class of linear mixed models with Gaussian and compound symmetric data

    PubMed Central

    Gurka, Matthew J.; Coffey, Christopher S.; Muller, Keith E.

    2015-01-01

    An internal pilot design uses interim sample size analysis, without interim data analysis, to adjust the final number of observations. The approach helps to choose a sample size sufficiently large (to achieve the statistical power desired), but not too large (which would waste money and time). We report on recent research in cerebral vascular tortuosity (curvature in three dimensions) which would benefit greatly from internal pilots due to uncertainty in the parameters of the covariance matrix used for study planning. Unfortunately, observations correlated across the four regions of the brain and small sample sizes preclude using existing methods. However, as in a wide range of medical imaging studies, tortuosity data have no missing or mistimed data, a factorial within-subject design, the same between-subject design for all responses, and a Gaussian distribution with compound symmetry. For such restricted models, we extend exact, small sample univariate methods for internal pilots to linear mixed models with any between-subject design (not just two groups). Planning a new tortuosity study illustrates how the new methods help to avoid sample sizes that are too small or too large while still controlling the type I error rate. PMID:17318914
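
    A minimal univariate sketch of the internal-pilot step: the interim data contribute only a variance estimate (no treatment comparison), which is plugged back into the usual two-sample formula to update the final sample size. This is the simple Wittes/Brittain-style version for a two-group comparison, not the restricted mixed-model extension developed in the paper; alpha, power, the detectable difference, and the pilot data are illustrative.

      # Internal pilot sketch: re-estimate a two-group sample size from interim variance.
      import math
      import numpy as np
      from scipy import stats

      def n_per_group(sigma, delta, alpha=0.05, power=0.8):
          z_a, z_b = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
          return math.ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

      n_planned = n_per_group(sigma=8.0, delta=5.0)       # planning-stage guess

      rng = np.random.default_rng(4)
      pilot = rng.normal(0.0, 10.0, n_planned)            # pooled interim observations
      sigma_interim = pilot.std(ddof=1)                   # variance only, no effect test
      n_final = max(n_planned, n_per_group(sigma=sigma_interim, delta=5.0))
      print(f"planned n/group = {n_planned}, re-estimated n/group = {n_final}")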

  20. The production of multiprotein complexes in insect cells using the baculovirus expression system.

    PubMed

    Abdulrahman, Wassim; Radu, Laura; Garzoni, Frederic; Kolesnikova, Olga; Gupta, Kapil; Osz-Papai, Judit; Berger, Imre; Poterszman, Arnaud

    2015-01-01

    The production of a homogeneous protein sample in sufficient quantities is an essential prerequisite for structural investigations and also represents a rate-limiting step for many functional studies. In the cell, a large fraction of eukaryotic proteins exists as large multicomponent assemblies with many subunits, which act in concert to catalyze specific activities. Many of these complexes cannot be obtained from endogenous source material, so recombinant expression and reconstitution are then required to overcome this bottleneck. This chapter describes current strategies and protocols for the efficient production of multiprotein complexes in large quantities and of high quality, using the baculovirus/insect cell expression system.

  1. Remote sensing data with the conditional latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    PubMed

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

    This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of NDVI images. The variography of NDVI image results demonstrates that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from 62,500 grids in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced spatial patterns of NDVI images. Overall, the proposed approach, which integrates the conditional Latin hypercube sampling approach, variogram, kriging and sequential Gaussian simulation in remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on spatial characteristics of landscape changes, including spatial variability and heterogeneity.

  2. Evaluation of Existing Methods for Human Blood mRNA Isolation and Analysis for Large Studies

    PubMed Central

    Meyer, Anke; Paroni, Federico; Günther, Kathrin; Dharmadhikari, Gitanjali; Ahrens, Wolfgang; Kelm, Sørge; Maedler, Kathrin

    2016-01-01

    Aims: Prior to implementing gene expression analyses from blood to a larger cohort study, an evaluation to set up a reliable and reproducible method is mandatory but challenging due to the specific characteristics of the samples as well as their collection methods. In this pilot study we optimized a combination of blood sampling and RNA isolation methods and present reproducible gene expression results from human blood samples. Methods: The established PAXgene™ blood collection method (Qiagen) was compared with the more recent Tempus™ collection and storing system. RNA from blood samples collected by both systems was extracted on columns with the corresponding Norgen and PAX RNA extraction kits. RNA quantity and quality was compared photometrically, with RiboGreen and by Real-Time PCR analyses of various reference genes (PPIA, β-ACTIN and TUBULIN) and exemplary of SIGLEC-7. Results: Combining different sampling methods and extraction kits caused strong variations in gene expression. The use of PAXgene™ and Tempus™ collection systems resulted in RNA of good quality and quantity for the respective RNA isolation system. No large inter-donor variations could be detected for both systems. However, it was not possible to extract sufficient RNA of good quality with the PAXgene™ RNA extraction system from samples collected by Tempus™ collection tubes. Comparing only the Norgen RNA extraction methods, RNA from blood collected either by the Tempus™ or PAXgene™ collection system delivered sufficient amount and quality of RNA, but the Tempus™ collection delivered higher RNA concentration compared to the PAXgene™ collection system. The established PreAnalytiX PAXgene™ RNA extraction system together with the PAXgene™ blood collection system showed the lowest CT-values, i.e. the highest RNA concentration of good quality. Expression levels of all tested genes were stable and reproducible. Conclusions: This study confirms that it is not possible to mix or change sampling or extraction strategies during the same study because of large variations in RNA yield and expression levels. PMID:27575051

  3. Differences by Degree: Evidence of the Net Financial Rates of Return to Undergraduate Study for England and Wales

    ERIC Educational Resources Information Center

    Walker, Ian; Zhu, Yu

    2011-01-01

    This paper provides estimates of the impact of higher education qualifications on the earnings of graduates in the U.K. by subject studied. We use data from the recent U.K. Labour Force Surveys which provide a sufficiently large sample to consider the effects of the subject studied, class of first degree, and postgraduate qualifications. Ordinary…

  4. Quantitative nanoscopy: Tackling sampling limitations in (S)TEM imaging of polymers and composites.

    PubMed

    Gnanasekaran, Karthikeyan; Snel, Roderick; de With, Gijsbertus; Friedrich, Heiner

    2016-01-01

    Sampling limitations in electron microscopy raise the question of whether the analysis of a bulk material is representative, especially when analyzing hierarchical morphologies that extend over multiple length scales. We tackled this problem by automatically acquiring a large series of partially overlapping (S)TEM images with sufficient resolution, subsequently stitched together to generate a large-area map using an in-house developed acquisition toolbox (TU/e Acquisition ToolBox) and stitching module (TU/e Stitcher). In addition, we show that quantitative image analysis of the large-scale maps provides representative information that can be related to the synthesis and process conditions of hierarchical materials, which moves electron microscopy analysis towards becoming a bulk characterization tool. We demonstrate the power of such an analysis by examining two different multi-phase materials that are structured over multiple length scales.

  5. Evaluating information content of SNPs for sample-tagging in re-sequencing projects.

    PubMed

    Hu, Hao; Liu, Xiang; Jin, Wenfei; Hilger Ropers, H; Wienker, Thomas F

    2015-05-15

    Sample-tagging is designed for identification of accidental sample mix-up, which is a major issue in re-sequencing studies. In this work, we develop a model to measure the information content of SNPs, so that we can optimize a panel of SNPs that approaches the maximal information for discrimination. The analysis shows that as few as 60 optimized SNPs can differentiate the individuals in a population as large as that of the present world, and only 30 optimized SNPs are in practice sufficient for labeling up to 100 thousand individuals. In the simulated populations of 100 thousand individuals, the average Hamming distances generated by the optimized set of 30 SNPs are larger than 18, and the duality frequency is lower than 1 in 10 thousand. This strategy of sample discrimination proves robust with large sample sizes and different datasets. The optimized sets of SNPs are designed for Whole Exome Sequencing, and a program is provided for SNP selection, allowing for customized SNP numbers and genes of interest. The sample-tagging plan based on this framework will improve re-sequencing projects in terms of reliability and cost-effectiveness.
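
    A toy sketch of the tagging idea: draw biallelic genotypes (0, 1, or 2 copies of the alternate allele) for a panel of SNPs and examine the pairwise Hamming distances between individuals. The panel size, allele frequency, and population size are illustrative assumptions; this random panel is not the optimized panel described in the paper.

      # Pairwise Hamming distances between random 30-SNP genotype tags.
      import numpy as np
      from scipy.spatial.distance import pdist

      rng = np.random.default_rng(5)
      n_individuals, n_snps, p = 1000, 30, 0.5
      genotypes = rng.binomial(2, p, size=(n_individuals, n_snps))

      hamming = pdist(genotypes, metric="hamming") * n_snps   # differing SNPs per pair
      print(f"mean Hamming distance = {hamming.mean():.1f} of {n_snps} SNPs")
      print(f"identical-tag pairs   = {(hamming == 0).sum()} of {hamming.size}")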

  6. Study of Evaporation Rate of Water in Hydrophobic Confinement using Forward Flux Sampling

    NASA Astrophysics Data System (ADS)

    Sharma, Sumit; Debenedetti, Pablo G.

    2012-02-01

    Drying of hydrophobic cavities is of interest in understanding biological self-assembly, protein stability, and the opening and closing of ion channels. The liquid-to-vapor transition of water in confinement is associated with large kinetic barriers, which preclude its study using conventional simulation techniques. Using forward flux sampling to study the kinetics of the transition between two hydrophobic surfaces, we show that a) the free energy barriers to evaporation scale linearly with the distance between the two surfaces, d; b) the evaporation rates increase as the lateral size of the surfaces, L, increases; and c) the transition state to evaporation for sufficiently large L is a cylindrical vapor cavity connecting the two hydrophobic surfaces. Finally, we decouple the effects of confinement geometry and surface chemistry on the evaporation rates.
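
    The skeleton below illustrates the forward-flux-sampling decomposition rate = flux(λ0) × Π P(λi+1 | λi) on a one-dimensional toy double-well Langevin model; the potential, interfaces, and all parameters are toy assumptions, not the confined-water system studied above.

      # Direct forward flux sampling on a toy 1-D double-well Langevin model.
      import numpy as np

      rng = np.random.default_rng(6)
      dt, kT, gamma = 1e-3, 1.0, 1.0
      force = lambda x: -4 * x * (x ** 2 - 1)        # -dV/dx for V = (x^2 - 1)^2

      def step(x):
          noise = np.sqrt(2 * kT * dt / gamma) * rng.normal()
          return x + force(x) * dt / gamma + noise

      lam_A, interfaces = -0.9, [-0.6, -0.3, 0.0, 0.3, 0.6]

      # 1) Effective positive flux through the first interface from a run in basin A.
      x, crossings, configs, n_steps, inside = -1.0, 0, [], 200_000, True
      for _ in range(n_steps):
          x = step(x)
          if inside and x >= interfaces[0]:
              crossings += 1; configs.append(x); inside = False
          elif x < lam_A:
              inside = True
      rate = crossings / (n_steps * dt)

      # 2) Conditional crossing probabilities between successive interfaces.
      for lam_next in interfaces[1:]:
          successes, new_configs, trials = 0, [], 2000
          for _ in range(trials):
              x = rng.choice(configs)
              while lam_A < x < lam_next:
                  x = step(x)
              if x >= lam_next:
                  successes += 1; new_configs.append(x)
          rate *= successes / trials
          if not new_configs:
              break
          configs = new_configs
      print(f"estimated transition rate ~ {rate:.3e} per unit time")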

  7. Cat Mountain: A meteoritic sample of an impact-melted chondritic asteroid

    NASA Technical Reports Server (NTRS)

    Kring, David A.

    1993-01-01

    Although impact cratering and collisional disruption are the dominant geologic processes affecting asteroids, samples of impact melt breccias comprise less than 1 percent of ordinary chondritic material and none exist among enstatite and carbonaceous chondrite groups. Because the average collisional velocity among asteroids is sufficiently large to produce impact melts, this paucity of impact-melted material is generally believed to be a sampling bias, making it difficult to determine the evolutionary history of chondritic bodies and how impact processes may have affected the physical properties of asteroids (e.g., their structural integrity and reflectance spectra). To help address these and related issues, the first petrographic description of a new chondritic impact melt breccia sample, tentatively named Cat Mountain, is presented.

  8. Revisiting sample size: are big trials the answer?

    PubMed

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

    The superiority of the evidence generated in randomized controlled trials over observational data is conditional not only on randomization. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, the probability that the trial will detect a difference when a real difference between treatments exists, depends strongly on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by limited sample sizes. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.
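
    As a concrete reminder of how strongly sample size drives power, here is a generic normal-approximation calculation for comparing two proportions (not taken from the paper); the event rates, alpha, and power below are illustrative.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate sample size per arm for a two-sided comparison of two proportions."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical example: detecting a reduction in complication rate from 10% to 7%.
print(n_per_group(0.10, 0.07))   # roughly 1350 patients per arm
```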

  9. Interpolation Approach To Computer-Generated Holograms

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko

    1983-10-01

    A computer-generated hologram (CGH) for reconstructing independent NxN resolution points would actually require a hologram made up of NxN sampling cells. For dependent sampling points of Fourier transform CGHs, the required memory size for computation by using an interpolation technique for reconstructed image points can be reduced. We have made a mosaic hologram which consists of K x K subholograms with N x N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK x NK resolution points. The main advantage of the present algorithm is that a sufficiently large size hologram of NK x NK sample points is synthesized by K x K subholograms which are successively calculated from the data of N x N sample points and also successively plotted.

  10. Reliability and statistical power analysis of cortical and subcortical FreeSurfer metrics in a large sample of healthy elderly.

    PubMed

    Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz

    2015-03-01

    FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
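
    The abstract's reliability figures are intraclass correlation coefficients; as a rough illustration (not the FreeSurfer pipeline, and not necessarily the exact ICC variant used by the authors), the sketch below computes a one-way random-effects ICC(1,1) from simulated test-retest cortical-thickness values; the subject count, means, and noise levels are invented.

```python
import numpy as np

def icc_oneway(measurements):
    """ICC(1,1) from an (n_subjects, k_repeats) array, one-way random-effects ANOVA."""
    x = np.asarray(measurements, dtype=float)
    n, k = x.shape
    grand_mean = x.mean()
    subject_means = x.mean(axis=1)
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((x - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical test-retest data: 189 subjects, 2 sessions, noisy cortical thickness.
rng = np.random.default_rng(1)
true_thickness = rng.normal(2.5, 0.15, size=189)
sessions = true_thickness[:, None] + rng.normal(0, 0.03, size=(189, 2))
print(round(icc_oneway(sessions), 3))   # high ICC, reflecting small measurement noise
```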

  11. Digital Archiving of People Flow by Recycling Large-Scale Social Survey Data of Developing Cities

    NASA Astrophysics Data System (ADS)

    Sekimoto, Y.; Watanabe, A.; Nakamura, T.; Horanont, T.

    2012-07-01

    Data on people flow have become increasingly important in the field of business, including the areas of marketing and public services. Although mobile phones enable a person's position to be located to a certain degree, it is a challenge to acquire sufficient data from people with mobile phones. In order to grasp people flow in its entirety, it is important to establish a practical method of reconstructing people flow from various kinds of existing fragmentary spatio-temporal data, such as social survey data. For example, although typical Person Trip Survey data collected by the public sector capture only fragmentary spatio-temporal positions, the data are attractive because the sample size is sufficiently large to estimate the entire flow of people. In this study, we apply our proposed basic method to Japan International Cooperation Agency (JICA) PT data pertaining to developing cities around the world, and we propose some correction methods to resolve the difficulties in applying it stably to many cities and to infrastructure data.

  12. Pore water sampling in acid sulfate soils: a new peeper method.

    PubMed

    Johnston, Scott G; Burton, Edward D; Keene, Annabelle F; Bush, Richard T; Sullivan, Leigh A; Isaacson, Lloyd

    2009-01-01

    This study describes the design, deployment, and application of a modified equilibration dialysis device (peeper) optimized for sampling pore waters in acid sulfate soils (ASS). The modified design overcomes the limitations of traditional-style peepers when sampling firm ASS materials over relatively large depth intervals. The new peeper device uses removable, individual cells of 25 mL volume housed in a 1.5 m long rigid, high-density polyethylene rod. The rigid housing structure allows the device to be inserted directly into relatively firm soils without requiring a supporting frame. The use of removable cells eliminates the need for a large glove-box after peeper retrieval, thus simplifying physical handling. Removable cells are easily maintained in an inert atmosphere during sample processing and the 25-mL sample volume is sufficient for undertaking multiple analyses. A field evaluation of equilibration times indicates that 32 to 38 d of deployment was necessary. Overall, the modified method is simple and effective and well suited to acquisition and processing of redox-sensitive pore water profiles >1 m deep in acid sulfate soil or any other firm wetland soils.

  13. The use of Landsat for monitoring water parameters in the coastal zone

    NASA Technical Reports Server (NTRS)

    Bowker, D. E.; Witte, W. G.

    1977-01-01

    Landsats 1 and 2 have been successful in detecting and quantifying suspended sediment and several other important parameters in the coastal zone, including chlorophyll, particles, alpha (light transmission), tidal conditions, acid and sewage dumps, and in some instances oil spills. When chlorophyll a is present in detectable quantities, however, it is shown to interfere with the measurement of sediment. The Landsat banding problem impairs the instrument resolution and places a requirement on the sampling program to collect surface data from a sufficiently large area. A sampling method which satisfies this condition is demonstrated.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Jaejin; Woo, Jong-Hak; Mulchaey, John S.

    We perform a comprehensive study of X-ray cavities using a large sample of X-ray targets selected from the Chandra archive. The sample is selected to cover a large dynamic range including galaxy clusters, groups, and individual galaxies. Using β-modeling and unsharp masking techniques, we investigate the presence of X-ray cavities for 133 targets that have sufficient X-ray photons for analysis. We detect 148 X-ray cavities from 69 targets and measure their properties, including cavity size, angle, and distance from the center of the diffuse X-ray gas. We confirm the strong correlation between cavity size and distance from the X-ray center, similar to previous studies. We find that the detection rates of X-ray cavities are similar among galaxy clusters, groups and individual galaxies, suggesting that the formation mechanism of X-ray cavities is independent of environment.

  15. Human Finger-Prick Induced Pluripotent Stem Cells Facilitate the Development of Stem Cell Banking

    PubMed Central

    Tan, Hong-Kee; Toh, Cheng-Xu Delon; Ma, Dongrui; Yang, Binxia; Liu, Tong Ming; Lu, Jun; Wong, Chee-Wai; Tan, Tze-Kai; Li, Hu; Syn, Christopher; Tan, Eng-Lee; Lim, Bing; Lim, Yoon-Pin; Cook, Stuart A.

    2014-01-01

    Induced pluripotent stem cells (iPSCs) derived from somatic cells of patients can be a good model for studying human diseases and for future therapeutic regenerative medicine. Current initiatives to establish human iPSC (hiPSC) banking face challenges in recruiting large numbers of donors with diverse diseased, genetic, and phenotypic representations. In this study, we describe the efficient derivation of transgene-free hiPSCs from human finger-prick blood. Finger-prick sample collection can be performed on a “do-it-yourself” basis by donors and sent to the hiPSC facility for reprogramming. We show that single-drop volumes of finger-prick samples are sufficient for performing cellular reprogramming, DNA sequencing, and blood serotyping in parallel. Our novel strategy has the potential to facilitate the development of large-scale hiPSC banking worldwide. PMID:24646489

  16. Likelihood inference of non-constant diversification rates with incomplete taxon sampling.

    PubMed

    Höhna, Sebastian

    2014-01-01

    Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately, most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from those of alternative models (e.g. the birth-death model is recovered over a pure-birth model if the extinction rate is sufficiently large). Finally, I applied six different diversification rate models--ranging from a constant-rate pure birth process to a decreasing speciation rate birth-death process but excluding any rate shift models--on three large-scale empirical phylogenies (ants, mammals and snakes, with 149, 164 and 41 sampled species, respectively). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However, only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. The model assumptions, such as homogeneous rates across species or no rate shifts, appear to be violated.

  17. Bayesian geostatistics in health cartography: the perspective of malaria.

    PubMed

    Patil, Anand P; Gething, Peter W; Piel, Frédéric B; Hay, Simon I

    2011-06-01

    Maps of parasite prevalences and other aspects of infectious diseases that vary in space are widely used in parasitology. However, spatial parasitological datasets rarely, if ever, have sufficient coverage to allow exact determination of such maps. Bayesian geostatistics (BG) is a method for finding a large sample of maps that can explain a dataset, in which maps that do a better job of explaining the data are more likely to be represented. This sample represents the knowledge that the analyst has gained from the data about the unknown true map. BG provides a conceptually simple way to convert these samples to predictions of features of the unknown map, for example regional averages. These predictions account for each map in the sample, yielding an appropriate level of predictive precision.
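
    To make the last point concrete, here is a schematic (not an actual Bayesian geostatistical fit) of how a sample of candidate maps is turned into a prediction with uncertainty for a regional average; the grid, the synthetic "posterior" maps, and the region mask are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend we already have 1,000 posterior sample maps of prevalence on a 50x50 grid
# (in practice these would come from a fitted Bayesian geostatistical model).
n_samples, ny, nx = 1000, 50, 50
posterior_maps = rng.beta(2, 8, size=(n_samples, ny, nx))

# Mask selecting the administrative region of interest (hypothetical rectangle here).
region = np.zeros((ny, nx), dtype=bool)
region[10:30, 20:45] = True

# Each sampled map yields one plausible regional average; the spread of these values
# is the predictive uncertainty for that summary.
regional_means = posterior_maps[:, region].mean(axis=1)
lo, hi = np.percentile(regional_means, [2.5, 97.5])
print(f"regional prevalence: {regional_means.mean():.3f} (95% interval {lo:.3f}-{hi:.3f})")
```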

  18. Bayesian geostatistics in health cartography: the perspective of malaria

    PubMed Central

    Patil, Anand P.; Gething, Peter W.; Piel, Frédéric B.; Hay, Simon I.

    2011-01-01

    Maps of parasite prevalences and other aspects of infectious diseases that vary in space are widely used in parasitology. However, spatial parasitological datasets rarely, if ever, have sufficient coverage to allow exact determination of such maps. Bayesian geostatistics (BG) is a method for finding a large sample of maps that can explain a dataset, in which maps that do a better job of explaining the data are more likely to be represented. This sample represents the knowledge that the analyst has gained from the data about the unknown true map. BG provides a conceptually simple way to convert these samples to predictions of features of the unknown map, for example regional averages. These predictions account for each map in the sample, yielding an appropriate level of predictive precision. PMID:21420361

  19. A new tool called DISSECT for analysing large genomic data sets using a Big Data approach

    PubMed Central

    Canela-Xandri, Oriol; Law, Andy; Gray, Alan; Woolliams, John A.; Tenesa, Albert

    2015-01-01

    Large-scale genetic and genomic data are increasingly available and the major bottleneck in their analysis is a lack of sufficiently scalable computational tools. To address this problem in the context of complex traits analysis, we present DISSECT. DISSECT is a new and freely available software that is able to exploit the distributed-memory parallel computational architectures of compute clusters, to perform a wide range of genomic and epidemiologic analyses, which currently can only be carried out on reduced sample sizes or under restricted conditions. We demonstrate the usefulness of our new tool by addressing the challenge of predicting phenotypes from genotype data in human populations using mixed-linear model analysis. We analyse simulated traits from 470,000 individuals genotyped for 590,004 SNPs in ∼4 h using the combined computational power of 8,400 processor cores. We find that prediction accuracies in excess of 80% of the theoretical maximum could be achieved with large sample sizes. PMID:26657010

  20. Degradation of radiator performance on Mars due to dust

    NASA Technical Reports Server (NTRS)

    Gaier, James R.; Perez-Davis, Marla E.; Rutledge, Sharon K.; Forkapa, Mark

    1992-01-01

    An artificial mineral of the approximate elemental composition of Martian soil was manufactured, crushed, and sorted into four different size ranges. Dust particles from three of these size ranges were applied to arc-textured Nb-1 percent Zr and Cu radiator surfaces to assess their effect on radiator performance. Particles larger than 75 microns did not have sufficient adhesive forces to adhere to the samples at angles greater than about 27 deg. Pre-deposited dust layers were largely removed by clear wind velocities greater than 40 m/s, or by dust-laden wind velocities as low as 25 m/s. Smaller dust grains were more difficult to remove. Abrasion was found to be significant only in high velocity winds (89 m/s or greater). Dust-laden winds were found to be more abrasive than clear wind. Initially dusted samples abraded less than initially clear samples in dust-laden wind. Smaller dust particles of the simulant proved to be more abrasive than larger ones. This probably indicates that the larger particles were in fact agglomerates.

  1. Bridging the gap between sample collection and laboratory analysis: using dried blood spots to identify human exposure to chemical agents

    NASA Astrophysics Data System (ADS)

    Hamelin, Elizabeth I.; Blake, Thomas A.; Perez, Jonas W.; Crow, Brian S.; Shaner, Rebecca L.; Coleman, Rebecca M.; Johnson, Rudolph C.

    2016-05-01

    Public health response to large scale chemical emergencies presents logistical challenges for sample collection, transport, and analysis. Diagnostic methods used to identify and determine exposure to chemical warfare agents, toxins, and poisons traditionally involve blood collection by phlebotomists, cold transport of biomedical samples, and costly sample preparation techniques. Use of dried blood spots, which consist of dried blood on an FDA-approved substrate, can increase analyte stability, decrease infection hazard for those handling samples, greatly reduce the cost of shipping/storing samples by removing the need for refrigeration and cold chain transportation, and be self-prepared by potentially exposed individuals using a simple finger prick and blood spot compatible paper. Our laboratory has developed clinical assays to detect human exposures to nerve agents through the analysis of specific protein adducts and metabolites, for which a simple extraction from a dried blood spot is sufficient for removing matrix interferents and attaining sensitivities on par with traditional sampling methods. The use of dried blood spots can bridge the gap between the laboratory and the field allowing for large scale sample collection with minimal impact on hospital resources while maintaining sensitivity, specificity, traceability, and quality requirements for both clinical and forensic applications.

  2. Improved technique that allows the performance of large-scale SNP genotyping on DNA immobilized by FTA technology.

    PubMed

    He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe

    2007-01-01

    FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card is often, however, insufficient for testing the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique to perform large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card-bound DNA produces sufficient material for the determination of thousands of SNP genotypes.

  3. Automation practices in large molecule bioanalysis: recommendations from group L5 of the global bioanalytical consortium.

    PubMed

    Ahene, Ago; Calonder, Claudio; Davis, Scott; Kowalchick, Joseph; Nakamura, Takahiro; Nouri, Parya; Vostiar, Igor; Wang, Yang; Wang, Jin

    2014-01-01

    In recent years, the use of automated sample handling instrumentation has come to the forefront of bioanalytical analysis in order to ensure greater assay consistency and throughput. Since robotic systems are becoming part of everyday analytical procedures, the need for consistent guidance across the pharmaceutical industry has become increasingly important. Pre-existing regulations do not go into sufficient detail regarding how to handle the use of robotic systems with analytical methods, especially large molecule bioanalysis. As a result, Global Bioanalytical Consortium (GBC) Group L5 has put forth specific recommendations for the validation, qualification, and use of robotic systems as part of large molecule bioanalytical analyses in the present white paper. The guidelines presented can be followed to establish a consistent, transparent methodology that will ensure that robotic systems can be effectively used and documented in a regulated bioanalytical laboratory setting. This will allow for consistent use of robotic sample handling instrumentation as part of large molecule bioanalysis across the globe.

  4. Does size matter? Statistical limits of paleomagnetic field reconstruction from small rock specimens

    NASA Astrophysics Data System (ADS)

    Berndt, Thomas; Muxworthy, Adrian R.; Fabian, Karl

    2016-01-01

    As samples of ever-decreasing sizes are being studied paleomagnetically, care has to be taken that the underlying assumptions of statistical thermodynamics (Maxwell-Boltzmann statistics) are being met. Here we determine how many grains and how large a magnetic moment a sample needs to have to be able to accurately record an ambient field. It is found that for samples with a thermoremanent magnetic moment larger than 10^-11 A m^2 the assumption of a sufficiently large number of grains is usually satisfied. Standard 25 mm diameter paleomagnetic samples usually contain enough magnetic grains such that statistical errors are negligible, but "single silicate crystal" works on, for example, zircon, plagioclase, and olivine crystals are approaching the limits of what is physically possible, leading to statistical errors in both the angular deviation and paleointensity that are comparable to other sources of error. The reliability of nanopaleomagnetic imaging techniques capable of resolving individual grains (used, for example, to study the cloudy zone in meteorites), however, is questionable due to the limited area of the material covered.

  5. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    PubMed

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
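
    The diagonal-averaging step and the resulting signal-subspace projector can be sketched in a few lines of numpy; the maximum-entropy extrapolation stage described in the abstract is omitted, and the array size, source angles, snapshot count, and noise level below are invented for illustration.

```python
import numpy as np

def toeplitz_average(R):
    """Average the sample covariance along its diagonals to impose Toeplitz structure."""
    n = R.shape[0]
    first_row = np.array([np.mean(np.diagonal(R, offset=k)) for k in range(n)])
    T = np.empty_like(R)
    for i in range(n):
        for j in range(n):
            T[i, j] = first_row[abs(i - j)] if j >= i else np.conj(first_row[abs(i - j)])
    return T

def signal_projector(T, n_sources):
    """Projector onto the dominant (signal) eigenvector subspace."""
    eigvals, eigvecs = np.linalg.eigh(T)
    Us = eigvecs[:, np.argsort(eigvals)[::-1][:n_sources]]
    return Us @ Us.conj().T

# Hypothetical use: few snapshots from a 32-element array, two closely spaced sources.
rng = np.random.default_rng(3)
n_el, n_snap = 32, 10
angles = np.deg2rad([20.0, 24.0])
steering = np.exp(1j * np.pi * np.outer(np.arange(n_el), np.sin(angles)))
signals = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.5 * (rng.standard_normal((n_el, n_snap)) + 1j * rng.standard_normal((n_el, n_snap)))
snapshots = steering @ signals + noise
R_sample = snapshots @ snapshots.conj().T / n_snap
P_signal = signal_projector(toeplitz_average(R_sample), n_sources=2)
```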

  6. Method of quantitating dsDNA

    DOEpatents

    Stark, Peter C.; Kuske, Cheryl R.; Mullen, Kenneth I.

    2002-01-01

    A method for quantitating dsDNA in an aqueous sample solution containing an unknown amount of dsDNA. A first aqueous test solution containing a known amount of a fluorescent dye-dsDNA complex and at least one fluorescence-attenuating contaminant is prepared. The fluorescence intensity of the test solution is measured. The first test solution is diluted by a known amount to provide a second test solution having a known concentration of dsDNA. The fluorescence intensity of the second test solution is measured. Additional diluted test solutions are similarly prepared until a sufficiently dilute test solution having a known amount of dsDNA is prepared that has a fluorescence intensity that is not attenuated upon further dilution. The value of the maximum absorbance of this solution between 200-900 nanometers (nm), referred to herein as the threshold absorbance, is measured. A sample solution having an unknown amount of dsDNA and an absorbance identical to that of the sufficiently dilute test solution at the same chosen wavelength is prepared. Dye is then added to the sample solution to form the fluorescent dye-dsDNA complex, after which the fluorescence intensity of the sample solution is measured and the quantity of dsDNA in the sample solution is determined. Once the threshold absorbance of a sample solution obtained from a particular environment has been determined, any similarly prepared sample solution taken from a similar environment and having the same value for the threshold absorbance can be quantified for dsDNA by adding a large excess of dye to the sample solution and measuring its fluorescence intensity.

  7. No independent association between insufficient sleep and childhood obesity in the National Survey of Children's Health.

    PubMed

    Hassan, Fauziya; Davis, Matthew M; Chervin, Ronald D

    2011-04-15

    Prior studies have supported an association between insufficient sleep and childhood obesity, but most have not examined nationally representative samples or considered potential sociodemographic confounders. The main objective of this study was to use a large, nationally representative dataset to examine the possibility that insufficient sleep is associated with obesity in children, independent of sociodemographic factors. The National Survey of Children's Health is a national survey of U.S. households contacted by random digit dialing. In 2003, caregivers of 102,353 US children were surveyed. Age- and sex-specific body mass index (BMI) based on parental report of child height and weight, was available for 81,390 children aged 6-17 years. Caregivers were asked, "How many nights of sufficient sleep did your child have in the past week?" The odds of obesity (BMI ≥ 95th percentile) versus healthy weight (BMI 5th-84th percentile) was regressed on reported nights of sufficient sleep per week (categorized as 0-2, 3-5, or 6-7). Sociodemographic variables included gender, race, household education, and family income. Analyses incorporated sampling weights to derive nationally representative estimates for a 2003 population of 34 million youth. Unadjusted bivariate analyses indicated that children aged 6-11 years with 0-2 nights of sufficient sleep, in comparison to those with 6-7 nights, were more likely to be obese (OR = 1.7, 95% CI [1.2-2.3]). Among children aged 12-17 years, odds of obesity were lower among children with 3-5 nights of sufficient sleep in comparison to those with 6-7 nights (0.8, 95% CI: 0.7-0.9). However, in both age groups, adjustment for race/ethnicity, gender, family income, and household education left no remaining statistical significance for the association between sufficient nights of sleep and BMI. In this national sample, insufficient sleep, as judged by parents, is inconsistently associated with obesity in bivariate analyses, and not associated with obesity after adjustment for sociodemographic variables. These findings from a nationally representative sample are necessarily subject to parental perceptions, but nonetheless serve as an important reminder that the role of insufficient sleep in the childhood obesity epidemic remains unproven.

  8. Characterizations of linear sufficient statistics

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Reoner, R.; Decell, H. P., Jr.

    1977-01-01

    Conditions are given under which a surjective bounded linear operator T from a Banach space X to a Banach space Y is a sufficient statistic for a dominated family of probability measures defined on the Borel sets of X. These results were applied to characterize linear sufficient statistics for families of the exponential type, including as special cases the Wishart and multivariate normal distributions. The latter result was used to establish precisely which procedures for sampling from a normal population had the property that the sample mean was a sufficient statistic.

  9. PASSIM--an open source software system for managing information in biomedical studies.

    PubMed

    Viksna, Juris; Celms, Edgars; Opmanis, Martins; Podnieks, Karlis; Rucevskis, Peteris; Zarins, Andris; Barrett, Amy; Neogi, Sudeshna Guha; Krestyaninova, Maria; McCarthy, Mark I; Brazma, Alvis; Sarkans, Ugis

    2007-02-09

    One of the crucial aspects of day-to-day laboratory information management is collection, storage and retrieval of information about research subjects and biomedical samples. An efficient link between sample data and experiment results is absolutely imperative for a successful outcome of a biomedical study. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such LIMS indeed can bring laboratory information management to a higher level, but often implies sufficient investment of time, effort and funds, which are not always available. There is a clear need for lightweight open source systems for patient and sample information management. We present a web-based tool for submission, management and retrieval of sample and research subject data. The system secures confidentiality by separating anonymized sample information from individuals' records. It is simple and generic, and can be customised for various biomedical studies. Information can be both entered and accessed using the same web interface. User groups and their privileges can be defined. The system is open-source and is supplied with an on-line tutorial and necessary documentation. It has proven to be successful in a large international collaborative project. The presented system closes the gap between the need and the availability of lightweight software solutions for managing information in biomedical studies involving human research subjects.

  10. Evaluating noninvasive genetic sampling techniques to estimate large carnivore abundance.

    PubMed

    Mumma, Matthew A; Zieminski, Chris; Fuller, Todd K; Mahoney, Shane P; Waits, Lisette P

    2015-09-01

    Monitoring large carnivores is difficult because of intrinsically low densities and can be dangerous if physical capture is required. Noninvasive genetic sampling (NGS) is a safe and cost-effective alternative to physical capture. We evaluated the utility of two NGS methods (scat detection dogs and hair sampling) to obtain genetic samples for abundance estimation of coyotes, black bears and Canada lynx in three areas of Newfoundland, Canada. We calculated abundance estimates using program capwire, compared sampling costs, and the cost/sample for each method relative to species and study site, and performed simulations to determine the sampling intensity necessary to achieve abundance estimates with coefficients of variation (CV) of <10%. Scat sampling was effective for both coyotes and bears and hair snags effectively sampled bears in two of three study sites. Rub pads were ineffective in sampling coyotes and lynx. The precision of abundance estimates was dependent upon the number of captures/individual. Our simulations suggested that ~3.4 captures/individual will result in a < 10% CV for abundance estimates when populations are small (23-39), but fewer captures/individual may be sufficient for larger populations. We found scat sampling was more cost-effective for sampling multiple species, but suggest that hair sampling may be less expensive at study sites with limited road access for bears. Given the dependence of sampling scheme on species and study site, the optimal sampling scheme is likely to be study-specific warranting pilot studies in most circumstances. © 2015 John Wiley & Sons Ltd.

  11. Women's health: periodontitis and its relation to hormonal changes, adverse pregnancy outcomes and osteoporosis.

    PubMed

    Krejci, Charlene B; Bissada, Nabil F

    2012-01-01

    To examine the literature with respect to periodontitis and issues specific to women's health, namely, hormonal changes, adverse pregnancy outcomes and osteoporosis. The literature was evaluated to review reported associations between periodontitis and gender-specific issues, namely, hormonal changes, adverse pregnancy outcomes and osteoporosis. Collectively, the literature provided a large body of evidence that supports various associations between periodontitis and hormonal changes, adverse pregnancy outcomes and osteoporosis; however, certain shortcomings were noted with respect to biases involving definitions, sample sizes and confounding variables. Specific cause-and-effect relationships could not be delineated at this time and neither could definitive treatment interventions. Future research must include randomised controlled trials with consistent definitions, adequate controls and sufficiently large sample sizes in order to clarify specific associations, identify cause-and-effect relationships, define treatment options and determine treatment interventions which will lessen the untoward effects on the at-risk populations.

  12. Statistical inference involving binomial and negative binomial parameters.

    PubMed

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2009-05-01

    Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
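
    The specific binomial/negative-binomial test statistics derived in the paper are not reproduced here; the sketch below only illustrates the kind of Monte Carlo check described, verifying the empirical Type-I error rate of an ordinary pooled two-proportion z-test when both parameters are truly equal (the sample sizes and the true proportion are arbitrary).

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(4)

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for equality of two binomial proportions (pooled variance)."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Monte Carlo estimate of the empirical Type-I error when both parameters are equal.
p_true, n1, n2, reps = 0.3, 40, 40, 20_000
x1 = rng.binomial(n1, p_true, reps)
x2 = rng.binomial(n2, p_true, reps)
pvals = np.array([two_proportion_z_test(a, n1, b, n2) for a, b in zip(x1, x2)])
print("empirical Type-I error at alpha = 0.05:", np.mean(pvals < 0.05))
```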

  13. Electrofishing effort required to estimate biotic condition in southern Idaho Rivers

    USGS Publications Warehouse

    Maret, Terry R.; Ott, Douglas S.; Herlihy, Alan T.

    2007-01-01

    An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in southern Idaho to evaluate the effects of sampling effort on an index of biotic integrity (IBI). Boat electrofishing was used to collect sample populations of fish in river reaches representing 40 and 100 times the mean channel width (MCW; wetted channel) at base flow. Minimum sampling effort was assessed by comparing the relation between reach length sampled and change in IBI score. Thirty-two species of fish in the families Catostomidae, Centrarchidae, Cottidae, Cyprinidae, Ictaluridae, Percidae, and Salmonidae were collected. Of these, 12 alien species were collected at 80% (12 of 15) of the sample sites; alien species represented about 38% of all species (N = 32) collected during the study. A total of 60% (9 of 15) of the sample sites had poor IBI scores. A minimum reach length of about 36 times MCW was determined to be sufficient for collecting an adequate number of fish for estimating biotic condition based on an IBI score. For most sites, this equates to collecting 275 fish at a site. Results may be applicable to other semiarid, fifth-order through seventh-order rivers sampled during summer low-flow conditions.

  14. Variability in the vacuum-ultraviolet transmittance of magnesium fluoride windows. [for Space Telescope Imaging Spectrograph

    NASA Technical Reports Server (NTRS)

    Herzig, Howard; Fleetwood, Charles M., Jr.; Toft, Albert R.

    1992-01-01

    Sample window materials tested during the development of a domed magnesium fluoride detector window for the Hubble Space Telescope's Imaging Spectrograph are noted to exhibit wide variability in VUV transmittance; a test program was accordingly instituted to maximize a prototype domed window's transmittance. It is found that VUV transmittance can be maximized if the boule from which the window is fashioned is sufficiently large to allow such a component to be cut from the purest available portion of the boule.

  15. Soil moisture estimation using reflected solar and emitted thermal infrared radiation

    NASA Technical Reports Server (NTRS)

    Jackson, R. D.; Cihlar, J.; Estes, J. E.; Heilman, J. L.; Kahle, A.; Kanemasu, E. T.; Millard, J.; Price, J. C.; Wiegand, C. L.

    1978-01-01

    Classical methods of measuring soil moisture such as gravimetric sampling and the use of neutron moisture probes are useful for cases where a point measurement is sufficient to approximate the water content of a small surrounding area. However, there is an increasing need for rapid and repetitive estimations of soil moisture over large areas. Remote sensing techniques potentially have the capability of meeting this need. The use of reflected-solar and emitted thermal-infrared radiation, measured remotely, to estimate soil moisture is examined.

  16. Peak-Flux-Density Spectra of Large Solar Radio Bursts and Proton Emission from Flares.

    DTIC Science & Technology

    1985-08-19

    of the microwave peak (≥ 1000 sfu in U-bursts) served as an indicator that the energy release during the impulsive phase was sufficient to produce a... energy or wavelength tends to be prominent in all, and cautions about over-interpreting associations/correlations observed in samples of big flares... Sung, L. S., and McDonald, F. B. (1975) The variation of solar proton energy spectra and size distribution with heliolongitude, Sol. Phys. 41: 189.

  17. PCR-based detection of Toxoplasma gondii DNA in blood and ocular samples for diagnosis of ocular toxoplasmosis.

    PubMed

    Bourdin, C; Busse, A; Kouamou, E; Touafek, F; Bodaghi, B; Le Hoang, P; Mazier, D; Paris, L; Fekkar, A

    2014-11-01

    PCR detection of Toxoplasma gondii in blood has been suggested as a possibly efficient method for the diagnosis of ocular toxoplasmosis (OT) and furthermore for genotyping the strain involved in the disease. To assess this hypothesis, we performed PCR with 121 peripheral blood samples from 104 patients showing clinical and/or biological evidence of ocular toxoplasmosis and with 284 control samples (from 258 patients). We tested 2 different extraction protocols, using either 200 μl (small volume) or 2 ml (large volume) of whole blood. Sensitivity was poor, i.e., 4.1% and 25% for the small- and large-volume extractions, respectively. In comparison, PCR with ocular samples yielded 35.9% sensitivity, while immunoblotting and calculation of the Goldmann-Witmer coefficient yielded 47.6% and 72.3% sensitivities, respectively. Performing these three methods together provided 89.4% sensitivity. Whatever the origin of the sample (ocular or blood), PCR provided higher sensitivity for immunocompromised patients than for their immunocompetent counterparts. Consequently, PCR detection of Toxoplasma gondii in blood samples cannot currently be considered a sufficient tool for the diagnosis of OT, and ocular sampling remains necessary for the biological diagnosis of OT. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  18. Clinical evaluation of a miniaturized desktop breath hydrogen analyzer.

    PubMed

    Duan, L P; Braden, B; Clement, T; Caspary, W F; Lembcke, B

    1994-10-01

    A small desktop electrochemical H2 analyzer (EC-60-Hydrogen monitor) was compared with a stationary electrochemical H2 monitor (GMI-exhaled Hydrogen monitor). The EC-60-H2 monitor shows a high degree of precision for repetitive (n = 10) measurements of standard hydrogen mixtures (CV 1-8%). The response time for completion of measurement is shorter than that of the GMI-exhaled H2 monitor (37 sec. vs 53 sec.; p < 0.0001), while reset times are almost identical (54 sec. vs 51 sec., n.s.). In a clinical setting, breath H2-concentrations measured with the EC-60-H2 monitor and the GMI-exhaled H2 monitor were in excellent agreement with a linear correlation (Y = 1.12X + 1.022, r^2 = 0.9617, n = 115). With increasing H2-concentrations the EC-60-H2 monitor required larger sample volumes for maintaining sufficient precision, and sample volumes greater than 200 ml were required with H2-concentrations > 30 ppm. For routine gastrointestinal function testing, the EC-60-H2 monitor is a satisfactory, reliable, easy-to-use and inexpensive desktop breath hydrogen analyzer, whereas in patients who have difficulty cooperating (children, people with severe pulmonary insufficiency), special care has to be taken to obtain sufficiently large breath samples.

  19. The global topography of Bennu: altimetry, photoclinometry, and processing

    NASA Astrophysics Data System (ADS)

    Perry, M. E.; Barnouin, O. S.; Daly, M. G.; Seabrook, J.; Palmer, E. E.; Gaskell, R. W.; Craft, K. L.; Roberts, J. H.; Philpott, L.; Asad, M. Al; Johnson, C. L.; Nair, A. H.; Espiritu, R. C.; Nolan, M. C.; Lauretta, D. S.

    2017-09-01

    The Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer (OSIRIS-REx) mission will spend two years observing (101955) Bennu and will then return pristine samples of carbonaceous material from the asteroid [1]. Launched in September 2016, OSIRIS-REx arrives at Bennu in August 2018, acquires a sample in July 2020, and returns the sample to Earth in September 2023. The instruments onboard OSIRIS-REx will measure the physical and chemical properties of this B-class asteroid, a subclass within the larger group of C-complex asteroids that might be organic-rich. At approximately 500 m in average diameter [2], Bennu is sufficiently large to retain substantial regolith, and as an Apollo asteroid with a low inclination (6°), it is one of the most accessible primitive near-Earth asteroids.

  20. Acoustic Enrichment of Extracellular Vesicles from Biological Fluids.

    PubMed

    Ku, Anson; Lim, Hooi Ching; Evander, Mikael; Lilja, Hans; Laurell, Thomas; Scheding, Stefan; Ceder, Yvonne

    2018-06-11

    Extracellular vesicles (EVs) have emerged as a rich source of biomarkers providing diagnostic and prognostic information in diseases such as cancer. Large-scale investigations into the contents of EVs in clinical cohorts are warranted, but a major obstacle is the lack of a rapid, reproducible, efficient, and low-cost methodology to enrich EVs. Here, we demonstrate the applicability of an automated acoustic-based technique to enrich EVs, termed acoustic trapping. Using this technology, we have successfully enriched EVs from cell culture conditioned media and urine and blood plasma from healthy volunteers. The acoustically trapped samples contained EVs ranging from exosomes to microvesicles in size and contained detectable levels of intravesicular microRNAs. Importantly, this method showed high reproducibility and yielded sufficient quantities of vesicles for downstream analysis. The enrichment could be obtained from a sample volume of 300 μL or less, equivalent to 30 min of enrichment time, depending on the sensitivity of downstream analysis. Taken together, acoustic trapping provides a rapid, automated, low-volume compatible, and robust method to enrich EVs from biofluids. Thus, it may serve as a novel tool for EV enrichment from large numbers of samples in a clinical setting with minimal sample preparation.

  1. SaDA: From Sampling to Data Analysis—An Extensible Open Source Infrastructure for Rapid, Robust and Automated Management and Analysis of Modern Ecological High-Throughput Microarray Data

    PubMed Central

    Singh, Kumar Saurabh; Thual, Dominique; Spurio, Roberto; Cannata, Nicola

    2015-01-01

    One of the most crucial characteristics of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and environmental or biomedical samples. An efficient link between sample data and experimental results is absolutely important for the successful outcome of a collaborative project. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such LIMS indeed can bring laboratory information management to a higher level, but most of the time this requires a sufficient investment of money, time and technical effort. There is a clear need for a lightweight open source system which can easily be managed on local servers and handled by individual researchers. Here we present a software package named SaDA for storing, retrieving and analyzing data originating from microorganism monitoring experiments. SaDA is fully integrated in the management of environmental samples, oligonucleotide sequences, microarray data and the subsequent downstream analysis procedures. It is simple and generic software, and can be extended and customized for various environmental and biomedical studies. PMID:26047146

  2. Rupture of a highly stretchable acrylic dielectric elastomer

    NASA Astrophysics Data System (ADS)

    Pharr, George; Sun, Jeong-Yun; Suo, Zhigang

    2012-02-01

    Dielectric elastomers have found widespread application as energy harvesters, actuators, and sensors. In practice these elastomers are subject to large tensile stretches, which potentially can lead to mechanical fracture. In this study, we have examined fracture properties of the commercial acrylic elastomer VHB 4905. We have found that inserting a pre-cut into the material drastically reduces the stretch at rupture from λrup = 9.43±1.05 for pristine samples down to only λrup = 3.63±0.45 for the samples with a pre-cut. Furthermore, using "pure-shear" test specimens with a pre-crack, we have measured the fracture energy and stretch at rupture as a function of the sample geometry. The stretch at rupture was found to decrease with sample height, which agrees with an analytical prediction. Additionally, we have measured the fracture energy as a function of stretch-rate. The apparent fracture energy was found to increase with stretch-rate from γ ≈ 1500 J/m^2 to γ ≈ 5000 J/m^2 for the investigated rates of deformation. This phenomenon is due to viscoelastic properties of VHB 4905, which result in an apparent stiffening for sufficiently large stretch-rates.

  3. SaDA: From Sampling to Data Analysis-An Extensible Open Source Infrastructure for Rapid, Robust and Automated Management and Analysis of Modern Ecological High-Throughput Microarray Data.

    PubMed

    Singh, Kumar Saurabh; Thual, Dominique; Spurio, Roberto; Cannata, Nicola

    2015-06-03

    One of the most crucial characteristics of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and environmental or biomedical samples. An efficient link between sample data and experimental results is absolutely important for the successful outcome of a collaborative project. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such LIMS indeed can bring laboratory information management to a higher level, but most of the time this requires a sufficient investment of money, time and technical effort. There is a clear need for a lightweight open source system which can easily be managed on local servers and handled by individual researchers. Here we present a software package named SaDA for storing, retrieving and analyzing data originating from microorganism monitoring experiments. SaDA is fully integrated in the management of environmental samples, oligonucleotide sequences, microarray data and the subsequent downstream analysis procedures. It is simple and generic software, and can be extended and customized for various environmental and biomedical studies.

  4. A posteriori noise estimation in variable data sets. With applications to spectra and light curves

    NASA Astrophysics Data System (ADS)

    Czesla, S.; Molle, T.; Schmitt, J. H. M. M.

    2018-01-01

    Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise is accurately known prior to the measurement, so that both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence, requiring a sufficiently well-sampled data set to yield reliable results. This procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate the applicability of our procedure, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines for applying the procedure in situations not explicitly considered here to promote its adoption in data analysis.
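
    For parameter settings that reduce the procedure to the classical DER_SNR estimator, a compact implementation looks roughly like the following; the generalized weighted-sum formulation of the paper is not reproduced, and the test spectrum is synthetic.

```python
import numpy as np

def der_snr_noise(flux):
    """DER_SNR-style estimate of the per-point noise standard deviation.

    Assumes the signal varies smoothly over a few samples and the noise is
    independent and approximately Gaussian; 1.482602/sqrt(6) converts the median
    absolute second difference into a Gaussian standard deviation.
    """
    f = np.asarray(flux, dtype=float)
    second_diff = np.abs(2.0 * f[2:-2] - f[:-4] - f[4:])
    return 1.482602 / np.sqrt(6.0) * np.median(second_diff)

# Hypothetical well-sampled spectrum: smooth continuum plus Gaussian noise of known sigma.
rng = np.random.default_rng(5)
x = np.linspace(0, 1, 5000)
flux = 1.0 + 0.1 * np.sin(8 * np.pi * x) + rng.normal(0, 0.02, x.size)
print(der_snr_noise(flux))   # should recover roughly 0.02
```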

  5. Patterns of Spatial Variation of Assemblages Associated with Intertidal Rocky Shores: A Global Perspective

    PubMed Central

    Cruz-Motta, Juan José; Miloslavich, Patricia; Palomo, Gabriela; Iken, Katrin; Konar, Brenda; Pohle, Gerhard; Trott, Tom; Benedetti-Cecchi, Lisandro; Herrera, César; Hernández, Alejandra; Sardi, Adriana; Bueno, Andrea; Castillo, Julio; Klein, Eduardo; Guerra-Castro, Edlin; Gobin, Judith; Gómez, Diana Isabel; Riosmena-Rodríguez, Rafael; Mead, Angela; Bigatti, Gregorio; Knowlton, Ann; Shirayama, Yoshihisa

    2010-01-01

    Assemblages associated with intertidal rocky shores were examined for large scale distribution patterns with specific emphasis on identifying latitudinal trends of species richness and taxonomic distinctiveness. Seventy-two sites distributed around the globe were evaluated following the standardized sampling protocol of the Census of Marine Life NaGISA project (www.nagisa.coml.org). There were no clear patterns of standardized estimators of species richness along latitudinal gradients or among Large Marine Ecosystems (LMEs); however, a strong latitudinal gradient in taxonomic composition (i.e., proportion of different taxonomic groups in a given sample) was observed. Environmental variables related to natural influences were strongly related to the distribution patterns of the assemblages on the LME scale, particularly photoperiod, sea surface temperature (SST) and rainfall. In contrast, no environmental variables directly associated with human influences (with the exception of the inorganic pollution index) were related to assemblage patterns among LMEs. Correlations of the natural assemblages with either latitudinal gradients or environmental variables were equally strong suggesting that neither neutral models nor models based solely on environmental variables sufficiently explain spatial variation of these assemblages at a global scale. Despite the data shortcomings in this study (e.g., unbalanced sample distribution), we show the importance of generating biological global databases for the use in large-scale diversity comparisons of rocky intertidal assemblages to stimulate continued sampling and analyses. PMID:21179546

  6. Reflections on experimental research in medical education.

    PubMed

    Cook, David A; Beckman, Thomas J

    2010-08-01

    As medical education research advances, it is important that education researchers employ rigorous methods for conducting and reporting their investigations. In this article we discuss several important yet oft neglected issues in designing experimental research in education. First, randomization controls for only a subset of possible confounders. Second, the posttest-only design is inherently stronger than the pretest-posttest design, provided the study is randomized and the sample is sufficiently large. Third, demonstrating the superiority of an educational intervention in comparison to no intervention does little to advance the art and science of education. Fourth, comparisons involving multifactorial interventions are hopelessly confounded, have limited application to new settings, and do little to advance our understanding of education. Fifth, single-group pretest-posttest studies are susceptible to numerous validity threats. Finally, educational interventions (including the comparison group) must be described in detail sufficient to allow replication.

  7. A High-Precision Counter Using the DSP Technique

    DTIC Science & Technology

    2004-09-01

    DSP is not good enough to process all the 1-second samples. The cache memory is also not sufficient to store all the sampling data. So we cut the... sampling number in a cycle is not good enough to achieve an accuracy of less than 2×10^-11. For this reason, a correlation operation is performed for... We will solve this

  8. Rainfall, Streamflow, and Water-Quality Data During Stormwater Monitoring, Halawa Stream Drainage Basin, Oahu, Hawaii, July 1, 2000 to June 30, 2001

    USGS Publications Warehouse

    Presley, Todd K.

    2001-01-01

    The State of Hawaii Department of Transportation Stormwater Monitoring Program was implemented on January 1, 2001. The program includes the collection of rainfall, streamflow, and water-quality data at selected sites in the Halawa Stream drainage basin. Rainfall and streamflow data were collected from July 1, 2000 to June 30, 2001. Few storms during the year met criteria for antecedent dry conditions or provided enough runoff to sample. The storm of June 5, 2001 was sufficiently large to cause runoff. On June 5, 2001, grab samples were collected at five sites along North Halawa and Halawa Streams. The five samples were later analyzed for nutrients, trace metals, oil and grease, total petroleum hydrocarbons, fecal coliform, biological and chemical oxygen demands, total suspended solids, and total dissolved solids.

  9. Distributed MPC based consensus for single-integrator multi-agent systems.

    PubMed

    Cheng, Zhaomeng; Fan, Ming-Can; Zhang, Hai-Tao

    2015-09-01

    This paper addresses model predictive control schemes for consensus in multi-agent systems (MASs) with discrete-time single-integrator dynamics under switching directed interaction graphs. The control horizon is extended to be greater than one, which endows the closed-loop system with an extra degree of freedom. We derive sufficient conditions on the sampling period and the interaction graph to achieve consensus by using the property of infinite products of stochastic matrices. Consensus can be achieved asymptotically if the sampling period is selected such that the interaction graph among agents jointly has a directed spanning tree. Significantly, if the interaction graph always has a spanning tree, one can select an arbitrarily large sampling period to guarantee consensus. Finally, several simulations are conducted to illustrate the effectiveness of the theoretical results. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
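
    The MPC formulation itself is not reproduced here; the sketch below only shows the underlying discrete-time single-integrator consensus update and how the sampling period T enters it, on a hypothetical four-agent directed cycle with a fixed graph (the paper treats switching graphs and a control horizon greater than one).

```python
import numpy as np

# Directed interaction graph on 4 agents (adjacency[i, j] = 1 means agent i listens to agent j).
adjacency = np.array([[0, 1, 0, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1],
                      [1, 0, 0, 0]], dtype=float)
degree = np.diag(adjacency.sum(axis=1))
laplacian = degree - adjacency

T = 0.4                               # sampling period; kept small so I - T*L stays row-stochastic
x = np.array([1.0, -2.0, 4.0, 0.5])   # initial agent states

for _ in range(200):
    # Discrete-time single-integrator consensus protocol: x[k+1] = (I - T*L) x[k].
    x = x - T * (laplacian @ x)

print(x)   # all entries converge to a common consensus value
```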

  10. Injection current minimization of InAs/InGaAs quantum dot laser by optimization of its active region and reflectivity of laser cavity edges

    NASA Astrophysics Data System (ADS)

    Korenev, V. V.; Savelyev, A. V.; Zhukov, A. E.; Maximov, M. V.

    2015-11-01

    Ways to optimize key parameters of the active region and the edge reflectivity of an edge-emitting semiconductor quantum dot laser are provided. It is shown that, in the case of an optimal cavity length and sufficiently large dispersion, a lasing spectrum of a given width can be obtained at an injection current up to an order of magnitude lower in comparison to a non-optimized sample. The influence of internal loss and edge reflection is also studied in detail.

  11. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography

    PubMed Central

    Jørgensen, J. S.; Sidky, E. Y.

    2015-01-01

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization. PMID:25939620
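
    As an illustration of how a single cell of such a phase diagram can be computed empirically, the sketch below draws a k-sparse signal, takes Gaussian measurements, and attempts recovery by basis pursuit posed as a linear program; a full phase diagram repeats this over a grid of undersampling/sparsity ratios and many random trials. The problem sizes and the scipy-based solver are illustrative choices, not the authors' setup.

      # One phase-diagram cell for Gaussian compressed sensing: draw a k-sparse
      # signal, take m random measurements, recover by basis pursuit
      # (min ||x||_1 s.t. Ax = b), and record success/failure.
      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(0)
      n, m, k = 100, 50, 10                         # signal length, measurements, sparsity

      x_true = np.zeros(n)
      support = rng.choice(n, size=k, replace=False)
      x_true[support] = rng.standard_normal(k)

      A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
      b = A @ x_true

      # Basis pursuit as a linear program: x = u - v with u, v >= 0,
      # minimize sum(u) + sum(v) subject to A(u - v) = b.
      c = np.ones(2 * n)
      A_eq = np.hstack([A, -A])
      res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
      x_hat = res.x[:n] - res.x[n:]

      err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
      print(f"relative recovery error: {err:.2e}",
            "-> success" if err < 1e-4 else "-> failure")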

  13. Evaluating multi-level models to test occupancy state responses of Plethodontid salamanders

    USGS Publications Warehouse

    Kroll, Andrew J.; Garcia, Tiffany S.; Jones, Jay E.; Dugger, Catherine; Murden, Blake; Johnson, Josh; Peerman, Summer; Brintz, Ben; Rochelle, Michael

    2015-01-01

    Plethodontid salamanders are diverse and widely distributed taxa and play critical roles in ecosystem processes. Due to salamander use of structurally complex habitats, and because only a portion of a population is available for sampling, evaluation of sampling designs and estimators is critical to provide strong inference about Plethodontid ecology and responses to conservation and management activities. We conducted a simulation study to evaluate the effectiveness of multi-scale and hierarchical single-scale occupancy models in the context of a Before-After Control-Impact (BACI) experimental design with multiple levels of sampling. Also, we fit the hierarchical single-scale model to empirical data collected for Oregon slender and Ensatina salamanders across two years on 66 forest stands in the Cascade Range, Oregon, USA. All models were fit within a Bayesian framework. Estimator precision in both models improved with increasing numbers of primary and secondary sampling units, underscoring the potential gains accrued when adding secondary sampling units. Both models showed evidence of estimator bias at low detection probabilities and low sample sizes; this problem was particularly acute for the multi-scale model. Our results suggested that sufficient sample sizes at both the primary and secondary sampling levels could ameliorate this issue. Empirical data indicated Oregon slender salamander occupancy was associated strongly with the amount of coarse woody debris (posterior mean = 0.74; SD = 0.24); Ensatina occupancy was not associated with amount of coarse woody debris (posterior mean = -0.01; SD = 0.29). Our simulation results indicate that either model is suitable for use in an experimental study of Plethodontid salamanders provided that sample sizes are sufficiently large. However, hierarchical single-scale and multi-scale models describe different processes and estimate different parameters. As a result, we recommend careful consideration of study questions and objectives prior to sampling data and fitting models.
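
    As a simplified illustration of the kind of model being evaluated, the sketch below simulates a basic single-season, single-scale occupancy data set and fits occupancy and detection probabilities by maximum likelihood; it lets one explore how low detection probability and small sample sizes affect the estimates. It is not the authors' Bayesian multi-scale or hierarchical single-scale model, and all parameter values are assumed.

      # Single-season occupancy sketch (maximum likelihood, assumed parameters):
      # simulate occupancy/detection data, then estimate psi (occupancy) and
      # p (detection probability).
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      rng = np.random.default_rng(1)
      n_sites, n_visits = 60, 4          # primary / secondary sampling units
      psi_true, p_true = 0.6, 0.25       # illustrative occupancy and detection

      z = rng.binomial(1, psi_true, n_sites)     # latent occupancy state per site
      y = rng.binomial(n_visits, z * p_true)     # detections per site

      def neg_log_lik(theta):
          psi, p = expit(theta)                  # map parameters into (0, 1)
          # binomial coefficients are constant in the parameters and omitted
          lik_pos = psi * p**y * (1 - p)**(n_visits - y)
          lik_zero = psi * (1 - p)**n_visits + (1 - psi)
          lik = np.where(y > 0, lik_pos, lik_zero)
          return -np.sum(np.log(lik))

      fit = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
      psi_hat, p_hat = expit(fit.x)
      print(f"true psi={psi_true}, p={p_true}  ->  estimates psi={psi_hat:.2f}, p={p_hat:.2f}")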

  14. Sampled-data chain-observer design for a class of delayed nonlinear systems

    NASA Astrophysics Data System (ADS)

    Kahelras, M.; Ahmed-Ali, T.; Giri, F.; Lamnabhi-Lagarrigue, F.

    2018-05-01

    The problem of observer design is addressed for a class of triangular nonlinear systems with not-necessarily small delay and sampled output measurements. A further difficulty is that the system state matrix depends on the un-delayed output signal, which is not accessible to measurement, making existing observers inapplicable. A new chain observer, composed of m elementary observers in series, is designed to compensate for output sampling and arbitrarily large delays. The larger the time delay, the larger the number m. Each elementary observer includes an output predictor that is conceived to compensate for the effects of output sampling and a fractional delay. The predictors are defined by first-order ordinary differential equations (ODEs), much simpler than those of existing predictors, which involve both output and state predictors. Using a small-gain type analysis, sufficient conditions for the observer to be exponentially convergent are established in terms of the minimal number m of elementary observers and the maximum sampling interval.

  15. Rapid assessment of target species: Byssate bivalves in a large tropical port.

    PubMed

    Minchin, Dan; Olenin, Sergej; Liu, Ta-Kang; Cheng, Muhan; Huang, Sheng-Chih

    2016-11-15

    Rapid assessment sampling for target species is a fast, cost-effective method aimed at determining the presence, abundance and distribution of alien and native harmful aquatic organisms and pathogens that may have been introduced by shipping. In this study, the method was applied within a large tropical port expected to have a high species diversity. The port of Kaohsiung was sampled for bivalve molluscan species that attach using a byssus. Such species, due to their biological traits, are spread by ships to ports worldwide. We estimated the abundance and distribution range of one dreissenid (Mytilopsis sallei) and four mytilids (Brachidontes variabilis, Arcuatula senhousa, Mytilus galloprovincialis, Perna viridis) known to be successful invaders and identified as potential pests, or high-risk harmful native or non-native species. We conclude that a rapid assessment of their abundance and distribution within a port, and its vicinity, is efficient and can provide sufficient information for decision making by port managers where IMO port exemptions may be sought. Copyright © 2016. Published by Elsevier Ltd.

  16. Minimum Sobolev norm interpolation of scattered derivative data

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.

    2018-07-01

    We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high-order.
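
    A much simpler one-dimensional analogue of the fitting problem can be sketched as follows: a polynomial of sufficiently large degree is fit by least squares to scattered values and first-derivative data, with numpy returning the minimum-norm coefficient vector when the system is underdetermined. This only illustrates solvability with derivative data; it is not the paper's minimum Sobolev norm construction on a manifold, and the target function, point sets, and degree are assumed.

      # 1-D illustration: fit a polynomial of "sufficiently large" degree to
      # scattered value and derivative data; lstsq gives the minimum-norm
      # coefficient vector when the system is underdetermined.
      import numpy as np

      rng = np.random.default_rng(2)
      f, df = np.sin, np.cos             # target function and its derivative

      x_val = rng.uniform(-1, 1, 8)      # points where values are given
      x_der = rng.uniform(-1, 1, 5)      # points where derivatives are given
      deg = 14                           # polynomial degree

      powers = np.arange(deg + 1)
      V_val = x_val[:, None] ** powers               # rows: [1, x, x^2, ...]
      V_der = np.zeros((x_der.size, deg + 1))        # rows: [0, 1, 2x, 3x^2, ...]
      for k in range(1, deg + 1):
          V_der[:, k] = k * x_der ** (k - 1)

      A = np.vstack([V_val, V_der])
      b = np.concatenate([f(x_val), df(x_der)])
      coef, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimum-norm solution

      x_test = np.linspace(-1, 1, 200)
      p_test = (x_test[:, None] ** powers) @ coef
      print("max |p - f| on [-1, 1]:", float(np.abs(p_test - f(x_test)).max()))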

  17. Diagnosing prosopagnosia in East Asian individuals: Norms for the Cambridge Face Memory Test-Chinese.

    PubMed

    McKone, Elinor; Wan, Lulu; Robbins, Rachel; Crookes, Kate; Liu, Jia

    2017-07-01

    The Cambridge Face Memory Test (CFMT) is widely accepted as providing a valid and reliable tool in diagnosing prosopagnosia (inability to recognize people's faces). Previously, large-sample norms have been available only for Caucasian-face versions, suitable for diagnosis in Caucasian observers. These are invalid for observers of different races due to potentially severe other-race effects. Here, we provide large-sample norms (N = 306) for East Asian observers on an Asian-face version (CFMT-Chinese). We also demonstrate methodological suitability of the CFMT-Chinese for prosopagnosia diagnosis (high internal reliability, approximately normal distribution, norm-score range sufficiently far above chance). Additional findings were a female advantage on mean performance, plus a difference between participants living in the East (China) or the West (international students, second-generation children of immigrants), which we suggest might reflect personality differences associated with willingness to emigrate. Finally, we demonstrate suitability of the CFMT-Chinese for individual differences studies that use correlations within the normal range.

  18. A Chandra Snapshot Survey of Extremely Red Quasars from SDSS BOSS and WISE

    NASA Astrophysics Data System (ADS)

    Garmire, Gordon

    2017-09-01

    We propose Chandra snapshot observations of a sample of 15 extremely red and highly luminous quasars at z > 2. These Type 1 objects have recently been discovered via the SDSS BOSS and WISE surveys, and they are among the most-luminous quasars in the Universe. They appear to be part of the missing evolutionary link as merger-induced starburst galaxies transform into typical ultraviolet luminous quasars. Our aim is to efficiently gather X-ray information about a sufficiently large sample of these objects that general conclusions about their basic X-ray properties, especially obscuration level and luminosity, can be drawn reliably. The results will also allow effective targeting of promising objects in longer X-ray spectroscopic observations.

  19. Concentration and separation of biological organisms by ultrafiltration and dielectrophoresis

    DOEpatents

    Simmons, Blake A.; Hill, Vincent R.; Fintschenko, Yolanda; Cummings, Eric B.

    2010-10-12

    Disclosed is a method for monitoring sources of public water supply for a variety of pathogens by using a combination of ultrafiltration techniques together with dielectrophoretic separation techniques. Because water-borne pathogens, whether present due to "natural" contamination or intentional introduction, would likely be present in drinking water at low concentrations when samples are collected for monitoring or outbreak investigations, an approach is needed to quickly and efficiently concentrate and separate particles such as viruses, bacteria, and parasites in large volumes of water (e.g., 100 L or more) while simultaneously reducing the sample volume to levels sufficient for detecting low concentrations of microbes (e.g., <10 mL). The technique is also designed to screen the separated microbes based on specific conductivity and size.

  20. A method for release and multiple strand amplification of small quantities of DNA from endospores of the fastidious bacterium Pasteuria penetrans.

    PubMed

    Mauchline, T H; Mohan, S; Davies, K G; Schaff, J E; Opperman, C H; Kerry, B R; Hirsch, P R

    2010-05-01

    To establish a reliable protocol to extract DNA from Pasteuria penetrans endospores for use as template in multiple strand amplification, thus providing sufficient material for genetic analyses. To develop a highly sensitive PCR-based diagnostic tool for P. penetrans. An optimized method to decontaminate endospores, release and purify DNA enabled multiple strand amplification. DNA purity was assessed by cloning and sequencing gyrB and 16S rRNA gene fragments obtained from PCR using generic primers. Samples indicated to be 100% P. penetrans by the gyrB assay were estimated at 46% using the 16S rRNA gene. No bias was detected on cloning and sequencing 12 housekeeping and sporulation gene fragments from amplified DNA. The detection limit by PCR with Pasteuria-specific 16S rRNA gene primers following multiple strand amplification of DNA extracted using the method was a single endospore. Generation of large quantities of DNA will facilitate genomic sequencing of P. penetrans. Apparent differences in sample purity are explained by variations in 16S rRNA gene copy number in Eubacteria leading to exaggerated estimations of sample contamination. Detection of single endospores will facilitate investigations of P. penetrans molecular ecology. These methods will advance studies on P. penetrans and facilitate research on other obligate and fastidious micro-organisms where it is currently impractical to obtain DNA in sufficient quantity and quality.

  1. Integrating scales of seagrass monitoring to meet conservation needs

    USGS Publications Warehouse

    Neckles, Hilary A.; Kopp, Blaine S.; Peterson, Bradley J.; Pooler, Penelope S.

    2012-01-01

    We evaluated a hierarchical framework for seagrass monitoring in two estuaries in the northeastern USA: Little Pleasant Bay, Massachusetts, and Great South Bay/Moriches Bay, New York. This approach includes three tiers of monitoring that are integrated across spatial scales and sampling intensities. We identified monitoring attributes for determining attainment of conservation objectives to protect seagrass ecosystems from estuarine nutrient enrichment. Existing mapping programs provided large-scale information on seagrass distribution and bed sizes (tier 1 monitoring). We supplemented this with bay-wide, quadrat-based assessments of seagrass percent cover and canopy height at permanent sampling stations following a spatially distributed random design (tier 2 monitoring). Resampling simulations showed that four observations per station were sufficient to minimize bias in estimating mean percent cover on a bay-wide scale, and sample sizes of 55 stations in a 624-ha system and 198 stations in a 9,220-ha system were sufficient to detect absolute temporal increases in seagrass abundance from 25% to 49% cover and from 4% to 12% cover, respectively. We made high-resolution measurements of seagrass condition (percent cover, canopy height, total and reproductive shoot density, biomass, and seagrass depth limit) at a representative index site in each system (tier 3 monitoring). Tier 3 data helped explain system-wide changes. Our results suggest tiered monitoring as an efficient and feasible way to detect and predict changes in seagrass systems relative to multi-scale conservation objectives.
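
    The kind of resampling exercise described can be sketched as follows: subsample k quadrat observations per station and track how far the bay-wide mean percent cover drifts from the full-data estimate as k grows. The synthetic cover values, station count, and resampling-with-replacement scheme are illustrative assumptions, not the study's data or code.

      # Quadrat resampling sketch (synthetic data): how does the number of
      # observations per station affect the bay-wide mean percent cover?
      import numpy as np

      rng = np.random.default_rng(3)
      n_stations, full_obs = 55, 8                   # stations, quadrats per station
      # synthetic percent-cover observations, skewed toward low cover
      cover = np.clip(rng.gamma(shape=1.5, scale=15.0, size=(n_stations, full_obs)), 0, 100)
      full_mean = cover.mean()

      for k in (1, 2, 4, 8):
          deviations = []
          for _ in range(2000):
              idx = rng.integers(0, full_obs, size=(n_stations, k))   # with replacement
              sub = np.take_along_axis(cover, idx, axis=1)
              deviations.append(sub.mean() - full_mean)
          deviations = np.array(deviations)
          print(f"{k} obs/station: mean deviation {deviations.mean():+.2f}, SD {deviations.std():.2f}")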

  2. High axial resolution imaging system for large volume tissues using combination of inclined selective plane illumination and mechanical sectioning

    PubMed Central

    Zhang, Qi; Yang, Xiong; Hu, Qinglei; Bai, Ke; Yin, Fangfang; Li, Ning; Gang, Yadong; Wang, Xiaojun; Zeng, Shaoqun

    2017-01-01

    To resolve fine structures of biological systems like neurons, it is required to realize microscopic imaging with sufficient spatial resolution in three dimensional systems. With regular optical imaging systems, high lateral resolution is accessible while high axial resolution is hard to achieve in a large volume. We introduce an imaging system for high 3D resolution fluorescence imaging of large volume tissues. Selective plane illumination was adopted to provide high axial resolution. A scientific CMOS working in sub-array mode kept the imaging area in the sample surface, which restrained the adverse effect of aberrations caused by inclined illumination. Plastic embedding and precise mechanical sectioning extended the axial range and eliminated distortion during the whole imaging process. The combination of these techniques enabled 3D high resolution imaging of large tissues. Fluorescent bead imaging showed resolutions of 0.59 μm, 0.47μm, and 0.59 μm in the x, y, and z directions, respectively. Data acquired from the volume sample of brain tissue demonstrated the applicability of this imaging system. Imaging of different depths showed uniform performance where details could be recognized in either the near-soma area or terminal area, and fine structures of neurons could be seen in both the xy and xz sections. PMID:29296503

  3. Human iPSC-derived neurons and lymphoblastoid cells for personalized medicine research in neuropsychiatric disorders.

    PubMed

    Gurwitz, David

    2016-09-01

    The development and clinical implementation of personalized medicine crucially depends on the availability of high-quality human biosamples; animal models, although capable of modeling complex human diseases, cannot reflect the large variation in the human genome, epigenome, transcriptome, proteome, and metabolome. Although the biosamples available from public biobanks that store human tissues and cells may represent the large human diversity for most diseases, these samples are not always sufficient for developing biomarkers for patient-tailored therapies for neuropsychiatric disorders. Postmortem human tissues are available from many biobanks; nevertheless, collections of neuronal human cells from large patient cohorts representing the human diversity remain scarce. Two tools are gaining popularity for personalized medicine research on neuropsychiatric disorders: human induced pluripotent stem cell-derived neurons and human lymphoblastoid cell lines. This review examines and contrasts the advantages and limitations of each tool for personalized medicine research.

  4. Laboratory-based x-ray phase-contrast tomography enables 3D virtual histology

    NASA Astrophysics Data System (ADS)

    Töpperwien, Mareike; Krenkel, Martin; Quade, Felix; Salditt, Tim

    2016-09-01

    Due to the large penetration depth and small wavelength hard x-rays offer a unique potential for 3D biomedical and biological imaging, combining capabilities of high resolution and large sample volume. However, in classical absorption-based computed tomography, soft tissue only shows a weak contrast, limiting the actual resolution. With the advent of phase-contrast methods, the much stronger phase shift induced by the sample can now be exploited. For high resolution, free space propagation behind the sample is particularly well suited to make the phase shift visible. Contrast formation is based on the self-interference of the transmitted beam, resulting in object-induced intensity modulations in the detector plane. As this method requires a sufficiently high degree of spatial coherence, it was since long perceived as a synchrotron-based imaging technique. In this contribution we show that by combination of high brightness liquid-metal jet microfocus sources and suitable sample preparation techniques, as well as optimized geometry, detection and phase retrieval, excellent three-dimensional image quality can be obtained, revealing the anatomy of a cobweb spider in high detail. This opens up new opportunities for 3D virtual histology of small organisms. Importantly, the image quality is finally augmented to a level accessible to automatic 3D segmentation.

  5. On the use of total aerobic spore bacteria to make treatment decisions due to Cryptosporidium risk at public water system wells.

    PubMed

    Berger, Philip; Messner, Michael J; Crosby, Jake; Vacs Renwick, Deborah; Heinrich, Austin

    2018-05-01

    Spore reduction can be used as a surrogate measure of Cryptosporidium natural filtration efficiency. Estimates of log10 (log) reduction were derived from spore measurements in paired surface and well water samples in Casper, Wyoming, and Kearney, Nebraska. We found that these data were suitable for testing the hypothesis (H₀) that the average reduction at each site was 2 log or less, using a one-sided Student's t-test. After establishing data quality objectives for the test (expressed as tolerable Type I and Type II error rates), we evaluated the test's performance as a function of the (a) true log reduction, (b) number of paired samples assayed and (c) variance of observed log reductions. We found that 36 paired spore samples are sufficient to achieve the objectives over a wide range of variance, including the variances observed in the two data sets. We also explored the feasibility of using smaller numbers of paired spore samples to supplement bioparticle counts for screening purposes in alluvial aquifers, to differentiate wells with large volume surface water induced recharge from wells with negligible surface water induced recharge. With key assumptions, we propose a normal statistical test of the same hypothesis (H₀), but with different performance objectives. As few as six paired spore samples appear adequate as a screening metric to supplement bioparticle counts to differentiate wells in alluvial aquifers with large volume surface water induced recharge. For the case when all available information (including failure to reject H₀ based on the limited paired spore data) leads to the conclusion that wells have large surface water induced recharge, we recommend further evaluation using additional paired biweekly spore samples. Published by Elsevier GmbH.
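
    A sketch of the kind of power calculation behind a statement such as "36 paired samples are sufficient" is shown below, using the noncentral t distribution for a one-sided test of H₀: mean log reduction ≤ 2. The assumed true mean and standard deviation of per-pair log reductions are illustrative values, not numbers from the study.

      # Power of a one-sided one-sample t-test of H0: mean log reduction <= 2.0,
      # as a function of the number of paired spore samples.
      import numpy as np
      from scipy import stats

      h0_mean = 2.0        # null hypothesis: 2 log reduction or less
      true_mean = 2.5      # assumed true average log reduction
      sd = 1.0             # assumed SD of per-pair log reductions
      alpha = 0.05         # Type I error rate

      for n in (6, 12, 24, 36, 48):
          df = n - 1
          t_crit = stats.t.ppf(1 - alpha, df)               # one-sided critical value
          ncp = (true_mean - h0_mean) / (sd / np.sqrt(n))   # noncentrality parameter
          power = stats.nct.sf(t_crit, df, ncp)             # P(reject H0 | true_mean)
          print(f"n = {n:2d} paired samples: power = {power:.2f}")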

  6. Complex magnetic susceptibility setup for spectroscopy in the extremely low-frequency range.

    PubMed

    Kuipers, B W M; Bakelaar, I A; Klokkenburg, M; Erné, B H

    2008-01-01

    A sensitive balanced differential transformer was built to measure complex initial parallel magnetic susceptibility spectra in the 0.01-1000 Hz range. The alternating magnetic field can be chosen sufficiently weak that the magnetic structure of the samples is only slightly perturbed and the low frequencies make it possible to study the rotational dynamics of large magnetic colloidal particles or aggregates dispersed in a liquid. The distinguishing features of the setup are the novel multilayered cylindrical coils with a large sample volume and a large number of secondary turns (55 000) to measure induced voltages with a good signal-to-noise ratio, the use of a dual channel function generator to provide an ac current to the primary coils and an amplitude- and phase-adjusted compensation voltage to the dual phase differential lock-in amplifier, and the measurement of several vector quantities at each frequency. We present the electrical impedance characteristics of the coils, and we demonstrate the performance of the setup by measurement on magnetic colloidal dispersions covering a wide range of characteristic relaxation frequencies and magnetic susceptibilities, from χ ≈ −10⁻⁵ for pure water to χ > 1 for concentrated ferrofluids.

  7. Falsifiability is not optional.

    PubMed

    LeBel, Etienne P; Berger, Derek; Campbell, Lorne; Loving, Timothy J

    2017-08-01

    Finkel, Eastwick, and Reis (2016; FER2016) argued the post-2011 methodological reform movement has focused narrowly on replicability, neglecting other essential goals of research. We agree multiple scientific goals are essential, but argue, however, a more fine-grained language, conceptualization, and approach to replication is needed to accomplish these goals. Replication is the general empirical mechanism for testing and falsifying theory. Sufficiently methodologically similar replications, also known as direct replications, test the basic existence of phenomena and ensure cumulative progress is possible a priori. In contrast, increasingly methodologically dissimilar replications, also known as conceptual replications, test the relevance of auxiliary hypotheses (e.g., manipulation and measurement issues, contextual factors) required to productively investigate validity and generalizability. Without prioritizing replicability, a field is not empirically falsifiable. We also disagree with FER2016's position that "bigger samples are generally better, but . . . that very large samples could have the downside of commandeering resources that would have been better invested in other studies" (abstract). We identify problematic assumptions involved in FER2016's modifications of our original research-economic model, and present an improved model that quantifies when (and whether) it is reasonable to worry that increasing statistical power will engender potential trade-offs. Sufficiently powering studies (i.e., >80%) maximizes both research efficiency and confidence in the literature (research quality). Given that we are in agreement with FER2016 on all key open science points, we are eager to start seeing the accelerated rate of cumulative knowledge development of social psychological phenomena such a sufficiently transparent, powered, and falsifiable approach will generate. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. TomoMiner and TomoMinerCloud: A software platform for large-scale subtomogram structural analysis

    PubMed Central

    Frazier, Zachary; Xu, Min; Alber, Frank

    2017-01-01

    SUMMARY Cryo-electron tomography (cryoET) captures the 3D electron density distribution of macromolecular complexes in a close-to-native state. With the rapid advance of cryoET acquisition technologies, it is possible to generate large numbers (>100,000) of subtomograms, each containing a macromolecular complex. Often, these subtomograms represent a heterogeneous sample due to variations in the structure and composition of a complex in its in situ form, or because the particles are a mixture of different complexes. In this case, subtomograms must be classified. However, classification of large numbers of subtomograms is a time-intensive task and often a limiting bottleneck. This paper introduces an open source software platform, TomoMiner, for large-scale subtomogram classification, template matching, subtomogram averaging, and alignment. Its scalable and robust parallel processing allows efficient classification of tens to hundreds of thousands of subtomograms. Additionally, TomoMiner provides a pre-configured TomoMinerCloud computing service permitting users without sufficient computing resources instant access to TomoMiner's high-performance features. PMID:28552576

  9. Development of improved space sampling strategies for ocean chemical properties: Total carbon dioxide and dissolved nitrate

    NASA Technical Reports Server (NTRS)

    Goyet, Catherine; Davis, Daniel; Peltzer, Edward T.; Brewer, Peter G.

    1995-01-01

    Large-scale ocean observing programs such as the Joint Global Ocean Flux Study (JGOFS) and the World Ocean Circulation Experiment (WOCE) today, must face the problem of designing an adequate sampling strategy. For ocean chemical variables, the goals and observing technologies are quite different from ocean physical variables (temperature, salinity, pressure). We have recently acquired data on the ocean CO2 properties on WOCE cruises P16c and P17c that are sufficiently dense to test for sampling redundancy. We use linear and quadratic interpolation methods on the sampled field to investigate what is the minimum number of samples required to define the deep ocean total inorganic carbon (TCO2) field within the limits of experimental accuracy (+/- 4 micromol/kg). Within the limits of current measurements, these lines were oversampled in the deep ocean. Should the precision of the measurement be improved, then a denser sampling pattern may be desirable in the future. This approach rationalizes the efficient use of resources for field work and for estimating gridded (TCO2) fields needed to constrain geochemical models.
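
    The redundancy test described can be sketched as follows: thin a densely sampled deep-ocean TCO2 profile, reconstruct it by interpolation, and check whether the reconstruction error stays within the ±4 micromol/kg experimental accuracy. The profile below is synthetic and only linear interpolation is shown; it merely stands in for the cruise data and the linear and quadratic schemes used in the study.

      # Sampling-redundancy sketch: thin a dense synthetic TCO2 profile,
      # reconstruct by linear interpolation, and compare against the accuracy.
      import numpy as np

      accuracy = 4.0                                      # micromol/kg
      depth = np.linspace(1000, 5000, 81)                 # dense sampling every 50 m
      tco2 = 2350 - 120 * np.exp(-(depth - 1000) / 800)   # smooth synthetic profile

      for step in (2, 4, 8, 16):                          # keep every step-th sample
          d_sub, c_sub = depth[::step], tco2[::step]
          recon = np.interp(depth, d_sub, c_sub)          # reconstruct full profile
          max_err = np.abs(recon - tco2).max()
          verdict = "sampling is redundant" if max_err < accuracy else "denser sampling needed"
          print(f"every {step:2d}th sample: max error {max_err:.2f} micromol/kg -> {verdict}")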

  10. The brightest galaxies in the first 700 Myr: Building Hubble's legacy of large area IR imaging for JWST and beyond

    NASA Astrophysics Data System (ADS)

    Trenti, Michele

    2017-08-01

    Hubble's WFC3 has been a game changer for the study of early galaxy formation in the first 700 Myr after the Big Bang. Reliable samples of sources to redshift z ~ 11, which can be discovered only from space, are now constraining the evolution of the galaxy luminosity function into the epoch of reionization. Unexpectedly but excitingly, the recent spectroscopic confirmations of L>L* galaxies at z>8.5 demonstrate that objects brighter than our own Galaxy are already present 500 Myr after the Big Bang, creating a challenge to current theoretical/numerical models that struggle to explain how galaxies can grow so luminous so quickly. Yet, the existing HST observations do not cover sufficient area, nor sample a large enough diversity of environments, to provide an unbiased sample of sources, especially at z ~ 9-11 where only a handful of bright candidates are known. To double this currently insufficient sample size, to constrain effectively the bright end of the galaxy luminosity function at z ~ 9-10, and to provide targets for follow-up imaging and spectroscopy with JWST, we propose a large-area pure-parallel survey that will discover the Brightest of Reionizing Galaxies (BoRG[4JWST]). We will observe 580 arcmin^2 over 125 sightlines in five WFC3 bands (0.35 to 1.7 micron) using high-quality pure-parallel opportunities available in the cycle (3 orbits or longer). These public observations will identify more than 80 intrinsically bright galaxies at z ~ 8-11, investigate the connection between halo mass, star formation and feedback in progenitors of groups and clusters, and build HST's lasting legacy of large-area, near-IR imaging.

  11. Method for concentration and separation of biological organisms by ultrafiltration and dielectrophoresis

    DOEpatents

    Simmons, Blake A.; Hill, Vincent R.; Fintschenko, Yolanda; Cummings, Eric B.

    2012-09-04

    Disclosed is a method for monitoring sources of public water supply for a variety of pathogens by using a combination of ultrafiltration techniques together with dielectrophoretic separation techniques. Because water-borne pathogens, whether present due to "natural" contamination or intentional introduction, would likely be present in drinking water at low concentrations when samples are collected for monitoring or outbreak investigations, an approach is needed to quickly and efficiently concentrate and separate particles such as viruses, bacteria, and parasites in large volumes of water (e.g., 100 L or more) while simultaneously reducing the sample volume to levels sufficient for detecting low concentrations of microbes (e.g., <10 mL). The technique is also designed to screen the separated microbes based on specific conductivity and size.

  12. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on the Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
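
    The abstract does not give the Cells Analyzer's exact routine, but a common way to translate a reliability degree and a target relative error into a required count is n ≈ (z·CV/RE)², where CV is the coefficient of variation of the individual cell measurements being averaged. The sketch below uses assumed CV values purely for illustration.

      # One common sizing rule (not necessarily the Cells Analyzer routine):
      # cells needed so the sample mean has relative error RE at reliability RD.
      import math
      from scipy import stats

      RD = 0.95                                # reliability degree
      RE = 0.05                                # target relative error
      z = stats.norm.ppf(1 - (1 - RD) / 2)     # two-sided normal quantile (~1.96)

      for cv in (0.25, 0.35, 0.45):            # assumed coefficients of variation
          n = math.ceil((z * cv / RE) ** 2)
          print(f"CV = {cv:.2f}: about {n} cells required")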

  13. Estimation of sampling error uncertainties in observed surface air temperature change in China

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2017-08-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with the maximum value exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, negative temperature anomalies were present in each month prior to the 1980s; warming began thereafter and accelerated in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series differ clearly from those obtained with other uncertainty estimation methods, which is a plausible reason for the inconsistencies between our estimate and other studies during this period.

  14. Challenges of microtome‐based serial block‐face scanning electron microscopy in neuroscience

    PubMed Central

    WANNER, A. A.; KIRSCHMANN, M. A.

    2015-01-01

    Summary Serial block‐face scanning electron microscopy (SBEM) is becoming increasingly popular for a wide range of applications in many disciplines from biology to material sciences. This review focuses on applications for circuit reconstruction in neuroscience, which is one of the major driving forces advancing SBEM. Neuronal circuit reconstruction poses exceptional challenges to volume EM in terms of resolution, field of view, acquisition time and sample preparation. Mapping the connections between neurons in the brain is crucial for understanding information flow and information processing in the brain. However, information on the connectivity between hundreds or even thousands of neurons densely packed in neuronal microcircuits is still largely missing. Volume EM techniques such as serial section TEM, automated tape‐collecting ultramicrotome, focused ion‐beam scanning electron microscopy and SBEM (microtome serial block‐face scanning electron microscopy) provide sufficient resolution to resolve ultrastructural details such as synapses and a sufficient field of view for dense reconstruction of neuronal circuits. While volume EM techniques are advancing, they are generating large data sets on the terabyte scale that require new image processing workflows and analysis tools. In this review, we present the recent advances in SBEM for circuit reconstruction in neuroscience and an overview of existing image processing and analysis pipelines. PMID:25907464

  15. Use of Activity-Based Probes to Develop High Throughput Screening Assays That Can Be Performed in Complex Cell Extracts

    PubMed Central

    Deu, Edgar; Yang, Zhimou; Wang, Flora; Klemba, Michael; Bogyo, Matthew

    2010-01-01

    Background High throughput screening (HTS) is one of the primary tools used to identify novel enzyme inhibitors. However, its applicability is generally restricted to targets that can either be expressed recombinantly or purified in large quantities. Methodology and Principal Findings Here, we described a method to use activity-based probes (ABPs) to identify substrates that are sufficiently selective to allow HTS in complex biological samples. Because ABPs label their target enzymes through the formation of a permanent covalent bond, we can correlate labeling of target enzymes in a complex mixture with inhibition of turnover of a substrate in that same mixture. Thus, substrate specificity can be determined and substrates with sufficiently high selectivity for HTS can be identified. In this study, we demonstrate this method by using an ABP for dipeptidyl aminopeptidases to identify (Pro-Arg)2-Rhodamine as a specific substrate for DPAP1 in Plasmodium falciparum lysates and Cathepsin C in rat liver extracts. We then used this substrate to develop highly sensitive HTS assays (Z’>0.8) that are suitable for use in screening large collections of small molecules (i.e >300,000) for inhibitors of these proteases. Finally, we demonstrate that it is possible to use broad-spectrum ABPs to identify target-specific substrates. Conclusions We believe that this approach will have value for many enzymatic systems where access to large amounts of active enzyme is problematic. PMID:20700487

  16. Determination of Coherency and Rigidity Temperatures in Al-Cu Alloys Using In Situ Neutron Diffraction During Casting

    NASA Astrophysics Data System (ADS)

    Drezet, Jean-Marie; Mireux, Bastien; Szaraz, Zoltan; Pirling, Thilo

    2014-08-01

    The rigidity temperature of a solidifying alloy is the temperature at which the solid phase is sufficiently coalesced to transmit tensile stress. It is a major input parameter in numerical modeling of solidification processes as it defines the point at which thermally induced deformations start to generate internal stresses in a casting. This temperature has been determined for an Al-13 wt.% Cu alloy using in situ neutron diffraction during casting in a dog-bone-shaped mold. This setup allows the sample to build up internal stress naturally as its contraction is not possible. The cooling on both sides of the mold induces a hot spot at the middle of the sample that is irradiated by neutrons. Diffraction patterns are recorded every 11 s using a large detector, and the very first change of diffraction angles allows for the determination of the rigidity temperature. We measured rigidity temperatures equal to 557°C and 548°C depending on the cooling rate for grain refined Al-13 wt.% Cu alloys. At a high cooling rate, rigidity is reached during the formation of the eutectic phase. In this case, the solid phase is not sufficiently coalesced to sustain tensile load and thus cannot avoid hot tear formation.

  17. Geoscience Education Research Methods: Thinking About Sample Size

    NASA Astrophysics Data System (ADS)

    Slater, S. J.; Slater, T. F.; CenterAstronomy; Physics Education Research

    2011-12-01

    Geoscience education research is at a critical point in which conditions are sufficient to propel our field forward toward meaningful improvements in geosciences education practices. Our field has now reached a point where the outcomes of our research are deemed important to end users and funding agencies, and where we now have a large number of scientists who are either formally trained in geosciences education research or who have dedicated themselves to excellence in this domain. At this point we must collectively work through our epistemology, our rules of what methodologies will be considered sufficiently rigorous, and what data and analysis techniques will be acceptable for constructing evidence. In particular, we have to work out our answer to that most difficult of research questions: "How big should my 'N' be?" This paper presents a very brief answer to that question, addressing both quantitative and qualitative methodologies. Research question/methodology alignment, effect size and statistical power will be discussed, in addition to a defense of the notion that bigger is not always better.

  18. Plasma phenylalanine and tyrosine responses to different nutritional conditions (fasting/postprandial) in patients with phenylketonuria: effect of sample timing.

    PubMed

    van Spronsen, F J; van Rijn, M; van Dijk, T; Smit, G P; Reijngoud, D J; Berger, R; Heymans, H S

    1993-10-01

    To evaluate the adequacy of dietary treatment in patients with phenylketonuria, the monitoring of plasma phenylalanine and tyrosine concentrations is of great importance. The preferable time of blood sampling in relation to the nutritional condition during the day, however, is not known. It was the aim of this study to define guidelines for the timing of blood sampling with a minimal burden for the patient. Plasma concentrations of phenylalanine and tyrosine were measured in nine patients with phenylketonuria who had no clinical evidence of tyrosine deficiency. These values were measured during the day both after a prolonged overnight fast, and before and after breakfast. Phenylalanine showed a small rise during prolonged fasting, while tyrosine decreased slightly. After an individually tailored breakfast, phenylalanine remained stable, while tyrosine showed large fluctuations. It is concluded that the patient's nutritional condition (fasting/postprandial) is not important in the evaluation of the phenylalanine intake. To detect a possible tyrosine deficiency, however, a single blood sample is not sufficient and a combination of a preprandial and postprandial blood sample on the same day is advocated.

  19. Optimization of sampling parameters for collection and preconcentration of alveolar air by needle traps.

    PubMed

    Filipiak, Wojciech; Filipiak, Anna; Ager, Clemens; Wiesenhofer, Helmut; Amann, Anton

    2012-06-01

    An approach for breath-VOC collection and preconcentration using needle traps was developed and optimized. The alveolar air was collected from only a few exhalations, under visual control of expired CO2, into a large gas-tight glass syringe and then warmed to 45 °C for a short time to avoid condensation. Subsequently, a specially constructed sampling device equipped with Bronkhorst® electronic flow controllers was used for automated adsorption. This sampling device allows time-saving collection of expired/inspired air in parallel onto three different needle traps, as well as improvement of the sensitivity and reproducibility of NT-GC-MS analysis by collection of a relatively large (up to 150 ml) volume of exhaled breath. It was shown that the collection of alveolar air derived from only a few exhalations into a large syringe, followed by automated adsorption on needle traps, yields better results than manual sorption by up/down cycles with a 1 ml syringe, mostly due to avoided condensation and the electronically controlled stable sample flow rate. The optimal needle-trap profile and composition consist of 2 cm Carbopack X and 1 cm Carboxen 1000, allowing highly efficient VOC enrichment, while injection by a fast expansive flow technique requires no modifications to instrumentation, and fully automated GC-MS analysis can be performed with a commercially available autosampler. This optimized analytical procedure considerably facilitates the collection and enrichment of alveolar air, and is therefore suitable for application at the bedside of critically ill patients in an intensive care unit. Due to its simplicity it can replace the time-consuming sampling of sufficient breath volume by numerous up/down cycles with a 1 ml syringe.

  20. Concentration of Enteroviruses, Adenoviruses, and Noroviruses from Drinking Water by Use of Glass Wool Filters▿

    PubMed Central

    Lambertini, Elisabetta; Spencer, Susan K.; Bertz, Phillip D.; Loge, Frank J.; Kieke, Burney A.; Borchardt, Mark A.

    2008-01-01

    Available filtration methods to concentrate waterborne viruses are either too costly for studies requiring large numbers of samples, limited to small sample volumes, or not very portable for routine field applications. Sodocalcic glass wool filtration is a cost-effective and easy-to-use method to retain viruses, but its efficiency and reliability are not adequately understood. This study evaluated glass wool filter performance to concentrate the four viruses on the U.S. Environmental Protection Agency contaminant candidate list, i.e., coxsackievirus, echovirus, norovirus, and adenovirus, as well as poliovirus. Total virus numbers recovered were measured by quantitative reverse transcription-PCR (qRT-PCR); infectious polioviruses were quantified by integrated cell culture (ICC)-qRT-PCR. Recovery efficiencies averaged 70% for poliovirus, 14% for coxsackievirus B5, 19% for echovirus 18, 21% for adenovirus 41, and 29% for norovirus. Virus strain and water matrix affected recovery, with significant interaction between the two variables. Optimal recovery was obtained at pH 6.5. No evidence was found that water volume, filtration rate, and number of viruses seeded influenced recovery. The method was successful in detecting indigenous viruses in municipal wells in Wisconsin. Long-term continuous filtration retained viruses sufficiently for their detection for up to 16 days after seeding for qRT-PCR and up to 30 days for ICC-qRT-PCR. Glass wool filtration is suitable for large-volume samples (1,000 liters) collected at high filtration rates (4 liters min−1), and its low cost makes it advantageous for studies requiring large numbers of samples. PMID:18359827

  1. Neuro-genetic system for optimization of GMI samples sensitivity.

    PubMed

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are widely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples' sensitivity is usually empirical and very time-consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) neural network is used to model the impedance phase, and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.
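
    A compact illustration of the neuro-genetic idea is sketched below: an MLP regressor learns the mapping from conditioning parameters to impedance-phase sensitivity, and a simple genetic algorithm then searches the surrogate's input space for the maximizing setting. The synthetic "measurements", parameter ranges, and GA settings are stand-ins, not the authors' data or implementation.

      # Neuro-genetic sketch: MLP surrogate of sensitivity + a simple GA search.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(4)

      def measured_sensitivity(x):
          # stand-in for lab data: x = [field, DC level, excitation frequency]
          h, idc, f = x[..., 0], x[..., 1], x[..., 2]
          return np.exp(-((h - 1.2) ** 2)) * (1 + 0.3 * idc) * np.exp(-((f - 5.0) ** 2) / 8)

      lo, hi = np.array([0.0, 0.0, 1.0]), np.array([3.0, 2.0, 10.0])   # parameter ranges
      X = rng.uniform(lo, hi, size=(400, 3))
      y = measured_sensitivity(X) + rng.normal(0, 0.01, 400)

      surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                               random_state=0).fit(X, y)

      # GA over the surrogate: tournament selection, blend crossover, Gaussian mutation
      pop = rng.uniform(lo, hi, size=(60, 3))
      for gen in range(40):
          fitness = surrogate.predict(pop)
          idx = rng.integers(0, len(pop), size=(len(pop), 2))
          winners = np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]], idx[:, 0], idx[:, 1])
          parents = pop[winners]
          mates = parents[rng.permutation(len(parents))]
          alpha = rng.uniform(size=(len(parents), 1))
          children = alpha * parents + (1 - alpha) * mates            # blend crossover
          children += rng.normal(0, 0.05, children.shape) * (hi - lo) # mutation
          pop = np.clip(children, lo, hi)

      best = pop[np.argmax(surrogate.predict(pop))]
      print("predicted optimum (field, DC level, frequency):", np.round(best, 2))
      print("true optimum of the stand-in function:", [1.2, 2.0, 5.0])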

  2. Defining a stable water isotope framework for isotope hydrology application in a large trans-boundary watershed (Russian Federation/Ukraine).

    PubMed

    Vystavna, Yuliya; Diadin, Dmytro; Huneau, Frédéric

    2018-05-01

    Stable isotopes of hydrogen (²H) and oxygen (¹⁸O) of the water molecule were used to assess the relationship between precipitation, surface water and groundwater in a large Russia/Ukraine trans-boundary river basin. Precipitation was sampled from November 2013 to February 2015, and surface water and groundwater were sampled during high and low flow in 2014. A local meteoric water line was defined for the Ukrainian part of the basin. The isotopic seasonality in precipitation was evident with depletion in heavy isotopes in November-March and an enrichment in April-October, indicating continental and temperature effects. Surface water was enriched in stable water isotopes from upstream to downstream sites due to progressive evaporation. Stable water isotopes in groundwater indicated that recharge occurs mainly during winter and spring. A one-year data set is probably not sufficient to report the seasonality of groundwater recharge, but this survey can be used to identify the stable water isotopes framework in a weakly gauged basin for further hydrological and geochemical studies.

  3. Catch of channel catfish with tandem-set hoop nets and gill nets in lentic systems of Nebraska

    USGS Publications Warehouse

    Richters, Lindsey K.; Pope, Kevin L.

    2011-01-01

    Twenty-six Nebraska water bodies representing two ecosystem types (small standing waters and large standing waters) were surveyed during 2008 and 2009 with tandem-set hoop nets and experimental gill nets to determine if similar trends existed in catch rates and size structures of channel catfish Ictalurus punctatus captured with these gears. Gear efficiency was assessed as the number of sets (nets) that would be required to capture 100 channel catfish given observed catch per unit effort (CPUE). Efficiency of gill nets was not correlated with efficiency of hoop nets for capturing channel catfish. Small sample sizes prohibited estimation of proportional size distributions in most surveys; in the four surveys for which sample size was sufficient to quantify length-frequency distributions of captured channel catfish, distributions differed between gears. The CPUE of channel catfish did not differ between small and large water bodies for either gear. While catch rates of hoop nets were lower than rates recorded in previous studies, this gear was more efficient than gill nets at capturing channel catfish. However, comparisons of size structure between gears may be problematic.

  4. A Coaxial Dielectric Probe Technique for Distinguishing Tooth Enamel from Dental Resin

    PubMed Central

    Williams, Benjamin B.; Geimer, Shireen D.; Flood, Ann B.; Swartz, Harold M.

    2016-01-01

    For purposes of biodosimetry in the event of a large-scale radiation disaster, one major and very promising point-of-contact method is assessing dose using tooth enamel. This technique utilizes the capabilities of electron paramagnetic resonance to measure free radicals and other unpaired electron species, and the fact that the deposition of energy from ionizing radiation produces free radicals in most materials. An important stipulation for this strategy is that the measurements need to be performed on a central incisor that is basically intact, i.e., one that has an area of enamel surface as large as the probing tip of the resonator, without decay or restorative care that replaces the enamel. Therefore, an important consideration is how to quickly assess whether the tooth has sufficient enamel to be measured for dose and whether there is resin present on the tooth being measured, and to be able to characterize the amount of surface that is impacted. While there is a relatively small commercially available dielectric probe which could be used in this context, it has several disadvantages for the intended use. Therefore, a smaller, 1.19 mm diameter, 50 ohm, open-ended coaxial dielectric probe has been developed as an alternative. The performance of the custom probe was validated against measurement results of known standards. Measurements were taken of multiple tooth enamel and dental resin samples using both probes. While the probe contact with the tooth samples was imperfect and added to measurement variability, the inherent dielectric contrast between the enamel and resin was sufficient that the probe measurements could be used as a robust means of distinguishing the two material types. The smaller diameter probe produced markedly more definitive results in terms of distinguishing the two materials. PMID:27182531

  5. Tracking the distribution of "ecstasy" tablets by Raman composition profiling: a large scale feasibility study.

    PubMed

    Bell, Steven E J; Barrett, Lindsay J; Burns, D Thorburn; Dennis, Andrew C; Speers, S James

    2003-11-01

    Here we report the results of the largest study yet carried out on composition profiling of seized "ecstasy" tablets by Raman spectroscopy. Approximately 1500 tablets from different seizures in N. Ireland were analysed and even though practically all the tablets contained MDMA as active constituent, there were very significant differences in their Raman spectra, which were due to variations in both the nature and concentration of the excipients used and/or the degree of hydration of the MDMA. The ratios of the peak heights of the prominent drug bands at 810 cm⁻¹ and 716 cm⁻¹ (which vary with hydration state of the drug), and the drug band at 810 cm⁻¹ against the largest clearly discernible excipient band in the spectrum were measured for all the samples. It was found that there was sufficient variation in composition in the general sample population to make any matches between batches of tablets taken from different seizures significant, rather than the result of random chance. Despite the large number of different batches of tablets examined in this study, only two examples of indistinguishable sets of tablets were found and in only one of these had the two batches of tablets been seized at different times. Finally, the fact that there are many examples of batches of tablets (particularly in different batches taken from single seizures) in which the differences between each set are sufficiently small that they appear to arise only from random variations within a standard manufacturing method implies that, with more extensive data, it may be possible to recognize the "signature" of tablets prepared by major manufacturers.

  6. Running Out of Time: Why Elephants Don't Gallop

    NASA Astrophysics Data System (ADS)

    Noble, Julian V.

    2001-11-01

    The physics of high-speed running implies that galloping becomes impossible for sufficiently large animals. Some authors have suggested that this is because the strength/weight ratio decreases with size, eventually rendering large animals excessively liable to injury when they attempt to gallop. This paper suggests instead that large animals cannot move their limbs sufficiently rapidly to take advantage of leaving the ground, and hence are restricted to walking gaits. From this point of view the relatively low strength/weight ratio of elephants follows from their inability to gallop, rather than causing it.

  7. An optimal sampling approach to modelling whole-body vibration exposure in all-terrain vehicle driving.

    PubMed

    Lü, Xiaoshu; Takala, Esa-Pekka; Toppila, Esko; Marjanen, Ykä; Kaila-Kangas, Leena; Lu, Tao

    2017-08-01

    Exposure to whole-body vibration (WBV) presents an occupational health risk, and several safety standards require WBV to be measured. The high cost of direct measurements in large epidemiological studies raises the question of optimal sampling for estimating WBV exposures, given the large variation in exposure levels at real worksites. This paper presents a new approach to addressing this problem. Daily exposure to WBV was recorded for 9-24 days among 48 all-terrain vehicle drivers. Four data sets based on root mean squared recordings were obtained from the measurements. The data were modelled using a semi-variogram with spectrum analysis, and the optimal sampling scheme was derived. The optimal sampling interval was 140 min. The result was verified and validated in terms of its accuracy and statistical power. Recordings of two to three hours are probably needed to get a sufficiently unbiased daily WBV exposure estimate at real worksites. The developed model is general enough that it is applicable to other cumulative exposures or biosignals. Practitioner Summary: Exposure to whole-body vibration (WBV) presents an occupational health risk and safety standards require WBV to be measured. However, direct measurements can be expensive. This paper presents a new approach to addressing this problem. The developed model is general enough that it is applicable to other cumulative exposures or biosignals.
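
    As an illustration of the semi-variogram idea, the sketch below computes an empirical semivariogram of a synthetic one-shift exposure series and reads off the lag at which it levels out, which suggests how far apart recordings can be spaced. The series, noise model, and plateau rule are assumptions for illustration, not the drivers' recordings or the paper's exact procedure.

      # Empirical semivariogram of a synthetic WBV exposure series:
      # gamma(h) = 0.5 * mean[(x(t+h) - x(t))^2]; the lag where gamma plateaus
      # hints at how far apart recordings can be taken.
      import numpy as np

      rng = np.random.default_rng(5)
      minutes = np.arange(8 * 60)                    # one 8-hour shift, 1-min RMS values
      # synthetic exposure: slow task-to-task drift plus short-term noise
      slow = np.interp(minutes, np.arange(0, 8 * 60 + 1, 60), rng.uniform(0.3, 1.2, 9))
      x = slow + rng.normal(0, 0.05, minutes.size)

      def semivariogram(series, max_lag):
          lags = np.arange(1, max_lag + 1)
          gamma = np.array([0.5 * np.mean((series[h:] - series[:-h]) ** 2) for h in lags])
          return lags, gamma

      lags, gamma = semivariogram(x, max_lag=240)
      sill = gamma[-60:].mean()                      # approximate plateau level
      range_lag = lags[np.argmax(gamma >= 0.95 * sill)]
      print(f"approximate semivariogram range: {range_lag} min")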

  8. Quantifying the uncertainty in heritability.

    PubMed

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-05-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.
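
    The contrast described above can be illustrated with a toy variance-components model y ~ N(0, h²K + (1-h²)I): the sketch below evaluates the likelihood of h² on a grid, forms a flat-prior posterior interval, and compares it with the maximum-likelihood estimate plus an asymptotic normal interval from the curvature. The simulated kinship matrix, grid resolution and flat prior are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 300
# Toy "kinship" matrix (positive semi-definite) and a phenotype simulated with h2 = 0.5.
A = rng.standard_normal((n, 2 * n))
K = A @ A.T / (2 * n)
h2_true = 0.5
L = np.linalg.cholesky(h2_true * K + (1 - h2_true) * np.eye(n))
y = L @ rng.standard_normal(n)

# Log-likelihood of h2 for standardized y (total variance fixed at 1).
vals, vecs = np.linalg.eigh(K)
yt = vecs.T @ y
def loglik(h2):
    d = h2 * vals + (1 - h2)            # eigenvalues of h2*K + (1-h2)*I
    return -0.5 * (np.sum(np.log(d)) + np.sum(yt ** 2 / d))

grid = np.linspace(0.001, 0.999, 999)
ll = np.array([loglik(h) for h in grid])

# Bayesian view: posterior over h2 under a flat prior on the grid.
post = np.exp(ll - ll.max()); post /= post.sum()
cdf = np.cumsum(post)
ci_bayes = (grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)])

# Frequentist view: MLE with an asymptotic normal interval from the curvature
# (assumes the maximum lies in the interior of the grid).
i = int(np.argmax(ll))
h2_hat = grid[i]
curv = (ll[i + 1] - 2 * ll[i] + ll[i - 1]) / (grid[1] - grid[0]) ** 2
se = 1.0 / np.sqrt(-curv)
ci_freq = (h2_hat - norm.ppf(0.975) * se, h2_hat + norm.ppf(0.975) * se)

print(f"MLE h2 = {h2_hat:.2f}, asymptotic 95% CI = {ci_freq}")
print(f"posterior 95% interval = {ci_bayes}")
```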

  9. "Are Thai children and youth sufficiently active? prevalence and correlates of physical activity from a nationally representative cross-sectional study".

    PubMed

    Amornsriwatanakul, Areekul; Lester, Leanne; Bull, Fiona C; Rosenberg, Michael

    2017-05-30

    Children and youth gain multiple health benefits from regular participation in physical activity (PA). However, in Thailand there is limited national data on children and youth's PA behaviors, and recent reports suggest that Thai children and youth have low levels of PA. Furthermore, there is almost no data on the factors associated with inactivity to support the development of a Thai National PA Plan. The purpose of this paper is to investigate Thai children and youth's participation in PA and its correlates across sociodemographic characteristics and different PA domains. This study applied a cross-sectional study design with multi-stage stratified cluster sampling. A nationally representative sample of 13,255 children and youth aged 6-17 years was used for data analysis. A previously validated questionnaire was used to assess PA prevalence. Logistic regression was conducted to examine the relationships of socio-demographic factors and participation in different PA domains with overall PA. Only 23.4% of Thai children and youth met recommended levels of PA, and there were large gender and regional differences. PA levels generally declined with age, although the level observed in the 10-13 year group was slightly higher than in other age groups. A majority of children and youth engaged in a large number of different activities across PA domains. Sex, age, BMI, geographical region, organized sports, and participation in sport and recreational activities were significant predictors of meeting the global PA guidelines, whereas participation in physical education, active transport, and the number of screen time activities had no association. Girls were less likely to achieve sufficient PA levels (OR = 0.49, 95%CI; 0.45-0.54, p < 0.001), as were obese children (OR = 0.78, 95%CI; 0.64-0.94, p = 0.01), children living in the West (OR = 0.47, 95%CI; 0.38-0.59, p < 0.001), and those who did not participate in organized sports and sport/exercise activities, or who participated only minimally (1-2 activities) in recreational activities (OR = 0.79, 95%CI; 0.68-0.90, p < 0.001). The prevalence estimate of meeting the recommended guideline of sufficient PA in Thai children and youth is low, despite high levels of engagement in a large number of PA types. The results indicate that policy and interventions aimed at increasing PA are needed, with special attention to specific groups less likely to meet the PA guideline. Strategies to promote a large volume of participation in all possible types of PA as part of Thai children and youth's daily life should be considered.
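
    A minimal sketch of this kind of analysis (logistic regression reporting odds ratios with 95% confidence intervals for meeting the PA guideline) is given below, using statsmodels on a synthetic data set; the variables, coding and effect sizes are invented for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "girl":  rng.integers(0, 2, n),             # 1 = girl
    "age":   rng.integers(6, 18, n),
    "obese": rng.integers(0, 2, n),
    "organized_sports": rng.integers(0, 4, n),  # number of organized sports
})
# Hypothetical data-generating model loosely echoing the reported directions of effect.
logit = (-1.0 - 0.7 * df.girl - 0.05 * (df.age - 11)
         - 0.25 * df.obese + 0.3 * df.organized_sports)
df["meets_guideline"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df[["girl", "age", "obese", "organized_sports"]])
fit = sm.Logit(df["meets_guideline"].astype(int), X).fit(disp=0)

or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.round(2))   # odds ratios with 95% confidence intervals
```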

  10. Comparison of Submental Blood Collection with the Retroorbital and Submandibular Methods in Mice (Mus musculus)

    PubMed Central

    Regan, Rainy D; Fenyk-Melody, Judy E; Tran, Sam M; Chen, Guang; Stocking, Kim L

    2016-01-01

    Nonterminal blood sample collection of sufficient volume and quality for research is complicated in mice due to their small size and anatomy. Large (>100 μL) nonterminal volumes of unhemolyzed or unclotted blood are currently collected from the retroorbital sinus or submandibular plexus. We developed a third method—submental blood collection—which is similar in execution to the submandibular method but with minor changes in animal restraint and collection location. Compared with other techniques, submental collection is easier to perform due to the direct visibility of the target vessels, which are located in a sparsely furred region. Compared with the submandibular method, the submental method did not differ regarding weight change and clotting score but significantly decreased hemolysis and increased the overall number of high-quality samples. The submental method was performed with smaller lancets for the majority of the bleeds, yet resulted in fewer repeat collection attempts, fewer insufficient samples, and less extraneous blood loss, and was qualitatively less traumatic. Compared with the retroorbital technique, the submental method was similar regarding weight change but decreased hemolysis, clotting, and the number of overall high-quality samples; however, the retroorbital method resulted in significantly fewer incidents of insufficient sample collection. Extraneous blood loss was roughly equivalent between the submental and retroorbital methods. We conclude that the submental method is an acceptable venipuncture technique for obtaining large, nonterminal volumes of blood from mice. PMID:27657712

  11. The impact of the Sarbanes Oxley Act on auditing fees: An empirical study of the oil and gas industry

    NASA Astrophysics Data System (ADS)

    Ezelle, Ralph Wayne, Jr.

    2011-12-01

    This study examines auditing of energy firms before and after the Sarbanes-Oxley Act of 2002. The research explores factors impacting the asset-adjusted audit fee of oil and gas companies and specifically examines the effect of the Sarbanes-Oxley Act. This research analyzes multiple years of audit fees of firms engaged in the oil and gas industry. Pooled samples were created to improve statistical power, with sample sizes sufficient to test for medium and large effect sizes. The Sarbanes-Oxley Act significantly increases a firm's asset-adjusted audit fees. Additional findings are that part of the variance in audit fees was attributable to the market value of the enterprise, the number of subsidiaries, the receivables and inventory, debt ratio, non-profitability, and receipt of a going concern report.

  12. Materials identification using a small-scale pixellated x-ray diffraction system

    NASA Astrophysics Data System (ADS)

    O'Flynn, D.; Crews, C.; Drakos, I.; Christodoulou, C.; Wilson, M. D.; Veale, M. C.; Seller, P.; Speller, R. D.

    2016-05-01

    A transmission x-ray diffraction system has been developed using a pixellated, energy-resolving detector (HEXITEC) and a small-scale, mains operated x-ray source (Amptek Mini-X). HEXITEC enables diffraction to be measured without the requirement of incident spectrum filtration, or collimation of the scatter from the sample, preserving a large proportion of the useful signal compared with other diffraction techniques. Due to this efficiency, sufficient molecular information for material identification can be obtained within 5 s despite the relatively low x-ray source power. Diffraction data are presented from caffeine, hexamine, paracetamol, plastic explosives and narcotics. The capability to determine molecular information from aspirin tablets inside their packaging is demonstrated. Material selectivity and the potential for a sample classification model is shown with principal component analysis, through which each different material can be clearly resolved.
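
    The classification step can be illustrated with a small principal component analysis sketch: simulated diffraction profiles for a few materials are projected onto two principal components, and well-separated class centroids (relative to within-class scatter) indicate that the materials can be resolved. The Gaussian-peak spectra and peak positions below are placeholders, not the measured data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
axis = np.linspace(20, 80, 300)              # momentum-transfer / energy axis (arbitrary units)

def spectrum(peaks, noise=0.05):
    """Simulated diffraction profile: a few Gaussian peaks plus detector noise."""
    s = sum(a * np.exp(-0.5 * ((axis - c) / w) ** 2) for c, a, w in peaks)
    return s + noise * rng.standard_normal(axis.size)

materials = {                                 # hypothetical peak lists per material
    "caffeine":    [(27, 1.0, 1.2), (41, 0.6, 1.5)],
    "hexamine":    [(33, 0.9, 1.0), (55, 0.7, 1.3)],
    "paracetamol": [(24, 0.8, 1.1), (48, 1.0, 1.4)],
}
X, labels = [], []
for name, peaks in materials.items():
    for _ in range(20):                       # 20 repeat acquisitions per material
        X.append(spectrum(peaks))
        labels.append(name)
X = np.asarray(X)

scores = PCA(n_components=2).fit_transform(X)
for name in materials:
    mask = np.array(labels) == name
    print(name, scores[mask].mean(axis=0).round(2))   # class centroids in PC space
```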

  13. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices.

    PubMed

    Harrar, Solomon W; Kong, Xiaoli

    2015-03-01

    In this paper, test statistics for repeated measures design are introduced when the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together, but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases in the balanced as well as unbalanced cases. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have power comparable with a popular method known to work well in the low-dimensional situation, but show an enormous advantage when the dimension is large. Data from an electroencephalography (EEG) experiment are analyzed to illustrate the application of the results.

  14. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices

    PubMed Central

    Harrar, Solomon W.; Kong, Xiaoli

    2015-01-01

    In this paper, test statistics for repeated measures design are introduced when the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together, but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases in the balanced as well as unbalanced cases. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have power comparable with a popular method known to work well in the low-dimensional situation, but show an enormous advantage when the dimension is large. Data from an electroencephalography (EEG) experiment are analyzed to illustrate the application of the results. PMID:26778861

  15. Estimating Divergence Parameters With Small Samples From a Large Number of Loci

    PubMed Central

    Wang, Yong; Hey, Jody

    2010-01-01

    Most methods for studying divergence with gene flow rely upon data from many individuals at few loci. Such data can be useful for inferring recent population history but they are unlikely to contain sufficient information about older events. However, the growing availability of genome sequences suggests a different kind of sampling scheme, one that may be more suited to studying relatively ancient divergence. Data sets extracted from whole-genome alignments may represent very few individuals but contain a very large number of loci. To take advantage of such data we developed a new maximum-likelihood method for genomic data under the isolation-with-migration model. Unlike many coalescent-based likelihood methods, our method does not rely on Monte Carlo sampling of genealogies, but rather provides a precise calculation of the likelihood by numerical integration over all genealogies. We demonstrate that the method works well on simulated data sets. We also consider two models for accommodating mutation rate variation among loci and find that the model that treats mutation rates as random variables leads to better estimates. We applied the method to the divergence of Drosophila melanogaster and D. simulans and detected a low, but statistically significant, signal of gene flow from D. simulans to D. melanogaster. PMID:19917765

  16. Atomic engineering of spin valves using Ag as a surfactant

    NASA Astrophysics Data System (ADS)

    Yang, David X.; Shashishekar, B.; Chopra, Harsh Deep; Chen, P. J.; Egelhoff, W. F.

    2001-06-01

    In this study, dc magnetron sputtered NiO (50 nm)/Co (2.5 nm)/Cu(1.5 nm)/Co (3.0 nm) bottom spin valves were studied with and without Ag as a surfactant. At Cu spacer thickness of 1.5 nm, a strong positive coupling >13.92 kA/m (>175 Oe) between NiO-pinned and "free" Co layers leads to a negligible giant magnetoresistance (GMR) effect (<0.7%) in Ag-free samples. In contrast, spin valves deposited in the presence of ≈1 monolayer of surfactant Ag have sufficiently reduced coupling, 5.65 kA/m (71 Oe), which results in an order of magnitude increase in GMR (8.5%). Using transmission electron microscopy (TEM), the large positive coupling in Ag-free samples could directly be attributed to the presence of numerous pinholes. In situ x-ray photoelectron spectroscopy shows that, in Ag-containing samples, the large mobile Ag atoms float out to the surface during successive growth of Co and Cu layers. Detailed TEM studies show that surfactant Ag leaves behind smoother interfaces less prone to pinholes. The use of surfactants also illustrates their efficacy in favorably altering the magnetic characteristics of GMR spin valves, and their potential use in other magnetoelectronics devices and multilayer systems.

  17. A Large Scale (N=400) Investigation of Gray Matter Differences in Schizophrenia Using Optimized Voxel-based Morphometry

    PubMed Central

    Meda, Shashwath A.; Giuliani, Nicole R.; Calhoun, Vince D.; Jagannathan, Kanchana; Schretlen, David J.; Pulver, Anne; Cascella, Nicola; Keshavan, Matcheri; Kates, Wendy; Buchanan, Robert; Sharma, Tonmoy; Pearlson, Godfrey D.

    2008-01-01

    Background Many studies have employed voxel-based morphometry (VBM) of MRI images as an automated method of investigating cortical gray matter differences in schizophrenia. However, results from these studies vary widely, likely due to different methodological or statistical approaches. Objective To use VBM to investigate gray matter differences in schizophrenia in a sample significantly larger than any published to date, and to increase statistical power sufficiently to reveal differences missed in smaller analyses. Methods Magnetic resonance whole brain images were acquired from four geographic sites, all using the same model 1.5T scanner and software version, and combined to form a sample of 200 patients with both first episode and chronic schizophrenia and 200 healthy controls, matched for age, gender and scanner location. Gray matter concentration was assessed and compared using optimized VBM. Results Compared to the healthy controls, schizophrenia patients showed significantly less gray matter concentration in multiple cortical and subcortical regions, some previously unreported. Overall, we found lower concentrations of gray matter in regions identified in prior studies, most of which reported only subsets of the affected areas. Conclusions Gray matter differences in schizophrenia are most comprehensively elucidated using a large, diverse and representative sample. PMID:18378428

  18. Bacteriophage Tail-Tube Assembly Studied by Proton-Detected 4D Solid-State NMR

    DOE PAGES

    Zinke, Maximilian; Fricke, Pascal; Samson, Camille; ...

    2017-07-07

    Obtaining unambiguous resonance assignments remains a major bottleneck in solid-state NMR studies of protein structure and dynamics. Particularly for supramolecular assemblies with large subunits (>150 residues), the analysis of crowded spectral data presents a challenge, even if three-dimensional (3D) spectra are used. Here, we present a proton-detected 4D solid-state NMR assignment procedure that is tailored for large assemblies. The key to recording 4D spectra with three indirect carbon or nitrogen dimensions with their inherently large chemical shift dispersion lies in the use of sparse non-uniform sampling (as low as 2%). As a proof of principle, we acquired 4D (H)COCANH, (H)CACONH, and (H)CBCANH spectra of the 20 kDa bacteriophage tail-tube protein gp17.1 in a total time of two and a half weeks. These spectra were sufficient to obtain complete resonance assignments in a straightforward manner without use of previous solution NMR data.

  19. Bacteriophage Tail-Tube Assembly Studied by Proton-Detected 4D Solid-State NMR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zinke, Maximilian; Fricke, Pascal; Samson, Camille

    Obtaining unambiguous resonance assignments remains a major bottleneck in solid-state NMR studies of protein structure and dynamics. Particularly for supramolecular assemblies with large subunits (>150 residues), the analysis of crowded spectral data presents a challenge, even if three-dimensional (3D) spectra are used. Here, we present a proton-detected 4D solid-state NMR assignment procedure that is tailored for large assemblies. The key to recording 4D spectra with three indirect carbon or nitrogen dimensions with their inherently large chemical shift dispersion lies in the use of sparse non-uniform sampling (as low as 2%). As a proof of principle, we acquired 4D (H)COCANH, (H)CACONH, and (H)CBCANH spectra of the 20 kDa bacteriophage tail-tube protein gp17.1 in a total time of two and a half weeks. These spectra were sufficient to obtain complete resonance assignments in a straightforward manner without use of previous solution NMR data.

  20. The topology of large-scale structure. I - Topology and the random phase hypothesis. [galactic formation models

    NASA Technical Reports Server (NTRS)

    Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.

    1987-01-01

    Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.

  1. The genetic basis of gout.

    PubMed

    Merriman, Tony R; Choi, Hyon K; Dalbeth, Nicola

    2014-05-01

    Gout results from deposition of monosodium urate (MSU) crystals. Elevated serum urate concentrations (hyperuricemia) are not sufficient for the development of disease. Genome-wide association studies (GWAS) have identified 28 loci controlling serum urate levels. The largest genetic effects are seen in genes involved in the renal excretion of uric acid, with others being involved in glycolysis. Whereas much is understood about the genetic control of serum urate levels, little is known about the genetic control of inflammatory responses to MSU crystals. Extending knowledge in this area depends on recruitment of large, clinically ascertained gout sample sets suitable for GWAS. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Liquid scintillator tiles for calorimetry

    DOE PAGES

    Amouzegar, M.; Belloni, A.; Bilki, B.; ...

    2016-11-28

    Future experiments in high energy and nuclear physics may require large, inexpensive calorimeters that can continue to operate after receiving doses of 50 Mrad or more. Also, the light output of liquid scintillators suffers little degradation under irradiation. However, many challenges exist before liquids can be used in sampling calorimetry, especially regarding developing a packaging that has sufficient efficiency and uniformity of light collection, as well as suitable mechanical properties. We present the results of a study of a scintillator tile based on the EJ-309 liquid scintillator using cosmic rays and test beam on the light collection efficiency and uniformity, and some preliminary results on radiation hardness.

  3. Liquid scintillator tiles for calorimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amouzegar, M.; Belloni, A.; Bilki, B.

    Future experiments in high energy and nuclear physics may require large, inexpensive calorimeters that can continue to operate after receiving doses of 50 Mrad or more. Also, the light output of liquid scintillators suffers little degradation under irradiation. However, many challenges exist before liquids can be used in sampling calorimetry, especially regarding developing a packaging that has sufficient efficiency and uniformity of light collection, as well as suitable mechanical properties. We present the results of a study of a scintillator tile based on the EJ-309 liquid scintillator using cosmic rays and test beam on the light collection efficiency and uniformity, and some preliminary results on radiation hardness.

  4. Cautionary Notes on Cosmogenic W-182 and Other Nuclei in Lunar Samples

    NASA Technical Reports Server (NTRS)

    Yin, Qingzhu; Jacobsen, Stein B.; Wasserburg, G. J.

    2003-01-01

    Leya et al. (2000) showed that neutron capture on Ta-181 results in a production rate of Ta-182 (which decays with a half-life of 114 days to W-182) sufficiently high to cause significant shifts in W-182 abundances, considering the neutron fluences due to the cosmic ray cascade that were known to occur near the lunar surface. Leya et al. concluded that this cosmogenic production of W-182 may explain the large positive ε(W-182) values that Lee et al. (1997) had reported in some lunar samples, rather than these being produced from decay of now-extinct Hf-182 (τ̄ = 13 × 10⁶ yr). If the large range in ε(W-182) of lunar samples (0 to +11 in whole-rock samples) was due to decay of now-extinct Hf-182, it would require a very early time of formation and differentiation of the lunar crust-mantle system (with high Hf/W ratios) during the earliest stages of Earth's accretion. This result was both surprising and difficult to understand. The ability to explain these results by a more plausible mechanism is therefore very attractive. In a recent report, Lee et al. (2002) showed that there were excesses of W-182 and that ε(W-182) was correlated with the Ta/W ratios in the mineral phases of individual lunar rock samples. This is in accord with W-182 variations in lunar samples being produced by cosmic-ray-induced neutron capture on Ta-181.

  5. Nitrate and Nitrite Determination in Gunshot Residue Samples by Capillary Electrophoresis in Acidic Run Buffer.

    PubMed

    Erol, Özge Ö; Erdoğan, Behice Y; Onar, Atiye N

    2017-03-01

    Simultaneous determination of nitrate and nitrite in gunshot residue has been conducted by capillary electrophoresis using an acidic run buffer (pH 3.5). In previously developed capillary electrophoretic methods, alkaline pH separation buffers were used, in which nitrite and nitrate possess similar electrophoretic mobility. In this study, the electroosmotic flow has been reversed by using a low-pH running buffer without any additives. As a result of reversing the electroosmotic flow, very fast analysis is achieved: well-defined, well-separated ion peaks emerge in less than 4 min. In addition, the limit of detection was improved by employing large-volume sample stacking. Limit of detection values were 6.7 and 4.3 μM for nitrate and nitrite, respectively. In the traditional procedure, mechanical agitation is employed for extraction, while in this work ultrasound mixing for 30 min was found to give sufficient extraction efficiency. The proposed method was successfully applied to authentic gunshot residue samples. © 2016 American Academy of Forensic Sciences.

  6. Satellite orbit and data sampling requirements

    NASA Technical Reports Server (NTRS)

    Rossow, William

    1993-01-01

    Climate forcings and feedbacks vary over a wide range of time and space scales. The operation of non-linear feedbacks can couple variations at widely separated time and space scales and cause climatological phenomena to be intermittent. Consequently, monitoring of global, decadal changes in climate requires global observations that cover the whole range of space-time scales and are continuous over several decades. The sampling of smaller space-time scales must have sufficient statistical accuracy to measure the small changes in the forcings and feedbacks anticipated in the next few decades, while continuity of measurements is crucial for unambiguous interpretation of climate change. Shorter records of monthly and regional (500-1000 km) measurements with similar accuracies can also provide valuable information about climate processes, when 'natural experiments' such as large volcanic eruptions or El Ninos occur. In this section existing satellite datasets and climate model simulations are used to test the satellite orbits and sampling required to achieve accurate measurements of changes in forcings and feedbacks at monthly frequency and 1000 km (regional) scale.

  7. Method and apparatus for nitrogen oxide determination

    DOEpatents

    Hohorst, Frederick A.

    1990-01-01

    Method and apparatus for determining nitrogen oxide content in a high temperature process gas, which involves withdrawing a sample portion of a high temperature gas containing nitrogen oxide from a source to be analyzed. The sample portion is passed through a restrictive flow conduit, which may be a capillary or a restriction orifice. The restrictive flow conduit is heated to a temperature sufficient to maintain the flowing sample portion at an elevated temperature at least as great as the temperature of the high temperature gas source, to thereby provide that deposition of ammonium nitrate within the restrictive flow conduit cannot occur. The sample portion is then drawn into an aspirator device. A heated motive gas is passed to the aspirator device at a temperature at least as great as the temperature of the high temperature gas source. The motive gas is passed through the nozzle of the aspirator device under conditions sufficient to aspirate the heated sample portion through the restrictive flow conduit and produce a mixture of the sample portion in the motive gas at a dilution of the sample portion sufficient to provide that deposition of ammonium nitrate from the mixture cannot occur at reduced temperature. A portion of the cooled dilute mixture is then passed to analytical means capable of detecting nitric oxide.

  8. Data-driven confounder selection via Markov and Bayesian networks.

    PubMed

    Häggström, Jenny

    2018-06-01

    To estimate a causal effect on an outcome without bias, unconfoundedness is often assumed. If there is sufficient knowledge of the underlying causal structure, then existing confounder selection criteria can be used to select subsets of the observed pretreatment covariates, X, sufficient for unconfoundedness, if such subsets exist. Here, estimation of these target subsets is considered when the underlying causal structure is unknown. The proposed method is to model the causal structure by a probabilistic graphical model, for example, a Markov or Bayesian network, estimate this graph from observed data and select the target subsets given the estimated graph. The approach is evaluated by simulation both in a high-dimensional setting where unconfoundedness holds given X and in a setting where unconfoundedness only holds given subsets of X. Several common target subsets are investigated and the selected subsets are compared with respect to accuracy in estimating the average causal effect. The proposed method is implemented with existing software that can easily handle high-dimensional data, in terms of large samples and a large number of covariates. The results from the simulation study show that, if unconfoundedness holds given X, this approach is very successful in selecting the target subsets, outperforming alternative approaches based on random forests and LASSO, and that the subset estimating the target subset containing all causes of the outcome yields the smallest MSE in the estimation of the average causal effect. © 2017, The International Biometric Society.
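
    The final selection step can be illustrated as follows: given an already estimated causal graph over the covariates, the treatment T and the outcome Y, common back-door adjustment sets such as the parents of T, the (pretreatment) causes of Y, and their union can simply be read off the graph. The example graph below is invented, and the graph-estimation step itself is not shown.

```python
import networkx as nx

# Hypothetical estimated DAG: X1 confounds T and Y, X2 causes Y only,
# X3 causes T only, X4 is downstream of T (and must not be adjusted for).
g = nx.DiGraph([("X1", "T"), ("X1", "Y"), ("X2", "Y"),
                ("X3", "T"), ("T", "X4"), ("T", "Y")])

pretreatment = {"X1", "X2", "X3"}             # observed pretreatment covariates X

# Common target subsets discussed in the confounder-selection literature.
parents_T = set(g.predecessors("T")) & pretreatment
parents_Y = (set(g.predecessors("Y")) - {"T"}) & pretreatment
union_set = parents_T | parents_Y

print("parents of treatment:", parents_T)     # contains X1 and X3
print("causes of outcome:   ", parents_Y)     # contains X1 and X2
print("union:               ", union_set)     # contains X1, X2 and X3
# Each of these sets blocks the back-door path T <- X1 -> Y in this graph, so each
# is sufficient for unconfoundedness; the paper compares how well such subsets are
# recovered when the graph itself must first be estimated from data.
```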

  9. Planning multi-arm screening studies within the context of a drug development program

    PubMed Central

    Wason, James M S; Jaki, Thomas; Stallard, Nigel

    2013-01-01

    Screening trials are small trials used to decide whether an intervention is sufficiently promising to warrant a large confirmatory trial. Previous literature examined the situation where treatments are tested sequentially until one is considered sufficiently promising to take forward to a confirmatory trial. An important consideration for sponsors of clinical trials is how screening trials should be planned to maximize the efficiency of the drug development process. It has been found previously that small screening trials are generally the most efficient. In this paper we consider the design of screening trials in which multiple new treatments are tested simultaneously. We derive analytic formulae for the expected number of patients until a successful treatment is found, and propose methodology to search for the optimal number of treatments, and optimal sample size per treatment. We compare designs in which only the best treatment proceeds to a confirmatory trial and designs in which multiple treatments may proceed to a multi-arm confirmatory trial. We find that inclusion of a large number of treatments in the screening trial is optimal when only one treatment can proceed, and a smaller number of treatments is optimal when more than one can proceed. The designs we investigate are compared on a real-life set of screening designs. Copyright © 2013 John Wiley & Sons, Ltd. PMID:23529936
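
    A simplified Monte Carlo sketch of the quantity being optimized, the expected number of patients enrolled in screening until some treatment is taken forward, as a function of the number of arms per screening trial and the per-arm sample size, is given below; the normal-outcome model, effect sizes and pass/fail rule are assumptions, not the authors' analytic formulae.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def expected_patients(k, n, p_effective=0.2, delta=0.5, alpha=0.1, n_sim=2000):
    """Expected patients enrolled in screening until a treatment is taken forward.

    Each screening trial tests k new treatments with n patients per arm plus a
    shared control arm of n patients; a treatment 'passes' if its z-test against
    control is significant at one-sided level alpha."""
    crit = norm.ppf(1 - alpha)
    totals = []
    for _ in range(n_sim):
        patients = 0
        while True:
            patients += (k + 1) * n
            effective = rng.random(k) < p_effective
            # z-statistic has mean delta*sqrt(n/2) for a truly effective arm.
            z = rng.standard_normal(k) + np.where(effective, delta, 0.0) * np.sqrt(n / 2)
            if np.any(z > crit):
                break
        totals.append(patients)
    return np.mean(totals)

for k in (1, 2, 4, 8):
    for n in (10, 20, 40):
        print(f"k={k:2d}  n={n:3d}  E[patients] ~ {expected_patients(k, n):7.0f}")
# Scanning such a grid is one crude way to compare numbers of arms and per-arm
# sample sizes for the screening stage.
```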

  10. SU-F-J-193: Efficient Dose Extinction Method for Water Equivalent Path Length (WEPL) of Real Tissue Samples for Validation of CT HU to Stopping Power Conversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, R; Baer, E; Jee, K

    Purpose: For proton therapy, an accurate model of CT HU to relative stopping power (RSP) conversion is essential. In current practice, validation of these models relies solely on measurements of tissue substitutes with standard compositions. Validation based on real tissue samples would be much more direct and can address variations between patients. This study intends to develop an efficient and accurate system based on the concept of dose extinction to measure WEPL and retrieve RSP in a large number of biological tissue types. Methods: A broad AP proton beam delivering a spread out Bragg peak (SOBP) is used to irradiate the samples, with a Matrixx detector positioned immediately below. A water tank was placed on top of the samples, with the water level controllable with sub-millimeter precision by a remotely controlled dosing pump. While gradually lowering the water level with the beam on, the transmission dose was recorded at 1 frame/sec. The WEPL was determined as the difference between the known beam range of the delivered SOBP (80%) and the water level corresponding to 80% of the measured dose profiles in time. A Gammex 467 phantom was used to test the system, and various types of biological tissue were measured. Results: RSP for all Gammex inserts, except the one made with lung-450 material (<2% error), were determined within ±0.5% error. Depending on the WEPL of the investigated phantom, a measurement takes around 10 min, which can be accelerated by a faster pump. Conclusion: Based on the concept of dose extinction, a system was explored to measure WEPL efficiently and accurately for a large number of samples. This allows the validation of CT HU to stopping power conversions based on a large number of samples and real tissues. It also allows the assessment of beam uncertainties due to variations across patients, an issue that has never been sufficiently studied before.
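
    The WEPL extraction step can be sketched as follows: the water level at which the transmitted dose crosses 80% of its plateau is located by interpolation between frames and subtracted from the known 80% range of the SOBP. The dose curve below is synthetic and the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic dose-extinction curve: transmitted dose vs. water level above the sample
# (in the experiment this comes from the Matrixx frames recorded while the pump runs).
sobp_range_80_cm = 12.0                         # known 80% range of the SOBP
true_wepl_cm = 3.7                              # sample WEPL we try to recover
edge = sobp_range_80_cm - true_wepl_cm          # water level where dose reaches 80%

water_level_cm = np.linspace(10.0, 2.0, 200)    # water level decreases during the run
center = edge + 0.1 * np.log(4.0)               # sigmoid centre placing 80% at `edge`
dose = 1.0 / (1.0 + np.exp((water_level_cm - center) / 0.1))
dose += 0.005 * rng.standard_normal(dose.size)

plateau = dose[water_level_cm < edge - 1.0].mean()
target = 0.8 * plateau
i = int(np.argmax(dose >= target))              # first frame reaching 80% of plateau
f = (target - dose[i - 1]) / (dose[i] - dose[i - 1])
level_at_80 = water_level_cm[i - 1] + f * (water_level_cm[i] - water_level_cm[i - 1])

wepl = sobp_range_80_cm - level_at_80
print(f"recovered WEPL ~ {wepl:.2f} cm (true value {true_wepl_cm} cm)")
```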

  11. Hydrology and Water Quality near Bromide Pavilion in Chickasaw National Recreation Area, Murray County, Oklahoma, 2000

    USGS Publications Warehouse

    Andrews, William J.; Burrough, Steven P.

    2002-01-01

    The Bromide Pavilion in Chickasaw National Recreation Area drew many thousands of people annually to drink the mineral-rich waters piped from nearby Bromide and Medicine Springs. Periodic detection of fecal coliform bacteria in water piped to the pavilion from the springs, low yields of the springs, or flooding by adjacent Rock Creek prompted National Park Service officials to discontinue piping of the springs to the pavilion in the 1970s. Park officials would like to resume piping mineralized spring water to the pavilion to restore it as a visitor attraction, but they are concerned about the ability of the springs to provide sufficient quantities of potable water. Pumping and sampling of Bromide and Medicine Springs and Rock Creek six times during 2000 indicate that these springs may not provide sufficient water for Bromide Pavilion to supply large numbers of visitors. A potential problem with piping water from Medicine Spring is the presence of an undercut, overhanging cliff composed of conglomerate, which may collapse. Evidence of intermittent inundation of the springs by Rock Creek and seepage of surface water into the spring vaults from the adjoining creek pose a threat of contamination of the springs. Escherichia coli, fecal coliform, and fecal streptococcal bacteria were detected in some samples from the springs, indicating possible fecal contamination. Cysts of Giardia lamblia and oocysts of Cryptosporidium parvum protozoa were not detected in the creek or the springs. Total culturable enteric viruses were detected in only one water sample taken from Rock Creek.

  12. A molecular simulation protocol to avoid sampling redundancy and discover new states.

    PubMed

    Bacci, Marco; Vitalis, Andreas; Caflisch, Amedeo

    2015-05-01

    For biomacromolecules or their assemblies, experimental knowledge is often restricted to specific states. Ambiguity pervades simulations of these complex systems because there is no prior knowledge of relevant phase space domains, and sampling recurrence is difficult to achieve. In molecular dynamics methods, ruggedness of the free energy surface exacerbates this problem by slowing down the unbiased exploration of phase space. Sampling is inefficient if dwell times in metastable states are large. We suggest a heuristic algorithm to terminate and reseed trajectories run in multiple copies in parallel. It uses a recent method to order snapshots, which provides notions of "interesting" and "unique" for individual simulations. We define criteria to guide the reseeding of runs from more "interesting" points if they sample overlapping regions of phase space. Using a pedagogical example and an α-helical peptide, the approach is demonstrated to amplify the rate of exploration of phase space and to discover metastable states not found by conventional sampling schemes. Evidence is provided that accurate kinetics and pathways can be extracted from the simulations. The method, termed PIGS for Progress Index Guided Sampling, proceeds in unsupervised fashion, is scalable, and benefits synergistically from larger numbers of replicas. Results confirm that the underlying ideas are appropriate and sufficient to enhance sampling. In molecular simulations, errors caused by not exploring relevant domains in phase space are always unquantifiable and can be arbitrarily large. Our protocol adds to the toolkit available to researchers in reducing these types of errors. This article is part of a Special Issue entitled "Recent developments of molecular dynamics". Copyright © 2014 Elsevier B.V. All rights reserved.
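
    A generic illustration of the terminate-and-reseed idea (not the actual progress-index machinery of PIGS) is sketched below: replicas whose current samples overlap with another replica are restarted from the most "unique" snapshot collected so far, with uniqueness scored here simply as the distance to the nearest other snapshot. The one-dimensional toy dynamics and the overlap cutoff are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

def propagate(x, n_steps=50):
    """Placeholder for a short MD/Monte Carlo segment: noisy drift on a rugged 1D landscape."""
    for _ in range(n_steps):
        x = x + 0.05 * rng.standard_normal() - 0.02 * np.sin(4 * x)
    return x

n_replicas, n_cycles, overlap_cutoff = 8, 40, 0.15
state = rng.standard_normal(n_replicas) * 0.1         # all replicas start near 0
archive = list(state)                                  # all snapshots seen so far

for _ in range(n_cycles):
    state = np.array([propagate(x) for x in state])
    archive.extend(state)
    arch = np.array(archive)
    # "Uniqueness" score of each archived snapshot: distance to its nearest neighbour.
    d = np.abs(arch[:, None] - arch[None, :])
    np.fill_diagonal(d, np.inf)
    ranked = arch[np.argsort(d.min(axis=1))[::-1]]     # most unique snapshots first
    # Terminate replicas overlapping with another replica and reseed them from
    # unique snapshots, one snapshot per terminated replica.
    k = 0
    for i in range(n_replicas):
        others = np.delete(state, i)
        if np.min(np.abs(others - state[i])) < overlap_cutoff:
            state[i] = ranked[k]
            k += 1

print(f"explored range after reseeding: [{min(archive):.2f}, {max(archive):.2f}]")
```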

  13. Clustering on very small scales from a large sample of confirmed quasar pairs: does quasar clustering track from Mpc to kpc scales?

    NASA Astrophysics Data System (ADS)

    Eftekharzadeh, S.; Myers, A. D.; Hennawi, J. F.; Djorgovski, S. G.; Richards, G. T.; Mahabal, A. A.; Graham, M. J.

    2017-06-01

    We present the most precise estimate to date of the clustering of quasars on very small scales, based on a sample of 47 binary quasars with magnitudes of g < 20.85 and proper transverse separations of ˜25 h⁻¹ kpc. Our sample of binary quasars, which is about six times larger than any previous spectroscopically confirmed sample on these scales, is targeted using a kernel density estimation (KDE) technique applied to Sloan Digital Sky Survey (SDSS) imaging over most of the SDSS area. Our sample is 'complete' in that all of the KDE target pairs with 17.0 ≲ R ≲ 36.2 h⁻¹ kpc in our area of interest have been spectroscopically confirmed from a combination of previous surveys and our own long-slit observational campaign. We catalogue 230 candidate quasar pairs with angular separations of <8 arcsec, from which our binary quasars were identified. We determine the projected correlation function of quasars (W̄_p) in four bins of proper transverse scale over the range 17.0 ≲ R ≲ 36.2 h⁻¹ kpc. The implied small-scale quasar clustering amplitude from the projected correlation function, integrated across our entire redshift range, is A = 24.1 ± 3.6 at ˜26.6 h⁻¹ kpc. Our sample is the first spectroscopically confirmed sample of quasar pairs that is sufficiently large to study how quasar clustering evolves with redshift at ˜25 h⁻¹ kpc. We find that empirical descriptions of how quasar clustering evolves with redshift at ˜25 h⁻¹ Mpc also adequately describe the evolution of quasar clustering at ˜25 h⁻¹ kpc.

  14. Rare Event Simulation for T-cell Activation

    NASA Astrophysics Data System (ADS)

    Lipsmeier, Florian; Baake, Ellen

    2009-02-01

    The problem of statistical recognition is considered, as it arises in immunobiology, namely, the discrimination of foreign antigens against a background of the body's own molecules. The precise mechanism of this foreign-self-distinction, though one of the major tasks of the immune system, continues to be a fundamental puzzle. Recent progress has been made by van den Berg, Rand, and Burroughs (J. Theor. Biol. 209:465-486, 2001), who modelled the probabilistic nature of the interaction between the relevant cell types, namely, T-cells and antigen-presenting cells (APCs). Here, the stochasticity is due to the random sample of antigens present on the surface of every APC, and to the random receptor type that characterises individual T-cells. It has been shown previously (van den Berg et al. in J. Theor. Biol. 209:465-486, 2001; Zint et al. in J. Math. Biol. 57:841-861, 2008) that this model, though highly idealised, is capable of reproducing important aspects of the recognition phenomenon, and of explaining them on the basis of stochastic rare events. These results were obtained with the help of a refined large deviation theorem and were thus asymptotic in nature. Simulations have, so far, been restricted to the straightforward simple sampling approach, which does not allow for sample sizes large enough to address more detailed questions. Building on the available large deviation results, we develop an importance sampling technique that allows for a convenient exploration of the relevant tail events by means of simulation. With its help, we investigate the mechanism of statistical recognition in some depth. In particular, we illustrate how a foreign antigen can stand out against the self background if it is present in sufficiently many copies, although no a priori difference between self and nonself is built into the model.
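
    The benefit of importance sampling for such tail events can be seen in a minimal textbook example: estimating the probability that the mean of n standard normal variables exceeds a threshold by exponential tilting. This generic construction is only an illustration of the principle, not the T-cell model of the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n, threshold = 50, 0.8          # P(mean of n standard normals > threshold) is tiny
n_sim = 100_000

# Simple sampling: essentially no hits, so the estimate is useless at this sample size.
means = rng.standard_normal((n_sim, n)).mean(axis=1)
p_naive = np.mean(means > threshold)

# Importance sampling: tilt each summand to mean `threshold` (the dominating point)
# and reweight by the likelihood ratio of the untilted to the tilted density.
x = rng.standard_normal((n_sim, n)) + threshold
log_w = -threshold * x.sum(axis=1) + 0.5 * n * threshold**2   # prod of N(0,1)/N(threshold,1)
p_is = np.mean(np.exp(log_w) * (x.mean(axis=1) > threshold))

# Exact value for comparison: the sample mean is N(0, 1/n).
p_exact = norm.sf(threshold * np.sqrt(n))

print(f"simple sampling:     {p_naive:.2e}")
print(f"importance sampling: {p_is:.2e}")
print(f"exact:               {p_exact:.2e}")
```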

  15. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
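
    The precision argument can be made concrete by treating a retest reliability as a Pearson correlation and computing Fisher-z confidence intervals at the sample sizes reported above; the observed reliability value of 0.80 is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def fisher_ci(r, n, conf=0.95):
    """Confidence interval for a correlation-type reliability via the Fisher z transform."""
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    lo, hi = z + np.array([-1, 1]) * norm.ppf(0.5 + conf / 2) * se
    return np.tanh(lo), np.tanh(hi)

r_obs = 0.80                                  # hypothetical observed retest reliability
for n in (36, 64, 90, 182, 260, 400):         # sample sizes in the range reported
    lo, hi = fisher_ci(r_obs, n)
    print(f"N = {n:4d}:  95% CI = ({lo:.2f}, {hi:.2f}),  width = {hi - lo:.2f}")
# With N around 36-64 (the median sizes found for interjudge and retest studies),
# the interval is wide; several hundred participants are needed for a precise estimate.
```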

  16. Quantifying the uncertainty in heritability

    PubMed Central

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-01-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large. PMID:24670270

  17. Study on processing parameters of glass cutting by nanosecond 532 nm fiber laser

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Gao, Fan; Xiong, Baoxing; Zhang, Xiang; Yuan, Xiao

    2018-03-01

    The processing parameters of soda-lime glass cutting with a several-nanosecond 532 nm pulsed fiber laser are studied in order to obtain a sufficiently large ablation rate and better processing quality. The influences of laser processing parameters on the effective cutting speed and cutting quality of 1-2 mm thick soda-lime glass are studied. The experimental results show that larger laser pulse energy leads to higher effective cutting speed and larger maximum edge collapse on the front side of the glass samples. Compared with the 1.1 mm thick glass samples, the 2.0 mm thick glass samples are more difficult to cut. With a pulse energy of 51.2 μJ, the maximum edge collapse is more than 200 μm for the 2.0 mm thick glass samples. In order to achieve high effective cutting speed and good cutting quality at the same time, the dual-energy overlapping method is used to obtain better cutting performance for the 2.0 mm thick glass samples, and a cutting speed of 194 mm/s and a maximum edge collapse of less than 132 μm are realized.

  18. MRSR: Rationale for a Mars Rover/Sample Return mission

    NASA Technical Reports Server (NTRS)

    Carr, Michael H.

    1992-01-01

    The Solar System Exploration Committee of the NASA Advisory Council has recommended that a Mars Rover/Sample Return mission be launched before the year 2000. The recommendation is consistent with the science objectives as outlined by the National Academy of Sciences committees on Planetary and Lunar Exploration, and Planetary Biology and Chemical Evolution. Interest has also focused on Mars Rover/Sample Return (MRSR) missions, because of their crucial role as precursors for human exploration. As a result of this consensus among the advisory groups, a study of an MRSR mission began early in 1987. The study has the following goals: (1) to assess the technical feasibility of the mission; (2) to converge on two or three options for the general architecture of the mission; (3) to determine what new technologies need to be developed in order to implement the mission; (4) to define the different options sufficiently well that preliminary cost estimates can be made; and (5) to better define the science requirements. This chapter briefly describes Mars Rover/Sample Return missions that were examined in the late 1980s. These missions generally include a large (1000 kg) rover and return of over 5 kg of sample.

  19. Comparing the Ability of Enhanced Sampling Molecular Dynamics Methods To Reproduce the Behavior of Fluorescent Labels on Proteins.

    PubMed

    Walczewska-Szewc, Katarzyna; Deplazes, Evelyne; Corry, Ben

    2015-07-14

    Adequately sampling the large number of conformations accessible to proteins and other macromolecules is one of the central challenges in molecular dynamics (MD) simulations; this activity can be difficult, even for relatively simple systems. An example where this problem arises is in the simulation of dye-labeled proteins, which are now being widely used in the design and interpretation of Förster resonance energy transfer (FRET) experiments. In this study, MD simulations are used to characterize the motion of two commonly used FRET dyes attached to an immobilized chain of polyproline. Even in this simple system, the dyes exhibit complex behavior that is a mixture of fast and slow motions. Consequently, very long MD simulations are required to sufficiently sample the entire range of dye motion. Here, we compare the ability of enhanced sampling methods to reproduce the behavior of fluorescent labels on proteins. In particular, we compared Accelerated Molecular Dynamics (AMD), metadynamics, Replica Exchange Molecular Dynamics (REMD), and High Temperature Molecular Dynamics (HTMD) to equilibrium MD simulations. We find that, in our system, all of these methods improve the sampling of the dye motion, but the most significant improvement is achieved using REMD.

  20. Method for Measuring Thermal Conductivity of Small Samples Having Very Low Thermal Conductivity

    NASA Technical Reports Server (NTRS)

    Miller, Robert A.; Kuczmarski, Maria A.

    2009-01-01

    This paper describes the development of a hot plate method capable of using air as a standard reference material for the steady-state measurement of the thermal conductivity of very small test samples having thermal conductivity on the order of air. As with other approaches, care is taken to ensure that the heat flow through the test sample is essentially one-dimensional. However, unlike other approaches, no attempt is made to use heated guards to block the flow of heat from the hot plate to the surroundings. It is argued that since large correction factors must be applied to account for guard imperfections when sample dimensions are small, it may be preferable to simply measure and correct for the heat that flows from the heater disc to directions other than into the sample. Experimental measurements taken in a prototype apparatus, combined with extensive computational modeling of the heat transfer in the apparatus, show that sufficiently accurate measurements can be obtained to allow determination of the thermal conductivity of low thermal conductivity materials. Suggestions are made for further improvements in the method based on results from regression analyses of the generated data.
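
    The steady-state reduction described above amounts to Fourier's law applied to the heat that actually flows through the sample, i.e. the heater power minus the separately measured or modelled parasitic losses. A toy calculation with invented numbers is shown below.

```python
# Steady-state thermal conductivity from a small hot-plate measurement (toy numbers).
# Fourier's law for 1-D conduction: Q_sample = k * A * dT / t, so k = Q_sample * t / (A * dT).

q_heater_w = 0.050          # total electrical power supplied to the heater disc (W)
q_parasitic_w = 0.032       # heat escaping to directions other than the sample,
                            # measured/modelled separately instead of being guarded away (W)
thickness_m = 2.0e-3        # sample thickness t (m)
area_m2 = 3.0e-4            # sample cross-sectional area A (m^2)
delta_t_k = 4.0             # temperature drop across the sample dT (K)

q_sample_w = q_heater_w - q_parasitic_w
k = q_sample_w * thickness_m / (area_m2 * delta_t_k)
print(f"thermal conductivity k ~ {k:.3f} W/(m*K)")   # ~0.03, on the order of still air
```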

  1. Complex disease and phenotype mapping in the domestic dog

    PubMed Central

    Hayward, Jessica J.; Castelhano, Marta G.; Oliveira, Kyle C.; Corey, Elizabeth; Balkman, Cheryl; Baxter, Tara L.; Casal, Margret L.; Center, Sharon A.; Fang, Meiying; Garrison, Susan J.; Kalla, Sara E.; Korniliev, Pavel; Kotlikoff, Michael I.; Moise, N. S.; Shannon, Laura M.; Simpson, Kenneth W.; Sutter, Nathan B.; Todhunter, Rory J.; Boyko, Adam R.

    2016-01-01

    The domestic dog is becoming an increasingly valuable model species in medical genetics, showing particular promise to advance our understanding of cancer and orthopaedic disease. Here we undertake the largest canine genome-wide association study to date, with a panel of over 4,200 dogs genotyped at 180,000 markers, to accelerate mapping efforts. For complex diseases, we identify loci significantly associated with hip dysplasia, elbow dysplasia, idiopathic epilepsy, lymphoma, mast cell tumour and granulomatous colitis; for morphological traits, we report three novel quantitative trait loci that influence body size and one that influences fur length and shedding. Using simulation studies, we show that modestly larger sample sizes and denser marker sets will be sufficient to identify most moderate- to large-effect complex disease loci. This proposed design will enable efficient mapping of canine complex diseases, most of which have human homologues, using far fewer samples than required in human studies. PMID:26795439
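
    A back-of-the-envelope power sketch of the kind used in such design studies is shown below: for a 1-df association test of a variant explaining a fraction q of the trait variance in N dogs, the test statistic is approximately noncentral chi-square with noncentrality close to N*q. The significance threshold and effect sizes are assumptions for illustration.

```python
from scipy.stats import chi2, ncx2

alpha = 5e-8                                   # assumed genome-wide significance threshold
crit = chi2.ppf(1 - alpha, 1)

def power(n, var_explained):
    """Approximate power of a 1-df association test for a variant explaining
    `var_explained` of the phenotypic variance in a sample of size n
    (noncentrality ~ n * var_explained for small effects)."""
    return ncx2.sf(crit, 1, n * var_explained)

for q in (0.005, 0.01, 0.02, 0.05):            # modest to large per-locus effects
    row = [f"N={n}: {power(n, q):.2f}" for n in (2000, 4000, 8000)]
    print(f"variance explained {q:.1%}:  " + "   ".join(row))
# Moderate increases in sample size mainly rescue the smaller-effect loci, which is
# the sense in which modestly larger samples suffice for moderate-to-large effects.
```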

  2. X-Ray diffraction on large single crystals using a powder diffractometer

    DOE PAGES

    Jesche, A.; Fix, M.; Kreyssig, A.; ...

    2016-06-16

    Information on the lattice parameter of single crystals with known crystallographic structure allows for estimations of sample quality and composition. In many cases it is sufficient to determine one lattice parameter or the lattice spacing along a certain, high-symmetry direction, e.g. in order to determine the composition in a substitution series by taking advantage of Vegard's rule. Here we present a guide to accurate measurements of single crystals with dimensions ranging from 200 μm up to several millimeters using a standard powder diffractometer in Bragg-Brentano geometry. The correction of the error introduced by the sample height and the optimization of the alignment are discussed in detail. Finally, in particular for single crystals with a plate-like habit, the described procedure allows for measurement of the lattice spacings normal to the plates with high accuracy on a timescale of minutes.
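
    The sample-height correction mentioned above can be sketched with the standard specimen-displacement formula for Bragg-Brentano geometry, Δ(2θ) ≈ -2·s·cosθ/R in radians, followed by Bragg's law. The numbers below are illustrative, and the sign of the correction depends on whether the crystal surface sits above or below the goniometer axis.

```python
import numpy as np

wavelength_A = 1.5406            # Cu K-alpha1
goniometer_radius_mm = 240.0     # assumed goniometer radius R
displacement_mm = 0.20           # sample surface sits 0.2 mm off the goniometer axis

two_theta_obs_deg = 69.05        # observed position of a high-angle reflection (e.g. Si 400)

# Specimen-displacement correction: delta(2theta) ~ -2*s*cos(theta)/R (in radians).
theta = np.radians(two_theta_obs_deg / 2)
delta_two_theta = -2 * displacement_mm * np.cos(theta) / goniometer_radius_mm
two_theta_corr_deg = two_theta_obs_deg - np.degrees(delta_two_theta)

# Bragg's law: d = lambda / (2 sin(theta)).
d_obs = wavelength_A / (2 * np.sin(theta))
d_corr = wavelength_A / (2 * np.sin(np.radians(two_theta_corr_deg / 2)))
print(f"uncorrected d = {d_obs:.5f} A, corrected d = {d_corr:.5f} A")
# For a cubic crystal the lattice parameter follows as a = d * sqrt(h^2 + k^2 + l^2).
```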

  3. Parameter recovery, bias and standard errors in the linear ballistic accumulator model.

    PubMed

    Visser, Ingmar; Poessé, Rens

    2017-05-01

    The linear ballistic accumulator (LBA) model (Brown & Heathcote, , Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation which is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., , Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.
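
    The Hessian-based route to standard errors can be illustrated generically: minimize a negative log-likelihood, numerically differentiate it at the optimum, and take the square roots of the diagonal of the inverse observed information. The sketch below uses a simple normal model rather than the LBA likelihood itself, which is more involved.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
data = rng.normal(loc=0.4, scale=1.3, size=500)

def nll(params):
    """Negative log-likelihood of a normal model with parameters (mu, log sigma)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (data - mu) ** 2 / sigma**2)

fit = minimize(nll, x0=np.array([0.0, 0.0]), method="BFGS")

def numerical_hessian(f, x, eps=1e-4):
    """Central-difference Hessian of a scalar function f at x."""
    k = len(x)
    h = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            e_i, e_j = np.eye(k)[i] * eps, np.eye(k)[j] * eps
            h[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps**2)
    return h

hess = numerical_hessian(nll, fit.x)
se = np.sqrt(np.diag(np.linalg.inv(hess)))      # observed-information standard errors
print("estimates: ", fit.x.round(3))
print("std errors:", se.round(3))
```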

  4. Medicine, material science and security: the versatility of the coded-aperture approach.

    PubMed

    Munro, P R T; Endrizzi, M; Diemoz, P C; Hagen, C K; Szafraniec, M B; Millard, T P; Zapata, C E; Speller, R D; Olivo, A

    2014-03-06

    The principal limitation to the widespread deployment of X-ray phase imaging in a variety of applications is probably versatility. A versatile X-ray phase imaging system must be able to work with polychromatic and non-microfocus sources (for example, those currently used in medical and industrial applications), have physical dimensions sufficiently large to accommodate samples of interest, be insensitive to environmental disturbances (such as vibrations and temperature variations), require only simple system set-up and maintenance, and be able to perform quantitative imaging. The coded-aperture technique, based upon the edge illumination principle, satisfies each of these criteria. To date, we have applied the technique to mammography, materials science, small-animal imaging, non-destructive testing and security. In this paper, we outline the theory of coded-aperture phase imaging and show an example of how the technique may be applied to imaging samples with a practically important scale.

  5. Intercomparison of fog water samplers

    NASA Astrophysics Data System (ADS)

    Schell, Dieter; Georgii, Hans-Walter; Maser, Rolf; Jaeschke, Wolfgang; Arends, Beate G.; Kos, Gerard P. A.; Winkler, Peter; Schneider, Thomas; Berner, Axel; Kruisz, Christian

    1992-11-01

    During the Po Valley Fog Experiment 1989, two fogwater collectors were operated simultaneously at the ground and the results were compared to each other. The chemical analyses of the samples as well as the collection efficiencies showed remarkable differences between the two collectors. Some differences in the solute concentrations in the samples of both collectors could be expected due to small differences in the 50-percent cut-off diameters. The large differences in the collection efficiencies, however, cannot be explained by these small variations of d₅₀, because normally only a small fraction of the water mass is concentrated in the size range of 5-7-micron droplets. It is shown that it is not sufficient to characterize a fogwater collector only by its cut-off diameter. The results of several wind tunnel calibration tests show that the collection efficiencies of the fogwater collectors are a function of windspeed and of the shape of the droplet spectra.

  6. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    USGS Publications Warehouse

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Olin E.; Irwin, Brian J.; Beasley, James

    2016-01-01

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.
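
    A minimal sketch of the detectability analysis (logistic regression of scat detection on pellet size, pellet count, ground cover and recent rain) is given below; the simulated data frame and the directions of the effects are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 800
scats = pd.DataFrame({
    "pellet_size_mm": rng.uniform(10, 40, n),
    "n_pellets":      rng.integers(1, 30, n),
    "ground_cover":   rng.uniform(0, 100, n),   # percent vegetative ground cover
    "recent_rain":    rng.integers(0, 2, n),    # 1 = rain shortly before sampling
})
# Hypothetical detection model echoing plausible directions of effect.
logit = (-2.0 + 0.08 * scats.pellet_size_mm + 0.05 * scats.n_pellets
         - 0.02 * scats.ground_cover - 0.6 * scats.recent_rain)
scats["detected"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

fit = smf.logit("detected ~ pellet_size_mm + n_pellets + ground_cover + recent_rain",
                data=scats).fit(disp=0)
print(pd.DataFrame({"coef": fit.params, "p": fit.pvalues}).round(3))
# Significant positive coefficients for pellet size/count and negative coefficients
# for cover and rain would mirror the pattern of predictors reported above.
```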

  7. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  8. Optimization of Scat Detection Methods for a Social Ungulate, the Wild Pig, and Experimental Evaluation of Factors Affecting Detection of Scat.

    PubMed

    Keiter, David A; Cunningham, Fred L; Rhodes, Olin E; Irwin, Brian J; Beasley, James C

    2016-01-01

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  9. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE PAGES

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.; ...

    2016-05-25

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  10. Oblique nonlinear whistler wave

    NASA Astrophysics Data System (ADS)

    Yoon, Peter H.; Pandey, Vinay S.; Lee, Dong-Hun

    2014-03-01

    Motivated by satellite observation of large-amplitude whistler waves propagating in oblique directions with respect to the ambient magnetic field, a recent letter discusses the physics of large-amplitude whistler waves and relativistic electron acceleration. One of the conclusions of that letter is that oblique whistler waves will eventually undergo nonlinear steepening regardless of the amplitude. The present paper reexamines this claim and finds that the steepening associated with the density perturbation almost never occurs, unless whistler waves have sufficiently high amplitude and propagate sufficiently close to the resonance cone angle.

  11. High resolution anatomical and quantitative MRI of the entire human occipital lobe ex vivo at 9.4T.

    PubMed

    Sengupta, S; Fritz, F J; Harms, R L; Hildebrand, S; Tse, D H Y; Poser, B A; Goebel, R; Roebroeck, A

    2018-03-01

    Several magnetic resonance imaging (MRI) contrasts are sensitive to myelin content in gray matter in vivo, which has ignited ambitions of MRI-based in vivo cortical histology. Ultra-high field (UHF) MRI, at fields of 7T and beyond, is crucial to provide the resolution and contrast needed to sample contrasts over the depth of the cortex and get closer to layer-resolved imaging. Ex vivo MRI of human post mortem samples is an important stepping stone to investigate MRI contrast in the cortex, validate it against histology techniques applied in situ to the same tissue, and investigate the resolutions needed to translate ex vivo findings to in vivo UHF MRI. Here, we investigate key technology to extend such UHF studies to large human brain samples while maintaining high resolution, which allows investigation of the layered architecture of several cortical areas over their entire 3D extent and their complete borders where architecture changes. A 16 channel cylindrical phased array radiofrequency (RF) receive coil was constructed to image a large post mortem occipital lobe sample (~80 × 80 × 80 mm³) in a wide-bore 9.4T human scanner with the aim of achieving high-resolution anatomical and quantitative MR images. Compared with a human head coil at 9.4T, the maximum Signal-to-Noise ratio (SNR) was increased by a factor of about five in the peripheral cortex. Although the transmit profile with a circularly polarized transmit mode at 9.4T is relatively inhomogeneous over the large sample, this challenge was successfully resolved with parallel transmit using the kT-points method. Using this setup, we achieved 60 μm anatomical images for the entire occipital lobe showing increased spatial definition of cortical details compared to lower resolutions. In addition, we were able to achieve sufficient control over SNR, B0 and B1 homogeneity and multi-contrast sampling to perform quantitative T2* mapping over the same volume at 200 μm. Markov Chain Monte Carlo sampling provided maximum posterior estimates of quantitative T2* and their uncertainty, allowing delineation of the stria of Gennari over the entire length and width of the calcarine sulcus. We discuss how custom RF receive coil arrays built to specific large post mortem sample sizes can provide a platform for UHF cortical layer-specific quantitative MRI over large fields of view. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
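
    As an illustration of the quantitative mapping step, the sketch below fits a mono-exponential T2* decay to synthetic multi-echo signals with a simple least-squares routine; the paper itself uses Markov Chain Monte Carlo sampling for posterior estimates, so this is only a simplified stand-in with made-up echo times and noise.

      # Minimal sketch of voxel-wise T2* estimation from multi-echo data.
      # Echo times, signal level, and noise are synthetic; the study used MCMC
      # posterior estimation rather than this least-squares fit.
      import numpy as np
      from scipy.optimize import curve_fit

      def mono_exp(te_ms, s0, t2star_ms):
          return s0 * np.exp(-te_ms / t2star_ms)

      te_ms = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 40.0])       # echo times (ms)
      true_s0, true_t2star = 1000.0, 22.0
      signal = mono_exp(te_ms, true_s0, true_t2star)
      signal += np.random.default_rng(0).normal(0, 5.0, te_ms.size)  # measurement noise

      popt, pcov = curve_fit(mono_exp, te_ms, signal, p0=(signal[0], 20.0))
      s0_hat, t2star_hat = popt
      t2star_sd = np.sqrt(np.diag(pcov))[1]    # rough uncertainty, cf. MCMC posteriors
      print(f"T2* = {t2star_hat:.1f} ms (+/- {t2star_sd:.1f} ms)")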

  12. Searching mixed DNA profiles directly against profile databases.

    PubMed

    Bright, Jo-Anne; Taylor, Duncan; Curran, James; Buckleton, John

    2014-03-01

    DNA databases have revolutionised forensic science. They are a powerful investigative tool as they have the potential to identify persons of interest in criminal investigations. Routinely, a DNA profile generated from a crime sample could only be searched for in a database of individuals if the stain was from a single contributor (single source) or if a contributor could unambiguously be determined from a mixed DNA profile. This meant that a significant number of samples were unsuitable for database searching. The advent of continuous methods for the interpretation of DNA profiles offers an advanced way to draw inferential power from the considerable investment made in DNA databases. Using these methods, each profile on the database may be considered a possible contributor to a mixture and a likelihood ratio (LR) can be formed. Those profiles which produce a sufficiently large LR can serve as an investigative lead. In this paper empirical studies are described to determine what constitutes a large LR. We investigate the effect on a database search of complex mixed DNA profiles with contributors in equal proportions with dropout as a consideration, and also the effect of an incorrect assignment of the number of contributors to a profile. In addition, we give, as a demonstration of the method, the results using two crime samples that were previously unsuitable for database comparison. We show that effective management of the selection of samples for searching and the interpretation of the output can be highly informative. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
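
    The database-screening logic described above can be sketched as follows; the likelihood-ratio function here is a deliberately crude placeholder (real casework relies on a continuous probabilistic genotyping model), and the profiles and threshold are invented for illustration.

      # Minimal sketch of the screening logic: form an LR for every database
      # profile against the crime-scene mixture, keep those above a threshold.
      # The toy_lr function is a placeholder, not a probabilistic genotyping model.
      from typing import Callable, Dict, List, Tuple

      Profile = Dict[str, Tuple[str, str]]   # locus -> pair of alleles

      def screen_database(mixture_lr: Callable[[Profile], float],
                          database: Dict[str, Profile],
                          threshold: float) -> List[Tuple[str, float]]:
          """Return (profile_id, LR) for database profiles exceeding the threshold."""
          hits = [(pid, mixture_lr(prof)) for pid, prof in database.items()]
          return sorted([h for h in hits if h[1] >= threshold],
                        key=lambda h: h[1], reverse=True)

      def toy_lr(profile: Profile) -> float:
          # Placeholder: count loci whose alleles are all present in a hard-coded mixture.
          mixture_alleles = {"D3S1358": {"15", "16", "17"}, "vWA": {"14", "18"}}
          shared = sum(locus in profile and set(profile[locus]) <= alleles
                       for locus, alleles in mixture_alleles.items())
          return 10.0 ** shared

      db = {"P001": {"D3S1358": ("15", "16"), "vWA": ("14", "18")},
            "P002": {"D3S1358": ("12", "13"), "vWA": ("11", "19")}}
      print(screen_database(toy_lr, db, threshold=100.0))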

  13. Generation Scotland: Donor DNA Databank; A control DNA resource.

    PubMed

    Kerr, Shona M; Liewald, David C M; Campbell, Archie; Taylor, Kerrie; Wild, Sarah H; Newby, David; Turner, Marc; Porteous, David J

    2010-11-23

    Many medical disorders of public health importance are complex diseases caused by multiple genetic, environmental and lifestyle factors. Recent technological advances have made it possible to analyse the genetic variants that predispose to complex diseases. Reliable detection of these variants requires genome-wide association studies in sufficiently large numbers of cases and controls. This approach is often hampered by difficulties in collecting appropriate control samples. The Generation Scotland: Donor DNA Databank (GS:3D) aims to help solve this problem by providing a resource of control DNA and plasma samples accessible for research. GS:3D participants were recruited from volunteer blood donors attending Scottish National Blood Transfusion Service (SNBTS) clinics across Scotland. All participants gave full written consent for GS:3D to take spare blood from their normal donation. Participants also supplied demographic data by completing a short questionnaire. Over five thousand complete sets of samples, data and consent forms were collected. DNA and plasma were extracted and stored. The data and samples were unlinked from their original SNBTS identifier number. The plasma, DNA and demographic data are available for research. New data obtained from analysis of the resource will be fed back to GS:3D and will be made available to other researchers as appropriate. Recruitment of blood donors is an efficient and cost-effective way of collecting thousands of control samples. Because the collection is large, subsets of controls can be selected, based on age range, gender, and ethnic or geographic origin. The GS:3D resource should reduce time and expense for investigators who would otherwise have had to recruit their own controls.

  14. High-frequency signal and noise estimates of CSR GRACE RL04

    NASA Astrophysics Data System (ADS)

    Bonin, Jennifer A.; Bettadpur, Srinivas; Tapley, Byron D.

    2012-12-01

    A sliding window technique is used to create daily-sampled Gravity Recovery and Climate Experiment (GRACE) solutions with the same background processing as the official CSR RL04 monthly series. By estimating over shorter time spans, more frequent solutions are made using uncorrelated data, allowing for higher frequency resolution in addition to daily sampling. Using these data sets, high-frequency GRACE errors are computed using two different techniques: assuming the GRACE high-frequency signal in a quiet area of the ocean is the true error, and computing the variance of differences between multiple high-frequency GRACE series from different centers. While the signal-to-noise ratios prove to be sufficiently high for confidence at annual and lower frequencies, at frequencies above 3 cycles/year the signal-to-noise ratios in the large hydrological basins looked at here are near 1.0. Comparisons with the GLDAS hydrological model and high frequency GRACE series developed at other centers confirm CSR GRACE RL04's poor ability to accurately and reliably measure hydrological signal above 3-9 cycles/year, due to the low power of the large-scale hydrological signal typical at those frequencies compared to the GRACE errors.
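
    The first error-estimation technique, treating the high-frequency content of a quiet ocean region as noise, can be illustrated with a short sketch; the daily series below are synthetic stand-ins for GRACE mass-anomaly time series, not real data.

      # Minimal sketch: use a "signal-free" quiet-ocean series as a noise proxy and
      # compare its spectral power against a hydrological basin series by frequency.
      import numpy as np

      fs_per_year = 365.25                       # daily sampling, frequencies in cycles/year
      t_years = np.arange(0, 8, 1 / fs_per_year)
      rng = np.random.default_rng(42)

      basin = (10 * np.sin(2 * np.pi * 1 * t_years)      # annual hydrology signal
               + 2 * np.sin(2 * np.pi * 4 * t_years)     # weak sub-annual signal
               + rng.normal(0, 3, t_years.size))         # noise
      quiet_ocean = rng.normal(0, 3, t_years.size)       # noise proxy

      freqs = np.fft.rfftfreq(t_years.size, d=1 / fs_per_year)   # cycles/year
      psd_basin = np.abs(np.fft.rfft(basin)) ** 2
      psd_noise = np.abs(np.fft.rfft(quiet_ocean)) ** 2

      # Average SNR below and above 3 cycles/year, mirroring the abstract's contrast.
      low = (freqs > 0) & (freqs <= 3)
      high = (freqs > 3) & (freqs <= 9)
      print("SNR <= 3 cpy:", psd_basin[low].mean() / psd_noise[low].mean())
      print("SNR 3-9 cpy:", psd_basin[high].mean() / psd_noise[high].mean())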

  15. Correction of Population Stratification in Large Multi-Ethnic Association Studies

    PubMed Central

    Serre, David; Montpetit, Alexandre; Paré, Guillaume; Engert, James C.; Yusuf, Salim; Keavney, Bernard; Hudson, Thomas J.; Anand, Sonia

    2008-01-01

    Background: The vast majority of genetic risk factors for complex diseases have, taken individually, a small effect on the end phenotype. Population-based association studies therefore need very large sample sizes to detect significant differences between affected and non-affected individuals. Including thousands of affected individuals in a study requires recruitment in numerous centers, possibly from different geographic regions. Unfortunately such a recruitment strategy is likely to complicate the study design and to generate concerns regarding population stratification. Methodology/Principal Findings: We analyzed 9,751 individuals representing three main ethnic groups - Europeans, Arabs and South Asians - that had been enrolled from 154 centers involving 52 countries for a global case/control study of acute myocardial infarction. All individuals were genotyped at 103 candidate genes using 1,536 SNPs selected with a tagging strategy that captures most of the genetic diversity in different populations. We show that relying solely on self-reported ethnicity is not sufficient to exclude population stratification and we present additional methods to identify and correct for stratification. Conclusions/Significance: Our results highlight the importance of carefully addressing population stratification and of carefully “cleaning” the sample prior to analyses to obtain stronger signals of association and to avoid spurious results. PMID:18196181
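
    A common way to detect such stratification is a principal-component decomposition of the genotype matrix, with the leading components then used as covariates in the association model. The sketch below illustrates that generic approach on synthetic genotypes; it is not the specific correction procedure presented in the paper.

      # Minimal sketch of a principal-component check for population stratification
      # on a synthetic (samples x SNPs) genotype matrix with two hidden populations.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      n_per_group, n_snps = 200, 500
      freqs_a = rng.uniform(0.1, 0.9, n_snps)                       # allele frequencies, pop A
      freqs_b = np.clip(freqs_a + rng.normal(0, 0.05, n_snps), 0.05, 0.95)  # shifted, pop B
      geno = np.vstack([rng.binomial(2, freqs_a, (n_per_group, n_snps)),
                        rng.binomial(2, freqs_b, (n_per_group, n_snps))]).astype(float)

      geno -= geno.mean(axis=0)                     # centre each SNP
      pcs = PCA(n_components=2).fit_transform(geno)

      labels = np.array(["A"] * n_per_group + ["B"] * n_per_group)
      for group in ("A", "B"):
          # Separation of the group means along PC1 flags stratification.
          print(group, pcs[labels == group, 0].mean())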

  16. Adding the missing piece: Spitzer imaging of the HSC-Deep/PFS fields

    NASA Astrophysics Data System (ADS)

    Sajina, Anna; Bezanson, Rachel; Capak, Peter; Egami, Eiichi; Fan, Xiaohui; Farrah, Duncan; Greene, Jenny; Goulding, Andy; Lacy, Mark; Lin, Yen-Ting; Liu, Xin; Marchesini, Danilo; Moutard, Thibaud; Ono, Yoshiaki; Ouchi, Masami; Sawicki, Marcin; Strauss, Michael; Surace, Jason; Whitaker, Katherine

    2018-05-01

    We propose to observe a total of 7 sq. deg. to complete the Spitzer-IRAC coverage of the HSC-Deep survey fields. These fields are the sites of the Prime Focus Spectrograph (PFS) galaxy evolution survey, which will provide spectra of wide wavelength range and resolution for almost all M* galaxies at z ~ 0.7-1.7, and extend out to z ~ 7 for targeted samples. Our fields already have deep broadband and narrowband photometry in 12 bands spanning from u through K and a wealth of other ancillary data. We propose completing the matching-depth IRAC observations in the extended COSMOS, ELAIS-N1 and Deep2-3 fields. By complementing existing Spitzer coverage, this program will lead to a dataset with unprecedented spectro-photometric coverage across a total of 15 sq. deg. This dataset will have significant legacy value, as it samples a cosmic volume large enough to be representative of the full range of environments while providing sufficient information content per galaxy to confidently derive stellar population characteristics. This enables detailed studies of the growth and quenching of galaxies and their supermassive black holes in the context of a galaxy's local and large-scale environment.

  17. Free energy barriers to evaporation of water in hydrophobic confinement.

    PubMed

    Sharma, Sumit; Debenedetti, Pablo G

    2012-11-08

    We use umbrella sampling Monte Carlo and forward and reverse forward flux sampling (FFS) simulation techniques to compute the free energy barriers to evaporation of water confined between two hydrophobic surfaces separated by nanoscopic gaps, as a function of the gap width, at 1 bar and 298 K. The evaporation mechanism for small (1 × 1 nm²) surfaces is found to be fundamentally different from that for large (3 × 3 nm²) surfaces. In the latter case, the evaporation proceeds via the formation of a gap-spanning tubular cavity. The 1 × 1 nm² surfaces, in contrast, are too small to accommodate a stable vapor cavity. Accordingly, the associated free energy barriers correspond to the formation of a critical-sized cavity for sufficiently large confining surfaces, and to complete emptying of the gap region for small confining surfaces. The free energy barriers to evaporation were found to be of O(20 kT) for 14 Å gaps, and to increase by approximately 5 kT with every 1 Å increase in the gap width. The entropy contribution to the free energy of evaporation was found to be independent of the gap width.
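
    A small worked example using the quoted numbers: taking a barrier of roughly 20 kT at a 14 Å gap and a rise of roughly 5 kT per additional Å, the sketch below extrapolates linearly and converts the barrier to a relative rate via an Arrhenius-type factor. Both the linear extrapolation and the rate conversion are illustrative assumptions, not results from the paper.

      # Worked sketch using the numbers quoted in the abstract: ~20 kT at 14 A,
      # rising ~5 kT per additional angstrom of gap width (linear extrapolation assumed).
      import math

      def barrier_kT(gap_angstrom: float) -> float:
          """Linear estimate of the evaporation free energy barrier, in units of kT."""
          return 20.0 + 5.0 * (gap_angstrom - 14.0)

      for gap in (14.0, 15.0, 16.0):
          dF = barrier_kT(gap)
          relative_rate = math.exp(-dF)   # Arrhenius-type factor relative to a zero barrier
          print(f"gap {gap:.0f} A: barrier ~{dF:.0f} kT, "
                f"relative evaporation rate ~{relative_rate:.1e}")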

  18. TESTING HOMOGENEITY WITH GALAXY STAR FORMATION HISTORIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyle, Ben; Jimenez, Raul; Tojeiro, Rita

    2013-01-01

    Observationally confirming spatial homogeneity on sufficiently large cosmological scales is of importance to test one of the underpinning assumptions of cosmology, and is also imperative for correctly interpreting dark energy. A challenging aspect of this is that homogeneity must be probed inside our past light cone, while observations take place on the light cone. The star formation history (SFH) in the galaxy fossil record provides a novel way to do this. We calculate the SFH of stacked luminous red galaxy (LRG) spectra obtained from the Sloan Digital Sky Survey. We divide the LRG sample into 12 equal-area contiguous sky patches and 10 redshift slices (0.2 < z < 0.5), which correspond to 120 blocks of volume ≈0.04 Gpc³. Using the SFH in a time period that samples the history of the universe between look-back times 11.5 and 13.4 Gyr as a proxy for homogeneity, we calculate the posterior distribution for the excess large-scale variance due to inhomogeneity, and find that the most likely solution is no extra variance at all. At 95% credibility, there is no evidence of deviations larger than 5.8%.

  19. Attack Detection in Sensor Network Target Localization Systems With Quantized Data

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangfan; Wang, Xiaodong; Blum, Rick S.; Kaplan, Lance M.

    2018-04-01

    We consider a sensor network focused on target localization, where sensors measure the signal strength emitted from the target. Each measurement is quantized to one bit and sent to the fusion center. A general attack is considered at some sensors that attempts to cause the fusion center to produce an inaccurate estimation of the target location with a large mean-square-error. The attack is a combination of man-in-the-middle, hacking, and spoofing attacks that can effectively change both signals going into and coming out of the sensor nodes in a realistic manner. We show that the essential effect of attacks is to alter the estimated distance between the target and each attacked sensor to a different extent, giving rise to a geometric inconsistency among the attacked and unattacked sensors. Hence, with the help of two secure sensors, a class of detectors is proposed to detect the attacked sensors by scrutinizing the existence of the geometric inconsistency. We show that the false alarm and miss probabilities of the proposed detectors decrease exponentially as the number of measurement samples increases, which implies that for a sufficiently large number of samples, the proposed detectors can identify the attacked and unattacked sensors with any required accuracy.

  20. A comparison of cosmological models using time delay lenses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Jun-Jie; Wu, Xue-Feng; Melia, Fulvio, E-mail: jjwei@pmo.ac.cn, E-mail: xfwu@pmo.ac.cn, E-mail: fmelia@email.arizona.edu

    2014-06-20

    The use of time-delay gravitational lenses to examine the cosmological expansion introduces a new standard ruler with which to test theoretical models. The sample suitable for this kind of work now includes 12 lens systems, which have thus far been used solely for optimizing the parameters of ΛCDM. In this paper, we broaden the base of support for this new, important cosmic probe by using these observations to carry out a one-on-one comparison between competing models. The currently available sample indicates a likelihood of ∼70%-80% that the R_h = ct universe is the correct cosmology versus ∼20%-30% for the standard model. This possibly interesting result reinforces the need to greatly expand the sample of time-delay lenses, e.g., with the successful implementation of the Dark Energy Survey, the VST ATLAS survey, and the Large Synoptic Survey Telescope. In anticipation of a greatly expanded catalog of time-delay lenses identified with these surveys, we have produced synthetic samples to estimate how large they would have to be in order to rule out either model at a ∼99.7% confidence level. We find that if the real cosmology is ΛCDM, a sample of ∼150 time-delay lenses would be sufficient to rule out R_h = ct at this level of accuracy, while ∼1000 time-delay lenses would be required to rule out ΛCDM if the real universe is instead R_h = ct. This difference in required sample size reflects the greater number of free parameters available to fit the data with ΛCDM.

  1. Voids and constraints on nonlinear clustering of galaxies

    NASA Technical Reports Server (NTRS)

    Vogeley, Michael S.; Geller, Margaret J.; Park, Changbom; Huchra, John P.

    1994-01-01

    Void statistics of the galaxy distribution in the Center for Astrophysics Redshift Survey provide strong constraints on galaxy clustering in the nonlinear regime, i.e., on scales R equal to or less than 10/h Mpc. Computation of high-order moments of the galaxy distribution requires a sample that (1) densely traces the large-scale structure and (2) covers sufficient volume to obtain good statistics. The CfA redshift survey densely samples structure on scales equal to or less than 10/h Mpc and has sufficient depth and angular coverage to approach a fair sample on these scales. In the nonlinear regime, the void probability function (VPF) for CfA samples exhibits apparent agreement with hierarchical scaling (such scaling implies that the N-point correlation functions for N greater than 2 depend only on pairwise products of the two-point function xi(r)). However, simulations of cosmological models show that this scaling in redshift space does not necessarily imply such scaling in real space, even in the nonlinear regime; peculiar velocities cause distortions which can yield erroneous agreement with hierarchical scaling. The underdensity probability measures the frequency of 'voids' with density rho less than 0.2 times the mean density. This statistic reveals a paucity of very bright galaxies (L greater than L*) in the 'voids.' Underdensities are equal to or greater than 2 sigma more frequent in bright galaxy samples than in samples that include fainter galaxies. Comparison of void statistics of CfA samples with simulations of a range of cosmological models favors models with Gaussian primordial fluctuations and Cold Dark Matter (CDM)-like initial power spectra. Biased models tend to produce voids that are too empty. We also compare these data with three specific models of the Cold Dark Matter cosmogony: an unbiased, open universe CDM model (omega = 0.4, h = 0.5) provides a good match to the VPF of the CfA samples. Biasing of the galaxy distribution in the 'standard' CDM model (omega = 1, b = 1.5; see below for definitions) and the nonzero cosmological constant CDM model (omega = 0.4, h = 0.6, lambda_0 = 0.6, b = 1.3) produce voids that are too empty. All three simulations match the observed VPF and underdensity probability for samples of very bright (M less than M* = -19.2) galaxies, but produce voids that are too empty when compared with samples that include fainter galaxies.
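
    The void probability function itself is straightforward to estimate numerically: drop random spheres of radius R into the survey volume and record the fraction containing no galaxies. The sketch below does this for a synthetic Poisson point set, purely to illustrate the statistic; it is not the survey analysis performed in the paper.

      # Minimal sketch of a Monte Carlo void probability function (VPF) estimate.
      # Galaxy positions are a synthetic Poisson sample in a periodic-free box.
      import numpy as np

      rng = np.random.default_rng(7)
      box = 100.0                                        # box side, arbitrary units
      galaxies = rng.uniform(0, box, size=(5000, 3))     # synthetic galaxy positions

      def void_probability(points: np.ndarray, radius: float, n_trials: int = 2000) -> float:
          centres = rng.uniform(radius, box - radius, size=(n_trials, 3))
          empty = 0
          for c in centres:
              d2 = np.sum((points - c) ** 2, axis=1)
              if not np.any(d2 < radius ** 2):           # no galaxy inside the sphere
                  empty += 1
          return empty / n_trials

      for r in (2.0, 5.0, 8.0):
          print(f"P0(R = {r:g}) ~ {void_probability(galaxies, r):.3f}")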

  2. Scrambled eggs: A highly sensitive molecular diagnostic workflow for Fasciola species specific detection from faecal samples.

    PubMed

    Calvani, Nichola Eliza Davies; Windsor, Peter Andrew; Bush, Russell David; Šlapeta, Jan

    2017-09-01

    Fasciolosis, due to Fasciola hepatica and Fasciola gigantica, is a re-emerging zoonotic parasitic disease of worldwide importance. Human and animal infections are commonly diagnosed by the traditional sedimentation and faecal egg-counting technique. However, this technique is time-consuming and prone to sensitivity errors when a large number of samples must be processed or if the operator lacks sufficient experience. Additionally, diagnosis can only be made once the 12-week pre-patent period has passed. Recently, a commercially available coprological antigen ELISA has enabled detection of F. hepatica prior to the completion of the pre-patent period, providing earlier diagnosis and increased throughput, although species differentiation is not possible in areas of parasite sympatry. Real-time PCR offers the combined benefits of highly sensitive species differentiation for medium to large sample sizes. However, no molecular diagnostic workflow currently exists for the identification of Fasciola spp. in faecal samples. A new molecular diagnostic workflow for the highly-sensitive detection and quantification of Fasciola spp. in faecal samples was developed. The technique involves sedimenting and pelleting the samples prior to DNA isolation in order to concentrate the eggs, followed by disruption by bead-beating in a benchtop homogeniser to ensure access to DNA. Although both the new molecular workflow and the traditional sedimentation technique were sensitive and specific, the new molecular workflow enabled faster sample throughput in medium to large epidemiological studies, and provided the additional benefit of speciation. Further, good correlation (R2 = 0.74-0.76) was observed between the real-time PCR values and the faecal egg count (FEC) using the new molecular workflow for all herds and sampling periods. Finally, no effect of storage in 70% ethanol was detected on sedimentation and DNA isolation outcomes; enabling transport of samples from endemic to non-endemic countries without the requirement of a complete cold chain. The commercially-available ELISA displayed poorer sensitivity, even after adjustment of the positive threshold (65-88%), compared to the sensitivity (91-100%) of the new molecular diagnostic workflow. Species-specific assays for sensitive detection of Fasciola spp. enable ante-mortem diagnosis in both human and animal settings. This includes Southeast Asia where there are potentially many undocumented human cases and where post-mortem examination of production animals can be difficult. The new molecular workflow provides a sensitive and quantitative diagnostic approach for the rapid testing of medium to large sample sizes, potentially superseding the traditional sedimentation and FEC technique and enabling surveillance programs in locations where animal and human health funding is limited.

  3. Constructing a Watts-Strogatz network from a small-world network with symmetric degree distribution.

    PubMed

    Menezes, Mozart B C; Kim, Seokjin; Huang, Rongbing

    2017-01-01

    Though the small-world phenomenon is widespread in many real networks, it is still challenging to replicate a large network at the full scale for further study on its structure and dynamics when sufficient data are not readily available. We propose a method to construct a Watts-Strogatz network using a sample from a small-world network with symmetric degree distribution. Our method yields an estimated degree distribution which fits closely with that of a Watts-Strogatz network and leads into accurate estimates of network metrics such as clustering coefficient and degree of separation. We observe that the accuracy of our method increases as network size increases.
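
    A rough illustration of the idea with networkx: observe a node sample from a small-world network, estimate a mean degree, and build a Watts-Strogatz replica for comparison of clustering. The fixed rewiring probability and the naive degree estimate are placeholders, not the estimation method proposed in the paper.

      # Minimal sketch: build a Watts-Strogatz replica from a node sample of an
      # "unknown" small-world network and compare clustering coefficients.
      import networkx as nx
      import random

      random.seed(0)
      true_net = nx.watts_strogatz_graph(n=2000, k=10, p=0.1, seed=0)  # stands in for the real network

      # Observe only a random subset of nodes and their degrees (a crude stand-in
      # for the paper's sampling scheme).
      observed = random.sample(list(true_net.nodes), 200)
      mean_deg = sum(d for _, d in true_net.degree(observed)) / len(observed)
      k_hat = 2 * max(1, round(mean_deg / 2))          # WS neighbour count, kept even

      replica = nx.watts_strogatz_graph(n=2000, k=k_hat, p=0.1, seed=1)
      print("clustering coeff: original", round(nx.average_clustering(true_net), 3),
            "| replica", round(nx.average_clustering(replica), 3))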

  4. Vocational students' learning preferences: the interpretability of ipsative data.

    PubMed

    Smith, P J

    2000-02-01

    A number of researchers have argued that ipsative data are not suitable for statistical procedures designed for normative data. Others have argued that the interpretability of such analyses of ipsative data is little affected where the number of variables and the sample size are sufficiently large. The research reported here represents a factor analysis of the scores on the Canfield Learning Styles Inventory for 1,252 students in vocational education. The results of the factor analysis of these ipsative data were examined in the context of existing theory and research on vocational students and lend support to the argument that the factor analysis of ipsative data can provide sensibly interpretable results.
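
    For readers unfamiliar with the setup, the sketch below runs a factor analysis on synthetic ipsative (forced-choice) scores in which each respondent's items sum to a constant; the data, item count, and number of factors are invented and do not reproduce the Canfield inventory.

      # Minimal sketch: factor analysis of ipsative-style scores whose rows are
      # constrained to a constant sum. All numbers are synthetic placeholders.
      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      rng = np.random.default_rng(3)
      n_respondents, n_items, total = 1252, 20, 100

      raw = rng.gamma(2.0, 1.0, size=(n_respondents, n_items))
      ipsative = total * raw / raw.sum(axis=1, keepdims=True)   # rows forced to a constant sum

      fa = FactorAnalysis(n_components=4, random_state=0).fit(ipsative)
      loadings = fa.components_.T                               # items x factors
      print(np.round(loadings[:5], 2))                          # loadings of the first five items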

  5. Monopropellant engine investigation for space shuttle reaction control system. Volume 3: Improvement of metal foam for catalyst retention

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The retention of granular catalyst in a metal foam matrix was demonstrated to greatly increase the life capability of hydrazine monopropellant reactors. Since nickel foam used in previous tests was found to become degraded after long-term exposure, the cause of degradation was examined and metal foams of improved durability were developed. The most durable foam developed was a rhodium-coated nickel foam. An all-platinum foam was found to be incompatible in a hot ammonia (hydrazine) environment. It is recommended that the manufacturing process for the improved foam be scaled up to produce samples sufficiently large for space shuttle APU gas generator testing.

  6. Development of techniques for advanced optical contamination measurement with internal reflection spectroscopy, phase 1, volume 1

    NASA Technical Reports Server (NTRS)

    Hayes, J. D.

    1972-01-01

    The feasibility of monitoring volatile contaminants in a large space simulation chamber using techniques of internal reflection spectroscopy was demonstrated analytically and experimentally. The infrared spectral region was selected as the operational spectral range in order to provide unique identification of the contaminants along with sufficient sensitivity to detect trace contaminant concentrations. It was determined theoretically that a monolayer of the contaminants could be detected and identified using optimized experimental procedures. This ability was verified experimentally. Procedures were developed to correct the attenuated total reflectance spectra for thick sample distortion. However, by using two different element designs the need for such correction can be avoided.

  7. Development and melt growth of novel scintillating halide crystals

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Akira; Yokota, Yuui; Shoji, Yasuhiro; Kral, Robert; Kamada, Kei; Kurosawa, Shunsuke; Ohashi, Yuji; Arakawa, Mototaka; Chani, Valery I.; Kochurikhin, Vladimir V.; Yamaji, Akihiro; Andrey, Medvedev; Nikl, Martin

    2017-12-01

    Melt growth of scintillating halide crystals is reviewed. The vertical Bridgman growth technique is still considered a very popular method that enables production of relatively large and commercially attractive crystals. On the other hand, the micro-pulling-down method is preferable when fabrication of small samples, sufficient for preliminary characterization of their optical and/or scintillation performance, is required. Moreover, bulk crystal growth is also available using the micro-pulling-down furnace. Examples of the growth of various halide crystals by industrially friendly melt-growth techniques, including the Czochralski and edge-defined film-fed growth methods, are also discussed. Finally, traveling molten zone growth, which to some degree corresponds to horizontal zone melting, is briefly reviewed.

  8. Assessing sufficiency of thermal riverscapes for resilient ...

    EPA Pesticide Factsheets

    Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific locations and features within stream networks hypothesized to provide disproportionately high-value functional resilience to salmon populations. These include relatively small-scale features such as thermal refuges, and larger-scale features such as entire watersheds or aquifers that support thermal regimes buffered from local climatic conditions. Quantifying the value of both small and large scale thermal features to salmon populations has been challenged by both the difficulty of mapping thermal regimes at sufficient spatial and temporal resolutions, and integrating thermal regimes into population models. We attempt to address these challenges by using newly-available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We will describe an individual-based modeling approach for assessing sufficiency of thermal refuges for migrating salmon and steelhead in large rivers, as well as a population modeling approach for assessing large-scale climate refugia for salmon in the Pacific Northwest. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec

  9. Mapping the human atria with optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lye, Theresa H.; Gan, Yu; Hendon, Christine P.

    2017-02-01

    Atrial structure plays an important role in the mechanisms of atrial disease. However, detailed imaging of human atria remains limited due to many imaging modalities lacking sufficient resolution. We propose the use of optical coherence tomography (OCT), which has micrometer resolution and millimeter-scale imaging depth well-suited for the atria, combined with image stitching algorithms, to develop large, detailed atria image maps. Human atria samples (n = 7) were obtained under approved protocols from the National Disease Research Interchange (NDRI). One right atrial sample was imaged using an ultrahigh-resolution spectral domain OCT system, with 5.52 and 2.72 μm lateral and axial resolution in air, respectively, and 1.78 mm imaging depth. Six left atrial samples and five pulmonary vein samples were imaged using the spectral domain OCT system, Telesto I (Thorlabs GmbH, Germany), with 15 and 6.5 μm lateral and axial resolution in air, respectively, and 2.51 mm imaging depth. Overlapping image volumes were obtained from areas of the human left and right atria and the pulmonary veins. Regions of collagen, adipose, and myocardium could be identified within the OCT images. Image stitching was applied to generate fields of view with side dimensions up to about 3 cm. This study established steps towards mapping large regions of the human atria and pulmonary veins in high resolution using OCT.

  10. Identification of probabilities.

    PubMed

    Vitányi, Paul M B; Chater, Nick

    2017-02-01

    Within psychology, neuroscience and artificial intelligence, there has been increasing interest in the proposal that the brain builds probabilistic models of sensory and linguistic input: that is, to infer a probabilistic model from a sample. The practical problems of such inference are substantial: the brain has limited data and restricted computational resources. But there is a more fundamental question: is the problem of inferring a probabilistic model from a sample possible even in principle? We explore this question and find some surprisingly positive and general results. First, for a broad class of probability distributions characterized by computability restrictions, we specify a learning algorithm that will almost surely identify a probability distribution in the limit given a finite i.i.d. sample of sufficient but unknown length. This is similarly shown to hold for sequences generated by a broad class of Markov chains, subject to computability assumptions. The technical tool is the strong law of large numbers. Second, for a large class of dependent sequences, we specify an algorithm which identifies in the limit a computable measure for which the sequence is typical, in the sense of Martin-Löf (there may be more than one such measure). The technical tool is the theory of Kolmogorov complexity. We analyze the associated predictions in both cases. We also briefly consider special cases, including language learning, and wider theoretical implications for psychology.

  11. Compact ultrahigh vacuum sample environments for x-ray nanobeam diffraction and imaging.

    PubMed

    Evans, P G; Chahine, G; Grifone, R; Jacques, V L R; Spalenka, J W; Schülli, T U

    2013-11-01

    X-ray nanobeams present the opportunity to obtain structural insight into materials with small volumes or nanoscale heterogeneity. The effective spatial resolution of the information derived from nanobeam techniques depends on the stability and precision with which the relative position of the x-ray optics and sample can be controlled. Nanobeam techniques include diffraction, imaging, and coherent scattering, with applications throughout materials science and condensed matter physics. Sample positioning is a significant mechanical challenge for x-ray instrumentation providing vacuum or controlled gas environments at elevated temperatures. Such environments often have masses that are too large for nanopositioners capable of the required positional accuracy of the order of a small fraction of the x-ray spot size. Similarly, the need to place x-ray optics as close as 1 cm to the sample places a constraint on the overall size of the sample environment. We illustrate a solution to the mechanical challenge in which compact ion-pumped ultrahigh vacuum chambers with masses of 1-2 kg are integrated with nanopositioners. The overall size of the environment is sufficiently small to allow their use with zone-plate focusing optics. We describe the design of sample environments for elevated-temperature nanobeam diffraction experiments and demonstrate in situ diffraction, reflectivity, and scanning nanobeam imaging of the ripening of Au crystallites on Si substrates.

  12. Compact ultrahigh vacuum sample environments for x-ray nanobeam diffraction and imaging

    NASA Astrophysics Data System (ADS)

    Evans, P. G.; Chahine, G.; Grifone, R.; Jacques, V. L. R.; Spalenka, J. W.; Schülli, T. U.

    2013-11-01

    X-ray nanobeams present the opportunity to obtain structural insight into materials with small volumes or nanoscale heterogeneity. The effective spatial resolution of the information derived from nanobeam techniques depends on the stability and precision with which the relative position of the x-ray optics and sample can be controlled. Nanobeam techniques include diffraction, imaging, and coherent scattering, with applications throughout materials science and condensed matter physics. Sample positioning is a significant mechanical challenge for x-ray instrumentation providing vacuum or controlled gas environments at elevated temperatures. Such environments often have masses that are too large for nanopositioners capable of the required positional accuracy of the order of a small fraction of the x-ray spot size. Similarly, the need to place x-ray optics as close as 1 cm to the sample places a constraint on the overall size of the sample environment. We illustrate a solution to the mechanical challenge in which compact ion-pumped ultrahigh vacuum chambers with masses of 1-2 kg are integrated with nanopositioners. The overall size of the environment is sufficiently small to allow their use with zone-plate focusing optics. We describe the design of sample environments for elevated-temperature nanobeam diffraction experiments and demonstrate in situ diffraction, reflectivity, and scanning nanobeam imaging of the ripening of Au crystallites on Si substrates.

  13. Electrohydrodynamically driven large-area liquid ion sources

    DOEpatents

    Pregenzer, Arian L.

    1988-01-01

    A large-area liquid ion source comprises means for generating, over a large area of the surface of a liquid, an electric field of a strength sufficient to induce emission of ions from a large area of said liquid. Large areas in this context are those distinct from emitting areas in unidimensional emitters.

  14. General administrative rulings and decisions; amendment to the examination and investigation sample requirements; companion document to direct final rule--FDA. Proposed rule.

    PubMed

    1998-09-25

    The Food and Drug Administration (FDA) is proposing to amend its regulations regarding the collection of twice the quantity of food, drug, or cosmetic estimated to be sufficient for analysis. This action increases the dollar amount that FDA will consider to determine whether to routinely collect a reserve sample of a food, drug, or cosmetic product in addition to the quantity sufficient for analysis. Experience has demonstrated that the current dollar amount does not adequately cover the cost of most quantities sufficient for analysis plus reserve samples. This proposed rule is a companion to the direct final rule published elsewhere in this issue of the Federal Register. This action is part of FDA's continuing effort to achieve the objectives of the President's "Reinventing Government" initiative, and it is intended to reduce the burden of unnecessary regulations on food, drugs, and cosmetics without diminishing the protection of the public health.

  15. Evaluating the sufficiency of protected lands for maintaining wildlife population connectivity in the northern Rocky Mountains

    Treesearch

    Samuel A. Cushman; Erin L. Landguth; Curtis H. Flather

    2012-01-01

    Aim: The goal of this study was to evaluate the sufficiency of the network of protected lands in the U.S. northern Rocky Mountains in providing protection for habitat connectivity for 105 hypothetical organisms. A large proportion of the landscape...

  16. 17 CFR 36.2 - Exempt boards of trade.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... supply that is sufficiently large, and a cash market sufficiently liquid, to render any contract traded... market. (2) The commodities that meet the criteria of paragraph (a)(1) of this section are: (i) The...

  17. 17 CFR 36.2 - Exempt boards of trade.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... deliverable supply; (ii) A deliverable supply that is sufficiently large, and a cash market sufficiently... manipulation; or (iii)No cash market. (2) The commodities that meet the criteria of paragraph (a)(1) of this...

  18. 17 CFR 36.2 - Exempt boards of trade.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... deliverable supply; (ii) A deliverable supply that is sufficiently large, and a cash market sufficiently... manipulation; or (iii)No cash market. (2) The commodities that meet the criteria of paragraph (a)(1) of this...

  19. 17 CFR 36.2 - Exempt boards of trade.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... deliverable supply; (ii) A deliverable supply that is sufficiently large, and a cash market sufficiently... manipulation; or (iii)No cash market. (2) The commodities that meet the criteria of paragraph (a)(1) of this...

  20. 17 CFR 36.2 - Exempt boards of trade.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... supply that is sufficiently large, and a cash market sufficiently liquid, to render any contract traded... market. (2) The commodities that meet the criteria of paragraph (a)(1) of this section are: (i) The...

  1. Relative sampling efficiency and movements of subadult Lake Sturgeon in the Lower Wolf River, Wisconsin

    USGS Publications Warehouse

    Snobl, Zachary R.; Isermann, Daniel A.; Koenigs, Ryan P.; Raabe, Joshua K.

    2017-01-01

    Understanding sampling efficiency and movements of subadult Lake Sturgeon Acipenser fulvescens is necessary to facilitate population rehabilitation and recruitment monitoring in large systems with extensive riverine and lacustrine habitats. We used a variety of sampling methods to capture subadult Lake Sturgeon (i.e., fish between 75 and 130 cm TL that had not reached sexual maturity) and monitored their movements using radio telemetry in the lower Wolf River, a tributary to the Lake Winnebago system in Wisconsin. Our objectives were to determine whether (1) capture efficiency (expressed in terms of sampling time) of subadult Lake Sturgeon using multiple sampling methods was sufficient to justify within-river sampling as part of a basin-wide recruitment survey targeting subadults, (2) linear home ranges varied in relation to season or sex, and (3) subadult Lake Sturgeon remained in the lower Wolf River. From 2013 to 2014, 628 h of combined sampling effort that included gill nets, trotlines, electrofishing, and scuba capture was required to collect 18 subadult sturgeon, which were then implanted with radio transmitters and tracked by boat and plane. Linear home ranges did not differ in relation to sex but did vary among seasons, and the majority of movement occurred in spring. Seven of the 18 (39%) Lake Sturgeon left the river and were not detected in the river again during the study. Between 56% and 70% of subadult fish remaining in the river made definitive movements to, or near, known spawning locations when adult Lake Sturgeon were actively spawning. Our results suggest only a small proportion of subadult Lake Sturgeon in the Lake Winnebago population use the lower Wolf River, indicating that riverine sampling may not always be warranted when targeting subadults in large lake–river complexes. More information is needed on distribution of subadult Lake Sturgeon to develop sampling protocols for this population segment.

  2. Design of sEMG assembly to detect external anal sphincter activity: a proof of concept.

    PubMed

    Shiraz, Arsam; Leaker, Brian; Mosse, Charles Alexander; Solomon, Eskinder; Craggs, Michael; Demosthenous, Andreas

    2017-10-31

    Conditional trans-rectal stimulation of the pudendal nerve could provide a viable solution to treat hyperreflexive bladder in spinal cord injury. A set threshold of the amplitude estimate of the external anal sphincter surface electromyography (sEMG) may be used as the trigger signal. The efficacy of such a device should be tested in a large scale clinical trial. As such, a probe should remain in situ for several hours while patients attend to their daily routine; the recording electrodes should be designed to be large enough to maintain good contact while observing design constraints. The objective of this study was to arrive at a design for intra-anal sEMG recording electrodes for the subsequent clinical trials while deriving the possible recording and processing parameters. Having in mind existing solutions and based on theoretical and anatomical considerations, a set of four multi-electrode probes were designed and developed. These were tested in a healthy subject and the measured sEMG traces were recorded and appropriately processed. It was shown that while comparatively large electrodes record sEMG traces that are not sufficiently correlated with the external anal sphincter contractions, smaller electrodes may not maintain a stable electrode tissue contact. It was shown that 3 mm wide and 1 cm long electrodes with 5 mm inter-electrode spacing, in agreement with Nyquist sampling, placed 1 cm from the orifice may intra-anally record a sEMG trace sufficiently correlated with external anal sphincter activity. The outcome of this study can be used in any biofeedback, treatment or diagnostic application where the activity of the external anal sphincter sEMG should be detected for an extended period of time.

  3. Diagnosing intramammary infections: evaluation of definitions based on a single milk sample.

    PubMed

    Dohoo, I R; Smith, J; Andersen, S; Kelton, D F; Godden, S

    2011-01-01

    Criteria for diagnosing intramammary infections (IMI) have been debated for many years. Factors that may be considered in making a diagnosis include the organism of interest being found on culture, the number of colonies isolated, whether or not the organism was recovered in pure or mixed culture, and whether or not concurrent evidence of inflammation existed (often measured by somatic cell count). However, research using these criteria has been hampered by the lack of a "gold standard" test (i.e., a perfect test against which the criteria can be evaluated) and the need for very large data sets of culture results to have sufficient numbers of quarters with infections with a variety of organisms. This manuscript used 2 large data sets of culture results to evaluate several definitions (sets of criteria) for classifying a quarter as having, or not having, an IMI by comparing the results from a single culture to a gold standard diagnosis based on a set of 3 milk samples. The first consisted of 38,376 milk samples from which 25,886 triplicate sets of milk samples taken 1 wk apart were extracted. The second consisted of 784 quarters that were classified as infected or not based on a set of 3 milk samples collected at 2-d intervals. From these quarters, a total of 3,136 additional samples were evaluated. A total of 12 definitions (named A to L) based on combinations of the number of colonies isolated, whether or not the organism was recovered in pure or mixed culture, and the somatic cell count were evaluated for each organism (or group of organisms) with sufficient data. The sensitivity (ability of a definition to detect IMI) and the specificity (Sp; ability of a definition to correctly classify noninfected quarters) were both computed. For all species, except Staphylococcus aureus, the sensitivity of all definitions was <90% (and in many cases <50%). Consequently, if identifying as many existing infections as possible is important, then the criterion for considering a quarter positive should be a single colony (from a 0.01-mL milk sample) isolated (definition A). With the exception of "any organism" and coagulase-negative staphylococci, all Sp estimates were over 94% in the daily data and over 97% in the weekly data, suggesting that for most species, definition A may be acceptable. For coagulase-negative staphylococci, definition B (2 colonies from a 0.01-mL milk sample) raised the Sp to 92 and 95% in the daily and weekly data, respectively. For "any organism," using definition B raised the Sp to 88 and 93% in the 2 data sets, respectively. The final choice of definition will depend on the objectives of the study or control program for which the sample was collected. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
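
    The evaluation against the triplicate gold standard reduces to a confusion-matrix calculation; the sketch below computes sensitivity and specificity for a single-sample definition using illustrative arrays rather than the study data.

      # Minimal sketch: sensitivity and specificity of a single-sample IMI definition
      # against a gold-standard status from triplicate samples. Arrays are illustrative.
      import numpy as np

      # 1 = infected, 0 = not infected
      gold_standard = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0])
      single_sample = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0])   # e.g. "definition A" calls

      tp = np.sum((single_sample == 1) & (gold_standard == 1))
      fn = np.sum((single_sample == 0) & (gold_standard == 1))
      tn = np.sum((single_sample == 0) & (gold_standard == 0))
      fp = np.sum((single_sample == 1) & (gold_standard == 0))

      sensitivity = tp / (tp + fn)   # ability to detect existing infections
      specificity = tn / (tn + fp)   # ability to classify non-infected quarters correctly
      print(f"Se = {sensitivity:.2f}, Sp = {specificity:.2f}")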

  4. Intensity correlation-based calibration of FRET.

    PubMed

    Bene, László; Ungvári, Tamás; Fedor, Roland; Sasi Szabó, László; Damjanovich, László

    2013-11-05

    Dual-laser flow cytometric resonance energy transfer (FCET) is a statistically efficient and accurate way of determining proximity relationships for molecules of cells even under living conditions. In the framework of this algorithm, absolute fluorescence resonance energy transfer (FRET) efficiency is determined by the simultaneous measurement of donor-quenching and sensitized emission. A crucial point is the determination of the scaling factor α responsible for balancing the different sensitivities of the donor and acceptor signal channels. The determination of α is not simple, requiring preparation of special samples that are generally different from a double-labeled FRET sample, or by the use of sophisticated statistical estimation (least-squares) procedures. We present an alternative, free-from-spectral-constants approach for the determination of α and the absolute FRET efficiency, by an extension of the presented framework of the FCET algorithm with an analysis of the second moments (variances and covariances) of the detected intensity distributions. A quadratic equation for α is formulated with the intensity fluctuations, which is proved sufficiently robust to give accurate α-values on a cell-by-cell basis in a wide system of conditions using the same double-labeled sample from which the FRET efficiency itself is determined. This seemingly new approach is illustrated by FRET measurements between epitopes of the MHCI receptor on the cell surface of two cell lines, FT and LS174T. The figures show that whereas the common way of α determination fails at large dye-per-protein labeling ratios of mAbs, this presented-as-new approach has sufficient ability to give accurate results. Although introduced in a flow cytometer, the new approach can also be straightforwardly used with fluorescence microscopes. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  5. Chemical abundances of primary stars in the Sirius-like binary systems

    NASA Astrophysics Data System (ADS)

    Kong, X. M.; Zhao, G.; Zhao, J. K.; Shi, J. R.; Kumar, Y. Bharat; Wang, L.; Zhang, J. B.; Wang, Y.; Zhou, Y. T.

    2018-05-01

    Study of primary stars lying in Sirius-like systems with various masses of white dwarf (WD) companions and orbital separations is one of the key aspects to understand the origin and nature of barium (Ba) stars. In this paper, based on high-resolution and high-S/N spectra, we present systematic analysis of photospheric abundances for 18 FGK primary stars of Sirius-like systems including six giants and 12 dwarfs. Atmospheric parameters, stellar masses, and abundances of 24 elements (C, Na, Mg, Al, Si, S, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Sr, Y, Zr, Ba, La, Ce, and Nd) are determined homogeneously. The abundance patterns in these sample stars show that most of the elements in our sample follow the behaviour of field stars with similar metallicity. As expected, s-process elements in four known Ba giants show overabundance. A weak correlation was found between anomalies of s-process elemental abundance and orbital separation, suggesting that the orbital separation of the binaries could not be the main constraint to differentiate strong Ba stars from mild Ba stars. Our study shows that the large mass (>0.51 M⊙) of a WD companion in a binary system is not a sufficient condition to form a Ba star, even if the separation between the two components is small. Although not sufficient, it seems to be a necessary condition since Ba stars with lower mass WDs in the observed sample were not found. Our results support that [s/Fe] and [hs/ls] ratios of Ba stars are anti-correlated with the metallicity. However, the different levels of s-process overabundance among Ba stars may not be dominated mainly by the metallicity.

  6. A Straightforward and Highly Efficient Precipitation/On-pellet Digestion Procedure Coupled to a Long Gradient Nano-LC Separation and Orbitrap Mass Spectrometry for Label-free Expression Profiling of the Swine Heart Mitochondrial Proteome

    PubMed Central

    Duan, Xiaotao; Young, Rebecca; Straubinger, Robert M.; Page, Brian J.; Cao, Jin; Wang, Hao; Yu, Haoying; Canty, John M.; Qu, Jun

    2009-01-01

    For label-free expression profiling of tissue proteomes, efficient protein extraction, thorough and quantitative sample cleanup and digestion procedures, as well as sufficient and reproducible chromatographic separation, are highly desirable but remain challenging. However, optimal methodology has remained elusive, especially for proteomes that are rich in membrane proteins, such as the mitochondria. Here we describe a straightforward and reproducible sample preparation procedure, coupled with a highly selective and sensitive nano-LC/Orbitrap analysis, which enables reliable and comprehensive expression profiling of tissue mitochondria. The mitochondrial proteome of swine heart was selected as a test system. Efficient protein extraction was accomplished using a strong buffer containing both ionic and non-ionic detergents. Overnight precipitation was used for cleanup of the extract, and the sample was subjected to an optimized 2-step, on-pellet digestion approach. In the first step, the protein pellet was dissolved via a 4 h tryptic digestion under vigorous agitation, which nano-LC/LTQ/ETD showed to produce large and incompletely cleaved tryptic peptides. The mixture was then reduced, alkylated, and digested into its full complement of tryptic peptides with additional trypsin. This solvent precipitation/on-pellet digestion procedure achieved significantly higher and more reproducible peptide recovery of the mitochondrial preparation, than observed using a prevalent alternative procedure for label-free expression profiling, SDS-PAGE/in-gel digestion (87% vs. 54%). Furthermore, uneven peptide losses were lower than observed with SDS-PAGE/in-gel digestion. The resulting peptides were sufficiently resolved by a 5 h gradient using a nano-LC configuration that features a low-void-volume, high chromatographic reproducibility, and an LTQ/Orbitrap analyzer for protein identification and quantification. The developed method was employed for label-free comparison of the mitochondrial proteomes of myocardium from healthy animals vs. those with hibernating myocardium. Each experimental group consisted of a relatively large number of animals (n=10), and samples were analyzed in random order to minimize quantitative false-positives. Using this approach, 904 proteins were identified and quantified with high confidence, and those mitochondrial proteins that were altered significantly between groups were compared with the results of a parallel 2D-DIGE analysis. The sample preparation and analytical strategy developed here represents an advancement that can be adapted to analyze other tissue proteomes. PMID:19290621

  7. Line-scanning, stage scanning confocal microscope

    NASA Astrophysics Data System (ADS)

    Carucci, John A.; Stevenson, Mary; Gareau, Daniel

    2016-03-01

    We created a line-scanning, stage scanning confocal microscope as part of a new procedure: video assisted micrographic surgery (VAMS). The need for rapid pathological assessment of the tissue on the surface of skin excisions is very large since there are 3.5 million new skin cancers diagnosed annually in the United States. The new design presented here is a confocal microscope without any scanning optics. Instead, a line is focused in space and the sample, which is flattened, is physically translated such that the line scans across its face in a direction perpendicular to the line itself. The line is 6 mm long and the stage is capable of scanning 50 mm, hence the field of view is quite large. The theoretical diffraction-limited resolution is 0.7 µm lateral and 3.7 µm axial. However, in this preliminary report, we present initial results that are a factor of 5-7 poorer in resolution. The results are encouraging because they demonstrate that the linear array detector measures sufficient signal from fluorescently labeled tissue and also demonstrate the large field of view achievable with VAMS.

  8. The Performance of a PN Spread Spectrum Receiver Preceded by an Adaptive Interference Suppression Filter.

    DTIC Science & Technology

    1982-12-01

    Excerpted symbol definitions and text fragments: dj, estimate of the desired signal; DEL, sampling time interval; DS, direct sequence; sufficient statistic; E/T, signal power; Erfc, complementary error function ... Namely, a white Gaussian noise (WGN) generator was added. Also, a statistical subroutine was added in order to assess performance improvement at the ... reference code and then passed through a correlation detector whose output is the sufficient statistic. Using a threshold device and the sufficient ...

  9. Hand coverage by alcohol-based handrub varies: Volume and hand size matter.

    PubMed

    Zingg, Walter; Haidegger, Tamas; Pittet, Didier

    2016-12-01

    Visitors to an infection prevention and control conference performed hand hygiene with 1, 2, or 3 mL ultraviolet light-traced alcohol-based handrub. Coverage of palms, dorsums, and fingertips was measured by digital images. Palms of all hand sizes were sufficiently covered when 2 mL was applied, whereas dorsums of medium and large hands were never sufficiently covered. Palmar fingertips were sufficiently covered when 2 or 3 mL was applied, and dorsal fingertips were never sufficiently covered. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  10. Evaluating the utility of hexapod species for calculating a confidence interval about a succession based postmortem interval estimate.

    PubMed

    Perez, Anne E; Haskell, Neal H; Wells, Jeffrey D

    2014-08-01

    Carrion insect succession patterns have long been used to estimate the postmortem interval (PMI) during a death investigation. However, no published carrion succession study included sufficient replication to calculate a confidence interval about a PMI estimate based on occurrence data. We exposed 53 pig carcasses (16±2.5 kg), near the likely minimum needed for such statistical analysis, at a site in north-central Indiana, USA, over three consecutive summer seasons. Insects and Collembola were sampled daily from each carcass for a total of 14 days, by which time each was skeletonized. The criteria for judging a life stage of a given species to be potentially useful for succession-based PMI estimation were (1) nonreoccurrence (observed during a single period of presence on a corpse), and (2) found in a sufficiently large proportion of carcasses to support a PMI confidence interval. For this data set that proportion threshold is 45/53. Of the 266 species collected and identified, none was nonreoccurring, in that each showed at least a gap of one day on a single carcass. If the definition of nonreoccurrence is relaxed to include such a single one-day gap, the larval forms of Necrophila americana, Fannia scalaris, Cochliomyia macellaria, Phormia regina, and Lucilia illustris satisfied these two criteria. Adults of Creophilus maxillosus, Necrobia ruficollis, and Necrodes surinamensis were common and showed only a few, single-day gaps in occurrence. C. maxillosus, P. regina, and L. illustris displayed exceptional forensic utility in that they were observed on every carcass. Although these observations were made at a single site during one season of the year, the species we found to be useful have large geographic ranges. We suggest that future carrion insect succession research focus only on a limited set of species with high potential forensic utility so as to reduce sample effort per carcass and thereby enable increased experimental replication. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. Comparison of Interferometric Time-Series Analysis Techniques with Implications for Future Mission Design

    NASA Astrophysics Data System (ADS)

    Werner, C. L.; Wegmuller, U.; Strozzi, T.; Wiesmann, A.

    2006-12-01

    Principal contributors to the noise in differential SAR interferograms are temporal phase stability of the surface, geometry relating to baseline and surface slope, and propagation path delay variations due to tropospheric water vapor and the ionosphere. Time series analysis of multiple interferograms generated from a stack of SAR SLC images seeks to determine the deformation history of the surface while reducing errors. Only those scatterers within a resolution element that are stable and coherent for each interferometric pair contribute to the desired deformation signal. Interferograms with baselines exceeding 1/3 the critical baseline have substantial geometrical decorrelation for distributed targets. Short baseline pairs with multiple reference scenes can be combined using least-squares estimation to obtain a global deformation solution. Alternatively, point-like persistent scatterers can be identified in scenes that do not exhibit geometrical decorrelation associated with large baselines. In this approach interferograms are formed from a stack of SAR complex images using a single reference scene. Stable distributed-scatterer pixels are excluded, however, due to the presence of large baselines. We apply both point-based and short-baseline methodologies and compare results for a stack of fine-beam Radarsat data acquired in 2002-2004 over a rapidly subsiding oil field near Lost Hills, CA. We also investigate the density of point-like scatterers with respect to image resolution. The primary difficulty encountered when applying time series methods is phase unwrapping errors due to spatial and temporal gaps. Phase unwrapping requires sufficient spatial and temporal sampling. Increasing the SAR range bandwidth increases the range resolution as well as increasing the critical interferometric baseline that defines the required satellite orbital tube diameter. Sufficient spatial sampling also permits unwrapping because of the reduced phase/pixel gradient. Short time intervals further reduce the differential phase due to deformation when the deformation is continuous. Lower frequency systems (L- vs. C-Band) substantially improve the ability to unwrap the phase correctly by directly reducing both interferometric phase amplitude and temporal decorrelation.

  12. Characterizing and predicting species distributions across environments and scales: Argentine ant occurrences in the eye of the beholder

    USGS Publications Warehouse

    Menke, S.B.; Holway, D.A.; Fisher, R.N.; Jetz, W.

    2009-01-01

    Aim: Species distribution models (SDMs) or, more specifically, ecological niche models (ENMs) are a useful and rapidly proliferating tool in ecology and global change biology. ENMs attempt to capture associations between a species and its environment and are often used to draw biological inferences, to predict potential occurrences in unoccupied regions and to forecast future distributions under environmental change. The accuracy of ENMs, however, hinges critically on the quality of occurrence data. ENMs often use haphazardly collected data rather than data collected across the full spectrum of existing environmental conditions. Moreover, it remains unclear how processes affecting ENM predictions operate at different spatial scales. The scale (i.e. grain size) of analysis may be dictated more by the sampling regime than by biologically meaningful processes. The aim of our study is to jointly quantify how issues relating to region and scale affect ENM predictions using an economically important and ecologically damaging invasive species, the Argentine ant (Linepithema humile). Location: California, USA. Methods: We analysed the relationship between sampling sufficiency, regional differences in environmental parameter space and cell size of analysis and resampling environmental layers using two independently collected sets of presence/absence data. Differences in variable importance were determined using model averaging and logistic regression. Model accuracy was measured with area under the curve (AUC) and Cohen's kappa. Results: We first demonstrate that insufficient sampling of environmental parameter space can cause large errors in predicted distributions and biological interpretation. Models performed best when they were parametrized with data that sufficiently sampled environmental parameter space. Second, we show that altering the spatial grain of analysis changes the relative importance of different environmental variables. These changes apparently result from how environmental constraints and the sampling distributions of environmental variables change with spatial grain. Conclusions: These findings have clear relevance for biological inference. Taken together, our results illustrate potentially general limitations for ENMs, especially when such models are used to predict species occurrences in novel environments. We offer basic methodological and conceptual guidelines for appropriate sampling and scale matching. © 2009 The Authors. Journal compilation © 2009 Blackwell Publishing.
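
    The accuracy metrics named in the Methods (logistic regression, AUC, Cohen's kappa) can be assembled with standard tools; the sketch below is a minimal illustration in Python with scikit-learn, using synthetic presence/absence data rather than the Argentine ant records, and is not the authors' modelling pipeline.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score, cohen_kappa_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 4))   # stand-in environmental predictors (e.g. temperature, rainfall)
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)  # presence/absence

        enm = LogisticRegression().fit(X, y)
        prob = enm.predict_proba(X)[:, 1]                       # predicted occurrence probability
        auc = roc_auc_score(y, prob)                            # threshold-independent accuracy
        kappa = cohen_kappa_score(y, (prob > 0.5).astype(int))  # chance-corrected agreement
        print(f"AUC = {auc:.2f}, kappa = {kappa:.2f}")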

  13. The bacteriological screening of donated human milk: laboratory experience of British Paediatric Association's published guidelines.

    PubMed

    Wright, K C; Feeney, A M

    1998-01-01

    This study was undertaken to assess the application of the British Paediatric Association's (BPA) published guidelines to the bacteriological screening of breast milk donated to a District General Hospital milk bank. Samples of donated milk were subjected to bacterial counts and provisional identification after both 24 and 48 h incubation on cysteine lactose electrolyte-deficient (CLED) and Columbia blood agar. 21.8% (76 out of 348) of milk donations failed to meet the BPA acceptability criteria. The organisms responsible for the rejection of these samples were all evident within 24 h incubation, and were not significantly confined to one medium. A large percentage of rejected samples originated from a small number of donor mothers; 63.2% came from one donor. In applying BPA guidelines, both CLED and Columbia blood agar were found to be equally effective in screening for unacceptable organisms in prepasteurization donated breast milk. The 24 h period allowed for bacteriological screening, prior to pasteurization of milk samples, was sufficient to allow the growth of all potentially pathogenic bacteria in this study. To prevent the donation of consistently contaminated milk, more active communication between the milk bank staff and the donor is recommended.

  14. A simplified approach for monitoring hydrophobic organic contaminants associated with suspended sediment: Methodology and applications

    USGS Publications Warehouse

    Mahler, B.J.; Van Metre, P.C.

    2003-01-01

    Hydrophobic organic contaminants, although frequently detected in bed sediment and in aquatic biota, are rarely detected in whole-water samples, complicating determination of their occurrence, load, and source. A better approach for the investigation of hydrophobic organic contaminants is the direct analysis of sediment in suspension, but procedures for doing so are expensive and cumbersome. We describe a simple, inexpensive methodology for the dewatering of sediment and present the results of two case studies. Isolation of a sufficient mass of sediment for analyses of organochlorine compounds and PAHs is obtained by in-line filtration of large volumes of water. The sediment is removed from the filters and analyzed directly by standard laboratory methods. In the first case study, suspended-sediment sampling was used to determine occurrence, loads, and yields of contaminants in urban runoff affecting biota in Town Lake, Austin, TX. The second case study used suspended-sediment sampling to locate a point source of PCBs in the Donna Canal in south Texas, where fish are contaminated with PCBs. The case studies demonstrate that suspended-sediment sampling can be an effective tool for determining the occurrence, load, and source of hydrophobic organic contaminants in transport.

  15. Systematic Evaluation of Non-Uniform Sampling Parameters in the Targeted Analysis of Urine Metabolites by 1H,1H 2D NMR Spectroscopy.

    PubMed

    Schlippenbach, Trixi von; Oefner, Peter J; Gronwald, Wolfram

    2018-03-09

    Non-uniform sampling (NUS) allows the accelerated acquisition of multidimensional NMR spectra. The aim of this contribution was the systematic evaluation of the impact of various quantitative NUS parameters on the accuracy and precision of 2D NMR measurements of urinary metabolites. Urine aliquots spiked with varying concentrations (15.6-500.0 µM) of tryptophan, tyrosine, glutamine, glutamic acid, lactic acid, and threonine, which can only be resolved fully by 2D NMR, were used to assess the influence of the sampling scheme, reconstruction algorithm, amount of omitted data points, and seed value on the quantitative performance of NUS in 1H,1H-TOCSY and 1H,1H-COSY45 NMR spectroscopy. Sinusoidal Poisson-gap sampling and a compressed sensing approach employing the iterative re-weighted least squares method for spectral reconstruction allowed a 50% reduction in measurement time while maintaining sufficient quantitative accuracy and precision for both types of homonuclear 2D NMR spectroscopy. Together with other advances in instrument design, such as state-of-the-art cryogenic probes, use of 2D NMR spectroscopy in large biomedical cohort studies seems feasible.
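
    For intuition, a sinusoidal Poisson-gap schedule omits increments by drawing the gaps between retained points from a Poisson distribution whose mean is modulated sinusoidally across the indirect dimension. The sketch below is a toy Python illustration of that idea under simplifying assumptions (it is not the published algorithm or the software used in this study); the scaling factor is retuned iteratively until the target number of points is kept.

        import numpy as np

        def sinusoidal_poisson_gap(n_total, n_keep, seed=0):
            """Toy sinusoidal Poisson-gap schedule: indices of increments to acquire."""
            rng = np.random.default_rng(seed)
            adj = n_total / n_keep                 # initial guess for the gap-scaling factor
            idx = []
            for _ in range(500):                   # retune 'adj' until exactly n_keep points fit
                idx, i = [], 0
                while i < n_total:
                    idx.append(i)
                    lam = (adj - 1.0) * np.sin((i + 0.5) / n_total * np.pi / 2.0)
                    i += 1 + rng.poisson(lam)      # gap of at least one increment
                if len(idx) == n_keep:
                    break
                adj *= len(idx) / n_keep           # too many points kept -> enlarge the gaps
            return np.array(idx)

        schedule = sinusoidal_poisson_gap(n_total=256, n_keep=128)   # 50% non-uniform sampling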

  16. Bayesian focalization: quantifying source localization with environmental uncertainty.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2007-05-01

    This paper applies a Bayesian formulation to study ocean acoustic source localization as a function of uncertainty in environmental properties (water column and seabed) and of data information content [signal-to-noise ratio (SNR) and number of frequencies]. The approach follows that of the optimum uncertain field processor [A. M. Richardson and L. W. Nolte, J. Acoust. Soc. Am. 89, 2280-2284 (1991)], in that localization uncertainty is quantified by joint marginal probability distributions for source range and depth integrated over uncertain environmental properties. The integration is carried out here using Metropolis Gibbs' sampling for environmental parameters and heat-bath Gibbs' sampling for source location to provide efficient sampling over complicated parameter spaces. The approach is applied to acoustic data from a shallow-water site in the Mediterranean Sea where previous geoacoustic studies have been carried out. It is found that reliable localization requires a sufficient combination of prior (environmental) information and data information. For example, sources can be localized reliably for single-frequency data at low SNR (-3 dB) only with small environmental uncertainties, whereas successful localization with large environmental uncertainties requires higher SNR and/or multifrequency data.
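
    The environmental integration described above rests on Markov chain Monte Carlo sampling. The sketch below is a generic random-walk Metropolis sampler in Python over a stand-in log-posterior; it illustrates only the basic accept/reject step and is not the authors' Metropolis/heat-bath Gibbs implementation or their acoustic propagation model.

        import numpy as np

        def log_posterior(theta):
            # Stand-in for the unnormalized posterior over environmental parameters and
            # source location; a real application would evaluate the acoustic model here.
            return -0.5 * np.sum((theta / np.array([1.0, 2.0])) ** 2)

        def metropolis(log_post, theta0, step, n_samples, seed=0):
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, dtype=float)
            lp = log_post(theta)
            chain = []
            for _ in range(n_samples):
                proposal = theta + step * rng.normal(size=theta.shape)
                lp_new = log_post(proposal)
                if np.log(rng.uniform()) < lp_new - lp:   # accept with probability min(1, ratio)
                    theta, lp = proposal, lp_new
                chain.append(theta.copy())
            return np.array(chain)

        chain = metropolis(log_posterior, theta0=[0.0, 0.0], step=0.5, n_samples=5000)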

  17. A parametric study of helium retention in beryllium and its effect on deuterium retention

    NASA Astrophysics Data System (ADS)

    Alegre, D.; Baldwin, M. J.; Simmonds, M.; Nishijima, D.; Hollmann, E. M.; Brezinsek, S.; Doerner, R. P.

    2017-12-01

    Beryllium samples have been exposed in the PISCES-B linear plasma device to conditions relevant to the International Thermonuclear Experimental Reactor (ITER) in pure He, D, and D/He mixed plasmas. Except at intermediate sample exposure temperatures (573-673 K), He addition to a D plasma is found to have a beneficial effect as it reduces the D retention in Be (up to ~55%), although the mechanism is unclear. Retention of He is typically around 10²⁰-10²¹ He m⁻², and is affected primarily by the Be surface temperature during exposure, by the ion fluence at <500 K exposure, but not by the ion impact energy at 573 K. Contamination of the Be surface with high-Z elements from the mask of the sample holder in pure He plasmas is also observed under certain conditions, and leads to unexpectedly large He retention values, as well as changes in the surface morphology. An estimation of the tritium retention in the Be first wall of ITER is provided, being sufficiently low to allow safe operation of ITER.

  18. Working toward Self-Sufficiency.

    ERIC Educational Resources Information Center

    Caplan, Nathan

    1985-01-01

    Upon arrival in the United States, the Southeast Asian "Boat People" faced a multitude of problems that would seem to have hindered their achieving economic self-sufficiency. Nonetheless, by the time of a 1982 research study which interviewed nearly 1,400 refugee households, 25 percent of all the households in the sample had achieved…

  19. 7 CFR 58.244 - Number of samples.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Number of samples. 58.244 Section 58.244 Agriculture... Procedures § 58.244 Number of samples. As many samples shall be taken from each dryer production lot as is necessary to assure proper composition and quality control. A sufficient number of representative samples...

  20. 7 CFR 58.244 - Number of samples.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Number of samples. 58.244 Section 58.244 Agriculture... Procedures § 58.244 Number of samples. As many samples shall be taken from each dryer production lot as is necessary to assure proper composition and quality control. A sufficient number of representative samples...

  1. Design and Feasibility Assessment of a Retrospective Epidemiological Study of Coal-Fired Power Plant Emissions in the Pittsburgh Pennsylvania Region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard A. Bilonick; Daniel Connell; Evelyn Talbott

    2006-12-20

    Eighty-nine (89) percent of the electricity supplied in the 35-county Pittsburgh region (comprising parts of the states of Pennsylvania, Ohio, West Virginia, and Maryland) is generated by coal-fired power plants making this an ideal region in which to study the effects of the fine airborne particulates designated as PM2.5 emitted by the combustion of coal. This report demonstrates that during the period from 1999-2006 (1) sufficient and extensive exposure data, in particular samples of speciated PM2.5 components from 1999 to 2003, and including gaseous co-pollutants and weather have been collected, (2) sufficient and extensive mortality, morbidity, and related health outcomes data are readily available, and (3) the relationship between health effects and fine particulates can most likely be satisfactorily characterized using a combination of sophisticated statistical methodologies including latent variable modeling (LVM) and generalized linear autoregressive moving average (GLARMA) time series analysis. This report provides detailed information on the available exposure data and the available health outcomes data for the construction of a comprehensive database suitable for analysis, illustrates the application of various statistical methods to characterize the relationship between health effects and exposure, and provides a road map for conducting the proposed study. In addition, a detailed work plan for conducting the study is provided and includes a list of tasks and an estimated budget. A substantial portion of the total study cost is attributed to the cost of analyzing a large number of archived PM2.5 filters. Analysis of a representative sample of the filters supports the reliability of this invaluable but as-yet untapped resource. These filters hold the key to having sufficient data on the components of PM2.5 but have a limited shelf life. If the archived filters are not analyzed promptly the important and costly information they contain will be lost.

  2. Strategies for Analyzing Sub-Micrometer Features with the FE-EPMA

    NASA Astrophysics Data System (ADS)

    McSwiggen, P.; Armstrong, J. T.; Nielsen, C.

    2013-12-01

    Changes in column design and electronics, as well as new types of spectrometers and analyzing crystals, have significantly advanced electron microprobes, in terms of stability, reproducibility and detection limits. A major advance in spatial resolution has occurred through the use of the field emission electron gun. The spatial resolution of an analysis is controlled by the diameter of the electron beam and the amount of scatter that takes place within the sample. The beam diameter is controlled by the column and type of electron gun being used. The accelerating voltage and the average atomic number/density of the sample control the amount of electron scatter within the sample. However, a large electron interaction volume does not necessarily mean a large analytical volume. The beam electrons may spread out within a large volume, but if the electrons lack sufficient energy to produce the X-ray of interest, the analytical volume could be significantly smaller. Therefore there are two competing strategies for creating the smallest analytical volumes. The first strategy is to reduce the accelerating voltage to produce the smallest electron interaction volume. This low kV analytical approach is ultimately limited by the size of the electron beam itself. With a field emission gun, normally the smallest analytical area is achieved at around 5-7 kV. At lower accelerating voltages, the increase in the beam diameter begins to overshadow the reduction in internal scattering. For tungsten filament guns, the smallest analytical volume is reached at higher accelerating voltages. The second strategy is to minimize the overvoltage during the analysis. If the accelerating voltage is only 1-3 kV greater than the critical ionization energy for the X-ray line of interest, then even if the overall electron interaction volume is large, those electrons quickly lose sufficient energy to produce the desired X-rays. The portion of the interaction volume in which the desired X-rays will be produced will be very small and very near the surface. Both strategies have advantages and disadvantages depending on the ultimate goal of the analysis and the elements involved. This work will examine a number of considerations when attempting to decide which approach is best for a given analytical situation. These include: (1) the size of the analytical volumes, (2) minimum detection limits, (3) quality of the matrix corrections, (4) secondary fluorescence, (5) effects of surface contamination, oxide layers, and carbon coatings. This work is based on results largely from the Fe-Ni binary. A simple conclusion cannot be drawn as to which strategy is better overall. The determination is highly system dependent. For many mineral systems, both strategies used in combination will produce the best results. Using multiple accelerating voltages to perform a single analysis allows the analyst to optimize their analytical conditions for each element individually.

  3. The K-KIDS Sample: K Dwarfs within 50 Parsecs and the Search for their Closest Companions with CHIRON

    NASA Astrophysics Data System (ADS)

    Paredes-Alvarez, Leonardo; Nusdeo, Daniel Anthony; Henry, Todd J.; Jao, Wei-Chun; Gies, Douglas R.; White, Russel; RECONS Team

    2017-01-01

    To understand fundamental aspects of stellar populations, astronomers need carefully vetted, volume-complete samples. In our K-KIDS effort, our goal is to survey a large sample of K dwarfs for their "kids", companions that may be stellar, brown dwarf, or planetary in nature. Four surveys for companions orbiting an initial set of 1048 K dwarfs with declinations between +30° and -30° have begun. Companions are being detected with separations less than 1 AU out to 10000 AU. Fortuitously, the combination of Hipparcos and Gaia DR1 astrometry with optical photometry from APASS and infrared photometry from 2MASS now allows us to create an effectively volume-complete sample of K dwarfs to a horizon of 50 pc. This sample facilitates rigorous studies of the luminosity and mass functions, as well as comprehensive mapping of the companions orbiting K dwarfs that have never before been possible. Here we present two important results. First, we find that our initial sample of ~1000 K dwarfs can be expanded to 2000-3000 stars in what is an effectively volume-complete sample. This population is sufficiently large to provide superb statistics on the outcomes of star and planet formation processes. Second, initial results from our high-precision radial velocity survey of K dwarfs with the CHIRON spectrograph on the CTIO/SMARTS 1.5m reveal its short-term precision and indicate that stellar, brown dwarf, and Jovian planet companions will be detectable. We present radial velocity curves for an initial sample of 8 K dwarfs with V = 7-10 using cross-correlation techniques on R=80,000 spectra, and illustrate the stability of CHIRON over hours, days, and weeks. Ultimately, the combination of all four surveys will provide an unprecedented portrait of K dwarfs and their kids. This effort has been supported by the NSF through grants AST-1412026 and AST-1517413, and via observations made possible by the SMARTS Consortium.

  4. Characterizing sampling and quality screening biases in infrared and microwave limb sounding

    NASA Astrophysics Data System (ADS)

    Millán, Luis F.; Livesey, Nathaniel J.; Santee, Michelle L.; von Clarmann, Thomas

    2018-03-01

    This study investigates orbital sampling biases and evaluates the additional impact caused by data quality screening for the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) and the Aura Microwave Limb Sounder (MLS). MIPAS acts as a proxy for typical infrared limb emission sounders, while MLS acts as a proxy for microwave limb sounders. These biases were calculated for temperature and several trace gases by interpolating model fields to real sampling patterns and, additionally, screening those locations as directed by their corresponding quality criteria. Both instruments have dense uniform sampling patterns typical of limb emission sounders, producing almost identical sampling biases. However, there is a substantial difference between the number of locations discarded. MIPAS, as a mid-infrared instrument, is very sensitive to clouds, and measurements affected by them are thus rejected from the analysis. For example, in the tropics, the MIPAS yield is strongly affected by clouds, while MLS is mostly unaffected. The results show that upper-tropospheric sampling biases in zonally averaged data, for both instruments, can be up to 10 to 30 %, depending on the species, and up to 3 K for temperature. For MIPAS, the sampling reduction due to quality screening worsens the biases, leading to values as large as 30 to 100 % for the trace gases and expanding the 3 K bias region for temperature. This type of sampling bias is largely induced by the geophysical origins of the screening (e.g. clouds). Further, analysis of long-term time series reveals that these additional quality screening biases may affect the ability to accurately detect upper-tropospheric long-term changes using such data. In contrast, MLS data quality screening removes sufficiently few points that no additional bias is introduced, although its penetration is limited to the upper troposphere, while MIPAS may cover well into the mid-troposphere in cloud-free scenarios. We emphasize that the results of this study refer only to the representativeness of the respective data, not to their intrinsic quality.

  5. System and method for liquid extraction electrospray-assisted sample transfer to solution for chemical analysis

    DOEpatents

    Kertesz, Vilmos; Van Berkel, Gary J.

    2016-07-12

    A system for sampling a surface includes a surface sampling probe comprising a solvent liquid supply conduit and a distal end, and a sample collector for suspending a sample collection liquid adjacent to the distal end of the probe. A first electrode provides a first voltage to solvent liquid at the distal end of the probe. The first voltage produces a field sufficient to generate an electrospray plume at the distal end of the probe. A second electrode provides a second voltage and is positioned to produce a plume-directing field sufficient to direct the electrospray droplets and ions to the suspended sample collection liquid. The second voltage is less than the first voltage in absolute value. A voltage supply system supplies the voltages to the first electrode and the second electrode. The first electrode can apply the first voltage directly to the solvent liquid. A method for sampling a surface is also disclosed.

  6. Reflectance of metallic indium for solar energy applications

    NASA Technical Reports Server (NTRS)

    Bouquet, F. L.; Hasegawa, T.

    1984-01-01

    An investigation has been conducted in order to compile quantitative data on the reflective properties of metallic indium. The fabricated samples were of sufficiently high quality that differences from similar second-surface silvered mirrors were not apparent to the human eye. Three second-surface mirror samples were prepared by means of vacuum deposition techniques, yielding indium thicknesses of approximately 1000 Å. Both hemispherical and specular measurements were made. It is concluded that metallic indium possesses a sufficiently high specular reflectance to be potentially useful in many solar energy applications.

  7. Stochastic stability properties of jump linear systems

    NASA Technical Reports Server (NTRS)

    Feng, Xiangbo; Loparo, Kenneth A.; Ji, Yuandong; Chizeck, Howard J.

    1992-01-01

    Jump linear systems are defined as a family of linear systems with randomly jumping parameters (usually governed by a Markov jump process) and are used to model systems subject to failures or changes in structure. The authors study stochastic stability properties in jump linear systems and the relationship among various moment and sample path stability properties. It is shown that all second moment stability properties are equivalent and are sufficient for almost sure sample path stability, and a testable necessary and sufficient condition for second moment stability is derived. The Lyapunov exponent method for the study of almost sure sample stability is discussed, and a theorem which characterizes the Lyapunov exponents of jump linear systems is presented.
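
    For the discrete-time case, the testable second-moment condition alluded to above is usually stated as a set of coupled Lyapunov equations. The LaTeX fragment below summarizes the standard textbook form of that result (a paraphrase under the usual assumptions, not a transcription of the paper's notation).

        % Discrete-time jump linear system x_{k+1} = A(\theta_k) x_k, where \theta_k is a
        % finite-state Markov chain on modes {1,...,N} with transition probabilities p_{ij}.
        % Mean-square (second-moment) stability holds if and only if, for some (equivalently,
        % any) matrices Q_i \succ 0, there exist P_i \succ 0 solving the coupled equations
        \[
            A_i^{\top}\Bigl(\sum_{j=1}^{N} p_{ij}\,P_j\Bigr)A_i - P_i = -\,Q_i,
            \qquad i = 1,\dots,N.
        \]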

  8. A comparison of four-sample slope-intercept and single-sample 51Cr-EDTA glomerular filtration rate measurements.

    PubMed

    Porter, Charlotte A; Bradley, Kevin M; McGowan, Daniel R

    2018-05-01

    The aim of this study was to verify, with a large dataset of 1394 51Cr-EDTA glomerular filtration rate (GFR) studies, the equivalence of slope-intercept and single-sample GFR. Raw data from 1394 patient studies were used to calculate four-sample slope-intercept GFR in addition to four individual single-sample GFR values (blood samples taken at 90, 150, 210 and 270 min after injection). The percentage differences between the four-sample slope-intercept and each of the single-sample GFR values were calculated, to identify the optimum single-sample time point. Having identified the optimum time point, the percentage difference between the slope-intercept and optimal single-sample GFR was calculated across a range of GFR values to investigate whether there was a GFR value below which the two methodologies cannot be considered equivalent. It was found that the lowest percentage difference between slope-intercept and single-sample GFR was for the third blood sample, taken at 210 min after injection. The median percentage difference was 2.5% and only 6.9% of patient studies had a percentage difference greater than 10%. Above a GFR value of 30 ml/min/1.73 m², the median percentage difference between the slope-intercept and optimal single-sample GFR values was below 10%, and so it was concluded that, above this value, the two techniques are sufficiently equivalent. This study supports the recommendation of performing single-sample GFR measurements for GFRs greater than 30 ml/min/1.73 m².
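
    The comparison reduces to percentage differences between paired GFR estimates, summarised by their median and by the fraction exceeding 10%. A minimal Python sketch (the values are invented, not the study's 1394 records):

        import numpy as np

        # Hypothetical paired GFR estimates (ml/min/1.73 m^2) for a handful of studies.
        slope_intercept = np.array([92.0, 55.0, 31.0, 78.0, 64.0])
        single_sample_210 = np.array([95.0, 53.5, 33.0, 80.0, 62.0])   # 210 min sample

        pct_diff = 100.0 * np.abs(single_sample_210 - slope_intercept) / slope_intercept
        print(f"median difference: {np.median(pct_diff):.1f}%")
        print(f"fraction exceeding 10%: {np.mean(pct_diff > 10):.1%}")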

  9. Comparison of Grab, Air, and Surface Results for Radiation Site Characterization

    NASA Astrophysics Data System (ADS)

    Glassford, Eric Keith

    2011-12-01

    The use of proper sampling methods and sample types for evaluating sites believed to be contaminated with radioactive materials is necessary to avoid misrepresenting conditions at the site. This study was designed to investigate if the site characterization, based upon uranium contamination measured in different types of samples, is dependent upon the mass of the sample collected. A bulk sample of potentially contaminated interior dirt was collected from an abandoned metal processing mill that rolled uranium between 1948 and 1956. The original mill dates from 1910 and has a dirt floor. The bulk sample was a mixture of dirt, black and yellow particles of metal dust, and small fragments of natural debris. Small mass (approximately 0.75 grams (g)) and large mass (approximately 70 g) grab samples were prepared from the bulk sample material to simulate collection of a "grab" type sample. Air sampling was performed by re-suspending a portion of the bulk sample material using a vibration table to simulate airborne contamination that might be present during site remediation. Additionally, samples of removable contaminated surface dust were collected on 47 mm diameter filter paper by wiping the surfaces of the exposure chamber used to resuspend the bulk material. Certified reference materials, one containing a precisely known quantity of U3O8 and one containing a known quantity of natural uranium, were utilized to calibrate the gamma spectrometry measurement system. Non-destructive gamma spectrometry measurements were used to determine the content of uranium-235 (235U) at 185 keV and 143 keV, thorium-234 (234Th) at 63 keV, and protactinium-234m (234mPa) at 1001 keV in each sample. Measurement of natural uranium in small, 1 g samples is usually accomplished by radiochemical analysis in order to measure alpha particles emitted by 238U, 235U, and 234U. However, uranium in larger bulk samples can also be measured non-destructively using gamma spectrometry to detect the low energy photons from 234Th and 234mPa, the short-lived decay products of 238U, and 235U. Two-sided t-tests and coefficients of variation were used to compare sampling types. The large grab samples had the lowest calculated coefficient of variation results for activity and atom percentage. The wipe samples had the highest calculated coefficient of variation of mean specific activity (dis/sec/g) for all three energies. The air filter samples had the highest coefficient of variation calculation for mean atom percentage, for both uranium isotopes examined. The data indicated that the large mass samples were the most effective at characterizing the rolling mill radioactive site conditions, since they showed the smallest variations relative to the mean. Additionally, measurement results of natural uranium in the samples indicate that the distribution of radioactive contamination at the sampling location is most likely non-homogeneous and that the size of the sample collected and analyzed must be sufficiently large to insure that the analytical results are truly representative of the activity present.
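
    The summary statistics mentioned (two-sided t-tests and coefficients of variation) are straightforward to reproduce; the sketch below is a minimal Python/SciPy illustration with invented specific-activity values standing in for the measured sample types.

        import numpy as np
        from scipy import stats

        # Hypothetical specific activities (dis/sec/g) for two sample types.
        large_grab = np.array([12.1, 11.8, 12.4, 12.0, 11.9])
        small_grab = np.array([9.5, 14.2, 11.1, 15.8, 8.9])

        def cv(x):
            """Coefficient of variation: sample standard deviation relative to the mean."""
            return np.std(x, ddof=1) / np.mean(x)

        t_stat, p_value = stats.ttest_ind(large_grab, small_grab, equal_var=False)
        print(f"CV large grab = {cv(large_grab):.1%}, CV small grab = {cv(small_grab):.1%}")
        print(f"two-sided t-test: t = {t_stat:.2f}, p = {p_value:.3f}")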

  10. Design of pilot studies to inform the construction of composite outcome measures.

    PubMed

    Edland, Steven D; Ard, M Colin; Li, Weiwei; Jiang, Lingjing

    2017-06-01

    Composite scales have recently been proposed as outcome measures for clinical trials. For example, the Prodromal Alzheimer's Cognitive Composite (PACC) is the sum of z-score normed component measures assessing episodic memory, timed executive function, and global cognition. Alternative methods of calculating composite total scores using the weighted sum of the component measures that maximize signal-to-noise of the resulting composite score have been proposed. Optimal weights can be estimated from pilot data, but it is an open question how large a pilot trial is required to calculate reliably optimal weights. In this manuscript, we describe the calculation of optimal weights, and use large-scale computer simulations to investigate the question of how large a pilot study sample is required to inform the calculation of optimal weights. The simulations are informed by the pattern of decline observed in cognitively normal subjects enrolled in the Alzheimer's Disease Cooperative Study (ADCS) Prevention Instrument cohort study, restricting to n=75 subjects age 75 and over with an ApoE E4 risk allele and therefore likely to have an underlying Alzheimer neurodegenerative process. In the context of secondary prevention trials in Alzheimer's disease, and using the components of the PACC, we found that pilot studies as small as 100 are sufficient to meaningfully inform weighting parameters. Regardless of the pilot study sample size used to inform weights, the optimally weighted PACC consistently outperformed the standard PACC in terms of statistical power to detect treatment effects in a clinical trial. Pilot studies of size 300 produced weights that achieved near-optimal statistical power, and reduced required sample size relative to the standard PACC by more than half. These simulations suggest that modestly sized pilot studies, comparable to that of a phase 2 clinical trial, are sufficient to inform the construction of composite outcome measures. Although these findings apply only to the PACC in the context of prodromal AD, the observation that weights only have to approximate the optimal weights to achieve near-optimal performance should generalize. Performing a pilot study or phase 2 trial to inform the weighting of proposed composite outcome measures is highly cost-effective. The net effect of more efficient outcome measures is that smaller trials will be required to test novel treatments. Alternatively, second generation trials can use prior clinical trial data to inform weighting, so that greater efficiency can be achieved as we move forward.
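
    A common way to weight components so that the composite change score has maximal signal-to-noise is to take weights proportional to the inverse covariance of the component changes times their mean decline, both estimated from pilot data. The sketch below (Python, simulated pilot data with assumed means and covariance) illustrates that calculation; it is consistent with the description above but is not necessarily the authors' exact estimator.

        import numpy as np

        rng = np.random.default_rng(1)
        mean_decline = np.array([0.30, 0.15, 0.10])          # assumed mean change per component
        cov = np.array([[1.0, 0.3, 0.2],
                        [0.3, 1.0, 0.4],
                        [0.2, 0.4, 1.0]])                    # assumed covariance of changes
        pilot = rng.multivariate_normal(mean_decline, cov, size=300)   # simulated pilot, n = 300

        mu_hat = pilot.mean(axis=0)
        sigma_hat = np.cov(pilot, rowvar=False)
        w = np.linalg.solve(sigma_hat, mu_hat)               # weights proportional to Sigma^-1 mu
        w /= np.abs(w).sum()                                 # scale is arbitrary; normalize for display

        composite = pilot @ w
        snr = composite.mean() / composite.std(ddof=1)       # mean-to-SD ratio of the composite
        print(f"weights = {np.round(w, 3)}, signal-to-noise = {snr:.2f}")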

  11. The VLT-FLAMES survey of massive stars: wind properties and evolution of hot massive stars in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Mokiem, M. R.; de Koter, A.; Evans, C. J.; Puls, J.; Smartt, S. J.; Crowther, P. A.; Herrero, A.; Langer, N.; Lennon, D. J.; Najarro, F.; Villamariz, M. R.; Vink, J. S.

    2007-04-01

    We have studied the optical spectra of a sample of 28 O- and early B-type stars in the Large Magellanic Cloud, 22 of which are associated with the young star forming region N11. Our observations sample the central associations of LH9 and LH10, and the surrounding regions. Stellar parameters are determined using an automated fitting method (Mokiem et al. 2005), which combines the stellar atmosphere code fastwind (Puls et al. 2005) with the genetic algorithm based optimisation routine pikaia (Charbonneau 1995). We derive an age of 7.0 ± 1.0 and 3.0 ± 1.0 Myr for LH9 and LH10, respectively. The age difference and relative distance of the associations are consistent with a sequential star formation scenario in which stellar activity in LH9 triggered the formation of LH10. Our sample contains four stars of spectral type O2. From helium and hydrogen line fitting we find the hottest three of these stars to be 49-54 kK (compared to 45-46 kK for O3 stars). Detailed determination of the helium mass fraction reveals that the masses of helium enriched dwarfs and giants derived in our spectroscopic analysis are systematically lower than those implied by non-rotating evolutionary tracks. We interpret this as evidence for efficient rotationally enhanced mixing leading to the surfacing of primary helium and to an increase of the stellar luminosity. This result is consistent with findings for SMC stars by Mokiem et al. (2006). For bright giants and supergiants no such mass discrepancy is found; these stars therefore appear to follow tracks of modestly or non-rotating objects. The set of programme stars was sufficiently large to establish the mass loss rates of OB stars in this Z ~ 1/2 Z⊙ environment sufficiently accurately to allow for a quantitative comparison with similar objects in the Galaxy and the SMC. The mass loss properties are found to be intermediate to massive stars in the Galaxy and SMC. Comparing the derived modified wind momenta D_mom as a function of luminosity with predictions for LMC metallicities by Vink et al. (2001) yields good agreement in the entire luminosity range that was investigated, i.e. 5.0 < log L/L⊙ < 6.1. Appendix A is only available in electronic form at http://www.aanda.org

  12. Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets

    PubMed Central

    Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L

    2014-01-01

    Background As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze it. Methods Bionimbus is an open source cloud-computing platform that is based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, which is a high-performance clustered file system. Bionimbus also includes Tukey, which is a portal, and associated middleware that provides a single entry point and a single sign on for the various Bionimbus resources; and Yates, which automates the installation, configuration, and maintenance of the software infrastructure required. Results Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h. BAM file sizes ranged from 5 GB to 10 GB for each sample. Conclusions Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms, such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics. PMID:24464852

  13. Viscous Analysis of Pulsating Hydrodynamic Instability and Thermal Coupling Liquid-Propellant Combustion

    NASA Technical Reports Server (NTRS)

    Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)

    2000-01-01

    A pulsating form of hydrodynamic instability has recently been shown to arise during liquid-propellant deflagration in those parameter regimes where the pressure-dependent burning rate is characterized by a negative pressure sensitivity. This type of instability can coexist with the classical cellular, or Landau, form of hydrodynamic instability, with the occurrence of either dependent on whether the pressure sensitivity is sufficiently large or small in magnitude. For the inviscid problem, it has been shown that, when the burning rate is realistically allowed to depend on temperature as well as pressure, sufficiently large values of the temperature sensitivity relative to the pressure sensitivity cause the pulsating form of hydrodynamic instability to become dominant. In that regime, steady, planar burning becomes intrinsically unstable to pulsating disturbances whose wave numbers are sufficiently small. This analysis is extended to the fully viscous case, where it is shown that although viscosity is stabilizing for intermediate and larger wave number perturbations, the intrinsic pulsating instability for small wave numbers remains. Under these conditions, liquid-propellant combustion is predicted to be characterized by large unsteady cells along the liquid/gas interface.

  14. Preanalytical Errors in Hematology Laboratory- an Avoidable Incompetence.

    PubMed

    Kaur, Harsimran; Narang, Vikram; Selhi, Pavneet Kaur; Sood, Neena; Singh, Aminder

    2016-01-01

    Quality assurance in the hematology laboratory is a must to assure laboratory users of reliable test results with a high degree of precision and accuracy. Even after so many advances in hematology laboratory practice, pre-analytical errors remain a challenge for practicing pathologists. This study was undertaken with the objective of evaluating the types and frequency of preanalytical errors in the hematology laboratory of our center. All the samples received in the Hematology Laboratory of Dayanand Medical College and Hospital, Ludhiana, India over a period of one year (July 2013-July 2014) were included in the study and preanalytical variables like clotted samples, quantity not sufficient, wrong sample, without label, wrong label were studied. Of 471,006 samples received in the laboratory, preanalytical errors, as per the above-mentioned categories, were found in 1802 samples. The most common error was clotted samples (1332 samples, 0.28% of the total samples) followed by quantity not sufficient (328 samples, 0.06%), wrong sample (96 samples, 0.02%), without label (24 samples, 0.005%) and wrong label (22 samples, 0.005%). Preanalytical errors are frequent in laboratories and can be corrected by regular analysis of the variables involved. Rectification can be done by regular education of the staff.

  15. A rapid analytical method to quantify complex organohalogen contaminant mixtures in large samples of high lipid mammalian tissues.

    PubMed

    Desforges, Jean-Pierre; Eulaers, Igor; Periard, Luke; Sonne, Christian; Dietz, Rune; Letcher, Robert J

    2017-06-01

    In vitro investigations of the health impact of individual chemical compounds have traditionally been used in risk assessments. However, humans and wildlife are exposed to a plethora of potentially harmful chemicals, including organohalogen contaminants (OHCs). An alternative exposure approach to individual or simple mixtures of synthetic OHCs is to isolate the complex mixture present in free-ranging wildlife, often non-destructively sampled from lipid rich adipose. High concentration stock volumes required for in vitro investigations do, however, pose a great analytical challenge to extract sufficient amounts of complex OHC cocktails. Here we describe a novel method to easily, rapidly and efficiently extract an environmentally accumulated and therefore relevant contaminant cocktail from large (10-50 g) marine mammal blubber samples. We demonstrate that lipid freeze-filtration with acetonitrile removes up to 97% of blubber lipids, with minimal effect on the efficiency of OHC recovery. Sample extracts after freeze-filtration were further processed to remove residual trace lipids via high-pressure gel permeation chromatography and solid phase extraction. Average recoveries of OHCs from triplicate analysis of killer whale (Orcinus orca), polar bear (Ursus maritimus) and pilot whale (Globicephala spp.) blubber standard reference material (NIST SRM-1945) ranged from 68 to 80%, 54-92% and 58-145%, respectively, for 13C-enriched internal standards of six polychlorinated biphenyl congeners, 16 organochlorine pesticides and four brominated flame retardants. This approach to rapidly generate OHC mixtures shows great potential for experimental exposures using complex contaminant mixtures, research or monitoring driven contaminant quantification in biological samples, as well as the untargeted identification of emerging contaminants. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Biobriefcase aerosol collector

    DOEpatents

    Bell, Perry M [Tracy, CA; Christian, Allen T [Madison, WI; Bailey, Christopher G [Pleasanton, CA; Willis, Ladona [Manteca, CA; Masquelier, Donald A [Tracy, CA; Nasarabadi, Shanavaz L [Livermore, CA

    2009-09-22

    A system for sampling air and collecting particles entrained in the air that potentially include bioagents. The system comprises providing a receiving surface, directing a liquid to the receiving surface and producing a liquid surface. Collecting samples of the air and directing the samples of air so that the samples of air with particles entrained in the air impact the liquid surface. The particles potentially including bioagents become captured in the liquid. The air with particles entrained in the air impacts the liquid surface with sufficient velocity to entrain the particles into the liquid but cause minor turbulence. The liquid surface has a surface tension and the collector samples the air and directs the air to the liquid surface so that the air with particles entrained in the air impacts the liquid surface with sufficient velocity to entrain the particles into the liquid, but cause minor turbulence on the surface resulting in insignificant evaporation of the liquid.

  17. Broad Surveys of DNA Viral Diversity Obtained through Viral Metagenomics of Mosquitoes

    PubMed Central

    Ng, Terry Fei Fan; Willner, Dana L.; Lim, Yan Wei; Schmieder, Robert; Chau, Betty; Nilsson, Christina; Anthony, Simon; Ruan, Yijun; Rohwer, Forest; Breitbart, Mya

    2011-01-01

    Viruses are the most abundant and diverse genetic entities on Earth; however, broad surveys of viral diversity are hindered by the lack of a universal assay for viruses and the inability to sample a sufficient number of individual hosts. This study utilized vector-enabled metagenomics (VEM) to provide a snapshot of the diversity of DNA viruses present in three mosquito samples from San Diego, California. The majority of the sequences were novel, suggesting that the viral community in mosquitoes, as well as the animal and plant hosts they feed on, is highly diverse and largely uncharacterized. Each mosquito sample contained a distinct viral community. The mosquito viromes contained sequences related to a broad range of animal, plant, insect and bacterial viruses. Animal viruses identified included anelloviruses, circoviruses, herpesviruses, poxviruses, and papillomaviruses, which mosquitoes may have obtained from vertebrate hosts during blood feeding. Notably, sequences related to human papillomaviruses were identified in one of the mosquito samples. Sequences similar to plant viruses were identified in all mosquito viromes, which were potentially acquired through feeding on plant nectar. Numerous bacteriophages and insect viruses were also detected, including a novel densovirus likely infecting Culex erythrothorax. Through sampling insect vectors, VEM enables broad survey of viral diversity and has significantly increased our knowledge of the DNA viruses present in mosquitoes. PMID:21674005

  18. A LOW-E MAGIC ANGLE SPINNING PROBE FOR BIOLOGICAL SOLID STATE NMR AT 750 MHz

    PubMed Central

    McNeill, Seth A.; Gor’kov, Peter L.; Shetty, Kiran; Brey, William W.; Long, Joanna R.

    2009-01-01

    Crossed-coil NMR probes are a useful tool for reducing sample heating for biological solid state NMR. In a crossed-coil probe, the higher frequency 1H field, which is the primary source of sample heating in conventional probes, is produced by a separate low-inductance resonator. Because a smaller driving voltage is required, the electric field across the sample and the resultant heating is reduced. In this work we describe the development of a magic angle spinning (MAS) solid state NMR probe utilizing a dual resonator. This dual resonator approach, referred to as “Low-E,” was originally developed to reduce heating in samples of mechanically aligned membranes. The study of inherently dilute systems, such as proteins in lipid bilayers, via MAS techniques requires large sample volumes at high field to obtain spectra with adequate signal-to-noise ratio under physiologically relevant conditions. With the Low-E approach, we are able to obtain homogeneous and sufficiently strong radiofrequency fields for both 1H and 13C frequencies in a 4 mm probe with a 1H frequency of 750 MHz. The performance of the probe using windowless dipolar recoupling sequences is demonstrated on model compounds as well as membrane embedded peptides. PMID:19138870

  19. EXACT DISTRIBUTIONS OF INTRACLASS CORRELATION AND CRONBACH'S ALPHA WITH GAUSSIAN DATA AND GENERAL COVARIANCE.

    PubMed

    Kistner, Emily O; Muller, Keith E

    2004-09-01

    Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
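
    The following minimal Python sketch (added here for illustration; it is not code from the paper) computes Cronbach's alpha from an n-subjects-by-k-items score matrix and attaches a generic percentile-bootstrap confidence interval. The exact distributions and F approximations described above are not reproduced; the bootstrap is only a simple stand-in.

        import numpy as np

        def cronbach_alpha(X):
            """Cronbach's alpha for an (n_subjects x k_items) score matrix."""
            X = np.asarray(X, dtype=float)
            k = X.shape[1]
            item_var_sum = X.var(axis=0, ddof=1).sum()
            total_var = X.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var_sum / total_var)

        rng = np.random.default_rng(0)
        # Simulated Gaussian data with a general (non-compound-symmetric) covariance.
        cov = np.array([[1.0, 0.6, 0.4],
                        [0.6, 1.5, 0.5],
                        [0.4, 0.5, 2.0]])
        X = rng.multivariate_normal(np.zeros(3), cov, size=100)

        alpha_hat = cronbach_alpha(X)
        # Generic percentile bootstrap interval, a stand-in for the exact/F-based intervals.
        boot = [cronbach_alpha(X[rng.integers(0, len(X), len(X))]) for _ in range(2000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"alpha = {alpha_hat:.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")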

  20. High-Resolution Large Field-of-View FUV Compact Camera

    NASA Technical Reports Server (NTRS)

    Spann, James F.

    2006-01-01

    The need for a high-resolution camera with a large field of view that is capable of imaging dim far-ultraviolet emissions is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible-light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at geosynchronous altitudes with a spatial resolution of >20 km. The optics and filters are emphasized.

  1. Stardust in STARDUST - the C, N, and O Isotopic Compositions of Wild 2 Cometary Matter in Al Foil Impacts

    NASA Technical Reports Server (NTRS)

    Stadermann, Frank J.; Hoppe, Peter; Floss, Christine; Heck, Philipp R.; Hoerz, Friedrich; Huth, Joachim; Kearsley, Anton T.; Leitner, Jan; Marhas, Kuljeet K.; McKeegan, Kevin D.; hide

    2007-01-01

    In January 2006, the Stardust mission successfully returned dust samples from the tail of comet 81P/Wild 2 in two principal collection media, low density silica aerogel and Al foil. While hypervelocity impacts at the Stardust encounter velocity of 6.1 kilometers per second into Al foils are generally highly disruptive for natural, silicate-dominated impactors, previous studies have shown that many craters retain sufficient residue to allow a determination of the elemental and isotopic compositions of the original projectile. We have used two NanoSIMS ion microprobes to perform C, N, and O isotope imaging measurements on four large (59-295 micrometer diameter) and on 47 small (0.32-1.9 micrometer diameter) Al foil impact craters as part of the Stardust Preliminary Examination. Most analyzed residues in and around these craters are isotopically normal (solar) in their C, N, and O isotopic compositions. However, the debris in one large crater shows an average N-15 enrichment of approx. 450‰, which is similar to the bulk composition of some isotopically primitive interplanetary dust particles and to components of some primitive meteorites. A 250 nm grain in another large crater has an O-17 enrichment with approx. 2.65 times the solar O-17/O-16 ratio. Such an O isotopic composition is typical for circumstellar oxide or silicate grains from red giant or asymptotic giant branch stars. The discovery of this circumstellar grain clearly establishes that there is authentic stardust in the cometary samples returned by the Stardust mission. However, the low apparent abundance of circumstellar grains in Wild 2 samples and the preponderance of isotopically normal material indicates that the cometary matter is a diverse assemblage of presolar and solar system materials.

  2. Stardust in STARDUST - the C, N, and O Isotopic Compositions of Wild 2 Cometary Matter in Al foil Impacts

    NASA Technical Reports Server (NTRS)

    Stadermann, Frank J.; Hoppe, Peter; Floss, Christine; Hoerz, Friedrich; Huth, Joachim; Kearsley, Anton T.; Leitner, Jan; Marhas, Kuljeet K.; McKeegan, Kevin D.; Stephan, Thomas; hide

    2007-01-01

    In January 2006, the STARDUST mission successfully returned dust samples from the tail of comet 81P/Wild 2 in two principal collection media, low density silica aerogel and Al foil. While hypervelocity impacts at 6.1 km/s, the encounter velocity of STARDUST, into Al foils are generally highly disruptive for natural, silicate-dominated impactors, previous studies have shown that many craters retain sufficient residue to allow a determination of the elemental and isotopic compositions of the original projectile. We have used the NanoSIMS to perform C, N, and O isotope imaging measurements on four large (59-370 microns diameter) and on 47 small (0.32-1.9 microns diameter) Al foil impact craters as part of the STARDUST Preliminary Examination. Most analyzed residues in and around these craters are isotopically normal (solar) in their C, N, and O isotopic compositions. However, the debris in one large crater shows an average 15N enrichment of approx. 450‰, which is similar to the bulk composition of some isotopically primitive interplanetary dust particles. A 250 nm grain in another large crater has an O-17 enrichment with approx. 2.65 times the solar O-17/O-16 ratio. Such an O isotopic composition is typical for circumstellar oxide or silicate grains from red giant or asymptotic giant branch stars. The discovery of this circumstellar grain clearly establishes that there is authentic stardust in the cometary samples returned by the STARDUST mission. However, the low apparent abundance of circumstellar grains in Wild 2 samples and the preponderance of isotopically normal material indicates that the cometary matter is a diverse assemblage of presolar and solar system materials.

  3. A post-mortem survey on end-of-life decisions using a representative sample of death certificates in Flanders, Belgium: research protocol

    PubMed Central

    Chambaere, Kenneth; Bilsen, Johan; Cohen, Joachim; Pousset, Geert; Onwuteaka-Philipsen, Bregje; Mortier, Freddy; Deliens, Luc

    2008-01-01

    Background Reliable studies of the incidence and characteristics of medical end-of-life decisions with a certain or possible life shortening effect (ELDs) are indispensable for an evidence-based medical and societal debate on this issue. This article presents the protocol drafted for the 2007 ELD Study in Flanders, Belgium, and outlines how the main aims and challenges of the study (i.e. making reliable incidence estimates of end-of-life decisions, even rare ones, and describing their characteristics; allowing comparability with past ELD studies; guaranteeing strict anonymity given the sensitive nature of the research topic; and attaining a sufficient response rate) are addressed in a post-mortem survey using a representative sample of death certificates. Study design Reliable incidence estimates are achievable by using large random samples of death certificates of deceased persons in Flanders (aged one year or older). This entails the cooperation of the appropriate administrative authorities. To further ensure the reliability of the estimates and descriptions, especially of less prevalent end-of-life decisions (e.g. euthanasia), a stratified sample is drawn. A questionnaire is sent out to the certifying physician of each death sampled. The questionnaire, thoroughly tested and avoiding emotionally charged terms, is based largely on questions that have been validated in previous national and European ELD studies. Anonymity of both patient and physician is guaranteed through a rigorous procedure, involving a lawyer as intermediary between responding physicians and researchers. To increase response we follow the Total Design Method (TDM) with a maximum of three follow-up mailings. Also, a non-response survey is conducted to gain insight into the reasons for lack of response. Discussion The protocol of the 2007 ELD Study in Flanders, Belgium, is appropriate for achieving the objectives of the study; as past studies in Belgium, the Netherlands, and other European countries have shown, strictly anonymous and thorough surveys among physicians using a large, stratified, and representative death certificate sample are most suitable in nationwide studies of incidence and characteristics of end-of-life decisions. There are, however, also some limitations to the study design. PMID:18752659

  4. 12 CFR 715.8 - Requirements for verification of accounts and passbooks.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... selection: (ii) A sample which is representative of the population from which it was selected; (iii) An equal chance of selecting each dollar in the population; (iv) Sufficient accounts in both number and... consistent with GAAS if such methods provide for: (i) Sufficient accounts in both number and scope on which...

  5. Empirical evaluation of sufficient similarity in dose-response for environmental risk assessment of a mixture of 11 pyrethroids.

    EPA Science Inventory

    Chemical mixtures in the environment are often the result of a dynamic process. When dose-response data are available on random samples throughout the process, equivalence testing can be used to determine whether the mixtures are sufficiently similar based on a pre-specified biol...

  6. Self-assembled ordered structures in thin films of HAT5 discotic liquid crystal.

    PubMed

    Morales, Piero; Lagerwall, Jan; Vacca, Paolo; Laschat, Sabine; Scalia, Giusy

    2010-05-20

    Thin films of the discotic liquid crystal hexapentyloxytriphenylene (HAT5), prepared from solution via casting or spin-coating, were investigated by atomic force microscopy and polarizing optical microscopy, revealing large-scale ordered structures substantially different from those typically observed in standard samples of the same material. Thin and very long fibrils of planar-aligned liquid crystal were found, possibly formed as a result of an intermediate lyotropic nematic state arising during the solvent evaporation process. Moreover, in sufficiently thin films the crystallization seems to be suppressed, extending the uniform order of the liquid crystal phase down to room temperature. This should be compared to the bulk situation, where the same material crystallizes into a polymorphic structure at 68 °C.

  7. Experimental studies of a zeeman-tuned xenon laser differential absorption apparatus.

    PubMed

    Linford, G J

    1973-06-01

    A Zeeman-tuned cw xenon laser differential absorption device is described. The xenon laser was tuned by axial magnetic fields up to 5500 G generated by an unusually large water-cooled dc solenoid. Xenon laser lines at 3.37 μm, 3.51 μm, and 3.99 μm were tuned over ranges of 6 Å, 6 Å, and 11 Å, respectively. To date, this apparatus has been used principally to study the details of formaldehyde absorption lines lying near the 3.508-μm xenon laser transition. These experiments revealed that the observed absorption spectrum of formaldehyde exhibits a sufficiently unique spectral structure that the present technique may readily be used to measure relative concentrations of formaldehyde in samples of polluted air.

  8. Estimation of breeding values using selected pedigree records.

    PubMed

    Morton, Richard; Howarth, Jordan M

    2005-06-01

    Fish bred in tanks or ponds cannot be easily tagged individually. The parentage of any individual may be determined by DNA fingerprinting, but this is sufficiently expensive that large numbers cannot be fingerprinted. The measurement of the objective trait can be made on a much larger sample relatively cheaply. This article deals with experimental designs for selecting individuals to be fingerprinted and for the estimation of the individual and family breeding values. The general setup provides estimates for both genetic effects regarded as fixed or random and for fixed effects due to known regressors. The family effects can be well estimated when even very small numbers are fingerprinted, provided that they are the individuals with the most extreme phenotypes.

  9. Toward detecting California shrubland canopy chemistry with AIS data

    NASA Technical Reports Server (NTRS)

    Price, Curtis V.; Westman, Walter E.

    1987-01-01

    Airborne Imaging Spectrometer (AIS)-2 data of coastal sage scrub vegetation were examined for fine spectral features that might be used to predict concentrations of certain canopy chemical constituents. A Fourier notch filter was applied to the AIS data and the TREE and ROCK mode spectra were ratioed to a flat field. Portions of the resulting spectra resemble spectra for plant cellulose and starch in that both show reduced reflectance at 2100 and 2270 nm. The latter are regions of absorption of energy by organic bonds found in starch and cellulose. Whether the relationship is sufficient to predict the concentration of these chemicals from AIS spectra will require testing of the predictive ability of these wavebands with large field sample sizes.

  10. Effect of simethicone on lactulose-induced H2 production and gastrointestinal symptoms.

    PubMed

    Friis, H; Bodé, S; Rumessen, J J; Gudmand-Høyer, E

    1991-01-01

    The results of studies of the effect of simethicone on abdominal gas-related symptoms have been contradictory. In a randomized, double-blind cross-over study, 10 healthy volunteers were given 30 g lactulose and 600 mg simethicone or placebo. End-expiratory breath samples were collected and analyzed for H2, and gastrointestinal symptoms were registered. There were no differences in biochemical parameters or symptom score between simethicone and placebo. In contrast to previous studies, we used a sufficiently large dose of lactulose to produce gastrointestinal symptoms, a higher dose of simethicone, and placebo tablets containing the same additives as the simethicone tablets. There was no demonstrable effect of simethicone on symptoms or intestinal gas production caused by carbohydrate malabsorption.

  11. Symmetry-breaking phase transitions in highly concentrated semen

    PubMed Central

    Creppy, Adama; Plouraboué, Franck; Praud, Olivier; Druart, Xavier; Cazin, Sébastien; Yu, Hui

    2016-01-01

    New experimental evidence of self-motion of a confined active suspension is presented. Depositing a fresh semen sample in an annular-shaped microfluidic chip leads to a spontaneous vortex state of the fluid at sufficiently large sperm concentration. The rotation occurs unpredictably clockwise or counterclockwise and is robust and stable. Furthermore, for highly active and concentrated semen, richer dynamics can occur such as self-sustained or damped rotation oscillations. Experimental results obtained with systematic dilution provide clear evidence of a phase transition towards collective motion associated with local alignment of spermatozoa akin to the Vicsek model. A macroscopic theory based on previously derived self-organized hydrodynamics models is adapted to this context and provides predictions consistent with the observed stationary motion. PMID:27733694

  12. FIRST MAGNETIC FIELD MODELS FOR RECENTLY DISCOVERED MAGNETIC {beta} CEPHEI AND SLOWLY PULSATING B STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hubrig, S.; Ilyin, I.; Schoeller, M.

    2011-01-01

    In spite of recent detections of magnetic fields in a number of {beta} Cephei and slowly pulsating B (SPB) stars, their impact on stellar rotation, pulsations, and element diffusion has not yet been sufficiently studied. The reason for this is the lack of knowledge of rotation periods, the magnetic field strength distribution and temporal variability, and the field geometry. New longitudinal field measurements of four {beta} Cephei and candidate {beta} Cephei stars, and two SPB stars were acquired with FORS 2 at the Very Large Telescope. These measurements allowed us to carry out a search for rotation periods and to constrain the magnetic field geometry for four stars in our sample.

  13. A Thick Target for Synchrotrons and Betatrons

    DOE R&D Accomplishments Database

    McMillan, E. M.

    1950-09-19

    If a wide x-ray beam from an electron synchrotron or betatron is desired, in radiographic work with large objects for example, the usually very thin target may be replaced by a thick one, provided the resulting distortion of the x-ray spectrum due to multiple radiative processes is permissible. It is difficult to make the circulating electron beam traverse a thick target directly because of the small spacing between successive turns. Mounting a very thin beryllium, or other low-Z material, fin on the edge of the thick target so that the fin projects into the beam will cause the beam to lose sufficient energy, and therefore radius, to strike the thick target the next time around. Sample design calculations are given.

  14. The X-Ray Light Curve of the Very Luminous Supernova SN 1978K in NGC 1313

    NASA Astrophysics Data System (ADS)

    Schlegel, Eric M.; Petre, R.; Colbert, E. J. M.

    1996-01-01

    We present the 0.5-2.0 keV light curve of the X-ray luminous supernova SN 1978K in NGC 1313, based on six ROSAT observations spanning 1990 July to 1994 July. SN 1978K is one of a few supernovae or supernova remnants that are very luminous (~10^39-10^40 ergs s^-1) in the X-ray, optical, and radio bands, and the first, at a supernova age of 10-20 yr, for which sufficient data exist to create an X-ray light curve. The X-ray flux is approximately constant over the 4 yr sampled by our observations, which were obtained 12-16 yr after the initial explosion. Three models exist to explain the large X-ray luminosity: pulsar input, a reverse shock running back into the expanding debris of the supernova, and the outgoing shock crushing of cloudlets in the debris field. Based upon calculations of Chevalier & Fransson, a pulsar cannot provide sufficient energy to produce the soft X-ray luminosity. Based upon the models and the light curve to date, it is not possible to discern the evolutionary phase of the supernova.

  15. Amendment to examination and investigation sample requirements--FDA. Direct final rule.

    PubMed

    1998-09-25

    The Food and Drug Administration (FDA) is amending its regulations regarding the collection of twice the quantity of food, drug, or cosmetic estimated to be sufficient for analysis. This action increases the dollar amount that FDA will consider to determine whether to routinely collect a reserve sample of a food, drug, or cosmetic product in addition to the quantity sufficient for analysis. Experience has demonstrated that the current dollar amount does not adequately cover the cost of most quantities sufficient for analysis plus reserve samples. This direct final rule is part of FDA's continuing effort to achieve the objectives of the President's "Reinventing Government" initiative, and is intended to reduce the burden of unnecessary regulations on food, drugs, and cosmetics without diminishing the protection of the public health. Elsewhere in this issue of the Federal Register, FDA is publishing a companion proposed rule under FDA's usual procedures for notice and comment to provide a procedural framework to finalize the rule in the event the agency receives any significant adverse comment and withdraws this direct final rule.

  16. ADEQUACY OF VISUALLY CLASSIFIED PARTICLE COUNT STATISTICS FROM REGIONAL STREAM HABITAT SURVEYS

    EPA Science Inventory

    Streamlined sampling procedures must be used to achieve a sufficient sample size with limited resources in studies undertaken to evaluate habitat status and potential management-related habitat degradation at a regional scale. At the same time, these sampling procedures must achi...

  17. 40 CFR 265.91 - Ground-water monitoring system.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sufficient to yield ground-water samples that are: (i) Representative of background ground-water quality in... not required provided that provisions for sampling upgradient and downgradient water quality will... perforated, and packed with gravel or sand where necessary, to enable sample collection at depths where...

  18. Large-Scale Transient Transfection of Suspension Mammalian Cells for VLP Production.

    PubMed

    Cervera, Laura; Kamen, Amine A

    2018-01-01

    Large-scale transient transfection of mammalian cell suspension cultures enables the production of biological products in sufficient quantity and under stringent quality attributes to perform accelerated in vitro evaluations and has the potential to support preclinical or even clinical studies. Here we describe the methodology to produce VLPs in a 3 L bioreactor, using suspension HEK 293 cells and PEIPro as a transfection reagent. Cells are grown in the bioreactor to 1 × 10^6 cells/mL and transfected with a plasmid DNA-PEI complex at a ratio of 1:2. Dissolved oxygen and pH are controlled and monitored online during the production phase, and cell growth and viability can be measured offline by taking samples from the bioreactor. If the product is labeled with a fluorescent marker, transfection efficiency can also be assessed using flow cytometry analysis. Typically, the production phase lasts between 48 and 96 h until the product is harvested.

  19. Natural snowfall reveals large-scale flow structures in the wake of a 2.5-MW wind turbine.

    PubMed

    Hong, Jiarong; Toloui, Mostafa; Chamorro, Leonardo P; Guala, Michele; Howard, Kevin; Riley, Sean; Tucker, James; Sotiropoulos, Fotis

    2014-06-24

    To improve power production and structural reliability of wind turbines, there is a pressing need to understand how turbines interact with the atmospheric boundary layer. However, experimental techniques capable of quantifying or even qualitatively visualizing the large-scale turbulent flow structures around full-scale turbines do not exist today. Here we use snowflakes from a winter snowstorm as flow tracers to obtain velocity fields downwind of a 2.5-MW wind turbine in a sampling area of ~36 × 36 m^2. The spatial and temporal resolutions of the measurements are sufficiently high to quantify the evolution of blade-generated coherent motions, such as the tip and trailing sheet vortices, identify their instability mechanisms and correlate them with turbine operation, control and performance. Our experiment provides an unprecedented in situ characterization of flow structures around utility-scale turbines, and yields significant insights into the Reynolds number similarity issues presented in wind energy applications.

  20. Parameter determination of hereditary models of deformation of composite materials based on identification method

    NASA Astrophysics Data System (ADS)

    Kayumov, R. A.; Muhamedova, I. Z.; Tazyukov, B. F.; Shakirzjanov, F. R.

    2018-03-01

    In this paper, hereditary models of the deformation of reinforced polymeric composite materials, such as organo-plastic, carbon plastic and the matrix of a film-fabric composite, are studied and selected on the basis of experimental data. Analysis of a series of experiments established that organo-plastic samples behave like viscoelastic bodies. It is shown that for sufficiently large load levels, the behavior of the material in question should be described by the relations of the nonlinear theory of heredity. An attempt to describe the deformation process by means of the linear relations of the theory of heredity leads to large discrepancies between the experimental and calculated deformation values. The use of the theory of accumulation of micro-damage leads to a much better description of the experimental results. With the hierarchical approach, a good approximation of the experimental values was achieved only in the first three loading sections.

  1. The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival

    PubMed Central

    Kordjazi, Ziya; Frusher, Stewart; Buxton, Colin; Gardner, Caleb; Bird, Tomas

    2016-01-01

    Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required for obtaining precise survival estimations for males and females, separately. Reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was sufficient for management of the fishery. PMID:26990561
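
    As a rough illustration of the subsampling idea (hypothetical numbers, and a simple binomial stand-in rather than a fitted Cormack-Jolly-Seber model), the Python sketch below repeatedly draws subsamples of different sizes and shows how the spread of the resulting estimates shrinks as sample size grows.

        import numpy as np

        rng = np.random.default_rng(42)
        # Hypothetical full dataset: 1 = individual re-encountered the following year, 0 = not seen.
        full = rng.binomial(1, 0.6, size=1000)

        for n in (200, 400, 600, 800, 1000):
            estimates = []
            for _ in range(500):                      # repeated random subsampling without replacement
                sub = rng.choice(full, size=n, replace=False)
                estimates.append(sub.mean())          # naive "apparent survival" estimate
            print(f"n={n:4d}  mean={np.mean(estimates):.3f}  SD={np.std(estimates):.3f}")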

  2. Consensus for second-order multi-agent systems with position sampled data

    NASA Astrophysics Data System (ADS)

    Wang, Rusheng; Gao, Lixin; Chen, Wenhai; Dai, Dameng

    2016-10-01

    In this paper, the consensus problem with position sampled data for second-order multi-agent systems is investigated. The interaction topology among the agents is depicted by a directed graph. The full-order and reduced-order observers with position sampled data are proposed, by which two kinds of sampled data-based consensus protocols are constructed. With the provided sampled protocols, the consensus convergence analysis of a continuous-time multi-agent system is equivalently transformed into that of a discrete-time system. Then, by using matrix theory and a sampled control analysis method, some sufficient and necessary consensus conditions based on the coupling parameters, spectrum of the Laplacian matrix and sampling period are obtained. As the sampling period tends to zero, the established necessary and sufficient conditions reduce to those of the continuous-time protocol, consistent with existing results for the continuous-time case. Finally, the effectiveness of our established results is illustrated by a simple simulation example. Project supported by the Natural Science Foundation of Zhejiang Province, China (Grant No. LY13F030005) and the National Natural Science Foundation of China (Grant No. 61501331).
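
    A minimal Python simulation sketch of the general idea is given below; it is not the paper's observer design. It assumes an undirected complete graph, illustrative gains, and a crude backward-difference velocity estimate built from the sampled positions, with the control held constant between sampling instants.

        import numpy as np

        n, h, dt, T = 4, 0.1, 0.01, 20.0            # agents, sampling period, integration step, horizon
        k1, k2 = 1.0, 2.0                           # illustrative protocol gains (assumed)
        A = np.ones((n, n)) - np.eye(n)             # adjacency matrix of a complete graph
        L = np.diag(A.sum(axis=1)) - A              # graph Laplacian

        rng = np.random.default_rng(3)
        x = rng.uniform(-5.0, 5.0, n)               # positions
        v = rng.uniform(-1.0, 1.0, n)               # velocities
        x_sample, x_prev = x.copy(), x.copy()
        u = np.zeros(n)
        steps_per_sample = round(h / dt)

        for step in range(int(T / dt)):
            if step % steps_per_sample == 0:        # new position samples arrive
                x_prev, x_sample = x_sample, x.copy()
                v_est = (x_sample - x_prev) / h     # crude velocity estimate from sampled positions
                u = -k1 * (L @ x_sample) - k2 * (L @ v_est)   # zero-order-hold consensus protocol
            v += u * dt                             # Euler integration of the double integrators
            x += v * dt

        print("position spread:", np.ptp(x))        # both spreads should approach zero, i.e. the
        print("velocity spread:", np.ptp(v))        # agents reach second-order consensus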

  3. An efficient technique for determining apparent temperature distributions from antenna temperature measurements

    NASA Technical Reports Server (NTRS)

    Claassen, J. P.; Fung, A. K.

    1973-01-01

    A method by which the apparent microwave temperature characteristic of a flat scene is estimated from dual polarized measurements is derived and interpreted. Approximate linear relationships between antenna and apparent temperatures are established by weighting emission components in spherical bands under the assumption that the surface is isotropic. The weighting factors are formed by integrating the antenna pattern functions over these bands. The vector aspect of the formulation is retained to account for the difference between the definition of the antenna polarizations and the polarizations of the emitted fields. The method is largely applicable to the measurement of smooth temperature distributions by an antenna having good spatial resolution of the distributions and is considered efficient for inverting large volumes of measurements. Sample cases are presented and the implications of these cases on remote radiometer observations are discussed. It is shown that cross-coupling occurs between the polarizations of the emitted fields and the polarizations of the antenna. For this reason and because practical antennas have cross-polarized patterns associated with them, it is necessary to conduct measurements at both horizontal and vertical polarizations to realize the inversion. It is also made evident that thorough inversions require that the apparent temperatures be sampled at a sufficient number of points between nadir and zenith.
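
    The Python sketch below illustrates the linear-inversion idea with a toy, noiseless discretization (all numbers are assumptions for illustration): antenna temperatures are modeled as a weighting matrix applied to the apparent temperatures over angular bands, and the apparent distribution is recovered by least squares. Real measurements would include noise and typically call for regularization.

        import numpy as np

        m = 12                                               # unknown apparent temperatures (angles)
        angles = np.linspace(0.0, np.pi / 2.0, m)
        T_apparent = 250.0 + 40.0 * np.cos(angles)           # assumed smooth angular distribution (K)

        n_looks = 24                                         # antenna pointings (measurements)
        centers = np.linspace(0.0, np.pi / 2.0, n_looks)
        W = np.exp(-((angles[None, :] - centers[:, None]) / 0.2) ** 2)  # toy beam weighting factors
        W /= W.sum(axis=1, keepdims=True)                    # normalize each pointing's weights

        T_antenna = W @ T_apparent                           # simulated antenna temperatures (K)

        # Least-squares inversion of the overdetermined linear system recovers the distribution.
        T_est, *_ = np.linalg.lstsq(W, T_antenna, rcond=None)
        print(np.max(np.abs(T_est - T_apparent)))            # small residual in this noiseless toy case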

  4. Code-division-multiplexed readout of large arrays of TES microcalorimeters

    NASA Astrophysics Data System (ADS)

    Morgan, K. M.; Alpert, B. K.; Bennett, D. A.; Denison, E. V.; Doriese, W. B.; Fowler, J. W.; Gard, J. D.; Hilton, G. C.; Irwin, K. D.; Joe, Y. I.; O'Neil, G. C.; Reintsema, C. D.; Schmidt, D. R.; Ullom, J. N.; Swetz, D. S.

    2016-09-01

    Code-division multiplexing (CDM) offers a path to reading out large arrays of transition edge sensor (TES) X-ray microcalorimeters with excellent energy and timing resolution. We demonstrate the readout of X-ray TESs with a 32-channel flux-summed code-division multiplexing circuit based on superconducting quantum interference device (SQUID) amplifiers. The best detector has energy resolution of 2.28 ± 0.12 eV FWHM at 5.9 keV and the array has mean energy resolution of 2.77 ± 0.02 eV over 30 working sensors. The readout channels are sampled sequentially at 160 ns/row, for an effective sampling rate of 5.12 μs/channel. The SQUID amplifiers have a measured flux noise of 0.17 μΦ0/√Hz (non-multiplexed, referred to the first stage SQUID). The multiplexed noise level and signal slew rate are sufficient to allow readout of more than 40 pixels per column, making CDM compatible with requirements outlined for future space missions. Additionally, because the modulated data from the 32 SQUID readout channels provide information on each X-ray event at the row rate, our CDM architecture allows determination of the arrival time of an X-ray event to within 275 ns FWHM with potential benefits in experiments that require detection of near-coincident events.
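
    A toy Python illustration of the encode/decode algebra behind code-division multiplexing is sketched below (signal amplitudes and noise levels are made up, and hardware details such as SQUID dynamics are ignored): each coded frame is a ±1 Walsh-weighted sum of all detector signals, and multiplying by the transposed code matrix recovers the individual detectors.

        import numpy as np
        from scipy.linalg import hadamard

        n = 32                                    # multiplexed readout rows
        W = hadamard(n)                           # +/-1 Walsh (Hadamard) codes; W.T @ W = n * I

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 2000)
        signals = rng.normal(0.0, 0.2, (n, t.size))            # toy detector baselines
        signals[5] += 10.0 * np.exp(-((t - 0.4) / 0.01) ** 2)   # an "X-ray pulse" on detector 5

        frames = W @ signals + rng.normal(0.0, 0.5, (n, t.size))  # flux-summed, coded measurements
        decoded = (W.T @ frames) / n                               # demodulation: (W.T @ W) / n = I

        print(int(np.argmax(decoded.max(axis=1))))                 # identifies detector 5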

  5. Spectral Learning for Supervised Topic Models.

    PubMed

    Ren, Yong; Wang, Yining; Zhu, Jun

    2018-03-01

    Supervised topic models simultaneously model the latent topic structure of large collections of documents and a response variable associated with each document. Existing inference methods are based on variational approximation or Monte Carlo sampling, which often suffers from the local minimum defect. Spectral methods have been applied to learn unsupervised topic models, such as latent Dirichlet allocation (LDA), with provable guarantees. This paper investigates the possibility of applying spectral methods to recover the parameters of supervised LDA (sLDA). We first present a two-stage spectral method, which recovers the parameters of LDA followed by a power update method to recover the regression model parameters. Then, we further present a single-phase spectral algorithm to jointly recover the topic distribution matrix as well as the regression weights. Our spectral algorithms are provably correct and computationally efficient. We prove a sample complexity bound for each algorithm and subsequently derive a sufficient condition for the identifiability of sLDA. Thorough experiments on synthetic and real-world datasets verify the theory and demonstrate the practical effectiveness of the spectral algorithms. In fact, our results on a large-scale review rating dataset demonstrate that our single-phase spectral algorithm alone gets comparable or even better performance than state-of-the-art methods, while previous work on spectral methods has rarely reported such promising performance.

  6. Factors affecting levels of circulating cell-free fetal DNA in maternal plasma and their implications for noninvasive prenatal testing.

    PubMed

    Kinnings, Sarah L; Geis, Jennifer A; Almasri, Eyad; Wang, Huiquan; Guan, Xiaojun; McCullough, Ron M; Bombard, Allan T; Saldivar, Juan-Sebastian; Oeth, Paul; Deciu, Cosmin

    2015-08-01

    Sufficient fetal DNA in a maternal plasma sample is required for accurate aneuploidy detection via noninvasive prenatal testing, thus highlighting a need to understand the factors affecting fetal fraction. The MaterniT21™ PLUS test uses massively parallel sequencing to analyze cell-free fetal DNA in maternal plasma and detect chromosomal abnormalities. We assess the impact of a variety of factors, both maternal and fetal, on the fetal fraction across a large number of samples processed by Sequenom Laboratories. The rate of increase in fetal fraction with increasing gestational age varies across the duration of the testing period and is also influenced by fetal aneuploidy status. Maternal weight trends inversely with fetal fraction, and we find no added benefit from analyzing body mass index or blood volume instead of weight. Strong correlations exist between fetal fractions from aliquots taken from the same patient at the same blood draw and also at different blood draws. While a number of factors trend with fetal fraction across the cohort as a whole, they are not the sole determinants of fetal fraction. In this study, the variability for any one patient does not appear large enough to justify postponing testing to a later gestational age. © 2015 John Wiley & Sons, Ltd.

  7. Adaptation of micro-diffusion method for the analysis of 15N natural abundance of ammonium in samples with small volume.

    PubMed

    Zhang, Shasha; Fang, Yunting; Xi, Dan

    2015-07-30

    There are several preparation methods for the measurement of the nitrogen (N) isotopic composition of ammonium (NH4+) in different types of samples (freshwater, saltwater and soil extracts). The diffusion method is the most popular and it involves NH4+ in solutions being released under alkaline conditions and then immediately trapped by an acidified filter. However, the traditional preparation is designed for samples with large volume and relatively high N concentrations. The performance of diffusion for small-volume samples (e.g., a few milliliters) remains unknown. We examined the overall performance of micro-diffusion on 5 mL samples on varying the incubation time, temperature and initial NH4+ concentration. The trapped ammonia was chemically converted into nitrous oxide (N2O) with hypobromite and hydroxylamine in sequence. The produced N2O was analyzed by a commercially available purge and cryogenic trap system coupled to an isotope ratio mass spectrometer. We found that diffusion can be complete with no more than 7 days of treatment at 37 °C. Increasing the temperature to 50 °C and the incubation time to 11 days did not improve the overall performance. There were no significant differences in the overall performance during diffusion with NH4+ concentrations from 15 to 60 μM. The blank size was relatively large, and the N contamination might come from the reagents especially KCl salts. The method presented here combines micro-diffusion and hypobromite oxidation and hydroxylamine reduction. It is suitable for samples with small volume and low NH4+ concentrations. Our study demonstrates that the NH4+ concentrations in samples can be as low as 15 μM, and a volume of 5 mL is sufficient for this method. We suggest that this method can be used for the routine determination of 15N/14N for either natural abundance or 15N-enriched NH4+. Copyright © 2015 John Wiley & Sons, Ltd.

  8. Virtual screening of integrase inhibitors by large scale binding free energy calculations: the SAMPL4 challenge

    PubMed Central

    Gallicchio, Emilio; Deng, Nanjie; He, Peng; Wickstrom, Lauren; Perryman, Alexander L.; Santiago, Daniel N.; Forli, Stefano; Olson, Arthur J.; Levy, Ronald M.

    2014-01-01

    As part of the SAMPL4 blind challenge, filtered AutoDock Vina ligand docking predictions and large scale binding energy distribution analysis method binding free energy calculations have been applied to the virtual screening of a focused library of candidate binders to the LEDGF site of the HIV integrase protein. The computational protocol leveraged docking and high level atomistic models to improve enrichment. The enrichment factor of our blind predictions ranked best among all of the computational submissions, and second best overall. This work represents, to our knowledge, the first example of the application of an all-atom physics-based binding free energy model to large scale virtual screening. A total of 285 parallel Hamiltonian replica exchange molecular dynamics absolute protein-ligand binding free energy simulations were conducted starting from docked poses. The setup of the simulations was fully automated, calculations were distributed on multiple computing resources and were completed in a 6-week period. The accuracy of the docked poses and the inclusion of intramolecular strain and entropic losses in the binding free energy estimates were the major factors behind the success of the method. Lack of sufficient time and computing resources to investigate additional protonation states of the ligands was a major cause of mispredictions. The experiment demonstrated the applicability of binding free energy modeling to improve hit rates in challenging virtual screening of focused ligand libraries during lead optimization. PMID:24504704

  9. INTEGRITY OF VOA-VIAL SEALS

    EPA Pesticide Factsheets

    Preservation of soil samples for the analysis of volatile organic compounds (VOCs) requires both the inhibition of VOC degradation and the restriction of vapor movement in or out of the sample container. Clear, 40-mL glass VOA vials manufactured by the four major U.S. glass manufacturers were tested for seal integrity. Visual inspection revealed a variety of imperfections ranging from small indentations, bumps, and scratches on vial threads or lips, through obvious defects, such as large indentations or grooves in the vial lips and chipped or broken glass. The aluminum plate vacuum test proved to be unreliable in identifying potentially leaky vials. The septa-seal vacuum test was conducted twice on the 80 selected vials. Mean VOC concentrations after 14 days storage generally were within ±20% of the known concentration with a majority of the concentrations within ±15% of their known values. There were no statistically significant differences in VOC concentrations between vials in the potentially leaky and control group for any of the manufacturers. Only 1 vial lost VOCs and that was due to a large chip in the vial's lip and neck. These findings indicate that the silicone septa are flexible enough to overcome most vial imperfections and form a complete seal against VOC loss. A careful inspection of the VOA vials prior to use to remove any vials with large and obvious imperfections should be sufficient to screen out vials that are subject to VOC losses.

  10. Statistical Learning Theory for High Dimensional Prediction: Application to Criterion-Keyed Scale Development

    PubMed Central

    Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul

    2016-01-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms–Supervised Principal Components, Regularization, and Boosting—can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach—or perhaps because of them–SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
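
    As a compact illustration of choosing model complexity by minimizing estimated prediction error rather than within-sample fit, the Python sketch below (simulated data, with ridge regression as a generic stand-in for the algorithms discussed) selects a regularization strength by k-fold cross-validation.

        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 200, 500                               # many more candidate "items" than observations
        X = rng.normal(size=(n, p))
        beta = np.zeros(p)
        beta[:10] = 0.5                               # only a few items truly predict the outcome
        y = X @ beta + rng.normal(size=n)

        def ridge_fit(X, y, lam):
            return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

        folds = np.array_split(rng.permutation(n), 5)  # fixed 5-fold split, reused for every lambda

        def cv_error(lam):
            """5-fold cross-validated mean squared prediction error (an estimate of EPE)."""
            errs = []
            for test in folds:
                train = np.setdiff1d(np.arange(n), test)
                b = ridge_fit(X[train], y[train], lam)
                errs.append(np.mean((y[test] - X[test] @ b) ** 2))
            return np.mean(errs)

        for lam in (0.01, 0.1, 1.0, 10.0, 100.0):
            print(f"lambda = {lam:7.2f}   CV error = {cv_error(lam):.3f}")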

  11. Dreaming of Atmospheres

    NASA Astrophysics Data System (ADS)

    Waldmann, Ingo

    2016-10-01

    Radiative transfer retrievals have become the standard in modelling of exoplanetary transmission and emission spectra. Analysing currently available observations of exoplanetary atmospheres often invokes large and correlated parameter spaces that can be difficult to map or constrain. To address these issues, we have developed the Tau-REx (tau-retrieval of exoplanets) retrieval and the RobERt spectral recognition algorithms. Tau-REx is a Bayesian atmospheric retrieval framework using Nested Sampling and cluster computing to fully map these large correlated parameter spaces. Nonetheless, data volumes can become prohibitively large and we must often select a subset of potential molecular/atomic absorbers in an atmosphere. In the era of open-source, automated and self-sufficient retrieval algorithms, such manual input should be avoided. User-dependent input could, in worst case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is built to address these issues. RobERt is a deep belief network (DBN) trained to accurately recognise molecular signatures for a wide range of planets, atmospheric thermal profiles and compositions. Using these deep neural networks, we work towards retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process. In this talk I will discuss how neural networks and Bayesian Nested Sampling can be used to solve highly degenerate spectral retrieval problems and what 'dreaming' neural networks can tell us about atmospheric characteristics.
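
    The snippet below is a toy illustration of Nested Sampling on a two-parameter Gaussian "retrieval" problem; it assumes the open-source dynesty package rather than Tau-REx itself, and the likelihood and priors are invented for the example.

        import numpy as np
        from dynesty import NestedSampler   # assumes dynesty is installed; Tau-REx is not used here

        ndim = 2

        def loglike(theta):
            # toy Gaussian likelihood with very different parameter scales (a mildly degenerate case)
            return -0.5 * np.sum((theta / np.array([0.1, 1.0])) ** 2)

        def prior_transform(u):
            # map the unit cube to uniform priors on [-5, 5] for both parameters
            return 10.0 * u - 5.0

        sampler = NestedSampler(loglike, prior_transform, ndim, nlive=250)
        sampler.run_nested(print_progress=False)
        results = sampler.results
        print(results.logz[-1], results.logzerr[-1])   # Bayesian evidence estimate and its uncertainty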

  12. Genotyping of ancient Mycobacterium tuberculosis strains reveals historic genetic diversity.

    PubMed

    Müller, Romy; Roberts, Charlotte A; Brown, Terence A

    2014-04-22

    The evolutionary history of the Mycobacterium tuberculosis complex (MTBC) has previously been studied by analysis of sequence diversity in extant strains, but not addressed by direct examination of strain genotypes in archaeological remains. Here, we use ancient DNA sequencing to type 11 single nucleotide polymorphisms and two large sequence polymorphisms in the MTBC strains present in 10 archaeological samples from skeletons from Britain and Europe dating to the second-nineteenth centuries AD. The results enable us to assign the strains to groupings and lineages recognized in the extant MTBC. We show that at least during the eighteenth-nineteenth centuries AD, strains of M. tuberculosis belonging to different genetic groups were present in Britain at the same time, possibly even at a single location, and we present evidence for a mixed infection in at least one individual. Our study shows that ancient DNA typing applied to multiple samples can provide sufficiently detailed information to contribute to both archaeological and evolutionary knowledge of the history of tuberculosis.

  13. Evaluating multiple determinants of the structure of plant-animal mutualistic networks.

    PubMed

    Vázquez, Diego P; Chacoff, Natacha P; Cagnolo, Luciano

    2009-08-01

    The structure of mutualistic networks is likely to result from the simultaneous influence of neutrality and the constraints imposed by complementarity in species phenotypes, phenologies, spatial distributions, phylogenetic relationships, and sampling artifacts. We develop a conceptual and methodological framework to evaluate the relative contributions of these potential determinants. Applying this approach to the analysis of a plant-pollinator network, we show that information on relative abundance and phenology suffices to predict several aggregate network properties (connectance, nestedness, interaction evenness, and interaction asymmetry). However, such information falls short of predicting the detailed network structure (the frequency of pairwise interactions), leaving a large amount of variation unexplained. Taken together, our results suggest that both relative species abundance and complementarity in spatiotemporal distribution contribute substantially to generate observed network patterns, but that this information is by no means sufficient to predict the occurrence and frequency of pairwise interactions. Future studies could use our methodological framework to evaluate the generality of our findings in a representative sample of study systems with contrasting ecological conditions.

  14. Impact Processes in the Solar System

    NASA Technical Reports Server (NTRS)

    Ahrens, Thomas J.

    2004-01-01

    Our laboratory has previously conducted impact fracture and dynamic failure tests. Polanskey and Ahrens [1990] mapped the fractures from a series of laboratory craters (Fig. 1) and Ahrens and Rubin [1993] inferred that the usually further extending radial cracks resulted from tensional failure during the compression of the shock propagation. The radial spreading induced by the particle velocity field caused the stresses perpendicular to the shock front to become sufficiently large and tensile. This induces "radial fractures." The concentric fractures are attributed to the tensional failure occurring after the initial compressive phase. Upon radial propagation of the stress wave the negative tension behind the stress-wave front caused failure along the quasi-spherical concentric fractures. The near-surface and spall fractures are attributed to the fractures described by Melosh [1984]. These are activated by impact and can launch relatively unshocked samples of planetary surfaces to speeds exceeding escape velocity. In the case of Mars, some of these surface samples presumably become the SNC (Mars) meteorites.

  15. Polar Wind Measurements with TIDE/PSI and HYDRA on the Polar Spacecraft

    NASA Technical Reports Server (NTRS)

    Su, Y. J.; Horwitz, J. L.; Moore, Thomas E.; Giles, Barbara L.; Chandler, Michael O.; Craven, Paul D.; Chang, S.-W.; Scudder, J.

    1998-01-01

    The Thermal Ion Dynamics Experiment (TIDE) on the POLAR spacecraft has allowed sampling of the three-dimensional ion distributions with excellent energy, angular, and mass resolution. The companion Plasma Source Instrument, when operated, allows sufficient diminution of the electric potential to observe the polar wind at very high altitudes. In this presentation, we will describe the polar wind characteristics of H+, He+, and O+ as observed by TIDE at 5000 km and 8 RE altitudes. The relationship of the polar wind parameters with the solar zenith angle and with the day-night distance in the Solar Magnetic coordinate system will also be presented. We will compare these measurements with recent simulations of the photoelectron-driven polar wind using a coupled fluid-semikinetic model. In addition, we will compare these polar wind observations with low-energy electrons sampled by the HYDRA experiment on POLAR to examine possible effects of the polar rain and photoelectrons and hopefully explain the large ion outflow velocity variations at POLAR apogee.

  16. The Treatment Motivation Scales for forensic outpatient treatment (TMS-F): construction and psychometric evaluation.

    PubMed

    Drieschner, Klaus H; Boomsma, Anne

    2008-06-01

    The Treatment Motivation Scales for forensic outpatient treatment (TMS-F) is a Dutch 85-item self-report questionnaire for the motivation of forensic outpatients to engage in their treatment and six cognitive and affective determinants of this motivation. Following descriptions of the conceptual basis and construction, the psychometric properties of the TMS-F are evaluated in two studies. In Study 1 (N = 378), the factorial structure of the instrument and the dimensionality of its scales are evaluated by confirmatory factor analysis. In Study 2 with a new sample (N = 376), the results of Study 1 are largely confirmed. It is found that the factorial structure of the TMS-F is in accordance with expectations, that all scales are sufficiently homogeneous and reliable to interpret the sum scores, and that these results are stable across independent samples. The relative importance of the six determinants of the motivation to engage in the treatment and the generalizability of the results are discussed.

  17. Scrambled eggs: A highly sensitive molecular diagnostic workflow for Fasciola species specific detection from faecal samples

    PubMed Central

    Calvani, Nichola Eliza Davies; Windsor, Peter Andrew; Bush, Russell David

    2017-01-01

    Background Fasciolosis, due to Fasciola hepatica and Fasciola gigantica, is a re-emerging zoonotic parasitic disease of worldwide importance. Human and animal infections are commonly diagnosed by the traditional sedimentation and faecal egg-counting technique. However, this technique is time-consuming and prone to sensitivity errors when a large number of samples must be processed or if the operator lacks sufficient experience. Additionally, diagnosis can only be made once the 12-week pre-patent period has passed. Recently, a commercially available coprological antigen ELISA has enabled detection of F. hepatica prior to the completion of the pre-patent period, providing earlier diagnosis and increased throughput, although species differentiation is not possible in areas of parasite sympatry. Real-time PCR offers the combined benefits of highly sensitive species differentiation for medium to large sample sizes. However, no molecular diagnostic workflow currently exists for the identification of Fasciola spp. in faecal samples. Methodology/Principal findings A new molecular diagnostic workflow for the highly-sensitive detection and quantification of Fasciola spp. in faecal samples was developed. The technique involves sedimenting and pelleting the samples prior to DNA isolation in order to concentrate the eggs, followed by disruption by bead-beating in a benchtop homogeniser to ensure access to DNA. Although both the new molecular workflow and the traditional sedimentation technique were sensitive and specific, the new molecular workflow enabled faster sample throughput in medium to large epidemiological studies, and provided the additional benefit of speciation. Further, good correlation (R2 = 0.74–0.76) was observed between the real-time PCR values and the faecal egg count (FEC) using the new molecular workflow for all herds and sampling periods. Finally, no effect of storage in 70% ethanol was detected on sedimentation and DNA isolation outcomes; enabling transport of samples from endemic to non-endemic countries without the requirement of a complete cold chain. The commercially-available ELISA displayed poorer sensitivity, even after adjustment of the positive threshold (65–88%), compared to the sensitivity (91–100%) of the new molecular diagnostic workflow. Conclusions/Significance Species-specific assays for sensitive detection of Fasciola spp. enable ante-mortem diagnosis in both human and animal settings. This includes Southeast Asia where there are potentially many undocumented human cases and where post-mortem examination of production animals can be difficult. The new molecular workflow provides a sensitive and quantitative diagnostic approach for the rapid testing of medium to large sample sizes, potentially superseding the traditional sedimentation and FEC technique and enabling surveillance programs in locations where animal and human health funding is limited. PMID:28915255

  18. A simplified field protocol for genetic sampling of birds using buccal swabs

    USGS Publications Warehouse

    Vilstrup, Julia T.; Mullins, Thomas D.; Miller, Mark P.; McDearman, Will; Walters, Jeffrey R.; Haig, Susan M.

    2018-01-01

    DNA sampling is an essential prerequisite for conducting population genetic studies. For many years, blood sampling has been the preferred method for obtaining DNA in birds because of their nucleated red blood cells. Nonetheless, use of buccal swabs has been gaining favor because they are less invasive yet still yield adequate amounts of DNA for amplifying mitochondrial and nuclear markers; however, buccal swab protocols often include steps (e.g., extended air-drying and storage under frozen conditions) not easily adapted to field settings. Furthermore, commercial extraction kits and swabs for buccal sampling can be expensive for large population studies. We therefore developed an efficient, cost-effective, and field-friendly protocol for sampling wild birds after comparing DNA yield among 3 inexpensive buccal swab types (2 with foam tips and 1 with a cotton tip). Extraction and amplification success was high (100% and 97.2% respectively) using inexpensive generic swabs. We found foam-tipped swabs provided higher DNA yields than cotton-tipped swabs. We further determined that omitting a drying step and storing swabs in Longmire buffer increased efficiency in the field while still yielding sufficient amounts of DNA for detailed population genetic studies using mitochondrial and nuclear markers. This new field protocol allows time- and cost-effective DNA sampling of juveniles or small-bodied birds for which drawing blood may cause excessive stress to birds and technicians alike.

  19. Identifying personal microbiomes using metagenomic codes

    PubMed Central

    Franzosa, Eric A.; Huang, Katherine; Meadow, James F.; Gevers, Dirk; Lemon, Katherine P.; Bohannan, Brendan J. M.; Huttenhower, Curtis

    2015-01-01

    Community composition within the human microbiome varies across individuals, but it remains unknown if this variation is sufficient to uniquely identify individuals within large populations or stable enough to identify them over time. We investigated this by developing a hitting set-based coding algorithm and applying it to the Human Microbiome Project population. Our approach defined body site-specific metagenomic codes: sets of microbial taxa or genes prioritized to uniquely and stably identify individuals. Codes capturing strain variation in clade-specific marker genes were able to distinguish among 100s of individuals at an initial sampling time point. In comparisons with follow-up samples collected 30–300 d later, ∼30% of individuals could still be uniquely pinpointed using metagenomic codes from a typical body site; coincidental (false positive) matches were rare. Codes based on the gut microbiome were exceptionally stable and pinpointed >80% of individuals. The failure of a code to match its owner at a later time point was largely explained by the loss of specific microbial strains (at current limits of detection) and was only weakly associated with the length of the sampling interval. In addition to highlighting patterns of temporal variation in the ecology of the human microbiome, this work demonstrates the feasibility of microbiome-based identifiability—a result with important ethical implications for microbiome study design. The datasets and code used in this work are available for download from huttenhower.sph.harvard.edu/idability. PMID:25964341
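
    As a rough illustration of the hitting-set idea behind such codes (a sketch only, not the authors' released implementation; the profile contents, greedy strategy, and function names are assumptions), the snippet below selects a small set of features that jointly rule out every other individual's profile:

      # Greedy sketch of a hitting-set-style identifying "code" (illustrative only).
      # A code for a target individual is a small feature set that no other
      # individual's profile fully contains, so the target is uniquely pinpointed.
      def greedy_code(target_features, other_profiles, max_size=10):
          remaining = list(other_profiles)            # profiles not yet ruled out
          code, candidates = set(), set(target_features)
          while remaining and candidates and len(code) < max_size:
              # pick the feature absent from the most still-ambiguous profiles
              best = max(candidates, key=lambda f: sum(f not in p for p in remaining))
              code.add(best)
              candidates.discard(best)
              remaining = [p for p in remaining if best in p]   # still contain the code so far
          return code if not remaining else None      # None: not uniquely identifiable

      profiles = {"A": {"s1", "s2", "s5"}, "B": {"s1", "s3"}, "C": {"s2", "s3", "s5"}}
      others = [v for k, v in profiles.items() if k != "A"]
      print(greedy_code(profiles["A"], others))       # e.g. a two-feature code such as {'s1', 's2'}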

  20. Electrophoretic sample insertion. [device for uniformly distributing samples in flow path

    NASA Technical Reports Server (NTRS)

    Mccreight, L. R. (Inventor)

    1974-01-01

    Two conductive screens located in the flow path of an electrophoresis sample separation apparatus are charged electrically. The sample is introduced between the screens, and the charge is sufficient to disperse and hold the samples across the screens. When the charge is terminated, the samples are uniformly distributed in the flow path. Additionally, a first separation by charged properties has been accomplished.

  1. 40 CFR 86.1537 - Idle test run.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Heavy-Duty Engines, New Methanol-Fueled Natural Gas-Fueled, and Liquefied Petroleum Gas-Fueled Diesel-Cycle Heavy-Duty Engines, New Otto-Cycle Light-Duty Trucks, and New Methanol-Fueled Natural Gas-Fueled... dilute sampling. (6) For bag sampling, sample idle emissions long enough to obtain a sufficient bag...

  2. From Cosmic Dusk till Dawn with RELICS

    NASA Astrophysics Data System (ADS)

    Bradac, Marusa

    When did galaxies start forming stars? What is the role of distant galaxies in galaxy formation models and the epoch of reionization? What are the conditions in typical low-mass, star-forming galaxies at z ~ 4? Why is galaxy evolution dependent on environment? Recent observations indicate several critical puzzles in studies that address these questions. Chief among these, galaxies might have started forming stars earlier than previously thought (<400 Myr after the Big Bang) and their star formation history differs from what is predicted from simulations. Furthermore, the details of the mechanisms that regulate star formation and morphological transformation in dense environments are still unknown. To solve these puzzles of galaxy evolution, we will use 41 galaxy clusters from the RELICS program (Reionization Lensing Cluster Survey) that are among the most powerful cosmic telescopes. Their magnification will allow us to study stellar properties of a large number of galaxies all the way to the reionization era. Accurate knowledge of stellar masses, ages, and star formation rates (SFRs) requires measuring both rest-frame UV and optical light, which only Spitzer can probe at z>0.5-11 for a sufficiently large sample of typical galaxies. This program will combine Spitzer imaging from two large programs, Director's Discretionary Time (DDT) and the SRELICS program led by the PI. The main challenge in a study such as this is the capability to perform reliable photometry in crowded fields. Our team recently helped develop TPHOT, which is a much improved and much faster version of previously available codes. TPHOT is specifically designed to extract fluxes in crowded fields with very different PSFs. We will combine Spitzer photometry with ground-based imaging and spectroscopy to obtain robust measurements of galaxy star formation rates, stellar masses, and stellar ages. This program will be a crucial legacy complement to previous Spitzer/IRAC deep blank field surveys and cluster studies, and will open up new parameter space by probing intrinsically fainter objects than most current surveys with a significantly improved sample variance over deep field surveys. It will allow us to study the properties (e.g. SFRs and stellar masses) of a large number of galaxies (200 at z=6-10), thus meeting our goal of reconstructing the cosmic SFR density with sufficient precision to better understand the role of galaxies in the reionization process. We will measure the presence (or absence) of established stellar populations with Spitzer for the largest sample to date. Furthermore, this proposal will allow us to study the SFRs of the intrinsically faint (and magnified) intermediate redshift (z ~ 4) galaxies, as well as the stellar mass function of z=0.3-0.7 galaxy members of our cluster sample, thereby expanding our understanding of star formation from reionization to the epoch of galaxy formation and dense environments. Many of the science goals of this proposal are main science drivers for JWST. Due to magnification, our effective depth and resolution match those of the JWST blank fields and afford us a sneak preview of JWST sources with Spitzer now. This program will thus provide a valuable test-bed for simulations, observation planning and source selection just in time for JWST Cycle 1.

  3. Global Hopf bifurcation analysis on a BAM neural network with delays

    NASA Astrophysics Data System (ADS)

    Sun, Chengjun; Han, Maoan; Pang, Xiaoming

    2007-01-01

    A delayed differential equation that models a bidirectional associative memory (BAM) neural network with four neurons is considered. By using a global Hopf bifurcation theorem for functional differential equations (FDEs) and Bendixson's criterion for high-dimensional ODEs, a group of sufficient conditions for the system to have multiple periodic solutions is obtained when the sum of delays is sufficiently large.
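
    For orientation only (the authors' specific four-neuron model, weights, and delays are not reproduced here), a delayed BAM network couples two layers of neurons through delayed signal functions, and the sum of the delays serves as the bifurcation parameter:

      \dot{x}_i(t) = -\mu_i x_i(t) + \sum_j c_{ji} f_j\big(y_j(t-\tau_1)\big), \qquad
      \dot{y}_j(t) = -\nu_j y_j(t) + \sum_i d_{ij} g_i\big(x_i(t-\tau_2)\big),

    with decay rates \mu_i, \nu_j > 0, connection weights c_{ji}, d_{ij}, and total delay \tau = \tau_1 + \tau_2; the cited conditions guarantee multiple periodic solutions once \tau is sufficiently large.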

  4. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size sufficient to closely estimate the statistics for particular parameters remains an open issue. Although the sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the resulting statistics are close to the parameters of a particular population. To date, the guideline of a p-value less than 0.05 has been widely used as inferential evidence. Therefore, this study audited results analyzed from various sub-samples and statistical analyses and compared the results with the parameters in three different populations. Eight types of statistical analysis and eight sub-samples for each statistical analysis were examined. The results showed that the statistics were consistent and close to the parameters when the study sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.
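
    A minimal sketch of this kind of audit, using a synthetic population and the sample mean only (the populations, analyses, and thresholds in the study are not reproduced here):

      # Illustrative audit: how close is a sub-sample statistic to the population parameter?
      import random
      import statistics

      random.seed(1)
      population = [random.gauss(50, 10) for _ in range(5000)]   # synthetic numerical variable
      parameter = statistics.mean(population)                    # the "true" parameter

      for fraction in (0.05, 0.15, 0.35):
          sample = random.sample(population, int(fraction * len(population)))
          estimate = statistics.mean(sample)
          print(f"{fraction:.0%} of population: estimate={estimate:.2f}, "
                f"parameter={parameter:.2f}, abs diff={abs(estimate - parameter):.2f}")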

  5. Proteomic Challenges: Sample Preparation Techniques for Microgram-Quantity Protein Analysis from Biological Samples

    PubMed Central

    Feist, Peter; Hummon, Amanda B.

    2015-01-01

    Proteins regulate many cellular functions, and analyzing the presence and abundance of proteins in biological samples is a central focus of proteomics. The discovery and validation of biomarkers, pathways, and drug targets for various diseases can be accomplished using mass spectrometry-based proteomics. However, with mass-limited samples like tumor biopsies, it can be challenging to obtain sufficient amounts of proteins to generate high-quality mass spectrometric data. Techniques developed for macroscale quantities recover sufficient amounts of protein from milligram quantities of starting material, but sample losses become crippling with these techniques when only microgram amounts of material are available. To combat this challenge, proteomicists have developed micro-scale techniques that are compatible with decreased sample size (100 μg or lower) and still enable excellent proteome coverage. Extraction, contaminant removal, protein quantitation, and sample handling techniques for the microgram protein range are reviewed here, with an emphasis on liquid chromatography and bottom-up mass spectrometry-compatible techniques. Also, a range of biological specimens, including mammalian tissues and model cell culture systems, are discussed. PMID:25664860

  6. Sampling methods for amphibians in streams in the Pacific Northwest.

    Treesearch

    R. Bruce Bury; Paul Stephen Corn

    1991-01-01

    Methods describing how to sample aquatic and semiaquatic amphibians in small streams and headwater habitats in the Pacific Northwest are presented. We developed a technique that samples 10-meter stretches of selected streams, which was adequate to detect presence or absence of amphibian species and provided sample sizes statistically sufficient to compare abundance of...

  7. Surface film effects on drop tube undercooling studies

    NASA Technical Reports Server (NTRS)

    Ethridge, E. C.; Kaukler, W. F.

    1986-01-01

    The effects of various gaseous atmospheric constituents on drop-tube solidified samples of elemental metals were examined from a microstructural standpoint. All specimens were prepared from the purest available elements, so effects of impurities should not account for the observed effects. The drop-tube gas has a definite effect on the sample microstructure. Most dramatically, the sample cooling rate is affected. Some samples receive sufficient cooling to solidify in free fall while others do not, splatting at the end of the drop tube in the sample catcher. Gases are selectively absorbed into the sample. Upon solidification, gas can become less soluble and, as a result, form voids within the sample. The general oxidation/reduction characteristics of the gas also affect sample microstructures. In general, under the more favorable experimental conditions, including reducing atmospheric conditions and superheating, examination of sample microstructures indicates that nucleation has been suppressed. This is indicated by uniform dendrite spacings throughout the sample and a single dendrite orientation through most of the sample. The samples were annealed, yielding a few large grains, and single- or bi-crystal samples were commonly formed. This was especially true of samples that were inadvertently greatly superheated. This is in contrast with results from a previous study in which surface oxides were stable and contained numerous sites of nucleation. The number of nucleation events depends upon the surface state of the specimen as determined by the atmosphere and is consistent with theoretical expectations based upon the thermodynamic stability of surface oxide films. Oxide-free specimens are characterized by shiny surfaces, with no observable features under the scanning electron microscope at 5000X.

  8. Large-scale Isolation of Highly Pure "Untouched" Regulatory T Cells in a GMP Environment for Adoptive Cell Therapy.

    PubMed

    Haase, Doreen; Puan, Kia Joo; Starke, Mireille; Lai, Tuck Siong; Soh, Melissa Yan Ling; Karunanithi, Iyswariya; San Luis, Boris; Poh, Tuang Yeow; Yusof, Nurhashikin; Yeap, Chun Hsien; Phang, Chew Yen; Chye, Willis Soon Yuan; Chan, Marieta; Koh, Mickey Boon Chai; Goh, Yeow Tee; Bertin-Maghit, Sebastien; Nardin, Alessandra; Ho, Liam Pock; Rotzschke, Olaf

    2015-01-01

    Adoptive cell therapy is an emerging treatment strategy for a number of serious diseases. Regulatory T (Treg) cells represent 1 cell type of particular interest for therapy of inflammatory conditions, as they are responsible for controlling unwanted immune responses. Initial clinical trials of adoptive transfer of Treg cells in patients with graft-versus-host disease were shown to be safe. However, obtaining sufficient numbers of highly pure and functional Treg cells with minimal contamination remains a challenge. We developed a novel approach to isolate "untouched" human Treg cells from healthy donors on the basis of negative selection using the surface markers CD49d and CD127. This procedure, which uses an antibody cocktail and magnetic beads for separation in an automated system (RoboSep), was scaled up and adapted to be compatible with good manufacturing practice conditions. With this setup we performed 9 Treg isolations from large-scale leukapheresis samples in a good manufacturing practice facility. These runs yielded sufficient numbers of "untouched" Treg cells for immediate use in clinical applications. The cell preparations consisted of viable highly pure FoxP3-positive Treg cells that were functional in suppressing the proliferation of effector T cells. Contamination with CD4 effector T cells was <10%. All other cell types did not exceed 2% in the final product. Remaining isolation reagents were reduced to levels that are considered safe. Treg cells isolated with this procedure will be used in a phase I clinical trial of adoptive transfer into leukemia patients developing graft-versus-host disease after stem cell transplantation.

  9. How many flux towers are enough? How tall is a tower tall enough? How elaborate a scaling is scaling enough?

    NASA Astrophysics Data System (ADS)

    Xu, K.; Sühring, M.; Metzger, S.; Desai, A. R.

    2017-12-01

    Most eddy covariance (EC) flux towers suffer from footprint bias. This footprint not only varies rapidly in time, but is smaller than the resolution of most earth system models, leading to a systemic scale mismatch in model-data comparison. Previous studies have suggested this problem can be mitigated (1) with multiple towers, (2) by building a taller tower with a large flux footprint, and (3) by applying advanced scaling methods. Here we ask: (1) How many flux towers are needed to sufficiently sample the flux mean and variation across an Earth system model domain? (2) How tall is tall enough for a single tower to represent the Earth system model domain? (3) Can we reduce the requirements derived from the first two questions with advanced scaling methods? We test these questions with output from large eddy simulations (LES) and application of the environmental response function (ERF) upscaling method. PALM LES (Maronga et al. 2015) was set up over a domain of 12 km x 16 km x 1.8 km at 7 m spatial resolution and produced 5 hours of output at a time step of 0.3 s. The surface Bowen ratio alternated between 0.2 and 1 among a series of 3 km wide stripe-like surface patches, with horizontal wind perpendicular to the surface heterogeneity. A total of 384 virtual towers were arranged on a regular grid across the LES domain, recording EC observations at 18 vertical levels. We use increasing height of a virtual flux tower and increasing numbers of virtual flux towers in the domain to compute energy fluxes. Initial results show a large (>25) number of towers is needed to sufficiently sample the mean domain energy flux. When the ERF upscaling method was applied to the virtual towers in the LES environment, we were able to map fluxes over the domain to within 20% precision with a significantly smaller number of towers. This was achieved by relating sub-hourly turbulent fluxes to meteorological forcings and surface properties. These results demonstrate how advanced scaling techniques can decrease the number of towers, and thus experimental expense, required for domain-scaling over heterogeneous surfaces.
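
    The tower-count question can be illustrated with a toy resampling experiment (assumed flux values and patch layout only; the LES fields and the ERF method are not reproduced): draw an increasing number of randomly placed virtual towers from a patchy flux field and watch the error of the estimated domain mean shrink.

      # Toy illustration: sampling error of the domain-mean flux vs. number of virtual towers.
      import random
      import statistics

      random.seed(0)
      # Hypothetical sensible-heat fluxes (W m^-2) over alternating surface patches.
      domain_flux = [120 if (i // 60) % 2 == 0 else 40 for i in range(384)]
      domain_mean = statistics.mean(domain_flux)

      for n_towers in (4, 8, 16, 32, 64):
          errors = []
          for _ in range(2000):                        # repeat random tower layouts
              towers = random.sample(domain_flux, n_towers)
              errors.append(abs(statistics.mean(towers) - domain_mean))
          print(f"{n_towers:3d} towers: mean abs error = {statistics.mean(errors):.1f} W m^-2")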

  10. Space Geodesy and the New Madrid Seismic Zone

    NASA Astrophysics Data System (ADS)

    Smalley, Robert; Ellis, Michael A.

    2008-07-01

    One of the most contentious issues related to earthquake hazards in the United States centers on the midcontinent and the origin, magnitudes, and likely recurrence intervals of the 1811-1812 New Madrid earthquakes that occurred there. The stakeholder groups in the debate (local and state governments, reinsurance companies, American businesses, and the scientific community) are similar to the stakeholder groups in regions more famous for large earthquakes. However, debate about New Madrid seismic hazard has been fiercer because of the lack of two fundamental components of seismic hazard estimation: an explanatory model for large, midplate earthquakes; and sufficient or sufficiently precise data about the causes, effects, and histories of such earthquakes.

  11. Stability and stabilisation of a class of networked dynamic systems

    NASA Astrophysics Data System (ADS)

    Liu, H. B.; Wang, D. Q.

    2018-04-01

    We investigate the stability and stabilisation of a linear time-invariant networked heterogeneous system with arbitrarily connected subsystems. A new linear matrix inequality (LMI)-based necessary and sufficient condition for stability is derived, from which a stabilisation result is obtained. The obtained conditions efficiently utilise the block-diagonal structure of the system parameter matrices and the sparseness of the subsystem connection matrix. Moreover, a sufficient condition that depends only on each individual subsystem is also presented for the stabilisation of large-scale networked systems. Numerical simulations show that these conditions are computationally practical in the analysis and synthesis of a large-scale networked system.
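
    As background for the kind of condition involved (the paper's actual LMI is not restated here), asymptotic stability of a linear system \dot{x} = Ax is equivalent to feasibility of the Lyapunov linear matrix inequality; restricting the Lyapunov matrix to subsystem-wise blocks is what makes such conditions scale to large networks, at the price of some conservatism:

      \exists\, P = P^{\mathsf{T}} \succ 0 : \quad A^{\mathsf{T}} P + P A \prec 0,
      \qquad P = \operatorname{diag}(P_1, \dots, P_N) \ \text{(one block per subsystem)}.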

  12. End-to-End Assessment of a Large Aperture Segmented Ultraviolet Optical Infrared (UVOIR) Telescope Architecture

    NASA Technical Reports Server (NTRS)

    Feinberg, Lee; Bolcar, Matt; Liu, Alice; Guyon, Olivier; Stark,Chris; Arenberg, Jon

    2016-01-01

    Key challenges of a future large aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high-yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high-throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage-based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance.

  13. Viscous and Thermal Effects on Hydrodynamic Instability in Liquid-Propellant Combustion

    NASA Technical Reports Server (NTRS)

    Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)

    2000-01-01

    A pulsating form of hydrodynamic instability has recently been shown to arise during the deflagration of liquid propellants in those parameter regimes where the pressure-dependent burning rate is characterized by a negative pressure sensitivity. This type of instability can coexist with the classical cellular, or Landau, form of hydrodynamic instability, with the occurrence of either dependent on whether the pressure sensitivity is sufficiently large or small in magnitude. For the inviscid problem, it has been shown that when the burning rate is realistically allowed to depend on temperature as well as pressure, sufficiently large values of the temperature sensitivity relative to the pressure sensitivity cause the pulsating form of hydrodynamic instability to become dominant. In that regime, steady, planar burning becomes intrinsically unstable to pulsating disturbances whose wavenumbers are sufficiently small. In the present work, this analysis is extended to the fully viscous case, where it is shown that although viscosity is stabilizing for intermediate and larger wavenumber perturbations, the intrinsic pulsating instability for small wavenumbers remains. Under these conditions, liquid-propellant combustion is predicted to be characterized by large unsteady cells along the liquid/gas interface.

  14. Recycle of Zirconium from Used Nuclear Fuel Cladding: A Major Element of Waste Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Emory D; DelCul, Guillermo D; Terekhov, Dmitri

    2011-01-01

    Feasibility tests were initiated to determine if the zirconium in commercial used nuclear fuel (UNF) cladding can be recovered in sufficient purity to permit re-use, and if the recovery process can be operated economically. Initial tests are being performed with unirradiated, non-radioactive samples of various types of Zircaloy materials that are used in UNF cladding to develop the recovery process and determine the degree of purification that can be obtained. Early results indicate that quantitative recovery can be accomplished and product contamination with alloy constituents can be controlled sufficiently to meet purification requirements. Future tests with actual radioactive UNF cladding are planned. The objective of current research is to determine the feasibility of recovery and recycle of zirconium from used fuel cladding wastes. Zircaloy cladding, which contains 98+% of hafnium-free zirconium, is the second largest mass, on average approximately 25 wt %, of the components in used U.S. light-water-reactor fuel assemblies. Therefore, recovery and recycle of the zirconium would enable a large reduction in geologic waste disposal for advanced fuel cycles. Current practice is to compact or grout the cladding waste and store it for subsequent disposal in a geologic repository. This paper describes results of initial tests being performed with unirradiated, non-radioactive samples of various types of Zircaloy materials that are used in UNF cladding to develop the recovery process and determine the degree of purification that can be obtained. Future tests with actual radioactive UNF cladding are planned.

  15. Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods

    DTIC Science & Technology

    2016-11-01

    ABBREVIATIONS AICc Akaike’s Information Criterion with small sample size correction AZGFD Arizona Game and Fish Department BMGR Barry M. Goldwater...MNKA Minimum Number Known Alive N Abundance Ne Effective Population Size NGS Noninvasive Genetic Sampling NGS-CR Noninvasive Genetic...parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities and low capture biases. For NGS-CR, sample

  16. Regional Evaluation of Groundwater Age Distributions Using Lumped Parameter Models with Large, Sparse Datasets: Example from the Central Valley, California, USA

    NASA Astrophysics Data System (ADS)

    Jurgens, B. C.; Bohlke, J. K.; Voss, S.; Fram, M. S.; Esser, B.

    2015-12-01

    Tracer-based, lumped parameter models (LPMs) are an appealing way to estimate the distribution of age for groundwater because the cost of sampling wells is often less than building numerical groundwater flow models sufficiently complex to provide groundwater age distributions. In practice, however, tracer datasets are often incomplete because of anthropogenic or terrigenic contamination of tracers, or analytical limitations. While age interpretations using such datasets can have large uncertainties, it may still be possible to identify key parts of the age distribution if LPMs are carefully chosen to match hydrogeologic conceptualization and the degree of age mixing is reasonably estimated. We developed a systematic approach for evaluating groundwater age distributions using LPMs with a large but incomplete set of tracer data (3H, tritiogenic 3He, 14C, and CFCs) from 535 wells, mostly used for public supply, in the Central Valley, California, USA, that were sampled by the USGS for the California State Water Resources Control Board Groundwater Ambient Monitoring and Assessment or the USGS National Water Quality Assessment Programs. In addition to mean ages, LPMs gave estimates of unsaturated zone travel times, recharge rates for pre- and post-development groundwater, the degree of age mixing in wells, proportion of young water (<60 yrs), and the depth of the boundary between post-development and predevelopment groundwater throughout the Central Valley. Age interpretations were evaluated by comparing past nitrate trends with LPM predicted trends, and whether the presence or absence of anthropogenic organic compounds was consistent with model results. This study illustrates a practical approach for assessing groundwater age information at a large scale to reveal important characteristics about the age structure of a major aquifer, and of the water supplies being derived from it.
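
    For reference, a lumped parameter model relates a tracer's atmospheric input history to its concentration in a well through a convolution over an assumed age distribution; the exponential model below is one common choice (the study's specific LPMs and fitted parameters are not restated here):

      C_{\mathrm{out}}(t) = \int_0^{\infty} C_{\mathrm{in}}(t-\tau)\, e^{-\lambda\tau}\, g(\tau)\, d\tau,
      \qquad g(\tau) = \frac{1}{\tau_m}\, e^{-\tau/\tau_m},

    where \lambda is the tracer's decay constant, g(\tau) is the age distribution, and \tau_m is the mean age.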

  17. Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets.

    PubMed

    Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L

    2014-01-01

    As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze it. Bionimbus is an open source cloud-computing platform that is based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, which is a high-performance clustered file system. Bionimbus also includes Tukey, which is a portal, and associated middleware that provides a single entry point and a single sign on for the various Bionimbus resources; and Yates, which automates the installation, configuration, and maintenance of the software infrastructure required. Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h. BAM file sizes ranged from 5 GB to 10 GB for each sample. Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms, such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  18. Optical variability properties of the largest AGN sample observed with Kepler/K2

    NASA Astrophysics Data System (ADS)

    Aranzana, E.; Koerding, E.; Uttley, P.; Scaringi, S.; Steven, B.

    2017-10-01

    We present the first short time-scale (hours to days) optical variability study of a large sample of Active Galactic Nuclei (AGN) observed with the Kepler/K2 mission. The sample contains 275 AGN observed over four campaigns with ˜30-minute cadence selected from the Million Quasar Catalogue with R magnitude < 19. We performed time series analysis to determine their variability properties by means of the power spectral densities (PSDs) and applied Monte Carlo techniques to find the best model parameters that fit the observed power spectra. A power-law model is sufficient to describe all the PSDs of the AGN in our sample. The average power-law slope is 2.5±0.5, steeper than the PSDs observed in X-rays, and the rest-frame amplitude variability in the frequency range of 6×10^{-6}-10^{-4} Hz varies from 1-10 % with an average of 2.6 %. We explore correlations between the variability amplitude and key parameters of the AGN, finding a significant correlation of rest-frame short-term variability amplitude with redshift, but no such correlation with luminosity. We attribute these effects to the known 'bluer when brighter' variability of quasars combined with the fixed bandpass of Kepler. This study enables us to distinguish between Seyferts and Blazars and confirm AGN candidates.
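
    A minimal sketch of estimating a power-law PSD slope from an evenly sampled light curve (synthetic random-walk data and a simple log-log fit; not the K2 pipeline or the authors' Monte Carlo fitting procedure):

      # Illustrative power-law fit to the periodogram of an evenly sampled light curve.
      import numpy as np

      rng = np.random.default_rng(42)
      dt = 0.5 / 24.0                                  # ~30-minute cadence, in days
      flux = np.cumsum(rng.normal(size=4000))          # red-noise-like synthetic light curve
      flux -= flux.mean()

      freqs = np.fft.rfftfreq(flux.size, d=dt)[1:]     # drop the zero frequency
      power = np.abs(np.fft.rfft(flux))[1:] ** 2       # raw periodogram

      slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
      print(f"best-fit PSD slope: {slope:.2f}")        # a random walk gives a slope near -2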

  19. Overcoming the matched-sample bottleneck: an orthogonal approach to integrate omic data.

    PubMed

    Nguyen, Tin; Diaz, Diana; Tagett, Rebecca; Draghici, Sorin

    2016-07-12

    MicroRNAs (miRNAs) are small non-coding RNA molecules whose primary function is to regulate the expression of gene products via hybridization to mRNA transcripts, resulting in suppression of translation or mRNA degradation. Although miRNAs have been implicated in complex diseases, including cancer, their impact on distinct biological pathways and phenotypes is largely unknown. Current integration approaches require sample-matched miRNA/mRNA datasets, resulting in limited applicability in practice. Since these approaches cannot integrate heterogeneous information available across independent experiments, they neither account for bias inherent in individual studies, nor do they benefit from increased sample size. Here we present a novel framework able to integrate miRNA and mRNA data (vertical data integration) available in independent studies (horizontal meta-analysis) allowing for a comprehensive analysis of the given phenotypes. To demonstrate the utility of our method, we conducted a meta-analysis of pancreatic and colorectal cancer, using 1,471 samples from 15 mRNA and 14 miRNA expression datasets. Our two-dimensional data integration approach greatly increases the power of statistical analysis and correctly identifies pathways known to be implicated in the phenotypes. The proposed framework is sufficiently general to integrate other types of data obtained from high-throughput assays.

  20. The Effect of Selected Intervention Tactics on Self-Sufficient Behaviors of the Homeless: An Application of the Theory of Planned Behavior.

    ERIC Educational Resources Information Center

    Moroz, Pauline

    A sample of 24 voluntary participants in a federally funded vocational training and placement program for homeless people in El Paso, Texas, was studied to identify specific interventions that increase self-sufficient behaviors of homeless individuals. Case study data were collected from orientation discussions, career counseling sessions, and…

  1. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
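
    An illustrative sketch of the 'minimum-sufficient ensemble' idea only (the fitness measure, tie handling, and function names are assumptions, not the MSEBAG code): base classifiers are added greedily while in-sample accuracy keeps improving, and the smallest such ensemble is what the backward stepwise stage would then start from.

      # Greedy forward search for a small ensemble maximising in-sample accuracy (sketch only).
      def ensemble_accuracy(members, predictions, labels):
          """Majority-vote accuracy of the chosen members (ties broken arbitrarily)."""
          correct = 0
          for i, y in enumerate(labels):
              votes = [predictions[m][i] for m in members]
              correct += (max(set(votes), key=votes.count) == y)
          return correct / len(labels)

      def minimum_sufficient_ensemble(predictions, labels):
          pool, members, best = set(predictions), [], 0.0
          while pool:
              cand = max(pool, key=lambda c: ensemble_accuracy(members + [c], predictions, labels))
              score = ensemble_accuracy(members + [cand], predictions, labels)
              if score <= best:                        # no further improvement: stop
                  break
              members, best = members + [cand], score
              pool.discard(cand)
          return members, best

      preds = {"clf1": [0, 1, 1, 0, 1], "clf2": [0, 1, 0, 0, 0], "clf3": [1, 1, 1, 0, 1]}
      labels = [0, 1, 1, 0, 1]
      print(minimum_sufficient_ensemble(preds, labels))   # e.g. (['clf1'], 1.0)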

  2. Work-life balance and subjective well-being: the mediating role of need fulfilment.

    PubMed

    Gröpel, Peter; Kuhl, Julius

    2009-05-01

    The relationship between work-life balance (WLB) (i.e. the perceived sufficiency of the time available for work and social life) and well-being is well-documented. However, previous research failed to sufficiently explain why this relationship exists. In this research, the hypothesis was tested that a sufficient amount of the time available increases well-being because it facilitates satisfaction of personal needs. Using two separate samples (students and employees), the mediating role of need fulfilment in the relationship between WLB and well-being was supported. The results suggest that perceived sufficiency of the time available for work and social life predicts the level of well-being only if the individual's needs are fulfilled within that time.

  3. New Concepts in the Evaluation of Biodegradation/Persistence of Chemical Substances Using a Microbial Inoculum

    PubMed Central

    Thouand, Gérald; Durand, Marie-José; Maul, Armand; Gancet, Christian; Blok, Han

    2011-01-01

    The European REACH Regulation (Registration, Evaluation, Authorization of CHemical substances) implies, among other things, the evaluation of the biodegradability of chemical substances produced by industry. A large set of test methods is available including detailed information on the appropriate conditions for testing. However, the inoculum used for these tests constitutes a “black box.” If biodegradation is achievable from the growth of a small group of specific microbial species with the substance as the only carbon source, the result of the test depends largely on the cell density of this group at “time zero.” If these species are relatively rare in an inoculum that is normally used, the likelihood of inoculating a test with sufficient specific cells becomes a matter of probability. Normally this probability increases with total cell density and with the diversity of species in the inoculum. Furthermore the history of the inoculum, e.g., a possible pre-exposure to the test substance or similar substances will have a significant influence on the probability. A high probability can be expected for substances that are widely used and regularly released into the environment, whereas a low probability can be expected for new xenobiotic substances that have not yet been released into the environment. Be that as it may, once the inoculum sample contains sufficient specific degraders, the performance of the biodegradation will follow a typical S shaped growth curve which depends on the specific growth rate under laboratory conditions, the so called F/M ratio (ratio between food and biomass) and the more or less toxic recalcitrant, but possible, metabolites. Normally regulators require the evaluation of the growth curve using a simple approach such as half-time. Unfortunately probability and biodegradation half-time are very often confused. As the half-time values reflect laboratory conditions which are quite different from environmental conditions (after a substance is released), these values should not be used to quantify and predict environmental behavior. The probability value could be of much greater benefit for predictions under realistic conditions. The main issue in the evaluation of probability is that the result is not based on a single inoculum from an environmental sample, but on a variety of samples. These samples can be representative of regional or local areas, climate regions, water types, and history, e.g., pristine or polluted. The above concept has provided us with a new approach, namely “Probabio.” With this approach, persistence is not only regarded as a simple intrinsic property of a substance, but also as the capability of various environmental samples to degrade a substance under realistic exposure conditions and F/M ratio. PMID:21863143

  4. Stellar Population Properties of Ultracompact Dwarfs in M87: A Mass–Metallicity Correlation Connecting Low-metallicity Globular Clusters and Compact Ellipticals

    NASA Astrophysics Data System (ADS)

    Zhang, Hong-Xin; Puzia, Thomas H.; Peng, Eric W.; Liu, Chengze; Côté, Patrick; Ferrarese, Laura; Duc, Pierre-Alain; Eigenthaler, Paul; Lim, Sungsoon; Lançon, Ariane; Muñoz, Roberto P.; Roediger, Joel; Sánchez-Janssen, Ruben; Taylor, Matthew A.; Yu, Jincheng

    2018-05-01

    We derive stellar population parameters for a representative sample of ultracompact dwarfs (UCDs) and a large sample of massive globular clusters (GCs) with stellar masses ≳ 10^6 M_⊙ in the central galaxy M87 of the Virgo galaxy cluster, based on model fitting to the Lick-index measurements from both the literature and new observations. After necessary spectral stacking of the relatively faint objects in our initial sample of 40 UCDs and 118 GCs, we obtain 30 sets of Lick-index measurements for UCDs and 80 for GCs. The M87 UCDs have ages ≳ 8 Gyr and [α/Fe] ≃ 0.4 dex, in agreement with previous studies based on smaller samples. The literature UCDs, located in lower-density environments than M87, extend to younger ages and smaller [α/Fe] (at given metallicities) than M87 UCDs, resembling the environmental dependence of the stellar nuclei of dwarf elliptical galaxies (dEs) in the Virgo cluster. The UCDs exhibit a positive mass–metallicity relation (MZR), which flattens and connects compact ellipticals at stellar masses ≳ 10^8 M_⊙. The Virgo dE nuclei largely follow the average MZR of UCDs, whereas most of the M87 GCs are offset toward higher metallicities for given stellar masses. The difference between the mass–metallicity distributions of UCDs and GCs may be qualitatively understood as a result of their different physical sizes at birth in a self-enrichment scenario or of galactic nuclear cluster star formation efficiency being relatively low in a tidal stripping scenario for UCD formation. The existing observations provide the necessary but not sufficient evidence for tidally stripped dE nuclei being the dominant contributors to the M87 UCDs.

  5. Development of a novel method for unraveling the origin of natron flux used in Roman glass production based on B isotopic analysis via multicollector inductively coupled plasma mass spectrometry.

    PubMed

    Devulder, Veerle; Degryse, Patrick; Vanhaecke, Frank

    2013-12-17

    The provenance of the flux raw material used in the manufacturing of Roman glass is an understudied topic in archaeology. Whether one or multiple sources of natron mineral salts were exploited during this period is still open for debate, largely because of the lack of a good provenance indicator. The flux is the major source of B in Roman glass. Therefore, B isotopic analysis of a sufficiently large collection and variety (origin and age) of such glass samples might give an indication of the number of flux sources used. For this purpose, a method based on acid digestion, chromatographic B isolation and B isotopic analysis using multicollector inductively coupled plasma mass spectrometry was developed. B isolation was accomplished using a combination of strong cation exchange and strong anion exchange chromatography. Although the B fraction was not completely matrix-free, the remaining Sb was shown not to affect the δ(11)B result. The method was validated using obsidian and archaeological glass samples that were stripped of their B content, after which an isotopic reference material with known B isotopic composition was added. Absence of artificial B isotope fractionation was demonstrated, and the total uncertainty was shown to be <2‰. A proof-of-concept application to natron glass samples showed a narrow range of δ(11)B, whereas first results for natron salt samples do show a larger difference in δ(11)B. These results suggest the use of only one natron source or of several sources with similar δ(11)B. This indicates that B isotopic analysis is a promising tool for the provenance determination of this flux raw material.
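
    For readers unfamiliar with the notation, δ(11)B expresses the measured 11B/10B ratio relative to a reference material (conventionally the NIST SRM 951 boric acid standard) in parts per thousand:

      \delta^{11}\mathrm{B} = \left( \frac{(^{11}\mathrm{B}/^{10}\mathrm{B})_{\mathrm{sample}}}{(^{11}\mathrm{B}/^{10}\mathrm{B})_{\mathrm{standard}}} - 1 \right) \times 1000\ \text{‰}.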

  6. Stellar mass functions and implications for a variable IMF

    NASA Astrophysics Data System (ADS)

    Bernardi, M.; Sheth, R. K.; Fischer, J.-L.; Meert, A.; Chae, K.-H.; Dominguez-Sanchez, H.; Huertas-Company, M.; Shankar, F.; Vikram, V.

    2018-03-01

    Spatially resolved kinematics of nearby galaxies has shown that the ratio of dynamical to stellar population-based estimates of the mass of a galaxy (M_*^{JAM}/M_*) correlates with σ_e, the light-weighted velocity dispersion within its half-light radius, if M_* is estimated using the same initial mass function (IMF) for all galaxies and the stellar mass-to-light ratio within each galaxy is constant. This correlation may indicate that, in fact, the IMF is more bottom-heavy or dwarf-rich for galaxies with large σ. We use this correlation to estimate a dynamical or IMF-corrected stellar mass, M_*^{α_JAM}, from M_* and σ_e for a sample of 6 × 10^5 Sloan Digital Sky Survey (SDSS) galaxies for which spatially resolved kinematics is not available. We also compute the 'virial' mass estimate k(n, R) R_e σ_R^2/G, where n is the Sérsic index, in the SDSS and ATLAS3D samples. We show that an n-dependent correction must be applied to the k(n, R) values provided by Prugniel & Simien. Our analysis also shows that the shape of the velocity dispersion profile in the ATLAS3D sample varies weakly with n: (σ_R/σ_e) = (R/R_e)^{-γ(n)}. The resulting stellar mass functions, based on M_*^{α_JAM} and the recalibrated virial mass, are in good agreement. Using a Fundamental Plane-based observational proxy for σ_e produces comparable results. The use of direct measurements for estimating the IMF-dependent stellar mass is prohibitively expensive for a large sample of galaxies. By demonstrating that cheaper proxies are sufficiently accurate, our analysis should enable a more reliable census of the mass in stars, especially at high redshift, at a fraction of the cost. Our results are provided in tabular form.

  7. Utility of the Mayo-Portland adaptability inventory-4 for self-reported outcomes in a military sample with traumatic brain injury.

    PubMed

    Kean, Jacob; Malec, James F; Cooper, Douglas B; Bowles, Amy O

    2013-12-01

    To investigate the psychometric properties of the Mayo-Portland Adaptability Inventory-4 (MPAI-4) obtained by self-report in a large sample of active duty military personnel with traumatic brain injury (TBI). Consecutive cohort who completed the MPAI-4 as a part of a larger battery of clinical outcome measures at the time of intake to an outpatient brain injury clinic. Medical center. Consecutively referred sample of active duty military personnel (N=404) who suffered predominantly mild (n=355), but also moderate (n=37) and severe (n=12), TBI. Interventions: not applicable. Main outcome measure: MPAI-4. Results: Initial factor analysis suggested 2 salient dimensions. In subsequent analysis, the ratio of the first and second eigenvalues (6.84:1) and parallel analysis indicated sufficient unidimensionality in 26 retained items. Iterative Rasch analysis resulted in the rescaling of the measure and the removal of 5 additional items for poor fit. The items of the final 21-item Mayo-Portland Adaptability Inventory-military were locally independent, demonstrated monotonically increasing responses, adequately fit the item response model, and permitted the identification of nearly 5 statistically distinct levels of disability in the study population. Slight mistargeting of the population resulted in the global outcome, as measured by the Mayo-Portland Adaptability Inventory-military, tending to be less reflective of very mild levels of disability. These data collected in a relatively large sample of active duty service members with TBI provide insight into the ability of patients to self-report functional impairment and the distinct effects of military deployment on outcome, providing important guidance for the meaningful measurement of outcome in this population. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  8. Calculation of the Local Free Energy Landscape in the Restricted Region by the Modified Tomographic Method.

    PubMed

    Chen, Changjun

    2016-03-31

    The free energy landscape is the most important piece of information in the study of the reaction mechanisms of molecules. However, it is difficult to calculate: in a large collective-variable space, a molecule requires a long simulation time to achieve sufficient sampling. To reduce the computational cost, it is necessary in practice to restrict the sampling region and construct a local free energy landscape. However, the restricted region in the collective-variable space may have an irregular shape, so simply restricting one or more collective variables of the molecule is not enough. In this paper, we propose a modified tomographic method to perform the simulation. First, it divides the restricted region with a set of hyperplanes and connects the centers of the hyperplanes by a curve. Second, it forces the molecule to sample on the curve and on the hyperplanes during the simulation and calculates the free energy data on them. Finally, all the free energy data are combined to form the local free energy landscape. Because the area outside the restricted region is not considered, this free energy calculation can be more efficient. With this method, one can also quickly optimize the path in the collective-variable space.
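
    The quantity being assembled piecewise here is the standard free energy surface over the collective variables, which follows from the sampled probability density; combining the data from the curve and the hyperplanes amounts to matching these profiles where they intersect (a general relation, not the paper's specific estimator):

      F(\boldsymbol{\xi}) = -k_{\mathrm{B}} T \,\ln P(\boldsymbol{\xi}) + \text{const},

    where P(\boldsymbol{\xi}) is the equilibrium probability of finding the system at the collective-variable value \boldsymbol{\xi}.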

  9. A laser-deposition approach to compositional-spread discovery of materials on conventional sample sizes

    NASA Astrophysics Data System (ADS)

    Christen, Hans M.; Ohkubo, Isao; Rouleau, Christopher M.; Jellison, Gerald E., Jr.; Puretzky, Alex A.; Geohegan, David B.; Lowndes, Douglas H.

    2005-01-01

    Parallel (multi-sample) approaches, such as discrete combinatorial synthesis or continuous compositional-spread (CCS), can significantly increase the rate of materials discovery and process optimization. Here we review our generalized CCS method, based on pulsed-laser deposition, in which the synchronization between laser firing and substrate translation (behind a fixed slit aperture) yields the desired variations of composition and thickness. In situ alloying makes this approach applicable to the non-equilibrium synthesis of metastable phases. Deposition on a heater plate with a controlled spatial temperature variation can additionally be used for growth-temperature-dependence studies. Composition and temperature variations are controlled on length scales large enough to yield sample sizes sufficient for conventional characterization techniques (such as temperature-dependent measurements of resistivity or magnetic properties). This technique has been applied to various experimental studies, and we present here the results for the growth of electro-optic materials (SrxBa1-xNb2O6) and magnetic perovskites (Sr1-xCaxRuO3), and discuss the application to the understanding and optimization of catalysts used in the synthesis of dense forests of carbon nanotubes.

  10. The effects of environmental variability and spatial sampling on the three-dimensional inversion problem.

    PubMed

    Bender, Christopher M; Ballard, Megan S; Wilson, Preston S

    2014-06-01

    The overall goal of this work is to quantify the effects of environmental variability and spatial sampling on the accuracy and uncertainty of estimates of the three-dimensional ocean sound-speed field. In this work, ocean sound speed estimates are obtained with acoustic data measured by a sparse autonomous observing system using a perturbative inversion scheme [Rajan, Lynch, and Frisk, J. Acoust. Soc. Am. 82, 998-1017 (1987)]. The vertical and horizontal resolution of the solution depends on the bandwidth of acoustic data and on the quantity of sources and receivers, respectively. Thus, for a simple, range-independent ocean sound speed profile, a single source-receiver pair is sufficient to estimate the water-column sound-speed field. On the other hand, an environment with significant variability may not be fully characterized by a large number of sources and receivers, resulting in uncertainty in the solution. This work explores the interrelated effects of environmental variability and spatial sampling on the accuracy and uncertainty of the inversion solution through a set of case studies. Synthetic data representative of the ocean variability on the New Jersey shelf are used.

  11. Can osteophagia provide giraffes with phosphorus and calcium?

    PubMed

    Bredin, I P; Skinner, J D; Mitchell, G

    2008-03-01

    The daily requirement for calcium and phosphorus by giraffes to sustain the growth and maintenance of their skeletons is large. The source of sufficient calcium is browse. The source of necessary phosphorus is obscure, but it could be osteophagia, a frequently observed behaviour in giraffes. We have assessed whether bone ingested as a result of osteophagia can be digested in the rumen. Bone samples from cancellous (cervical vertebrae) and dense bones (metacarpal shaft) were immersed in the rumens of five sheep, for a period of up to 30 days, and the effect compared to immersion in distilled water and in artificial saliva for 30 days. Distilled water had no effect on the bones. Dense bone samples were softened by exposure to the saliva and rumen fluid, but did not lose either calcium or phosphorus. In saliva and rumen fluid the cancellous bone samples also softened, and their mass and volume decreased as a result of exposure to saliva, but in neither fluid did they lose significant amounts of calcium and phosphorus. We conclude that although saliva and rumen fluid can soften ingested bones, there is an insignificant digestion of bones in the rumen.

  12. An effective method of UV-oxidation of dissolved organic carbon in natural waters for radiocarbon analysis by accelerator mass spectrometry

    NASA Astrophysics Data System (ADS)

    Xue, Yuejun; Ge, Tiantian; Wang, Xuchen

    2015-12-01

    Radiocarbon (14C) measurement of dissolved organic carbon (DOC) is a very powerful tool to study the sources, transformation and cycling of carbon in the ocean. The technique, however, still presents great challenges for the complete and successful oxidation of sufficient DOC with low blanks for high-precision carbon isotopic ratio analysis, largely due to the overwhelming proportion of salts and low DOC concentrations in the ocean. In this paper, we report an effective UV-oxidation method for oxidizing DOC in natural waters for radiocarbon analysis by accelerator mass spectrometry (AMS). The UV-oxidation system and method show 95%±4% oxidation efficiency and high reproducibility for DOC in both river and seawater samples. The blank associated with the method was also low (about 3 µg C), which is critical for 14C analysis. As a great advantage of the method, multiple water samples can be oxidized at the same time, so it reduces the sample processing time substantially compared with other UV-oxidation methods currently being used in other laboratories. We have used the system and method for 14C studies of DOC in rivers, estuaries, and oceanic environments and have obtained promising results.

  13. Carbon-14 bioassay for decommissioning of Hanford reactors.

    PubMed

    Carbaugh, Eugene H; Watson, David J

    2012-05-01

    The production reactors at the U.S. Department of Energy Hanford Site used large graphite piles as the moderator. As part of long-term decommissioning plans, the potential need for ¹⁴C radiobioassay of workers was identified. Technical issues associated with ¹⁴C bioassay and worker monitoring were investigated, including anticipated graphite characterization, potential intake scenarios, and the bioassay capabilities that may be required to support the decommissioning of the graphite piles. A combination of urine and feces sampling would likely be required for the absorption type S ¹⁴C anticipated to be encountered. However, the concentrations in the graphite piles appear to be sufficiently low that dosimetrically significant intakes of ¹⁴C are not credible, thus rendering moot the need for such bioassay.

  14. Carbon-14 Bioassay for Decommissioning of Hanford Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbaugh, Eugene H.; Watson, David J.

    2012-05-01

    The old production reactors at the US Department of Energy Hanford Site used large graphite piles as the moderator. As part of long-term decommissioning plans, the potential need for 14C radiobioassay of workers was identified. Technical issues associated with 14C bioassay and worker monitoring were investigated, including anticipated graphite characterization, potential intake scenarios, and the bioassay capabilities that may be required to support the decommissioning of the graphite piles. A combination of urine and feces sampling would likely be required for the absorption type S 14C anticipated to be encountered. However, the concentrations in the graphite piles appear to be sufficiently low that dosimetrically significant intakes of 14C are not credible, thus rendering moot the need for such bioassay.

  15. Non-targeted metabolomics and lipidomics LC-MS data from maternal plasma of 180 healthy pregnant women.

    PubMed

    Luan, Hemi; Meng, Nan; Liu, Ping; Fu, Jin; Chen, Xiaomin; Rao, Weiqiao; Jiang, Hui; Xu, Xun; Cai, Zongwei; Wang, Jun

    2015-01-01

    Metabolomics has the potential to be a powerful and sensitive approach for investigating the low molecular weight metabolite profiles present in maternal fluids and their role in pregnancy. In this Data Note, LC-MS metabolome, lipidome and carnitine profiling data were collected from 180 healthy pregnant women, representing six time points spanning all three trimesters, and providing sufficient coverage to model the progression of normal pregnancy. As a relatively large scale, real-world dataset with robust numbers of quality control samples, the data are expected to prove useful for algorithm optimization and development, with the potential to augment studies into abnormal pregnancy. All data and ISA-TAB format enriched metadata are available for download in the MetaboLights and GigaScience databases.

  16. Metaresearch for Evaluating Reproducibility in Ecology and Evolution.

    PubMed

    Fidler, Fiona; Chee, Yung En; Wintle, Bonnie C; Burgman, Mark A; McCarthy, Michael A; Gordon, Ascelin

    2017-03-01

    Recent replication projects in other disciplines have uncovered disturbingly low levels of reproducibility, suggesting that those research literatures may contain unverifiable claims. The conditions contributing to irreproducibility in other disciplines are also present in ecology. These include a large discrepancy between the proportion of "positive" or "significant" results and the average statistical power of empirical research, incomplete reporting of sampling stopping rules and results, journal policies that discourage replication studies, and a prevailing publish-or-perish research culture that encourages questionable research practices. We argue that these conditions constitute sufficient reason to systematically evaluate the reproducibility of the evidence base in ecology and evolution. In some cases, the direct replication of ecological research is difficult because of strong temporal and spatial dependencies, so here, we propose metaresearch projects that will provide proxy measures of reproducibility.

  17. Magnetic field generation by pointwise zero-helicity three-dimensional steady flow of an incompressible electrically conducting fluid

    NASA Astrophysics Data System (ADS)

    Rasskazov, Andrey; Chertovskih, Roman; Zheligovsky, Vladislav

    2018-04-01

    We introduce six families of three-dimensional space-periodic steady solenoidal flows, whose kinetic helicity density is zero at any point. Four families are analytically defined. Flows in four families have zero helicity spectrum. Sample flows from five families are used to demonstrate numerically that neither zero kinetic helicity density nor zero helicity spectrum prohibits generation of large-scale magnetic field by the two most prominent dynamo mechanisms: the magnetic α-effect and negative eddy diffusivity. Our computations also attest that such flows often generate small-scale field for sufficiently small magnetic molecular diffusivity. These findings indicate that kinetic helicity and helicity spectrum are not the quantities controlling the dynamo properties of a flow regardless of whether scale separation is present or not.
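
    For orientation, the two quantities named above can be written out; these are the standard definitions (v denotes the flow velocity and H(k) the helicity spectrum), not formulas reproduced from the paper:

        h(\mathbf{x}) = \mathbf{v}(\mathbf{x}) \cdot \bigl(\nabla \times \mathbf{v}(\mathbf{x})\bigr),
        \qquad
        H = \int h(\mathbf{x})\,\mathrm{d}^{3}x = \int_{0}^{\infty} H(k)\,\mathrm{d}k .

    The two conditions are independent: pointwise h = 0 forces the total helicity H to vanish but not each spectral shell H(k), whereas a zero helicity spectrum constrains every shell in Fourier space without forcing h to vanish pointwise.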

  18. Possibilities for serial femtosecond crystallography sample delivery at future light sources

    PubMed Central

    Chavas, L. M. G.; Gumprecht, L.; Chapman, H. N.

    2015-01-01

    Serial femtosecond crystallography (SFX) uses X-ray pulses from free-electron laser (FEL) sources that can outrun radiation damage and thereby overcome long-standing limits in the structure determination of macromolecular crystals. Intense X-ray FEL pulses of sufficiently short duration allow the collection of damage-free data at room temperature and give the opportunity to study irreversible time-resolved events. SFX may open the way to determine the structure of biological molecules that fail to crystallize readily into large well-diffracting crystals. Taking advantage of FELs with high pulse repetition rates could lead to short measurement times of just minutes. Automated delivery of sample suspensions for SFX experiments could potentially give rise to a much higher rate of obtaining complete measurements than at today's third generation synchrotron radiation facilities, as no crystal alignment or complex robotic motions are required. This capability will also open up extensive time-resolved structural studies. New challenges arise from the resulting high rate of data collection, and in providing reliable sample delivery. Various developments for fully automated high-throughput SFX experiments are being considered for evaluation, including new implementations for a reliable yet flexible sample environment setup. Here, we review the different methods developed so far that best achieve sample delivery for X-ray FEL experiments and present some considerations towards the goal of high-throughput structure determination with X-ray FELs. PMID:26798808

  19. Survival of Salmonella and Staphylococcus aureus in mexican red salsa in a food service setting.

    PubMed

    Franco, Wendy; Hsu, Wei-Yea; Simonne, Amarat H

    2010-06-01

    Mexican red salsa is one of the most common side dishes in Mexican cuisine. According to data on foodborne illnesses collected by the Centers for Disease Control and Prevention, salsa was associated with 70 foodborne illness outbreaks between 1990 and 2006. Salsa ingredients such as tomatoes, cilantro, and onions often have been implicated in foodborne illness outbreaks. Mexican-style restaurants commonly prepare a large batch of red salsa, store it at refrigeration temperatures, and then serve it at room temperature. Salmonella is one of the top etiologies in foodborne illness outbreaks associated with salsa, and our preliminary studies revealed the consistent presence of Staphylococcus aureus in restaurant salsa. In the present study, we evaluated the survival of Salmonella Enteritidis and S. aureus inoculated into restaurant-made salsa samples stored at ambient (20 degrees C) and refrigeration (4 degrees C) temperatures. These test temperature conditions represent best-case and worst-case scenarios in restaurant operations. Salmonella survived in all samples stored at room temperature, but S. aureus populations significantly decreased after 24 h of storage at room temperature. No enterotoxin was detected in samples inoculated with S. aureus at 6.0 log CFU/g. Both microorganisms survived longer in refrigerated samples than in samples stored at room temperature. Overall, both Salmonella and S. aureus survived a sufficient length of time in salsa to pose a food safety risk.

  20. Genus-Specific Primers for Study of Fusarium Communities in Field Samples

    PubMed Central

    Edel-Hermann, Véronique; Gautheron, Nadine; Durling, Mikael Brandström; Kolseth, Anna-Karin; Steinberg, Christian; Persson, Paula; Friberg, Hanna

    2015-01-01

    Fusarium is a large and diverse genus of fungi of great agricultural and economic importance, containing many plant pathogens and mycotoxin producers. To date, high-throughput sequencing of Fusarium communities has been limited by the lack of genus-specific primers targeting regions with high discriminatory power at the species level. In the present study, we evaluated two Fusarium-specific primer pairs targeting translation elongation factor 1 (TEF1). We also present the new primer pair Fa+7/Ra+6. Mock Fusarium communities reflecting phylogenetic diversity were used to evaluate the accuracy of the primers in reflecting the relative abundance of the species. TEF1 amplicons were subjected to 454 high-throughput sequencing to characterize Fusarium communities. Field samples from soil and wheat kernels were included to test the method on more-complex material. For kernel samples, a single PCR was sufficient, while for soil samples, nested PCR was necessary. The newly developed primer pairs Fa+7/Ra+6 and Fa/Ra accurately reflected Fusarium species composition in mock DNA communities. In field samples, 47 Fusarium operational taxonomic units were identified, with the highest Fusarium diversity in soil. The Fusarium community in soil was dominated by members of the Fusarium incarnatum-Fusarium equiseti species complex, contradicting findings in previous studies. The method was successfully applied to analyze Fusarium communities in soil and plant material and can facilitate further studies of Fusarium ecology. PMID:26519387

  1. Lunar placement of Mars quarantine facility

    NASA Technical Reports Server (NTRS)

    Davidson, James E.; Mitchell, W. F.

    1988-01-01

    Advanced mission scenarios are currently being contemplated that would call for the retrieval of surface samples from Mars, from a comet, and from other places in the solar system. An important consideration for all of these sample return missions is quarantine. Quarantine facilities on the Moon offer unique advantages over other locations. The Moon offers gravity, distance, and vacuum. It is sufficiently near the Earth to allow rapid resupply and easy communication. It is sufficiently distant to lessen the psychological impact of a quarantine facility on Earth's human inhabitants. Finally, the Moon is airless, and seems to be devoid of life. It is, therefore, more suited to contamination control efforts.

  2. DEVELOPMENT OF STANDARDIZED LARGE RIVER BIOASSESSMENT PROTOCOLS (LR-BP) FOR FISH ASSEMBLAGES

    EPA Science Inventory

    We conducted research comparing several methods currently in use for the bioassessment and monitoring of fish and benthic macroinvertebrate assemblages for large rivers. Fish data demonstrate that electrofishing 1000 m of shoreline is sufficient for bioassessments on boatable ri...

  3. 50 CFR 679.93 - Amendment 80 Program recordkeeping, permits, monitoring, and catch accounting.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE (CONTINUED... moved to the fish bin. (6) Sample storage. There is sufficient space to accommodate a minimum of 10 observer sampling baskets. This space must be within or adjacent to the observer sample station. (7) Pre...

  4. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    NASA Astrophysics Data System (ADS)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations in rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent, where the relatively smaller samples showed faster transition toward ductility at any confining pressure; b) the sample size influences the angle of the formed shear band; and c) the friction coefficient of the formed shear plane is sample-size dependent, where the relatively smaller sample exhibits a lower friction coefficient compared to larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.

  5. DEVELOPMENT OF STANDARDIZED LARGE RIVER BIOASSESSMENT PROTOCOLS (LR-BP) FOR BENTHIC MACROINVERTEBRATE ASSEMBLAGES

    EPA Science Inventory

    We conducted research comparing several methods currently in use for the bioassessment and monitoring of fish and benthic macroinvertebrate assemblages of large rivers. Fish data demonstrate that electrofishing 1000 m of shoreline is sufficient for bioassessments on boatable riv...

  6. Responses of large mammals to climate change.

    PubMed

    Hetem, Robyn S; Fuller, Andrea; Maloney, Shane K; Mitchell, Duncan

    2014-01-01

    Most large terrestrial mammals, including the charismatic species so important for ecotourism, do not have the luxury of rapid micro-evolution or sufficient range shifts as strategies for adjusting to climate change. The rate of climate change is too fast for genetic adaptation to occur in mammals with longevities of decades, typical of large mammals, and landscape fragmentation and population by humans too widespread to allow spontaneous range shifts of large mammals, leaving only the expression of latent phenotypic plasticity to counter effects of climate change. The expression of phenotypic plasticity includes anatomical variation within the same species, changes in phenology, and employment of intrinsic physiological and behavioral capacity that can buffer an animal against the effects of climate change. Whether that buffer will be realized is unknown, because little is known about the efficacy of the expression of plasticity, particularly for large mammals. Future research in climate change biology requires measurement of physiological characteristics of many identified free-living individual animals for long periods, probably decades, to allow us to detect whether expression of phenotypic plasticity will be sufficient to cope with climate change.

  7. Responses of large mammals to climate change

    PubMed Central

    Hetem, Robyn S; Fuller, Andrea; Maloney, Shane K; Mitchell, Duncan

    2014-01-01

    Most large terrestrial mammals, including the charismatic species so important for ecotourism, do not have the luxury of rapid micro-evolution or sufficient range shifts as strategies for adjusting to climate change. The rate of climate change is too fast for genetic adaptation to occur in mammals with longevities of decades, typical of large mammals, and landscape fragmentation and population by humans too widespread to allow spontaneous range shifts of large mammals, leaving only the expression of latent phenotypic plasticity to counter effects of climate change. The expression of phenotypic plasticity includes anatomical variation within the same species, changes in phenology, and employment of intrinsic physiological and behavioral capacity that can buffer an animal against the effects of climate change. Whether that buffer will be realized is unknown, because little is known about the efficacy of the expression of plasticity, particularly for large mammals. Future research in climate change biology requires measurement of physiological characteristics of many identified free-living individual animals for long periods, probably decades, to allow us to detect whether expression of phenotypic plasticity will be sufficient to cope with climate change. PMID:27583293

  8. THE CHALLENGE OF DETECTING CLASSICAL SWINE FEVER VIRUS CIRCULATION IN WILD BOAR (SUS SCROFA): SIMULATION OF SAMPLING OPTIONS.

    PubMed

    Sonnenburg, Jana; Schulz, Katja; Blome, Sandra; Staubach, Christoph

    2016-10-01

    Classical swine fever (CSF) is one of the most important viral diseases of domestic pigs ( Sus scrofa domesticus) and wild boar ( Sus scrofa ). For at least 4 decades, several European Union member states were confronted with outbreaks among wild boar and, as it had been shown that infected wild boar populations can be a major cause of primary outbreaks in domestic pigs, strict control measures for both species were implemented. To guarantee early detection and to demonstrate freedom from disease, intensive surveillance is carried out based on a hunting bag sample. In this context, virologic investigations play a major role in the early detection of new introductions and in regions immunized with a conventional vaccine. The required financial resources and personnel for reliable testing are often large, and sufficient sample sizes to detect low virus prevalences are difficult to obtain. We conducted a simulation to model the possible impact of changes in sample size and sampling intervals on the probability of CSF virus detection based on a study area of 65 German hunting grounds. A 5-yr period with 4,652 virologic investigations was considered. Results suggest that low prevalences could not be detected with a justifiable effort. The simulation of increased sample sizes per sampling interval showed only a slightly better performance but would be unrealistic in practice, especially outside the main hunting season. Further studies on other approaches such as targeted or risk-based sampling for virus detection in connection with (marker) antibody surveillance are needed.
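
    As a rough illustration of why low prevalences are hard to detect from hunting-bag samples, the sketch below computes the probability of finding at least one virus-positive animal and the sample size needed for a target detection probability. It assumes a perfect test and independent random sampling, which the study's simulation explicitly does not; the prevalence and confidence values are made up for illustration.

        import math

        def detection_probability(prevalence: float, sample_size: int) -> float:
            """Probability that at least one positive animal appears in the sample."""
            return 1.0 - (1.0 - prevalence) ** sample_size

        def required_sample_size(prevalence: float, confidence: float = 0.95) -> int:
            """Smallest sample size reaching the requested detection probability."""
            return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - prevalence))

        # Illustrative values only: at 0.5% prevalence, 100 animals give roughly a
        # 39% chance of detection, and about 600 animals are needed for 95%.
        print(detection_probability(0.005, 100))
        print(required_sample_size(0.005, 0.95))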

  9. Moving into a new era of periodontal genetic studies: relevance of large case-control samples using severe phenotypes for genome-wide association studies.

    PubMed

    Vaithilingam, R D; Safii, S H; Baharuddin, N A; Ng, C C; Cheong, S C; Bartold, P M; Schaefer, A S; Loos, B G

    2014-12-01

    Studies to elucidate the role of genetics as a risk factor for periodontal disease have gone through various phases. In the majority of cases, the initial 'hypothesis-dependent' candidate-gene polymorphism studies did not report valid genetic risk loci. Following a large-scale replication study, these initially positive results are believed to be caused by type 1 errors. However, susceptibility genes, such as CDKN2BAS (Cyclin Dependent KiNase 2B AntiSense RNA; alias ANRIL [ANtisense Rna In the Ink locus]), glycosyltransferase 6 domain containing 1 (GLT6D1) and cyclooxygenase 2 (COX2), have been reported as conclusive risk loci of periodontitis. The search for genetic risk factors accelerated with the advent of 'hypothesis-free' genome-wide association studies (GWAS). However, despite many different GWAS being performed for almost all human diseases, only three GWAS on periodontitis have been published: one reported genome-wide association of GLT6D1 with aggressive periodontitis (a severe phenotype of periodontitis), whereas the remaining two, which were performed on patients with chronic periodontitis, were not able to find significant associations. This review discusses the problems faced and the lessons learned from the search for genetic risk variants of periodontitis. Current and future strategies for identifying genetic variance in periodontitis, and the importance of planning a well-designed genetic study with large and sufficiently powered case-control samples of severe phenotypes, are also discussed. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Statistical learning theory for high dimensional prediction: Application to criterion-keyed scale development.

    PubMed

    Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R

    2016-12-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how 3 common SLT algorithms (supervised principal components, regularization, and boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach (or perhaps because of them), SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
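
    To make the EPE-minimization idea concrete, here is a minimal sketch assuming a generic item matrix X and a binary outcome y rather than the personality and mortality data used in the article: a cross-validated L1-penalized (lasso) logistic regression chooses its penalty strength by out-of-sample performance instead of the within-sample likelihood, which is one form of the regularization approach the abstract mentions.

        import numpy as np
        from sklearn.linear_model import LogisticRegressionCV

        # Hypothetical data: 500 respondents, 120 candidate items, binary outcome.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 120))
        y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

        # Penalty strength chosen by 5-fold cross-validation (an estimate of EPE),
        # not by maximizing the within-sample likelihood.
        model = LogisticRegressionCV(
            Cs=10, cv=5, penalty="l1", solver="liblinear", scoring="roc_auc"
        ).fit(X, y)

        # Items retaining nonzero coefficients would form the criterion-keyed scale.
        selected_items = np.flatnonzero(model.coef_[0])
        print(len(selected_items), "items retained")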

  11. Baseline integrated behavioural and biological assessment among most at-risk populations in six high-prevalence states of India: design and implementation challenges.

    PubMed

    Saidel, Tobi; Adhikary, Rajatashuvra; Mainkar, Mandar; Dale, Jayesh; Loo, Virginia; Rahman, Motiur; Ramesh, Banadakoppa M; Paranjape, Ramesh S

    2008-12-01

    This paper presents key methodological approaches and challenges in implementing and analysing the first round of the integrated biobehavioural assessment of most-at-risk populations, conducted in conjunction with evaluation of Avahan, the India AIDS initiative. The survey collected data on HIV risk behaviours, sexually transmitted infections and HIV prevalence in 29 districts in six high-prevalence states of India. Groups included female sex workers and clients, men who have sex with men, injecting drug users and truck drivers. Strategies for overcoming some challenges of the large-scale surveys among vulnerable populations, including sampling hidden populations, involvement of the communities targeted by the survey, laboratory and quality control in remote, non-clinic field settings, and data analysis and data use are presented. Satisfying the need for protocols, guidelines and tools that allowed for sufficient standardization, while being tailored enough to fit diverse local situations on such a large scale, with so many implementing partners, emerged as a major management challenge. A major lesson from the first round is the vital importance of investing upfront time in tailoring the sampling methods, data collection instruments, and analysis plan to match measurement objectives. Despite the challenges, the integrated biobehavioural assessment was a huge achievement, and was largely successful in providing previously unavailable information about the HIV situation among populations that are critical to the curtailment of HIV spread in India. Lessons from the first round will be used to evolve the second round into an exercise with increased evaluative capability for Avahan.

  12. An improved taxonomic sampling is a necessary but not sufficient condition for resolving inter-families relationships in Caridean decapods.

    PubMed

    Aznar-Cormano, L; Brisset, J; Chan, T-Y; Corbari, L; Puillandre, N; Utge, J; Zbinden, M; Zuccon, D; Samadi, S

    2015-04-01

    During the past decade, a large number of multi-gene analyses have aimed at resolving the phylogenetic relationships within Decapoda. However, relationships among families, and even among sub-families, remain poorly defined. Most analyses used an incomplete and opportunistic sampling of species, but also an incomplete and opportunistic gene selection among those available for Decapoda. Here we test in the Caridea if improving the taxonomic coverage following the hierarchical scheme of the classification, as it is currently accepted, provides a better phylogenetic resolution for the inter-families relationships. The rich collections of the Muséum National d'Histoire Naturelle de Paris are used for sampling as far as possible at least two species of two different genera for each family or subfamily. All potential markers are tested over this sampling. For some coding genes the amplification success varies greatly among taxa and the phylogenetic signal is highly saturated. This result probably explains the taxon-heterogeneity among previously published studies. The analysis is thus restricted to the genes homogeneously amplified over the whole sampling. Thanks to the taxonomic sampling scheme the monophyly of most families is confirmed. However, the genes commonly used in Decapoda appear ill-suited for clarifying inter-families relationships, which remain poorly resolved. Genome-wide analyses, like transcriptome-based exon capture facilitated by new-generation sequencing methods, might provide a sounder approach to resolve deep and rapid radiations like the Caridea.

  13. Strong signal increase in STED fluorescence microscopy by imaging regions of subdiffraction extent

    PubMed Central

    Göttfert, Fabian; Pleiner, Tino; Heine, Jörn; Westphal, Volker; Görlich, Dirk; Sahl, Steffen J.; Hell, Stefan W.

    2017-01-01

    Photobleaching remains a limiting factor in superresolution fluorescence microscopy. This is particularly true for stimulated emission depletion (STED) and reversible saturable/switchable optical fluorescence transitions (RESOLFT) microscopy, where adjacent fluorescent molecules are distinguished by sequentially turning them off (or on) using a pattern of light formed as a doughnut or a standing wave. In sample regions where the pattern intensity reaches or exceeds a certain threshold, the molecules are essentially off (or on), whereas in areas where the intensity is lower, that is, around the intensity minima, the molecules remain in the initial state. Unfortunately, the creation of on/off state differences on subdiffraction scales requires the maxima of the intensity pattern to exceed the threshold intensity by a large factor that scales with the resolution. Hence, when recording an image by scanning the pattern across the sample, each molecule in the sample is repeatedly exposed to the maxima, which exacerbates bleaching. Here, we introduce MINFIELD, a strategy for fundamentally reducing bleaching in STED/RESOLFT nanoscopy through restricting the scanning to subdiffraction-sized regions. By safeguarding the molecules from the intensity of the maxima and exposing them only to the lower intensities (around the minima) needed for the off-switching (on-switching), MINFIELD largely avoids detrimental transitions to higher molecular states. A bleaching reduction by up to 100-fold is demonstrated. Recording nanobody-labeled nuclear pore complexes in Xenopus laevis cells showed that MINFIELD-STED microscopy resolved details separated by <25 nm where conventional scanning failed to acquire sufficient signal. PMID:28193881

  14. The Clark Phase-able Sample Size Problem: Long-Range Phasing and Loss of Heterozygosity in GWAS

    NASA Astrophysics Data System (ADS)

    Halldórsson, Bjarni V.; Aguiar, Derek; Tarpine, Ryan; Istrail, Sorin

    A phase transition is taking place today. The amount of data generated by genome resequencing technologies is so large that in some cases it is now less expensive to repeat the experiment than to store the information generated by the experiment. In the next few years it is quite possible that millions of Americans will have been genotyped. The question then arises of how to make the best use of this information and jointly estimate the haplotypes of all these individuals. The premise of the paper is that long shared genomic regions (or tracts) are unlikely unless the haplotypes are identical by descent (IBD), in contrast to short shared tracts which may be identical by state (IBS). Here we estimate for populations, using the US as a model, what sample size of genotyped individuals would be necessary to have sufficiently long shared haplotype regions (tracts) that are identical by descent (IBD), at a statistically significant level. These tracts can then be used as input for a Clark-like phasing method to obtain a complete phasing solution of the sample. We estimate in this paper that for a population like the US and about 1% of the people genotyped (approximately 2 million), tracts of about 200 SNPs long are shared between pairs of individuals IBD with high probability which assures the Clark method phasing success. We show on simulated data that the algorithm will get an almost perfect solution if the number of individuals being SNP arrayed is large enough and the correctness of the algorithm grows with the number of individuals being genotyped.
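
    The notion of a long shared tract can be illustrated with a toy sketch. The representation below (haplotypes as strings of 0/1 alleles) and the example values are assumptions made for illustration, not the authors' phasing algorithm; the quantity computed, the longest run of consecutive matching SNPs between two haplotypes, is what distinguishes sharing that is merely identical by state from tracts long enough to indicate identity by descent.

        def longest_shared_tract(hap_a: str, hap_b: str) -> int:
            """Length (in SNPs) of the longest run of consecutive matching alleles."""
            assert len(hap_a) == len(hap_b)
            best = run = 0
            for a, b in zip(hap_a, hap_b):
                run = run + 1 if a == b else 0
                best = max(best, run)
            return best

        # Toy example: two 10-SNP haplotypes sharing an 8-SNP tract.
        print(longest_shared_tract("0101100110", "1101100111"))  # prints 8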

  15. Gender-Specific Barriers to Self-Sufficiency among Former Supplemental Security Income Drug Addiction and Alcoholism Beneficiaries: Implications for Welfare-To-Work Programs and Services

    PubMed Central

    Hogan, Sean R; Unick, George J.; Speiglman, Richard; Norris, Jean C.

    2011-01-01

    This study examines barriers to economic self-sufficiency among a panel of 219 former Supplemental Security Income (SSI) drug addiction and alcoholism (DA&A) recipients following elimination of DA&A as an eligibility category for SSI disability benefits. Study participants were comprehensively surveyed at six measurement points following the policy change. Generalized estimating equations were used to examine full-sample and gender-specific barriers to economic self-sufficiency. Results indicate that access to transportation, age, and time are the strongest predictors of achieving self-sufficiency for both men and women leaving the welfare system. Gender-specific barriers are also identified. Future research needs to assess the generalizability of these results to other public assistance recipients. PMID:21625301

  16. Method for detection of long-lived radioisotopes in small biochemical samples

    DOEpatents

    Turteltaub, K.W.; Vogel, J.S.; Felton, J.S.; Gledhill, B.L.; Davis, J.C.

    1994-11-22

    Disclosed is a method for detection of long-lived radioisotopes in small biochemical samples, comprising: a. selecting a biological host in which radioisotopes are present in concentrations equal to or less than those in the ambient biosphere, b. preparing a long-lived radioisotope labeled reactive chemical specie, c. administering the chemical specie to the biological host in doses sufficiently low to avoid significant overt damage to the biological system, d. allowing a period of time to elapse sufficient for dissemination and interaction of the chemical specie with the host throughout the biological system of the host, e. isolating a reacted fraction of the biological substance from the host in a manner sufficient to avoid contamination of the substance from extraneous sources, f. converting the fraction of biological substance by suitable means to a material which efficiently produces charged ions in at least one of several possible ion sources without introduction of significant isotopic fractionation, and, g. measuring the radioisotope concentration in the material by means of direct isotopic counting. 5 figs.

  17. Exploration of the factor structure of the Kirton Adaption-Innovation Inventory using bootstrapping estimation.

    PubMed

    Im, Subin; Min, Soonhong

    2013-04-01

    Exploratory factor analyses of the Kirton Adaption-Innovation Inventory (KAI), which serves to measure individual cognitive styles, generally indicate three factors: sufficiency of originality, efficiency, and rule/group conformity. In contrast, a 2005 study by Im and Hu using confirmatory factor analysis supported a four-factor structure, dividing the sufficiency of originality dimension into two subdimensions, idea generation and preference for change. This study extends Im and Hu's (2005) study of a derived version of the KAI by providing additional evidence of the four-factor structure. Specifically, the authors test the robustness of the parameter estimates to the violation of normality assumptions in the sample using bootstrap methods. A bias-corrected confidence interval bootstrapping procedure conducted among a sample of 356 participants (members of the Arkansas Household Research Panel, with middle SES and average age of 55.6 yr., SD = 13.9) showed that the four-factor model with two subdimensions of sufficiency of originality fits the data significantly better than the three-factor model in non-normality conditions.
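
    A bias-corrected bootstrap confidence interval of the kind referred to above can be obtained for a simple statistic roughly as sketched below. This uses SciPy's generic bootstrap helper on a made-up, deliberately skewed score vector; it is not the confirmatory factor analysis bootstrap actually performed in the study, and all values are illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Hypothetical subscale scores for 356 respondents, non-normal by design.
        scores = rng.gamma(shape=2.0, scale=10.0, size=356) + 40

        # Bias-corrected and accelerated (BCa) bootstrap interval for the mean.
        res = stats.bootstrap(
            (scores,), np.mean, confidence_level=0.95, method="BCa",
            n_resamples=9999, random_state=rng,
        )
        print(res.confidence_interval)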

  18. Method for detection of long-lived radioisotopes in small biochemical samples

    DOEpatents

    Turteltaub, Kenneth W.; Vogel, John S.; Felton, James S.; Gledhill, Barton L.; Davis, Jay C.

    1994-01-01

    Disclosed is a method for detection of long-lived radioisotopes in small bio-chemical samples, comprising: a. selecting a biological host in which radioisotopes are present in concentrations equal to or less than those in the ambient biosphere, b. preparing a long-lived radioisotope labeled reactive chemical specie, c. administering said chemical specie to said biological host in doses sufficiently low to avoid significant overt damage to the biological system thereof, d. allowing a period of time to elapse sufficient for dissemination and interaction of said chemical specie with said host throughout said biological system of said host, e. isolating a reacted fraction of the biological substance from said host in a manner sufficient to avoid contamination of said substance from extraneous sources, f. converting said fraction of biological substance by suitable means to a material which efficiently produces charged ions in at least one of several possible ion sources without introduction of significant isotopic fractionation, and, g. measuring the radioisotope concentration in said material by means of direct isotopic counting.

  19. An innovative piston corer for large‐volume sediment samples

    PubMed Central

    Haselmair, Alexandra; Stachowitsch, Michael; Zuschin, Martin

    2016-01-01

    Coring is one of several standard procedures to extract sediments and their faunas from open marine, estuarine, and limnic environments. Achieving sufficiently deep penetration, obtaining large sediment volumes in single deployments, and avoiding sediment loss upon retrieval remain problematic. We developed a piston corer with a diameter of 16 cm that enables penetration down to 1.5 m in a broad range of soft bottom types, yields sufficient material for multiple analyses, and prevents sediment loss due to a specially designed hydraulic core catcher. A novel extrusion system enables very precise slicing and preserves the original sediment stratification by keeping the liners upright. The corer has moderate purchase costs and a robust and simple design that allows for a deployment from relatively small vessels as available at most marine science institutions. It can easily be operated by two to three researchers rather than by specially trained technicians. In the northern Adriatic Sea, the corer successfully extracted more than 50 cores from a range of fine mud to coarse sand, at water depths from three to 45 m. The initial evaluation of the cores demonstrated their usefulness for fauna sequences along with heavy metal, nutrient and pollutant analyses. Their length is particularly suited for historical ecological work requiring sedimentary and faunal sequences to reconstruct benthic communities over the last millennia. PMID:28111529

  20. English community pharmacists’ experiences of using electronic transmission of prescriptions: a qualitative study

    PubMed Central

    2013-01-01

    Background The Electronic Prescription Service Release 2 (EPS2) in England has been designed to provide electronic transmission of digitally-signed prescriptions between primary care providers, with the intent of removing the large amounts of paper currently exchanged. As part of a wider evaluation of the EPS service, we wished to explore pharmacists’ experience with the new system and their perceptions of its benefits and any associated problems. Methods We conducted semi-structured telephone interviews with community pharmacists using EPS2. We used a purposive sampling technique to obtain views from pharmacists working in pharmacies with a range of sizes and locations and to include both independent pharmacies and multiples. Interviews were transcribed verbatim and coded using grounded theory to identify the main factors that have influenced deployment and implementation in the eyes of respondents. QSR Nvivo was used to aid in this process. Results It became apparent from the analysis that respondents perceived a wide range of advantages of EPS including improved safety, stock control, time management and improved relationships between pharmacy and General Practice staff. Respondents did not perceive a large difference in terms of work processes or development of their professional role. A large number of problems had been experienced in relation to both the technology itself and the way it was used by General Practices. It became apparent that work-around procedures had been developed for dealing with these issues but that not all these problems were perceived as having been addressed sufficiently at source. This sometimes had implications for the extent of EPS2 use and also limited some of the potential advantages of the EPS2 system, such as reduced effort in the management of prescription reimbursement. Respondents made suggestions for future improvements to EPS2. While interview data demonstrated that there were some feedback procedures in place, these were not regarded as being sufficient by the majority of respondents. Conclusions Whilst pharmacists perceived a wide range of benefits of EPS, a large number of problems had been experienced. Despite these difficulties, no pharmacists expressed an overall negative view. PMID:24152293

  1. English community pharmacists' experiences of using electronic transmission of prescriptions: a qualitative study.

    PubMed

    Garfield, Sara; Hibberd, Ralph; Barber, Nick

    2013-10-23

    The Electronic Prescription Service Release 2 (EPS2) in England has been designed to provide electronic transmission of digitally-signed prescriptions between primary care providers, with the intent of removing the large amounts of paper currently exchanged. As part of a wider evaluation of the EPS service, we wished to explore pharmacists' experience with the new system and their perceptions of its benefits and any associated problems. We conducted semi-structured telephone interviews with community pharmacists using EPS2. We used a purposive sampling technique to obtain views from pharmacists working in pharmacies with a range of sizes and locations and to include both independent pharmacies and multiples. Interviews were transcribed verbatim and coded using grounded theory to identify the main factors that have influenced deployment and implementation in the eyes of respondents. QSR Nvivo was used to aid in this process. It became apparent from the analysis that respondents perceived a wide range of advantages of EPS including improved safety, stock control, time management and improved relationships between pharmacy and General Practice staff. Respondents did not perceive a large difference in terms of work processes or development of their professional role. A large number of problems had been experienced in relation to both the technology itself and the way it was used by General Practices. It became apparent that work-around procedures had been developed for dealing with these issues but that not all these problems were perceived as having been addressed sufficiently at source. This sometimes had implications for the extent of EPS2 use and also limited some of the potential advantages of the EPS2 system, such as reduced effort in the management of prescription reimbursement. Respondents made suggestions for future improvements to EPS2. While interview data demonstrated that there were some feedback procedures in place, these were not regarded as being sufficient by the majority of respondents. Whilst pharmacists perceived a wide range of benefits of EPS, a large number of problems had been experienced. Despite these difficulties, no pharmacists expressed an overall negative view.

  2. Effects of short-range order on electronic properties of Zr-Ni glasses as seen from low-temperature specific heat

    NASA Astrophysics Data System (ADS)

    Kroeger, D. M.; Koch, C. C.; Scarbrough, J. O.; McKamey, C. G.

    1984-02-01

    Measurements of the low-temperature specific heat Cp of liquid-quenched Zr-Ni glasses for a large number of compositions in the range from 55 to 74 at.% Zr revealed an unusual composition dependence of the density of states at the Fermi level, N(EF). Furthermore, for some compositions the variation of Cp near the superconducting transition temperature Tc indicated the presence of two superconducting phases, i.e., two superconducting transitions were detected. Comparison of the individual Tc's in phase-separated samples to the composition dependence of Tc for all of the samples suggests that amorphous phases with compositions near 60 and 66.7 at.% Zr occur. We discuss these results in terms of an "association model" for liquid alloys (due to Sommer), in which associations of unlike atoms with definite stoichiometries are assumed to exist in equilibrium with unassociated atoms. We conclude that in the composition range studied, associate clusters with the compositions Zr3Ni2 and Zr2Ni occur. In only a few cases are the clusters sufficiently large, compared with the superconducting coherence length, for separate superconducting transitions to be observed. The variation of N(EF) with composition is discussed, as well as the effects of this chemical short-range ordering on the crystallization behavior and glass-forming tendency.
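
    For context, the link between the measured low-temperature specific heat and N(EF) is the standard free-electron result; the relations below are textbook expressions (with k_B the Boltzmann constant and λ the electron-phonon coupling), not equations taken from the paper:

        C_p(T) \simeq \gamma T + \beta T^{3},
        \qquad
        \gamma = \frac{\pi^{2}}{3}\, k_B^{2}\, N(E_F)\,(1+\lambda),

    so the electronic coefficient γ is read off a fit of C_p/T versus T² and, after correcting for the enhancement factor (1+λ), yields the density of states at the Fermi level; the cubic term is the lattice (Debye) contribution.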

  3. Zoonotic Babesia microti in the northeastern U.S.: Evidence for the expansion of a specific parasite lineage

    PubMed Central

    Molloy, Philip; Weeks, Karen

    2018-01-01

    The recent range expansion of human babesiosis in the northeastern United States, once found only in restricted coastal sites, is not well understood. This study sought to utilize a large number of samples to examine the population structure of the parasites on a fine scale to provide insights into the mode of emergence across the region. 228 B. microti samples collected in endemic northeastern U.S. sites were genotyped using published Variable number tandem repeat (VNTR) markers. The genetic diversity and population structure were analysed on a geographic scale using Phyloviz and TESS, programs that utilize two different methods to identify population membership without predefined population data. Three distinct populations were detected in northeastern US, each dominated by a single ancestral type. In contrast to the limited range of the Nantucket and Cape Cod populations, the mainland population dominated from New Jersey eastward to Boston. Ancestral populations of B. microti were sufficiently isolated to differentiate into distinct populations. Despite this, a single population was detected across a large geographic area of the northeast that historically had at least 3 distinct foci of transmission, central New Jersey, Long Island and southeastern Connecticut. We conclude that a single B. microti genotype has expanded across the northeastern U.S. The biological attributes associated with this parasite genotype that have contributed to such a selective sweep remain to be identified. PMID:29565993

  4. Analysis of potential protein-modifying variants in 9000 endometriosis patients and 150000 controls of European ancestry.

    PubMed

    Sapkota, Yadav; Vivo, Immaculata De; Steinthorsdottir, Valgerdur; Fassbender, Amelie; Bowdler, Lisa; Buring, Julie E; Edwards, Todd L; Jones, Sarah; O, Dorien; Peterse, Daniëlle; Rexrode, Kathryn M; Ridker, Paul M; Schork, Andrew J; Thorleifsson, Gudmar; Wallace, Leanne M; Kraft, Peter; Morris, Andrew P; Nyholt, Dale R; Edwards, Digna R Velez; Nyegaard, Mette; D'Hooghe, Thomas; Chasman, Daniel I; Stefansson, Kari; Missmer, Stacey A; Montgomery, Grant W

    2017-09-12

    Genome-wide association (GWA) studies have identified 19 independent common risk loci for endometriosis. Most of the GWA variants are non-coding and the genes responsible for the association signals have not been identified. Herein, we aimed to assess the potential role of protein-modifying variants in endometriosis using exome-array genotyping in 7164 cases and 21005 controls, and a replication set of 1840 cases and 129016 controls of European ancestry. Results in the discovery sample identified significant evidence for association with coding variants in single-variant (rs1801232-CUBN) and gene-level (CIITA and PARP4) meta-analyses, but these did not survive replication. In the combined analysis, there was genome-wide significant evidence for rs13394619 (P = 2.3 × 10⁻⁹) in GREB1 at 2p25.1, a locus previously identified in a GWA meta-analysis of European and Japanese samples. Despite sufficient power, our results did not identify any protein-modifying variants (MAF > 0.01) with moderate or large effect sizes in endometriosis, although these variants may exist in non-European populations or in high-risk families. The results suggest continued discovery efforts should focus on genotyping large numbers of surgically-confirmed endometriosis cases and controls, and/or sequencing high-risk families to identify novel rare variants to provide greater insights into the molecular pathogenesis of the disease.

  5. Ray tracing method for the evaluation of grazing incidence x-ray telescopes described by spatially sampled surfaces.

    PubMed

    Yu, Jun; Shen, Zhengxiang; Sheng, Pengfeng; Wang, Xiaoqiang; Hailey, Charles J; Wang, Zhanshan

    2018-03-01

    The nested grazing incidence telescope can achieve a large collecting area in x-ray astronomy, with a large number of closely packed, thin conical mirrors. Exploiting the surface metrological data, the ray tracing method used to reconstruct the shell surface topography and evaluate the imaging performance is a powerful tool to assist iterative improvement in the fabrication process. However, current two-dimensional (2D) ray tracing codes, especially when utilized with densely sampled surface shape data, may not provide sufficient accuracy of reconstruction and are computationally cumbersome. In particular, 2D ray tracing as currently employed considers coplanar rays and thus simulates only those rays along the meridional plane. This captures axial figure errors but leaves other important errors, such as roundness errors, unaccounted for. We introduce a semianalytic, three-dimensional (3D) ray tracing approach for x-ray optics that overcomes these shortcomings and is both computationally fast and accurate. We first introduce the principles and the computational details of this 3D ray tracing method. Then computer simulations comparing this approach to 2D ray tracing are demonstrated, using an ideal conic Wolter-I telescope for benchmarking. Finally, the present 3D ray tracing is used to evaluate the performance of a prototype x-ray telescope fabricated for the enhanced x-ray timing and polarization mission.
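
    A core step in any such 3D ray tracer is specular reflection of a ray about the local surface normal. The sketch below shows only that step; it assumes the normal has already been estimated from the sampled surface data (which is where the metrology enters) and is an illustration, not the authors' code. The example vectors are made up to mimic grazing incidence.

        import numpy as np

        def reflect(direction: np.ndarray, normal: np.ndarray) -> np.ndarray:
            """Specular reflection of a ray direction about a surface normal."""
            d = direction / np.linalg.norm(direction)
            n = normal / np.linalg.norm(normal)
            return d - 2.0 * np.dot(d, n) * n

        # Grazing-incidence example: a ray nearly parallel to the optical (z) axis
        # hitting a surface element whose normal points mostly along -x.
        incoming = np.array([0.01, 0.0, 1.0])
        normal = np.array([-1.0, 0.0, 0.02])
        print(reflect(incoming, normal))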

  6. Assessing Disfluencies in School-Age Children Who Stutter: How Much Speech Is Enough?

    ERIC Educational Resources Information Center

    Gregg, Brent A.; Sawyer, Jean

    2015-01-01

    The question of what size speech sample is sufficient to accurately identify stuttering and its myriad characteristics is a valid one. Short samples have a risk of over- or underrepresenting disfluency types or characteristics. In recent years, there has been a trend toward using shorter samples because they are less time-consuming for…

  7. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.
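
    To indicate the structure being described, the tracking error and the cost whose minimization drives the modification can be written schematically in common model-reference adaptive control notation (x is the plant state, x_m the reference-model state, Q a symmetric positive-definite weight); this is a generic sketch of an L2-norm tracking cost, not the exact formulation derived in the report:

        e(t) = x(t) - x_m(t),
        \qquad
        J = \tfrac{1}{2}\int_{0}^{\infty} e(t)^{\top} Q\, e(t)\,\mathrm{d}t ,

    with the adaptive law obtained by applying the gradient method to the optimality condition of this problem, and the size of the adaptive gain trading tracking speed against robustness.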

  8. High-Grading Lunar Samples for Return to Earth

    NASA Technical Reports Server (NTRS)

    Allen, Carlton; Sellar, Glenn; Nunez, Jorge; Winterhalter, Daniel; Farmer, Jack

    2009-01-01

    Astronauts on long-duration lunar missions will need the capability to "high-grade" their samples to select the highest value samples for transport to Earth and to leave others on the Moon. We are supporting studies to define the "necessary and sufficient" measurements and techniques for high-grading samples at a lunar outpost. A glovebox, dedicated to testing instruments and techniques for high-grading samples, is in operation at the JSC Lunar Experiment Laboratory.

  9. Mechanism of explosive eruptions of Kilauea Volcano, Hawaii

    USGS Publications Warehouse

    Dvorak, J.J.

    1992-01-01

    A small explosive eruption of Kilauea Volcano, Hawaii, occurred in May 1924. The eruption was preceded by rapid draining of a lava lake and transfer of a large volume of magma from the summit reservoir to the east rift zone. This lowered the magma column, which reduced hydrostatic pressure beneath Halemaumau and allowed groundwater to flow rapidly into areas of hot rock, producing a phreatic eruption. A comparison with other events at Kilauea shows that the transfer of a large volume of magma out of the summit reservoir is not sufficient to produce a phreatic eruption. For example, the volume transferred at the beginning of explosive activity in May 1924 was less than the volumes transferred in March 1955 and January-February 1960, when no explosive activity occurred. Likewise, draining of a lava lake and deepening of the floor of Halemaumau, which occurred in May 1922 and August 1923, were not sufficient to produce explosive activity. A phreatic eruption of Kilauea requires both the transfer of a large volume of magma from the summit reservoir and the rapid removal of magma from near the surface, where the surrounding rocks have been heated to a sufficient temperature to produce steam explosions when suddenly contacted by groundwater. © 1992 Springer-Verlag.

  10. Results of Large-Scale Spacecraft Flammability Tests

    NASA Technical Reports Server (NTRS)

    Ferkul, Paul; Olson, Sandra; Urban, David L.; Ruff, Gary A.; Easton, John; T'ien, James S.; Liao, Ta-Ting T.; Fernandez-Pello, A. Carlos; Torero, Jose L.; Eigenbrand, Christian

    2017-01-01

    For the first time, a large-scale fire was intentionally set inside a spacecraft while in orbit. Testing in low gravity aboard spacecraft had been limited to samples of modest size: for thin fuels the longest samples burned were around 15 cm in length and thick fuel samples have been even smaller. This is despite the fact that fire is a catastrophic hazard for spaceflight and the spread and growth of a fire, combined with its interactions with the vehicle, cannot be expected to scale linearly. While every type of occupied structure on earth has been the subject of full scale fire testing, this had never been attempted in space owing to the complexity, cost, risk and absence of a safe location. Thus, there is a gap in knowledge of fire behavior in spacecraft. The recent utilization of large, unmanned, resupply craft has provided the needed capability: a habitable but unoccupied spacecraft in low earth orbit. One such vehicle was used to study the flame spread over a 94 x 40.6 cm thin charring solid (fiberglass-cotton fabric). The sample was an order of magnitude larger than anything studied to date in microgravity and was of sufficient scale that it consumed 1.5 of the available oxygen. The experiment, which is called Saffire, consisted of two tests, forward or concurrent flame spread (with the direction of flow) and opposed flame spread (against the direction of flow). The average forced air speed was 20 cm/s. For the concurrent flame spread test, the flame size remained constrained after the ignition transient, which is not the case in 1-g. These results were qualitatively different from those on earth where an upward-spreading flame on a sample of this size accelerates and grows. In addition, a curious effect of the chamber size is noted. Compared to previous microgravity work in smaller tunnels, the flame in the larger tunnel spread more slowly, even for a wider sample. This is attributed to the effect of flow acceleration in the smaller tunnels as a result of hot gas expansion. These results clearly demonstrate the unique features of purely forced flow in microgravity on flame spread, the dependence of flame behavior on the scale of the experiment, and the importance of full-scale testing for spacecraft fire safety.

  11. Metabolic rates of giant pandas inform conservation strategies.

    PubMed

    Fei, Yuxiang; Hou, Rong; Spotila, James R; Paladino, Frank V; Qi, Dunwu; Zhang, Zhihe

    2016-06-06

    The giant panda is an icon of conservation and survived a large-scale bamboo die off in the 1980s in China. Captive breeding programs have produced a large population in zoos and efforts continue to reintroduce those animals into the wild. However, we lack sufficient knowledge of their physiological ecology to determine requirements for survival now and in the face of climate change. We measured resting and active metabolic rates of giant pandas in order to determine if current bamboo resources were sufficient for adding additional animals to populations in natural reserves. Resting metabolic rates were somewhat below average for a panda sized mammal and active metabolic rates were in the normal range. Pandas do not have exceptionally low metabolic rates. Nevertheless, there is enough bamboo in natural reserves to support both natural populations and large numbers of reintroduced pandas. Bamboo will not be the limiting factor in successful reintroduction.

  12. Metabolic rates of giant pandas inform conservation strategies

    NASA Astrophysics Data System (ADS)

    Fei, Yuxiang; Hou, Rong; Spotila, James R.; Paladino, Frank V.; Qi, Dunwu; Zhang, Zhihe

    2016-06-01

    The giant panda is an icon of conservation and survived a large-scale bamboo die off in the 1980s in China. Captive breeding programs have produced a large population in zoos and efforts continue to reintroduce those animals into the wild. However, we lack sufficient knowledge of their physiological ecology to determine requirements for survival now and in the face of climate change. We measured resting and active metabolic rates of giant pandas in order to determine if current bamboo resources were sufficient for adding additional animals to populations in natural reserves. Resting metabolic rates were somewhat below average for a panda sized mammal and active metabolic rates were in the normal range. Pandas do not have exceptionally low metabolic rates. Nevertheless, there is enough bamboo in natural reserves to support both natural populations and large numbers of reintroduced pandas. Bamboo will not be the limiting factor in successful reintroduction.

  13. Metabolic rates of giant pandas inform conservation strategies

    PubMed Central

    Fei, Yuxiang; Hou, Rong; Spotila, James R.; Paladino, Frank V.; Qi, Dunwu; Zhang, Zhihe

    2016-01-01

    The giant panda is an icon of conservation and survived a large-scale bamboo die off in the 1980s in China. Captive breeding programs have produced a large population in zoos and efforts continue to reintroduce those animals into the wild. However, we lack sufficient knowledge of their physiological ecology to determine requirements for survival now and in the face of climate change. We measured resting and active metabolic rates of giant pandas in order to determine if current bamboo resources were sufficient for adding additional animals to populations in natural reserves. Resting metabolic rates were somewhat below average for a panda sized mammal and active metabolic rates were in the normal range. Pandas do not have exceptionally low metabolic rates. Nevertheless, there is enough bamboo in natural reserves to support both natural populations and large numbers of reintroduced pandas. Bamboo will not be the limiting factor in successful reintroduction. PMID:27264109

  14. Psychometric properties of the Sexual Excitation/Sexual Inhibition Inventory for Women and Men (SESII-W/M) and the Sexual Excitation Scales/Sexual Inhibition Scales short form (SIS/SES-SF) in a population-based sample in Germany

    PubMed Central

    Scholten, Saskia; Margraf, Jürgen

    2018-01-01

    The Sexual Excitation Sexual/Inhibition Inventory for Women and Men (SESII-W/M) and the Sexual Excitation Scales/Sexual Inhibition Scales short form (SIS/SES-SF) are two self-report questionnaires for assessing sexual excitation (SE) and sexual inhibition (SI). According to the dual control model of sexual response, SE and SI differ between individuals and influence the occurrence of sexual arousal in given situations. Extreme levels of SE and SI are postulated to be associated with sexual difficulties or risky sexual behaviors. The present study was designed to assess the psychometric properties of the German versions of both questionnaires utilizing a large population-based sample of 2,708 participants (Mage = 51.19, SD = 14.03). Overall, psychometric evaluation of the two instruments yielded good convergent and discriminant validity and mediocre to good internal consistency. The original 30-item version of the SESII-W/M did not show a sufficient model fit. For a 24-item version of the SESII-W/M partial strong measurement invariance across gender, and strong measurement invariance across relationship status, age, and educational levels were established. The original structure (14 items, 3 factors) of the SIS/SES-SF was not replicated. However, a 4-factor model including 13 items showed a good model fit and strong measurement invariance across the before-mentioned participant groups. For both questionnaires, partial strong measurement invariance with the original American versions of the scales was found. As some factors showed unsatisfactory internal consistency and the factor structure of the original scales could not be replicated, scores on several SE- and SI-factors should be interpreted with caution. However, most analyses indicated sufficient psychometric quality of the German SESII-W/M and SIS/SES-SF and their use can be recommended in German-speaking samples. More research with diverse samples (i.e., different sexual orientations, individuals with sexual difficulties) is needed to ensure the replicability of the factor solutions presented in this study. PMID:29529045

  15. Biocompatible, smooth, plasma-treated nickel-titanium surface--an adequate platform for cell growth.

    PubMed

    Chrzanowski, W; Szade, J; Hart, A D; Knowles, J C; Dalby, M J

    2012-02-01

    High nickel content is believed to reduce the number of biomedical applications of nickel-titanium alloy due to the reported toxicity of nickel. Reducing nickel release and minimizing the exposure of cells to nickel can optimize the biocompatibility of the alloy and increase its use in applications where its shape memory effects and pseudoelasticity are particularly useful, e.g., spinal implants. Many treatments have been tried to improve the biocompatibility of Ni-Ti, and results suggest that a native, smooth surface could provide sufficient biological tolerance. We hypothesized that the native surface of nickel-titanium supports cell differentiation and ensures good biocompatibility. Three types of surface modifications were investigated: thermal oxidation, alkali treatment, and plasma sputtering, and compared with a smooth, ground surface. Thermal oxidation caused a drop in surface nickel content, while negligible chemistry changes were observed for plasma-modified samples when compared with control ground samples. In contrast, alkali treatment caused a significant increase in surface nickel concentration and accelerated nickel release. Nickel release was also accelerated in samples thermally oxidized at 600 °C, while in other samples it remained at a low level. Both thermal oxidation and alkali treatment increased the roughness of the surface, but mean roughness Ra was significantly greater for the alkali-treated samples. Ground and plasma-modified samples had 'smooth' surfaces with Ra = 4 nm. Deformability tests showed that the adhesion of the surface layers on samples oxidized at 600 °C and on alkali-treated samples was not sufficient; the layer delaminated upon deformation. It was observed that the cell cytoskeletons on the samples with high nickel content or release were less developed, suggesting some negative effects of nickel on cell growth. These effects were observed primarily during initial cell contact with the surface. The most favorable cell responses were observed for ground and plasma-sputtered surfaces. These studies indicated that smooth, plasma-modified surfaces provide sufficient properties for cells to grow. © The Author(s), 2011.

  16. DNA methylation profiling of genomic DNA isolated from urine in diabetic chronic kidney disease: A pilot study

    PubMed Central

    Sexton-Oates, Alexandra; Carmody, Jake; Ekinci, Elif I.; Dwyer, Karen M.; Saffery, Richard

    2018-01-01

    Aim To characterise the genomic DNA (gDNA) yield from urine and the quality of derived methylation data generated from the widely used Illumina Infinium MethylationEPIC (HM850K) platform, and to compare this with buffy coat samples. Background DNA methylation is the most widely studied epigenetic mark, and variations in DNA methylation profile have been implicated in diabetes, which affects approximately 415 million people worldwide. Methods The QIAamp Viral RNA Mini Kit and QIAamp DNA Micro Kit were used to extract DNA from frozen and fresh urine samples as well as from increasing volumes of fresh urine. Buffy coats matched to the frozen urine were also obtained, and DNA was extracted from the buffy coats using the QIAamp DNA Mini Kit. Genomic DNA at concentrations greater than 20 μg/mL was used for methylation analysis on the HM850K array. Results Irrespective of extraction technique or the use of fresh versus frozen urine samples, limited genomic DNA was obtained using a starting sample volume of 5 mL (0–0.86 μg/mL). In order to optimize the yield, we increased starting volumes to 50 mL of fresh urine, which yielded only 0–9.66 μg/mL. A different kit, the QIAamp DNA Micro Kit, was trialled in six fresh urine samples and ten frozen urine samples, with inadequate DNA yields of 0–17.7 μg/mL and 0–1.6 μg/mL, respectively. Sufficient genomic DNA was obtained from only 4 of the initial 41 frozen urine samples (10%) for DNA methylation profiling. In comparison, all four buffy coat samples (100%) provided sufficient genomic DNA. Conclusion High-quality data can be obtained provided a sufficient yield of genomic DNA is isolated. Despite optimizing various extraction methodologies, the modest amount of genomic DNA derived from urine may limit the generalisability of this approach for the identification of DNA methylation biomarkers of chronic diabetic kidney disease. PMID:29462136

  17. DNA methylation profiling of genomic DNA isolated from urine in diabetic chronic kidney disease: A pilot study.

    PubMed

    Lecamwasam, Ashani; Sexton-Oates, Alexandra; Carmody, Jake; Ekinci, Elif I; Dwyer, Karen M; Saffery, Richard

    2018-01-01

    To characterise the genomic DNA (gDNA) yield from urine and the quality of derived methylation data generated from the widely used Illumina Infinium MethylationEPIC (HM850K) platform, and to compare this with buffy coat samples. DNA methylation is the most widely studied epigenetic mark, and variations in DNA methylation profile have been implicated in diabetes, which affects approximately 415 million people worldwide. The QIAamp Viral RNA Mini Kit and QIAamp DNA Micro Kit were used to extract DNA from frozen and fresh urine samples as well as from increasing volumes of fresh urine. Buffy coats matched to the frozen urine were also obtained, and DNA was extracted from the buffy coats using the QIAamp DNA Mini Kit. Genomic DNA at concentrations greater than 20 μg/mL was used for methylation analysis on the HM850K array. Irrespective of extraction technique or the use of fresh versus frozen urine samples, limited genomic DNA was obtained using a starting sample volume of 5 mL (0-0.86 μg/mL). In order to optimize the yield, we increased starting volumes to 50 mL of fresh urine, which yielded only 0-9.66 μg/mL. A different kit, the QIAamp DNA Micro Kit, was trialled in six fresh urine samples and ten frozen urine samples, with inadequate DNA yields of 0-17.7 μg/mL and 0-1.6 μg/mL, respectively. Sufficient genomic DNA was obtained from only 4 of the initial 41 frozen urine samples (10%) for DNA methylation profiling. In comparison, all four buffy coat samples (100%) provided sufficient genomic DNA. High-quality data can be obtained provided a sufficient yield of genomic DNA is isolated. Despite optimizing various extraction methodologies, the modest amount of genomic DNA derived from urine may limit the generalisability of this approach for the identification of DNA methylation biomarkers of chronic diabetic kidney disease.

  18. Management Plans Technical Appendix - Phase 1 (Central Puget Sound). Volume 4

    DTIC Science & Technology

    1988-06-01

    measure without substantially more samples and analysis or significantly reducing the desired confidence level. Consequently, the study participants... disposal occurs in accordance with permit conditions. Compliance measures... (9) An application and a lease fee will be charged at a rate sufficient to... site are sufficient to characterize the material. The bioassays are a cost-effective measure of the biological effects of concern within the disposal

  19. How Important Are 'Entry Effects' in Financial Incentive Programs for Welfare Recipients? Experimental Evidence from the Self-Sufficiency Project. SRDC Working Papers.

    ERIC Educational Resources Information Center

    Card, David; Robins, Philip K.; Lin, Winston

    The Self-Sufficiency Project (SSP) entry effect experiment was designed to measure the effect of the future availability of an earnings supplement on the behavior of newly enrolled income assistance (IA) recipients. It used a classical randomized design. From a sample of 3,315 single parents who recently started a new period of IA, one-half were…

  20. Properties of Soil Pore Space Regulate Pathways of Plant Residue Decomposition and Community Structure of Associated Bacteria

    PubMed Central

    Negassa, Wakene C.; Guber, Andrey K.; Kravchenko, Alexandra N.; Marsh, Terence L.; Hildebrandt, Britton; Rivers, Mark L.

    2015-01-01

    Physical protection of soil carbon (C) is one of the important components of C storage. However, its exact mechanisms are still not sufficiently lucid. The goal of this study was to explore the influence of soil structure, that is, soil pore spatial arrangements, with and without presence of plant residue on (i) decomposition of added plant residue, (ii) CO2 emission from soil, and (iii) structure of soil bacterial communities. The study consisted of several soil incubation experiments with samples of contrasting pore characteristics with/without plant residue, accompanied by X-ray micro-tomographic analyses of soil pores and by microbial community analysis of amplified 16S–18S rRNA genes via pyrosequencing. We observed that in the samples with substantial presence of air-filled well-connected large (>30 µm) pores, 75–80% of the added plant residue was decomposed, cumulative CO2 emission constituted 1,200 µg C g⁻¹ soil, and movement of C from decomposing plant residue into adjacent soil was insignificant. In the samples with greater abundance of water-filled small pores, 60% of the added plant residue was decomposed, cumulative CO2 emission constituted 2,000 µg C g⁻¹ soil, and the movement of residue C into adjacent soil was substantial. In the absence of plant residue the influence of pore characteristics on CO2 emission, that is, on decomposition of the native soil organic C, was negligible. The microbial communities on the plant residue in the samples with large pores had more microbial groups known to be cellulose decomposers, that is, Bacteroidetes, Proteobacteria, Actinobacteria, and Firmicutes, while a number of oligotrophic Acidobacteria groups were more abundant on the plant residue from the samples with small pores. This study provides the first experimental evidence that characteristics of soil pores and their air/water flow status determine the phylogenetic composition of the local microbial community and directions and magnitudes of soil C decomposition processes. PMID:25909444

  1. Properties of soil pore space regulate pathways of plant residue decomposition and community structure of associated bacteria

    DOE PAGES

    Negassa, Wakene C.; Guber, Andrey K.; Kravchenko, Alexandra N.; ...

    2015-07-01

    Physical protection of soil carbon (C) is one of the important components of C storage. However, its exact mechanisms are still not sufficiently lucid. The goal of this study was to explore the influence of soil structure, that is, soil pore spatial arrangements, with and without presence of plant residue on (i) decomposition of added plant residue, (ii) CO₂ emission from soil, and (iii) structure of soil bacterial communities. The study consisted of several soil incubation experiments with samples of contrasting pore characteristics with/without plant residue, accompanied by X-ray micro-tomographic analyses of soil pores and by microbial community analysis of amplified 16S–18S rRNA genes via pyrosequencing. We observed that in the samples with substantial presence of air-filled well-connected large (>30 µm) pores, 75–80% of the added plant residue was decomposed, cumulative CO₂ emission constituted 1,200 µg C g⁻¹ soil, and movement of C from decomposing plant residue into adjacent soil was insignificant. In the samples with greater abundance of water-filled small pores, 60% of the added plant residue was decomposed, cumulative CO₂ emission constituted 2,000 µg C g⁻¹ soil, and the movement of residue C into adjacent soil was substantial. In the absence of plant residue the influence of pore characteristics on CO₂ emission, that is, on decomposition of the native soil organic C, was negligible. The microbial communities on the plant residue in the samples with large pores had more microbial groups known to be cellulose decomposers, that is, Bacteroidetes, Proteobacteria, Actinobacteria, and Firmicutes, while a number of oligotrophic Acidobacteria groups were more abundant on the plant residue from the samples with small pores. This study provides the first experimental evidence that characteristics of soil pores and their air/water flow status determine the phylogenetic composition of the local microbial community and directions and magnitudes of soil C decomposition processes.

  2. Properties of soil pore space regulate pathways of plant residue decomposition and community structure of associated bacteria.

    PubMed

    Negassa, Wakene C; Guber, Andrey K; Kravchenko, Alexandra N; Marsh, Terence L; Hildebrandt, Britton; Rivers, Mark L

    2015-01-01

    Physical protection of soil carbon (C) is one of the important components of C storage. However, its exact mechanisms are still not sufficiently lucid. The goal of this study was to explore the influence of soil structure, that is, soil pore spatial arrangements, with and without presence of plant residue on (i) decomposition of added plant residue, (ii) CO2 emission from soil, and (iii) structure of soil bacterial communities. The study consisted of several soil incubation experiments with samples of contrasting pore characteristics with/without plant residue, accompanied by X-ray micro-tomographic analyses of soil pores and by microbial community analysis of amplified 16S-18S rRNA genes via pyrosequencing. We observed that in the samples with substantial presence of air-filled well-connected large (>30 µm) pores, 75-80% of the added plant residue was decomposed, cumulative CO2 emission constituted 1,200 µg C g⁻¹ soil, and movement of C from decomposing plant residue into adjacent soil was insignificant. In the samples with greater abundance of water-filled small pores, 60% of the added plant residue was decomposed, cumulative CO2 emission constituted 2,000 µg C g⁻¹ soil, and the movement of residue C into adjacent soil was substantial. In the absence of plant residue the influence of pore characteristics on CO2 emission, that is, on decomposition of the native soil organic C, was negligible. The microbial communities on the plant residue in the samples with large pores had more microbial groups known to be cellulose decomposers, that is, Bacteroidetes, Proteobacteria, Actinobacteria, and Firmicutes, while a number of oligotrophic Acidobacteria groups were more abundant on the plant residue from the samples with small pores. This study provides the first experimental evidence that characteristics of soil pores and their air/water flow status determine the phylogenetic composition of the local microbial community and directions and magnitudes of soil C decomposition processes.

  3. Properties of soil pore space regulate pathways of plant residue decomposition and community structure of associated bacteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Negassa, Wakene C.; Guber, Andrey K.; Kravchenko, Alexandra N.

    Physical protection of soil carbon (C) is one of the important components of C storage. However, its exact mechanisms are still not sufficiently lucid. The goal of this study was to explore the influence of soil structure, that is, soil pore spatial arrangements, with and without presence of plant residue on (i) decomposition of added plant residue, (ii) CO₂ emission from soil, and (iii) structure of soil bacterial communities. The study consisted of several soil incubation experiments with samples of contrasting pore characteristics with/without plant residue, accompanied by X-ray micro-tomographic analyses of soil pores and by microbial community analysis of amplified 16S–18S rRNA genes via pyrosequencing. We observed that in the samples with substantial presence of air-filled well-connected large (>30 µm) pores, 75–80% of the added plant residue was decomposed, cumulative CO₂ emission constituted 1,200 µg C g⁻¹ soil, and movement of C from decomposing plant residue into adjacent soil was insignificant. In the samples with greater abundance of water-filled small pores, 60% of the added plant residue was decomposed, cumulative CO₂ emission constituted 2,000 µg C g⁻¹ soil, and the movement of residue C into adjacent soil was substantial. In the absence of plant residue the influence of pore characteristics on CO₂ emission, that is, on decomposition of the native soil organic C, was negligible. The microbial communities on the plant residue in the samples with large pores had more microbial groups known to be cellulose decomposers, that is, Bacteroidetes, Proteobacteria, Actinobacteria, and Firmicutes, while a number of oligotrophic Acidobacteria groups were more abundant on the plant residue from the samples with small pores. This study provides the first experimental evidence that characteristics of soil pores and their air/water flow status determine the phylogenetic composition of the local microbial community and directions and magnitudes of soil C decomposition processes.

  4. Measuring Submicron-Sized Fractionated Particulate Matter on Aluminum Impactor Disks

    PubMed Central

    Buchholz, Bruce A.; Zermeño, Paula; Hwang, Hyun-Min; Young, Thomas M.; Guilderson, Thomas P.

    2011-01-01

    Submicron-sized airborne particulate matter (PM) is not collected well on regular quartz or glass fiber filter papers. We used a micro-orifice uniform deposit impactor (MOUDI) to fractionate PM into six size fractions and deposit it on specially designed high-purity thin aluminum disks. The MOUDI separated PM into fractions 56–100 nm, 100–180 nm, 180–320 nm, 320–560 nm, 560–1000 nm, and 1000–1800 nm. Since the MOUDI has a low flow rate (30 L/min), it takes several days to collect sufficient carbon on 47 mm foil disks. The small carbon mass (20–200 µg C) and large aluminum substrate (~25 mg Al) present several challenges to production of graphite targets for accelerator mass spectrometry (AMS) analysis. The Al foil consumes large amounts of oxygen as it is heated and tends to melt into quartz combustion tubes, causing gas leaks. We describe sample processing techniques to reliably produce graphitic targets for 14C-AMS analysis of PM deposited on Al impact foils. PMID:22228915

  5. The use of Argo for validation and tuning of mixed layer models

    NASA Astrophysics Data System (ADS)

    Acreman, D. M.; Jeffery, C. D.

    We present results from validation and tuning of 1-D ocean mixed layer models using data from Argo floats and data from Ocean Weather Station Papa (145°W, 50°N). Model tests at Ocean Weather Station Papa showed that a bulk model could perform well provided it was tuned correctly. The Large et al. [Large, W.G., McWilliams, J.C., Doney, S.C., 1994. Oceanic vertical mixing: a review and a model with a nonlocal boundary layer parameterisation. Rev. Geophys. 32 (November), 363-403] K-profile parameterisation (KPP) model also gave a good representation of mixed layer depth provided the vertical resolution was sufficiently high. Model tests using data from a single Argo float indicated a tendency for the KPP model to deepen insufficiently over an annual cycle, whereas the tuned bulk model and general ocean turbulence model (GOTM) gave a better representation of mixed layer depth. The bulk model was then tuned using data from a sample of Argo floats and a set of optimum parameters was found; these optimum parameters were consistent with the tuning at OWS Papa.

  6. Interpersonal violence against children in sport in the Netherlands and Belgium.

    PubMed

    Vertommen, Tine; Schipper-van Veldhoven, Nicolette; Wouters, Kristien; Kampen, Jarl K; Brackenridge, Celia H; Rhind, Daniel J A; Neels, Karel; Van Den Eede, Filip

    2016-01-01

    The current article reports on the first large-scale prevalence study on interpersonal violence against children in sport in the Netherlands and Belgium. Using a dedicated online questionnaire, over 4,000 adults prescreened on having participated in organized sport before the age of 18 were surveyed with respect to their experiences with childhood psychological, physical, and sexual violence while playing sports. Being the first of its kind in the Netherlands and Belgium, our study has a sufficiently large sample taken from the general population, with a balanced gender ratio and wide variety in socio-demographic characteristics. The survey showed that 38% of all respondents reported experiences with psychological violence, 11% with physical violence, and 14% with sexual violence. Ethnic minority, lesbian/gay/bisexual (LGB) and disabled athletes, and those competing at the international level report significantly more experiences of interpersonal violence in sport. The results are consistent with rates obtained outside sport, underscoring the need for more research on interventions and systematic follow-ups, to minimize these negative experiences in youth sport. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Reconstructing three-dimensional protein crystal intensities from sparse unoriented two-axis X-ray diffraction patterns

    PubMed Central

    Lan, Ti-Yen; Wierman, Jennifer L.; Tate, Mark W.; Philipp, Hugh T.; Elser, Veit

    2017-01-01

    Recently, there has been a growing interest in adapting serial microcrystallography (SMX) experiments to existing storage ring (SR) sources. For very small crystals, however, radiation damage occurs before sufficient numbers of photons are diffracted to determine the orientation of the crystal. The challenge is to merge data from a large number of such ‘sparse’ frames in order to measure the full reciprocal space intensity. To simulate sparse frames, a dataset was collected from a large lysozyme crystal illuminated by a dim X-ray source. The crystal was continuously rotated about two orthogonal axes to sample a subset of the rotation space. With the EMC algorithm [expand–maximize–compress; Loh & Elser (2009). Phys. Rev. E, 80, 026705], it is shown that the diffracted intensity of the crystal can still be reconstructed even without knowledge of the orientation of the crystal in any sparse frame. Moreover, parallel computation implementations were designed to considerably improve the time and memory scaling of the algorithm. The results show that EMC-based SMX experiments should be feasible at SR sources. PMID:28808431
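
    To make the merging step described above more concrete, the toy sketch below applies the same expand-maximize-compress (EM) logic to a one-dimensional analogue: each sparse Poisson frame is observed at an unknown cyclic shift (standing in for an unknown crystal orientation), and the signal is recovered without ever knowing the shifts. This is our own illustration of the idea, not the EMC code of Loh & Elser; the signal, shift model, and parameters are invented for the example.

    ```python
    import numpy as np

    # Toy analogue of expand-maximize-compress: recover a 1-D intensity profile
    # from sparse Poisson frames whose cyclic shift ("orientation") is unknown.
    rng = np.random.default_rng(1)
    true = np.array([0.1, 0.1, 0.2, 1.0, 3.0, 1.0, 0.2, 0.1, 0.1, 0.1])
    n, K = len(true), 2000
    shifts = rng.integers(0, n, size=K)
    frames = rng.poisson(np.stack([np.roll(true, s) for s in shifts]))   # sparse photon counts

    model = frames.mean() * (1 + 0.1 * rng.random(n))   # noisy start breaks the shift symmetry
    for _ in range(100):
        views = np.stack([np.roll(model, s) for s in range(n)])          # expand: all shifts
        loglik = frames @ np.log(views).T - views.sum(axis=1)            # Poisson log-likelihood (K x n)
        P = np.exp(loglik - loglik.max(axis=1, keepdims=True))           # maximize: shift posteriors
        P /= P.sum(axis=1, keepdims=True)
        num = P.T @ frames                                               # per-shift weighted frame sums
        model = np.stack([np.roll(num[s], -s) for s in range(n)]).sum(0) / K   # compress: merge back
        model = np.maximum(model, 1e-12)                                 # guard the logarithm

    print(np.round(model, 2))   # should approximate `true` up to an overall cyclic shift
    ```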

  8. Rare and low-frequency coding variants alter human adult height

    PubMed Central

    Marouli, Eirini; Graff, Mariaelisa; Medina-Gomez, Carolina; Lo, Ken Sin; Wood, Andrew R; Kjaer, Troels R; Fine, Rebecca S; Lu, Yingchang; Schurmann, Claudia; Highland, Heather M; Rüeger, Sina; Thorleifsson, Gudmar; Justice, Anne E; Lamparter, David; Stirrups, Kathleen E; Turcot, Valérie; Young, Kristin L; Winkler, Thomas W; Esko, Tõnu; Karaderi, Tugce; Locke, Adam E; Masca, Nicholas GD; Ng, Maggie CY; Mudgal, Poorva; Rivas, Manuel A; Vedantam, Sailaja; Mahajan, Anubha; Guo, Xiuqing; Abecasis, Goncalo; Aben, Katja K; Adair, Linda S; Alam, Dewan S; Albrecht, Eva; Allin, Kristine H; Allison, Matthew; Amouyel, Philippe; Appel, Emil V; Arveiler, Dominique; Asselbergs, Folkert W; Auer, Paul L; Balkau, Beverley; Banas, Bernhard; Bang, Lia E; Benn, Marianne; Bergmann, Sven; Bielak, Lawrence F; Blüher, Matthias; Boeing, Heiner; Boerwinkle, Eric; Böger, Carsten A; Bonnycastle, Lori L; Bork-Jensen, Jette; Bots, Michiel L; Bottinger, Erwin P; Bowden, Donald W; Brandslund, Ivan; Breen, Gerome; Brilliant, Murray H; Broer, Linda; Burt, Amber A; Butterworth, Adam S; Carey, David J; Caulfield, Mark J; Chambers, John C; Chasman, Daniel I; Chen, Yii-Der Ida; Chowdhury, Rajiv; Christensen, Cramer; Chu, Audrey Y; Cocca, Massimiliano; Collins, Francis S; Cook, James P; Corley, Janie; Galbany, Jordi Corominas; Cox, Amanda J; Cuellar-Partida, Gabriel; Danesh, John; Davies, Gail; de Bakker, Paul IW; de Borst, Gert J.; de Denus, Simon; de Groot, Mark CH; de Mutsert, Renée; Deary, Ian J; Dedoussis, George; Demerath, Ellen W; den Hollander, Anneke I; Dennis, Joe G; Di Angelantonio, Emanuele; Drenos, Fotios; Du, Mengmeng; Dunning, Alison M; Easton, Douglas F; Ebeling, Tapani; Edwards, Todd L; Ellinor, Patrick T; Elliott, Paul; Evangelou, Evangelos; Farmaki, Aliki-Eleni; Faul, Jessica D; Feitosa, Mary F; Feng, Shuang; Ferrannini, Ele; Ferrario, Marco M; Ferrieres, Jean; Florez, Jose C; Ford, Ian; Fornage, Myriam; Franks, Paul W; Frikke-Schmidt, Ruth; Galesloot, Tessel E; Gan, Wei; Gandin, Ilaria; Gasparini, Paolo; Giedraitis, Vilmantas; Giri, Ayush; Girotto, Giorgia; Gordon, Scott D; Gordon-Larsen, Penny; Gorski, Mathias; Grarup, Niels; Grove, Megan L.; Gudnason, Vilmundur; Gustafsson, Stefan; Hansen, Torben; Harris, Kathleen Mullan; Harris, Tamara B; Hattersley, Andrew T; Hayward, Caroline; He, Liang; Heid, Iris M; Heikkilä, Kauko; Helgeland, Øyvind; Hernesniemi, Jussi; Hewitt, Alex W; Hocking, Lynne J; Hollensted, Mette; Holmen, Oddgeir L; Hovingh, G. Kees; Howson, Joanna MM; Hoyng, Carel B; Huang, Paul L; Hveem, Kristian; Ikram, M. 
Arfan; Ingelsson, Erik; Jackson, Anne U; Jansson, Jan-Håkan; Jarvik, Gail P; Jensen, Gorm B; Jhun, Min A; Jia, Yucheng; Jiang, Xuejuan; Johansson, Stefan; Jørgensen, Marit E; Jørgensen, Torben; Jousilahti, Pekka; Jukema, J Wouter; Kahali, Bratati; Kahn, René S; Kähönen, Mika; Kamstrup, Pia R; Kanoni, Stavroula; Kaprio, Jaakko; Karaleftheri, Maria; Kardia, Sharon LR; Karpe, Fredrik; Kee, Frank; Keeman, Renske; Kiemeney, Lambertus A; Kitajima, Hidetoshi; Kluivers, Kirsten B; Kocher, Thomas; Komulainen, Pirjo; Kontto, Jukka; Kooner, Jaspal S; Kooperberg, Charles; Kovacs, Peter; Kriebel, Jennifer; Kuivaniemi, Helena; Küry, Sébastien; Kuusisto, Johanna; La Bianca, Martina; Laakso, Markku; Lakka, Timo A; Lange, Ethan M; Lange, Leslie A; Langefeld, Carl D; Langenberg, Claudia; Larson, Eric B; Lee, I-Te; Lehtimäki, Terho; Lewis, Cora E; Li, Huaixing; Li, Jin; Li-Gao, Ruifang; Lin, Honghuang; Lin, Li-An; Lin, Xu; Lind, Lars; Lindström, Jaana; Linneberg, Allan; Liu, Yeheng; Liu, Yongmei; Lophatananon, Artitaya; Luan, Jian'an; Lubitz, Steven A; Lyytikäinen, Leo-Pekka; Mackey, David A; Madden, Pamela AF; Manning, Alisa K; Männistö, Satu; Marenne, Gaëlle; Marten, Jonathan; Martin, Nicholas G; Mazul, Angela L; Meidtner, Karina; Metspalu, Andres; Mitchell, Paul; Mohlke, Karen L; Mook-Kanamori, Dennis O; Morgan, Anna; Morris, Andrew D; Morris, Andrew P; Müller-Nurasyid, Martina; Munroe, Patricia B; Nalls, Mike A; Nauck, Matthias; Nelson, Christopher P; Neville, Matt; Nielsen, Sune F; Nikus, Kjell; Njølstad, Pål R; Nordestgaard, Børge G; Ntalla, Ioanna; O'Connel, Jeffrey R; Oksa, Heikki; Loohuis, Loes M Olde; Ophoff, Roel A; Owen, Katharine R; Packard, Chris J; Padmanabhan, Sandosh; Palmer, Colin NA; Pasterkamp, Gerard; Patel, Aniruddh P; Pattie, Alison; Pedersen, Oluf; Peissig, Peggy L; Peloso, Gina M; Pennell, Craig E; Perola, Markus; Perry, James A; Perry, John R.B.; Person, Thomas N; Pirie, Ailith; Polasek, Ozren; Posthuma, Danielle; Raitakari, Olli T; Rasheed, Asif; Rauramaa, Rainer; Reilly, Dermot F; Reiner, Alex P; Renström, Frida; Ridker, Paul M; Rioux, John D; Robertson, Neil; Robino, Antonietta; Rolandsson, Olov; Rudan, Igor; Ruth, Katherine S; Saleheen, Danish; Salomaa, Veikko; Samani, Nilesh J; Sandow, Kevin; Sapkota, Yadav; Sattar, Naveed; Schmidt, Marjanka K; Schreiner, Pamela J; Schulze, Matthias B; Scott, Robert A; Segura-Lepe, Marcelo P; Shah, Svati; Sim, Xueling; Sivapalaratnam, Suthesh; Small, Kerrin S; Smith, Albert Vernon; Smith, Jennifer A; Southam, Lorraine; Spector, Timothy D; Speliotes, Elizabeth K; Starr, John M; Steinthorsdottir, Valgerdur; Stringham, Heather M; Stumvoll, Michael; Surendran, Praveen; Hart, Leen M ‘t; Tansey, Katherine E; Tardif, Jean-Claude; Taylor, Kent D; Teumer, Alexander; Thompson, Deborah J; Thorsteinsdottir, Unnur; Thuesen, Betina H; Tönjes, Anke; Tromp, Gerard; Trompet, Stella; Tsafantakis, Emmanouil; Tuomilehto, Jaakko; Tybjaerg-Hansen, Anne; Tyrer, Jonathan P; Uher, Rudolf; Uitterlinden, André G; Ulivi, Sheila; van der Laan, Sander W; Van Der Leij, Andries R; van Duijn, Cornelia M; van Schoor, Natasja M; van Setten, Jessica; Varbo, Anette; Varga, Tibor V; Varma, Rohit; Edwards, Digna R Velez; Vermeulen, Sita H; Vestergaard, Henrik; Vitart, Veronique; Vogt, Thomas F; Vozzi, Diego; Walker, Mark; Wang, Feijie; Wang, Carol A; Wang, Shuai; Wang, Yiqin; Wareham, Nicholas J; Warren, Helen R; Wessel, Jennifer; Willems, Sara M; Wilson, James G; Witte, Daniel R; Woods, Michael O; Wu, Ying; Yaghootkar, Hanieh; Yao, Jie; Yao, Pang; Yerges-Armstrong, Laura M; Young, 
Robin; Zeggini, Eleftheria; Zhan, Xiaowei; Zhang, Weihua; Zhao, Jing Hua; Zhao, Wei; Zhao, Wei; Zheng, He; Zhou, Wei; Rotter, Jerome I; Boehnke, Michael; Kathiresan, Sekar; McCarthy, Mark I; Willer, Cristen J; Stefansson, Kari; Borecki, Ingrid B; Liu, Dajiang J; North, Kari E; Heard-Costa, Nancy L; Pers, Tune H; Lindgren, Cecilia M; Oxvig, Claus; Kutalik, Zoltán; Rivadeneira, Fernando; Loos, Ruth JF; Frayling, Timothy M; Hirschhorn, Joel N; Deloukas, Panos; Lettre, Guillaume

    2016-01-01

    Height is a highly heritable, classic polygenic trait with ∼700 common associated variants identified so far through genome-wide association studies. Here, we report 83 height-associated coding variants with lower minor allele frequencies (range of 0.1-4.8%) and effects of up to 2 cm/allele (e.g. in IHH, STC2, AR and CRISPLD2), >10 times the average effect of common variants. In functional follow-up studies, rare height-increasing alleles of STC2 (+1-2 cm/allele) compromised proteolytic inhibition of PAPP-A and increased cleavage of IGFBP-4 in vitro, resulting in higher bioavailability of insulin-like growth factors. These 83 height-associated variants overlap genes mutated in monogenic growth disorders and highlight new biological candidates (e.g. ADAMTS3, IL11RA, NOX4) and pathways (e.g. proteoglycan/glycosaminoglycan synthesis) involved in growth. Our results demonstrate that sufficiently large sample sizes can uncover rare and low-frequency variants of moderate to large effect associated with polygenic human phenotypes, and that these variants implicate relevant genes and pathways. PMID:28146470

  9. Verification of the anatomy and newly discovered histology of the G-spot complex.

    PubMed

    Ostrzenski, A; Krajewski, P; Ganjei-Azar, P; Wasiutynski, A J; Scheinberg, M N; Tarka, S; Fudalej, M

    2014-10-01

    To expand the anatomical investigations of the G-spot and to assess the G-spot's characteristic histological and immunohistochemical features. An observational study. International multicentre. Eight consecutive fresh human female cadavers. Anterior vaginal wall dissections were executed and G-spot microdissections were performed. All specimens were stained with haematoxylin and eosin (H&E). The tissues of two women were selected at random for immunohistochemical staining. The primary outcome measure was to document the anatomy of the G-spot. The secondary outcome measures were to identify the histology of the G-spot and to determine whether histological samples stained with H&E are sufficient to identify the G-spot. The anatomical existence of the G-spot was identified in all women and was in a diagonal plane. In seven (87.5%) and one (12.5%) of the women the G-spot complex was found on the left or right side, respectively. The G-spot was intimately fused with vessels, creating a complex. A large tangled vein-like vascular structure resembled an arteriovenous malformation and there were a few smaller feeding arteries. A band-like structure protruded from the tail of the G-spot. The size of the G-spot varied. Histologically, the G-spot was determined as a neurovascular complex structure. The neural component contained abundant peripheral nerve bundles and a nerve ganglion. The vascular component comprised large vein-like vessels and smaller feeding arteries. Circular and longitudinal muscles covered the G-complex. The anatomy of the G-spot complex was confirmed. The histology of the G-spot presents as neurovascular tissues with a nerve ganglion. H&E staining is sufficient for the identification of the G-spot complex. © 2014 Royal College of Obstetricians and Gynaecologists.

  10. Teaching Students about Plagiarism: An Internet Solution to an Internet Problem

    ERIC Educational Resources Information Center

    Snow, Eleanour

    2006-01-01

    The Internet has changed the ways that students think, learn, and write. Students have large amounts of information, largely anonymous and without clear copyright information, literally at their fingertips. Without sufficient guidance, the inappropriate use of this information seems inevitable. Plagiarism among college students is rising, due to…

  11. A Mach-Zehnder digital holographic microscope with sub-micrometer resolution for imaging and tracking of marine micro-organisms

    NASA Astrophysics Data System (ADS)

    Kühn, Jonas; Niraula, Bimochan; Liewer, Kurt; Kent Wallace, J.; Serabyn, Eugene; Graff, Emilio; Lindensmith, Christian; Nadeau, Jay L.

    2014-12-01

    Digital holographic microscopy is an ideal tool for investigation of microbial motility. However, most designs do not exhibit sufficient spatial resolution for imaging bacteria. In this study we present an off-axis Mach-Zehnder design of a holographic microscope with spatial resolution of better than 800 nm and the ability to resolve bacterial samples at varying densities over a 380 μm × 380 μm × 600 μm three-dimensional field of view. Larger organisms, such as protozoa, can be resolved in detail, including cilia and flagella. The instrument design and performance are presented, including images and tracks of bacterial and protozoal mixed samples and pure cultures of six selected species. Organisms as small as 1 μm (bacterial spores) and as large as 60 μm (Paramecium bursaria) may be resolved and tracked without changes in the instrument configuration. Finally, we present a dilution series investigating the maximum cell density that can be imaged, a type of analysis that has not been presented in previous holographic microscopy studies.

  12. Detection of internal structure by scattered light intensity: Application to kidney cell sorting

    NASA Technical Reports Server (NTRS)

    Goolsby, C. L.; Kunze, M. E.

    1985-01-01

    Scattered light measurements in flow cytometry were successfully used to distinguish cells on the basis of differing morphology and internal structure. Differences in scattered light patterns due to changes in internal structure would be expected to occur at large scattering angles. Practically, the results of these calculations suggest that in experimental situations an array of detectors would be useful. Although in general the detection of the scattered light intensity at several intervals within the 10° to 60° region would be sufficient, there are many examples where increased sensitivity could be achieved at other angles. The ability to measure at many different angular intervals would allow the experimenter to empirically select the optimum intervals for the varying conditions of cell size, N/C ratio, granule size and internal structure from sample to sample. The feasibility of making scattered light measurements at many different intervals in flow cytometry was demonstrated. The implementation of simplified versions of these techniques in conjunction with independent measurements of cell size could potentially improve the usefulness of flow cytometry in the study of the internal structure of cells.

  13. An open experimental database for exploring inorganic materials

    DOE PAGES

    Zakutayev, Andriy; Wunder, Nick; Schwarting, Marcus; ...

    2018-04-03

    The use of advanced machine learning algorithms in experimental materials science is limited by the lack of sufficiently large and diverse datasets amenable to data mining. If publicly open, such data resources would also enable materials research by scientists without access to expensive experimental equipment. Here, we report on our progress towards a publicly open High Throughput Experimental Materials (HTEM) Database (htem.nrel.gov). This database currently contains 140,000 sample entries, characterized by structural (100,000), synthetic (80,000), chemical (70,000), and optoelectronic (50,000) properties of inorganic thin film materials, grouped in >4,000 sample entries across >100 materials systems; more than half of these data are publicly available. This article shows how the HTEM database may enable scientists to explore materials through a web-based user interface and an application programming interface. This paper also describes an HTE approach to generating materials data, and discusses the laboratory information management system (LIMS) that underpins the HTEM database. Finally, this manuscript illustrates how advanced machine learning algorithms can be applied to materials science problems using this open data resource.

  14. Limit Theory for Panel Data Models with Cross Sectional Dependence and Sequential Exogeneity.

    PubMed

    Kuersteiner, Guido M; Prucha, Ingmar R

    2013-06-01

    The paper derives a general Central Limit Theorem (CLT) and asymptotic distributions for sample moments related to panel data models with large n. The results allow for the data to be cross-sectionally dependent, while at the same time allowing the regressors to be only sequentially rather than strictly exogenous. The setup is sufficiently general to accommodate situations where cross-sectional dependence stems from spatial interactions and/or from the presence of common factors. The latter leads to the need for random norming. The limit theorem for sample moments is derived by showing that the moment conditions can be recast such that a martingale difference array central limit theorem can be applied. We prove such a central limit theorem by first extending results for stable convergence in Hall and Heyde (1980) to non-nested martingale arrays relevant for our applications. We illustrate our result by establishing a generalized estimation theory for GMM estimators of a fixed effect panel model without imposing i.i.d. or strict exogeneity conditions. We also discuss a class of Maximum Likelihood (ML) estimators that can be analyzed using our CLT.
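
    As a reading aid, the recasting step mentioned in the abstract can be sketched schematically in our own notation (this illustrates the structure only and is not the paper's exact statement): with predetermined instruments z_it and disturbances ε_it, sequential exogeneity yields a conditional-mean-zero property, so the scaled sample moments form a martingale difference array,

    \[
      \mathbb{E}\!\left[ z_{it}\,\varepsilon_{it} \mid \mathcal{F}_{i,t-1} \right] = 0,
      \qquad
      \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \sum_{t=1}^{T} z_{it}\,\varepsilon_{it}
      \;\xrightarrow{\;d\;}\; N(0,\,V),
    \]

    where V may be stochastic under cross-sectional dependence, which is what motivates the random norming discussed above.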

  15. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support-vector sufficiency. Experimental results on a wide variety of real-world small and large instance size applications in the context of binary classification, multi-class problems and regression are then reported to show that RKELM can perform at a level of generalization performance competitive with the SVM/LS-SVM at only a fraction of the computational effort incurred. Copyright © 2015 Elsevier Ltd. All rights reserved.
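
    As an illustration of the reduced-kernel idea described above, the sketch below (a minimal reading of the abstract, not the authors' implementation; the kernel choice, function names, and parameters are ours) randomly selects a subset of training points as kernel mapping samples and solves a regularized least-squares problem for the output weights in closed form.

    ```python
    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        # Pairwise RBF kernel between rows of A and rows of B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def rkelm_fit(X, Y, n_support=50, gamma=1.0, C=1.0, seed=None):
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=min(n_support, len(X)), replace=False)
        S = X[idx]                           # randomly selected support (mapping) samples
        K = rbf_kernel(X, S, gamma)          # n x m reduced kernel matrix
        # Regularized least squares for the output weights: beta = (K'K + I/C)^-1 K'Y
        beta = np.linalg.solve(K.T @ K + np.eye(len(S)) / C, K.T @ Y)
        return S, beta

    def rkelm_predict(X, S, beta, gamma=1.0):
        return rbf_kernel(X, S, gamma) @ beta

    # Toy usage: regression on a noisy sine curve.
    X = np.linspace(0, 6, 300).reshape(-1, 1)
    Y = np.sin(X) + 0.1 * np.random.default_rng(0).normal(size=X.shape)
    S, beta = rkelm_fit(X, Y, n_support=30, gamma=2.0, C=10.0, seed=0)
    print(np.abs(rkelm_predict(X, S, beta, gamma=2.0) - np.sin(X)).mean())
    ```

    Because the subset is fixed up front, the fit reduces to a single linear solve, which is where the claimed savings over iterative SVM training come from.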

  16. An open experimental database for exploring inorganic materials.

    PubMed

    Zakutayev, Andriy; Wunder, Nick; Schwarting, Marcus; Perkins, John D; White, Robert; Munch, Kristin; Tumas, William; Phillips, Caleb

    2018-04-03

    The use of advanced machine learning algorithms in experimental materials science is limited by the lack of sufficiently large and diverse datasets amenable to data mining. If publicly open, such data resources would also enable materials research by scientists without access to expensive experimental equipment. Here, we report on our progress towards a publicly open High Throughput Experimental Materials (HTEM) Database (htem.nrel.gov). This database currently contains 140,000 sample entries, characterized by structural (100,000), synthetic (80,000), chemical (70,000), and optoelectronic (50,000) properties of inorganic thin film materials, grouped in >4,000 sample entries across >100 materials systems; more than half of these data are publicly available. This article shows how the HTEM database may enable scientists to explore materials through a web-based user interface and an application programming interface. This paper also describes an HTE approach to generating materials data, and discusses the laboratory information management system (LIMS) that underpins the HTEM database. Finally, this manuscript illustrates how advanced machine learning algorithms can be applied to materials science problems using this open data resource.

  17. An open experimental database for exploring inorganic materials

    PubMed Central

    Zakutayev, Andriy; Wunder, Nick; Schwarting, Marcus; Perkins, John D.; White, Robert; Munch, Kristin; Tumas, William; Phillips, Caleb

    2018-01-01

    The use of advanced machine learning algorithms in experimental materials science is limited by the lack of sufficiently large and diverse datasets amenable to data mining. If publicly open, such data resources would also enable materials research by scientists without access to expensive experimental equipment. Here, we report on our progress towards a publicly open High Throughput Experimental Materials (HTEM) Database (htem.nrel.gov). This database currently contains 140,000 sample entries, characterized by structural (100,000), synthetic (80,000), chemical (70,000), and optoelectronic (50,000) properties of inorganic thin film materials, grouped in >4,000 sample entries across >100 materials systems; more than half of these data are publicly available. This article shows how the HTEM database may enable scientists to explore materials through a web-based user interface and an application programming interface. This paper also describes an HTE approach to generating materials data, and discusses the laboratory information management system (LIMS) that underpins the HTEM database. Finally, this manuscript illustrates how advanced machine learning algorithms can be applied to materials science problems using this open data resource. PMID:29611842

  18. Monitoring Earth's Shortwave Reflectance: GEO Instrument Concept

    NASA Technical Reports Server (NTRS)

    Brageot, Emily; Mercury, Michael; Green, Robert; Mouroulis, Pantazis; Gerwe, David

    2015-01-01

    In this paper we present a GEO instrument concept dedicated to monitoring the Earth's global spectral reflectance with a high revisit rate. Based on our measurement goals, the ideal instrument needs to be highly sensitive (SNR greater than 100) and to achieve global coverage with spectral sampling (less than or equal to 10 nm) and spatial sampling (less than or equal to 1 km) over a large bandwidth (380-2510 nm), with a revisit time (greater than or equal to 3x/day) sufficient to fully measure the spectral-radiometric-spatial evolution of clouds and confounding factors during daytime. After a brief study of existing instruments and their capabilities, we choose to use a GEO constellation of up to 6 satellites as a platform for this instrument concept in order to achieve the revisit time requirement with a single launch. We derive the main parameters of the instrument and show that the above requirements can be fulfilled while retaining an instrument architecture as compact as possible by controlling the telescope aperture size and using a passively cooled detector.

  19. Analytic nuclear forces and molecular properties from full configuration interaction quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Robert E.; Overy, Catherine; Opalka, Daniel

    Unbiased stochastic sampling of the one- and two-body reduced density matrices is achieved in full configuration interaction quantum Monte Carlo with the introduction of a second, “replica” ensemble of walkers, whose population evolves in imaginary time independently from the first and which entails only modest additional computational overheads. The matrices obtained from this approach are shown to be representative of full configuration-interaction quality and hence provide a realistic opportunity to achieve high-quality results for a range of properties whose operators do not necessarily commute with the Hamiltonian. A density-matrix formulated quasi-variational energy estimator having already been proposed and investigated, the present work extends the scope of the theory to take in studies of analytic nuclear forces, molecular dipole moments, and polarisabilities, with extensive comparison to exact results where possible. These new results confirm the suitability of the sampling technique and, where sufficiently large basis sets are available, achieve close agreement with experimental values, expanding the scope of the method to new areas of investigation.

  20. An open experimental database for exploring inorganic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakutayev, Andriy; Wunder, Nick; Schwarting, Marcus

    The use of advanced machine learning algorithms in experimental materials science is limited by the lack of sufficiently large and diverse datasets amenable to data mining. If publicly open, such data resources would also enable materials research by scientists without access to expensive experimental equipment. Here, we report on our progress towards a publicly open High Throughput Experimental Materials (HTEM) Database (htem.nrel.gov). This database currently contains 140,000 sample entries, characterized by structural (100,000), synthetic (80,000), chemical (70,000), and optoelectronic (50,000) properties of inorganic thin film materials, grouped in >4,000 sample entries across >100 materials systems; more than half of these data are publicly available. This article shows how the HTEM database may enable scientists to explore materials through a web-based user interface and an application programming interface. This paper also describes an HTE approach to generating materials data, and discusses the laboratory information management system (LIMS) that underpins the HTEM database. Finally, this manuscript illustrates how advanced machine learning algorithms can be applied to materials science problems using this open data resource.

  1. Genetic assessment of connectivity in the common reef sponge, Callyspongia vaginalis (Demospongiae: Haplosclerida) reveals high population structure along the Florida reef tract

    NASA Astrophysics Data System (ADS)

    Debiasse, M. B.; Richards, V. P.; Shivji, M. S.

    2010-03-01

    The genetic population structure of the common branching vase sponge, Callyspongia vaginalis, was determined along the entire length (465 km) of the Florida reef system from Palm Beach to the Dry Tortugas based on sequences of the mitochondrial cytochrome c oxidase subunit 1 (COI) gene. Populations of C. vaginalis were highly structured (overall ΦST = 0.33), in some cases over distances as small as tens of kilometers. However, nonsignificant pairwise ΦST values were also found between a few relatively distant sampling sites suggesting that some long distance larval dispersal may occur via ocean currents or transport in sponge fragments along continuous, shallow coastlines. Indeed, sufficient gene flow appears to occur along the Florida reef tract to obscure a signal of isolation by distance, but not to homogenize COI haplotype frequencies. The strong genetic differentiation among most of the sampling locations suggests that recruitment in this species is largely local source-driven, pointing to the importance of further elucidating general connectivity patterns along the Florida reef tract to guide the spatial scale of management efforts.

  2. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

    We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality is first reduced via a high-dimensional (approximate) factor model implemented by principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis is employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions, as well as for the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of the target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
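
    A minimal sketch of the two-step construction described above, under our own simplifying assumptions (plain PCA for the factors and sliced inverse regression for the sufficient predictive indices); it illustrates the idea rather than reproducing the authors' estimator, and all names, sizes, and the toy data are invented.

    ```python
    import numpy as np

    def extract_factors(X, n_factors):
        # Step 1: approximate factor model via PCA on the centered predictors.
        Xc = X - X.mean(0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        return U[:, :n_factors] * np.sqrt(len(X))        # estimated factors (T x r)

    def sir_directions(F, y, n_slices=10, n_dirs=1):
        # Step 2: sufficient predictive indices via sliced inverse regression on the factors.
        Fc = F - F.mean(0)
        cov = np.cov(Fc, rowvar=False)
        W = np.linalg.cholesky(np.linalg.inv(cov))        # whitening transform
        Z = Fc @ W
        order = np.argsort(y)
        slices = np.array_split(order, n_slices)
        M = sum(len(s) / len(y) * np.outer(Z[s].mean(0), Z[s].mean(0)) for s in slices)
        vals, vecs = np.linalg.eigh(M)
        return W @ vecs[:, ::-1][:, :n_dirs]              # leading directions in factor space

    # Toy usage: the target depends nonlinearly on one index of 3 latent factors
    # that drive 100 observed predictors.
    rng = np.random.default_rng(0)
    T, p, r = 400, 100, 3
    F_true = rng.normal(size=(T, r))
    X = F_true @ rng.normal(size=(r, p)) + 0.5 * rng.normal(size=(T, p))
    y = np.sin(F_true @ np.array([1.0, -1.0, 0.5])) + 0.1 * rng.normal(size=T)
    F_hat = extract_factors(X, r)
    B = sir_directions(F_hat, y, n_slices=10, n_dirs=1)
    index = F_hat @ B                                     # sufficient predictive index
    ```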

  3. Cooling system for continuous metal casting machines

    DOEpatents

    Draper, Robert; Sumpman, Wayne C.; Baker, Robert J.; Williams, Robert S.

    1988-01-01

    A continuous metal caster cooling system is provided in which water is supplied in jets from a large number of small nozzles 19 against the inner surface of rim 13 at a temperature and with sufficient pressure that the velocity of the jets is sufficiently high that the mode of heat transfer is substantially by forced convection, the liquid being returned from the cooling chambers 30 through return pipes 25 distributed interstitially among the nozzles.

  4. Cooling system for continuous metal casting machines

    DOEpatents

    Draper, R.; Sumpman, W.C.; Baker, R.J.; Williams, R.S.

    1988-06-07

    A continuous metal caster cooling system is provided in which water is supplied in jets from a large number of small nozzles against the inner surface of the rim at a temperature and with sufficient pressure that the velocity of the jets is sufficiently high that the mode of heat transfer is substantially by forced convection, the liquid being returned from the cooling chambers through return pipes distributed interstitially among the nozzles. 9 figs.

  5. Impression cytology: a novel sampling technique for conjunctival cytology of the feline eye.

    PubMed

    Eördögh, Réka; Schwendenwein, Ilse; Tichy, Alexander; Nell, Barbara

    2015-07-01

    Impression cytology is a noninvasive investigation of the ocular surface. It uses the adhesive features of different filter papers to collect a monolayer of epithelial cells from the cornea and/or conjunctiva. Samples obtained by impression cytology exhibit all characteristics of an ideal cytology specimen. The aim of this study was to test the feasibility of impression cytology and to determine the most appropriate filter paper to achieve maximum diagnostic value in the feline eye. Ten healthy cats. The study was conducted in two phases. In the first phase, eight different filter papers (FPs) with various pore sizes were tested: 3.0-, 1.2-, 0.8-, 0.45-, 0.22-, 0.05- and 0.025-μm cellulose acetate papers and a 0.4-μm Biopore membrane (BM). Samples were obtained from the superior bulbar and from the inferior palpebral conjunctiva. In the second phase, three different sampling methods - with and without topical anesthesia, and with topical anesthesia and drying of the conjunctiva - were compared employing the BM encased in the intended BM device (BMD). Samples were evaluated for cellularity and quality of cells. In the first phase, samples obtained from the superior bulbar conjunctiva with the BM had the best cellularity and quality. In the second phase, the BMD with topical anesthesia and additional drying of the conjunctiva was the best method. The BMD may prove to be a suitable diagnostic tool for clinicians. Sampling is quick, processing is simple, and a large area of intact cells can be harvested. © 2014 American College of Veterinary Ophthalmologists.

  6. Adaptable Constrained Genetic Programming: Extensions and Applications

    NASA Technical Reports Server (NTRS)

    Janikow, Cezary Z.

    2005-01-01

    An evolutionary algorithm applies evolution-based principles to problem solving. To solve a problem, the user defines the space of potential solutions, the representation space. Sample solutions are encoded in a chromosome-like structure. The algorithm maintains a population of such samples, which undergo simulated evolution by means of mutation, crossover, and survival of the fittest principles. Genetic Programming (GP) uses tree-like chromosomes, providing very rich representation suitable for many problems of interest. GP has been successfully applied to a number of practical problems such as learning Boolean functions and designing hardware circuits. To apply GP to a problem, the user needs to define the actual representation space, by defining the atomic functions and terminals labeling the actual trees. The sufficiency principle requires that the label set be sufficient to build the desired solution trees. The closure principle allows the labels to mix in any arity-consistent manner. To satisfy both principles, the user is often forced to provide a large label set, with ad hoc interpretations or penalties to deal with undesired local contexts. This unfortunately enlarges the actual representation space, and thus usually slows down the search. In the past few years, three different methodologies have been proposed to allow the user to alleviate the closure principle by providing means to define, and to process, constraints on mixing the labels in the trees. Last summer we proposed a new methodology to further alleviate the problem by discovering local heuristics for building quality solution trees. A pilot system was implemented last summer and tested throughout the year. This summer we have implemented a new revision, and produced a User's Manual so that the pilot system can be made available to other practitioners and researchers. We have also designed, and partly implemented, a larger system capable of dealing with much more powerful heuristics.
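
    To make the constraint idea above concrete, the toy sketch below grows random program trees while allowing only the parent/child label combinations that a user-supplied table permits, which is the kind of restriction the constrained-GP methodologies provide. It is our own illustration; the labels, constraint table, and depth limit are invented and do not come from the pilot system described above.

    ```python
    import random

    FUNCTIONS = {            # label -> (arity, allowed child labels per argument slot)
        "if":  (3, [["lt", "gt"], ["add", "x", "c"], ["add", "x", "c"]]),
        "add": (2, [["x", "c"], ["x", "c"]]),
        "lt":  (2, [["x", "c"], ["x", "c"]]),
        "gt":  (2, [["x", "c"], ["x", "c"]]),
    }
    TERMINALS = {"x", "c"}

    def grow(label, depth, max_depth=4):
        # Recursively build a tree, choosing each child only from the labels
        # the constraint table allows in that argument position.
        if label in TERMINALS:
            return label
        arity, allowed = FUNCTIONS[label]
        children = []
        for arg in range(arity):
            choices = allowed[arg]
            if depth + 1 >= max_depth:                    # prefer terminals near the depth limit
                choices = [c for c in choices if c in TERMINALS] or choices
            children.append(grow(random.choice(choices), depth + 1, max_depth))
        return (label, *children)

    random.seed(0)
    print(grow("if", 0))     # e.g. ('if', ('gt', 'x', 'c'), ('add', 'c', 'x'), 'x')
    ```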

  7. Reliability of the European Society of Human Reproduction and Embryology/European Society for Gynaecological Endoscopy and American Society for Reproductive Medicine classification systems for congenital uterine anomalies detected using three-dimensional ultrasonography.

    PubMed

    Ludwin, Artur; Ludwin, Inga; Kudla, Marek; Kottner, Jan

    2015-09-01

    To estimate the inter-rater/intrarater reliability of the European Society of Human Reproduction and Embryology/European Society for Gynaecological Endoscopy (ESHRE-ESGE) classification of congenital uterine malformations and to compare the results obtained with the reliability of the American Society for Reproductive Medicine (ASRM) classification supplemented with additional morphometric criteria. Reliability/agreement study. Private clinic. Uterine malformations (n = 50 patients, consecutively included) and normal uteri (n = 62 women, randomly selected) constituted the study sample. These were classified by an expert rater according to the ESHRE-ESGE criteria on the basis of real-time three-dimensional single-volume transvaginal (or, in 4 virgin patients, transrectal) ultrasonography findings. The samples were obtained from women of reproductive age. Unprocessed three-dimensional datasets were independently evaluated offline by two experienced, blinded raters using both classification systems. The κ-values and proportions of agreement. Standardized interpretation indicated that the ESHRE-ESGE system has substantial/good or almost perfect/very good reliability (κ > 0.60 and > 0.80), but interpretation against the clinically relevant cutoffs of κ-values showed insufficient reliability for clinical use (κ < 0.90), especially in the diagnosis of septate uterus. The ASRM system had sufficient reliability (κ > 0.95). The low reliability of the ESHRE-ESGE system may lead to a lack of consensus about the management of common uterine malformations and to biased research interpretations. The use of the ASRM classification, supplemented with simple morphometric criteria, may be preferred if its sufficient reliability can be confirmed in real time in a large sample. Copyright © 2015 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
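
    For readers unfamiliar with the agreement statistic used above, the short example below computes Cohen's kappa for two raters assigning categorical diagnoses to the same cases. The labels and ratings are made up purely for illustration; the published study reports its own κ-values and cutoffs.

    ```python
    from collections import Counter

    def cohens_kappa(rater1, rater2):
        # Chance-corrected agreement between two raters over the same cases.
        assert len(rater1) == len(rater2)
        n = len(rater1)
        observed = sum(a == b for a, b in zip(rater1, rater2)) / n
        p1, p2 = Counter(rater1), Counter(rater2)
        expected = sum(p1[c] * p2[c] for c in set(rater1) | set(rater2)) / n ** 2
        return (observed - expected) / (1 - expected)

    r1 = ["normal", "septate", "normal", "bicorporeal", "septate", "normal"]
    r2 = ["normal", "septate", "normal", "septate",     "septate", "normal"]
    print(round(cohens_kappa(r1, r2), 2))   # ≈ 0.71; the abstract treats κ > 0.95 as sufficient for clinical use
    ```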

  8. [Pathological Internet use--epidemiology, diagnostics, co-occurring disorders and treatment].

    PubMed

    Petersen, K U; Weymann, N; Schelb, Y; Thiel, R; Thomasius, R

    2009-05-01

    In 2009, we can look back on a history of 40 years of internet use. While most consumers make use of the internet in a controlled fashion, a progressive loss of the ability to control the frequency and duration of internet activities emerges in some users. As a consequence, the excessive time devoted to internet use and the behavioural narrowing can lead to dramatic psychosocial outcomes. This phenomenon is referred to as "pathological internet use" (PIU). On behalf of the German Ministry of Health, a systematic review of the literature since 1996 was carried out. The main results are presented in this review. Prevalence data on pathological internet use are limited by methodological difficulties concerning the diagnosis and the heterogeneity of diagnostic instruments. International prevalence rates range from 1.5% to 8.2%. Annual studies on representative samples of the German population describe their internet use and patterns of use, but information on the prevalence of PIU is missing. Diagnostic instruments are needed that show sufficient reliability and validity and allow international comparisons. Research on the Dutch "Compulsive Internet Use Scale" may close this gap. Cross-sectional studies on samples of patients report high comorbidity of PIU with psychiatric disorders, e.g. affective disorder and attention deficit hyperactivity disorder (ADHD). Whether PIU and these co-occurring disorders are better explained by shared risk factors or as secondary disorders is largely unknown. Treatment is currently based on therapeutic interventions and strategies that have been successful in the treatment of substance use disorders. Due to the lack of methodologically sufficient research, it is currently impossible to recommend any evidence-based treatment of PIU.

  9. Dynamic performance of MEMS deformable mirrors for use in an active/adaptive two-photon microscope

    NASA Astrophysics Data System (ADS)

    Zhang, Christian C.; Foster, Warren B.; Downey, Ryan D.; Arrasmith, Christopher L.; Dickensheets, David L.

    2016-03-01

    Active optics can facilitate two-photon microscopic imaging deep in tissue. We are investigating fast focus control mirrors used in concert with an aberration correction mirror to control the axial position of focus and system aberrations dynamically during scanning. With an adaptive training step, sample-induced aberrations may be compensated as well. If sufficiently fast and precise, active optics may be able to compensate under-corrected imaging optics as well as sample aberrations to maintain diffraction-limited performance throughout the field of view. Toward this end we have measured a Boston Micromachines Corporation Multi-DM 140-element deformable mirror, and a Revibro Optics electrostatic 4-zone focus control mirror to characterize dynamic performance. Tests for the Multi-DM included both step response and sinusoidal frequency sweeps of specific Zernike modes. For the step response we measured 10%-90% rise times for the target Zernike amplitude, and wavefront rms error settling times. Frequency sweeps identified the 3 dB bandwidth of the mirror when attempting to follow a sinusoidal amplitude trajectory for a specific Zernike mode. For five tested Zernike modes (defocus, spherical aberration, coma, astigmatism and trefoil) we find error settling times for mode amplitudes up to 400 nm to be less than 52 μs, and 3 dB frequencies range from 6.5 kHz to 10 kHz. The Revibro Optics mirror was tested for step response only, with error settling time of 80 μs for a large 3 μm defocus step, and settling time of only 18 μs for a 400 nm spherical aberration step. These response speeds are sufficient for intra-scan correction at scan rates typical of two-photon microscopy.
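
    The rise-time and settling-time figures quoted above can be extracted from a sampled step response with a few lines of analysis. The sketch below is a simplified, hedged illustration (a single-amplitude settling criterion applied to an invented first-order response), not the authors' wavefront-rms-based measurement pipeline.

```python
import numpy as np

def rise_and_settling_time(t, y, target, settle_band=0.05):
    """10%-90% rise time and settling time of a step response y(t) toward a
    target amplitude, with the settling band given as a fraction of target."""
    y, t = np.asarray(y, float), np.asarray(t, float)
    t10 = t[np.argmax(y >= 0.1 * target)]
    t90 = t[np.argmax(y >= 0.9 * target)]
    outside = np.abs(y - target) > settle_band * abs(target)
    t_settle = t[np.max(np.nonzero(outside))] if outside.any() else t[0]
    return t90 - t10, t_settle

# Hypothetical first-order response to a 400 nm amplitude step, tau = 15 us.
t = np.linspace(0, 200e-6, 2001)
y = 400e-9 * (1 - np.exp(-t / 15e-6))
rise, settle = rise_and_settling_time(t, y, 400e-9)
print(round(rise * 1e6, 1), round(settle * 1e6, 1))   # microseconds
```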

  10. Building test data from real outbreaks for evaluating detection algorithms.

    PubMed

    Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
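
    Among the resampling schemes listed, inverse transform sampling (ITSM) is the simplest to illustrate. The following hedged sketch rescales a historical daily-case curve to a requested duration and total case count, then resamples individual onset days from the empirical distribution; the historical signal, parameter names, and rescaling choice are illustrative assumptions rather than the authors' implementation.

```python
import bisect
import random

def simulate_outbreak(historical_counts, n_cases, n_days, rng=random.Random(0)):
    """Resample an outbreak signal of n_days days and n_cases total cases from a
    historical daily-case curve, via inverse transform sampling on the empirical
    day-of-onset distribution with a homothetic rescaling of the time axis."""
    hist_days = len(historical_counts)
    total = sum(historical_counts)
    cdf, acc = [], 0.0
    for c in historical_counts:              # empirical CDF of day of onset
        acc += c / total
        cdf.append(acc)
    simulated = [0] * n_days
    for _ in range(n_cases):
        u = rng.random()
        day_hist = bisect.bisect_left(cdf, u)           # inverse transform step
        day_new = int(day_hist * n_days / hist_days)    # stretch/shrink the time axis
        simulated[min(day_new, n_days - 1)] += 1
    return simulated

# Hypothetical historical signal: a 10-day outbreak.
historical = [1, 3, 7, 12, 15, 11, 6, 3, 1, 1]
print(simulate_outbreak(historical, n_cases=80, n_days=14))
```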

  11. Digital simulation of an arbitrary stationary stochastic process by spectral representation.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2011-04-01

    In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, which can be used for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
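
    The two-step recipe described above, colour white Gaussian noise to the target spectrum, then map its Gaussian marginal onto the target distribution through the inverse CDF, can be sketched as follows. The target spectrum, the exponential marginal, and the normalisation choices are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.stats import norm

def colored_samples(n, target_psd, target_ppf, seed=0):
    """Draw n samples with (approximately) the spectral shape target_psd and the
    marginal distribution whose inverse CDF (percent-point function) is target_ppf."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    freqs = np.fft.rfftfreq(n)
    # Step 1: colour the Gaussian noise by shaping its spectrum.
    spectrum = np.fft.rfft(white) * np.sqrt(target_psd(freqs))
    colored = np.fft.irfft(spectrum, n)
    colored = (colored - colored.mean()) / colored.std()
    # Step 2: map the Gaussian marginal onto the target marginal (inverse transform).
    return target_ppf(norm.cdf(colored))

# Example: roughly 1/f-shaped spectrum with an exponential marginal (mean 2).
psd = lambda f: 1.0 / np.maximum(f, 1.0 / 4096)
ppf = lambda u: -2.0 * np.log1p(-np.clip(u, 0.0, 1.0 - 1e-12))
x = colored_samples(4096, psd, ppf)
print(round(float(x.mean()), 2), round(float(x.std()), 2))
```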

  12. Building test data from real outbreaks for evaluating detection algorithms

    PubMed Central

    Texier, Gaetan; Jackson, Michael L.; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method—ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals. PMID:28863159

  13. A passive guard for low thermal conductivity measurement of small samples by the hot plate method

    NASA Astrophysics Data System (ADS)

    Jannot, Yves; Degiovanni, Alain; Grigorova-Moutiers, Veneta; Godefroy, Justine

    2017-01-01

    Hot plate methods under steady-state conditions are based on a 1D model to estimate the thermal conductivity, using measurements of the temperatures T0 and T1 of the two sides of the sample and of the heat flux crossing it. To be consistent with the hypothesis of 1D heat flux, either a guarded hot plate apparatus is used, or the temperature is measured at the centre of the sample. The latter method can be used only if the thickness/width ratio of the sample is sufficiently low, while the guarded hot plate method requires large-width samples (typical cross section of 0.6 × 0.6 m2), so neither method can be used for small-width samples. The method presented in this paper is based on an optimal choice of the temperatures T0 and T1 relative to the ambient temperature Ta, enabling the estimation of the thermal conductivity with a centered hot plate method by applying the 1D heat flux model. It will be shown that these optimal values do not depend on the size or on the thermal conductivity of the samples (in the range 0.015-0.2 W m-1 K-1), but only on Ta. The experimental results obtained validate the method for several reference samples for values of the thickness/width ratio up to 0.3, thus enabling the measurement of the thermal conductivity of samples having a small cross section, down to 0.045 × 0.045 m2.
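
    Under the 1D model, the steady-state estimate reduces to Fourier's law: the conductivity is the heat flux density times the sample thickness divided by the temperature difference across it. The short sketch below applies that relation to invented readings (the numbers are hypothetical, chosen only to fall in the 0.015-0.2 W m-1 K-1 range discussed above).

```python
def thermal_conductivity(heat_flow_w, area_m2, thickness_m, t_hot_c, t_cold_c):
    """Steady-state 1D estimate from Fourier's law: lambda = (Q / A) * e / (T0 - T1)."""
    flux_density = heat_flow_w / area_m2            # W m^-2 crossing the sample
    return flux_density * thickness_m / (t_hot_c - t_cold_c)

# Hypothetical reading: 45 mm x 45 mm sample, 45 mm thick, 0.05 W crossing it,
# faces held at 35 C and 15 C.
lam = thermal_conductivity(0.05, 0.045 * 0.045, 0.045, 35.0, 15.0)
print(f"{lam:.3f} W m^-1 K^-1")   # ~0.056
```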

  14. Efficiency of Different Sampling Tools for Aquatic Macroinvertebrate Collections in Malaysian Streams

    PubMed Central

    Ghani, Wan Mohd Hafezul Wan Abdul; Rawi, Che Salmah Md; Hamid, Suhaila Abd; Al-Shami, Salman Abdo

    2016-01-01

    This study analyses the sampling performance of three benthic sampling tools commonly used to collect freshwater macroinvertebrates. The efficiency of qualitative D-frame and square aquatic nets was compared with that of a quantitative Surber sampler in tropical Malaysian streams. The abundance and diversity of macroinvertebrates collected using each tool were evaluated along with their relative variations (RVs). Each tool was used to sample macroinvertebrates from three streams draining different areas: a vegetable farm, a tea plantation and a forest reserve. High macroinvertebrate diversities were recorded using the square net and Surber sampler at the forested stream site; however, very low species abundance was recorded by the Surber sampler. Relatively large variations in the Surber sampler collections (RVs of 36% and 28%) were observed for the vegetable farm and tea plantation streams, respectively. Of the three sampling methods, the square net was the most efficient, collecting a greater diversity of macroinvertebrate taxa and a greater number of specimens (i.e., abundance) overall, particularly from the vegetable farm and the tea plantation streams (RV<25%). Fewer square net sample passes (<8 samples) were sufficient to perform a biological assessment of water quality, but each sample required a slightly longer processing time (±20 min) compared with those gathered via the other samplers. In conclusion, all three apparatuses were suitable for macroinvertebrate collection in Malaysian streams and gathered assemblages that resulted in the determination of similar biological water quality classes using the Family Biotic Index (FBI) and the Biological Monitoring Working Party (BMWP). However, despite a slightly longer processing time, the square net was more efficient (lowest RV) at collecting samples and more suitable for the collection of macroinvertebrates from deep, fast flowing, wadeable streams with coarse substrates. PMID:27019685

  15. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    NASA Astrophysics Data System (ADS)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
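
    The image/point trade-off discussed above can be made concrete with a small simulation in the same spirit: synthetic boolean "images" with a known true cover are scored with random points, and the precision of the cover estimate is compared for many images with few points versus few images with many points. Image size, patchiness, and cover values below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_image(true_cover, size=200, patchiness=8):
    """Boolean 'image' whose target biota covers ~true_cover of pixels, arranged
    in coarse patches to mimic spatial aggregation."""
    coarse = rng.random((patchiness, patchiness))
    field = np.kron(coarse, np.ones((size // patchiness, size // patchiness)))
    return field >= np.quantile(field, 1 - true_cover)

def estimate_cover(images, n_images, n_points):
    """Score n_points random pixels in each of n_images randomly chosen images."""
    chosen = rng.choice(len(images), size=n_images, replace=False)
    hits = 0
    for i in chosen:
        img = images[i]
        ys = rng.integers(0, img.shape[0], n_points)
        xs = rng.integers(0, img.shape[1], n_points)
        hits += img[ys, xs].sum()
    return hits / (n_images * n_points)

transect = [make_image(true_cover=0.15) for _ in range(100)]
for n_img, n_pts in [(10, 50), (50, 10)]:     # same total effort, different allocation
    est = [estimate_cover(transect, n_img, n_pts) for _ in range(200)]
    print(n_img, n_pts, round(float(np.mean(est)), 3), round(float(np.std(est)), 4))
```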

  16. Adaptive control of turbulence intensity is accelerated by frugal flow sampling.

    PubMed

    Quinn, Daniel B; van Halder, Yous; Lentink, David

    2017-11-01

    The aerodynamic performance of vehicles and animals, as well as the productivity of turbines and energy harvesters, depends on the turbulence intensity of the incoming flow. Previous studies have pointed at the potential benefits of active closed-loop turbulence control. However, it is unclear what the minimal sensory and algorithmic requirements are for realizing this control. Here we show that very low-bandwidth anemometers record sufficient information for an adaptive control algorithm to converge quickly. Our online Newton-Raphson algorithm tunes the turbulence in a recirculating wind tunnel by taking readings from an anemometer in the test section. After starting at 9% turbulence intensity, the algorithm converges on values ranging from 10% to 45% in less than 12 iterations within 1% accuracy. By down-sampling our measurements, we show that very-low-bandwidth anemometers record sufficient information for convergence. Furthermore, down-sampling accelerates convergence by smoothing gradients in turbulence intensity. Our results explain why low-bandwidth anemometers in engineering and mechanoreceptors in biology may be sufficient for adaptive control of turbulence intensity. Finally, our analysis suggests that, if certain turbulent eddy sizes are more important to control than others, frugal adaptive control schemes can be particularly computationally effective for improving performance. © 2017 The Author(s).
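
    A hedged sketch of the closed-loop idea follows: an online Newton-Raphson update drives a single actuation parameter until the measured turbulence intensity reaches a setpoint within 1%, with the derivative estimated from a small finite-difference probe. The plant model measure_intensity is an invented stand-in for the anemometer reading, not the authors' wind-tunnel dynamics.

```python
def measure_intensity(drive):
    """Stand-in for the anemometer reading: turbulence intensity as an unknown,
    monotone function of a hypothetical actuation parameter."""
    return 0.09 + 0.36 * (drive / (1.0 + drive))

def tune_turbulence(target, drive=0.1, eps=1e-3, tol=0.01, max_iter=12):
    """Online Newton-Raphson on f(drive) = intensity(drive) - target."""
    for i in range(max_iter):
        f = measure_intensity(drive) - target
        if abs(f) <= tol * target:                 # within 1% of the setpoint
            return drive, i
        df = (measure_intensity(drive + eps) - measure_intensity(drive - eps)) / (2 * eps)
        drive = max(drive - f / df, 0.0)
    return drive, max_iter

for target in (0.10, 0.25, 0.45):
    d, iters = tune_turbulence(target)
    print(target, round(d, 3), iters, round(measure_intensity(d), 3))
```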

  17. Rapid Quantification of Abscisic Acid by GC-MS/MS for Studies of Abiotic Stress Response.

    PubMed

    Verslues, Paul E

    2017-01-01

    Drought and low water potential induce large increases in Abscisic Acid (ABA) content of plant tissue. This increased ABA content is essential to regulate downstream stress resistance responses; however, the mechanisms regulating ABA accumulation are incompletely known. Thus, the ability to accurately quantify ABA at high throughput and low cost is important for plant stress research. We have combined and modified several previously published protocols to establish a rapid ABA analysis protocol using gas chromatography-tandem mass spectrometry (GC-MS/MS). Derivatization of ABA is performed with (trimethylsilyl)-diazomethane rather than the harder-to-prepare diazomethane. Sensitivity of the analysis is sufficient that small samples of low water potential treated Arabidopsis thaliana seedlings can be routinely analyzed in reverse genetic studies of putative stress regulators as well as studies of natural variation in ABA accumulation.

  18. Genetic analysis of rock hole and domestic Aedes aegypti on the Caribbean island of Anguilla.

    PubMed

    Wallis, G P; Tabachnick, W J

    1990-12-01

    Genetic variation was characterized at 11 enzyme coding loci in Aedes aegypti collected from 3 rock hole and 4 domestic sites on the island of Anguilla, West Indies. The pattern of gene frequency variation suggests that these mosquito samples do not constitute a single panmictic population, but there are no large consistent differences between rock hole and domestic forms to parallel the East African sylvan-domestic dichotomy. With the exception of one of the domestic populations, two loci did however show some gene frequency differences consistent with genetic differentiation between the 2 habitat types. We conclude that whereas there may be some degree of differentiation between the 2 habitat types, local eradication attempts and sporadic gene flow cause temporal and spatial volatility that is sufficient to swamp these differences.

  19. Metaresearch for Evaluating Reproducibility in Ecology and Evolution

    PubMed Central

    Fidler, Fiona; Chee, Yung En; Wintle, Bonnie C.; Burgman, Mark A.; McCarthy, Michael A.; Gordon, Ascelin

    2017-01-01

    Recent replication projects in other disciplines have uncovered disturbingly low levels of reproducibility, suggesting that those research literatures may contain unverifiable claims. The conditions contributing to irreproducibility in other disciplines are also present in ecology. These include a large discrepancy between the proportion of “positive” or “significant” results and the average statistical power of empirical research, incomplete reporting of sampling stopping rules and results, journal policies that discourage replication studies, and a prevailing publish-or-perish research culture that encourages questionable research practices. We argue that these conditions constitute sufficient reason to systematically evaluate the reproducibility of the evidence base in ecology and evolution. In some cases, the direct replication of ecological research is difficult because of strong temporal and spatial dependencies, so here, we propose metaresearch projects that will provide proxy measures of reproducibility. PMID:28596617

  20. Adult honey bee losses in Utah as related to arsenic poisoning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knowlton, G.F.; Sturtevant, A.P.; Sorenson, C.J.

    1950-08-01

    A study has been conducted to determine the source of arsenic that has caused serious losses of honey bees in Utah. Samples of dead and dying bees, pollen, plant blossoms, soil, pond water, algae, and moss were collected and analyzed for the presence of arsenic. Although some of the deaths were caused by improperly timed orchard spraying, a large percentage of arsenical materials found in blossoms must have come from some source other than through plant absorption from the soil. Plants apparently do not take up sufficient quantities of arsenic from the soil to poison bees. The data support the conclusion that most honey bee losses were caused by arsenic-containing dusts from the operation of smelters. Some beekeepers reported that losses were especially noticeable after a light rain following a period of drought.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harner, E.J.; Gilfillan, E.S.

    Two large shoreline assessment studies conducted in 1990 in Prince William Sound, Alaska, after the Exxon Valdez oil spill used different design strategies to determine the impact of oiling on shoreline biota. One of the studies, the Coastal Habitat Injury Assessment (CHIA) conducted for the Exxon Valdez Oil Spill Council, used matched pairs of sites, normal population distributions for biota, and meta-analysis. The power of the CHIA study to detect oiling impacts depends on being able to identify and select appropriate pairs of sites for comparison. The CHIA study also increased the oiling signal by focusing on moderate to heavily oiled sites. The Shoreline Ecology Program (SEP), conducted for Exxon, used a stratified-random-sampling study design, normal and non-normal population distributions and covariates. The SEP study was able to detect oiling impacts by using a sufficient number of sites and widely spaced transects.

  2. Quantifying the utilization of medical devices necessary to detect postmarket safety differences: A case study of implantable cardioverter defibrillators.

    PubMed

    Bates, Jonathan; Parzynski, Craig S; Dhruva, Sanket S; Coppi, Andreas; Kuntz, Richard; Li, Shu-Xia; Marinac-Dabic, Danica; Masoudi, Frederick A; Shaw, Richard E; Warner, Frederick; Krumholz, Harlan M; Ross, Joseph S

    2018-06-12

    To estimate medical device utilization needed to detect safety differences among implantable cardioverter defibrillator (ICD) generator models and compare these estimates to utilization in practice. We conducted repeated sample size estimates to calculate the medical device utilization needed, systematically varying device-specific safety event rate ratios and significance levels while maintaining 80% power, testing 3 average adverse event rates (3.9, 6.1, and 12.6 events per 100 person-years) estimated from the American College of Cardiology's 2006 to 2010 National Cardiovascular Data Registry of ICDs. We then compared these estimates with actual medical device utilization. At significance level 0.05 and 80% power, 34% or fewer ICD models accrued sufficient utilization in practice to detect safety differences for rate ratios <1.15 and an average event rate of 12.6 events per 100 person-years. For average event rates of 3.9 and 12.6 events per 100 person-years, 30% and 50% of ICD models, respectively, accrued sufficient utilization for a rate ratio of 1.25, whereas 52% and 67% did so for a rate ratio of 1.50. Because actual ICD utilization was not uniformly distributed across ICD models, the proportion of individuals receiving an ICD model that accrued sufficient utilization in practice was 0% to 21%, 32% to 70%, and 67% to 84% for rate ratios of 1.05, 1.15, and 1.25, respectively, for the range of 3 average adverse event rates. Small safety differences among ICD generator models are unlikely to be detected through routine surveillance given current ICD utilization in practice, but large safety differences can be detected for most patients at anticipated average adverse event rates. Copyright © 2018 John Wiley & Sons, Ltd.
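
    The kind of calculation involved can be sketched with a generic normal approximation for comparing two Poisson event rates: the person-years of utilization needed per device model (study model and comparator each) to detect a given rate ratio at 80% power and two-sided alpha. This is a textbook approximation using the abstract's average event rates, not the authors' actual sample size procedure.

```python
from math import log
from statistics import NormalDist

def person_years_needed(base_rate, rate_ratio, alpha=0.05, power=0.80):
    """Approximate person-years per group to detect a rate ratio between two
    Poisson rates, via a normal approximation for the log rate ratio."""
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    lam0, lam1 = base_rate, base_rate * rate_ratio
    return (z_a + z_b) ** 2 * (1 / lam0 + 1 / lam1) / log(rate_ratio) ** 2

# Average adverse event rates from the abstract, expressed per person-year.
for rate in (0.039, 0.061, 0.126):
    for rr in (1.15, 1.25, 1.50):
        print(rate, rr, round(person_years_needed(rate, rr)), "person-years per group")
```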

  3. Bridging scale gaps between regional maps of forest aboveground biomass and field sampling plots using TanDEM-X data

    NASA Astrophysics Data System (ADS)

    Ni, W.; Zhang, Z.; Sun, G.

    2017-12-01

    Several large-scale maps of forest AGB have been released [1] [2] [3]. However, these existing global or regional datasets were only approximations based on combining land cover type and representative values instead of measurements of actual forest aboveground biomass or forest heights [4]. Rodríguez-Veiga et al. [5] reported obvious discrepancies of existing forest biomass stock maps with in-situ observations in Mexico. One of the biggest challenges to the credibility of these maps comes from the scale gaps between the size of field sampling plots used to develop (or validate) estimation models and the pixel size of these maps, and from the availability of field sampling plots with sufficient size for the verification of these products [6]. It is time-consuming and labor-intensive to collect a sufficient number of field samples over plots as large as the pixels of regional maps. Smaller field sampling plots cannot fully represent the spatial heterogeneity of forest stands, as shown in Figure 1. Forest AGB is directly determined by forest heights, diameter at breast height (DBH) of each tree, forest density and tree species. What is measured in field sampling is the geometrical characteristics of forest stands, including the DBH, tree heights and forest densities. LiDAR data are considered the best dataset for the estimation of forest AGB, mainly because LiDAR can directly capture geometrical features of forest stands through its range detection capabilities. A remotely sensed dataset that is capable of direct measurements of forest spatial structures may therefore serve as a ladder to bridge the scale gaps between the pixel size of regional maps of forest AGB and field sampling plots. Several studies report that TanDEM-X data can be used to characterize forest spatial structures [7, 8]. In this study, the forest AGB map of northeast China was produced using ALOS/PALSAR data, taking TanDEM-X data as a bridge. The TanDEM-X InSAR data used in this study and the resulting forest AGB map are shown in Figure 2. The technical details and further analysis will be given in the final report. Acknowledgment: This work was supported in part by the National Basic Research Program of China (Grant No. 2013CB733401, 2013CB733404), and in part by the National Natural Science Foundation of China (Grant Nos. 41471311, 41371357, 41301395).

  4. ELECTROFISHING DISTANCE AND NUMBER OF SPECIES COLLECTED FROM THREE RAFTABLE WESTERN RIVERS

    EPA Science Inventory

    A key issue in assessing a fish assemblage at a site is determining a sufficient sampling effort to adequately represent the species in an assemblage. Inadequate effort produces considerable noise in multiple samples at the site or under-represents the species present. Excessiv...

  5. Psychometric Properties of the Foot and Ankle Outcome Score in a Community-Based Study of Adults with and without Osteoarthritis

    PubMed Central

    Golightly, Yvonne M.; DeVellis, Robert F.; Nelson, Amanda E.; Hannan, Marian T.; Lohmander, L. Stefan; Renner, Jordan B.; Jordan, Joanne M.

    2014-01-01

    Objective: Foot and ankle problems are common in adults, and large observational studies are needed to advance our understanding of the etiology and impact of these conditions. Valid and reliable measures of foot and ankle symptoms and physical function are necessary for this research. This study examined psychometric properties of the Foot and Ankle Outcome Score (FAOS) subscales (pain, other symptoms, activities of daily living [ADL], sport and recreational function [Sport/Recreation], and foot and ankle related quality of life [QOL]) in a large, community-based sample of African American and Caucasian men and women 50+ years old. Methods: Johnston County Osteoarthritis Project participants (N=1670) completed the 42-item FAOS (mean age 69 years, 68% women, 31% African American, mean body mass index [BMI] 31.5 kg/m2). Internal consistency, test-retest reliability, convergent validity, and structural validity of each subscale were examined for the sample and for subgroups according to race, gender, age, BMI, presence of knee or hip osteoarthritis, and presence of knee, hip or low back symptoms. Results: For the sample and each subgroup, Cronbach’s alphas were 0.95–0.97 (pain), 0.97–0.98 (ADL), 0.94–0.96 (Sport/Recreation), 0.89–0.92 (QOL), and 0.72–0.82 (symptoms). Correlation coefficients were 0.24–0.52 for pain and symptoms subscales with foot and ankle symptoms and 0.30–0.55 for ADL and Sport/Recreation subscales with Western Ontario and McMaster Universities Osteoarthritis Index function subscale. Intraclass correlation coefficients for test-retest reliability were 0.63–0.81. Items loaded on a single factor for each subscale except symptoms (2 factors). Conclusions: The FAOS exhibited sufficient reliability and validity in this large cohort study. PMID:24023029

  6. Psychometric properties of the foot and ankle outcome score in a community-based study of adults with and without osteoarthritis.

    PubMed

    Golightly, Yvonne M; Devellis, Robert F; Nelson, Amanda E; Hannan, Marian T; Lohmander, L Stefan; Renner, Jordan B; Jordan, Joanne M

    2014-03-01

    Foot and ankle problems are common in adults, and large observational studies are needed to advance our understanding of the etiology and impact of these conditions. Valid and reliable measures of foot and ankle symptoms and physical function are necessary for this research. This study examined psychometric properties of the Foot and Ankle Outcome Score (FAOS) subscales (pain, other symptoms, activities of daily living [ADL], sport and recreational function [sport/recreation], and foot- and ankle-related quality of life [QOL]) in a large, community-based sample of African American and white men and women ages ≥50 years. Johnston County Osteoarthritis Project participants (n = 1,670) completed the 42-item FAOS (mean age 69 years, 68% women, 31% African American, mean body mass index [BMI] 31.5 kg/m(2) ). Internal consistency, test-retest reliability, convergent validity, and structural validity of each subscale were examined for the sample and for subgroups according to race, sex, age, BMI, presence of knee or hip osteoarthritis, and presence of knee, hip, or low back symptoms. For the sample and each subgroup, Cronbach's alpha coefficients ranged from 0.95-0.97 (pain), 0.97-0.98 (ADL), 0.94-0.96 (sport/recreation), 0.89-0.92 (QOL), and 0.72-0.82 (symptoms). Correlation coefficients ranged from 0.24-0.52 for pain and symptoms subscales with foot and ankle symptoms and from 0.30-0.55 for ADL and sport/recreation subscales with the Western Ontario and McMaster Universities Osteoarthritis Index function subscale. Intraclass correlation coefficients for test-retest reliability ranged from 0.63-0.81. Items loaded on a single factor for each subscale except symptoms (2 factors). The FAOS exhibited sufficient reliability and validity in this large cohort study. Copyright © 2014 by the American College of Rheumatology.

  7. Large-scale assessment of benthic communities across multiple marine protected areas using an autonomous underwater vehicle.

    PubMed

    Ferrari, Renata; Marzinelli, Ezequiel M; Ayroza, Camila Rezende; Jordan, Alan; Figueira, Will F; Byrne, Maria; Malcolm, Hamish A; Williams, Stefan B; Steinberg, Peter D

    2018-01-01

    Marine protected areas (MPAs) are designed to reduce threats to biodiversity and ecosystem functioning from anthropogenic activities. Assessment of MPA effectiveness requires synchronous sampling of protected and non-protected areas at multiple spatial and temporal scales. We used an autonomous underwater vehicle to map benthic communities in replicate 'no-take' and 'general-use' (fishing allowed) zones within three MPAs along 7° of latitude. We recorded 92 taxa and 38 morpho-groups across three large MPAs. We found that important habitat-forming biota (e.g. massive sponges) were more prevalent and abundant in no-take zones, while short ephemeral algae were more abundant in general-use zones, suggesting potential short-term effects of zoning (5-10 years). Yet, short-term effects of zoning were not detected at the community level (community structure or composition), while community structure varied significantly among MPAs. We conclude that by allowing rapid, simultaneous assessments at multiple spatial scales, autonomous underwater vehicles are useful to document changes in marine communities and identify adequate scales to manage them. This study advanced knowledge of marine benthic communities and their conservation in three ways. First, we quantified benthic biodiversity and abundance, generating the first baseline of these benthic communities against which the effectiveness of three large MPAs can be assessed. Second, we identified the taxonomic resolution necessary to assess both short and long-term effects of MPAs, concluding that coarse taxonomic resolution is sufficient given that analyses of community structure at different taxonomic levels were generally consistent. Yet, observed differences were taxa-specific and may not have been evident using our broader taxonomic classifications; a classification of mid to high taxonomic resolution may be necessary to determine zoning effects on key taxa. Third, we provide an example of statistical analyses and sampling design that, once temporal sampling is incorporated, will be useful to detect changes of marine benthic communities across multiple spatial and temporal scales.

  8. Large-scale assessment of benthic communities across multiple marine protected areas using an autonomous underwater vehicle

    PubMed Central

    Ayroza, Camila Rezende; Jordan, Alan; Figueira, Will F.; Byrne, Maria; Malcolm, Hamish A.; Williams, Stefan B.; Steinberg, Peter D.

    2018-01-01

    Marine protected areas (MPAs) are designed to reduce threats to biodiversity and ecosystem functioning from anthropogenic activities. Assessment of MPA effectiveness requires synchronous sampling of protected and non-protected areas at multiple spatial and temporal scales. We used an autonomous underwater vehicle to map benthic communities in replicate ‘no-take’ and ‘general-use’ (fishing allowed) zones within three MPAs along 7° of latitude. We recorded 92 taxa and 38 morpho-groups across three large MPAs. We found that important habitat-forming biota (e.g. massive sponges) were more prevalent and abundant in no-take zones, while short ephemeral algae were more abundant in general-use zones, suggesting potential short-term effects of zoning (5–10 years). Yet, short-term effects of zoning were not detected at the community level (community structure or composition), while community structure varied significantly among MPAs. We conclude that by allowing rapid, simultaneous assessments at multiple spatial scales, autonomous underwater vehicles are useful to document changes in marine communities and identify adequate scales to manage them. This study advanced knowledge of marine benthic communities and their conservation in three ways. First, we quantified benthic biodiversity and abundance, generating the first baseline of these benthic communities against which the effectiveness of three large MPAs can be assessed. Second, we identified the taxonomic resolution necessary to assess both short and long-term effects of MPAs, concluding that coarse taxonomic resolution is sufficient given that analyses of community structure at different taxonomic levels were generally consistent. Yet, observed differences were taxa-specific and may not have been evident using our broader taxonomic classifications; a classification of mid to high taxonomic resolution may be necessary to determine zoning effects on key taxa. Third, we provide an example of statistical analyses and sampling design that, once temporal sampling is incorporated, will be useful to detect changes of marine benthic communities across multiple spatial and temporal scales. PMID:29547656

  9. Influence of coronary artery diameter on eNOS protein content

    NASA Technical Reports Server (NTRS)

    Laughlin, M. H.; Turk, J. R.; Schrage, W. G.; Woodman, C. R.; Price, E. M.

    2003-01-01

    The purpose of this study was to test the hypothesis that the content of endothelial nitric oxide synthase (eNOS) protein (eNOS protein/g total artery protein) increases with decreasing artery diameter in the coronary arterial tree. Content of eNOS protein was determined in porcine coronary arteries with immunoblot analysis. Arteries were isolated in six size categories from each heart: large arteries [301- to 2,500-microm internal diameter (ID)], small arteries (201- to 300-microm ID), resistance arteries (151- to 200-microm ID), large arterioles (101- to 150-microm ID), intermediate arterioles (51- to 100-microm ID), and small arterioles(<50-microm ID). To obtain sufficient protein for analysis from small- and intermediate-sized arterioles, five to seven arterioles 1-2 mm in length were pooled into one sample for each animal. Results establish that the number of smooth muscle cells per endothelial cell decreases from a number of 10 to 15 in large coronary arteries to 1 in the smallest arterioles. Immunohistochemistry revealed that eNOS is located only in endothelial cells in all sizes of coronary artery and in coronary capillaries. Contrary to our hypothesis, eNOS protein content did not increase with decreasing size of coronary artery. Indeed, the smallest coronary arterioles had less eNOS protein per gram of total protein than the large coronary arteries. These results indicate that eNOS protein content is greater in the endothelial cells of conduit arteries, resistance arteries, and large arterioles than in small coronary arterioles.

  10. Effect of H-wave polarization on laser radar detection of partially convex targets in random media.

    PubMed

    El-Ocla, Hosam

    2010-07-01

    The performance of the laser radar cross section (LRCS) of conducting targets with large sizes is investigated numerically in free space and random media. The LRCS is calculated using a boundary value method with beam wave incidence and H-wave polarization. The elements that contribute to the LRCS problem are considered, including random medium strength, target configuration, and beam width. The effect of creeping waves, stimulated by H-polarization, on the LRCS behavior is demonstrated. Targets with sizes of up to five wavelengths are sufficiently larger than the beam width and large enough to represent fairly complex targets. Scatterers are assumed to have analytical partially convex contours with inflection points.

  11. Twisted versus braided magnetic flux ropes in coronal geometry. II. Comparative behaviour

    NASA Astrophysics Data System (ADS)

    Prior, C.; Yeates, A. R.

    2016-06-01

    Aims: Sigmoidal structures in the solar corona are commonly associated with magnetic flux ropes whose magnetic field lines are twisted about a mutual axis. Their dynamical evolution is well studied, with sufficient twisting leading to large-scale rotation (writhing) and vertical expansion, possibly leading to ejection. Here, we investigate the behaviour of flux ropes whose field lines have more complex entangled/braided configurations. Our hypothesis is that this internal structure will inhibit the large-scale morphological changes. Additionally, we investigate the influence of the background field within which the rope is embedded. Methods: A technique for generating tubular magnetic fields with arbitrary axial geometry and internal structure, introduced in part I of this study, provides the initial conditions for resistive-MHD simulations. The tubular fields are embedded in a linear force-free background, and we consider various internal structures for the tubular field, including both twisted and braided topologies. These embedded flux ropes are then evolved using a 3D MHD code. Results: Firstly, in a background where twisted flux ropes evolve through the expected non-linear writhing and vertical expansion, we find that flux ropes with sufficiently braided/entangled interiors show no such large-scale changes. Secondly, embedding a twisted flux rope in a background field with a sigmoidal inversion line leads to eventual reversal of the large-scale rotation. Thirdly, in some cases a braided flux rope splits due to reconnection into two twisted flux ropes of opposing chirality - a phenomenon previously observed in cylindrical configurations. Conclusions: Sufficiently complex entanglement of the magnetic field lines within a flux rope can suppress large-scale morphological changes of its axis, with magnetic energy reduced instead through reconnection and expansion. The structure of the background magnetic field can significantly affect the changing morphology of a flux rope.

  12. Swashed away? Storm impacts on sandy beach macrofaunal communities

    NASA Astrophysics Data System (ADS)

    Harris, Linda; Nel, Ronel; Smale, Malcolm; Schoeman, David

    2011-09-01

    Storms can have a large impact on sandy shores, with powerful waves eroding large volumes of sand off the beach. Resulting damage to the physical environment has been well-studied but the ecological implications of these natural phenomena are less known. Since climate change predictions suggest an increase in storminess in the near future, understanding these ecological implications is vital if sandy shores are to be proactively managed for resilience. Here, we report on an opportunistic experiment that tests the a priori expectation that storms impact beach macrofaunal communities by modifying natural patterns of beach morphodynamics. Two sites at Sardinia Bay, South Africa, were sampled for macrofauna and physical descriptors following standard sampling methods. This sampling took place five times at three- to four-month intervals between April 2008 and August 2009. The second and last sampling events were undertaken after unusually large storms, the first of which was sufficiently large to transform one site from a sandy beach into a mixed shore for the first time in living memory. A range of univariate (linear mixed-effects models) and multivariate (e.g. non-metric multidimensional scaling, PERMANOVA) methods were employed to describe trends in the time series, and to explore the likelihood of possible explanatory mechanisms. Macrofaunal communities at the dune-backed beach (Site 2) withstood the effects of the first storm but were altered significantly by the second storm. In contrast, macrofauna communities at Site 1, where the supralittoral had been anthropogenically modified so that exchange of sediments with the beach was limited, were strongly affected by the first storm and showed little recovery over the study period. In line with predictions from ecological theory, beach morphodynamics was found to be a strong driver of temporal patterns in the macrofaunal community structure, with the storm events also identified as a significant factor, likely because of their direct effects on beach morphodynamics. Our results also support those of other studies suggesting that developed shores are more impacted by storms than are undeveloped shores. Whilst recognising we cannot generalise too far beyond our limited study, our results contribute to the growing body of evidence that interactions between sea-level rise, increasing storminess and the expansion of anthropogenic modifications to the shoreline will place functional beach ecosystems under severe pressure over the forthcoming decades and we therefore encourage further, formal testing of these concepts.

  13. Cross-cultural equivalence of the patient- and parent-reported quality of life in short stature youth (QoLISSY) questionnaire.

    PubMed

    Bullinger, Monika; Quitmann, Julia; Silva, Neuza; Rohenkohl, Anja; Chaplin, John E; DeBusk, Kendra; Mimoun, Emmanuelle; Feigerlova, Eva; Herdman, Michael; Sanz, Dolores; Wollmann, Hartmut; Pleil, Andreas; Power, Michael

    2014-01-01

    Testing cross-cultural equivalence of patient-reported outcomes requires sufficiently large samples per country, which is difficult to achieve in rare endocrine paediatric conditions. We describe a novel approach to cross-cultural testing of the Quality of Life in Short Stature Youth (QoLISSY) questionnaire in five countries by sequentially taking one country out (TOCO) from the total sample and iteratively comparing the resulting psychometric performance. Development of the QoLISSY proceeded from focus group discussions through pilot testing to field testing in 268 short-statured patients and their parents. To explore cross-cultural equivalence, the iterative TOCO technique was used to examine and compare the validity, reliability, and convergence of patient and parent responses on QoLISSY in the field test dataset, and to predict QoLISSY scores from clinical, socio-demographic and psychosocial variables. Validity and reliability indicators were satisfactory for each sample after iteratively omitting one country. Comparisons with the total sample revealed cross-cultural equivalence in internal consistency and construct validity for patients and parents, high inter-rater agreement and a substantial proportion of QoLISSY variance explained by predictors. The TOCO technique is a powerful method to overcome problems of country-specific testing of patient-reported outcome instruments. It provides an empirical support to QoLISSY's cross-cultural equivalence and is recommended for future research.
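
    The take-one-country-out (TOCO) logic can be sketched as a simple loop: recompute a reliability coefficient on the pooled field-test data with each country removed in turn, then compare against the full-sample value. The data, country codes, and the choice of plain Cronbach's alpha as the statistic below are illustrative assumptions rather than the QoLISSY analysis itself.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def take_one_country_out(items, countries):
    """Reliability of the pooled sample with each country iteratively removed."""
    countries = np.asarray(countries)
    out = {"ALL": cronbach_alpha(items)}
    for c in np.unique(countries):
        out[f"without {c}"] = cronbach_alpha(items[countries != c])
    return out

# Hypothetical pooled field-test data: 250 respondents, 6 items, 5 countries.
rng = np.random.default_rng(0)
latent = rng.normal(size=(250, 1))
items = latent + rng.normal(scale=0.8, size=(250, 6))
countries = rng.choice(["DE", "ES", "FR", "SE", "UK"], size=250)
for label, a in take_one_country_out(items, countries).items():
    print(label, round(float(a), 3))
```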

  14. Statistical analysis of latent generalized correlation matrix estimation in transelliptical distribution.

    PubMed

    Han, Fang; Liu, Han

    2017-02-01

    The correlation matrix plays a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state of the art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions with possible outliers. As a robust alternative, Han and Liu (2013b) advocated the use of a transformed version of the Kendall's tau sample correlation matrix in estimating the high dimensional latent generalized correlation matrix under the transelliptical distribution family (or elliptical copula). The transelliptical family assumes that after unspecified marginal monotone transformations, the data follow an elliptical distribution. In this paper, we study the theoretical properties of the Kendall's tau sample correlation matrix and its transformed version proposed in Han and Liu (2013b) for estimating the population Kendall's tau correlation matrix and the latent Pearson's correlation matrix under both spectral and restricted spectral norms. With regard to the spectral norm, we highlight the role of "effective rank" in quantifying the rate of convergence. With regard to the restricted spectral norm, we for the first time present a "sign subgaussian condition" which is sufficient to guarantee that the rank-based correlation matrix estimator attains the optimal rate of convergence. In both cases, we do not need any moment condition.
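
    The transformed rank-based estimator discussed above replaces each pairwise Pearson correlation with sin(pi/2 * tau), where tau is Kendall's tau, which is invariant to the unknown monotone marginal transformations. The minimal sketch below is my own illustration of that construction (using a simple O(n^2) tau without tie handling), not the authors' code.

```python
import numpy as np
from itertools import combinations

def kendall_tau(x, y):
    """O(n^2) Kendall's tau-a for continuous data (no tie handling)."""
    n = len(x)
    concordant = sum(np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
                     for i, j in combinations(range(n), 2))
    return 2.0 * concordant / (n * (n - 1))

def latent_correlation_matrix(data):
    """Transformed Kendall's tau estimator: R_hat[j, k] = sin(pi/2 * tau_jk)."""
    data = np.asarray(data, dtype=float)
    d = data.shape[1]
    r = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            r[j, k] = r[k, j] = np.sin(np.pi / 2 * kendall_tau(data[:, j], data[:, k]))
    return r

# Toy check: monotone marginal transforms leave the estimate essentially unchanged.
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
x = np.column_stack([np.exp(z[:, 0]), z[:, 1] ** 3])   # transelliptical-style data
print(latent_correlation_matrix(x).round(2))            # off-diagonal ~0.6
```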

  15. DNA Yield From Tissue Samples in Surgical Pathology and Minimum Tissue Requirements for Molecular Testing.

    PubMed

    Austin, Melissa C; Smith, Christina; Pritchard, Colin C; Tait, Jonathan F

    2016-02-01

    Complex molecular assays are increasingly used to direct therapy and provide diagnostic and prognostic information but can require relatively large amounts of DNA. To provide data to pathologists to help them assess tissue adequacy and provide prospective guidance on the amount of tissue that should be procured. We used slide-based measurements to establish a relationship between processed tissue volume and DNA yield by A260 from 366 formalin-fixed, paraffin-embedded tissue samples submitted for the 3 most common molecular assays performed in our laboratory (EGFR, KRAS, and BRAF). We determined the average DNA yield per unit of tissue volume, and we used the distribution of DNA yields to calculate the minimum volume of tissue that should yield sufficient DNA 99% of the time. All samples with a volume greater than 8 mm(3) yielded at least 1 μg of DNA, and more than 80% of samples producing less than 1 μg were extracted from less than 4 mm(3) of tissue. Nine square millimeters of tissue should produce more than 1 μg of DNA 99% of the time. We conclude that 2 tissue cores, each 1 cm long and obtained with an 18-gauge needle, will almost always provide enough DNA for complex multigene assays, and our methodology may be readily extrapolated to individual institutional practice.

  16. No evidence of extraterrestrial noble metal and helium anomalies at Marinoan glacial termination

    NASA Astrophysics Data System (ADS)

    Peucker-Ehrenbrink, Bernhard; Waters, Christine A.; Kurz, Mark D.; Hoffman, Paul F.

    2016-03-01

    High concentrations of extraterrestrial iridium have been reported in terminal Sturtian and Marinoan glacial marine sediments and are used to argue for long (likely 3-12 Myr) durations of these Cryogenian glaciations. Reanalysis of the Marinoan sedimentary rocks used in the original study, supplemented by sedimentary rocks from additional terminal Marinoan sections, however, does not confirm the initial report. New platinum group element concentrations, and 187Os/188Os and 3He/4He signatures are consistent with crustal origin and minimal extraterrestrial contributions. The discrepancy is likely caused by different sample masses used in the two studies, with this study being based on much larger samples that better capture the stochastic distribution of extraterrestrial particles in marine sediments. Strong enrichment of redox-sensitive elements, particularly rhenium, up-section in the basal postglacial cap carbonates, may indicate a return to more fully oxygenated seawater in the aftermath of the Marinoan snowball earth. Sections dominated by hydrogenous osmium indicate increasing submarine hydrothermal sources and/or continental inputs that are increasingly dominated by young mantle-derived rocks after deglaciation. Sedimentation rate estimates for the basal cap carbonates yield surprisingly slow rates of a few centimeters per thousand years. This study highlights the importance of using sedimentary rock samples that represent sufficiently large area-time products to properly sample extraterrestrial particles representatively, and demonstrates the value of using multiple tracers of extraterrestrial matter.

  17. Diversity and Community Can Coexist.

    PubMed

    Stivala, Alex; Robins, Garry; Kashima, Yoshihisa; Kirley, Michael

    2016-03-01

    We examine the (in)compatibility of diversity and sense of community by means of agent-based models based on the well-known Schelling model of residential segregation and Axelrod model of cultural dissemination. We find that diversity and highly clustered social networks, on the assumptions of social tie formation based on spatial proximity and homophily, are incompatible when agent features are immutable, and this holds even for multiple independent features. We include both mutable and immutable features into a model that integrates Schelling and Axelrod models, and we find that even for multiple independent features, diversity and highly clustered social networks can be incompatible on the assumptions of social tie formation based on spatial proximity and homophily. However, this incompatibility breaks down when cultural diversity can be sufficiently large, at which point diversity and clustering need not be negatively correlated. This implies that segregation based on immutable characteristics such as race can possibly be overcome by sufficient similarity on mutable characteristics based on culture, which are subject to a process of social influence, provided a sufficiently large "scope of cultural possibilities" exists. © Society for Community Research and Action 2016.

  18. TCR-engineered, customized, antitumor T cells for cancer immunotherapy: advantages and limitations.

    PubMed

    Chhabra, Arvind

    2011-01-05

    The clinical outcome of the traditional adoptive cancer immunotherapy approaches involving the administration of donor-derived immune effectors, expanded ex vivo, has not met expectations. This could be attributed, in part, to the lack of sufficient high-avidity antitumor T-cell precursors in most cancer patients, poor immunogenicity of cancer cells, and the technological limitations to generate a sufficiently large number of tumor antigen-specific T cells. In addition, the host immune regulatory mechanisms and immune homeostasis mechanisms, such as activation-induced cell death (AICD), could further limit the clinical efficacy of the adoptively administered antitumor T cells. Since generation of a sufficiently large number of potent antitumor immune effectors for adoptive administration is critical for the clinical success of this approach, recent advances towards generating customized donor-specific antitumor-effector T cells by engrafting human peripheral blood-derived T cells with a tumor-associated antigen-specific transgenic T-cell receptor (TCR) are quite interesting. This manuscript provides a brief overview of the TCR engineering-based cancer immunotherapy approach, its advantages, and the current limitations.

  19. Statistical description of non-Gaussian samples in the F2 layer of the ionosphere during heliogeophysical disturbances

    NASA Astrophysics Data System (ADS)

    Sergeenko, N. P.

    2017-11-01

    An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is solved in this paper. Time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At sufficiently small probability levels, the distributions show arbitrarily large deviations from the normal-process model. We therefore attempt to describe the statistical samples {δfoF2} with a Poisson-based model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excessive-asymmetric probability density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to Kolmogorov's criterion, the probabilities of coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The analysis supports the applicability of a model based on the Poisson random process for the statistical description and probabilistic estimation of the variations {δfoF2} during heliogeophysical disturbances.

  20. The NKI-Rockland Sample: A Model for Accelerating the Pace of Discovery Science in Psychiatry

    PubMed Central

    Nooner, Kate Brody; Colcombe, Stanley J.; Tobe, Russell H.; Mennes, Maarten; Benedict, Melissa M.; Moreno, Alexis L.; Panek, Laura J.; Brown, Shaquanna; Zavitz, Stephen T.; Li, Qingyang; Sikka, Sharad; Gutman, David; Bangaru, Saroja; Schlachter, Rochelle Tziona; Kamiel, Stephanie M.; Anwar, Ayesha R.; Hinz, Caitlin M.; Kaplan, Michelle S.; Rachlin, Anna B.; Adelsberg, Samantha; Cheung, Brian; Khanuja, Ranjit; Yan, Chaogan; Craddock, Cameron C.; Calhoun, Vincent; Courtney, William; King, Margaret; Wood, Dylan; Cox, Christine L.; Kelly, A. M. Clare; Di Martino, Adriana; Petkova, Eva; Reiss, Philip T.; Duan, Nancy; Thomsen, Dawn; Biswal, Bharat; Coffey, Barbara; Hoptman, Matthew J.; Javitt, Daniel C.; Pomara, Nunzio; Sidtis, John J.; Koplewicz, Harold S.; Castellanos, Francisco Xavier; Leventhal, Bennett L.; Milham, Michael P.

    2012-01-01

    The National Institute of Mental Health strategic plan for advancing psychiatric neuroscience calls for an acceleration of discovery and the delineation of developmental trajectories for risk and resilience across the lifespan. To attain these objectives, sufficiently powered datasets with broad and deep phenotypic characterization, state-of-the-art neuroimaging, and genetic samples must be generated and made openly available to the scientific community. The enhanced Nathan Kline Institute-Rockland Sample (NKI-RS) is a response to this need. NKI-RS is an ongoing, institutionally centered endeavor aimed at creating a large-scale (N > 1000), deeply phenotyped, community-ascertained, lifespan sample (ages 6–85 years old) with advanced neuroimaging and genetics. These data will be shared publicly, openly, and prospectively (i.e., on a weekly basis). Herein, we describe the conceptual basis of the NKI-RS, including study design, sampling considerations, and steps to synchronize phenotypic and neuroimaging assessment. Additionally, we describe our process for sharing the data with the scientific community while protecting participant confidentiality, maintaining an adequate database, and certifying data integrity. The pilot phase of the NKI-RS, including challenges in recruiting, characterizing, imaging, and sharing data, is discussed, along with an explanation of how this experience informed the final design of the enhanced NKI-RS. It is our hope that familiarity with the conceptual underpinnings of the enhanced NKI-RS will facilitate harmonization with future data collection efforts aimed at advancing psychiatric neuroscience and nosology. PMID:23087608

  1. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of a type II error is too high. Meta-analyses do not mitigate the problem of underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined. Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
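
    To put the review's effect-size bands in perspective, the sketch below computes the approximate number of participants per arm needed to detect a given standardized mean difference in a two-arm trial, assuming 80% power and a two-sided alpha of 0.05 (values chosen here for illustration, not taken from the review).

```python
# Per-arm sample size versus detectable standardized mean difference (SMD).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for smd in (0.3, 0.5, 0.8):
    n_per_arm = analysis.solve_power(effect_size=smd, power=0.80, alpha=0.05)
    print(f"SMD {smd}: ~{round(n_per_arm)} participants per arm")
# Roughly 175, 64 and 26 per arm under these assumptions, which puts the review's
# average total sample of 153 in context: adequate only for moderate-to-large effects.
```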

  2. Drought in the Horn of Africa: attribution of a damaging and repeating extreme event

    NASA Astrophysics Data System (ADS)

    Marthews, Toby; Otto, Friederike; Mitchell, Daniel; Dadson, Simon; Jones, Richard

    2015-04-01

    We have applied detection and attribution techniques to the severe drought that hit the Horn of Africa in 2014. The short rains failed in late 2013 in Kenya, South Sudan, Somalia and southern Ethiopia, leading to a very dry growing season from January to March 2014 and subsequently to the current drought in many agricultural areas of the sub-region. We have made use of the weather@home project, which uses publicly volunteered distributed computing to provide an ensemble of simulations large enough to sample regional climate uncertainty. Based on this, we have estimated the occurrence rates of the kinds of rare and extreme events implicated in this large-scale drought. From land surface model runs based on these ensemble simulations, we have estimated the impacts of climate anomalies during this period, and we can therefore reliably identify some factors of the ongoing drought as attributable to human-induced climate change. The UNFCCC's Adaptation Fund is attempting to support projects that bring about adaptation to "the adverse effects of climate change". To formulate such projects, however, we need a much clearer way to assess how much of the change is human-induced and how much is a consequence of climate anomalies and large-scale teleconnections; only robust attribution techniques can provide this.

  3. 76 FR 62044 - Alternative Testing Requirements for Small Batch Manufacturers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-06

    ... every manufacturer of a children's product that is subject to a children's product safety rule shall submit sufficient samples of the children's product, or samples that are identical in all material... compliance with such children's product safety rule. Further, section 14(i)(2) requires continued testing of...

  4. Identification of Novel Prostate Cancer-Causative Gene Mutations by Representational Difference Analysis of Microdissected Prostate Cancer

    DTIC Science & Technology

    2001-03-01

    paired samples of microdissected benign and malignant prostate epithelium. The resulting subtraction products were cloned and screened in Southern blots... benign and malignant human prostate cancer. Data is given to show that microdissected tissue samples retain RNA of sufficient quality to perform gene

  5. Analysis of variograms with various sample sizes from a multispectral image

    USDA-ARS?s Scientific Manuscript database

    Variogram plays a crucial role in remote sensing application and geostatistics. It is very important to estimate variogram reliably from sufficient data. In this study, the analysis of variograms with various sample sizes of remotely sensed data was conducted. A 100x100-pixel subset was chosen from ...

  6. ESTIMATION OF TOTAL DISSOLVED NITRATE LOAD IN NATURAL STREAM FLOWS USING AN IN-STREAM MONITOR

    EPA Science Inventory

    Estuaries respond rapidly to rain events and the nutrients carried by inflowing rivers such that discrete samples at weekly or monthly intervals are inadequate to catch the maxima and minima in nutrient variability. To acquire data with sufficient sampling frequency to realistica...

  7. Sufficient condition for a finite-time singularity in a high-symmetry Euler flow: Analysis and statistics

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Bhattacharjee, A.

    1996-08-01

    A sufficient condition is obtained for the development of a finite-time singularity in a highly symmetric Euler flow, first proposed by Kida [J. Phys. Soc. Jpn. 54, 2132 (1985)] and recently simulated by Boratav and Pelz [Phys. Fluids 6, 2757 (1994)]. It is shown that if the second-order spatial derivative of the pressure (pxx) is positive following a Lagrangian element (on the x axis), then a finite-time singularity must occur. Under some assumptions, this Lagrangian sufficient condition can be reduced to an Eulerian sufficient condition which requires that the fourth-order spatial derivative of the pressure (pxxxx) at the origin be positive for all times leading up to the singularity. Analytical as well as direct numerical evaluation over a large ensemble of initial conditions demonstrates that, for fixed total energy, pxxxx is predominantly positive, with the average value growing with the number of modes.

  8. Testing for post-copulatory selection for major histocompatibility complex genotype in a semi-free-ranging primate population.

    PubMed

    Setchell, Joanna M; Abbott, Kristin M; Gonzalez, Jean-Paul; Knapp, Leslie A

    2013-10-01

    A large body of evidence suggests that major histocompatibility complex (MHC) genotype influences mate choice. However, few studies have investigated MHC-mediated post-copulatory mate choice under natural, or even semi-natural, conditions. We set out to explore this question in a large semi-free-ranging population of mandrills (Mandrillus sphinx) using MHC-DRB genotypes for 127 parent-offspring triads. First, we showed that offspring MHC heterozygosity correlates positively with parental MHC dissimilarity suggesting that mating among MHC dissimilar mates is efficient in increasing offspring MHC diversity. Second, we compared the haplotypes of the parental dyad with those of the offspring to test whether post-copulatory sexual selection favored offspring with two different MHC haplotypes, more diverse gamete combinations, or greater within-haplotype diversity. Limited statistical power meant that we could only detect medium or large effect sizes. Nevertheless, we found no evidence for selection for heterozygous offspring when parents share a haplotype (large effect size), genetic dissimilarity between parental haplotypes (we could detect an odds ratio of ≥1.86), or within-haplotype diversity (medium-large effect). These findings suggest that comparing parental and offspring haplotypes may be a useful approach to test for post-copulatory selection when matings cannot be observed, as is the case in many study systems. However, it will be extremely difficult to determine conclusively whether post-copulatory selection mechanisms for MHC genotype exist, particularly if the effect sizes are small, due to the difficulty in obtaining a sufficiently large sample. © 2013 Wiley Periodicals, Inc.

  9. Monitoring conservation success in a large oak woodland landscape

    Treesearch

    Rich Reiner; Emma Underwood; John-O Niles

    2002-01-01

    Monitoring is essential in understanding the success or failure of a conservation project and provides the information needed to conduct adaptive management. Although there is a large body of literature on monitoring design, it fails to provide sufficient information to practitioners on how to organize and apply monitoring when implementing landscape-scale conservation...

  10. Solving the critical thermal bowing in 3C-SiC/Si(111) by a tilting Si pillar architecture

    NASA Astrophysics Data System (ADS)

    Albani, Marco; Marzegalli, Anna; Bergamaschini, Roberto; Mauceri, Marco; Crippa, Danilo; La Via, Francesco; von Känel, Hans; Miglio, Leo

    2018-05-01

    The exceptionally large thermal strain in few-micrometers-thick 3C-SiC films on Si(111), which causes severe wafer bending and cracking, is demonstrated, both by simulations and by preliminary experiments, to be elastically quenched by patterning the substrate into finite arrays of Si micro-pillars whose aspect ratio is large enough to allow lateral pillar tilting. In suspended SiC patches, the mechanical problem is addressed by the finite element method: both the strain relaxation and the wafer curvature are calculated for different pillar heights, array sizes, and film thicknesses. Patches as large as required by power electronic devices (500-1000 μm in size) show a remarkable residual strain in the central area, unless the pillar aspect ratio is made sufficiently large to allow peripheral pillars to accommodate the full film retraction. A sublinear relationship between the pillar aspect ratio and the patch size, guaranteeing a minimal curvature radius, as required for wafer processing and micro-crack prevention, is shown to be valid for any heteroepitaxial system.

  11. Large Angle Transient Dynamics (LATDYN) user's manual

    NASA Technical Reports Server (NTRS)

    Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.

    1991-01-01

    A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.

  12. System for producing a uniform rubble bed for in situ processes

    DOEpatents

    Galloway, T.R.

    1983-07-05

    A method and a cutter are disclosed for producing a large cavity filled with a uniform bed of rubblized oil shale or other material, for in situ processing. A raise drill head has a hollow body with a generally circular base and sloping upper surface. A hollow shaft extends from the hollow body. Cutter teeth are mounted on the upper surface of the body and relatively small holes are formed in the body between the cutter teeth. Relatively large peripheral flutes around the body allow material to drop below the drill head. A pilot hole is drilled into the oil shale deposit. The pilot hole is reamed into a large diameter hole by means of a large diameter raise drill head or cutter to produce a cavity filled with rubble. A flushing fluid, such as air, is circulated through the pilot hole during the reaming operation to remove fines through the raise drill, thereby removing sufficient material to create sufficient void space, and allowing the larger particles to fill the cavity and provide a uniform bed of rubblized oil shale. 4 figs.

  13. Human blood RNA stabilization in samples collected and transported for a large biobank

    PubMed Central

    2012-01-01

    Background The Norwegian Mother and Child Cohort Study (MoBa) is a nation-wide population-based pregnancy cohort initiated in 1999, comprising more than 108,000 pregnancies recruited between 1999 and 2008. In this study we evaluated the feasibility of integrating RNA analyses into existing MoBa protocols. We compared two different blood RNA collection tube systems - the PAXgene™ Blood RNA system and the Tempus™ Blood RNA system - and assessed the effects of suboptimal blood volumes in collection tubes and of transportation of blood samples by standard mail. Endpoints to characterize the samples were RNA quality and yield, and the RNA transcript stability of selected genes. Findings High-quality RNA could be extracted from blood samples stabilized with both PAXgene and Tempus tubes. The RNA yields obtained from blood samples collected in Tempus tubes were consistently higher than from PAXgene tubes. Higher RNA yields were obtained from cord blood (3-4 times higher) than from adult blood with both types of tubes. Transportation of samples by standard mail had moderate effects on RNA quality and RNA transcript stability; the overall RNA quality of the transported samples was high. Some unexplained changes in gene expression were noted, which seemed to correlate with suboptimal blood volumes collected in the tubes. Temperature variations during transportation may also be of some importance. Conclusions Our results strongly suggest that special collection tubes are necessary for RNA stabilization and should be used when establishing new biobanks. We also show that the 50,000 samples collected in the MoBa biobank provide RNA of high quality and in sufficient amounts to allow gene expression analyses for studying the association of disease with altered patterns of gene expression. PMID:22988904

  14. Youth Actuarial Risk Assessment Tool (Y-ARAT): The development of an actuarial risk assessment instrument for predicting general offense recidivism on the basis of police records.

    PubMed

    van der Put, Claudia E

    2014-06-01

    Estimating the risk of recidivism is important for many areas of the criminal justice system. In the present study, the Youth Actuarial Risk Assessment Tool (Y-ARAT) was developed for juvenile offenders based solely on police records, with the aim of allowing police officers without clinical expertise to estimate the risk of general recidivism among large groups of juvenile offenders. On the basis of the Y-ARAT, juvenile offenders are classified into five risk groups based on (combinations of) 10 variables, including different types of incidents in which the juvenile was a suspect, the total number of incidents in which the juvenile was a suspect, the total number of other incidents, the total number of incidents in which co-occupants at the youth's address were suspects, gender, and age at first incident. The Y-ARAT was developed on a sample of 2,501 juvenile offenders and validated on another sample of 2,499 juvenile offenders, showing moderate predictive accuracy (area under the receiver-operating-characteristic curve = .73), with little variation between the construction and validation samples. The predictive accuracy of the Y-ARAT was considered sufficient to justify its use as a screening instrument by the police. © The Author(s) 2013.
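
    The sketch below mimics, on synthetic data only, the construction-and-validation pattern described above: a simple risk score is fitted on one sample of count-type predictors and its discrimination is then checked on a second sample via the area under the ROC curve. It is not the Y-ARAT scoring scheme, and every variable and parameter here is illustrative.

```python
# Construction/validation sketch with synthetic police-record-style counts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2500  # comparable to the study's construction and validation sample sizes

def simulate(n):
    X = rng.poisson(1.0, size=(n, 10))                   # counts of incident types
    p = 1 / (1 + np.exp(-(X.sum(axis=1) - 10) / 3))      # assumed "true" recidivism risk
    return X, rng.binomial(1, p)

X_constr, y_constr = simulate(n)
X_valid, y_valid = simulate(n)

score = LogisticRegression(max_iter=1000).fit(X_constr, y_constr)
auc = roc_auc_score(y_valid, score.predict_proba(X_valid)[:, 1])
print(f"Validation AUC = {auc:.2f}")  # the Y-ARAT itself reports about .73
```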

  15. Visualization of expanding warm dense gold and diamond heated rapidly by laser-generated ion beams

    DOE PAGES

    Bang, W.; Albright, B. J.; Bradley, P. A.; ...

    2015-09-22

    With the development of several novel heating sources, scientists can now heat a small sample isochorically above 10,000 K. Although matter at such an extreme state, known as warm dense matter, is commonly found in astrophysics (e.g., in planetary cores) as well as in high energy density physics experiments, its properties are not well understood and are difficult to predict theoretically. This is because the approximations made to describe condensed matter or high-temperature plasmas are invalid in this intermediate regime. A sufficiently large warm dense matter sample that is uniformly heated would be ideal for these studies, but has been unavailable to date. We have used a beam of quasi-monoenergetic aluminum ions to heat gold and diamond foils uniformly and isochorically. For the first time, we visualized directly the expanding warm dense gold and diamond with an optical streak camera. Furthermore, we present a new technique to determine the initial temperature of these heated samples from the measured expansion speeds of gold and diamond into vacuum. We anticipate the uniformly heated solid density target will allow for direct quantitative measurements of equation-of-state, conductivity, opacity, and stopping power of warm dense matter, benefiting plasma physics, astrophysics, and nuclear physics.

  16. Visualization of expanding warm dense gold and diamond heated rapidly by laser-generated ion beams.

    PubMed

    Bang, W; Albright, B J; Bradley, P A; Gautier, D C; Palaniyappan, S; Vold, E L; Santiago Cordoba, M A; Hamilton, C E; Fernández, J C

    2015-09-22

    With the development of several novel heating sources, scientists can now heat a small sample isochorically above 10,000 K. Although matter at such an extreme state, known as warm dense matter, is commonly found in astrophysics (e.g., in planetary cores) as well as in high energy density physics experiments, its properties are not well understood and are difficult to predict theoretically. This is because the approximations made to describe condensed matter or high-temperature plasmas are invalid in this intermediate regime. A sufficiently large warm dense matter sample that is uniformly heated would be ideal for these studies, but has been unavailable to date. Here we have used a beam of quasi-monoenergetic aluminum ions to heat gold and diamond foils uniformly and isochorically. For the first time, we visualized directly the expanding warm dense gold and diamond with an optical streak camera. Furthermore, we present a new technique to determine the initial temperature of these heated samples from the measured expansion speeds of gold and diamond into vacuum. We anticipate the uniformly heated solid density target will allow for direct quantitative measurements of equation-of-state, conductivity, opacity, and stopping power of warm dense matter, benefiting plasma physics, astrophysics, and nuclear physics.

  17. SHARDS: Survey for High-z Absorption Red & Dead Sources

    NASA Astrophysics Data System (ADS)

    Pérez-González, P. G.; Cava, A.

    2013-05-01

    SHARDS, an ESO/GTC Large Program, is an ultra-deep (26.5 mag) spectro-photometric survey with GTC/OSIRIS designed to select and study massive passively evolving galaxies at z=1.0-2.3 in the GOODS-N field using a set of 24 medium-band filters (FWHM~17 nm) covering the 500-950 nm spectral range. Our observing strategy has been planned to detect, for z>1 sources, the prominent Mg absorption feature (at rest-frame ~280 nm), a distinctive, necessary, and sufficient feature of evolved stellar populations (older than 0.5 Gyr). These observations are being used to: (1) derive for the first time an unbiased sample of high-z quiescent galaxies, which extends to fainter magnitudes the samples selected with color techniques and spectroscopic surveys; (2) derive accurate ages and stellar masses based on robust measurements of spectral features such as the Mg_UV or D(4000) indices; (3) measure their redshift with an accuracy Δz/(1+z)<0.02; and (4) study emission-line galaxies (starbursts and AGN) up to very high redshifts. The well-sampled optical SEDs provided by SHARDS for all sources in the GOODS-N field are a valuable complement for current and future surveys carried out with other telescopes (e.g., Spitzer, HST, and Herschel).

  18. Energy research with neutrons (ErwiN) and installation of a fast neutron powder diffraction option at the MLZ, Germany

    PubMed Central

    Mühlbauer, Martin J.

    2018-01-01

    The need for rapid data collection and studies of small sample volumes in the range of cubic millimetres are the main driving forces for the concept of a new high-throughput monochromatic diffraction instrument at the Heinz Maier-Leibnitz Zentrum (MLZ), Germany. A large region of reciprocal space will be accessed by a detector with sufficient dynamic range and microsecond time resolution, while allowing for a variety of complementary sample environments. The medium-resolution neutron powder diffraction option for ‘energy research with neutrons’ (ErwiN) at the high-flux FRM II neutron source at the MLZ is foreseen to meet future demand. ErwiN will address studies of energy-related systems and materials with respect to their structure and uniformity by means of bulk and spatially resolved neutron powder diffraction. A set of experimental options will be implemented, enabling time-resolved studies, rapid parametric measurements as a function of external parameters and studies of small samples using an adapted radial collimator. The proposed powder diffraction option ErwiN will bridge the gap in functionality between the high-resolution powder diffractometer SPODI and the time-of-flight diffractometers POWTEX and SAPHiR at the MLZ. PMID:29896055

  19. Perturbation of nuclear spin polarizations in solid state NMR of nitroxide-doped samples by magic-angle spinning without microwaves.

    PubMed

    Thurber, Kent R; Tycko, Robert

    2014-05-14

    We report solid state (13)C and (1)H nuclear magnetic resonance (NMR) experiments with magic-angle spinning (MAS) on frozen solutions containing nitroxide-based paramagnetic dopants that indicate significant perturbations of nuclear spin polarizations without microwave irradiation. At temperatures near 25 K, (1)H and cross-polarized (13)C NMR signals from (15)N,(13)C-labeled L-alanine in trinitroxide-doped glycerol/water are reduced by factors as large as six compared to signals from samples without nitroxide doping. Without MAS or at temperatures near 100 K, differences between signals with and without nitroxide doping are much smaller. We attribute most of the reduction of NMR signals under MAS near 25 K to nuclear spin depolarization through the cross-effect dynamic nuclear polarization mechanism, in which three-spin flips drive nuclear polarizations toward equilibrium with spin polarization differences between electron pairs. When T1e is sufficiently long relative to the MAS rotation period, the distribution of electron spin polarization across the nitroxide electron paramagnetic resonance lineshape can be very different from the corresponding distribution in a static sample at thermal equilibrium, leading to the observed effects. We describe three-spin and 3000-spin calculations that qualitatively reproduce the experimental observations.

  20. Visualization of expanding warm dense gold and diamond heated rapidly by laser-generated ion beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bang, W.; Albright, B. J.; Bradley, P. A.

    With the development of several novel heating sources, scientists can now heat a small sample isochorically above 10,000 K. Although matter at such an extreme state, known as warm dense matter, is commonly found in astrophysics (e.g., in planetary cores) as well as in high energy density physics experiments, its properties are not well understood and are difficult to predict theoretically. This is because the approximations made to describe condensed matter or high-temperature plasmas are invalid in this intermediate regime. A sufficiently large warm dense matter sample that is uniformly heated would be ideal for these studies, but has been unavailable to date. We have used a beam of quasi-monoenergetic aluminum ions to heat gold and diamond foils uniformly and isochorically. For the first time, we visualized directly the expanding warm dense gold and diamond with an optical streak camera. Furthermore, we present a new technique to determine the initial temperature of these heated samples from the measured expansion speeds of gold and diamond into vacuum. We anticipate the uniformly heated solid density target will allow for direct quantitative measurements of equation-of-state, conductivity, opacity, and stopping power of warm dense matter, benefiting plasma physics, astrophysics, and nuclear physics.

  1. Visualization of expanding warm dense gold and diamond heated rapidly by laser-generated ion beams

    NASA Astrophysics Data System (ADS)

    Bang, W.; Albright, B. J.; Bradley, P. A.; Gautier, D. C.; Palaniyappan, S.; Vold, E. L.; Cordoba, M. A. Santiago; Hamilton, C. E.; Fernández, J. C.

    2015-09-01

    With the development of several novel heating sources, scientists can now heat a small sample isochorically above 10,000 K. Although matter at such an extreme state, known as warm dense matter, is commonly found in astrophysics (e.g., in planetary cores) as well as in high energy density physics experiments, its properties are not well understood and are difficult to predict theoretically. This is because the approximations made to describe condensed matter or high-temperature plasmas are invalid in this intermediate regime. A sufficiently large warm dense matter sample that is uniformly heated would be ideal for these studies, but has been unavailable to date. Here we have used a beam of quasi-monoenergetic aluminum ions to heat gold and diamond foils uniformly and isochorically. For the first time, we visualized directly the expanding warm dense gold and diamond with an optical streak camera. Furthermore, we present a new technique to determine the initial temperature of these heated samples from the measured expansion speeds of gold and diamond into vacuum. We anticipate the uniformly heated solid density target will allow for direct quantitative measurements of equation-of-state, conductivity, opacity, and stopping power of warm dense matter, benefiting plasma physics, astrophysics, and nuclear physics.

  2. The effect of sampling rate on interpretation of the temporal characteristics of radiative and convective heating in wildland flames

    Treesearch

    David Frankman; Brent W. Webb; Bret W. Butler; Daniel Jimenez; Michael Harrington

    2012-01-01

    Time-resolved radiative and convective heating measurements were collected on a prescribed burn in coniferous fuels at a sampling frequency of 500 Hz. Evaluation of the data in the time and frequency domains indicates that this sampling rate was sufficient to capture the temporal fluctuations of radiative and convective heating. The convective heating signal contained...
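
    One way to make the frequency-domain check concrete is sketched below with synthetic data (not the field measurements): estimate the power spectrum of the sampled signal and compute what fraction of the fluctuation energy lies well below the Nyquist frequency; the 10% cutoff is an arbitrary assumption.

```python
# Does a 500 Hz sampling rate comfortably resolve the fluctuations of a signal?
import numpy as np

fs = 500.0                                  # sampling frequency, Hz (as in the study)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic stand-in for a heat-flux record: slow oscillation plus broadband noise.
signal = 5 + 2 * np.sin(2 * np.pi * 3 * t) + 0.5 * rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
frac_below = power[freqs <= 0.1 * fs / 2].sum() / power.sum()
print(f"Fraction of fluctuation energy below 10% of Nyquist: {frac_below:.3f}")
# A value close to 1 suggests the sampling rate comfortably captures the fluctuations.
```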

  3. Review of Sample Size for Structural Equation Models in Second Language Testing and Learning Research: A Monte Carlo Approach

    ERIC Educational Resources Information Center

    In'nami, Yo; Koizumi, Rie

    2013-01-01

    The importance of sample size, although widely discussed in the literature on structural equation modeling (SEM), has not been widely recognized among applied SEM researchers. To narrow this gap, we focus on second language testing and learning studies and examine the following: (a) Is the sample size sufficient in terms of precision and power of…

  4. An Efficient Referencing And Sample Positioning System To Investigate Heterogeneous Substances With Combined Microfocused Synchrotron X-ray Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spangenberg, Thomas; Goettlicher, Joerg; Steininger, Ralph

    2009-01-29

    A referencing and sample positioning system has been developed to transfer object positions measured with an offline microscope to a synchrotron experimental station. The accuracy should be sufficient to deal with heterogeneous samples on the micrometer scale. Together with an online fluorescence mapping visualisation, the optical alignment helps to optimize measuring procedures for combined microfocused X-ray techniques.

  5. Is the extraction by Whatman FTA filter matrix technology and sequencing of large ribosomal subunit D1-D2 region sufficient for identification of clinical fungi?

    PubMed

    Kiraz, Nuri; Oz, Yasemin; Aslan, Huseyin; Erturan, Zayre; Ener, Beyza; Akdagli, Sevtap Arikan; Muslumanoglu, Hamza; Cetinkaya, Zafer

    2015-10-01

    Although conventional identification of pathogenic fungi is based on a combination of tests evaluating their morphological and biochemical characteristics, these tests can fail to identify less common species or to differentiate closely related species. In addition, they are time consuming, labour-intensive and require experienced personnel. We evaluated the feasibility and sufficiency of DNA extraction by Whatman FTA filter matrix technology and DNA sequencing of the D1-D2 region of the large ribosomal subunit gene for identification of clinical isolates of 21 yeasts and 160 moulds in our clinical mycology laboratory. While the yeast isolates were identified at species level with 100% homology, 102 (63.75%) of the clinically important mould isolates were identified at species level and 56 (35%) isolates at genus level against fungal sequences in DNA databases, and two (1.25%) isolates could not be identified. Consequently, Whatman FTA filter matrix technology was a useful method for extraction of fungal DNA: extremely rapid, practical and successful. The sequence analysis strategy for the D1-D2 region of the large ribosomal subunit gene was found to be largely sufficient for identification of most clinical fungi to genus level. However, identification to species level, and especially discrimination of closely related species, may require additional analysis. © 2015 Blackwell Verlag GmbH.

  6. Proton velocity ring-driven instabilities and their dependence on the ring speed: Linear theory

    NASA Astrophysics Data System (ADS)

    Min, Kyungguk; Liu, Kaijun; Gary, S. Peter

    2017-08-01

    Linear dispersion theory is used to study the Alfvén-cyclotron, mirror and ion Bernstein instabilities driven by a tenuous (1%) warm proton ring velocity distribution with a ring speed, vr, varying between 2vA and 10vA, where vA is the Alfvén speed. Relatively cool background protons and electrons are assumed. The modeled ring velocity distributions are unstable to both the Alfvén-cyclotron and ion Bernstein instabilities whose maximum growth rates are roughly a linear function of the ring speed. The mirror mode, which has real frequency ωr=0, becomes the fastest growing mode for sufficiently large vr/vA. The mirror and Bernstein instabilities have maximum growth at propagation oblique to the background magnetic field and become more field-aligned with an increasing ring speed. Considering its largest growth rate, the mirror mode, in addition to the Alfvén-cyclotron mode, can cause pitch angle diffusion of the ring protons when the ring speed becomes sufficiently large. Moreover, because the parallel phase speed, v∥ph, becomes sufficiently small relative to vr, the low-frequency Bernstein waves can also aid the pitch angle scattering of the ring protons for large vr. Potential implications of including these two instabilities at oblique propagation on heliospheric pickup ion dynamics are discussed.

  7. Conservation genetics of maned wolves in a highly impacted area of the Brazilian Cerrado biome.

    PubMed

    Lion, Marília Bruzzi; Eizirik, Eduardo; Garda, Adrian Antonio; Fontoura-Rodrigues, Manoel Ludwig da; Rodrigues, Flávio Henrique Guimarães; Marinho-Filho, Jader Soares

    2011-03-01

    Maned wolves are large canids currently considered vulnerable to extinction due to habitat loss. They are still commonly found within the urban mesh inside the Brazilian Federal District (Distrito Federal--DF), in nearby Protected Areas (PAs), and in surrounding farms. We evaluated the genetic diversity of maned wolves in three PAs of the DF, using both invasive and noninvasive techniques to obtain DNA that was later amplified for five microsatellite markers. We sampled 23 wolves: 10 with the noninvasive method, three captured in traps, six road-killed, and four rescued in urban areas. In Águas Emendadas Ecological Station (ESECAE) we also used samples from six specimens captured between 1997 and 1998 for a temporal comparison. For maned wolves, non-invasive techniques are affordable and easier to conduct in the field, while laboratory costs are much lower for invasive samples. Hence, a sampling strategy combining both techniques may provide an interesting approach for molecular ecology studies requiring comprehensive coverage of local individuals. On the basis of such an integrated sampling scheme, our analyses indicated that none of the investigated populations currently present deviations from Hardy-Weinberg expectations or indications of inbreeding. Furthermore, in ESECAE there was no reduction in genetic diversity over the last 9 years. Overall, maned wolves did not present evidence of genetic structuring among the three sampled PAs. These results thus indicate that individual exchange among PAs is still occurring at sufficient rates to avoid differentiation, and/or that the recent fragmentation in the region has not yet produced measurable effects on the genetic diversity of maned wolves.

  8. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    PubMed Central

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969
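
    The toy simulation below illustrates the first study's approach under assumed parameters (two groups of 25 participants, ten binary codes, arbitrary endorsement probabilities): binary code profiles are clustered with hierarchical and K-means methods, and the recovered clusters are scored against the known groups. The latent class analysis also examined in the paper is omitted for brevity.

```python
# Cluster recovery with binary "code" profiles and a total sample of only 50.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
p_group = np.array([[0.8] * 5 + [0.2] * 5,     # group 1 tends to endorse the first 5 codes
                    [0.2] * 5 + [0.8] * 5])    # group 2 tends to endorse the last 5
truth = np.repeat([0, 1], 25)
codes = rng.binomial(1, p_group[truth])        # 50 x 10 binary code matrix

hier = fcluster(linkage(codes, method="ward"), t=2, criterion="maxclust")
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(codes)

print("hierarchical ARI:", round(adjusted_rand_score(truth, hier), 2))
print("k-means ARI:     ", round(adjusted_rand_score(truth, km), 2))
```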

  9. Proposed BioRepository platform solution for the ALS research community.

    PubMed

    Sherman, Alex; Bowser, Robert; Grasso, Daniela; Power, Breen; Milligan, Carol; Jaffa, Matthew; Cudkowicz, Merit

    2011-01-01

    ALS is a rare disorder whose cause and pathogenesis are largely unknown (1). There is a recognized need to develop biomarkers for ALS to better understand the disease, expedite diagnosis, and facilitate therapy development. Collaboration is essential to obtain a sufficient number of samples to allow statistically meaningful studies. The availability of high-quality biological specimens for research purposes requires the development of standardized methods for collection, long-term storage, retrieval and distribution of specimens. The value of biological samples to scientists and clinicians correlates with the completeness and relevance of the phenotypical and clinical information associated with the samples (2, 3). While developing a secure Web-based system to manage an inventory of multi-site BioRepositories, algorithms were implemented to facilitate ad hoc parametric searches across heterogeneous data sources that contain data from clinical trials and research studies. A flexible schema for a barcode label was introduced to allow association of samples with these data. The ALSBank™ BioRepository platform solution for managing biological samples and associated data is currently deployed by the Northeast ALS Consortium (NEALS). The NEALS Consortium and the Massachusetts General Hospital (MGH) Neurology Clinical Trials Unit (NCTU) support a network of multiple BioBanks, thus allowing researchers to take advantage of a larger specimen collection than they might have at an individual institution. Standard operating procedures are utilized at all collection sites to promote common practices for biological sample integrity, quality control and associated clinical data. Utilizing this platform, we have created one of the largest virtual collections of ALS-related specimens available to investigators studying ALS.

  10. Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.

    PubMed

    Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E

    2014-02-28

    The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. By accounting for any complex correlation structure among the high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate approach to repeated measures' test statistic, as well as power-equivalent scenarios useful for generalizing our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of the number of variables to the sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of the metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one-group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.

  11. A robust, simple, high-throughput technique for time-resolved plant volatile analysis in field experiments

    PubMed Central

    Kallenbach, Mario; Oh, Youngjoo; Eilers, Elisabeth J.; Veit, Daniel; Baldwin, Ian T.; Schuman, Meredith C.

    2014-01-01

    Summary Plant volatiles (PVs) mediate interactions between plants and arthropods, microbes, and other plants, and are involved in responses to abiotic stress. PV emissions are therefore influenced by many environmental factors, including herbivore damage, microbial invasion, and cues from neighboring plants, but also light regime, temperature, humidity, and nutrient availability. Thus an understanding of the physiological and ecological functions of PVs must be grounded in measurements reflecting PV emissions under natural conditions. However, PVs are usually sampled in the artificial environments of laboratories or climate chambers. Sampling of PVs in natural environments is difficult, limited by the need to transport, maintain, and power instruments, or use expensive sorbent devices in replicate. Ideally, PVs should be measured in natural settings with high replication, spatiotemporal resolution, and sensitivity, and at modest costs. Polydimethylsiloxane (PDMS), a sorbent commonly used for PV sampling, is available as silicone tubing (ST) for as little as 0.60 €/m (versus 100-550 € apiece for standard PDMS sorbent devices). Small (mm-cm) ST pieces (STs) can be placed in any environment and used for headspace sampling with little manipulation of the organism or headspace. STs have sufficiently fast absorption kinetics and a sufficiently large capacity to sample plant headspaces on a timescale of minutes to hours, and thus can produce biologically meaningful "snapshots" of PV blends. When combined with thermal desorption (TD)-GC-MS analysis – a 40-year-old and widely available technology – STs yield reproducible, sensitive, spatiotemporally resolved, quantitative data from headspace samples taken in natural environments. PMID:24684685

  12. Cross-cultural perspectives on physician and lay models of the common cold.

    PubMed

    Baer, Roberta D; Weller, Susan C; de Alba García, Javier García; Rocha, Ana L Salcedo

    2008-06-01

    We compare physicians and laypeople within and across cultures, focusing on similarities and differences across samples, to determine whether cultural differences or lay-professional differences have a greater effect on explanatory models of the common cold. Data on explanatory models for the common cold were collected from physicians and laypeople in South Texas and Guadalajara, Mexico. Structured interview materials were developed on the basis of open-ended interviews with samples of lay informants at each locale. A structured questionnaire was used to collect information from each sample on causes, symptoms, and treatments for the common cold. Consensus analysis was used to estimate the cultural beliefs for each sample. Instead of systematic differences between samples based on nationality or level of professional training, all four samples largely shared a single-explanatory model of the common cold, with some differences on subthemes, such as the role of hot and cold forces in the etiology of the common cold. An evaluation of our findings indicates that, although there has been conjecture about whether cultural or lay-professional differences are of greater importance in understanding variation in explanatory models of disease and illness, systematic data collected on community and professional beliefs indicate that such differences may be a function of the specific illness. Further generalizations about lay-professional differences need to be based on detailed data for a variety of illnesses, to discern patterns that may be present. Finally, a systematic approach indicates that agreement across individual explanatory models is sufficient to allow for a community-level explanatory model of the common cold.

  13. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    PubMed

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.

  14. Detection of Lipitor counterfeits: a comparison of NIR and Raman spectroscopy in combination with chemometrics.

    PubMed

    de Peinder, P; Vredenbregt, M J; Visser, T; de Kaste, D

    2008-08-05

    Research has been carried out on the feasibility of near-infrared (NIR) and Raman spectroscopy as rapid screening methods to discriminate between genuine and counterfeit versions of the cholesterol-lowering medicine Lipitor. Classification based on partial least squares discriminant analysis (PLS-DA) models appears to be successful for both spectroscopic techniques, irrespective of whether atorvastatin or lovastatin has been used as the active pharmaceutical ingredient (API). The discriminative power of the NIR model, in particular, largely relies on spectral differences in the tablet matrix. This is due to the relatively large sample volume that is probed with NIR and the strong spectroscopic activity of the excipients. PLS-DA models based on NIR or Raman spectra can also be applied to distinguish between atorvastatin and lovastatin as the API used in the counterfeits tested in this study. A disadvantage of Raman microscopy for this type of analysis is that it is primarily a surface technique; as a consequence, spectra of the coating and the tablet core may differ, and spectra may change with the position of the laser if the sample is inhomogeneous. However, the robustness of the PLS-DA models turned out to be sufficiently large to allow reliable discrimination. Principal component analysis (PCA) of the spectra revealed that the conditions under which tablets have been stored affect the NIR data. This effect is attributed to the adsorption of water from the atmosphere after unpacking from the blister, and it implies that storage conditions should be taken into account when the NIR technique is used for discrimination. In this study, however, models based on both NIR and Raman data enabled reliable discrimination between genuine and counterfeit Lipitor tablets, regardless of their storage conditions.
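
    PLS-DA is commonly implemented as a PLS regression on a dummy-coded class label followed by thresholding of the prediction. The sketch below shows that pattern on synthetic "spectra"; the NIR and Raman data and the preprocessing used in the study are not reproduced here.

```python
# PLS-DA sketch: PLS regression on a 0/1 class label, thresholded at 0.5.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_wavelengths = 120, 300
y = rng.integers(0, 2, n)                                # 0 = genuine, 1 = counterfeit
base = np.sin(np.linspace(0, 6, n_wavelengths))          # common "excipient" background
spectra = base + 0.05 * rng.standard_normal((n, n_wavelengths))
spectra[y == 1] += 0.03 * np.cos(np.linspace(0, 12, n_wavelengths))  # matrix difference

X_tr, X_te, y_tr, y_te = train_test_split(spectra, y, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
pred = (pls.predict(X_te).ravel() > 0.5).astype(int)
print("classification accuracy:", (pred == y_te).mean())
```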

  15. Deep-sea benthic megafaunal habitat suitability modelling: A global-scale maximum entropy model for xenophyophores

    NASA Astrophysics Data System (ADS)

    Ashford, Oliver S.; Davies, Andrew J.; Jones, Daniel O. B.

    2014-12-01

    Xenophyophores are a group of exclusively deep-sea agglutinating rhizarian protozoans, at least some of which are foraminifera. They are an important constituent of the deep-sea megafauna that are sometimes found in sufficient abundance to act as a significant source of habitat structure for meiofaunal and macrofaunal organisms. This study utilised maximum entropy modelling (Maxent) and a high-resolution environmental database to explore the environmental factors controlling the presence of Xenophyophorea and two frequently sampled xenophyophore species that are taxonomically stable: Syringammina fragilissima and Stannophyllum zonarium. These factors were also used to predict the global distribution of each taxon. Areas of high habitat suitability for xenophyophores were highlighted throughout the world's oceans, including in a large number of areas yet to be suitably sampled, but the Northeast and Southeast Atlantic Ocean, Gulf of Mexico and Caribbean Sea, the Red Sea and deep-water regions of the Malay Archipelago represented particular hotspots. The two species investigated showed more specific habitat requirements when compared to the model encompassing all xenophyophore records, perhaps in part due to the smaller number and relatively more clustered nature of the presence records available for modelling at present. The environmental variables depth, oxygen parameters, nitrate concentration, carbon-chemistry parameters and temperature were of greatest importance in determining xenophyophore distributions, but, somewhat surprisingly, hydrodynamic parameters were consistently shown to have low importance, possibly due to the paucity of well-resolved global hydrodynamic datasets. The results of this study (and others of a similar type) have the potential to guide further sample collection, environmental policy, and spatial planning of marine protected areas and industrial activities that impact the seafloor, particularly those that overlap with aggregations of these conspicuously large single-celled eukaryotes.

  16. Seismic sample areas defined from incomplete catalogues: an application to the Italian territory

    NASA Astrophysics Data System (ADS)

    Mulargia, F.; Tinti, S.

    1985-11-01

    The comprehensive understanding of earthquake source physics under real conditions requires the study not of single faults as separate entities but rather of a seismically active region as a whole, accounting for the interaction among different structures. We define as a "seismic sample area" the region most convenient to use as a natural laboratory for the study of seismic source physics. This coincides with the region where the average large-magnitude seismicity is the highest. To this end, the future distributions of large earthquakes in time and space must be estimated. When catalog seismicity is used as an input, the rate of occurrence is not constant but appears generally biased by incompleteness in some parts of the catalog and by possible nonstationarities in seismic activity. We present a statistical procedure which is capable, under a few mild assumptions, of both detecting nonstationarities in seismicity and finding the incomplete parts of a seismic catalog. The procedure is based on Kolmogorov-Smirnov nonparametric statistics and can be applied without assuming a priori the parent distribution of the events. The efficiency of this procedure allows the analysis of small data sets. An application to the Italian territory is presented, using the most recent version of the ENEL seismic catalog. Seismic activity takes place in six well-defined areas, but only five of them have a number of events sufficient for analysis. Barring a few exceptions, seismicity is found to be stationary throughout the whole catalog span (1000-1980). The eastern Alps region stands out as the best "sample area", with the highest average probability of event occurrence per unit time and area. The final objective of this characterization is to stimulate a program of intensified research.

  17. Immunochip Analyses of Epistasis in Rheumatoid Arthritis Confirm Multiple Interactions within MHC and Suggest Novel Non-MHC Epistatic Signals.

    PubMed

    Wei, Wen-Hua; Loh, Chia-Yin; Worthington, Jane; Eyre, Stephen

    2016-05-01

    Studying statistical gene-gene interactions (epistasis) has been limited by the statistical and computational difficulty of performing such analyses in sample numbers large enough to give sufficient power. Three large Immunochip datasets from cohort samples of European ancestry recruited in the United Kingdom, United States, and Sweden were used to examine epistasis in rheumatoid arthritis (RA). A full pairwise search was conducted in the UK cohort using a high-throughput tool, and the resulting significant epistatic signals were tested for replication in the United States and Swedish cohorts. A forward selection approach was applied to remove redundant signals, while conditioning on the preidentified additive effects. We detected abundant genome-wide significant (p < 1.0e-13) epistatic signals, all within the MHC region. These signals were reduced substantially, but a proportion remained significant (p < 1.0e-03) in conditional tests. We identified 11 independent epistatic interactions across the entire MHC, each explaining on average 0.12% of the phenotypic variance, nearly all replicated in both replication cohorts. We also identified non-MHC epistatic interactions between the RA susceptibility loci LOC100506023 and IRF5 with Immunochip-wide significance (p < 1.1e-08), and between 2 neighboring single-nucleotide polymorphisms near PTPN22 that were in low linkage disequilibrium, with an independent interaction (p < 1.0e-05). Both non-MHC epistatic interactions were statistically replicated with a similar interaction pattern in the US cohort only. There are multiple but relatively weak interactions independent of the additive effects in RA, and a larger sample number is required to confidently assign additional non-MHC epistasis.

  18. Distribution pattern of benthic invertebrates in Danish estuaries: The use of Taylor's power law as a species-specific indicator of dispersion and behavior

    NASA Astrophysics Data System (ADS)

    Kristensen, Erik; Delefosse, Matthieu; Quintana, Cintia O.; Banta, Gary T.; Petersen, Hans Christian; Jørgensen, Bent

    2013-03-01

    The lack of a common statistical approach describing the distribution and dispersion pattern of marine benthic animals has often hampered comparability among studies. The purpose of this study is therefore to apply an alternative approach, Taylor's power law, to data on the spatial and temporal distribution of 9 dominant benthic invertebrate species from two study areas, the estuaries Odense Fjord and Roskilde Fjord, Denmark. The slope (b) obtained from the power relationship of sample variance (s²) versus mean (μ) appears to be species-specific and independent of location and time. It ranges from a low of ~ 1 for large-bodied (> 1 mg AFDW) species (e.g. Marenzelleria viridis, Nereis diversicolor) to a high of 1.6-1.9 for small-bodied (< 1 mg AFDW) species (e.g. Pygospio elegans and Tubificoides benedii). Accordingly, b is apparently a valuable species-specific dispersion index based on biological factors such as behavior and intraspecific interactions. Thus, at the examined spatial scale, the more intense intraspecific interactions (e.g. territoriality) cause less aggregated distribution patterns among large- than small-bodied invertebrates. The species-specific interactions seem sufficiently strong to override environmental influences (e.g. water depth and sediment type). The strong linear relationship between the slope b and intercept log(a) from the power relationship is remarkably similar for all surveys, providing a common slope of -1.63 with the present sampling approach. We suggest that this relationship is an inherent characteristic of Taylor's power law, and that b as a dispersion index may be biased by, e.g., sampling errors when this relationship is weak. The correlation strength between b and log(a) could therefore be envisioned as a data quality check.
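
    A minimal sketch of how the slope b and intercept log(a) of Taylor's power law (s² = a·μ^b) can be estimated from per-station means and variances by log-log regression (synthetic data and names are illustrative, not the authors' analysis):

```python
# Ordinary least squares on log10-transformed means and variances; the slope is
# Taylor's b (near 1 for random dispersion, approaching 2 for strong aggregation).
import numpy as np

def taylor_power_law(means, variances):
    """Return (log10_a, b) from log10(s^2) = log10(a) + b * log10(mu)."""
    x, y = np.log10(means), np.log10(variances)
    b, log10_a = np.polyfit(x, y, 1)      # np.polyfit returns slope first
    return log10_a, b

rng = np.random.default_rng(2)
mu = np.logspace(0, 2, 20)                         # per-station mean abundances
s2 = 0.5 * mu**1.7 * rng.lognormal(0, 0.1, 20)     # variances with scatter
log10_a, b = taylor_power_law(mu, s2)
print(f"log10(a) = {log10_a:.2f}, b = {b:.2f}")    # should recover b ~ 1.7
```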

  19. Variations in turbidity in streams of the Bull Run Watershed, Oregon 1989-90

    USGS Publications Warehouse

    LaHusen, Richard G.

    1994-01-01

    In this study, turbidity is used to help explain spatial and temporal patterns of erosion and sediment transport. Automated turbidity sampling in streams in the Bull Run watershed during water years 1989 and 1990 showed that turbidity levels, in general, are remarkably low, with levels below 1 NTU (nephelometric turbidity unit) about 90 percent of the time. However, ephemeral increases in turbidity in streams of the Bull Run watershed occur in direct response to storms. Turbidity is caused by abundant organic particles as well as by materials eroded from unconsolidated geologic materials located along roads, stream channels, or stream banks. Seasonal and within-storm decreases in turbidity are attributed to depletion of accumulated particle supplies. During winter storms, erosion caused by rainfall intensities greater than 0.25 inches in 3 hours is sufficient to increase stream turbidities from less than 1 NTU to as much as 100 NTUs. Large-scale storms or floods cause persistent effects because mass erosion or scour of channel armor increases the available sediment supply. Spatial variability in turbidity is evident only during storms, when erosion and sediment-transport processes are active. Parts of the Rhododendron Formation are particularly prone to channel and mass erosion during large storms. Eroding glacial deposits in sections of Log Creek affected by a 1964 dam-break flood also cause high stream turbidity relative to other streams in the watershed. Analysis of the characteristics of magnetic minerals in sediment sources and deposits was unproductive as a means to identify source areas of suspended sediment, because high concentrations of magnetite in all samples of the volcanic rocks masked differences in the less magnetic minerals among the samples.

  20. Optimizing cord blood sample cryopreservation.

    PubMed

    Harris, David T

    2012-03-01

    Cord blood (CB) banking is becoming more and more commonplace throughout the medical community, both in the USA and elsewhere. It is now generally recognized that storage of CB samples in multiple aliquots is the preferred approach to banking because it allows the greatest number of uses of the sample. However, it is unclear which methodologies are best for cryopreservation and storage of the sample aliquots. In the current study we analyzed variables that could affect these processes. CB samples were processed into mononuclear cells (MNCs) and frozen in commercially available human serum albumin (HSA) or autologous CB plasma, using cryovials of various sizes and cryobags. The bacteriophage phiX174 was used as a model virus to test for cross-contamination. We observed that cryopreservation of CB in HSA, undiluted autologous human plasma and 50% diluted plasma was equivalent in terms of cell recovery and cell viability. We also found that cryopreservation of CB samples in either cryovials or cryobags displayed equivalent thermal characteristics. Finally, we demonstrated that overwrapping the CB storage container in an impermeable plastic sheathing was sufficient to prevent cross-sample viral contamination during prolonged storage in the liquid phase of a liquid nitrogen dewar. CB may be cryopreserved in either vials or bags without concern for temperature stability. Sample overwrapping is sufficient to prevent microbiologic contamination of the samples while in liquid-phase liquid nitrogen storage.

  1. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    PubMed Central

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994

  2. Large eddy simulation of fine water sprays: comparative analysis of two models and computer codes

    NASA Astrophysics Data System (ADS)

    Tsoy, A. S.; Snegirev, A. Yu.

    2015-09-01

    The model and the computer code FDS, albeit widely used in engineering practice to predict fire development, are not sufficiently validated for fire suppression by fine water sprays. In this work, the effect of the numerical resolution of the large-scale turbulent pulsations on the accuracy of predicted time-averaged spray parameters is evaluated. Comparison of the simulation results obtained with the two versions of the model and code, as well as of the predicted and measured radial distributions of the liquid flow rate, revealed the need to apply monotonic and yet sufficiently accurate discrete approximations of the convective terms. Failure to do so delays jet break-up, otherwise induced by large turbulent eddies, and thereby excessively focuses the predicted flow around its axis. The effect of the pressure drop in the spray nozzle is also examined; its increase is shown to cause only a weak increase of the evaporated fraction and vapor concentration despite the significant increase of flow velocity.

  3. Sampling methods for terrestrial amphibians and reptiles.

    Treesearch

    Paul Stephen Corn; R. Bruce Bury

    1990-01-01

    Methods described for sampling amphibians and reptiles in Douglas-fir forests in the Pacific Northwest include pitfall trapping, time-constrained collecting, and surveys of coarse woody debris. The herpetofauna of this region differ in breeding and nonbreeding habitats and vagility, so that no single technique is sufficient for a community study. A combination of...

  4. Optimal sampling for radiotelemetry studies of spotted owl habitat and home range.

    Treesearch

    Andrew B. Carey; Scott P. Horton; Janice A. Reid

    1989-01-01

    Radiotelemetry studies of spotted owl (Strix occidentalis) ranges and habitat-use must be designed efficiently to estimate parameters needed for a sample of individuals sufficient to describe the population. Independent data are required by analytical methods and provide the greatest return of information per effort. We examined time series of...

  5. 29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...

  6. 29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...

  7. 29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...

  8. 29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...

  9. 29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...

  10. Precise stellar surface gravities from the time scales of convectively driven brightness variations

    PubMed Central

    Kallinger, Thomas; Hekker, Saskia; García, Rafael A.; Huber, Daniel; Matthews, Jaymie M.

    2016-01-01

    A significant part of the intrinsic brightness variations in cool stars of low and intermediate mass arises from surface convection (seen as granulation) and acoustic oscillations (p-mode pulsations). The characteristics of these phenomena are largely determined by the stars’ surface gravity (g). Detailed photometric measurements of either signal can yield an accurate value of g. However, even with ultraprecise photometry from NASA’s Kepler mission, many stars are too faint for current methods or only moderate accuracy can be achieved in a limited range of stellar evolutionary stages. This means that many of the stars in the Kepler sample, including exoplanet hosts, are not sufficiently characterized to fully describe the sample and exoplanet properties. We present a novel way to measure surface gravities with accuracies of about 4%. Our technique exploits the tight relation between g and the characteristic time scale of the combined granulation and p-mode oscillation signal. It is applicable to all stars with a convective envelope, including active stars. It can measure g in stars for which no other analysis is now possible. Because it depends on the time scale (and no other properties) of the signal, our technique is largely independent of the type of measurement (for example, photometry or radial velocity measurements) and the calibration of the instrumentation used. However, the oscillation signal must be temporally resolved; thus, it cannot be applied to dwarf stars observed by Kepler in its long-cadence mode. PMID:26767193

  11. Precise stellar surface gravities from the time scales of convectively driven brightness variations.

    PubMed

    Kallinger, Thomas; Hekker, Saskia; García, Rafael A; Huber, Daniel; Matthews, Jaymie M

    2016-01-01

    A significant part of the intrinsic brightness variations in cool stars of low and intermediate mass arises from surface convection (seen as granulation) and acoustic oscillations (p-mode pulsations). The characteristics of these phenomena are largely determined by the stars' surface gravity (g). Detailed photometric measurements of either signal can yield an accurate value of g. However, even with ultraprecise photometry from NASA's Kepler mission, many stars are too faint for current methods or only moderate accuracy can be achieved in a limited range of stellar evolutionary stages. This means that many of the stars in the Kepler sample, including exoplanet hosts, are not sufficiently characterized to fully describe the sample and exoplanet properties. We present a novel way to measure surface gravities with accuracies of about 4%. Our technique exploits the tight relation between g and the characteristic time scale of the combined granulation and p-mode oscillation signal. It is applicable to all stars with a convective envelope, including active stars. It can measure g in stars for which no other analysis is now possible. Because it depends on the time scale (and no other properties) of the signal, our technique is largely independent of the type of measurement (for example, photometry or radial velocity measurements) and the calibration of the instrumentation used. However, the oscillation signal must be temporally resolved; thus, it cannot be applied to dwarf stars observed by Kepler in its long-cadence mode.

  12. Genus Topology of Structure in the Sloan Digital Sky Survey: Model Testing

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III; Hambrick, D. Clay; Vogeley, Michael S.; Kim, Juhan; Park, Changbom; Choi, Yun-Young; Cen, Renyue; Ostriker, Jeremiah P.; Nagamine, Kentaro

    2008-03-01

    We measure the three-dimensional topology of large-scale structure in the Sloan Digital Sky Survey (SDSS). This allows the genus statistic to be measured with unprecedented statistical accuracy. The sample size is now sufficiently large to allow the topology to be an important tool for testing galaxy formation models. For comparison, we make mock SDSS samples using several state-of-the-art N-body simulations: the Millennium run of Springel et al. (10 billion particles), the Kim & Park CDM models (1.1 billion particles), and the Cen & Ostriker hydrodynamic code models (8.6 billion cell hydro mesh). Each of these simulations uses a different method for modeling galaxy formation. The SDSS data show a genus curve that is broadly characteristic of that produced by Gaussian random-phase initial conditions. Thus, the data strongly support the standard model of inflation where Gaussian random-phase initial conditions are produced by random quantum fluctuations in the early universe. But on top of this general shape there are measurable differences produced by nonlinear gravitational effects and biasing connected with galaxy formation. The N-body simulations have been tuned to reproduce the power spectrum and multiplicity function but not topology, so topology is an acid test for these models. The data show a "meatball" shift (only partly due to the Sloan Great Wall of galaxies) that differs at the 2.5 σ level from the results of the Millennium run and the Kim & Park dark halo models, even including the effects of cosmic variance.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powalka, Mathieu; Lançon, Ariane; Duc, Pierre-Alain

    Large samples of globular clusters (GC) with precise multi-wavelength photometry are becoming increasingly available and can be used to constrain the formation history of galaxies. We present the results of an analysis of Milky Way (MW) and Virgo core GCs based on 5 optical-near-infrared colors and 10 synthetic stellar population models. For the MW GCs, the models tend to agree on photometric ages and metallicities, with values similar to those obtained in previous studies. When used with Virgo core GCs, for which photometry is provided by the Next Generation Virgo cluster Survey (NGVS), the same models generically return younger ages. This is a consequence of the systematic differences observed between the locus occupied by Virgo core GCs and models in panchromatic color space. Only extreme fine-tuning of the adjustable parameters available to us can make the majority of the best-fit ages old. Although we cannot exclude that the formation history of the Virgo core may lead to more conspicuous populations of relatively young GCs than in other environments, we emphasize that the intrinsic properties of the Virgo GCs are likely to differ systematically from those assumed in the models. Thus, the large wavelength coverage and photometric quality of modern GC samples, such as those used here, are not by themselves sufficient to better constrain the GC formation histories. Models matching the environment-dependent characteristics of GCs in multi-dimensional color space are needed to improve the situation.

  14. Systematic evaluation of a targeted gene capture sequencing panel for molecular diagnosis of retinitis pigmentosa.

    PubMed

    Huang, Hui; Chen, Yanhua; Chen, Huishuang; Ma, Yuanyuan; Chiang, Pei-Wen; Zhong, Jing; Liu, Xuyang; Asan; Wu, Jing; Su, Yan; Li, Xin; Deng, Jianlian; Huang, Yingping; Zhang, Xinxin; Li, Yang; Fan, Ning; Wang, Ying; Tang, Lihui; Shen, Jinting; Chen, Meiyan; Zhang, Xiuqing; Te, Deng; Banerjee, Santasree; Liu, Hui; Qi, Ming; Yi, Xin

    2018-01-01

    Inherited eye diseases are major causes of vision loss in both children and adults and are characterized by clinical variability and pronounced genetic heterogeneity. Genetic testing may provide an accurate diagnosis for ophthalmic genetic disorders and allow gene therapy for specific diseases. A targeted gene capture panel was designed to capture the exons of 283 inherited eye disease genes, including 58 known causative retinitis pigmentosa (RP) genes. 180 samples were tested with this panel, 68 of which had previously been tested by Sanger sequencing. Systematic evaluation of our method and comprehensive molecular diagnosis were carried out on 99 RP patients. 96.85% of targeted regions were covered by at least 20-fold, and the accuracy of variant detection was 99.994%. In 4 of the 68 samples previously tested by Sanger sequencing, mutations of other diseases not consistent with the clinical diagnosis were detected by next-generation sequencing (NGS) but not by Sanger sequencing. Among the 99 RP patients, 64 (64.6%) were found to carry pathogenic mutations, while in 3 patients the molecular diagnosis was inconsistent with the initial clinical diagnosis. After revisiting, one patient's clinical diagnosis was reclassified. In addition, 3 patients were found to carry large deletions. We have systematically evaluated our method and compared it with Sanger sequencing, and have identified a large number of novel mutations in a cohort of 99 RP patients. The results showed sufficient accuracy of our method and suggested the importance of molecular diagnosis in clinical diagnosis.

  15. Characterization of etch pits found on a large-grain bulk niobium superconducting radio-frequency resonant cavity

    DOE PAGES

    Zhao, Xin; Ciovati, G.; Bieler, T. R.

    2010-12-15

    The performance of superconducting radio-frequency (SRF) resonant cavities made of bulk niobium is limited by nonlinear localized effects. Surface analysis of regions of higher power dissipation is thus of intense interest. Such areas (referred to as “hotspots”) were identified in a large-grain single-cell cavity that had been buffered-chemical polished and dissected for examination by high resolution electron microscopy, electron backscattered diffraction microscopy (EBSD), and optical microscopy. Pits with clearly discernible crystal facets were observed in both “hotspot” and “coldspot” specimens. The pits were found in-grain, at bicrystal boundaries, and on tricrystal junctions. They are interpreted as etch pits induced by crystal defects (e.g. dislocations). All coldspots examined had a qualitatively lower density of etch pits or relatively smooth tricrystal boundary junctions. EBSD mapping revealed the crystal orientation surrounding the pits. Locations with high pit density are correlated with higher mean values of the local average misorientation angle distributions, indicating a higher geometrically necessary dislocation content. In addition, a survey of the samples by energy dispersive x-ray analysis did not show any significant contamination of the samples’ surface. In conclusion, the local magnetic field enhancement produced by the sharp-edge features observed on the samples is not sufficient to explain the observed degradation of the cavity quality factor, which starts at peak surface magnetic field as low as 20 mT.

  16. Systematic evaluation of a targeted gene capture sequencing panel for molecular diagnosis of retinitis pigmentosa

    PubMed Central

    Ma, Yuanyuan; Chiang, Pei-Wen; Zhong, Jing; Liu, Xuyang; Asan; Wu, Jing; Su, Yan; Li, Xin; Deng, Jianlian; Huang, Yingping; Zhang, Xinxin; Li, Yang; Fan, Ning; Wang, Ying; Tang, Lihui; Shen, Jinting; Chen, Meiyan; Zhang, Xiuqing; Te, Deng; Banerjee, Santasree; Liu, Hui; Qi, Ming; Yi, Xin

    2018-01-01

    Background Inherited eye diseases are major causes of vision loss in both children and adults and are characterized by clinical variability and pronounced genetic heterogeneity. Genetic testing may provide an accurate diagnosis for ophthalmic genetic disorders and allow gene therapy for specific diseases. Methods A targeted gene capture panel was designed to capture the exons of 283 inherited eye disease genes, including 58 known causative retinitis pigmentosa (RP) genes. 180 samples were tested with this panel, 68 of which had previously been tested by Sanger sequencing. Systematic evaluation of our method and comprehensive molecular diagnosis were carried out on 99 RP patients. Results 96.85% of targeted regions were covered by at least 20-fold, and the accuracy of variant detection was 99.994%. In 4 of the 68 samples previously tested by Sanger sequencing, mutations of other diseases not consistent with the clinical diagnosis were detected by next-generation sequencing (NGS) but not by Sanger sequencing. Among the 99 RP patients, 64 (64.6%) were found to carry pathogenic mutations, while in 3 patients the molecular diagnosis was inconsistent with the initial clinical diagnosis. After revisiting, one patient’s clinical diagnosis was reclassified. In addition, 3 patients were found to carry large deletions. Conclusions We have systematically evaluated our method and compared it with Sanger sequencing, and have identified a large number of novel mutations in a cohort of 99 RP patients. The results showed sufficient accuracy of our method and suggested the importance of molecular diagnosis in clinical diagnosis. PMID:29641573

  17. A Versatile Mounting Method for Long Term Imaging of Zebrafish Development.

    PubMed

    Hirsinger, Estelle; Steventon, Ben

    2017-01-26

    Zebrafish embryos offer an ideal experimental system to study complex morphogenetic processes due to their ease of accessibility and optical transparency. In particular, posterior body elongation is an essential process in embryonic development by which multiple tissue deformations act together to direct the formation of a large part of the body axis. In order to observe this process by long-term time-lapse imaging it is necessary to utilize a mounting technique that allows sufficient support to maintain samples in the correct orientation during transfer to the microscope and acquisition. In addition, the mounting must also provide sufficient freedom of movement for the outgrowth of the posterior body region without affecting its normal development. Finally, there must be a certain degree in versatility of the mounting method to allow imaging on diverse imaging set-ups. Here, we present a mounting technique for imaging the development of posterior body elongation in the zebrafish D. rerio. This technique involves mounting embryos such that the head and yolk sac regions are almost entirely included in agarose, while leaving out the posterior body region to elongate and develop normally. We will show how this can be adapted for upright, inverted and vertical light-sheet microscopy set-ups. While this protocol focuses on mounting embryos for imaging for the posterior body, it could easily be adapted for the live imaging of multiple aspects of zebrafish development.

  18. Detecting regional patterns of changing CO2 flux in Alaska

    PubMed Central

    Parazoo, Nicholas C.; Wofsy, Steven C.; Koven, Charles D.; Sweeney, Colm; Lawrence, David M.; Lindaas, Jakob; Chang, Rachel Y.-W.; Miller, Charles E.

    2016-01-01

    With rapid changes in climate and the seasonal amplitude of carbon dioxide (CO2) in the Arctic, it is critical that we detect and quantify the underlying processes controlling the changing amplitude of CO2 to better predict carbon cycle feedbacks in the Arctic climate system. We use satellite and airborne observations of atmospheric CO2 with climatically forced CO2 flux simulations to assess the detectability of Alaskan carbon cycle signals as future warming evolves. We find that current satellite remote sensing technologies can detect changing uptake accurately during the growing season but lack sufficient cold season coverage and near-surface sensitivity to constrain annual carbon balance changes at regional scale. Airborne strategies that target regular vertical profile measurements within continental interiors are more sensitive to regional flux deeper into the cold season but currently lack sufficient spatial coverage throughout the entire cold season. Thus, the current CO2 observing network is unlikely to detect potentially large CO2 sources associated with deep permafrost thaw and cold season respiration expected over the next 50 y. Although continuity of current observations is vital, strategies and technologies focused on cold season measurements (active remote sensing, aircraft, and tall towers) and systematic sampling of vertical profiles across continental interiors over the full annual cycle are required to detect the onset of carbon release from thawing permafrost. PMID:27354511

  19. Augmentation of the IUE Ultraviolet Spectral Atlas

    NASA Astrophysics Data System (ADS)

    Wu, Chi-Chao

    IUE is the only and last satellite which will support a survey program to record the ultraviolet spectra of a large number of bright normal stars. It is important to have a library of high-quality, low-dispersion spectra of a sufficient number of stars to provide good coverage in spectral type and luminosity class. Such a library is invaluable for stellar population synthesis of galaxies, studying the nature of distant galaxies, establishing a UV spectral classification system, providing comparison stars for interstellar extinction studies and for peculiar objects or binary systems, studying the effects of temperature, gravity, and metallicity on stellar UV spectra, and as a teaching aid. We propose to continue observations of normal stars in order to provide (1) a stellar library as complete as practical, which will be able to support astronomical research by the scientific community long into the future, and (2) a sufficient sample of stars to guard against variability and peculiarity, and to allow a finite range of temperature, gravity, and metallicity in a given spectral type-luminosity class combination. Our primary goal is to collect the data and make them available to the community immediately (without claiming the 6-month proprietary right). The data will be published in the IUE Newsletter as soon as practical and will be prepared for distribution by the IUE Observatory and the NSSDC.

  20. A numerical study of Coulomb interaction effects on 2D hopping transport.

    PubMed

    Kinkhabwala, Yusuf A; Sverdlov, Viktor A; Likharev, Konstantin K

    2006-02-15

    We have extended our supercomputer-enabled Monte Carlo simulations of hopping transport in completely disordered 2D conductors to the case of substantial electron-electron Coulomb interaction. Such interaction may not only suppress the average value of hopping current, but also affect its fluctuations rather substantially. In particular, the spectral density S_I(f) of current fluctuations exhibits, at sufficiently low frequencies, a 1/f-like increase which approximately follows the Hooge scaling, even at vanishing temperature. At higher f, there is a crossover to a broad range of frequencies in which S_I(f) is nearly constant, hence allowing characterization of the current noise by the effective Fano factor [Formula: see text]. For sufficiently large conductor samples and low temperatures, the Fano factor is suppressed below the Schottky value (F = 1), scaling with the length L of the conductor as F = (L_c/L)^α. The exponent α is significantly affected by the Coulomb interaction effects, changing from α = 0.76 ± 0.08 when such effects are negligible to virtually unity when they are substantial. The scaling parameter L_c, interpreted as the average percolation cluster length along the electric field direction, scales as [Formula: see text] when Coulomb interaction effects are negligible and [Formula: see text] when such effects are substantial, in good agreement with estimates based on the theory of directed percolation.
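
    A minimal sketch of recovering the exponent α and the cluster length L_c from Fano factors measured on samples of different length, using the stated scaling F = (L_c/L)^α in log-log form (synthetic data; illustrative only, not the authors' simulation code):

```python
# Fit log(F) = -alpha*log(L) + alpha*log(L_c) by linear regression.
import numpy as np

def fit_fano_scaling(lengths, fano_factors):
    """Return (alpha, L_c) from F = (L_c / L)**alpha."""
    slope, intercept = np.polyfit(np.log(lengths), np.log(fano_factors), 1)
    alpha = -slope
    L_c = np.exp(intercept / alpha)
    return alpha, L_c

# Synthetic example: F suppressed below the Schottky value (F = 1) for L > L_c.
L = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
F = (20.0 / L) ** 0.76
alpha, L_c = fit_fano_scaling(L, F)
print(f"alpha = {alpha:.2f}, L_c = {L_c:.1f}")     # recovers alpha ~ 0.76, L_c ~ 20
```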

  1. Detecting regional patterns of changing CO2 flux in Alaska

    DOE PAGES

    Parazoo, Nicholas C.; Commane, Roisin; Wofsy, Steven C.; ...

    2016-06-27

    With rapid changes in climate and the seasonal amplitude of carbon dioxide (CO2) in the Arctic, it is critical that we detect and quantify the underlying processes controlling the changing amplitude of CO2 to better predict carbon cycle feedbacks in the Arctic climate system. We use satellite and airborne observations of atmospheric CO2 with climatically forced CO2 flux simulations to assess the detectability of Alaskan carbon cycle signals as future warming evolves. We find that current satellite remote sensing technologies can detect changing uptake accurately during the growing season but lack sufficient cold season coverage and near-surface sensitivity to constrain annual carbon balance changes at regional scale. Airborne strategies that target regular vertical profile measurements within continental interiors are more sensitive to regional flux deeper into the cold season but currently lack sufficient spatial coverage throughout the entire cold season. Thus, the current CO2 observing network is unlikely to detect potentially large CO2 sources associated with deep permafrost thaw and cold season respiration expected over the next 50 y. In conclusion, although continuity of current observations is vital, strategies and technologies focused on cold season measurements (active remote sensing, aircraft, and tall towers) and systematic sampling of vertical profiles across continental interiors over the full annual cycle are required to detect the onset of carbon release from thawing permafrost.

  2. The microwave radiometer spacecraft: A design study

    NASA Technical Reports Server (NTRS)

    Wright, R. L. (Editor)

    1981-01-01

    A large passive microwave radiometer spacecraft with near all weather capability of monitoring soil moisture for global crop forecasting was designed. The design, emphasizing large space structures technology, characterized the mission hardware at the conceptual level in sufficient detail to identify enabling and pacing technologies. Mission and spacecraft requirements, design and structural concepts, electromagnetic concepts, and control concepts are addressed.

  3. Perceptions of Athletic Training Education Program Directors on Their Students' Persistence and Departure Decisions

    ERIC Educational Resources Information Center

    Bowman, Thomas G.

    2012-01-01

    The athletic training profession is in the midst of a large increase in demand for health care professionals for the physically active. In order to meet demand, directors of athletic training education programs (ATEPs) are challenged with providing sufficient graduates. There has been a large increase in ATEPs nationwide since educational reform…

  4. Entropy production during an isothermal phase transition in the early universe

    NASA Astrophysics Data System (ADS)

    Kaempfer, B.

    The analytical model of Lodenquai and Dixit (1983) and of Bonometto and Matarrese (1983) of an isothermal era in the early universe is extended here to arbitrary temperatures. It is found that a sufficiently large supercooling gives rise to a large entropy production which may significantly dilute the primordial monopole or baryon to entropy ratio. Whether such large supercooling can be achieved depends on the characteristics of the nucleation process.

  5. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...

  6. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...

  7. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...

  8. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...

  9. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...

  10. Comparison of DNA extraction methods for human gut microbial community profiling.

    PubMed

    Lim, Mi Young; Song, Eun-Ji; Kim, Sang Ho; Lee, Jangwon; Nam, Young-Do

    2018-03-01

    The human gut harbors a vast range of microbes that have a significant impact on health and disease. Therefore, gut microbiome profiling holds promise for use in early diagnosis and precision medicine development. Accurate profiling of the highly complex gut microbiome requires DNA extraction methods that provide sufficient coverage of the original community as well as adequate quality and quantity. We tested nine different DNA extraction methods using three commercial kits (TianLong Stool DNA/RNA Extraction Kit (TS), QIAamp DNA Stool Mini Kit (QS), and QIAamp PowerFecal DNA Kit (QP)), with or without an additional bead-beating step, using manual or automated methods, and compared them in terms of DNA extraction ability from a human fecal sample. All methods produced DNA in sufficient concentration and quality for use in sequencing, and the samples clustered according to the DNA extraction method. Inclusion of a bead-beating step in particular resulted in higher degrees of microbial diversity and had the greatest effect on gut microbiome composition. Among the samples subjected to the bead-beating method, TS kit samples were more similar to QP kit samples than to QS kit samples. Our results emphasize the importance of a mechanical disruption step for a more comprehensive profiling of the human gut microbiome. Copyright © 2017 The Authors. Published by Elsevier GmbH. All rights reserved.

  11. Response of Heterogeneous and Fractured Carbonate Samples to CO2-Brine Exposure

    NASA Astrophysics Data System (ADS)

    Smith, M. M.; Mason, H. E.; Hao, Y.; Carroll, S.

    2014-12-01

    Carbonate rock units are often considered as candidate sites for storage of carbon dioxide (CO2), whether as stand-alone reservoirs or coupled with enhanced oil recovery efforts. In order to accept injected carbon dioxide, carbonate reservoirs must either possess sufficient preexisting connected void space, or react with CO2-acidified fluids to produce more pore space and improve permeability. However, upward migration of CO2 through barrier zones or seal layers must be minimized for effective safe storage. Therefore, prediction of the changes to porosity and permeability in these systems over time is a key component of reservoir management. Towards this goal, we present the results of several experiments on carbonate core samples from the Wellington, Kansas 1-32 well, conducted under reservoir temperature, pressure, and CO2 conditions. These samples were imaged by X-ray computed tomography (XRCT) and analyzed with nuclear magnetic resonance (NMR) spectroscopy both prior to and after reaction with CO2-enriched brines. The carbonate samples each displayed distinct responses to CO2 exposure in terms of permeability change with time and relative abundance of calcite versus dolomite dissolution. The measured permeability of each sample was also much lower than that estimated by downhole NMR logging, with samples with larger fractured regions possessing higher permeability values. We present also our modeling approach and preliminary simulation results for a specific sample from the targeted injection zone. The heterogeneous composition as well as the presence of large fractured zones within the rock necessitated the use of a nested three-region approach to represent the range of void space observed via tomography. Currently, the physical response to CO2-brine flow (i.e., pressure declines with time) is reproduced well but the extent of chemical reaction is overestimated by the model.

  12. Development of the Potassium-Argon Laser Experiment (KArLE) Instrument for In Situ Geochronology

    NASA Technical Reports Server (NTRS)

    Cohen, Barbara A.; Li, Z.-H.; Miller, J. S.; Brinckerhoff, W. B.; Clegg, S. M.; Mahaffy, P. R.; Swindle, T. D.; Wiens, R. C.

    2012-01-01

    Absolute dating of planetary samples is an essential tool to establish the chronology of geological events, including crystallization history, magmatic evolution, and alteration. Traditionally, geochronology has only been accomplishable on samples from dedicated sample return missions or meteorites. The capability for in situ geochronology is highly desired, because it will allow one-way planetary missions to perform dating of large numbers of samples. The success of an in situ geochronology package will not only yield data on absolute ages, but can also complement sample return missions by identifying the most interesting rocks to cache and/or return to Earth. In situ dating instruments have been proposed, but none have yet reached TRL 6 because the required high-resolution isotopic measurements are very challenging. Our team is now addressing this challenge by developing the Potassium (K) - Argon Laser Experiment (KArLE) under the NASA Planetary Instrument Definition and Development Program (PIDDP), building on previous work to develop a K-Ar in situ instrument [1]. KArLE uses a combination of several flight-proven components that enable accurate K-Ar isochron dating of planetary rocks. KArLE will ablate a rock sample, determine the K in the plasma state using laser-induced breakdown spectroscopy (LIBS), measure the liberated Ar using quadrupole mass spectrometry (QMS), and relate the two by the volume of the ablated pit using an optical method such as a vertical scanning interferometer (VSI). Our preliminary work indicates that the KArLE instrument will be capable of determining the age of several kinds of planetary samples to +/-100 Myr, sufficient to address a wide range of geochronology problems in planetary science.
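
    For context, a minimal sketch of the standard K-Ar age equation that any K-Ar isochron approach ultimately rests on, using the conventional 40K decay constants; this is generic K-Ar arithmetic under stated assumptions, not the KArLE measurement pipeline:

```python
# Age from radiogenic 40Ar and 40K amounts via
# t = (1/lambda_total) * ln(1 + (lambda_total/lambda_EC) * 40Ar*/40K).
import math

LAMBDA_TOTAL = 5.543e-10   # total 40K decay constant, 1/yr (conventional value)
LAMBDA_EC = 0.581e-10      # electron-capture branch to 40Ar, 1/yr

def k_ar_age_years(ar40_radiogenic, k40):
    """Return the model age in years; inputs are amounts in the same units (e.g. mol)."""
    return (1.0 / LAMBDA_TOTAL) * math.log(
        1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * (ar40_radiogenic / k40))

# Example: a 40Ar*/40K ratio of 0.02 corresponds to an age of roughly 315 Myr.
print(f"{k_ar_age_years(2.0e-11, 1.0e-9) / 1e6:.0f} Myr")
```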

  13. Application of EMA-qPCR as a complementary tool for the detection and monitoring of Legionella in different water systems.

    PubMed

    Qin, Tian; Tian, Zhengan; Ren, Hongyu; Hu, Guangchun; Zhou, Haijian; Lu, Jinxing; Luo, Chengwang; Liu, Zunyu; Shao, Zhujun

    2012-05-01

    Legionella are prevalent in human-made water systems and cause legionellosis in humans. Conventional culturing and polymerase chain reaction (PCR) techniques are not sufficiently accurate for the quantitative analysis of live Legionella bacteria in water samples because of the presence of viable but nonculturable cells and dead cells. Here, we report a rapid detection method for viable Legionella that combines ethidium monoazide (EMA) with quantitative real-time PCR (qPCR) and apply this method to detect Legionella in a large number of water samples from different sources. The results showed that treating samples with 5 μg/ml EMA for 10 min followed by light irradiation for 5 min was optimal for detecting Legionella. EMA treatment before qPCR could block the signal from approximately 4 log(10) of dead cells. When investigating environmental water samples, the percent-positive rate obtained by EMA-qPCR was significantly higher than that of conventional PCR and culture methods, and slightly lower than that of qPCR. The bacterial counts of Legionella determined by EMA-qPCR were mostly greater than those determined by culture assays and lower than those determined by qPCR. Acceptable correlations were found between the EMA-qPCR and qPCR results for cooling towers, piped water and hot spring water samples (r = 0.849, P < 0.001), and also between the EMA-qPCR and culture results for hot spring water samples (r = 0.698, P < 0.001). The results indicate that EMA-qPCR could be used as a complementary tool for the detection and monitoring of Legionella in water systems, especially in hot spring water samples.

  14. Sex-specific lateralization of event-related potential effects during mental rotation of polygons.

    PubMed

    Pellkofer, Julia; Jansen, Petra; Heil, Martin

    2014-08-06

    Mental rotation performance has been found to produce one of the largest sex differences in cognition. Many theories suggest that this effect should be accompanied by a sex difference in functional cerebral asymmetry, but empirical data are more than equivocal probably because of (a) the use of inappropriate stimuli and (b) insufficient power of most neurophysiological studies. Therefore, sex differences in mental rotation of polygons were investigated in 122 adults. Men outperformed women on mental rotation speed (as well as on response time and accuracy). On the basis of the electrophysiological brain correlates of mental rotation, we observed a bilateral brain activity for men, whereas women's brain activity was clearly lateralized toward the left hemisphere if and only if mental rotation was involved. Thus, sex differences in functional cerebral asymmetry can indeed be observed if appropriate stimuli are used in a sufficiently large sample.

  15. Gonadal and Sex Differentiation Abnormalities of Dogs and Cats

    PubMed Central

    Meyers-Wallen, V.N.

    2012-01-01

    The molecular steps in normal sexual development were largely discovered by studying patients and animal models with disorders of sexual development (DSD). Although several types of DSD have been reported in the cat and dog, which are often strikingly similar to human DSD, these have been infrequently utilized to contribute to our knowledge of mammalian sexual development. Canine and feline cases of DSD with sufficient evidence to be considered as potential models are summarized in this report. The consensus DSD terminology, and reference to previous terminology, is used to foster adoption of a common nomenclature that will facilitate communication and collaboration between veterinarians, physicians, and researchers. To efficiently utilize these unique resources as molecular tools continue to improve, it will be helpful to deposit samples from valuable cases into repositories where they are available to contribute to our understanding of sexual development, and thus improve human and animal health. PMID:22005097

  16. Bennett ion mass spectrometers on the Pioneer Venus Bus and Orbiter

    NASA Technical Reports Server (NTRS)

    Taylor, H. A., Jr.; Brinton, H. C.; Wagner, T. C. G.; Blackwell, B. H.; Cordier, G. R.

    1980-01-01

    Identical Bennett radio-frequency ion mass spectrometer instruments on the Pioneer Venus Bus and Orbiter have provided the first in-situ measurements of the detailed composition of the planet's ionosphere. The sensitivity, resolution, and dynamic range are sufficient to provide measurements of the solar-wind-induced bow-shock, the ionopause, and highly structured distributions of up to 16 thermal ion species within the ionosphere. The use of adaptive scan and detection circuits and servo-controlled logic for ion mass and energy analysis permits detection of ion concentrations as low as 5 ions/cu cm and ion flow velocities as large as 9 km/sec for O(+). A variety of commandable modes provides ion sampling rates ranging from 0.1 to 1.6 sec between measurements of a single constituent. A lightweight sensor and electronics housing are features of a compact instrument package.

  17. An automated data management/analysis system for space shuttle orbiter tiles. [stress analysis

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Ballas, M.

    1982-01-01

    An engineering data management system was combined with a nonlinear stress analysis program to provide a capability for analyzing a large number of tiles on the space shuttle orbiter. Tile geometry data and all data necessary to define the tile loads environment are accessed automatically as needed for the analysis of a particular tile or a set of tiles. User documentation provided includes: (1) a description of the computer programs and data files contained in the system; (2) definitions of all engineering data stored in the data base; (3) characteristics of the tile analytical model; (4) instructions for preparation of user input; and (5) a sample problem to illustrate use of the system. Descriptions of the data, computer programs, and analytical models of the tile are sufficiently detailed to guide extension of the system to include additional zones of tiles and/or additional types of analyses.

  18. Continuous composition-spread thin films of transition metal oxides by pulsed-laser deposition

    NASA Astrophysics Data System (ADS)

    Ohkubo, I.; Christen, H. M.; Khalifah, P.; Sathyamurthy, S.; Zhai, H. Y.; Rouleau, C. M.; Mandrus, D. G.; Lowndes, D. H.

    2004-02-01

    We have designed an improved pulsed-laser deposition-continuous composition-spread (PLD-CCS) system that overcomes the difficulties associated with earlier related techniques. Our new PLD-CCS system is based on a precisely controlled synchronization between the laser firing, target exchange, and substrate translation/rotation, and offers more flexibility and control than earlier PLD-based approaches. Most importantly, the deposition energetics and the film thickness are kept constant across the entire composition range, and the resulting samples are sufficiently large to allow characterization by conventional techniques. We fabricated binary alloy composition-spread films composed of SrRuO3 and CaRuO3. Alternating ablation from two different ceramic targets leads to in situ alloy formation, and the value of x in SrxCa1-xRuO3 can be changed linearly from 0 to 1 (or over any arbitrarily smaller range) along one direction of the substrate.

  19. Adjustable 3-D structure with enhanced interfaces and junctions towards microwave response using FeCo/C core-shell nanocomposites.

    PubMed

    Li, Daoran; Liang, Xiaohui; Liu, Wei; Ma, Jianna; Zhang, Yanan; Ji, Guangbin; Meng, Wei

    2017-12-01

    In this work, 3-D honeycomb-like FeCo/C nanocomposites were synthesized through carbon thermal reduction under an inert atmosphere. The enhanced microwave absorption properties of the composites were mainly attributed to the unique three-dimensional structure of the FeCo/C nanocomposites, abundant interfaces and junctions, and the appropriate impedance matching. The Cole-Cole semicircles proved the sufficient dielectric relaxation process. The sample calcined at 600 °C for 4 h showed the best microwave absorption properties. A maximum reflection loss of -54.6 dB was achieved at 10.8 GHz with a thickness of 2.3 mm, and the frequency bandwidth was as large as 5.3 GHz. The results showed that the as-prepared FeCo/C nanocomposite could be a potential candidate for microwave absorption. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. An Approach to In-Situ Observations of Volcanic Plumes

    NASA Technical Reports Server (NTRS)

    Smythe, W. D.; Lopes, M. C.; Pieri, D. C.; Hall, J. L.

    2005-01-01

    Volcanoes have long been recognized as playing a dominant role in the birth, and possibly the death, of biological populations. They are possible sources of primordial gases, provide conditions sufficient for creating amino acids, strongly affect the heat balance in the atmosphere, and have been shown to sustain life (in oceanic vents.) Eruptions can have profound effects on local flora and fauna, and for very large eruptions, may alter global weather patterns and cause entire species to fail. Measurements of particulates, gases, and dynamics within a volcanic plume are critical to understanding both how volcanoes work and how plumes affect populations, environment, and aviation. Volcanic plumes and associated eruption columns are a miasma of toxic gases, corrosive condensates, and abrasive particulates that makes them hazardous to nearby populations and poses a significant risk to all forms of aviation. Plumes also provide a mechanism for sampling the volcanic interior, which, for hydrothermal environments, may host unique biological populations.
